Hari Krishna Krishnamoorthy Chandrasekaran Email and Phone Number
Product Manager with experience in building and growing teams. Extensive experience building cloud-based products with simple but delightful customer experiences, as well as scalable platform-based products. My knowledge of the full data lifecycle, from strategy definition to discovery, design, and delivery, enables me to build products tailored to the specific requirements of each organization.
JPMorgan Chase & Co.
Area Product Owner, CCB Data Lake, Vice President
JPMorgan Chase & Co. | Jul 2023 - Present | New York, NY, US
• Product Owner with demonstrated expertise in operating and improving a resilient Data Lake environment for the CCB line of business, where publishers securely store data that is federated to consumers through AWS Lake Formation.
• Lead resiliency efforts for CPOF (Critical Point of Failure) applications to ensure highly available services for all consumers and managed publishers in the DR region, maintaining an RTO/RPO of less than 4 hours across the data foundation.
• Designed a resilient "hot-hot" architecture that replicates data from primary regions to the disaster-recovery region in real time, with comprehensive observability and monitoring to track replication status. This enables periodic DR testing with Chaos Monkey-style tools, building confidence in DR readiness.
• Improved data governance through User Activity Dashboard monitoring of access at the dataset level across databases and tables, giving the Data Governance team and Data Owners a proactive way to detect unintended access or non-compliance with regulations.
• Launched the Executive Summary Dashboard with key usage metrics for Publishers and Consumers of the Data Lake, driving strategic, data-driven decision making. It brings these insights together into one view to provide transparency, scalability, and targeted data access aligned with product goals and stakeholder requirements.
• Experienced in forecasting and optimizing Data Lake and ingestion-pipeline expenses across cloud and on-premises environments. Skilled in estimating annual dollar spend for the current fiscal year and projecting year-over-year (YoY) costs based on anticipated growth in storage and data consumption, ensuring budget alignment with evolving business needs.
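The sub-4-hour RPO monitoring described above could be sketched in Python. This is a hypothetical illustration only: dataset names and the lag-threshold policy are assumptions, and a real check would pull replication timestamps from observability tooling rather than a dictionary.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical sketch: flag datasets whose cross-region replication lag
# threatens a sub-4-hour RPO. In practice the timestamps would come from
# replication-pipeline metrics, not an in-memory dict.
RPO = timedelta(hours=4)

def datasets_at_risk(last_replicated: dict, now: datetime, threshold: float = 0.75):
    """Return (dataset, lag) pairs whose lag exceeds `threshold` of the RPO,
    worst first, so operators can act before the RPO is actually breached."""
    at_risk = []
    for dataset, ts in last_replicated.items():
        lag = now - ts
        if lag > RPO * threshold:
            at_risk.append((dataset, lag))
    return sorted(at_risk, key=lambda pair: pair[1], reverse=True)

now = datetime(2024, 1, 1, 12, 0, tzinfo=timezone.utc)
status = {
    "transactions": now - timedelta(minutes=30),                # healthy
    "customer_profiles": now - timedelta(hours=3, minutes=30),  # near breach
}
print(datasets_at_risk(status, now))
```

Flagging at 75% of the RPO window rather than at the breach itself is the kind of early-warning choice that makes periodic DR drills meaningful.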
Data Architect
Amazon Web Services (AWS) | Sep 2022 - Jun 2023 | Seattle, WA, US
• Developed a comprehensive, scalable solution for an application using a search database, AWS OpenSearch (around 600 TB), as the data store. My team and I re-designed the existing data architecture for the next 10 years of data growth by leveraging a cell-based data storage approach, and employed AWS serverless services such as Step Functions, Lambda, and Batch to migrate the data from the source to the target architecture at scale.
• Designed and delivered a Data Lake solution for an online retail company that previously used legacy databases and file shares to store real-time and batch data for reporting and analytics. We re-designed the data architecture as serverless by implementing a Data Lake in Amazon S3, with AWS Glue for ETL (scripts written in PySpark) and metadata catalog management, Amazon Athena for querying, and QuickSight for visualization.
• Implemented log-analytics and observability data migration from SaaS-based Elastic.co to the AWS-managed OpenSearch service. The project involved migrating the schema (mappings, templates, and settings) using custom Python scripts; data migration used AWS CloudFormation templates leveraging the native snapshot-restore feature and the Elasticdump tool, with Amazon Fargate and Step Functions for scalability.
• Worked closely with AWS CDAs to estimate the Level of Effort (LOE) for migrating a 20-year-old on-premises Oracle and DB2 database to managed Amazon Aurora PostgreSQL. We analyzed the data schema and application code to build a sustainable data model, and employed the AWS Schema Conversion Tool and AWS Database Migration Service to migrate the data while satisfying RTO and RPO requirements.
• Developed various IPs for AWS in the form of Amazon Prescriptive Guidance (APG) guides to help internal consultants and partners execute data migration and scaling projects.
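The cell-based storage approach mentioned above can be illustrated with a minimal sketch. The cell count, hashing scheme, and key format here are assumptions for illustration, not the engagement's actual design.

```python
import hashlib

# Illustrative sketch of cell-based data placement: each record key hashes
# deterministically to one of N independent "cells", bounding data volume and
# blast radius per cell; capacity grows by adding cells rather than scaling
# one monolithic store.
NUM_CELLS = 16

def cell_for(key: str, num_cells: int = NUM_CELLS) -> int:
    """Deterministically map a record key to a cell index."""
    digest = hashlib.sha256(key.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % num_cells

def plan_migration(keys):
    """Group source record keys by target cell, e.g. to drive one
    Step Functions execution per cell during a migration."""
    plan = {}
    for key in keys:
        plan.setdefault(cell_for(key), []).append(key)
    return plan

batches = plan_migration([f"doc-{i}" for i in range(1000)])
print(len(batches), "cells populated")
```

Because the mapping is a pure function of the key, source and target systems agree on placement without any coordination, which is what makes a per-cell serverless migration embarrassingly parallel.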
AWS Database and Data Warehouse Consultant
Amazon Web Services (AWS) | Oct 2021 - Sep 2022 | Seattle, WA, US
• Delivered on-site technical engagements with partners and customers, including participating in pre-sales on-site visits, understanding customer requirements, creating consulting proposals, developing and delivering proofs of concept and technical workshops, and creating packaged data analytics offerings.
• Engagements included short on-site projects proving the use of AWS services to support new distributed-computing solutions that often span private and public cloud services.
DC Senior Solution Specialist
Deloitte | Aug 2017 - Oct 2021 | Worldwide
• The client's benefits application system used on-premises IBM DB2 v11.5 as its backend data store. Scalability during influxes of user traffic, and the associated data handling, were the primary issues with on-premises storage and convinced the client to move to the cloud. We analyzed and documented RAM, CPU utilization, throughput, and latency for peak and normal workloads on the on-premises VM, and mapped these numbers to the target EC2 instance hosting the target DB2 database.
• Set up the infrastructure for the target data warehouse in AWS Redshift, making data-driven decisions on the right instance type, distribution key, sort keys, encryption, and networking components based on analysis of the source Oracle data warehouse. My team and I analyzed the application query pattern to design fact and dimension tables with the distributed architecture of an MPP data warehouse in mind.
• As a performance-tuning expert in traditional database and data warehouse systems, optimized data storage to improve throughput, minimize response times, and reduce overhead costs. Leveraged execution plans, optimized existing indexes, and adjusted database parameters for an average 25% performance improvement.
• Designed and implemented an automated archival system for unused data after analyzing data access patterns. Used shell, PL/SQL, and SQL to identify and transfer data to an archival database while maintaining data integrity, saving the client 35% in storage costs and improving query performance by 20%.
Environment: DB2 LUW (10.1, 10.5, 11.1, and 11.5 HADR), RDS PostgreSQL, Jenkins, New Relic, CA ERwin Data Modeler r9.64, Unix Shell Scripting, Python, Splunk, IBM Data Studio 4.1.1 Client, Rapid Application Development (RAD), JAMA, JIRA, Oracle SQL Developer.
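The distribution-key and sort-key decisions described above can be made concrete with a small DDL generator. The table and column names are hypothetical, not the client's schema; the pattern shown (distribute the fact table on its main join key, sort on the common range-filter column) is standard Redshift practice.

```python
# Hypothetical illustration of the Redshift design choices above: a fact table
# distributed on the join key it shares with its largest dimension, and sorted
# on the column most often used in range filters.
def redshift_ddl(table, columns, distkey, sortkeys):
    """Render a CREATE TABLE statement with KEY distribution and a sort key."""
    cols = ",\n  ".join(f"{name} {ctype}" for name, ctype in columns)
    return (
        f"CREATE TABLE {table} (\n  {cols}\n)\n"
        f"DISTSTYLE KEY\nDISTKEY ({distkey})\n"
        f"SORTKEY ({', '.join(sortkeys)});"
    )

ddl = redshift_ddl(
    "fact_claims",
    [("claim_id", "BIGINT"), ("member_id", "BIGINT"),
     ("claim_date", "DATE"), ("amount", "DECIMAL(12,2)")],
    distkey="member_id",      # collocate joins with the member dimension
    sortkeys=["claim_date"],  # most queries filter on a date range
)
print(ddl)
```

Choosing the distribution key from observed join patterns, rather than defaults, is what avoids cross-node data shuffling on the hot query paths.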
Solution Engineer
Deloitte | Sep 2015 - Aug 2017 | Worldwide
• To streamline database development with a consistent environment, dockerized databases with their base objects into Docker containers. Containers let us quickly spin up multiple database instances with the same configuration, reducing the time to set up a new environment by 30%.
• To limit the time client DBAs spent managing databases, proposed migrating an EC2-hosted DB2 database to RDS PostgreSQL, and developed migration plans, schema conversion, data migration, and testing with stakeholders for minimal downtime.
• Developed an industry-standard data model using Erwin Data Modeler after analyzing business processes and data structures.
• To ensure business continuity and minimize data loss, implemented Disaster Recovery (DR) for both the EC2-hosted DB2 and RDS PostgreSQL databases, leveraging DB2-native features like HADR and RDS cross-region snapshots for backing up data.
Environment: DB2 LUW (10.1, 10.5, 11.1, and 11.5 HADR), RDS PostgreSQL, Jenkins, New Relic, CA ERwin Data Modeler r9.64, Unix Shell Scripting, Python, Splunk, IBM Data Studio 4.1.1 Client, Rapid Application Development (RAD), JAMA, JIRA, Oracle SQL Developer.
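The dockerized-database workflow above could be scripted along these lines. Image, port, and credential values are assumptions for illustration; a real script would hand the commands to `subprocess.run()` and use proper secrets handling.

```python
# Assumed sketch of spinning up N identically configured database containers,
# as in the dockerized-database setup above. Commands are built as strings so
# the example stays self-contained; nothing is actually executed here.
def db_container_cmd(instance: int, image: str = "postgres:15",
                     base_port: int = 5432) -> str:
    """Build a `docker run` command for one throwaway dev database, with a
    unique name and host port per instance."""
    port = base_port + instance
    return (
        f"docker run -d --name dev-db-{instance} "
        f"-e POSTGRES_PASSWORD=devonly "
        f"-p {port}:5432 {image}"
    )

commands = [db_container_cmd(i) for i in range(3)]
for cmd in commands:
    print(cmd)
```

Because every instance comes from the same image and differs only in name and port, environments are reproducible, which is the source of the setup-time reduction claimed above.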
Software Engineering Analyst
Accenture Consulting | Sep 2011 - Jul 2014 | Dublin 2, IE
Senior Programmer and Software Engineering Analyst offering data-analysis-based solutions to a leading telecommunications provider.
Business Data Analyst
Accenture Consulting | Aug 2012 - Jan 2014 | Dublin 2, IE
- Attended the annual audit conducted by the client for two consecutive years, 2012 and 2013.
- Presented the annual security and risk management report for project modules including Non-Recurring Charge (NRC), Mediation Overlay (MO), and Usages.
- Prepared the business continuity plan (BCP) for all modules in the project.
- Credited with submitting a 100% consistent and compliant audit report.
Associate Software Engineer
Accenture Consulting | Sep 2010 - Sep 2011 | Dublin 2, IE
Involved in ETL data migration of legacy BigPond customers to the Kenan database, as well as production QC fixes and incident-resolution activities.
Associate Software Engineer - Trainee
Accenture Consulting | Jun 2010 - Sep 2010 | Dublin 2, IE
- Trained in database management with specializations in tools such as Oracle Apps, PL/SQL, and SQL.
- Completed the stream training with an 'A' rating.
- Stream-training group project on "Central Billing System," built with SQL and Unix, was widely appreciated by the Learning, Knowledge Management (LKM) trainers.
Hari Krishna Krishnamoorthy Chandrasekaran Skills
Hari Krishna Krishnamoorthy Chandrasekaran Education Details
- University of South Florida, General
- Shanmugha Arts, Science, Technology and Research Academy, Electrical and Electronics Engineering
- Lisieux Matriculation Higher Secondary School, Computer Science
Frequently Asked Questions about Hari Krishna Krishnamoorthy Chandrasekaran
What company does Hari Krishna Krishnamoorthy Chandrasekaran work for?
Hari Krishna Krishnamoorthy Chandrasekaran works for JPMorgan Chase & Co.
What is Hari Krishna Krishnamoorthy Chandrasekaran's role at the current company?
Hari Krishna Krishnamoorthy Chandrasekaran's current role is Area Product Owner | Compliance & Resiliency | Data Lake @ JPMorgan Chase & Co | Ex-AWS | Ex-Deloitte.
What is Hari Krishna Krishnamoorthy Chandrasekaran's email address?
Hari Krishna Krishnamoorthy Chandrasekaran's email address is hc****@****zon.com
What is Hari Krishna Krishnamoorthy Chandrasekaran's direct phone number?
Hari Krishna Krishnamoorthy Chandrasekaran's direct phone number is +181350*****
What schools did Hari Krishna Krishnamoorthy Chandrasekaran attend?
Hari Krishna Krishnamoorthy Chandrasekaran attended the University of South Florida; Shanmugha Arts, Science, Technology and Research Academy; and Lisieux Matriculation Higher Secondary School.
What are some of Hari Krishna Krishnamoorthy Chandrasekaran's interests?
Hari Krishna Krishnamoorthy Chandrasekaran has interests in children, civil rights and social action, education, the environment, poverty alleviation, science and technology, human rights, and health.
What skills is Hari Krishna Krishnamoorthy Chandrasekaran known for?
Hari Krishna Krishnamoorthy Chandrasekaran has skills including SQL, PL/SQL, Data Migration, Business Intelligence, Shell Scripting, Testing, SDLC, Requirements Analysis, Oracle Applications, JavaScript, Oracle, and Unix Shell Scripting.