As a Data Engineering professional with over 5 years of experience, I bring a wealth of knowledge and skills to any data-focused project. I design and implement scalable data pipelines (ETL/ELT) on AWS and conduct data exploration, modeling, and analytics at scale. My expertise in the Hadoop ecosystem and analytics technologies, including Spark, Sqoop, Hive, and Airflow, has allowed me to consistently deliver high-quality results. I have a proven track record of working in cross-functional teams and using CI/CD and Agile methodologies to meet project goals and drive business value.

Achievements
- Implemented scalable data pipelines using Apache Spark and Kafka, improving data processing efficiency by 40%.
- Developed data models and automated ETL workflows, reducing manual data handling by 30% and improving reporting accuracy.
- Designed and built data lakes on AWS S3 and Azure Data Lake, ensuring seamless integration with downstream analytics tools.

Domain Experience: Banking, Healthcare, Real Estate
Azure Data Engineer / Big Data Engineer, Northern Trust
Jul 2023 - Present | Chicago, Illinois, United States
- Gathered security (equities, options, derivatives) data from different exchange feeds and stored historical data.
- Designed and deployed a Kubernetes-based containerized infrastructure for data processing and analytics, increasing data processing capacity by 20%.
- Automated end-to-end workflows using AWS Step Functions and AWS Lambda, improving scheduling reliability and reducing manual intervention by 60%.
- Implemented Elasticsearch-based search solutions, reducing query response times by 50% and improving user experience in real-time analytics applications.
- Created pipelines in AWS Glue using Glue Jobs, Crawlers, and Databases to extract, transform, and load data from sources such as S3, RDS, Redshift, and DynamoDB.
- Implemented horizontal scaling for Elasticsearch clusters, ensuring reliability during peak traffic and supporting 1M+ daily transactions.
- Managed Git repositories for version control, implementing branching strategies for smooth collaboration and code integration, improving codebase integrity and reducing merge conflicts by 20%.
- Led a migration and warehousing initiative, transforming on-premise SQL-based models to AWS Redshift, optimizing storage through partitioning and indexing and cutting storage costs by 20%.
- Architected and deployed AWS serverless infrastructure using Lambda, DynamoDB, and S3 to handle real-time data processing, applying Infrastructure as Code (IaC) principles with AWS CloudFormation to automate resource provisioning and ensure high availability.
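The Step Functions and Lambda workflow automation described above can be sketched in code. The snippet below is a minimal illustration, not the actual production setup: the Glue job name, Lambda ARN, and retry settings are all hypothetical stand-ins.

```python
import json

def build_etl_state_machine(glue_job_name: str, notify_lambda_arn: str) -> str:
    """Build an Amazon States Language definition that runs a Glue job
    and then invokes a Lambda to report status (illustrative sketch)."""
    definition = {
        "Comment": "Hypothetical nightly ETL: Glue job followed by a status Lambda",
        "StartAt": "RunGlueJob",
        "States": {
            "RunGlueJob": {
                "Type": "Task",
                # The .sync integration makes Step Functions wait for the Glue job
                "Resource": "arn:aws:states:::glue:startJobRun.sync",
                "Parameters": {"JobName": glue_job_name},
                "Retry": [{
                    "ErrorEquals": ["States.ALL"],
                    "IntervalSeconds": 60,
                    "MaxAttempts": 3,
                    "BackoffRate": 2.0,
                }],
                "Next": "NotifyStatus",
            },
            "NotifyStatus": {
                "Type": "Task",
                "Resource": notify_lambda_arn,
                "End": True,
            },
        },
    }
    return json.dumps(definition)

# The resulting JSON string would be passed as the `definition` argument
# to boto3's stepfunctions client create_state_machine call.
```

Retries with exponential backoff at the state level are what remove the manual re-run step: a transient Glue failure is retried automatically before the failure surfaces.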
Data Engineer, OSF Saint Francis Medical Center
Jun 2022 - May 2023 | Peoria, Illinois, United States
- Engineered a data transformation pipeline for a healthcare analytics platform on AWS infrastructure using DBT and Redshift, ensuring compliance with HIPAA regulations and improving data processing efficiency by 40%.
- Architected and implemented a dimensional model on AWS Redshift, structuring healthcare data into optimized fact and dimension tables, which improved reporting accuracy and reduced query times by 30%.
- Reduced storage costs by 30% through S3 storage classes such as S3 Glacier and S3 Intelligent-Tiering, optimizing storage of infrequently accessed data while maintaining quick retrieval times.
- Developed and maintained ETL processes using Apache Spark integrated with Hadoop and HDFS, enabling seamless processing and analysis of massive healthcare datasets and improving workflow efficiency.
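S3 storage-class optimization of the kind described above is typically driven by bucket lifecycle rules. A minimal sketch in Python follows; the prefix and day thresholds are illustrative assumptions, not values from this role.

```python
def archival_lifecycle_rule(prefix: str = "reports/cold/") -> dict:
    """Return an S3 lifecycle rule that moves infrequently accessed objects
    to cheaper storage classes (prefix and thresholds are hypothetical)."""
    return {
        "ID": "archive-cold-data",
        "Filter": {"Prefix": prefix},
        "Status": "Enabled",
        "Transitions": [
            # After 30 days, let S3 manage access tiers automatically
            {"Days": 30, "StorageClass": "INTELLIGENT_TIERING"},
            # After 180 days, archive to Glacier for long-term retention
            {"Days": 180, "StorageClass": "GLACIER"},
        ],
    }

# Applied with boto3 roughly as:
# s3.put_bucket_lifecycle_configuration(
#     Bucket="analytics-bucket",
#     LifecycleConfiguration={"Rules": [archival_lifecycle_rule()]},
# )
```

The cost saving comes from transitioning objects whose access pattern has cooled, while Intelligent-Tiering avoids retrieval penalties for data that is still occasionally read.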
Application Developer / Data Engineer, HDFC Bank
Jan 2020 - Aug 2021 | Mumbai, Maharashtra, India
- Engineered a financial model engine for the credit risk platform on Big Data infrastructure using Scala and Spark.
- Migrated Hive queries to Spark transformations using DataFrames, Spark SQL, SQLContext, and Scala, improving processing speed by 40%.
- Developed and optimized data processing pipelines using Java and Spring Boot, integrating with Apache Kafka for real-time data streaming, which improved data ingestion efficiency by 30%.
- Implemented shell scripts to automate system backups and log management for Hadoop clusters, ensuring data integrity and reducing system downtime by 20%.
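The log-management automation in the last bullet was done with shell scripts; the equivalent rotation logic can be sketched in Python for illustration. Directory layout and the seven-day threshold are assumptions, not details from the original scripts.

```python
import tarfile
import time
from pathlib import Path

def archive_old_logs(log_dir: str, archive_dir: str, max_age_days: int = 7) -> list:
    """Compress log files older than max_age_days into a tar.gz archive
    and delete the originals. Returns the archived file names."""
    cutoff = time.time() - max_age_days * 86400
    Path(archive_dir).mkdir(parents=True, exist_ok=True)
    # Select *.log files whose last modification predates the cutoff
    old_logs = [p for p in Path(log_dir).glob("*.log")
                if p.stat().st_mtime < cutoff]
    if not old_logs:
        return []
    archive_name = Path(archive_dir) / f"logs-{int(time.time())}.tar.gz"
    with tarfile.open(archive_name, "w:gz") as tar:
        for p in old_logs:
            tar.add(p, arcname=p.name)  # store by file name only
            p.unlink()  # remove the original once it is safely archived
    return [p.name for p in old_logs]
```

On a real Hadoop cluster this kind of job would run from cron and target the NodeManager and DataNode log directories, keeping local disks from filling up between backups.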
Data Engineer, Ajmera Realty & Infra India Ltd.
Jun 2018 - Dec 2019 | Mumbai, Maharashtra, India
Education
Western Illinois University - Computers and Information Science
Frequently Asked Questions about Sumanth Reddy
What company does Sumanth Reddy work for?
Sumanth Reddy works for Northern Trust.
What is Sumanth Reddy's role at the current company?
Sumanth Reddy's current role is Azure Data Engineer.
What schools did Sumanth Reddy attend?
Sumanth Reddy attended Western Illinois University.
Who are Sumanth Reddy's colleagues?
Sumanth Reddy's colleagues are Greg King, Patty Colella-Work, Scott Boone, Binumol Balakrishnan, Devika B Prasad, Mark Mitchell, David Jed Consunji.