Srikanth Reddy

Sr DevOps Engineer @ U.S. Bank
Irving, TX, US
About Srikanth Reddy

As a dedicated Data Engineer with over 8 years of experience, I’ve been at the forefront of transforming complex data into actionable insights, driving business decisions through innovative data solutions. My journey has been fueled by a passion for big data, cloud computing, and the ever-evolving landscape of data engineering.

I specialize in architecting and implementing scalable data pipelines, optimizing ETL processes, and leveraging cloud platforms like AWS and Azure to build resilient data infrastructures. My hands-on experience with tools like Databricks, Apache Spark, and Hadoop has enabled me to tackle diverse challenges, from data normalization to real-time analytics.

Beyond the technical aspects, I thrive in collaborative environments where I can contribute to cross-functional teams, translating business needs into data-driven strategies. I’m always eager to explore new technologies, refine my skills, and take on complex projects that push the boundaries of what's possible in data engineering.

Now, I’m looking for the next opportunity where I can apply my expertise, continue learning, and drive impactful data solutions that make a difference.
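The pipeline work described above follows the classic extract–transform–load pattern. As a rough, hypothetical sketch of that pattern (plain Python rather than Databricks or Spark, with all field names, records, and the in-memory "warehouse" invented for illustration):

```python
# Minimal ETL sketch: extract -> transform -> load.
# All data and names here are hypothetical examples.

def extract(rows):
    """Simulate ingesting raw records from a source system."""
    return list(rows)

def transform(rows):
    """Normalize field names and types before loading."""
    cleaned = []
    for row in rows:
        cleaned.append({
            "id": int(row["id"]),
            "name": row["name"].strip().title(),
            "amount": round(float(row["amount"]), 2),
        })
    return cleaned

def load(rows, warehouse):
    """Append transformed records to the target store."""
    warehouse.extend(rows)
    return len(rows)

raw = [{"id": "1", "name": "  alice ", "amount": "10.503"}]
warehouse = []
loaded = load(transform(extract(raw)), warehouse)
print(warehouse[0]["name"])  # prints "Alice"
```

In a real Databricks deployment the `transform` step would typically be a Spark DataFrame operation and `load` a write to a Delta Table; the staged structure, however, is the same.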

Srikanth Reddy's Current Company Details
U.S. Bank
Sr DevOps Engineer
Irving, TX, US
Employees: 79,904
Srikanth Reddy Work Experience Details
  • U.S. Bank
    Sr DevOps Engineer
    Irving, TX, US
  • USAA
    Sr. Data Engineer
    USAA Jan 2022 - Present
    San Antonio, Texas, US
    • Designed and implemented scalable data pipelines using Databricks on AWS to ingest, transform, and load data from various sources into Delta Tables.
    • Spearheaded the design, setup, and administration of Snowflake database environments, including ERWIN data modeling, schema design, DDL management, and query optimization for high-performance, enterprise-grade data platforms.
    • Implemented robust data governance frameworks, with a strong focus on PII data handling, data security, and real-time monitoring for unauthorized data access or changes, using Snowflake and third-party integration tools such as Okta, SailPoint, and CyberArk.
    • Onboarded and configured Snowflake with industry-leading BI/analytical tools and ETL solutions such as Informatica, ensuring seamless integration of data ingestion, processing, and reporting pipelines.
    • Managed compute and storage resources in Snowflake, applying best practices for database object creation (schemas, tables) and fine-tuning data load operations to achieve optimal performance and cost-efficiency.
    • Developed and enforced data engineering best practices across development lifecycles, including coding standards, source management, CI/CD pipelines, and agile methodologies for streamlined project delivery.
    • Applied over 6 years of experience with AWS cloud technologies (such as Amazon S3) and SQL/NoSQL databases, leveraging SQL and Python for scripting and automating data engineering workflows.
    • Led the integration of Snowflake with CI/CD pipelines using tools such as Jenkins to automate the build, deployment, and monitoring processes, ensuring continuous delivery and quality assurance.
    • Provided expertise in data governance and data access control, creating policies for secure handling of sensitive and PII data, and deploying advanced monitoring and alerting systems to prevent unauthorized access.
  • Comcast
    Sr Data Engineer
    Comcast Jun 2020 - Aug 2021
    Philadelphia, PA, US
    • Demonstrated exceptional proficiency in leveraging Databricks, Snowflake, and Apache Spark to efficiently perform complex data transformations and sophisticated analytics on massive data sets.
    • Designed, developed, and maintained high-performance ETL pipelines using Databricks and Snowflake, employing advanced data partitioning and clustering techniques to optimize query speeds and resource utilization.
    • Designed and implemented scalable data pipelines on AWS for ingesting and processing large datasets into Redshift data warehouses.
    • Optimized Redshift cluster configurations, including node types, distribution keys, and sort keys, to enhance query performance and cost-efficiency.
    • Developed and scheduled ETL processes using AWS Glue and Redshift to extract, transform, and load data from various sources (e.g., S3, RDS, DynamoDB) into Redshift.
    • Spearheaded the migration of on-premise data warehouses to Snowflake, improving scalability and reducing infrastructure costs by 20%.
    • Designed and implemented data pipelines using Talend and AWS Glue to streamline ETL processes.
    • Conducted performance tuning and optimization of Snowflake queries and data models.
    • Provided technical leadership and mentoring to junior data engineers and architects.
    • Utilized Redshift's massively parallel processing (MPP) architecture to perform complex analytical queries and generate business intelligence reports.
    • Implemented data security and access control measures, such as Redshift's column-level encryption and IAM roles, to ensure data privacy and compliance.
    • Collaborated closely with data engineers and architects to architect and fine-tune Snowflake data models, implementing star and snowflake schemas to ensure streamlined data retrieval and analysis.
  • Phillips 66
    Data Engineer
    Phillips 66 Nov 2018 - Jan 2020
    Houston, Texas, US
    • Developed, optimized, and maintained large-scale data pipelines using Apache Spark, processing terabytes of data from multiple sources to ensure fast, reliable, and efficient data delivery across the organization.
    • Collaborated with cross-functional teams including data engineers, business analysts, and solutions architects to design, build, and implement data architectures for strategic enterprise initiatives.
    • Built standardized data processing libraries in Python to streamline ETL processes, reducing code redundancy and improving the efficiency of data transformations by 25%.
    • Led data migration initiatives to move CMS’ data platform into Chase’s cloud environment, ensuring data integrity, security, and seamless integration with existing systems.
    • Advised and coached junior data engineers on best practices for data pipeline development, Spark optimization, and data governance, contributing to a 15% improvement in team efficiency.
    • Designed and implemented automated data pipelines using Spark and Python, reducing manual intervention by 30% and ensuring scalability and flexibility in processing large datasets.
    • Partnered with business stakeholders to define and consolidate company data assets, enabling more informed decision-making through accurate and timely data insights.
    • Utilized AWS services such as EMR, Glue, and S3 to build scalable data processing infrastructure, optimizing the data storage and retrieval process and improving performance by 20%.
    • Implemented automation in data ingestion and transformation processes using AWS Glue, Terraform, and Airflow, reducing overall pipeline maintenance time and improving deployment efficiency.
  • Deloitte
    Data Analyst
    Deloitte Apr 2017 - Oct 2018
    Worldwide
    • Performed exploratory data analysis to identify trends and clusters and build models using various techniques.
    • Automated data cleaning using Python scripts on a combination of unstructured and structured data from multiple sources.
    • Performed large data reads/writes to and from CSV and Excel files using pandas.
    • Collected business requirements and coordinated with other departments to ensure successful project delivery.
    • Handled highly imbalanced datasets using under-sampling with ensemble methods, oversampling, and cost-sensitive algorithms.
    • Developed technical briefs based on business requirements, including detailed steps and timelines.
    • Developed and optimized SQL ETL pipelines for large-scale data ingestion and processing, ensuring high performance and data accuracy across multiple platforms, including Snowflake and SQL Server.
    • Created Airflow DAGs for seamless automation and scheduling of data pipelines, reducing manual intervention and ensuring timely data processing.
    • Leveraged Azure Data Lake (Gen 2) and Databricks to build robust data storage and processing frameworks, enabling real-time data availability and enhancing analytics capabilities.
    • Designed and maintained Databricks notebooks using PySpark, Python, and SQL, ensuring efficient data sourcing, transformation, and storage while adhering to enterprise standards.
    • Applied a DevOps mindset to take ownership of production success, optimizing operations through automation, active alerting, and self-healing mechanisms to ensure high availability and performance.
    • Developed microservices-based applications using Spring Boot, Kubernetes, Docker, Maven, and Jenkins, optimizing deployment and scalability of data-driven solutions in cloud environments.
    • Led cloud migration initiatives from on-prem to Azure and AWS, leveraging Terraform and GitHub Actions for infrastructure as code and ensuring seamless transitions with minimal downtime.
  • EXL
    Python Developer
    EXL Aug 2015 - Mar 2017
    New York, NY, US
    • Developed web applications, RESTful web services, and APIs using Python, Django, and PHP.
    • Gained experience with Django, a high-level Python web framework; automated JIRA processes using Python and Bash scripts.
    • Wrote Python routines to log into websites and fetch data for selected options; automated AWS S3 data uploads and downloads using Python scripts.
    • Developed user interfaces using JSP, JSTL, custom tag libraries, and AJAX to speed up the application.
    • Created REST web services for data management using Apache CXF (JAX-RS).
    • Developed HTML, CSS, JavaScript, and JSP pages for user interaction and data presentation; developed interfaces and their implementation classes to communicate with the mid-tier services using JMS.
    • Created Python and Bash tools to increase the efficiency of the application system.
    • Worked with React JS components, forms, events, keys, routers, animations, and Flux concepts; developed user interfaces using React JS and Flux for SPA development.
    • Built various graphs for business decision-making using the Python Matplotlib library.
    • Extracted data from PostgreSQL, Cassandra, Redis, InfluxDB, and Elasticsearch.
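Among the techniques listed in the experience above is handling imbalanced datasets via under-sampling. As a small, self-contained sketch of random under-sampling (standard library only; the data and function name are hypothetical, and production work would more likely use a library such as imbalanced-learn):

```python
import random

def undersample(rows, label_key="label"):
    """Randomly under-sample every class down to the size of the
    smallest class, producing a balanced dataset."""
    by_class = {}
    for row in rows:
        by_class.setdefault(row[label_key], []).append(row)
    minority_size = min(len(v) for v in by_class.values())
    balanced = []
    for items in by_class.values():
        # sample without replacement from each class
        balanced.extend(random.sample(items, minority_size))
    return balanced

random.seed(0)
# 90 majority-class rows vs. 10 minority-class rows
data = [{"label": 0} for _ in range(90)] + [{"label": 1} for _ in range(10)]
balanced = undersample(data)
counts = {0: 0, 1: 0}
for row in balanced:
    counts[row["label"]] += 1
print(counts)  # both classes now contribute 10 rows each
```

The trade-off is that discarding majority-class rows loses information, which is why the entry above pairs under-sampling with ensemble methods, oversampling, and cost-sensitive algorithms.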

Frequently Asked Questions about Srikanth Reddy

What company does Srikanth Reddy work for?

Srikanth Reddy works for U.S. Bank.

What is Srikanth Reddy's role at the current company?

Srikanth Reddy's current role is Sr DevOps Engineer.
