Dileep Kumar

Dileep Kumar Email and Phone Number

Data Engineer/Analyst | Building ETL and Data Pipelines, and Scalable Data Platforms | Azure | ETL | Python, SQL | SQL Server, MySQL | NoSQL | Data Integration/Analysis | Power BI | Reporting | Open to W2 roles | @ Charles Schwab
San Francisco, California, United States
Dileep Kumar's Location
Edmond, Oklahoma, United States
About Dileep Kumar

Data Engineer with over 3 years of experience in designing, building, and optimizing data pipelines and scalable architectures for fast-paced, data-driven organizations. Proficient in ETL processes, data warehousing, and big data technologies such as Spark, Hadoop, and SQL. Demonstrated success in transforming complex datasets into actionable insights, improving data accessibility and reliability for business intelligence and analytics teams. Known for leveraging cloud platforms (AWS, Azure) to implement secure, high-performance data solutions.

Dileep Kumar's Current Company Details
Charles Schwab

Website:
schwab.com
Employees:
20505
Dileep Kumar Work Experience Details
  • Charles Schwab
    Data Engineer
    Charles Schwab Nov 2023 - Present
    Oklahoma, United States
    • Architect and optimize data pipelines for ingesting and transforming large volumes of financial data using AWS services (Glue, Redshift, S3) to handle critical data workflows and ensure data accuracy.
    • Collaborate with compliance and legal teams to implement data governance and security frameworks that adhere to SEC and FINRA guidelines, using AWS KMS for encryption and IAM for strict access control policies.
    • Spearhead deployment of real-time streaming solutions using Apache Kafka and Spark Streaming, allowing faster transaction processing and enabling front-office systems to handle high-throughput, low-latency data needs.
    • Implement data quality validation within ETL workflows using SQL and Python, reducing data errors by 30% and improving the reliability of reports used by executive decision-makers and data analysts.
    • Design and maintain data models in Snowflake and Redshift, optimized for financial reporting, customer analytics, and risk assessment; reduced query times by 25% by refining table structures and indexing.
    • Lead initiatives to integrate machine learning capabilities into the data pipeline, supporting predictive analytics for customer behavior and investment trend analysis, using AWS SageMaker for deployment.
    • Mentor junior engineers in best practices for data engineering within regulated environments, including data lifecycle management and GDPR compliance, ensuring consistent quality and security across projects.
  • Barclays
    Data Engineer
    Barclays Jun 2020 - Sep 2022
    India
    • Designed and implemented scalable ETL processes with Apache Airflow and Hadoop to manage high-frequency transaction data processing, providing stable and robust infrastructure for reporting and compliance.
    • Developed and maintained credit risk and fraud detection pipelines using Spark and Kafka for real-time data processing, reducing fraud detection latency by 40% and enhancing risk mitigation capabilities.
    • Integrated data anonymization and data masking techniques within data pipelines to ensure GDPR compliance and data security, leveraging AWS Glue for automated transformations and encryption tools.
    • Built a data lake architecture on AWS to consolidate customer data from CRM systems, transaction data, and third-party sources, creating a single source of truth to enable comprehensive customer analytics.
    • Optimized existing data warehousing solutions on Oracle and Snowflake, implementing table partitioning, clustering, and indexing strategies that improved query performance by 20% across the finance team.
    • Partnered with finance and risk teams to develop customer 360° views for personalized marketing and risk modeling, integrating data across business units to support unified insights.
    • Led a data engineering team in automating and scaling workflows using Python and SQL, reducing data preparation time by 50% and enabling faster reporting cycles for regulatory compliance.
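
The data quality validation described in the Charles Schwab role (rejecting bad rows inside an ETL workflow) can be sketched as a small Python step; this is a minimal illustration, not the actual pipeline, and the field names (`account_id`, `amount`) are hypothetical.

```python
# Hypothetical row-level data-quality gate for an ETL step.
# Field names and rules are illustrative only.

def validate_records(records):
    """Split records into valid rows and rows failing basic quality checks."""
    valid, errors = [], []
    seen_ids = set()
    for row in records:
        problems = []
        if not row.get("account_id"):
            problems.append("missing account_id")
        elif row["account_id"] in seen_ids:
            problems.append("duplicate account_id")
        amount = row.get("amount")
        if not isinstance(amount, (int, float)) or amount < 0:
            problems.append("invalid amount")
        if problems:
            errors.append({"row": row, "problems": problems})
        else:
            seen_ids.add(row["account_id"])
            valid.append(row)
    return valid, errors
```

In practice a check like this would quarantine the `errors` rows for review rather than silently dropping them, which is how a 30% reduction in downstream data errors would typically be measured.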
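The data anonymization and masking mentioned in the Barclays role is commonly done with deterministic pseudonymization: hash each sensitive value with a salt so joins still work but originals are unrecoverable. A minimal sketch, assuming hypothetical field names and salt:

```python
import hashlib

# Illustrative GDPR-style pseudonymization; salt and field names are
# assumptions for the example, not taken from any real pipeline.

def mask_value(value, salt="pipeline-salt"):
    """Replace a sensitive value with a stable, irreversible token."""
    digest = hashlib.sha256((salt + value).encode("utf-8")).hexdigest()
    return digest[:12]  # same input always yields the same short token

def anonymize(record, sensitive_fields=("name", "email")):
    """Return a copy of the record with sensitive fields pseudonymized."""
    return {
        key: mask_value(str(val)) if key in sensitive_fields else val
        for key, val in record.items()
    }
```

Because the mapping is deterministic, the same customer hashes to the same token across tables, so analytics joins survive masking while raw identifiers never leave the pipeline.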

Dileep Kumar Education Details

  • Oklahoma Christian University
  • Acharya Nagarjuna University

Frequently Asked Questions about Dileep Kumar

What company does Dileep Kumar work for?

Dileep Kumar works for Charles Schwab

What is Dileep Kumar's role at the current company?

Dileep Kumar's current role is "Data Engineer/Analyst | Building ETL and Data Pipelines, and Scalable Data Platforms | Azure | ETL | Python, SQL | SQL Server, MySQL | NoSQL | Data Integration/Analysis | Power BI | Reporting | Open to W2 roles".

What schools did Dileep Kumar attend?

Dileep Kumar attended Oklahoma Christian University and Acharya Nagarjuna University.

Who are Dileep Kumar's colleagues?

Dileep Kumar's colleagues are Don Greenberg, Robert Niemeyer, Jonnalee Owens, Erick Tran, Jared Smith, Taylor Andrews, and Danny Susca.
