Ashok Kumar


Snowflake Data Engineer | Data Analyst | Snowflake Certified | ETL Consultant | DBT | Informatica | IICS | Python | Data Modeling | Oracle | Teradata | SQL Server | Jira | Bitbucket | CI/CD | DataStage @ Immense Brains
Ashok Kumar's Location
United States
About Ashok Kumar

Hello! I’m Ashok Kumar, a passionate data engineer inspired by the transformative power of data. I am a Senior Technical Lead with over 13 years of experience delivering value in health tech, retail, and T&H, with a proven track record of managing and mentoring software engineers, implementing agile methodologies, and driving high-ROI projects. I have a strong technical background in software development, with expertise in multiple programming languages and frameworks, and I am skilled in ETL processes, data warehousing, data modeling, Python, and cloud technologies such as AWS and GCP, with experience in requirements gathering. My passion lies in building and maintaining scalable data pipelines, with both batch and real-time solutions, that ensure the smooth ingestion, transformation, and storage of data from various sources, enabling organizations to make data-driven decisions.

• Cloud platforms: AWS (S3, CloudWatch, Step Functions, Lambda, EMR, IAM roles & policies, Glue); GCP (Cloud Storage, BigQuery, Composer, Airflow)
• Big data pipelines: Hive, SQL, Airflow, and a data lake on S3
• Cloud data warehouse: Snowflake
• Databases: Oracle 10g and 8i/9i, Teradata 15.0, SQL Server 15.0, Hive
• ETL/ELT tools: Informatica PowerCenter, DBT, DataStage, and IICS
• Reporting tool: Tableau
• Data modeling: Erwin
• Certifications: SnowPro Core and machine learning specialty
• Software development: Python, Git
• AI: machine learning, predictive analysis

Ashok Kumar's Current Company Details
Immense Brains

Snowflake Data Engineer | Data Analyst | Snowflake Certified | ETL Consultant | DBT | Informatica | IICS | Python | Data Modeling | Oracle | Teradata | SQL Server | Jira | Bitbucket | CI/CD | DataStage
Ashok Kumar Work Experience Details
  • Immense Brains
    Sr Data Engineer
    Immense Brains Jul 2024 - Present
    United States
    • Involved in designing, building, and managing scalable data systems and pipelines to ensure data is accessible, reliable, and usable across the organization.
    • Designed and implemented ETL/ELT pipelines to move and transform data from various sources into target data systems.
    • Built and automated workflows for batch and real-time data ingestion and processing.
    • Designed and managed scalable, cost-efficient storage solutions for both structured and unstructured data.
    • Implemented strategies for data backup, replication, and disaster recovery.
    • Processed, cleansed, and transformed raw data into meaningful formats for analysis.
    • Orchestrated complex workflows by combining different data services to ensure data pipelines run efficiently.
    • Secured data in transit and at rest, ensuring encryption and proper access control.
    • Implemented role-based access control (RBAC) and data masking techniques.
    • Optimized data pipelines for speed, scalability, and cost efficiency.
    • Implemented data quality checks to ensure data integrity is maintained.
    • Worked closely with data scientists, analysts, and business stakeholders to understand data requirements.
    • Provisioned and managed the cloud infrastructure necessary for data engineering workloads.
    • Implemented data governance frameworks to ensure data consistency, accuracy, and proper documentation.
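The data quality checks described in this role can be sketched as a minimal example. This is an illustrative pure-Python sketch, not code from the profile; the dataset, field names, and rules below are hypothetical.

```python
# Minimal sketch of batch data-quality checks (hypothetical data and rules).

def run_quality_checks(rows, required_fields, min_rows=1):
    """Return a list of human-readable failures; an empty list means the batch passes."""
    failures = []
    if len(rows) < min_rows:
        failures.append(f"expected at least {min_rows} rows, got {len(rows)}")
    for i, row in enumerate(rows):
        for field in required_fields:
            if row.get(field) in (None, ""):
                failures.append(f"row {i}: missing required field '{field}'")
    return failures

batch = [
    {"order_id": "A1", "amount": 25.0},
    {"order_id": "", "amount": 40.0},  # fails the not-null rule on order_id
]
issues = run_quality_checks(batch, required_fields=["order_id", "amount"])
```

In a real pipeline the failures would typically be logged or used to quarantine the batch before it reaches downstream tables.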
  • Elevance Health
    Sr. Technical Lead, Data Analyst
    Elevance Health Oct 2022 - Jun 2024
    Project 1: Coro
    • Performed source system analysis, mapping analysis, and end-to-end data lineage analysis.
    • Performed data profiling and data quality checks.
    • Extracted and transformed JSON data and ingested it into Snowflake.
    • Used Time Travel and data retention settings for crucial tables, which helped analyze the data for testing.
    • Wrote SnowSQL scripts to meet business requirements.
    • Wrote complex SQL scripts with joins, subqueries, correlated subqueries, window functions, etc.
    • Migrated legacy transformation code into modular DBT data models.
    • Wrote and optimized SQL queries within DBT to enhance the data transformation process and improve overall performance.
    • Used DBT to convert raw datasets, enabling efficient analysis and reporting.
    • Developed UDFs, UDTFs, and stored procedures for transformations into data warehouse fact and dimension tables.
    • Prepared migration documents and took complete responsibility for deployment activities.
    • Worked with the business in UAT and PROD phases to assist with data deliverables.
    • Worked with the support team to provide knowledge transfer as part of post-production support.
    • Involved in analyzing, troubleshooting, and fixing issues.
    Project 2: Medaware
    • Performed source system analysis, mapping analysis, and end-to-end data lineage analysis.
    • Established a DBT process to improve performance, scalability, and reliability.
    • Migrated legacy transformation code into modular DBT data models.
    • Wrote and optimized SQL queries within DBT to enhance the data transformation process and improve overall performance.
    • Developed ETL pipelines on GCP using Apache Beam and Dataflow to process large-scale data in real time, resulting in a 20% improvement in data processing time.
    • Built and deployed data pipelines using Cloud Composer and Cloud Functions, enabling seamless integration with other GCP services such as BigQuery, Pub/Sub, and Cloud Storage.
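The window-function deduplication mentioned above (e.g. keeping the latest version of each record, as one might with `QUALIFY ROW_NUMBER() OVER (PARTITION BY id ORDER BY updated_at DESC) = 1` in Snowflake) can be illustrated with a pure-Python stand-in. The data and column names here are hypothetical.

```python
# Pure-Python stand-in for a ROW_NUMBER()-style dedup: keep the latest row per key.

def latest_per_key(rows, key, order_by):
    """Keep only the row with the greatest order_by value for each key."""
    latest = {}
    for row in rows:
        k = row[key]
        if k not in latest or row[order_by] > latest[k][order_by]:
            latest[k] = row
    return sorted(latest.values(), key=lambda r: r[key])

records = [
    {"id": 1, "updated_at": "2024-01-01", "status": "new"},
    {"id": 1, "updated_at": "2024-03-01", "status": "shipped"},
    {"id": 2, "updated_at": "2024-02-01", "status": "new"},
]
# ISO-8601 date strings compare correctly as plain strings.
current = latest_per_key(records, key="id", order_by="updated_at")
```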
  • Williams-Sonoma
    Lead Data Engineer II, Data Analyst
    Williams-Sonoma Jul 2019 - May 2022
    Client: WSI
    • Developed Sqoop scripts to enable interaction between the Oracle and Hive databases.
    • Created Hive tables to store processed results in a tabular format.
    • Performed source system analysis, mapping analysis, and end-to-end data lineage analysis.
    • Created Hive scripts to load data from various sources into the Hive database.
    • Developed crontab scripts to automate jobs.
    • Fully involved in delivering the code end to end.
    • Involved extensively in data analysis with the Tableau reporting tool.
    • Collaborated with the support team to transfer knowledge as part of post-production support.
    • Created ad hoc data sources to analyze data and provide solutions to the business.
    • Involved in performance improvement of queries.
  • Syneos Health
    Data Engineer| Data Analyst
    Syneos Health Sep 2018 - Jul 2019
    Client: Syneos Health, Bridgewater, NJ
    • Converted existing Informatica code into Hive scripts to transform data from the staging area to analytics tables.
    • Fine-tuned the converted Hive scripts for better performance.
    • Created Oozie workflows and coordinators for ETL workflow orchestration.
    • Developed shell scripts to orchestrate workflows using the Oozie shell action.
    • Created shell scripts for sequential and parallel processing of tables and for adding dependencies between tables.
    • Involved in SIT and UAT testing phases.
    • Analyzed, created, and developed modern data solutions enabling data visualization, using Azure PaaS services.
    • Contributed to creating PySpark DataFrames in Azure Databricks to read data from Data Lake or Blob storage and manipulate it using the Spark SQL context.
  • Cognizant
    Consultant
    Cognizant Apr 2014 - Jul 2018
    IHG (InterContinental Hotels Group), Rochester, NY
    Project 1: Smart
    • Developed mappings using various transformations such as Update Strategy, Lookup, Stored Procedure, Router, Filter, Sequence Generator, Joiner, Aggregator, Java, Expression, and reusable transformations.
    • Involved in source code review of the mappings created by team members.
    • Prepared migration documents while migrating ETL objects and code from Development to other environments.
    • Extensively used the Teradata utilities FastExport, FastLoad, TPump, MultiLoad, and TPT to load data into the Teradata database.
    • Involved in creating test scripts.
    • Developed the UNIX scripts used in this project.
    • Tuned queries using tools such as Explain Plan for high efficiency.
    • Involved in performance tuning by optimizing sources, targets, mappings, and sessions.
    • Tested the Informatica mappings manually and assisted the QA team.
    • Scheduled jobs using the CAWA scheduler in the Dev and QA environments.
    Project 2: Bowtie
    • Developed mappings using various transformations such as Update Strategy, Lookup, Stored Procedure, Router, Filter, Sequence Generator, Joiner, Aggregator, Java, Expression, and reusable transformations.
    • Involved in source code review of the mappings created by team members.
    • Prepared migration documents while migrating ETL objects and code from Development to other environments.
    • Involved in creating test scripts.
    • Developed the UNIX scripts used in this project.
    • Tuned queries using tools such as Explain Plan for high efficiency.
    • Involved in performance tuning by optimizing sources, targets, mappings, and sessions.
    • Designed mappings to handle slowly changing dimensions.
    • Developed various reusable transformations and mapplets that were used in other mappings.
    • Tested the Informatica mappings manually and assisted the QA team.
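The Router/Filter transformation pattern described above routes each incoming row to the first matching target group. Here is a hypothetical pure-Python sketch of that idea; the route names and predicates are illustrative, not from the profile.

```python
# Hypothetical sketch of router-style routing: each row goes to the first
# matching target group; unmatched rows fall through to a default group.

def route_rows(rows, routes):
    """routes: list of (name, predicate) pairs; unmatched rows go to 'default'."""
    targets = {name: [] for name, _ in routes}
    targets["default"] = []
    for row in rows:
        for name, pred in routes:
            if pred(row):
                targets[name].append(row)
                break
        else:
            targets["default"].append(row)
    return targets

orders = [{"amount": 120}, {"amount": 30}, {"amount": -5}]
routed = route_rows(orders, [
    ("high_value", lambda r: r["amount"] >= 100),
    ("invalid", lambda r: r["amount"] < 0),
])
```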
  • Capgemini
    Consultant
    Capgemini Mar 2011 - Apr 2014
    US Govt (Insurance), Federal Triangle, Washington, DC
    Role & responsibilities:
    • Used the Informatica ETL tool for extracting, transforming, and loading data into the data warehouse.
    • Migrated Informatica code from 7.x to 8.x.
    • Created sessions and reusable worklets in Workflow Manager.
    • Successfully implemented SCDs, using Type 2 dimensions to keep a history of changes.
    • Involved in source code review of the mappings migrated by team members.
    • Created test scripts and was involved in deployment activities.
    • Involved in preparing migration documents while migrating ETL objects and code from Development to other environments.
    • Tested the Informatica mappings and assisted the QA team.
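The Type 2 slowly changing dimension logic mentioned above (close the current row when a tracked attribute changes, then insert a new current row) can be sketched in minimal pure Python. The schema, keys, and dates here are hypothetical placeholders.

```python
# Minimal sketch of SCD Type 2 logic (hypothetical schema): when a tracked
# attribute changes, expire the current dimension row and insert a new one.

def apply_scd2(dimension, incoming, key, tracked, load_date):
    """dimension rows carry 'is_current', 'valid_from', and 'valid_to' fields."""
    current = {r[key]: r for r in dimension if r["is_current"]}
    for row in incoming:
        existing = current.get(row[key])
        if existing and any(existing[c] != row[c] for c in tracked):
            existing["is_current"] = False      # expire the old version
            existing["valid_to"] = load_date
        if existing is None or not existing["is_current"]:
            dimension.append({**row, "is_current": True,
                              "valid_from": load_date, "valid_to": None})
    return dimension

dim = [{"cust_id": 7, "city": "Austin", "is_current": True,
        "valid_from": "2023-01-01", "valid_to": None}]
dim = apply_scd2(dim, [{"cust_id": 7, "city": "Dallas"}],
                 key="cust_id", tracked=["city"], load_date="2024-06-01")
```

Unchanged incoming rows leave the dimension untouched; only genuine attribute changes create a new version.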

Ashok Kumar Education Details

  • SCSVMVU
    Mathematics and Computer Science

Frequently Asked Questions about Ashok Kumar

What company does Ashok Kumar work for?

Ashok Kumar works for Immense Brains.

What is Ashok Kumar's role at the current company?

Ashok Kumar's current role is Snowflake Data Engineer | Data Analyst | Snowflake Certified | ETL Consultant | DBT | Informatica | IICS | Python | Data Modeling | Oracle | Teradata | SQL Server | Jira | Bitbucket | CI/CD | DataStage.

What schools did Ashok Kumar attend?

Ashok Kumar attended SCSVMVU.
