Pradeep C


Actively seeking new roles as Data Engineer. Skills: Big Data Technologies | Cloud Computing: AWS, Azure & GCP | Hadoop Clusters | Data Processing | Data Streaming | Machine Learning | Informatica. @ AbbVie
North Chicago, Illinois, United States
Pradeep C's Location
West Haven, Connecticut, United States
About Pradeep C

  • Senior Data Engineer with over 10 years of experience specializing in designing data-intensive applications using the Hadoop ecosystem, Big Data analytics, cloud data engineering, data warehousing, reporting, and data-quality solutions.
  • Expert in building and managing distributed data systems on cloud platforms such as AWS and Azure, ensuring seamless data processing and storage.
  • Hands-on experience with AWS cloud services including S3, EMR, EC2, Lambda, Redshift, Athena, and AWS Glue, optimizing data storage and retrieval.
  • Managed the deployment and lifecycle of AWS RDS instances, ensuring high availability and performance of relational databases.
  • Demonstrated proficiency in integrating Apache NiFi with Azure services such as Azure Data Lake, Blob Storage, and Azure SQL Database for secure and efficient data transfer.
  • Automated and scheduled data workflows using Azure Databricks and Azure Data Factory, implementing triggers, schedules, and event-based pipelines for streamlined operations.

Pradeep C's Current Company Details
AbbVie

North Chicago, Illinois, United States
Website:
abbvie.com
Employees:
46,234
Pradeep C Work Experience Details
  • AbbVie
    Senior Data Engineer
    AbbVie Jan 2023 - Present
    Chicago, Illinois, United States
    Designed and implemented complex data integration pipelines using Azure Data Factory and Azure Data Lake, ensuring seamless movement and transformation of pharmaceutical and healthcare data for analytics and reporting. Optimized data processing workflows with Azure Databricks and dbt (Data Build Tool) for faster transformations and model training, supporting efficient analysis and market insights. Developed and managed scalable data lake infrastructure on Azure Cloud, utilizing Azure Synapse SQL for efficient data analysis and reporting. Implemented Databricks notebooks in Python with PySpark to preprocess pharmaceutical and healthcare data for reporting and analytics, including data cleansing, enrichment, aggregation, and de-normalization. Utilized PostgreSQL's full-text search capabilities to improve application search functionality. Designed scalable data processing architectures leveraging Hadoop and Spark technologies to manage growing data volumes and user demands effectively. Implemented data governance strategies to ensure compliance with GDPR, HIPAA, and other regulatory requirements, safeguarding pharmaceutical data security and privacy. Engineered high-performance data ingestion frameworks with Scala and Apache Flink for real-time analytics and event processing.
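    The preprocessing steps named above (cleansing, enrichment, aggregation, de-normalization) could be sketched roughly as follows. This is a hypothetical pandas example for illustration only; in a Databricks notebook the same operations would map to PySpark DataFrame calls (`dropna`, `join`, `groupBy`), and every table and column name here is invented.

```python
import pandas as pd

# Hypothetical claims data; column names are invented for illustration.
claims = pd.DataFrame({
    "claim_id": [1, 2, 3, 4],
    "drug_code": ["A10", "A10", None, "B20"],
    "amount": [120.0, 80.0, 50.0, None],
})
drugs = pd.DataFrame({
    "drug_code": ["A10", "B20"],
    "drug_name": ["Metformin", "Atorvastatin"],
})

# Cleansing: drop rows missing a required key or measure.
clean = claims.dropna(subset=["drug_code", "amount"])

# Enrichment / de-normalization: join the reference table onto the facts.
enriched = clean.merge(drugs, on="drug_code", how="left")

# Aggregation: total spend per drug for reporting.
report = (enriched.groupby("drug_name", as_index=False)["amount"]
          .sum()
          .rename(columns={"amount": "total_amount"}))
print(report)
```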
  • Ohio Department of Agriculture
    Senior Data Engineer
    Ohio Department of Agriculture Feb 2020 - Dec 2022
    Reynoldsburg, Ohio, United States
    Designed and implemented end-to-end data solutions using AWS Glue, AWS S3, and AWS Redshift, optimizing data processing for agricultural data management. Integrated AWS into ETL workflows to extract, transform, and load agricultural data from various sources, ensuring alignment with business requirements. Developed and optimized Python-based ETL pipelines in both legacy and distributed environments, enhancing data processing efficiency. Utilized Spark with Python-based pipelines on AWS EMR for loading data to the Enterprise Data Lake (EDL) using AWS S3 as a storage layer. Developed and optimized NoSQL data models tailored for agricultural use cases, improving data retrieval performance and reducing storage costs. Integrated NoSQL databases with ETL pipelines to facilitate seamless data ingestion, transformation, and loading, ensuring enhanced data availability and accessibility. Employed NoSQL databases in real-time analytics and monitoring systems, enabling rapid processing and visualization of streaming agricultural data.
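    An extract-transform-load pipeline of the kind described above follows a standard shape. The minimal sketch below uses only the Python standard library so it is self-contained; in production the extract would read from S3 (e.g. an AWS Glue DynamicFrame or `spark.read` on EMR) and the load would write Parquet to S3 or COPY into Redshift. All field names and data are invented.

```python
import csv
import io

# Hypothetical source data; in production this would come from S3.
RAW = """farm_id,crop,yield_tons
F001,corn,12.5
F002,soy,
F003,corn,9.0
"""

def extract(text):
    """Parse CSV rows from the raw source."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Drop incomplete records and cast the measure to float."""
    out = []
    for r in rows:
        if r["yield_tons"]:  # skip rows with a missing measure
            out.append({**r, "yield_tons": float(r["yield_tons"])})
    return out

def load(rows):
    """Stand-in for writing Parquet to S3 / loading into Redshift."""
    return {r["farm_id"]: r["yield_tons"] for r in rows}

warehouse = load(transform(extract(RAW)))
print(warehouse)  # the incomplete F002 row is filtered out
```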
  • M&T Bank
    Data Analytics Engineer
    M&T Bank May 2017 - Jan 2020
    Buffalo, New York, United States
    Designed and implemented ETL pipelines using Azure Databricks and Azure Data Factory to extract, transform, and load data from various sources for M&T, supporting operational and analytical needs. Contributed to the development of an Azure cloud-based data center on Azure Data Lake, ensuring scalable and secure data storage solutions. Designed and implemented cloud data integration solutions using Informatica Cloud services, facilitating seamless connectivity between on-premises and cloud-based platforms. Developed Python-based data pipelines for ETL tasks, ensuring data quality and consistency in financial data processing. Created Python-based data visualization tools using Matplotlib, Seaborn, and Plotly, providing insightful graphical representations of financial data. Utilized Apache Spark and Hadoop ecosystem tools (MapReduce, Hive) to develop scalable data processing solutions at M&T, enabling efficient analysis of large financial datasets. Implemented SAS data processing and statistical analysis routines to derive actionable insights from financial data.
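    The data-quality and consistency checks mentioned for the financial pipelines can be sketched as simple per-field validation rules applied to each record before load. This is a hypothetical illustration; the rule names, field names, and thresholds below are all invented.

```python
# Hypothetical row-level quality checks for a financial ETL pipeline.
# Field names and rules are invented for illustration.
RULES = {
    "account_id": lambda v: isinstance(v, str) and len(v) == 8,
    "balance": lambda v: isinstance(v, (int, float)) and v >= 0,
    "currency": lambda v: v in {"USD", "EUR"},
}

def validate(row):
    """Return the list of rule names that the record fails."""
    return [field for field, check in RULES.items()
            if not check(row.get(field))]

rows = [
    {"account_id": "AC100234", "balance": 1520.0, "currency": "USD"},
    {"account_id": "BAD", "balance": -5, "currency": "GBP"},
]
failures = {r["account_id"]: validate(r) for r in rows}
print(failures)  # clean rows map to an empty failure list
```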
  • Global Logics
    Data Analyst
    Global Logics Jan 2014 - Mar 2017
    Hyderabad, Telangana, India
    Utilized the Pandas library in Python to analyze large datasets, extract meaningful insights, and perform advanced statistical analysis, supporting data-driven decision-making processes. Proficient in advanced Excel functions, pivot tables, and data manipulation and visualization, enhancing data analytics capabilities. Implemented data preprocessing, feature engineering, and model building using SAS, contributing to predictive analytics and business intelligence. Developed and maintained ETL processes using Alteryx, ensuring seamless data extraction, transformation, and loading for operational efficiency. Created professional presentations and reports in PowerPoint, effectively communicating data-driven insights to stakeholders. Designed and developed reports in SSRS, providing detailed insights for various business processes and operational areas. Strong experience gathering requirements from stakeholders and transforming business requirements into functional specifications.
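    The Excel pivot-table style of analysis described above has a direct pandas equivalent in `DataFrame.pivot_table`. A minimal sketch, with invented sample data:

```python
import pandas as pd

# Hypothetical sales records; names and figures are invented.
sales = pd.DataFrame({
    "region": ["East", "East", "West", "West"],
    "quarter": ["Q1", "Q2", "Q1", "Q2"],
    "revenue": [100, 150, 200, 250],
})

# Equivalent of an Excel pivot table: regions as rows, quarters as
# columns, summed revenue in the cells.
pivot = sales.pivot_table(index="region", columns="quarter",
                          values="revenue", aggfunc="sum")
print(pivot)
```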

Frequently Asked Questions about Pradeep C

What company does Pradeep C work for?

Pradeep C works for AbbVie.

What is Pradeep C's role at the current company?

Pradeep C's current role is Senior Data Engineer at AbbVie.

Who are Pradeep C's colleagues?

Pradeep C's colleagues are Hayley Barnard, Ph.D., Sara Durkin, André Rodrigues, MBA, ITIL, Karen Rogers, Lesley Deane, Michelle Taylor, and Rene Goyer.
