Sandeep C Email and Phone Number
Sandeep C is a Senior Data Engineer (Azure | AWS | Kafka) at Clinical Health Network For Transformation (CHN), actively looking for opportunities (C2C only).
Clinical Health Network For Transformation (CHN)
Senior Data Engineer, Clinical Health Network For Transformation (CHN)
Feb 2023 - Present | Philadelphia, Pennsylvania, United States
• Responsible for requirement gathering, understanding business value, and providing analytical data solutions.
• Project migrates data from on-premises systems to the cloud (AWS); sources are MySQL, Oracle, and MongoDB.
• Implemented Kafka as a messaging service and created data pipelines from heterogeneous sources to Snowflake.
• Developed Spark jobs with the Spark Core and Spark SQL libraries for processing the data.
• Worked on IAM security policies to provide fine-grained access to AWS S3 using Lambda functions and DynamoDB.
• Implemented AWS Step Functions to automate and orchestrate SageMaker-related tasks such as publishing data to S3.
• Configured, scheduled, and triggered ETL jobs in an ETL orchestrator using JSON to load the source data into the data lake.
• Created a Snowflake warehouse strategy and set it up to use PUT scripts to migrate a terabyte of data from S3 into Snowflake (a sketch of this load pattern follows below).
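A minimal sketch of the S3-to-Snowflake load described in the last bullet, using snowflake-connector-python. Note that PUT uploads local files to an internal stage; for data already in S3 the usual route is COPY INTO from an external stage, which is what this sketch shows. The account, stage, and table names are hypothetical placeholders, not taken from the profile.

```python
# Hypothetical sketch: bulk-loading staged S3 data into Snowflake.
# All connection details and object names are illustrative placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="xy12345",     # placeholder account locator
    user="etl_user",       # placeholder
    password="***",
    warehouse="LOAD_WH",
    database="ANALYTICS",
    schema="RAW",
)
try:
    with conn.cursor() as cur:
        # Assumes an external stage over the S3 bucket already exists, e.g.:
        #   CREATE STAGE raw_s3_stage URL='s3://my-bucket/exports/' ...;
        cur.execute("""
            COPY INTO RAW.ORDERS
            FROM @raw_s3_stage/orders/
            FILE_FORMAT = (TYPE = PARQUET)
            MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE
        """)
finally:
    conn.close()
```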
Senior Data Engineer, Apex Health
Dec 2021 - Jan 2023 | Houston, Texas, United States
• Developed upgrade and downgrade scripts in SQL that filter corrupted records with missing values and identify unique records based on different criteria.
• Implemented Azure Storage: storage accounts, Blob storage, and Azure SQL Server.
• Built, deployed, and troubleshot data extraction for very large record volumes using Azure Data Factory (ADF) (see the sketch after this list).
• Designed and implemented database solutions in Azure SQL Data Warehouse and Azure SQL.
• Migrated data from traditional database systems to Azure databases.
• Designed and implemented migration strategies for traditional systems on Azure (lift and shift, Azure Migrate, and other third-party tools).
• Implemented DWH/BI projects using Azure Data Factory.
• Built data movement and scheduling on Microsoft Azure for cloud-based technologies such as Azure Blob Storage and Azure SQL Database.
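As an illustration of the ADF work above, here is a minimal sketch that triggers an existing pipeline run through the azure-mgmt-datafactory SDK. The subscription, resource group, factory, and pipeline names are hypothetical, and authentication assumes DefaultAzureCredential can resolve credentials from the environment.

```python
# Hypothetical sketch: starting an Azure Data Factory pipeline run.
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

credential = DefaultAzureCredential()
adf = DataFactoryManagementClient(credential, "<subscription-id>")  # placeholder

run = adf.pipelines.create_run(
    resource_group_name="rg-data",         # placeholder
    factory_name="adf-ingest",             # placeholder
    pipeline_name="CopyOnPremToAzureSql",  # placeholder
    parameters={"loadDate": "2022-06-01"},
)
print("Pipeline run id:", run.run_id)
```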
Data Engineer, Bank of America
Nov 2020 - Nov 2021 | Charlotte, North Carolina, United States
• Proactively monitored systems and services; worked on architecture design and implementation of Hadoop deployment, configuration management, backup, and disaster recovery systems and procedures.
• Analyzed Hadoop clusters using big data analytic tools including Kafka, Pig, Hive, and MapReduce.
• Configured Spark Streaming to receive real-time data from Kafka and store the stream data to HDFS using Scala (a sketch of this pattern follows below).
• Installed and configured Hadoop, MapReduce, and HDFS (Hadoop Distributed File System); developed multiple MapReduce jobs in Java for data cleaning and processing.
• Implemented Spark using Scala and Spark SQL for faster analysis and processing of data.
• Applied Java/J2EE application development skills with object-oriented analysis and was involved throughout the Software Development Life Cycle (SDLC).
• Implemented AWS EC2, key pairs, security groups, Auto Scaling, ELB, SQS, and SNS using the AWS API and exposed them as RESTful web services.
• Created Hive tables, loaded the data, and wrote Hive queries that run internally as MapReduce jobs.
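The Kafka-to-HDFS bullet describes a standard Structured Streaming pattern. The profile mentions Scala; the sketch below shows the same pattern in PySpark (the API is equivalent), with placeholder brokers, topic, and paths, and it assumes the spark-sql-kafka package is available at submit time.

```python
# Hypothetical sketch: streaming Kafka records into HDFS as Parquet.
# Requires the org.apache.spark:spark-sql-kafka-0-10 package on the classpath.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("kafka-to-hdfs").getOrCreate()

events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")  # placeholder brokers
    .option("subscribe", "events")                      # placeholder topic
    .load()
    .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
)

query = (
    events.writeStream
    .format("parquet")
    .option("path", "hdfs:///data/raw/events")                 # placeholder
    .option("checkpointLocation", "hdfs:///checkpoints/events")
    .start()
)
query.awaitTermination()
```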
Big Data Engineer, Wells Fargo
Apr 2019 - Oct 2020 | Hyderabad, Telangana, India
• Designed a data workflow model to create a data lake in the Hadoop ecosystem so that reporting tools like Tableau can plug in to generate the necessary reports.
• Created source-to-target mappings (STMs) for the required tables by understanding the business requirements for the reports.
• Worked in a Snowflake environment to remove redundancy and loaded real-time data from various data sources into HDFS using Kafka.
• Developed PySpark and Spark SQL code to process the data in Apache Spark on Amazon EMR, performing the necessary transformations based on the STMs (see the sketch after this list).
• Created Hive tables on HDFS to store the data processed by Apache Spark on the Cloudera Hadoop cluster in Parquet format.
• Wrote multiple MapReduce programs in Java for data extraction, transformation, and aggregation from multiple file formats, including XML, JSON, CSV, and other compressed formats.
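A minimal PySpark sketch of an STM-driven transform of the kind described above: read raw data, apply the mapped renames and casts, deduplicate, and write Parquet to a Hive-compatible location. All paths, columns, and mappings are illustrative assumptions, not taken from the project.

```python
# Hypothetical sketch: applying a source-to-target mapping in PySpark on EMR.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("stm-transform").getOrCreate()

raw = spark.read.json("s3://my-bucket/raw/customers/")  # placeholder source

curated = (
    raw.select(
        F.col("cust_id").cast("bigint").alias("customer_id"),  # mapped rename + cast
        F.upper(F.col("state")).alias("state_code"),
        F.to_date("signup_ts").alias("signup_date"),
    )
    .dropDuplicates(["customer_id"])  # keep unique records per the STM
)

(curated.write
    .mode("overwrite")
    .partitionBy("signup_date")
    .parquet("hdfs:///warehouse/customers/"))  # Hive external table location
```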
Big Data Engineer, Infosys
Oct 2016 - Mar 2019 | Hyderabad, Telangana, India
Software Developer, Mindtree
Jul 2014 - Sep 2016 | Hyderabad, Telangana, India
Sandeep C Education Details
Acharya Nagarjuna University (ANU), Guntur
Frequently Asked Questions about Sandeep C
What company does Sandeep C work for?
Sandeep C works for Clinical Health Network For Transformation (CHN).
What is Sandeep C's role at the current company?
Sandeep C's current role is Senior Data Engineer (Azure | AWS | Kafka) at Clinical Health Network For Transformation (CHN); he is actively looking for opportunities (C2C only).
What schools did Sandeep C attend?
Sandeep C attended Acharya Nagarjuna University (ANU), Guntur.