Akhil S

Looking for New opportunities in Data Engineer Roles | Sr. Data Engineer at Wellcare | Analytics | Data Engineer | Python | Bigdata | Hadoop | Azure | AWS | GCP | Kafka | ETL | Talend | SQL | Snowflake | Databricks | BI @ Well Care Health
Akhil S's Location
Alpharetta, Georgia, United States
About Akhil S

Akhil S is a Senior Data Engineer at Well Care Health, currently looking for new opportunities in data engineering roles.

Akhil S's Current Company Details
Well Care Health

Senior Data Engineer
Akhil S Work Experience Details
  • Well Care Health
    Senior Data Engineer
    Well Care Health May 2020 - Present
    Wilmington, NC, US
    • Worked with Apache Hadoop ecosystem components such as HDFS, Hive, Pig, and MapReduce.
    • Designed AWS Glue pipelines to ingest, process, and store data, interacting with different AWS services.
    • Used Amazon EMR to process big data across Hadoop clusters of virtual servers on Amazon Elastic Compute Cloud (EC2) and Amazon Simple Storage Service (S3).
    • Developed a process to migrate local logs to CloudWatch for better integration and monitoring.
    • Executed programs through the Python API for Apache Spark (PySpark).
    • Helped DevOps engineers deploy code and debug issues.
    • Wrote Hadoop jobs to analyze data in text, sequence-file, and Parquet formats using Hive and Pig.
    • Analyzed the Hadoop cluster and various big-data components including Pig, Hive, Spark, and Impala.
    • Populated database tables via AWS Kinesis Firehose and AWS Redshift.
    • Developed Spark code using Python and Spark SQL for faster testing and data processing.
    • Created Hive external tables, loaded data into them, and queried the data using HQL.
    • Developed ETL modules and data workflows for solution accelerators using PySpark and Spark SQL.
    • Used Spark SQL to process large volumes of structured data.
    • Extracted data from MySQL and AWS Redshift into HDFS using Kinesis.
    • Developed a PySpark application that creates reporting tables with different column maskings in both Hive and MySQL, made available to newly built fetch APIs.
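The column masking used for the reporting tables above can be sketched in a minimal, pure-Python form (the function name and masking rule are illustrative assumptions; in PySpark this kind of logic is typically expressed with `pyspark.sql.functions` column expressions):

```python
def mask_value(value, visible=4):
    """Mask all but the last `visible` characters of a column value.

    Illustrative masking rule only; real masking requirements vary per field.
    """
    if value is None or len(value) <= visible:
        return value
    return "*" * (len(value) - visible) + value[-visible:]


# Applying the rule to a column across a small reporting row set:
rows = [{"account": "1234567890"}, {"account": "9988"}]
masked = [{**r, "account": mask_value(r["account"])} for r in rows]
```

In a real pipeline, different tables would apply different masking rules to the same source columns, which is why the bullet above mentions "different maskings" per target store.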
  • Merck
    Cloud Data Engineer
    Merck Nov 2018 - Apr 2020
    • Configured and deployed Azure Automation scripts for a multitude of applications using the Azure stack (including Compute, Web & Mobile, Blobs, ADF, Resource Groups, Azure Data Lake, HDInsight clusters, Azure Data Factory, Azure SQL, Cloud Services, and ARM), with a focus on automation.
    • Migrated objects from Teradata to Snowflake and created Snowpipe for continuous data loads.
    • Increased consumption of solutions including Azure SQL Database and Azure Cosmos DB.
    • Created continuous integration and continuous delivery (CI/CD) pipelines on Azure to automate steps in the software delivery process.
    • Deployed and managed applications in the datacenter, in virtual environments, and on the Azure platform.
    • Converted Hive/SQL queries into Spark transformations using Spark RDDs and PySpark.
    • Processed and analyzed log data stored in HBase, then imported it into the Hive warehouse, enabling business analysts to write HQL queries.
    • Imported data from various data sources, performed transformations using Hive, and loaded data into HDFS.
    • Designed, developed, and implemented performant ETL pipelines using PySpark and Azure Data Factory.
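The Hive/SQL-to-Spark conversion mentioned above follows an RDD-style map/reduce pattern; a pure-Python sketch (sample data and names are illustrative — in PySpark the same shape is `rdd.map(...).reduceByKey(...)`):

```python
from collections import defaultdict

# SQL equivalent: SELECT dept, SUM(amount) FROM expenses GROUP BY dept
expenses = [("eng", 120), ("ops", 40), ("eng", 80)]

# map step: emit (key, value) pairs
pairs = [(dept, amount) for dept, amount in expenses]

# reduceByKey step: combine values per key
totals = defaultdict(int)
for dept, amount in pairs:
    totals[dept] += amount
```

The point of the conversion is that the GROUP BY becomes a per-key reduction that Spark can parallelize across partitions.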
  • Truist
    Data Engineer
    Truist May 2016 - Oct 2018
    Charlotte, North Carolina, US
    • Installed, configured, and managed the ELK stack (Elasticsearch, Logstash, and Kibana) for log management on AWS EC2 behind an Elastic Load Balancer, and automated the cloud configuration with Ansible.
    • Collected data from an AWS S3 bucket in near real time using Spark Streaming, performed the necessary transformations and aggregations to build the data model, and persisted the data in HDFS.
    • Configured AWS IAM and security groups per requirements and distributed them as groups into the various availability zones of the VPC.
    • Implemented a Spark Streaming consumer job to consume data in near real time from AWS Kinesis and sink it to S3 for downstream consumption.
    • Deployed Spark and Hive applications on the AWS stack.
    • Architected serverless designs using AWS API, Lambda, S3, and DynamoDB, optimized with auto-scaling.
    • Populated database tables via AWS Kinesis Firehose and AWS Redshift.
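The Spark Streaming jobs above process data in micro-batches; a minimal pure-Python sketch of that pattern (the batch source and aggregation are illustrative stand-ins — in Spark this corresponds to a per-trigger-interval aggregation over a DStream or Structured Streaming source):

```python
def process_micro_batches(batches, state=None):
    """Fold each incoming micro-batch of (key, count) events into running state,
    as a streaming job does once per trigger interval."""
    state = dict(state or {})
    for batch in batches:
        for key, count in batch:
            state[key] = state.get(key, 0) + count
    return state


# Two simulated trigger intervals' worth of events:
batches = [
    [("clicks", 3), ("views", 10)],
    [("clicks", 2)],
]
running_totals = process_micro_batches(batches)
```

The running state would then be persisted downstream (HDFS or S3 in the bullets above) after each interval.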
  • Avon Technologies (I) Private Ltd.
    Hadoop Developer
    Avon Technologies (I) Private Ltd. Jun 2014 - Feb 2016
    Hyderabad, Andhra Pradesh, IN
    • Installed and configured Flume, Hive, Pig, and Oozie on the Hadoop cluster.
    • Collected and aggregated large amounts of web log data from sources such as web servers, mobile, and network devices using Apache Flume, and stored the data in HDFS for analysis.
    • Used Spark SQL to handle structured data in Hive.
    • Created Hive tables, loaded data, wrote Hive queries, and built partitions and buckets for optimization.
    • Imported data from various data sources, performed transformations using Hive and MapReduce, loaded data into HDFS, and extracted data from Teradata into HDFS using Sqoop.
    • Wrote and implemented Teradata FastLoad, MultiLoad, and BTEQ scripts, along with DML and DDL.
    • Migrated tables from RDBMS into Hive tables using Sqoop, later generating visualizations from them using Tableau.
    • Analyzed substantial data sets by running Hive queries and Pig scripts.
    • Created partitions and buckets based on state for further processing with bucket-based Hive joins.
    • Transformed data from mainframe tables to HDFS and HBase tables using Sqoop.
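The bucket-based Hive joins mentioned above work because rows are assigned to buckets by hashing the bucketing column, so two tables bucketed the same way can be joined bucket-by-bucket. A pure-Python sketch (the CRC32 hash is an illustrative stand-in, not Hive's actual hash function):

```python
import zlib


def bucket_for(value, num_buckets):
    """Deterministically assign a row to one of `num_buckets` buckets by
    hashing its bucketing column, as Hive does for CLUSTERED BY tables."""
    return zlib.crc32(value.encode("utf-8")) % num_buckets


# Rows sharing a key always land in the same bucket, on both sides of a join:
states = ["NC", "GA", "TX", "NC"]
buckets = [bucket_for(s, 4) for s in states]
```

Because the assignment is deterministic, a join on the bucketing column only needs to compare corresponding buckets rather than the full tables.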
  • Grape Software Limited
    Data Analyst
    Grape Software Limited Aug 2013 - May 2014
    • Worked closely with business requirements, converting them into technical requirements, and worked with data owners and stewards to gather all the data requirements for analysis and report management using Excel pivot tables and presentation charts, applying knowledge of HEDIS metrics.
    • Developed, analyzed, reported, and interpreted complex data for ongoing activities and projects while ensuring all data were accurate.
    • Worked with data governance tools and an extract-transform-load (ETL) processing tool for data mining, data warehousing, and data cleaning using SQL.
    • Used data to identify trends, needs, and opportunities, and prepared reports, visualizations, and recommendations to help the district determine what was working and what needed to change.
    • Integrated Word, Excel, and PowerPoint to make business communication more effective by consolidating scattered information into one place for easy access and analysis.
    • Optimized SQL performance, integrity, and security of the project's databases and schemas.
    • Performed data cleaning, pre-processing, and manipulation using Python.
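The Python data cleaning mentioned above can be sketched with the standard library alone (field names and cleaning rules are illustrative assumptions; in practice this kind of work is often done with pandas):

```python
def clean_records(records):
    """Normalize whitespace and casing, drop rows missing an id,
    and de-duplicate records by id (first occurrence wins)."""
    seen = set()
    cleaned = []
    for rec in records:
        member_id = (rec.get("member_id") or "").strip()
        if not member_id or member_id in seen:
            continue
        seen.add(member_id)
        cleaned.append({
            "member_id": member_id,
            "name": (rec.get("name") or "").strip().title(),
        })
    return cleaned


raw = [
    {"member_id": " 001 ", "name": "akhil s"},
    {"member_id": "001", "name": "Akhil S"},   # duplicate id
    {"member_id": "", "name": "no id"},        # missing id
]
result = clean_records(raw)
```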

Frequently Asked Questions about Akhil S

What company does Akhil S work for?

Akhil S works for Well Care Health

What is Akhil S's role at the current company?

Akhil S's current role at Well Care Health is Senior Data Engineer.
