Passionate and dedicated Senior Data Engineer with over 10 years of experience and extensive knowledge of the Hadoop ecosystem and SQL-based technologies. Currently working as a Sr. Big Data Engineer at UnitedHealth Group, designing and deploying multi-tier applications across Azure services with a focus on auto-scaling in the Azure cloud. Previously wrote Hive and SQL queries and migrated data from an on-prem SQL cluster to Azure. Created data pipelines and applied standardizations, transformations, and business logic to generate reports for valuable insights. Experienced in Agile-methodology projects. Highly motivated, with strong interpersonal skills.
Competencies: Hadoop, Big Data, Hive, SQL, PySpark, Scala, Python/Java/R, Machine Learning, Business Intelligence, Tableau, Power BI, and cloud services such as AWS and Azure.
Senior Data Engineer, Empower Retirement, Englewood, CO, US
Lead Enterprise Data Engineer, UnitedHealth Group, Aug 2023 - Present, United States
Senior Data Engineer, UnitedHealth Group, Jun 2022 - Apr 2024, United States
Senior Data Engineer, Empower Retirement, Jan 2020 - Jun 2022, Greenwood Village, Colorado, United States
• Created automated pipelines in AWS CodePipeline to deploy Docker containers to AWS ECS using S3.
• Used the HBase NoSQL database for real-time read/write access to huge volumes of data.
• Extracted real-time feeds using Spark Streaming, converted them to RDDs, processed the data into DataFrames, and loaded it into HBase.
• Developed an AWS Lambda function to invoke a Glue job as soon as a new file arrives in the inbound S3 bucket.
• Created Spark jobs to apply data cleansing and validation rules to new source files in the inbound bucket and route rejected records to a reject-data S3 bucket.
• Developed AWS CloudFormation templates, set up Auto Scaling for EC2 instances, and automated provisioning of the AWS cloud environment using Jenkins.
• Created HBase tables to load large sets of semi-structured data from various sources.
• Loaded customer data and event logs from Kafka into HBase using a REST API.
• Created tables with sort and distribution keys in AWS Redshift.
• Created shell and Python scripts to automate daily tasks, including production tasks.
• Created, altered, and deleted Kafka topics as required.
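The cleansing-and-reject flow described above can be sketched in plain Python rather than Spark; the validation rules and field names here are purely illustrative assumptions, not the actual production rules.

```python
# Minimal sketch of the clean/reject routing pattern: records that fail
# validation go to a reject bucket, the rest continue downstream.
# The rules (non-empty id, non-negative amount) are hypothetical.

def validate_record(record):
    """Return True if a record passes the (hypothetical) validation rules."""
    return bool(record.get("id")) and record.get("amount", -1) >= 0

def split_records(records):
    """Route each record to a 'clean' or 'rejected' list, mirroring the
    inbound-bucket / reject-data-bucket flow."""
    clean, rejected = [], []
    for rec in records:
        (clean if validate_record(rec) else rejected).append(rec)
    return clean, rejected
```

In the Spark version the same predicate would drive a `filter` on the inbound DataFrame, with the complement written to the reject-data S3 prefix.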
Senior Data Engineer, Change Healthcare, Jan 2018 - Dec 2019, Nashville, Tennessee, United States
• Optimized MapReduce jobs to use HDFS efficiently via various compression mechanisms.
• Designed and built a big data analytics platform for processing customer interface preferences and comments using Hadoop, Hive, and Pig on Cloudera.
• Imported and exported data between HDFS/Hive and Oracle and MongoDB using Sqoop.
• Exported analyzed data to relational databases using Sqoop for visualization and report generation for the BI team.
• Read multiple data formats on HDFS using Scala.
• Converted Hive/SQL queries into Spark transformations using Spark RDDs and Scala.
• Installed and configured Pig and wrote Pig Latin scripts.
• Developed multiple POCs in Scala, deployed them on the YARN cluster, and compared the performance of Spark with Hive and SQL.
• Analyzed SQL scripts and designed solutions implemented in Scala.
• Built data platforms, pipelines, and storage systems using Apache Kafka, Apache Storm, and search technologies such as Elasticsearch.
Big Data Developer, Macy's, Jul 2016 - Dec 2017, New York, United States
• Contributed to key data integration and advanced analytics solutions leveraging Apache Hadoop and other big data technologies, using major Hadoop distributions such as Hortonworks.
• Participated in Agile ceremonies, including daily Scrum meetings and sprint planning.
• Performed data transformations in Hive, using partitions and buckets for performance improvements.
• Created Hive external tables on MapReduce output, with partitioning and bucketing applied on top.
• Developed business-specific custom UDFs in Hive and Pig.
• Developed end-to-end architecture designs for big data solutions based on a variety of business use cases.
• Worked as a Spark expert and performance optimizer.
• Member of the Spark COE (Center of Excellence) in a Data Simplification project at Cisco.
• Experienced with SparkContext, Spark SQL, DataFrames, pair RDDs, and PySpark.
• Handled data skewness in Spark SQL.
• Implemented Spark using Scala and Java, utilizing DataFrames and the Spark SQL API for faster data processing.
• Developed Spark and Spark SQL/Streaming code for faster testing and processing of data.
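The data-skew bullet above usually refers to key salting: a skewed ("hot") join or group-by key is split across several salted variants so its rows spread over multiple partitions. A minimal plain-Python sketch of the idea, with an illustrative salt count:

```python
import random

def salt_key(key, hot_keys, num_salts=4, rng=None):
    """Prefix a skewed key with a random salt (0..num_salts-1) so its rows
    spread across num_salts buckets instead of piling onto one partition.
    Non-hot keys pass through unchanged."""
    rng = rng or random
    if key in hot_keys:
        return f"{rng.randrange(num_salts)}_{key}"
    return key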
Hadoop Developer, Experian, Nov 2015 - Jun 2016, Costa Mesa, California, United States
• Implemented solutions for ingesting data from various sources and processing data at rest using big data technologies such as Hadoop, MapReduce frameworks, HBase, and Hive.
• Used Sqoop to transfer data efficiently between databases and HDFS, and Flume to stream log data from servers.
• Delivered the full SDLC of an AWS Hadoop cluster based on the client's business needs.
• Loaded and transformed large sets of structured, semi-structured, and unstructured data from relational databases into HDFS using Sqoop imports.
• Implemented an enterprise-grade platform (MarkLogic) for ETL from mainframe to NoSQL (Cassandra).
• Imported log files from various sources into HDFS using Flume.
• Analyzed data using HiveQL to generate payer reports and payment summaries for transmission to payers.
• Imported millions of structured records from relational databases using Sqoop, processed them with Spark, and stored the data in HDFS in CSV format.
• Used the DataFrame API in Scala to convert distributed collections of data into named columns.
• Performed data profiling and transformation on raw data using Pig, Python, and Java.
• Developed predictive analytics using Apache Spark Scala APIs.
• Performed big data analysis using Pig and user-defined functions (UDFs).
• Created Hive external tables, loaded data into them, and queried the data using HQL.
• Implemented a Spark graph application to analyze guest behavior for data science segments.
Hadoop Developer/Admin, Hudda Infotech, Apr 2014 - Jul 2015, India
• Involved in the end-to-end Hadoop cluster setup process: installation, configuration, and monitoring.
• Automated Hadoop cluster setup and implemented Kerberos security for various Hadoop services using Hortonworks.
• Responsible for cluster maintenance: commissioning and decommissioning data nodes, cluster monitoring, troubleshooting, and managing and reviewing data backups and Hadoop log files.
• Monitored systems and services; handled architecture design and implementation of Hadoop deployments, configuration management, backup, and disaster recovery systems and procedures.
• Installed various Hadoop ecosystem components and Hadoop daemons.
• Installed and configured Hive, Pig, HBase, and Sqoop on the Hadoop cluster.
• Configured property files such as core-site.xml, hdfs-site.xml, and mapred-site.xml based on job requirements.
• Loaded data from the UNIX file system to HDFS, imported and exported data into HDFS using Sqoop, and managed and reviewed Hadoop log files.
• Extracted and ingested data from different sources into the Hadoop data lake ecosystem by creating ETL pipelines using Pig and Hive.
• Managed and reviewed Hadoop log files for troubleshooting; communicated and escalated issues appropriately.
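For illustration, the property files named above hold cluster-wide settings like the following; the hostname and replication factor here are example values, not the actual cluster's configuration:

```xml
<!-- core-site.xml: points clients at the cluster's default filesystem -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://namenode.example.com:8020</value> <!-- example hostname -->
  </property>
</configuration>

<!-- hdfs-site.xml: HDFS-specific settings such as block replication -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>3</value> <!-- illustrative replication factor -->
  </property>
</configuration>
```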
Big Data Developer, Cybage Software, Jun 2012 - Mar 2014, Pune, Maharashtra, India
Education: KL University