Laxmi Ch


Data Engineer at Blue Cross Blue Shield
Chantilly, Virginia, United States
About Laxmi Ch

Experienced Data Engineer with over 8 years of experience building data pipelines from diverse data sources, maintaining data lakes to increase data-processing efficiency, and modernizing applications originating on mainframes and various RDBMS platforms. Strong believer in working hard and efficiently to improve process times and reduce operating costs. Very strong skills in building applications for data analysis, data processing, and data monitoring using big data tools and cloud-based technologies.

Laxmi Ch's Current Company Details
Blue Cross Blue Shield

Laxmi Ch Work Experience Details
  • Blue Cross Blue Shield
    Senior Data Engineer
    Blue Cross Blue Shield Jan 2022 - Present
    Chicago, Illinois, United States
    • Developed and maintained data pipelines to extract, transform, and load (ETL) data from various sources into MongoDB for a high-throughput analytics platform.
    • Synthesized new examples with the Synthetic Minority Oversampling Technique (SMOTE) to handle minority-class imbalance in the dataset.
    • Analyzed and found patterns and insights within structured and unstructured data using machine learning algorithms such as Logistic Regression, Decision Trees, Random Forests, and Support Vector Machines for fraud prediction.
    • Implemented deep learning algorithms such as autoencoders and Long Short-Term Memory (LSTM) networks to detect fraudulent insurance claims from policy history.
    • Integrated external data sources, such as Customer Relationship Management (CRM) systems and web analytics tools, with Salesforce Audience Builder from Salesforce Marketing Cloud using ETL processes.
    • Developed and implemented data processing pipelines using Apache Spark to analyze large-scale datasets for predictive modeling and machine learning tasks.
    • Participated in all phases of feature selection, feature engineering, model development, and model validation.
    • Performed data cleaning and ensured data quality, consistency, and integrity using NumPy and Pandas.
    • Built detection and classification models using Scikit-Learn in Python, with Amazon SageMaker as the cloud machine learning platform.
    • Conducted exploratory data analysis and feature engineering on a large dataset using Apache Spark DataFrames and Spark SQL, extracting valuable insights and improving the performance of subsequent machine learning models.
    • Used performance metrics such as Precision, Recall, F-Score, Receiver Operating Characteristic curves, and Area Under the Curve to evaluate the performance of each model.
    • Collaborated with cross-functional teams to identify key performance indicators and implemented them in Tableau.
  • Delta Air Lines
    Data Engineer/ Data Analyst
    Delta Air Lines Aug 2018 - Dec 2021
    Atlanta, Georgia, United States
    Project 1: Data Analyst
    • Implemented Apache Spark Streaming to perform real-time analysis and monitoring of streaming data sources, enabling timely decision-making and actionable insights.
    • Performed preliminary data analysis using descriptive statistics and handled anomalies, removing duplicates and imputing missing values after extracting data with SQL.
    • Designed and maintained a distributed database (Apache Cassandra) to handle large-scale time-series data storage and retrieval.
    • Applied supervised learning algorithms for regression and classification, such as Linear Regression and Decision Trees, using the Scikit-Learn package in Python.
    • Employed ensemble methods such as Bagging and Random Forests, and boosting methods such as AdaBoost and Gradient Boosting, to enhance model performance.
    • Performed hyperparameter tuning with Grid Search to find the optimal model.
    Project 2: SQL Server Developer/DBA
    • Created alerts, notifications, and emails for system errors, insufficient resources, fatal database errors, hardware errors, and security breaches.
    • Performed routine DBA tasks such as handling user permissions and space issues on production and semi-production servers, and managing maintenance jobs.
    • Set up maintenance plans to keep indexes and statistics up to date.
    • Created SSIS packages for loading data into the warehouse.
    • Performed data reconciliation, validation, and error handling after extracting data into SQL Server.
    • Implemented proper security policies on production servers.
    • Checked assigned tickets daily and resolved them within the SLA according to priority level.
    • Worked on database migration from on-premises to the Azure cloud using Azure Database Migration Service (lift and shift), database export, and Data Migration Assistant.
  • Softlabs Group - Innovation & Global Outsourcing Services
    Data Engineer
    Softlabs Group - Innovation & Global Outsourcing Services Apr 2015 - Dec 2017
    India
    • Evaluated business requirements, prepared detailed specifications, and led big data initiatives focused on the Hadoop ecosystem, Spark, Kafka, and RDBMS technologies, including analysis, proofs of concept, and architecture.
    • Developed and maintained Apache Hadoop clusters, configured Hive, and used Hive UDFs for efficient data processing in the Hadoop environment.
    • Designed and implemented data pipelines using tools such as Pig and Sqoop to ingest and transform datasets within an on-premises Hadoop Distributed File System (HDFS).
    • Developed ETL processes from data sources such as Ab Initio files and Excel spreadsheets using Java and Scala in on-premises environments; utilized relational databases such as Teradata and Oracle for data warehousing.
    • Used Unix/Linux shell scripting to manage and optimize data processing tasks within Hadoop clusters.
    • Developed and managed complex database objects such as stored procedures, functions, packages, and triggers using SQL and PL/SQL within RDBMS environments.
    • Loaded and processed customer data and event logs from Kafka into HBase, created tables in MySQL, and transferred data from DB2 to AWS S3 using Sqoop and Pig for efficient data processing and storage.
    • Implemented workflows with Apache Oozie to manage and schedule data processing tasks in on-premises Hadoop environments, ensuring streamlined execution of data pipelines.
    • Used Informatica for data integration in traditional on-premises environments, overseeing periodic upgrades and ETL process development.
    • Used Oozie workflows to run Pig and Hive jobs; extracted files from MongoDB through Sqoop and placed them in HDFS.
    • Developed dashboards and visualizations to help business users analyze data and provided insights to upper management, with a focus on products such as SQL Server Reporting Services (SSRS) and Power BI.
    • Implemented CI/CD pipelines using Jenkins, Git, Ant, and Bamboo for efficient, automated deployment processes.
  • Birlasoft
    Data Analyst/ Data Engineer
    Birlasoft Apr 2014 - Mar 2015
    Hyderabad, Telangana, India
    • Worked in an Agile environment, managing Scrum stories and sprints, with a focus on Python-based data analytics and Excel data extraction.
    • Developed Scala scripts and UDFs using Spark DataFrames/SQL and RDD/MapReduce for data aggregation and queries, writing data back into the OLTP system directly or through Sqoop.
    • Developed Python programs to manipulate data from various Teradata sources and convert it into CSV files.
    • Extensively used Informatica for the extraction, transformation, and loading of source data from Oracle, DB2, and flat files spanning more than 500 tables.
    • Conducted statistical data analysis and visualization using Python Pandas, identifying trends and patterns through techniques such as exploratory data analysis and clustering.
    • Prepared reports using MS Excel (VLOOKUP, HLOOKUP, pivot tables, macros, data points).
    • Created clear and compelling data visualizations using tools such as Tableau to effectively communicate findings to stakeholders.
    • Developed Python scripts for data manipulation and wrote SQL queries to extract and transform data for various analytics and reporting purposes.
    • Assisted in defining, monitoring, and reporting on key performance indicators (KPIs) to assess the effectiveness of data-driven initiatives and provided actionable insights.
    • Gathered requirements from the Director of Engineering, implemented them in custom workflows, and created automation rules between Bitbucket and JIRA.
    • Tested ETL mappings, wrote complex SQL queries for data validation, and performed back-end testing, data quality assessments, and KPI monitoring.
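The SMOTE oversampling mentioned in the Blue Cross Blue Shield role can be illustrated with a minimal sketch. This is not code from that project (production work would typically use a library such as imbalanced-learn); `smote_oversample` is a hypothetical name, and the example only shows the core idea: interpolating between a minority sample and one of its nearest minority-class neighbours.

```python
import numpy as np

def smote_oversample(X_min, n_new, k=3, rng=None):
    """Create n_new synthetic minority samples by interpolating each
    randomly chosen sample toward one of its k nearest minority neighbours."""
    rng = np.random.default_rng(rng)
    X_min = np.asarray(X_min, dtype=float)
    n = len(X_min)
    # pairwise Euclidean distances within the minority class
    dist = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=-1)
    np.fill_diagonal(dist, np.inf)          # a point is not its own neighbour
    neighbours = np.argsort(dist, axis=1)[:, :k]
    synthetic = np.empty((n_new, X_min.shape[1]))
    for m in range(n_new):
        i = rng.integers(n)                                  # random minority sample
        j = neighbours[i, rng.integers(min(k, n - 1))]       # one of its k neighbours
        gap = rng.random()                                   # interpolation factor in [0, 1)
        synthetic[m] = X_min[i] + gap * (X_min[j] - X_min[i])
    return synthetic
```

Because each synthetic point lies on a segment between two real minority samples, the generated data stays inside the minority class's convex hull rather than duplicating existing rows.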
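The hyperparameter tuning with Grid Search noted under the Delta Air Lines role boils down to exhaustively scoring every parameter combination. A minimal sketch, assuming a caller-supplied `evaluate` callback (in practice Scikit-Learn's `GridSearchCV` would handle this along with cross-validation):

```python
from itertools import product

def grid_search(evaluate, param_grid):
    """Exhaustive grid search: score every combination in param_grid
    with `evaluate` and return (best_score, best_params)."""
    best_score, best_params = float("-inf"), None
    keys = sorted(param_grid)
    for values in product(*(param_grid[k] for k in keys)):
        params = dict(zip(keys, values))
        score = evaluate(params)          # e.g. cross-validated accuracy
        if score > best_score:
            best_score, best_params = score, params
    return best_score, best_params
```

The grid grows multiplicatively with each added parameter, which is why randomized or Bayesian search is often preferred once the grid gets large.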
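The ETL pattern that recurs across these roles (extract rows from a source file, normalise fields, load into a target table) can be sketched in a few lines. This is an illustration only, using Python's standard-library `sqlite3` as a stand-in for the warehouses named above; the `events` table and `load_events` function are hypothetical.

```python
import csv
import io
import sqlite3

def load_events(conn, csv_text):
    """Minimal ETL step: extract rows from CSV text, normalise fields,
    and load them into a staging table (idempotent upsert on event_id)."""
    conn.execute("""CREATE TABLE IF NOT EXISTS events
                    (event_id INTEGER PRIMARY KEY, user TEXT, amount REAL)""")
    rows = [(int(r["event_id"]),
             r["user"].strip().lower(),       # transform: normalise names
             round(float(r["amount"]), 2))    # transform: fix precision
            for r in csv.DictReader(io.StringIO(csv_text))]
    conn.executemany("INSERT OR REPLACE INTO events VALUES (?, ?, ?)", rows)
    conn.commit()
    return len(rows)
```

Keying the upsert on `event_id` makes the load idempotent, so a failed batch can simply be re-run, which is the usual design goal for scheduled pipeline jobs.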
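The data reconciliation and validation work mentioned in the Delta and Birlasoft roles typically means comparing a source and target table after a load. A minimal sketch, again using `sqlite3` as a stand-in database and a hypothetical `reconcile` helper:

```python
import sqlite3

def reconcile(conn, source, target, key):
    """Data reconciliation after an ETL load: compare row counts and
    report key values that exist on only one side."""
    src_n = conn.execute(f"SELECT COUNT(*) FROM {source}").fetchone()[0]
    tgt_n = conn.execute(f"SELECT COUNT(*) FROM {target}").fetchone()[0]
    # keys loaded from source but absent in target
    missing = [r[0] for r in conn.execute(
        f"SELECT {key} FROM {source} EXCEPT SELECT {key} FROM {target}")]
    # keys present in target that the source never contained
    extra = [r[0] for r in conn.execute(
        f"SELECT {key} FROM {target} EXCEPT SELECT {key} FROM {source}")]
    return {"source_rows": src_n, "target_rows": tgt_n,
            "missing_in_target": missing, "unexpected_in_target": extra}
```

Count checks catch gross load failures cheaply, while the key-level `EXCEPT` queries pinpoint exactly which rows to investigate.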

Laxmi Ch Education Details

Frequently Asked Questions about Laxmi Ch

What company does Laxmi Ch work for?

Laxmi Ch works for Blue Cross Blue Shield.

What is Laxmi Ch's role at the current company?

Laxmi Ch's current role is Data Engineer at Blue Cross Blue Shield.

What schools did Laxmi Ch attend?

Laxmi Ch attended Wayne State University and KL University.
