Lakshmi S

Lakshmi S Email and Phone Number

Senior Big Data Engineer | Innovating Algorithms, ML Models, and CI/CD Integration for Advanced Systems. Actively looking for C2C Job Opportunities. @ Truist
Charlotte, North Carolina, United States
Lakshmi S's Location
Charlotte, North Carolina, United States
About Lakshmi S

I am a Senior Big Data Engineer at Truist, where I leverage my 10+ years of programming and software development experience to create scalable and efficient data processing pipelines using Spark, Python, AWS, and Azure. I use Agile/SCRUM methodologies to drive software development cycles from requirement gathering to deployment, and I am proficient in developing Spark applications using PySpark and Spark-SQL for data extraction, transformation, and aggregation. My data engineering skills enable me to uncover insights into customer usage patterns and support data-driven decision making.

I have also worked as a Big Data Engineer at GSK and at Clarivate Analytics, where I designed and implemented end-to-end data solutions in Azure and Hadoop, migrated data from on-premises systems to the cloud, and created data pipelines using Dataflow, Beam, Airflow, and Composer.

I hold a B.Tech degree from GITAM Deemed University and have multiple certifications in Big Data and cloud technologies. I am passionate about innovating algorithms, ML models, and CI/CD integration for advanced systems, and I am actively looking for C2C/C2H job opportunities.
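The extraction-transformation-aggregation pipelines this profile describes typically take a shape like the following minimal sketch. Every path, column name, and the "usage pattern" rollup here are hypothetical placeholders for illustration, not details taken from this profile:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("usage-pattern-aggregation").getOrCreate()

    # Extraction: read raw event data (hypothetical S3 path).
    events = spark.read.parquet("s3://example-bucket/raw/customer_events/")

    # Transformation: drop malformed rows and derive an event date.
    cleaned = (
        events
        .filter(F.col("event_type").isNotNull())
        .withColumn("event_date", F.to_date("event_ts"))
    )

    # Aggregation: daily event counts per customer, the kind of rollup
    # that feeds "customer usage pattern" analysis.
    daily_usage = (
        cleaned
        .groupBy("customer_id", "event_date")
        .agg(F.count("*").alias("event_count"))
    )

    # The same aggregation expressed in Spark-SQL.
    cleaned.createOrReplaceTempView("events_clean")
    daily_usage_sql = spark.sql("""
        SELECT customer_id, event_date, COUNT(*) AS event_count
        FROM events_clean
        GROUP BY customer_id, event_date
    """)

    daily_usage.write.mode("overwrite").parquet("s3://example-bucket/curated/daily_usage/")

The DataFrame and Spark-SQL forms compile to the same physical plan, so choosing between them is largely a readability decision.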

Lakshmi S's Current Company Details
Truist

Senior Big Data Engineer | Innovating Algorithms, ML Models, and CI/CD Integration for Advanced Systems. Actively looking for C2C Job Opportunities.
Charlotte, North Carolina, United States
Website: truist.com
Employees: 8,972
Lakshmi S Work Experience Details
  • Truist
    Senior Big Data Engineer
    Truist Nov 2022 - Present
    Atlanta, Georgia, United States
    • Skilled in developing Spark applications using PySpark and Spark-SQL for data extraction, transformation, and aggregation, enabling insights into customer usage patterns.
    • Proficient in utilizing Agile/SCRUM methodologies to drive software development cycles from requirement gathering to deployment.
    • Expertise in Java Spark, Splunk, and AWS, with a proven track record of building scalable and efficient data processing pipelines in cloud environments.
    • Demonstrated success in migrating on-premises ETL processes to the cloud and deploying and maintaining data pipelines on AWS using Glue, Athena, Kinesis, Lambda, and Step Functions (see the Glue sketch after this experience list).
    • Experienced in managing databases using AWS RDS and Amazon Aurora, and proficient in migrating databases to Amazon Aurora with minimal downtime.
    • Strong proficiency in AWS services such as S3, Lambda, and Glue for scalable data storage and processing.
    • Skilled in integrating data from multiple sources into data marts and utilizing shell scripts for automation.
    • Experienced in Microsoft Azure Databricks and Azure Data Factory, Snowflake data warehousing, and symmetric/asymmetric encryption.
    • Proven ability to develop API endpoints, handle various file formats, and optimize data processing workflows for efficiency.
    • Adept at using VSTS for source code management and committed to delivering high-quality solutions to meet business needs.
  • Gsk
    Big Data Engineer
    Gsk Jun 2021 - Nov 2022
    Collegeville, Pennsylvania, United States
    • Developed and designed data integration and migration solutions in Azure.
    • Kept data separated and secure across national boundaries through multiple data centers and regions.
    • Created Spark vectorized pandas user-defined functions for data manipulation and wrangling (see the pandas UDF sketch after this experience list).
    • Migrated on-prem ETLs from MS SQL Server to Azure using Azure Data Factory and Azure Databricks.
    • Set up Azure infrastructure (storage, integration runtimes, service principal IDs, app registrations) to support business users' analytical requirements in Azure at scale.
    • Created and maintained optimal data pipeline architecture in Microsoft Azure using Data Factory and Azure Databricks.
    • Created Data Factory pipelines that bulk-copy multiple tables at once from relational databases to Azure Data Lake Gen2.
    • Migrated data into the RV data pipeline using Databricks, Spark SQL, and Scala.
  • Clarivate Analytics
    Data Engineer
    Clarivate Analytics Nov 2018 - Jun 2021
    • Developed and maintained a regulatory data lake for federal reporting using big data technologies such as the Hadoop Distributed File System (HDFS), Apache Impala, Apache Hive, and the Cloudera Distribution.
    • Involved in importing data from various sources into HDFS using Sqoop, applying transformations using Hive and Spark, and loading data into Hive tables.
    • Primarily involved in the data migration process using AWS, with GitHub repositories and Jenkins integration.
    • Designed, developed, and maintained data integration programs in Hadoop and RDBMS environments, using traditional and non-traditional source systems as well as RDBMS and NoSQL data stores for data access and analysis.
    • Developed Python scripts using the HDFS API to generate curl commands for migrating data and preparing the various environments within a project.
  • Indriyn Data Analytics Pvt Ltd
    Hadoop Developer
    Indriyn Data Analytics Pvt Ltd Nov 2016 - Oct 2018
    • Developed ETL jobs to extract data from sources such as Oracle and Microsoft SQL Server, transform it using Hive Query Language (HQL), and load it into the Hadoop Distributed File System (HDFS).
    • Used Sqoop for importing data into and exporting data out of HDFS and Hive.
    • Primarily involved in the data migration process on AWS, integrating with GitHub repositories and Jenkins.
    • Handled data from different data sets, using Pig for data joins and preprocessing operations.
    • Primarily responsible for designing, implementing, testing, and maintaining database solutions on AWS.
  • Schindler Group
    Hadoop Developer
    Schindler Group Oct 2013 - Aug 2016
    Bengaluru, Karnataka, India
    • Implemented partitioning, dynamic partitions, and buckets in Hive for efficient data access (see the Hive sketch after this experience list).
    • Created and modified shell scripts for scheduling various data cleansing scripts and the ETL load process.
    • Imported data into HDFS using Sqoop from different RDBMS servers, and exported aggregated data back to the RDBMS servers using Sqoop for other ETL operations.
    • Involved in functional, integration, regression, smoke, and performance testing; tested Hadoop MapReduce jobs developed in Python, Pig, and Hive.
    • Designed and developed PySpark applications in Python to compare the performance of Spark with Hive.
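As referenced in the Truist entry above, pipelines built on AWS Glue are often orchestrated from Python with boto3. This is a minimal, hedged sketch of starting and checking a Glue job run; the job name, region, and run-date argument are illustrative assumptions, not details from this profile:

    import boto3

    glue = boto3.client("glue", region_name="us-east-1")

    # Start a run of a job assumed to already be defined in Glue.
    response = glue.start_job_run(
        JobName="daily-usage-etl",               # hypothetical job name
        Arguments={"--run_date": "2024-01-15"},  # Glue job parameters use the "--key" form
    )

    # Check the run state once (a real pipeline would poll with backoff,
    # or let Step Functions manage the wait).
    run_id = response["JobRunId"]
    run = glue.get_job_run(JobName="daily-usage-etl", RunId=run_id)
    print(run_id, run["JobRun"]["JobRunState"])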
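The GSK entry mentions Spark vectorized pandas user-defined functions. The sketch below shows the general shape of one in the Spark 3.x style (pyarrow must be installed on the cluster); the column name and cleaning rule are hypothetical:

    import pandas as pd
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import pandas_udf
    from pyspark.sql.types import StringType

    spark = SparkSession.builder.appName("pandas-udf-sketch").getOrCreate()

    # A vectorized (pandas) UDF receives a whole pandas Series per Arrow
    # batch, avoiding per-row Python overhead.
    @pandas_udf(StringType())
    def clean_text(s: pd.Series) -> pd.Series:
        # Hypothetical wrangling rule: trim whitespace and lowercase.
        return s.str.strip().str.lower()

    df = spark.createDataFrame([(" Alice ",), ("BOB",)], ["name"])
    df.withColumn("name_clean", clean_text("name")).show()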
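The Schindler entry cites Hive partitioning, dynamic partitions, and buckets. A minimal sketch of that pattern driven through Spark with Hive support follows; the table, columns, and the staged_usage source table are assumptions for illustration:

    from pyspark.sql import SparkSession

    # Hive support is required for partitioned managed tables.
    spark = (
        SparkSession.builder
        .appName("hive-partitioning-sketch")
        .enableHiveSupport()
        .getOrCreate()
    )

    # Allow dynamic partitioning, so partitions are derived from row values.
    spark.sql("SET hive.exec.dynamic.partition=true")
    spark.sql("SET hive.exec.dynamic.partition.mode=nonstrict")

    # Hypothetical partitioned table; bucketing would add a clause such as
    # CLUSTERED BY (customer_id) INTO 16 BUCKETS (bucketed Hive tables are
    # usually loaded through Hive itself rather than Spark).
    spark.sql("""
        CREATE TABLE IF NOT EXISTS usage_events (
            customer_id BIGINT,
            event_count BIGINT
        )
        PARTITIONED BY (event_date STRING)
        STORED AS ORC
    """)

    # Dynamic-partition insert: rows are routed to partitions by event_date.
    # staged_usage is an assumed staging table with matching columns.
    spark.sql("""
        INSERT OVERWRITE TABLE usage_events PARTITION (event_date)
        SELECT customer_id, event_count, event_date
        FROM staged_usage
    """)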

Lakshmi S Education Details
  • GITAM Deemed University, B.Tech

Frequently Asked Questions about Lakshmi S

What company does Lakshmi S work for?

Lakshmi S works for Truist

What is Lakshmi S's role at the current company?

Lakshmi S's current role is "Senior Big Data Engineer | Innovating Algorithms, ML Models, and CI/CD Integration for Advanced Systems. Actively looking for C2C Job Opportunities."

What schools did Lakshmi S attend?

Lakshmi S attended GITAM Deemed University.

Who are Lakshmi S's colleagues?

Lakshmi S's colleagues are John Wallace Robinson, Samantha Kear, Britain Lamm, Autumn Horrell, Thair Alabaidi, Tamara Puddy, and Caroline Palmer, MBA.
