Mani R


Senior Data Engineer @ Delta Air Lines
Mani R's Location
United States
About Mani R

Mani R is a Senior Data Engineer at Delta Air Lines.

Mani R's Current Company Details
Delta Air Lines
Senior Data Engineer
Mani R Work Experience Details
  • Delta Air Lines
    Senior Data Engineer
    Delta Air Lines Apr 2023 - Present
    Atlanta, Georgia, US
    As a Senior Data Engineer, I worked on a crucial project migrating desktop-based HR data processes to SQL Server, ensuring a seamless transition. By analyzing existing workflows and collaborating closely with stakeholders, I tailored solutions to optimize system performance and compatibility. With a focus on problem-solving, I addressed technical issues promptly, preventing disruptions to project timelines, and provided ongoing post-migration support to keep operations running smoothly and efficiently.
    • Orchestrated a project migrating HR data processes to newer SQL Server technology.
    • Analyzed existing data workflows to make sure the transition was seamless.
    • Collaborated with stakeholders and team members, keeping everyone aligned with project goals.
    • Modified systems to work smoothly with the new SQL Server setup.
    • Fixed technical issues without causing delays to the project.
    • Worked closely with various teams to optimize migration strategies.
    • Improved system performance by fine-tuning SQL Server functionality.
    • Provided ongoing support after the migration to ensure everything continued running smoothly.
    • Upgraded SQL Server versions to meet project requirements.
    • Implemented solutions to prevent future technical problems.
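The desktop-to-SQL-Server migration pattern described above can be sketched roughly as follows. This is an illustrative stand-in only: the table, columns, and sample data are hypothetical, and `sqlite3` substitutes for SQL Server so the example is self-contained (a real job would connect via `pyodbc` with T-SQL DDL).

```python
import csv
import io
import sqlite3

def migrate_hr_extract(csv_text, conn):
    """Load a desktop CSV extract into a relational table (hypothetical schema)."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS hr_records ("
        "emp_id TEXT PRIMARY KEY, name TEXT, dept TEXT)"
    )
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    # Parameterized inserts avoid quoting bugs and SQL injection.
    conn.executemany(
        "INSERT OR REPLACE INTO hr_records VALUES (:emp_id, :name, :dept)",
        rows,
    )
    conn.commit()
    return len(rows)

sample = "emp_id,name,dept\n101,Ada,Finance\n102,Lin,HR\n"
conn = sqlite3.connect(":memory:")
migrated = migrate_hr_extract(sample, conn)
```

Using `INSERT OR REPLACE` makes the load idempotent, so re-running the job after a partial failure does not duplicate rows.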
  • SAP Concur
    Senior Data Engineer
    SAP Concur Jan 2021 - Apr 2023
    Bellevue, WA, US
    • Worked on an Apache Spark data-processing project ingesting data from RDBMS and several streaming sources, and developed Spark applications in Python on AWS EMR.
    • Performed reporting analytics on data from the AWS stack by connecting it to BI tools (Tableau, Power BI).
    • Migrated an entire Oracle database to BigQuery and built data pipelines in Airflow on GCP for ETL jobs using different Airflow operators.
    • Designed and deployed multi-tier applications leveraging AWS services (EC2, Route 53, S3, RDS, DynamoDB) with a focus on high availability, fault tolerance, and auto-scaling via AWS CloudFormation.
    • Performed data transformations using Spark DataFrames, Spark SQL, Spark file formats, and Spark RDDs.
    • Transformed data from different file formats (text, CSV, JSON) using Python scripts in Spark.
    • Loaded data from various RDBMS sources (MySQL, Teradata) using Sqoop jobs.
    • Handled JSON datasets by writing custom Python functions to parse JSON data in Spark.
    • Developed a preprocessing job using Spark DataFrames to flatten JSON documents into flat files.
    • Utilized REST APIs with Python to ingest data into BigQuery; submitted PySpark jobs with gsutil and executed them on a Dataproc cluster.
    • Improved cluster performance by optimizing existing algorithms in Spark.
    • Performed wide and narrow transformations and actions (filter, lookup, join, count, etc.) on Spark DataFrames.
    • Worked with Parquet files and Impala using PySpark, and Spark Streaming with RDDs and DataFrames.
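The JSON-flattening preprocessing step mentioned above can be illustrated with a small pure-Python sketch. The actual job used Spark DataFrames on EMR; here a recursive function shows the same idea on a single document, and the field names and sample record are hypothetical.

```python
import json

def flatten(doc, prefix=""):
    """Flatten nested JSON objects into dot-separated column names."""
    flat = {}
    for key, value in doc.items():
        name = f"{prefix}{key}"
        if isinstance(value, dict):
            flat.update(flatten(value, f"{name}."))  # recurse into nested objects
        else:
            flat[name] = value
    return flat

record = json.loads('{"id": 1, "user": {"name": "Ada", "geo": {"city": "Atlanta"}}}')
row = flatten(record)
# row == {"id": 1, "user.name": "Ada", "user.geo.city": "Atlanta"}
```

In Spark itself the equivalent is typically done by selecting nested columns (e.g. `col("user.geo.city")`) into a flat schema before writing out delimited files.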
  • Samsung Research America (SRA)
    Senior Data Engineer
    Samsung Research America (SRA) Oct 2019 - Dec 2020
    Mountain View, California, US
    • Developed a custom-built ETL solution, batch processing, and a real-time data-ingestion pipeline to move data in and out of the Hadoop cluster using PySpark and shell scripting.
    • Integrated on-premises data (MySQL, HBase) with the cloud (Blob Storage, Azure SQL DB) and applied transformations to load it back into Azure Synapse using Azure Data Factory.
    • Built and published Docker container images using Azure Container Registry and deployed them to Azure Kubernetes Service (AKS).
    • Imported metadata into Hive and migrated existing tables and applications to work on Hive and Azure.
    • Created complex data transformations and manipulations using ADF and Scala.
    • Configured Azure Data Factory (ADF) to ingest data from relational and non-relational databases to meet business functional requirements.
    • Designed cloud architecture and implementation plans for hosting complex app workloads on Microsoft Azure.
    • Performed transformation-layer operations using Apache Drill, Spark RDDs, DataFrame APIs, and Spark SQL, applying the various aggregations provided by the Spark framework.
    • Provided real-time insights and reports by mining data with Spark Scala functions; optimized existing Scala code and improved cluster performance.
    • Processed huge datasets by leveraging Spark Context, Spark SQL, and Spark Streaming.
    • Enhanced Spark cluster reliability through continuous monitoring with Log Analytics and the Ambari web UI.
    • Improved query performance by transitioning log storage from Cassandra to Azure SQL Data Warehouse.
  • Moody's Analytics
    Data Engineer
    Moody's Analytics Nov 2018 - Sep 2019
    New York, NY, US
    • Implemented Spark jobs in Scala, utilizing DataFrames and the Spark SQL API for faster data processing.
    • Ingested data from RDBMS sources, performed data transformations, and exported the transformed data to Cassandra per business requirements.
    • Drove insights from the data, performed impact analysis, and suggested and implemented solutions to maintain data quality, ensuring timely generation and retrieval of quality client deliverables.
    • Onboarded new clients into existing studies and assisted in launching new studies for clients.
    • Revamped the existing transaction data model to meet the growing needs of clients and the organization, helping Argus establish new revenue-generating engagements.
    • Worked extensively on performance tuning of complex scripts and redesigned tables to avoid system bottlenecks.
    • Evaluated correlations among statistical data, identified trends, and summarized findings across issuers.
    • Gathered business requirements, developed strategies for data cleansing and data migration, wrote functional and technical specifications, created source-to-target mappings, and designed data-profiling, data-validation, and ETL jobs in Informatica.
    • Used Git for version control with data engineering and data science colleagues.
    • Created Tableau dashboards and stories as needed using Tableau Desktop and Tableau Server, with stacked bars, bar graphs, scatter plots, geographical maps, Gantt charts, etc., via the Show Me functionality.
    • Performed statistical analysis using SQL, Python, R, and Excel.
    • Worked extensively with Excel VBA macros and Microsoft Access forms.
    • Used Python and SAS to extract, transform, and load source data from transaction systems, generating reports, insights, and key conclusions.
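The data-profiling and data-validation idea described in the entry above (built in Informatica in practice) can be sketched as a small rule-based check over rows. The column names, rules, and sample rows here are hypothetical.

```python
def validate(rows, required, numeric):
    """Return (row_index, column, problem) tuples for rule violations."""
    errors = []
    for i, row in enumerate(rows):
        # Required columns must be present and non-empty.
        for col in required:
            if not row.get(col):
                errors.append((i, col, "missing"))
        # Numeric columns must parse as numbers.
        for col in numeric:
            try:
                float(row[col])
            except (KeyError, TypeError, ValueError):
                errors.append((i, col, "not numeric"))
    return errors

rows = [
    {"issuer": "ACME", "amount": "120.5"},
    {"issuer": "", "amount": "12x"},
]
issues = validate(rows, required=["issuer"], numeric=["amount"])
```

A real validation job would additionally log the failing rows to a reject table so that data cleansing can be audited rather than silently applied.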
  • Oracle Financial Services Software Limited
    Data Engineer
    Oracle Financial Services Software Limited Jan 2017 - Sep 2018
    Mumbai, Maharashtra, IN
    • Designed and developed ETL processes using the Informatica ETL tool for dimension and fact file creation.
    • Performed wide and narrow transformations and actions (filter, lookup, join, count, etc.) on Spark DataFrames.
    • Worked with Parquet files and Impala using PySpark, and Spark Streaming with RDDs and DataFrames.
    • Uploaded master and transactional data from flat files and prepared test cases and subsystem testing.
    • Aggregated log data from various servers and made it available to downstream analytics systems using Apache Kafka.
    • Improved Kafka performance and implemented security.
    • Developed batch- and stream-processing apps using Spark APIs for functional pipeline requirements.
    • Worked with Spark to create structured data from a pool of unstructured data.
    • Implemented intermediate functionality such as event and record counts from Flume sinks or Kafka topics by writing Spark programs in Java and Python.
    • Documented requirements, including the existing code to be implemented with Spark, Hive, and HDFS.
    • Transferred streaming data from different data sources into HDFS and NoSQL databases.
    • Created ETL mappings with Talend Integration Suite to pull data from sources, apply transformations, and load data into target databases.
    • Transformed data from different file formats (text, CSV, JSON) using Python scripts in Spark.
    • Loaded data from various RDBMS sources (MySQL, Teradata) using Sqoop jobs.
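The per-topic record-count functionality mentioned above can be illustrated with a minimal stand-in. The real implementation consumed Flume sinks and Kafka topics with Spark; here an in-memory list of `(topic, payload)` tuples simulates the stream so the example is self-contained, and the topic names are hypothetical.

```python
from collections import Counter

def count_by_topic(stream):
    """Tally how many records arrived on each topic."""
    counts = Counter()
    for topic, _payload in stream:
        counts[topic] += 1
    return dict(counts)

stream = [("app-logs", "..."), ("app-logs", "..."), ("audit", "...")]
totals = count_by_topic(stream)
# totals == {"app-logs": 2, "audit": 1}
```

In Spark Streaming the same aggregation would be expressed as a `map` to `(topic, 1)` pairs followed by `reduceByKey`, computed per micro-batch.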
  • Iblesoft Solutions
    Java Developer
    Iblesoft Solutions Apr 2014 - Dec 2016
    • Developed test-driven web applications using Java/J2EE, the Struts 2.0 framework, Spring MVC, the Hibernate framework, JavaScript, and a SQL Server database, with deployments on IBM WebSphere.
    • Designed and developed a web portal using the Struts framework and J2EE; developed a newsletter as part of process-improvement tasks, using HTML and CSS to report weekly activities.
    • Developed the front-end user interface using HTML, CSS, JSP, Struts, Angular, and NodeJS, with session validation via Spring AOP.
    • Extensively used Java multithreading to implement batch jobs with JDK 1.5 features and deployed them on the JBoss server.
    • Ensured high availability and load balancing by configuring and implementing Oracle clustering on WebLogic Server 10.3.
    • Improved productivity by developing an automated system health-check tool using UNIX shell scripts.

Frequently Asked Questions about Mani R

What company does Mani R work for?

Mani R works for Delta Air Lines.

What is Mani R's role at the current company?

Mani R's current role is Senior Data Engineer.
