Mani R Email and Phone Number
Mani R is a Senior Data Engineer at Travelport (Englewood) | Big Data | Python | Azure | PySpark | Spark SQL | Azure Databricks | Hadoop | Snowflake | ETL | SQL | Airflow | Agile | Actively looking for new opportunities on C2C/C2H.
Travelport
Website: travelport.com
Employees: 3,779
Senior Azure Data Engineer | Travelport | Aug 2022 - Present | Englewood, Colorado, United States
• Developed ETL pipelines on GCP using Apache Beam and Dataflow to process large-scale data in real time, resulting in a 20% improvement in data processing time.
• Built and deployed data pipelines using Cloud Composer and Cloud Functions, enabling seamless integration with other GCP services such as BigQuery, Pub/Sub, and Cloud Storage.
• Implemented monitoring and alerting mechanisms using Stackdriver, enabling proactive issue identification and resolution in GCP data pipelines.
• Designed and executed end-to-end testing strategies for GCP data pipelines, ensuring the accuracy and completeness of data from ingestion to analysis.
• Utilized DevOps practices and tools such as Jenkins, Terraform, and Ansible to automate GCP infrastructure deployment and configuration, resulting in a 50% reduction in deployment time.
• Worked with Python, SQL, and Bash scripts to develop custom data transformations and data quality rules, resulting in a 25% reduction in data processing errors.
• Developed and maintained CI/CD pipelines on GCP using Cloud Build and Cloud Run, enabling seamless code deployment and testing in a controlled environment.
• Implemented data versioning and lineage tracking using tools such as Data Catalog and Data Studio, enabling auditability and traceability of healthcare data in GCP.
• Conducted capacity planning and scaling of GCP data pipelines using Kubernetes and Cloud Autoscaling, ensuring optimal performance and cost efficiency.
• Developed multi-cloud strategies, making better use of GCP (for its PaaS offerings) and Azure (for its SaaS offerings).
• Designed and developed Spark jobs in Scala to implement end-to-end data pipelines for batch processing.
• Developed a data pipeline using Flume, Kafka, and Spark Streaming to ingest data from the weblog server and apply transformations.
• Developed data validation scripts in Hive and Spark and performed validation in Jupyter Notebook by spinning up a query cluster on AWS EMR.
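As an illustration of the kind of Beam/Dataflow streaming pipeline described above (a minimal sketch, not Mani R's actual code), the following Python snippet reads Pub/Sub events and appends them to BigQuery; the project, topic, table, and field names are hypothetical placeholders:

    import json
    import apache_beam as beam
    from apache_beam.options.pipeline_options import PipelineOptions

    def parse_event(raw_bytes):
        # Decode one Pub/Sub message into a BigQuery-ready dict.
        record = json.loads(raw_bytes.decode("utf-8"))
        return {"user_id": record.get("user_id"), "ts": record.get("ts")}

    options = PipelineOptions(streaming=True)  # pass --runner=DataflowRunner to run on Dataflow
    with beam.Pipeline(options=options) as p:
        (p
         | "ReadEvents" >> beam.io.ReadFromPubSub(topic="projects/my-project/topics/weblogs")
         | "ParseJson" >> beam.Map(parse_event)
         | "WriteToBQ" >> beam.io.WriteToBigQuery(
               "my-project:analytics.weblog_events",  # table assumed to exist with a matching schema
               write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND))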
AWS Data Engineer | Molina Healthcare | Apr 2019 - Jul 2022 | Bothell, Washington, United States
• Designed and deployed AWS solutions using EC2, S3, EBS, Elastic Load Balancer (ELB), and Auto Scaling groups.
• Set up and built AWS infrastructure for various resources (VPC, EC2, S3, IAM, EBS, Security Groups, Auto Scaling, and RDS) using CloudFormation JSON templates.
• Designed AWS CloudFormation templates to create custom-sized VPCs, subnets, and NAT to ensure successful deployment of web applications and database templates.
• Developed stored procedures in MS SQL to fetch data from different servers using FTP and processed these files to update the tables.
• Performed data analysis and profiling of source data to better understand the sources.
• Downloaded BigQuery data into pandas or Spark data frames for advanced ETL capabilities.
• Carried out data transformation and cleansing using SQL queries, Python, and PySpark.
• Wrote Hive SQL scripts to create complex tables with high-performance features such as partitioning, clustering, and skewing.
• Created an ETL pipeline using Spark and Hive to ingest data from multiple sources.
• Was responsible for ETL and data validation using SQL Server Integration Services (SSIS).
• Wrote Python scripts to automate the identification of trends, outliers, and anomalies in data, and to load data from web APIs into a staging DB.
• Reverse-engineered existing data models to incorporate new changes using Erwin.
• Developed artifacts consumed by the data engineering team, such as source-to-target mappings, data quality rules, data transformation rules, joins, etc.
• Performed data visualization for different modules using Tableau and the ONE Click method.
• Developed, deployed, and managed event-driven and scheduled AWS Lambda functions triggered in response to events on various AWS sources, including logging (see the sketch after this list).
• Built dashboards in Tableau with ODBC connections to different sources such as BigQuery and the Presto SQL engine.
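A minimal sketch of an event-driven Lambda handler of the sort listed above, assuming an S3 object-created trigger; the bucket and key fields follow the standard S3 event shape, and the processing step is a placeholder:

    import json
    import logging

    logger = logging.getLogger()
    logger.setLevel(logging.INFO)

    def handler(event, context):
        # Each record describes one object-created event from the S3 trigger.
        for record in event.get("Records", []):
            bucket = record["s3"]["bucket"]["name"]
            key = record["s3"]["object"]["key"]
            logger.info("New object landed: s3://%s/%s", bucket, key)
            # Placeholder: validate or stage the file here.
        return {"statusCode": 200, "body": json.dumps("processed")}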
Azure Data Engineer | Macy's | Feb 2017 - Mar 2019 | New York City Metropolitan Area
• Analyzed, designed, and built modern data solutions using Azure PaaS services to support visualization of data.
• Extracted, transformed, and loaded data from source systems to Azure data storage services using Azure Data Factory, T-SQL, Spark SQL, and U-SQL (Azure Data Lake Analytics).
• Ingested data into one or more Azure services (Azure Data Lake, Azure Storage, Azure SQL, Azure DW) and processed the data in Azure Databricks.
• Implemented proofs of concept for SOAP and REST APIs and utilized REST APIs to retrieve analytics data from different data feeds.
• Created pipelines in ADF using Linked Services/Datasets/Pipelines to extract, transform, and load data between sources such as Azure SQL, Blob storage, and Azure SQL Data Warehouse, including write-back in the reverse direction.
• Hands-on experience developing SQL scripts for automation purposes.
• Developed Spark applications using PySpark and Spark SQL for data extraction, transformation, and aggregation from multiple file formats, analyzing and transforming the data to uncover insights into customer usage patterns (see the sketch after this list).
• Responsible for estimating cluster size and for monitoring and troubleshooting the Databricks Spark cluster.
• Experienced in performance tuning of Spark applications: setting the right batch interval, the correct level of parallelism, and memory settings.
• Developed JSON scripts for deploying pipelines in Azure Data Factory (ADF) that process data using the SQL activity.
• Created builds and releases for multiple projects (modules) in a production environment using Visual Studio Team Services (VSTS).
• Responded to local area network (LAN) and wide area network (WAN) user requests for system upgrades and changes.
• Monitored communications performance using visual and diagnostic equipment, status indicator checks, etc. to locate problems.
• Analyzed and cleansed raw data using HiveQL.
• Performed data transformations using MapReduce and Hive for different file formats.
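To make the PySpark extraction-and-aggregation bullet concrete, here is a minimal sketch; the data lake paths and column names are hypothetical, not Macy's actual data model:

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("usage-patterns").getOrCreate()

    # Read raw Parquet from the data lake mount, deduplicate, and derive a date column.
    orders = spark.read.parquet("/mnt/datalake/raw/orders/")
    clean = (orders
             .dropDuplicates(["order_id"])
             .withColumn("order_date", F.to_date("order_ts")))

    # Aggregate to one row per day for downstream usage-pattern analysis.
    daily = (clean.groupBy("order_date")
                  .agg(F.countDistinct("customer_id").alias("active_customers")))
    daily.write.mode("overwrite").parquet("/mnt/datalake/curated/daily_usage/")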
ETL/SQL Developer | Cybage Software | Jun 2015 - Nov 2016 | Hyderabad, Telangana, India
• Created mappings, sessions, and workflows in Informatica PowerCenter to load data for Ultimatix projects from source to target databases.
• Created Informatica mappings for SCD Types 1, 2, and 3; migrated processes and mappings from development to QA and from QA to production, and unit-tested the procedures.
• Participated in the seamless migration of mappings, sessions, workflows, and repositories from Informatica 9.0 to Informatica 9.6.
• Created Informatica mappings leveraging Aggregator transformations, SQL overrides in Lookups, Source Qualifiers, and Routers to govern data flow into different targets.
• Developed shell scripts to optimize the ETL flow of Informatica workflows.
• Created sessions, gathered data from multiple sources, processed it as needed, and loaded it into the data warehouse.
• Built robust mappings in Informatica PowerCenter Designer using a variety of transformations, including Filter, Expression, Sequence Generator, Update Strategy, Joiner, Router, and Aggregator.
• Created PL/SQL stored procedures and used push-down optimization tuning to reduce execution times by 30%.
• Created materialized views, scheduled jobs, ETL workflows, and reporting that moved data between eight ERP systems.
• Provided financial reports in collaboration with the corporate finance team to track profitability and advance the organization's commercial development.
• Conducted unit testing at various ETL stages and participated actively in team code reviews.
• Working knowledge of SQL Server 2005, 2008 R2, and 2012 tools, including Management Studio, Query Analyzer, SQL Profiler, SQL Agent, SSIS, and SSRS.
• Wrote numerous functions and CTEs to remove duplicate records from OLTP tables (see the sketch after this list).
• Improved the execution time of DML statements by normalizing existing OLTP systems.
• Created a dimensional model for an OLAP database using Erwin for analysis and reporting needs.
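The duplicate-removal CTE pattern mentioned above can be sketched as follows, driven from Python via pyodbc; the connection string, table, and key columns are illustrative assumptions only:

    import pyodbc

    # Hypothetical connection; any SQL Server 2008 R2+ instance supports ROW_NUMBER().
    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=localhost;DATABASE=Sales;Trusted_Connection=yes;")

    # Rank rows per business key; rows ranked past 1 are duplicates to delete.
    dedup_sql = """
    WITH ranked AS (
        SELECT *,
               ROW_NUMBER() OVER (PARTITION BY customer_id
                                  ORDER BY updated_at DESC) AS rn
        FROM dbo.Customers
    )
    DELETE FROM ranked WHERE rn > 1;
    """

    with conn:  # commits on successful exit
        conn.execute(dedup_sql)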
Data Analyst | Avon Technologies (I) Private Ltd. | Sep 2013 - May 2015 | Hyderabad, Telangana, India
• Developed and examined business needs to create technically sound, practicable data solutions.
• Analyzed and classified data items for data profiling and source-to-target mapping, and created working documents to back up results and assign responsibilities.
• Used complex SQL to analyze and profile data from a variety of sources, including Teradata and Oracle.
• Created and executed SQL scripts to develop views, stored procedures, and indexes.
• Participated in information-gathering meetings and JAD sessions to deliver a business requirements document and a draft logical data model.
• Specified how data would be sourced and loaded into DWH tables by defining the ETL mapping specification and designing the ETL procedure.
• Created mappings using Source Qualifier, Expression, Filter, Lookup, Update Strategy, Sorter, Joiner, Normalizer, and Router transformations.
• Carried out data administration tasks and completed ad-hoc requests per user requirements using data management software and tools such as Perl, Toad, MS Access, Excel, and SQL.
• Recognized and examined data sources from flat files, Oracle, and SQL Server.
• Used Erwin to perform forward and reverse engineering and applied DDLs to databases to restructure the current data model.
• Created ETL specification documents to load data into the target using different transformations in accordance with business needs.
• Used PL/SQL to write, test, and implement triggers, stored procedures, and functions at the database level.
• Designed and reorganized logical and physical data models extensively using Erwin.
• Created a number of reports from SQL query output and pivot tables built in Excel (see the sketch after this list).
• Wrote Teradata SQL scripts using RANK functions to speed up queries retrieving data from large tables.
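A minimal sketch of the Excel pivot-table reporting workflow described above, using pandas; the CSV export and its columns (region, quarter, revenue) are hypothetical stand-ins for the SQL query output:

    import pandas as pd

    # Stand-in for rows pulled from Teradata/Oracle and exported to CSV.
    df = pd.read_csv("query_output.csv")

    # One row per region, one column per quarter, summed revenue in the cells.
    report = pd.pivot_table(df,
                            index="region",
                            columns="quarter",
                            values="revenue",
                            aggfunc="sum",
                            fill_value=0)
    report.to_excel("regional_revenue_report.xlsx")  # requires openpyxl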
Mani R Education Details
Bachelor's Degree, JNTUH College of Engineering Hyderabad
Frequently Asked Questions about Mani R
What company does Mani R work for?
Mani R works for Travelport.
What is Mani R's role at the current company?
Mani R's current role is Senior Data Engineer at Travelport in Englewood, Colorado.
What schools did Mani R attend?
Mani R attended JNTUH College of Engineering Hyderabad.
Who are Mani R's colleagues?
Mani R's colleagues are Crina Ciuca, Alanso Savane, Brian Maly, Mary Malafi, Cobbina Alex, Angela Porter (CSM, RTE), and Lanre Tiamiyu.