• 14+ years of experience in the information technology industry, with a wide range of project and account management experience; seasoned in analysis, design, development, implementation, and testing of stand-alone and client-server enterprise application software across multiple domains.
• 4+ years of experience in Big Data, implementing, developing, and maintaining applications using Microsoft Azure stacks: Spark, Python, T-SQL, PL/SQL, Azure DW, Azure Data Factory (ADF), Azure Synapse Analytics, EDW (Enterprise Data Warehouse), Azure Data Lake (Gen-2), Fabric, Cosmos DB, Azure Stream Analytics, and Azure DevOps.
• Good understanding of Azure AI/ML, ML Studio, and MLOps; strong conceptual knowledge and working experience with Azure ML Studio, Fabric, ML Services, and MLOps.
• Good experience with BI tools (Power BI, Tableau, and Toad Data Analyst).
• Experience with data analysis using Databricks, Python, Spark, T-SQL, PL/SQL, and Azure DW.
• Hands-on experience in SAP projects, including 3 full life-cycle implementations using the ASAP methodology: Business Blueprint, Realization, Finalization, and Go-live & Support.
Sr. SAP SD Consultant, AmeriGas, Houston, TX, US
Azure Data Engineer, AMS IT Solutions / CHC, Sep 2023 - Present
• Extract, transform, and load data from source systems to Azure data storage services using a combination of Azure Data Factory and Databricks; ingest data into one or more Azure services (Azure Data Lake, Azure Storage, Azure SQL, Azure DW) and process it in Azure Databricks.
• Create conceptual, logical, and physical data models encompassing all entities using Erwin Data Modeler.
• Created star schema models in Azure Synapse DW using Erwin to meet customer needs for Power BI reporting.
• Prepare source-to-target data mappings; developed mapping spreadsheets for the ETL team with source and target data mappings using physical naming.
• Created, documented, and maintained logical and physical database models in compliance with enterprise standards; managed corporate metadata definitions for enterprise data stores within a metadata repository.
• Design and implement migration strategies for traditional systems on Azure.
• Created pipelines in ADF using linked services, datasets, and pipelines to extract, transform, and load data between sources such as Azure SQL, Blob Storage, Azure SQL Data Warehouse, and a write-back tool, in both directions.
• Provide Azure technical expertise, including strategic design and architectural mentorship, assessments, POCs, etc.
• Design and build modern data pipelines that are high-performing, efficient, organized, and reliable for the given use case, using Azure Data Factory, Data Flows, and Databricks.
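The source-to-target mapping handoff described above can be sketched as a small script. The column names and the physical-naming rule here are illustrative assumptions, not details from the actual project:

```python
# Hypothetical sketch of generating a source-to-target mapping with a
# physical naming convention, as an ETL-team handoff artifact.
# The "DIM_" prefix and column names are illustrative assumptions.

def to_physical_name(logical_name: str, prefix: str = "DIM") -> str:
    """Convert a logical column name to a physical name,
    e.g. 'Customer Name' -> 'DIM_CUSTOMER_NAME'."""
    return f"{prefix}_{logical_name.strip().upper().replace(' ', '_')}"

def build_mapping(source_columns: list[str]) -> list[dict]:
    """Build one source-to-target mapping row per source column."""
    return [
        {"source": col, "target": to_physical_name(col)}
        for col in source_columns
    ]

mapping = build_mapping(["Customer Name", "Order Date"])
for row in mapping:
    print(f"{row['source']} -> {row['target']}")
```

In practice this kind of mapping is exported to a spreadsheet for the ETL team; the point of the sketch is only the deterministic logical-to-physical naming rule.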
Azure Data Engineer, AmeriGas, Feb 2023 - Sep 2023, King of Prussia, Pennsylvania, US
• Extract, transform, and load data from source systems to Azure data storage services using a combination of Azure Data Factory, T-SQL, Spark SQL, and U-SQL (Azure Data Lake Analytics); ingest data into one or more Azure services (Azure Data Lake, Azure Storage, Azure SQL, Azure DW) and process it in Azure Databricks.
• Implemented an ETL pipeline using Scope for business transformations, with data storage in Cosmos.
• Design and implement migration strategies for traditional systems on Azure.
• Used REST APIs to retrieve analytics data from different data feeds.
• Created pipelines in ADF using linked services, datasets, and pipelines to extract, transform, and load data between sources such as Azure SQL, Blob Storage, Azure SQL Data Warehouse, and a write-back tool, in both directions.
• Developed JSON scripts for deploying ADF pipelines that process data using the SQL activity.
• Provide Azure technical expertise, including strategic design and architectural mentorship, assessments, POCs, etc.
• Design and build modern data pipelines that are high-performing, efficient, organized, and reliable for the given use case, using Azure Data Factory, Data Flows, and Databricks.
• Work with different data storage options, including Azure Blob and ADLS Gen-1/Gen-2, and provide data governance / data security layers for data sets.
• Strong development background in creating pipelines, data flows, and complex data transformations and manipulations using ADF and Databricks.
• Build Azure data pipelines for end-to-end solutions, performing data ingestion into the Data Lake and building curated models to meet customer needs.
• Design and implement data models per business needs using Databricks with PySpark and Synapse Analytics.
• Implemented high availability with Azure Resource Manager deployment models.
• Implemented triggers to schedule pipelines for daily, weekday, monthly, and event-based runs.
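The daily/weekday/monthly trigger scheduling mentioned above can be illustrated in plain Python. ADF expresses this declaratively in a trigger's recurrence schedule, so this is only a sketch of the rules, not ADF code, and the schedule names are made up for the example:

```python
from datetime import date

# Illustrative re-implementation of simple trigger recurrence rules
# (daily, weekdays-only, monthly-on-day-1). ADF defines these in the
# trigger's recurrence JSON; this sketch only shows the logic.

def trigger_fires(schedule: str, d: date) -> bool:
    if schedule == "daily":
        return True
    if schedule == "weekdays":
        return d.weekday() < 5          # Mon=0 .. Fri=4
    if schedule == "monthly":
        return d.day == 1               # first of the month
    raise ValueError(f"unknown schedule: {schedule}")

print(trigger_fires("weekdays", date(2023, 3, 4)))  # a Saturday: False
```

Event-based runs have no equivalent here, since they react to storage events rather than a calendar.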
Azure Data Engineer, Verizon, Dec 2019 - Jan 2023, Basking Ridge, NJ, US
• Designed and architected scalable data processing and analytics solutions, covering technical feasibility, integration, and development for storage, processing, and consumption of Azure data: analytics and business intelligence (Reporting Services, Power BI, Tableau), NoSQL, Data Factory, Event Hubs, Data Flows, Databricks, Azure Synapse Analytics, Notification Hubs, Logic Apps, and triggers.
• Extract, transform, and load data from source systems to Azure data storage services using a combination of Azure Data Factory, T-SQL, Spark SQL, and U-SQL (Azure Data Lake Analytics); ingest data into one or more Azure services (Azure Data Lake, Azure Storage, Azure SQL, Azure DW) and process it in Azure Databricks.
• Created pipelines in ADF using linked services, datasets, and pipelines to extract, transform, and load data between sources such as Azure SQL, Blob Storage, Azure SQL Data Warehouse, and a write-back tool, in both directions.
• Developed JSON scripts for deploying ADF pipelines that process data using the SQL activity.
• Design and build modern data pipelines that are high-performing, efficient, organized, and reliable for the given use case, using Azure Data Factory, Data Flows, and Databricks.
• Work with different data storage options, including Azure Blob and ADLS Gen-1/Gen-2, and provide data governance / data security layers for data sets.
• Strong development background in creating pipelines, data flows, and complex data transformations and manipulations using ADF and Databricks.
• Build Azure data pipelines for end-to-end solutions, performing data ingestion into the Data Lake and building curated models to meet customer needs.
• Experience bulk-importing CSV, XML, and flat-file data using Azure Data Factory, fully automated to pick up the latest files.
Big Data Engineer, Cigna Healthcare, Jun 2016 - Nov 2019, Bloomfield, CT, US
• Created logical and physical data models, defining data structures and setting up star-schema data schemas using Erwin.
• Ensure the accuracy and integrity of data through data quality checks, including identifying and resolving data inconsistencies, duplicates, and other issues.
• Document data models, database designs, and data dictionaries for reference and training purposes, creating diagrams, flowcharts, and other visual aids to illustrate data structures.
• Used various Spark transformations and actions to cleanse input data.
• Developed shell scripts to generate Hive CREATE statements from the data and load the data into tables.
• Optimized HiveQL scripts using the Spark execution engine with Python and Scala.
• Integrated Maven builds and designed workflows to automate the build and deploy process.
• Developed a linear regression model, built with Spark's Scala API, to predict a continuous measurement and improve observation of wind turbine data.
• Created Hive tables as internal or external per requirement, defined with appropriate static or dynamic partitions and bucketing for efficiency.
• Load and transform large sets of structured and semi-structured data using Hive.
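The duplicate-detection part of the data quality checks above can be sketched in plain Python (the real work ran on Spark). The record shape and the `member_id` key are illustrative assumptions:

```python
from collections import Counter

# Illustrative data-quality check: find records whose business key
# appears more than once. In practice this ran on Spark; the key
# column name 'member_id' and the rows are hypothetical examples.

def find_duplicate_keys(records: list[dict], key: str) -> list:
    """Return the sorted business-key values that occur more than once."""
    counts = Counter(r[key] for r in records)
    return sorted(k for k, n in counts.items() if n > 1)

rows = [
    {"member_id": 1, "plan": "A"},
    {"member_id": 2, "plan": "B"},
    {"member_id": 1, "plan": "A"},  # duplicate key
]
print(find_duplicate_keys(rows, "member_id"))  # [1]
```

On Spark the same check is a `groupBy(key).count()` filtered to counts above one; the sketch keeps the logic visible without a cluster.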
SAS/Python Developer, Cigna Healthcare, Aug 2013 - Jun 2016, Bloomfield, CT, US
• Proficient in using SAS PROC SQL to manipulate data, including join statements and subqueries.
• Used SAS procedures such as PROC REPORT, PRINT, SQL, SORT, TRANSPOSE, SUMMARY, CONTENTS, FREQ, MEANS, and TABULATE.
• Used SAS ODS to format HTML, Excel, and Access DB reports.
• Wrote SAS macros for data cleaning.
• Organized data into the required type and format for further manipulation.
• Maintain and enhance the source code and process flow for the department's SAS production environment; recommend improvements to increase automation/efficiency and reduce opportunity for error.
• Create SAS datasets for in-house post-hoc statistical analysis, and used SAS DATA steps for large data processing across projects.
• Worked under the supervision of senior SAS developers to manage large volumes of structured and unstructured data, extracting and cleaning data to make it amenable to analysis.
• Analyzed substantial data sets with PROC SQL queries and DATA steps.
• Transformed data from mainframe tables to SAS datasets using PROC SQL.
• Defined accumulate tables and loaded data into tables for near-real-time data reports.
• Created shell scripts to simplify execution of other scripts.
• Created files and tuned SQL queries in SAS.
• Used Python modules such as Pandas, NumPy, OS, cx_Oracle, IBM_DB, and SciPy for data analysis and reporting.
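A PROC SQL-style grouped summary like those described above (SELECT ... GROUP BY with SUM) can be mirrored in plain Python. This stdlib sketch stands in for both the SAS step and a pandas equivalent; the claim rows are made-up example data:

```python
from collections import defaultdict

# Illustrative stand-in for a SAS PROC SQL grouped summary, e.g.
#   SELECT plan, SUM(amount) FROM claims GROUP BY plan;
# written with the Python stdlib. The claim rows are hypothetical.

def sum_by_group(rows: list[dict], group: str, value: str) -> dict:
    """Total `value` per distinct `group` value."""
    totals: dict = defaultdict(float)
    for row in rows:
        totals[row[group]] += row[value]
    return dict(totals)

claims = [
    {"plan": "HMO", "amount": 120.0},
    {"plan": "PPO", "amount": 250.0},
    {"plan": "HMO", "amount": 80.0},
]
print(sum_by_group(claims, "plan", "amount"))  # {'HMO': 200.0, 'PPO': 250.0}
```

With pandas the same summary is a one-liner (`df.groupby("plan")["amount"].sum()`); the stdlib version keeps the example dependency-free.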
SAS Developer, Cigna Healthcare, Jun 2009 - Aug 2013, Bloomfield, CT, US
• Supported the production billing cycle, ensuring its smooth daily running and completion by rectifying faults.
• Ensured delivery dates were met for client service requests (analysis, one-shots, and report requests) to keep the business running smoothly.
• Analyzed service requests from the business and mapped them onto the application to fix production issues.
• Analyzed and enhanced application programs to meet new business requests.
• Coordinated delivery with the onsite and offshore teams.
• Coding, unit testing, and integration testing of application programs.
• Daily creation of algorithms and reformats to support successful implementation and ongoing maintenance of legacy-system eligibility.
• Daily creation of reformats to support successful implementation and ongoing maintenance of end-state-system eligibility.
• Created and deleted legacy and end-state transmissions, bypasses, extracts, file copies, and one-shots on an ad-hoc basis.
• Analyzed client specifications and developed algorithms for reformatting data received in various formats from Cigna's clients.
Mukesh Reddy Education Details
Osmania University, Bachelor of Arts
Frequently Asked Questions about Mukesh Reddy
What company does Mukesh Reddy work for?
Mukesh Reddy works for Amerigas
What is Mukesh Reddy's role at the current company?
Mukesh Reddy's current role is Sr. SAP SD Consultant.
What is Mukesh Reddy's email address?
Mukesh Reddy's email address is mu****@****zon.com
What schools did Mukesh Reddy attend?
Mukesh Reddy attended Osmania University.
What skills is Mukesh Reddy known for?
Mukesh Reddy has skills like Microsoft Office, SAP SD, SAP ABAP Reports and Debugging, SAP LE, Adobe Acrobat, SAP Materials Management, and HTML.
Who are Mukesh Reddy's colleagues?
Mukesh Reddy's colleagues are Lisa Ekelund, Janine June, Nita Drabeck, Mark Reynolds, Sebastian Swider, Laird Shankle, Kathy Mcginty.