R K
Data engineer with 9+ years of professional IT experience across the big data ecosystem, covering ingestion, storage, querying, processing, and analysis of large data sets, with a focus on Azure Databricks, cloud technologies, data warehousing, and ETL development. Hands-on experience with Azure Cloud and its components, including Azure Data Factory, Azure Databricks, Logic Apps, Azure Function Apps, Snowflake, and Azure DevOps services. Practical experience on the Azure stack moving data from Data Lake to Azure Blob Storage. Strong background in data load and integration using ADF. Experienced in building ETL data pipelines in Azure Databricks leveraging PySpark and Spark SQL.
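The extract-transform-load pattern described above can be illustrated with a minimal sketch. This is a hedged, standard-library stand-in, not the actual pipeline: the field names and data are hypothetical, and a real implementation would run in Azure Data Factory or Databricks with PySpark rather than the `csv` module.

```python
import csv
import io

# Hypothetical source extract; in practice this would be read from
# Blob Storage, Data Lake, or an on-prem database via ADF.
RAW_CSV = """id,amount,region
1,100.50,east
2,-7.25,west
3,42.00,east
"""

def extract(text):
    """Extract: parse raw CSV rows into dictionaries."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Transform: cast types and drop rows failing a simple quality rule."""
    cleaned = []
    for row in rows:
        amount = float(row["amount"])
        if amount >= 0:  # illustrative data-quality filter
            cleaned.append({"id": int(row["id"]),
                            "amount": amount,
                            "region": row["region"]})
    return cleaned

def load(rows):
    """Load: aggregate per region, standing in for a write to
    Snowflake or an Azure SQL table."""
    totals = {}
    for row in rows:
        totals[row["region"]] = totals.get(row["region"], 0.0) + row["amount"]
    return totals

totals = load(transform(extract(RAW_CSV)))
print(totals)  # {'east': 142.5}
```

The three-stage split mirrors how such pipelines are typically factored, so each stage can be tested and rerun independently.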
Senior Data Engineer, Truist (Dec 2022 - Present)
- Performed all phases of software engineering, including requirements analysis, application design, and code development and testing.
- Developed and maintained end-to-end ETL data pipelines and worked with large data sets in Azure Data Factory.
- Increased the efficiency of data fetching through query optimization and indexing.
- Wrote SQL DDL and DML statements, including indexes, triggers, views, stored procedures, functions, and packages.
- Used Azure Data Factory to integrate data from both on-prem (MySQL, Cassandra) and cloud (Blob Storage, Azure SQL DB) sources and applied transformations before loading back to Snowflake.
- Developed custom activities using Azure Functions, Azure Databricks, and PowerShell scripts to perform data transformation, cleaning, and validation.
- Used Azure Data Factory and Airflow on multiple cloud platforms, with a working understanding of Airflow operators.
- Automated and validated data pipelines using Apache Airflow; hands-on experience setting up workflows with Apache Airflow and the Oozie workflow engine for managing and scheduling Hadoop jobs.
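The data cleaning and validation custom activity described above can be sketched as a small, self-contained example. The rules and field names here are hypothetical; a production version would run as an Azure Function or a Databricks notebook activity invoked from the pipeline.

```python
# Hypothetical validation rules for a pipeline custom activity.
REQUIRED_FIELDS = ("customer_id", "balance")

def validate_record(record):
    """Return a list of validation errors for one record (empty = valid)."""
    errors = []
    for field in REQUIRED_FIELDS:
        if record.get(field) in (None, ""):
            errors.append(f"missing {field}")
    balance = record.get("balance")
    if isinstance(balance, (int, float)) and balance < 0:
        errors.append("negative balance")
    return errors

def split_valid_invalid(records):
    """Route records to a clean set or a quarantine set, as a
    validation activity might do before the load step."""
    valid, invalid = [], []
    for rec in records:
        (invalid if validate_record(rec) else valid).append(rec)
    return valid, invalid

records = [
    {"customer_id": "a1", "balance": 10.0},
    {"customer_id": "", "balance": 5.0},
    {"customer_id": "a3", "balance": -2.0},
]
valid, invalid = split_valid_invalid(records)
print(len(valid), len(invalid))  # 1 2
```

Quarantining bad rows rather than failing the whole run is a common design choice: the pipeline keeps loading clean data while rejected records are surfaced for review.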
Data Engineer, Homesite Insurance (Sep 2020 - Nov 2022)
- Ingested data into one or more Azure services (Azure Data Lake, Azure Storage, Azure SQL, Azure DW) and processed it in Azure Databricks.
- Worked on Microsoft Azure services such as HDInsight clusters, Blob Storage, Data Factory, and Logic Apps, and completed a POC on Azure Databricks.
- Performed ETL using Azure Databricks; migrated an on-premises Oracle ETL process to Azure Synapse Analytics.
- Migrated SQL databases to Azure Data Lake, Azure Data Lake Analytics, Azure SQL Database, Databricks, and Azure SQL Data Warehouse; controlled and granted database access; migrated on-premises databases to Azure Data Lake Store using Azure Data Factory.
- Spearheaded the planning and development of climbing routes, leveraging data analysis to identify optimal locations and difficulty levels for new routes.
- Implemented rigorous safety protocols and standards in route development, adhering to industry best practices and regulations to ensure the safety of climbers.
- Collaborated closely with climbing communities, local authorities, and environmental agencies to gain approvals and support for route development projects, fostering positive relationships and community engagement.
- Transferred data using Azure Synapse and PolyBase.
- Deployed and optimized Python web applications with Azure DevOps CI/CD to focus on development.
- Developed enterprise-level solutions using batch processing and streaming frameworks (Spark Streaming, Apache Kafka).
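The streaming framework mentioned above processes a stream as a sequence of small batches. The sketch below illustrates that tumbling-window idea with the standard library only; the event stream is hypothetical, and a real implementation would use Spark Structured Streaming consuming from a Kafka topic.

```python
from collections import Counter

# Hypothetical event stream standing in for messages consumed from
# a Kafka topic; each tuple is (event_time_in_seconds, event_type).
EVENTS = [
    (0, "quote"), (1, "bind"), (2, "quote"),
    (5, "quote"), (6, "claim"), (9, "quote"),
]

def micro_batches(events, window_seconds):
    """Group events into tumbling windows, the way a micro-batch
    engine slices a stream into small batches."""
    windows = {}
    for ts, kind in events:
        windows.setdefault(ts // window_seconds, []).append(kind)
    return windows

def count_per_window(events, window_seconds=5):
    """Count event types per window (a typical streaming aggregation)."""
    return {w: dict(Counter(kinds))
            for w, kinds in micro_batches(events, window_seconds).items()}

print(count_per_window(EVENTS))
# {0: {'quote': 2, 'bind': 1}, 1: {'quote': 2, 'claim': 1}}
```

In Spark this windowing and counting would be expressed declaratively (e.g. a groupBy over a time window) rather than hand-rolled, but the batching semantics are the same.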
Data Engineer, Merck (Mar 2018 - Aug 2020)
- Analyzed source data and handled it efficiently by modifying data types.
- Used Excel sheets, flat files, and CSV files to generate Power BI ad-hoc reports.
- Maintained source code in Git and GitHub repositories.
- Prepared an ETL framework with Sqoop, Pig, and Hive to regularly bring in data from the source and make it available for consumption; imported data from MySQL to HDFS using Sqoop on a regular basis.
- Performed aggregations on large amounts of data using Apache Spark and Scala, landing the results in the Hive warehouse for further analysis.
- Processed HDFS data, created external tables using Hive, and developed reusable scripts to ingest and repair tables across the project.
- Developed ETL jobs in Spark/Scala to migrate data from Oracle to new MySQL tables.
- Worked with data lakes and big data ecosystems (Hadoop, Spark, Hortonworks, Cloudera); loaded and transformed large sets of structured, semi-structured, and unstructured data.
- Wrote Hive queries for data analysis to meet business requirements.
- Built HBase tables leveraging HBase integration with Hive on the analytics zone.
- Developed a Spark Streaming application for real-time sales analytics; hands-on experience using Kafka and Spark Streaming to process streaming data in specific use cases.
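The reusable ingest-and-repair scripts mentioned above can be sketched by generating Hive statements from a table description. The table, columns, and path below are hypothetical; a real script would submit the generated statements through beeline or the Hive CLI after a Sqoop import lands files in HDFS.

```python
def external_table_ddl(table, columns, location, partition="load_date STRING"):
    """Build a Hive CREATE EXTERNAL TABLE statement over an HDFS location."""
    cols = ",\n  ".join(f"{name} {ctype}" for name, ctype in columns)
    return (
        f"CREATE EXTERNAL TABLE IF NOT EXISTS {table} (\n  {cols}\n)\n"
        f"PARTITIONED BY ({partition})\n"
        f"STORED AS PARQUET\n"
        f"LOCATION '{location}';"
    )

def repair_table_ddl(table):
    """Build the statement that re-registers partitions written directly
    to HDFS (e.g. by a Sqoop import) with the Hive metastore."""
    return f"MSCK REPAIR TABLE {table};"

# Hypothetical table used for illustration only.
ddl = external_table_ddl(
    "sales_raw",
    [("order_id", "BIGINT"), ("amount", "DOUBLE")],
    "/data/raw/sales",
)
print(ddl)
print(repair_table_ddl("sales_raw"))
```

Generating DDL from a single table description is what makes such scripts reusable: adding a new feed means adding a description, not writing new Hive statements by hand.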
Data Warehouse Developer, Kotak Mahindra Bank (Jul 2014 - Oct 2017)
- Created and maintained databases for server inventory and performance inventory.
- Worked in Agile Scrum methodology with daily stand-up meetings; used Visual SourceSafe with Visual Studio 2010 and tracked projects in Trello.
- Generated drill-through and drill-down reports with drop-down menu options, data sorting, and subtotals in Power BI.
- Used the data warehouse to develop a data mart feeding downstream reports, and built a user access tool with which users could create ad-hoc reports and run queries to analyze data in the proposed cube.
- Deployed SSIS packages and created jobs for efficient package execution; created ETL packages using SSIS to extract data from heterogeneous databases, then transform and load it into the data mart.
- Led the design and implementation of robust data integration pipelines for processing payer data from diverse sources, ensuring data accuracy, completeness, and timeliness.
- Designed and implemented analytical solutions to extract actionable insights from payer data, enabling stakeholders to make informed decisions and optimize healthcare delivery and reimbursement strategies.
- Ensured compliance with regulatory requirements such as HIPAA by implementing data security and privacy measures to safeguard sensitive payer information throughout the data lifecycle.
- Built cubes and dimensions with different architectures and data sources for business intelligence, and wrote MDX scripts.
R K Education Details
- Jntuh College Of Engineering Hyderabad
Frequently Asked Questions about R K
What company does R K work for?
R K works for Truist
What is R K's role at the current company?
R K's current role is Senior Data Engineer at Truist.
What schools did R K attend?
R K attended Jntuh College Of Engineering Hyderabad.
Who are R K's colleagues?
R K's colleagues are Irisel Rivera, Joshua Rivera, Jennifer Crawford, Renna Spencer, Glenn Shaw, Kiana Fernandez, Nathaniel Staradumsky.