Kiran P is a Senior Data Engineer at Shell.
Senior Cloud Data Engineer, Shell
Jul 2021 - Present | Houston, TX
• Developed and maintained end-to-end ETL data pipelines and worked with large data sets in Azure Data Factory.
• Set up a data lake in Google Cloud using Google Cloud Storage, BigQuery, and Bigtable.
• Created shell scripts to process raw data and load it into AWS S3 and Redshift databases.
• Wrote regression SQL to merge validated data into the production environment.
• Migrated data from on-prem servers to the cloud using Azure Data Factory and Sqoop.
• Researched and implemented cloud components including pipelines, activities, mapping data flows, datasets, linked services, integration runtimes, triggers, and control flow.
• Performed data transformations using Azure Data Factory and Azure Databricks.
• Worked with Azure Blob and Data Lake Storage and loaded data into Azure Synapse Analytics.
• Administered Microsoft Azure IaaS/PaaS services including Azure Virtual Machines (VMs), Virtual Network (VNet), Azure Storage, SQL Databases, Azure Active Directory (AAD), monitoring, DNS, autoscaling, and load balancing.
• Configured the cluster autoscaler for Azure Kubernetes Service (AKS) using Terraform; scheduled, deployed, and managed pods and replicas in AKS.
• Set up alerts and deployed dashboards for individual applications in AKS clusters using Prometheus and Grafana.
• Provisioned infrastructure such as virtual networks, load balancers, storage accounts, virtual machines, virtual machine scale sets, AKS clusters, Key Vaults, and Log Analytics workspaces in Microsoft Azure using Terraform modules.
• Updated Azure images in Azure Compute Galleries using Packer and updated the image references for virtual machines and virtual machine scale sets via Terraform across all environments.
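The raw-data processing and validation flow described above can be sketched in plain Python. The column names and validation rules below are hypothetical, purely illustrative of the kind of check that would run before an S3/Redshift load:

```python
import csv
import io


def validate_rows(raw_csv, required_fields=("id", "amount")):
    """Split raw CSV rows into valid and rejected sets before loading.

    A row is valid when every required field is present and non-empty
    and `amount` parses as a number. (Illustrative rules only.)
    """
    valid, rejected = [], []
    for row in csv.DictReader(io.StringIO(raw_csv)):
        if any(not (row.get(f) or "").strip() for f in required_fields):
            rejected.append(row)  # missing/empty required field
            continue
        try:
            row["amount"] = float(row["amount"])
        except ValueError:
            rejected.append(row)  # non-numeric amount
            continue
        valid.append(row)
    return valid, rejected


raw = "id,amount\n1,10.5\n2,\n3,abc\n4,7\n"
valid, rejected = validate_rows(raw)
```

Only the valid set would be staged for the warehouse load; rejected rows would typically be written to a quarantine location for review.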
Senior Azure Data Engineer, Chevron
Oct 2019 - Jul 2021 | San Ramon, California
• Designed and developed data models, data structures, and ETL jobs for data acquisition and manipulation.
• Developed JSON scripts for deploying data-processing pipelines in Azure Data Factory (ADF).
• Used Databricks with Azure Data Factory (ADF) to compute large volumes of data.
• Performed ETL operations in Azure Databricks, connecting to relational database source systems through JDBC connectors.
• Developed Python scripts for file validations in Databricks and automated the process with ADF.
• Built an automated process in Azure that ingests data daily from a web service and loads it into Azure SQL Database.
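A merge step like the one described for this role (promoting validated staging data into a production table) is often expressed as a single SQL MERGE. A minimal generator sketch, with hypothetical table and column names:

```python
def build_merge_sql(target, staging, key, columns):
    """Assemble an upsert-style MERGE from a staging table into a target.

    Table and column names are caller-supplied; this only builds the text,
    which would then be executed against the warehouse.
    """
    set_clause = ", ".join(f"t.{c} = s.{c}" for c in columns)
    cols = ", ".join([key] + columns)
    vals = ", ".join(f"s.{c}" for c in [key] + columns)
    return (
        f"MERGE INTO {target} AS t\n"
        f"USING {staging} AS s ON t.{key} = s.{key}\n"
        f"WHEN MATCHED THEN UPDATE SET {set_clause}\n"
        f"WHEN NOT MATCHED THEN INSERT ({cols}) VALUES ({vals});"
    )


# Hypothetical tables: validated staging orders merged into production.
sql = build_merge_sql("prod.orders", "stg.orders_validated", "order_id",
                      ["amount", "status"])
```

Keeping the statement generation in Python makes the same upsert pattern reusable across tables, which is what makes a "regression SQL" step repeatable across releases.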
Senior Data Engineer, Apple
Apr 2019 - Sep 2019 | Cupertino, California
• Designed, deployed, scheduled, and executed Spark jobs written in Python on a Hadoop cluster running Hortonworks 3.
• Implemented clusters processing 250 TB of batch data per month and about 50 GB of streaming data, loading the results into data warehousing systems for internal use.
• Wrote Terraform templates (infrastructure as code) to deploy virtual machines, the OMS Agent extension on VMs, and a Log Analytics workspace, and integrated Log Analytics with Azure VMs to monitor log files.
• Used Hive to analyze data ingested into HBase via Hive-HBase integration and computed metrics for dashboard reporting.
• Used Hive to perform transformations, event joins, and pre-aggregations before storing the data in HDFS.
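The pre-aggregation step mentioned above (rolling raw events up before persisting to HDFS) is at heart a group-by reduction. A dependency-free Python sketch of the idea, with hypothetical event fields:

```python
from collections import defaultdict


def pre_aggregate(events):
    """Roll raw events up to per-(user, action) counts and summed bytes,
    the kind of reduction done before writing a compact file to HDFS."""
    agg = defaultdict(lambda: {"count": 0, "bytes": 0})
    for e in events:
        key = (e["user"], e["action"])
        agg[key]["count"] += 1
        agg[key]["bytes"] += e["bytes"]
    return dict(agg)


# Hypothetical clickstream-style events.
events = [
    {"user": "u1", "action": "view", "bytes": 100},
    {"user": "u1", "action": "view", "bytes": 50},
    {"user": "u2", "action": "click", "bytes": 10},
]
rollup = pre_aggregate(events)
```

In Hive the same reduction would be a `GROUP BY user, action` with `COUNT(*)` and `SUM(bytes)`; pre-aggregating shrinks the data before it hits downstream storage.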
Software Engineer, U.S. Bank
Mar 2017 - Apr 2019 | OR, USA
• Implemented designs across vital phases of the software development life cycle (SDLC): development, testing, implementation, and maintenance support.
• Created real-time data streaming solutions using Apache Spark Core, Spark SQL and DataFrames, Spark Streaming, and Kafka.
• Worked with Hadoop (Gen 1 and Gen 2) and components such as HDFS, JobTracker, TaskTracker, NameNode, DataNode, and Resource Manager (YARN).
• Used Spark APIs to cleanse, explore, aggregate, transform, and store sale, customer, and stock data.
• Worked hands-on with message brokers such as Apache Kafka.
• Worked hands-on with systems-building languages such as Scala and Java.
• Wrote UNIX shell scripts.
• Involved in requirement analysis, design, development, and testing of the risk workflow system.
• Developed stored procedures and triggers in PL/SQL and wrote SQL scripts to create and maintain the database, roles, users, tables, views, procedures, and triggers.
• Used SQL queries to perform data validation and verify data integrity on an Oracle 11g database.
• Used core Java extensively: multithreading, exceptions, and collections.
• Generated server-side SQL scripts for data manipulation and validation, as well as materialized views.
• Created a database access layer using JDBC and SQL stored procedures.
• Worked on Java-based client connectivity over JDBC.
• Managed backup policies, handled CRs for backups and restorations, and performed user management and group policy management.
• Analyzed system failures, identified root causes, and recommended corrective actions.
• Performed root cause analysis for issues in production batch processes and provided permanent fixes.
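The real-time streaming work above follows the micro-batch pattern Spark Streaming uses: bucket timestamped events into fixed windows and aggregate per key. A small dependency-free simulation of that windowed counting (the event shape here is hypothetical):

```python
from collections import Counter


def windowed_counts(events, window_seconds=60):
    """Bucket (timestamp, key) events into fixed-size windows and count
    occurrences per key, mimicking a Spark Streaming window aggregation."""
    windows = {}
    for ts, key in events:
        bucket = ts - (ts % window_seconds)  # start of the window
        windows.setdefault(bucket, Counter())[key] += 1
    return windows


# Hypothetical trade events as (epoch_seconds, event_type).
events = [(0, "buy"), (30, "sell"), (59, "buy"), (60, "buy"), (125, "sell")]
result = windowed_counts(events, window_seconds=60)
```

In a real pipeline the events would arrive from a Kafka topic and the windows would be emitted continuously; this sketch just shows the bucketing arithmetic.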
Technology Analyst, Walmart
Nov 2015 - Feb 2017 | DE, USA
• Implemented designs across vital phases of the software development life cycle (SDLC): development, testing, implementation, and maintenance support.
• Provided production support for all Java-based applications (Response Tracker, Merchandise Request, Returned Equipment, Store Planning, Tape Library, and Regulatory Compliance) built on the MVC framework.
• Supported the third-party applications Kana and Merced.
• Provided Remedy user profile management support: user creation, support group assignments, profile updates, and user license management.
• Performed daily administrative tasks.
• Supported web services for easy integration with other systems.
• Migrated definitions and data from development to production using Remedy Migrator and the import and export tools.
• Modified workflows per business requirements.
• Automated AR System failure alerts with scripts.
• Managed and resolved server outages and failures.
• Created yearly archiving forms and archived old records.
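Alert automation of the kind described above (scripted AR System failure alerts) usually reduces to scanning logs for failure patterns and forwarding the hits. A hypothetical stdlib sketch; the patterns shown are illustrative, not actual Remedy log formats:

```python
import re

# Illustrative failure signatures; a real script would load these from config.
FAILURE_PATTERNS = [
    re.compile(r"ARERR \[(\d+)\]"),  # Remedy-style error code (illustrative)
    re.compile(r"\bFAILED\b"),
]


def scan_for_failures(log_lines):
    """Return the log lines matching any known failure pattern.
    A real alerting script would email or page these instead."""
    return [line for line in log_lines
            if any(p.search(line) for p in FAILURE_PATTERNS)]


logs = [
    "INFO  job started",
    "ERROR ARERR [552] Form does not exist",
    "WARN  retrying",
    "backup FAILED on host db01",
]
alerts = scan_for_failures(logs)
```

Run on a schedule (cron or a Remedy escalation), a scan like this turns silent batch failures into immediate notifications.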
Software Engineer, Apollo Hospitals
Mar 2012 - Jul 2015 | Hyderabad, Telangana, India
Responsibilities:
• Implemented designs across vital phases of the software development life cycle (SDLC): development, testing, implementation, and maintenance support.
• Developed web services using Spring and REST.
• Analyzed and fixed defects.
• Studied business requirement documents and implemented the business requirements using the technologies above.
Associate Professor, DJR Group of Institutions
Feb 2007 - Mar 2012 | Vijayawada, AP
Responsibilities:
• Conducted regular subject lectures, keeping an eye on students' understanding.
• Ran group discussions and just-a-minute programs to develop communication skills.
• Arranged expert meetings to help students understand the subject matter.
• Guided students in completing their project work.
• Conducted mock interviews as part of a personality development program.
• Conducted workshops and seminars on communication skills and soft skills.
• Guided students in paper presentations at various seminars.
Education
Attended Acharya Nagarjuna University (ANU), Guntur, and New England College.