Venkatesu Sunkavalli

Venkatesu Sunkavalli Email and Phone Number

Python || SQL || Linux || AWS || VMware || Ansible || RHEL || Data Engineer || Apache Spark @ Verizon
Basking Ridge, New Jersey, United States
Venkatesu Sunkavalli's Location
Little Elm, Texas, United States
About Venkatesu Sunkavalli

Venkatesu Sunkavalli is a Python || SQL || Linux || AWS || VMware || Ansible || RHEL || Data Engineer || Apache Spark at Verizon.

Venkatesu Sunkavalli's Current Company Details
Verizon

Python || SQL || Linux || AWS || VMware || Ansible || RHEL || Data Engineer || Apache Spark
Basking Ridge, New Jersey, United States
Website:
verizon.com
Employees:
151,940
Venkatesu Sunkavalli Work Experience Details
  • Verizon
    Sr Data Engineer
    Verizon Jun 2024 - Present
    Texas, United States
    As a Senior Data Engineer with 11 years of hands-on experience in data architecture, data pipelines, and analytics, I specialize in designing, building, and optimizing data solutions that empower organizations to make data-driven decisions. With deep knowledge of cloud platforms (AWS, Azure, GCP) and big data technologies, I am passionate about transforming raw data into actionable insights that drive business growth.
    - Created real-time processing jobs with Spark Streaming, Kafka, and Scala, and stored data in Cassandra.
    - Used Python, PySpark, and Spark for data ingestion.
    - Developed optimized ETL pipelines using Talend and Spark for batch and streaming data, improving data parallelism and reducing processing time by 30% to meet the performance requirements of high-throughput systems.
    - Created and orchestrated complex workflows with Apache Airflow, leveraging DAGs to manage ETL processes and integrating with AWS Lambda for resource-triggered events.
    - Enhanced query performance in Hive through bucketing and partitioning, coupled with Spark task optimization, resulting in faster data processing and efficient resource usage.
    - Set up real-time crediting data loads using Kafka, Airflow, Python, and Spark, enabling near-instant data updates for critical financial systems.
    - Created data integration and technical solutions for Azure Data Lake Analytics, Azure Data Lake Storage, Azure Data Factory, Azure SQL databases, and Azure SQL Data Warehouse to provide analytics.
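The Hive bucketing and partitioning mentioned above can be sketched in plain Python: Hive prunes partitions by laying files out under `key=value` directories, so a query filtering on the partition column only reads one directory. This is a minimal stdlib-only illustration; the `dt` column, record shape, and file names are hypothetical, not details from the profile.

```python
import os
import tempfile
from collections import defaultdict

def write_partitioned(records, base_dir, partition_key):
    """Group records by a partition column and write each group
    under a Hive-style key=value directory."""
    groups = defaultdict(list)
    for rec in records:
        groups[rec[partition_key]].append(rec)
    paths = []
    for value, rows in groups.items():
        part_dir = os.path.join(base_dir, f"{partition_key}={value}")
        os.makedirs(part_dir, exist_ok=True)
        path = os.path.join(part_dir, "part-00000.csv")
        with open(path, "w") as f:
            for row in rows:
                # Columns in sorted-key order so each file is deterministic
                f.write(",".join(str(row[k]) for k in sorted(row)) + "\n")
        paths.append(path)
    return sorted(paths)

records = [
    {"id": 1, "dt": "2024-06-01", "amount": 10},
    {"id": 2, "dt": "2024-06-01", "amount": 20},
    {"id": 3, "dt": "2024-06-02", "amount": 30},
]
base = tempfile.mkdtemp()
paths = write_partitioned(records, base, "dt")
# A query filtering on dt now only needs to read one directory
# (the same idea Hive calls partition pruning).
print([os.path.relpath(p, base) for p in paths])
```

Bucketing works the same way one level down: within a partition, rows are hashed on a bucket column into a fixed number of files, which also speeds up joins on that column.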
  • Chase
    Sr Data Engineer
    Chase Dec 2023 - May 2024
    Kentucky, United States
    Dynamic Data Engineer with extensive experience designing and implementing data pipelines, ETL workflows, and real-time data processing solutions on Azure. Proficient in Azure Data Factory (ADF), Azure Data Lake Storage (ADLS), and Azure Synapse, I drive seamless data integration from on-prem and cloud sources to support data-driven decision-making. Skilled in developing batch and streaming pipelines, transforming data with PySpark, and creating data assets that ensure quality and reliability. I leverage tools like Power BI, SQL, and Alteryx to deliver actionable insights and automate workflows, enabling business intelligence and predictive analytics.
    - Built data pipelines using Azure Data Factory (ADF) to transfer enterprise data assets from various on-prem and cloud sources to the new data lake built on Azure Data Lake Storage (ADLS Gen1 and Gen2).
    - Created and updated data lake assets remotely using Microsoft Azure Storage Explorer.
    - Used Azure DevOps to migrate Azure Data Factory pipelines, Azure Data Flows, Azure Databricks notebooks, SQL Server stored procedures, and custom SQL and HQL scripts from development to QA and production environments.
    - Developed and executed custom dashboards and reports by integrating Azure Synapse for business analytics and reporting.
    - Wrote and modified PySpark and Python notebooks to transform enterprise data before loading it into Azure Data Lake, focusing on data quality and integrity.
    - Utilized Power BI to develop comprehensive data visualizations, enabling business users to make data-driven decisions.
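The data-quality focus described in this role can be sketched without Spark: before loading to the lake, a notebook typically splits incoming rows into valid and rejected sets based on simple rules. The rule names and columns below are illustrative stand-ins, not from the profile.

```python
def validate(rows, required, non_null):
    """Split rows into (good, bad): a row is bad if any required
    column is absent or any non-null column is empty/None."""
    good, bad = [], []
    for row in rows:
        missing = [c for c in required if c not in row]
        nulls = [c for c in non_null if row.get(c) in (None, "")]
        (bad if missing or nulls else good).append(row)
    return good, bad

rows = [
    {"account": "A1", "balance": 100.0},   # valid
    {"account": None, "balance": 50.0},    # null key -> rejected
    {"balance": 25.0},                     # missing column -> rejected
]
good, bad = validate(rows, required=["account", "balance"], non_null=["account"])
print(len(good), len(bad))  # → 1 2
```

Keeping rejected rows (rather than dropping them) is what makes the load auditable: the bad set can be written to a quarantine table for review.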
  • IHS Markit, Bangalore
    Data Engineer
    IHS Markit, Bangalore Jun 2020 - Nov 2023
    India
    I am a dedicated and results-driven Data Engineer with over 3 years of experience in designing, developing, and optimizing data pipelines and architectures that power business intelligence and data-driven decision-making. With expertise in cloud technologies, big data platforms, and scalable data processing, I specialize in transforming complex data into valuable insights that help organizations thrive in the modern data landscape.
    - Developed an application to facilitate the required interaction with the cloud data hub and manage data movement from the source system to the reporting layer in AWS.
    - Designed and developed data reconciliation jobs in AWS Databricks to run quality checks and audit data inconsistencies.
    - Mitigated security flaws in GitHub code and improved code quality.
    - Developed Spark programs to process raw data, populate staging tables, and store refined data (JSON, XML, CSV, etc.) in partitioned tables in the enterprise data warehouse.
    - Designed and maintained ETL processes, loading and transforming data from various sources into Amazon Redshift, ensuring data consistency and availability for business analytics.
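A data reconciliation job of the kind described above typically compares a source and a target dataset on row counts and per-key content hashes. This is a minimal stdlib sketch of that idea; the key and column names are hypothetical, and a real Databricks job would compute the same digests distributed over DataFrames.

```python
import hashlib

def reconcile(source_rows, target_rows, key):
    """Compare two datasets by count and per-key content hash,
    reporting keys that are missing or whose rows differ."""
    def digest(row):
        # Sort items so hashing is independent of dict insertion order
        return hashlib.sha256(repr(sorted(row.items())).encode()).hexdigest()
    src = {row[key]: digest(row) for row in source_rows}
    tgt = {row[key]: digest(row) for row in target_rows}
    return {
        "count_match": len(src) == len(tgt),
        "missing_in_target": sorted(set(src) - set(tgt)),
        "mismatched": sorted(k for k in src.keys() & tgt.keys() if src[k] != tgt[k]),
    }

source = [{"id": 1, "v": "a"}, {"id": 2, "v": "b"}, {"id": 3, "v": "c"}]
target = [{"id": 1, "v": "a"}, {"id": 2, "v": "B"}]
report = reconcile(source, target, "id")
print(report)
# → {'count_match': False, 'missing_in_target': [3], 'mismatched': [2]}
```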
  • Mercedes-Benz India
    Data Engineer
    Mercedes-Benz India Aug 2018 - May 2020
    Bengaluru, Karnataka, India
    As a Data Engineer, I specialize in building and optimizing scalable data pipelines and architectures that enable organizations to make data-driven decisions. With expertise in cloud technologies and big data platforms, I transform raw data into actionable insights through efficient ETL processes, real-time data streams, and optimized data storage solutions.
    - Imported data from various sources, including REST APIs, Elasticsearch, and third-party vendor data sources, to ensure seamless data integration within the AWS ecosystem.
    - Performed data transformations using Apache Spark on Amazon EMR, loading and extracting data from AWS S3 to other storage solutions and data lakes for further processing.
    - Utilized Amazon OpenSearch (formerly Elasticsearch) to export data from Spark, enabling efficient data indexing for visualization, reporting, and insight generation for the BI team.
    - Handled large datasets in formats such as JSON, CSV, and Parquet, performing in-depth data analysis using Spark SQL to extract meaningful insights and improve data understanding for business decision-making.
    - Analyzed and managed AWS-based data lakes and Hadoop clusters using big data analytics tools, including Hive and MapReduce, to optimize storage and access patterns.
  • Intel Corporation
    Etl And Sql Developer
    Intel Corporation Jun 2017 - Jul 2018
    Bengaluru, Karnataka, India
    As an ETL and SQL Developer, I specialize in designing, implementing, and optimizing data pipelines and integrating data from various sources into cohesive systems for analytics and reporting. With a strong foundation in SQL and data transformation technologies, I focus on ensuring high-quality data processing, reliable integration, and seamless data flows that empower businesses to make informed decisions.
    - Created complex stored procedures, triggers, functions, indexes, tables, views, and SQL joins for applications.
    - Collaborated with other departments to correct data value errors.
    - Reduced average query response times by 30% through extensive query tuning and optimization of database structures.
    - Gathered customer requirements and analyzed the database to develop requirement documents.
    - Performed numerous data extraction requests involving SQL scripts.
    - Participated in formal review meetings with project teams and management to report, demonstrate, prioritize, and suggest resolutions for issues discovered during testing.
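The kind of query tuning described above (indexing the columns a query filters on) can be demonstrated with Python's built-in SQLite driver. The `orders` table and index name are hypothetical stand-ins, and the exact `EXPLAIN QUERY PLAN` wording varies by SQLite version.

```python
import sqlite3

# In-memory database standing in for the application schema
# (table and column names are illustrative, not from the profile).
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)"
)
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 100, float(i)) for i in range(1000)],
)

query = "SELECT * FROM orders WHERE customer_id = 7"

# Without an index, the filter forces a full table scan...
plan = conn.execute("EXPLAIN QUERY PLAN " + query).fetchone()
print(plan[-1])  # e.g. "SCAN orders"

# ...while an index on the filtered column lets SQLite seek directly.
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
plan = conn.execute("EXPLAIN QUERY PLAN " + query).fetchone()
print(plan[-1])  # e.g. "SEARCH orders USING INDEX idx_orders_customer (customer_id=?)"
```

The same principle applies on SQL Server: checking the execution plan before and after adding an index shows whether the optimizer switched from a scan to a seek.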
  • Kinit Info Solutions Pvt Ltd
    Linux System Engineer
    Kinit Info Solutions Pvt Ltd Dec 2013 - May 2017
    Chennai, Tamil Nadu, India
    As a skilled Linux System Engineer, I specialize in managing, configuring, and optimizing Linux-based systems and infrastructure to ensure maximum performance, scalability, and reliability. My focus is on delivering high-quality solutions through automation, system monitoring, and continuous improvement, with a deep understanding of system administration, cloud environments, and network protocols.
    - Installed, configured, and upgraded Solaris, AIX, HP-UX, Linux (Red Hat and SUSE), and Windows operating systems.
    - Provided day-to-day support, monitoring alerts, tickets, and Linux servers.
    - Monitored alerts generated and received from monitoring tools.
    - Worked on a data center migration project, migrating Linux/Unix servers from one data center to another with minimal downtime.
    - Installed, configured, and administered high availability using Red Hat Cluster and Veritas Cluster Server for failover and redundancy.
    - Good understanding of the Linux boot procedure.
    - Implemented best practices for system hardening, access control, and network security, leading to a significant reduction in security incidents and vulnerabilities across the infrastructure.
    - Developed Python scripts for monitoring and alerting on system health metrics, enabling proactive identification and resolution of potential issues before they impacted production.
    - Designed and implemented automated workflows using middleware solutions to streamline business processes and improve operational efficiency.
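A Python health-metric check of the kind mentioned above can be sketched with the standard library alone; the path, threshold, and alert format below are illustrative defaults, and a production script would feed the result into an alerting tool rather than return a string.

```python
import shutil

def check_disk(path="/", threshold_pct=90.0):
    """Return an alert string when disk usage on `path` meets or
    exceeds the threshold percentage, else None."""
    usage = shutil.disk_usage(path)
    used_pct = usage.used / usage.total * 100
    if used_pct >= threshold_pct:
        return f"ALERT: {path} at {used_pct:.1f}% used (threshold {threshold_pct}%)"
    return None

# Run periodically (e.g. from cron) and forward any non-None result
# to the alerting channel of choice.
alert = check_disk("/")
if alert:
    print(alert)
```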

Venkatesu Sunkavalli Education Details

  • University Of Madras

Frequently Asked Questions about Venkatesu Sunkavalli

What company does Venkatesu Sunkavalli work for?

Venkatesu Sunkavalli works for Verizon.

What is Venkatesu Sunkavalli's role at the current company?

Venkatesu Sunkavalli's current role is Python || SQL || Linux || AWS || VMware || Ansible || RHEL || Data Engineer || Apache Spark.

What schools did Venkatesu Sunkavalli attend?

Venkatesu Sunkavalli attended University Of Madras.

Who are Venkatesu Sunkavalli's colleagues?

Venkatesu Sunkavalli's colleagues are Fayetta Waddell, Pettinato Giacinto Dino, Elliot Vazquez, Jennifer Szpara, Stuart Sullivan, Justin Harris, Jeffrey Ladd.
