Srikanth Y

Srikanth Y Email and Phone Number

Actively looking for Data Engineer positions | AWS, Azure, SSIS, SSRS, Redshift, Glue, Lambda, Boto3, Power BI, Power Apps, DynamoDB, Apache Spark, Airflow, Kafka, Python, Tableau, Jenkins, Maven, Ansible. @ UPS
Ground Floor, Abhilasha building, Behind NABARD Office 6, Royd Street, Kolkata, West Bengal 700016, India
Srikanth Y's Location
Suwanee, Georgia, United States
About Srikanth Y

An enthusiastic and experienced data engineer looking to transform the way your organization uses data and to ensure your architectural plan addresses the needs of everyone in your company. I have worked with Hadoop, Spark, Kafka, SQL, NoSQL, Elasticsearch, and the AWS and Azure clouds, and can resolve your data issues with my expertise in big data.
• Overall 7+ years of experience as a Big Data Engineer, ETL Developer, and Python Developer specialized in the big data ecosystem: data acquisition, ingestion, modeling, storage, analysis, integration, and processing.
• 4+ years of experience working in cloud environments such as Microsoft Azure and AWS.
• 4+ years of experience with Spark, Hadoop, HDFS, HBase, Hive, Sqoop, Scala, Pig, MapReduce, Kafka, Flume, Python, Linux, Eclipse Juno, Impala, Oozie, XML, JSON, Maven, Zookeeper, and Hue.
• 4 years of extensive experience working with file formats such as Parquet, Delta, Avro, CSV, XML, and JSON.
• 2+ years of data modeling and data analysis experience using dimensional and relational data modeling.
• A data science enthusiast with strong problem-solving, debugging, and analytical capabilities who actively engages in understanding and delivering business requirements.
• Strong Hadoop and platform support experience with the tools and services of all major Hadoop distributions: Cloudera, Hortonworks, Amazon EMR, and Azure HDInsight.
• Extensive working experience with the big data ecosystem: Hadoop (HDFS, MapReduce, YARN), Spark, Kafka, Hive, Impala, Sqoop, Airflow, and Oozie.
• Expertise in building PySpark, Spark Java, and Scala applications for interactive analysis, batch processing, and stream processing.
• SME on EMR/EHR full-life-cycle implementations, upgrades, and conversions, including paper-to-electronic.
• Deep knowledge of developing production-ready Spark applications utilizing Spark Core, Spark Streaming, Spark SQL, DataFrames, Datasets, and Spark ML.
• Strong working experience with SQL and NoSQL databases, data modeling, and data pipelines; involved in end-to-end development and automation of ETL pipelines using SQL and Python.
• Extensive knowledge of implementing, configuring, and maintaining Amazon Web Services (AWS) offerings such as EC2, S3, EBS, Lambda, Redshift, EMR, Glue, and Athena.
• Sound knowledge of Azure cloud components (HDInsight, Databricks, Data Lake, Blob Storage, Data Factory, Storage Explorer, SQL DB, SQL DWH, and Cosmos DB).
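The end-to-end ETL pipelines in SQL and Python mentioned above follow the usual extract-transform-load shape. As a minimal stand-alone sketch (the schema, values, and in-memory "warehouse" below are hypothetical stand-ins, not the actual pipelines):

```python
import csv
import io

# Hypothetical raw feed with schema (id, amount); in production this
# would arrive from S3, a database, or a message queue.
RAW = "id,amount\n1,10.5\n2,3.0\n1,4.5\n"

def extract(text):
    """Parse CSV text into a list of row dicts."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Aggregate amounts per id (a typical group-by step)."""
    totals = {}
    for row in rows:
        totals[row["id"]] = totals.get(row["id"], 0.0) + float(row["amount"])
    return totals

def load(totals, sink):
    """Append aggregated records to an in-memory 'warehouse' table."""
    for key, value in sorted(totals.items()):
        sink.append({"id": key, "total": value})
    return sink

warehouse = load(transform(extract(RAW)), [])
```

In a real pipeline each stage would be a separate task in an orchestrator such as Airflow, with the sink being Redshift or a Hive table rather than a list.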

Srikanth Y's Current Company Details
UPS

Website:
ups.com
Company email:
sef@ups.com
Srikanth Y Work Experience Details
  • UPS
    Senior Data Engineer
    UPS Jan 2022 - Present
    Atlanta, GA, US
    • Engaged in the full project life cycle, including the design and implementation of monitoring of data received into the Data Lake.
    • Designed and developed ETL processes in AWS Glue to migrate accident data from external sources such as S3 and text files into AWS Redshift.
    • Translated data access, transformation, and movement requirements into functional requirements and mapping designs.
    • Created, tested, and performance-tuned complex mappings using functions such as aggregations and joins.
    • Implemented fixes for and analyzed SQL query performance issues in databases.
    • Built a real-time streaming pipeline using Kafka, Spark Streaming, and Redshift.
    • Created logical and physical data flow models for Informatica ETL applications.
    • Improved and modified existing ETL/ELT processes to ensure completeness and availability of data in the data warehouse for reporting use.
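A Glue or Lambda job that lands staged S3 files into Redshift typically ends by issuing a COPY statement against the cluster. A minimal sketch of composing one in Python (the table, bucket, and IAM role names are hypothetical placeholders):

```python
def redshift_copy_sql(table, s3_path, iam_role, fmt="CSV"):
    """Build a Redshift COPY statement for loading staged S3 files.

    The caller would execute the returned SQL against the cluster,
    e.g. via a database cursor; nothing here touches the network.
    """
    return (
        f"COPY {table} "
        f"FROM '{s3_path}' "
        f"IAM_ROLE '{iam_role}' "
        f"FORMAT AS {fmt} IGNOREHEADER 1;"
    )

# Example values only; not the actual production identifiers.
sql = redshift_copy_sql(
    "staging.accidents",
    "s3://example-bucket/accidents/2022/01/",
    "arn:aws:iam::123456789012:role/redshift-load",
)
```

Batching files under one S3 prefix and issuing a single COPY is generally much faster than row-by-row inserts, which is why Glue jobs stage to S3 first.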
  • Global Medical Response
    Senior Data Engineer
    Global Medical Response Sep 2020 - Dec 2021
    Lewisville, Texas, US
    • Worked with Spark to improve the performance and optimization of existing algorithms in Hadoop using Spark Context, Spark SQL, Spark MLlib, DataFrames, Pair RDDs, and Spark on YARN.
    • Used Spark Streaming to receive real-time data from Kafka and stored the stream data to HDFS using Scala and NoSQL databases such as HBase and Cassandra.
    • Loaded data from web servers and Teradata using Sqoop, Flume, and the Spark Streaming API.
    • Used Kafka for live streaming data and performed analysis on it.
    • Worked with Sqoop to move data between relational databases and Hadoop.
    • Handled cloud deployments using Maven, Docker, and Jenkins.
    • Wrote numerous MapReduce programs for data extraction, transformation, and aggregation from a variety of file formats, including XML, JSON, CSV, and other compressed formats.
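The Kafka-to-Spark-Streaming pattern described above processes events in micro-batches while merging each batch into running keyed state. A toy pure-Python stand-in (event names are made up, and `Counter` plays the role of Spark's keyed state store):

```python
from collections import Counter

def micro_batches(events, batch_size):
    """Chunk an event stream into fixed-size micro-batches,
    the way a streaming engine groups arrivals per trigger interval."""
    batch = []
    for event in events:
        batch.append(event)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:
        yield batch  # flush the final partial batch

# Hypothetical event stream; in the real job these would be Kafka records.
state = Counter()
stream = ["scan", "deliver", "scan", "scan", "deliver", "return"]
for batch in micro_batches(stream, 2):
    state.update(batch)  # merge this batch's counts into running state
```

In Spark the same merge step would be expressed with stateful operators such as `updateStateByKey` (DStreams) or `mapGroupsWithState` (Structured Streaming), with the state checkpointed to HDFS.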
  • XO Communications
    Hadoop Developer
    XO Communications Aug 2019 - Nov 2020
    Herndon, VA, US
    • Designed data models for complex analysis needs, such as collecting large volumes of customer data for people using Cigna insurance across various states of the United States of America.
    • Developed and delivered business information solutions, such as adding an Email task to a workflow so that users are notified of scheduled job status.
    • Developed database architectural strategies at the modeling, design, and implementation stages to address business or industry requirements.
    • Gathered, defined, and refined requirements, led project design, and oversaw implementation.
    • Planned and installed upgrades of database management system software to enhance database performance.
    • Collaborated with system architects, design analysts, and others to understand business and industry requirements.
    • Reviewed project requests describing database user needs to estimate the time and cost required to accomplish projects; handled projects costing up to USD 500K with project timelines of 6 months.
    • Created plans and communicated deadlines with offshore team members to ensure projects were completed on time.
    • Participated in continuous improvement by generating suggestions, such as documenting new issues encountered in Informatica so that other team members can resolve them.
  • EY
    ETL Developer
    EY Mar 2017 - Jun 2019
    London, GB
    • Worked as a SQL developer, focusing mainly on reporting Citibank customer data.
    • Involved in all phases of the Software Development Life Cycle (requirements gathering, analysis, design, development, testing, and maintenance).
    • Involved in the project cycle plan for the data warehouse: analyzing the source data, deciding the exact data extraction, transformation, and loading strategy, dimensional modeling, and backup and recovery.
    • Developed reports using SQL Server Reporting Services (SSRS 2012), creating various value sections in parameter lists, designing cascading parameters, matrix dynamics, and other Reporting Services features.
    • Used advanced features of T-SQL to design and tune T-SQL to interface with the database and other applications in the most efficient manner, and created stored procedures for business logic using T-SQL.
    • Worked with SQL Profiler and the Tuning Advisor in SQL Server 2012 to improve report performance.
    • Worked with report subscriptions and snapshots in SSRS 2012 to send weekly automated reports to end users.
    • Developed complex MDX and created database objects: schemas, tables, indexes, views, user-defined functions, cursors, triggers, stored procedures, constraints, and roles.
  • Genpact
    Data Analyst
    Genpact Mar 2015 - Feb 2017
    New York, NY, US
    • Worked as a SQL developer, focusing mainly on reporting Citibank customer data.
    • Prepared reusable stored procedures with indexes and joins, and wrote complex queries joining up to 10 tables.
    • Used IFW (Inquiry Framework), a reporting tool, to generate reports per the client requirement document.
    • Used SAP BO as the main back-end source of reports; Citibank's customized IFW reporting tool generates reports as .CSV files, and Jaspersoft Studio was used to format those .CSV files.
    • Developed and advocated development standards and practices for the development team.
    • Ingested data into the Snowflake cloud data warehouse using Snowpipe; good knowledge of tools such as Snowflake, SSIS, SSAS, and SSRS for designing warehousing applications.
    • Involved in designing, developing, and deploying reports in an MS SQL Server environment using SSRS and SSIS with Power BI.
    • Experienced with SSIS tools such as the Import/Export Wizard, package installation, and the SSIS Package Designer.
    • Designed, developed, and tested highly efficient, high-performance ETL mappings, mapplets, workflows, and worklets using Informatica PowerCenter 9.1.x.
    • Migrated all batch jobs and workflows from Informatica PowerCenter 9.1.x to PowerCenter 10.1.x.
    • Worked with SMEs and other business teams to provide key decisions, as well as work estimation and resource planning.
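Snowpipe continuous ingestion, mentioned above, is configured with a `CREATE PIPE` statement that wraps a `COPY INTO`. A sketch of composing that DDL in Python (the pipe, table, and stage names are hypothetical examples, not the actual objects):

```python
def snowpipe_ddl(pipe, table, stage,
                 file_format="(TYPE = CSV SKIP_HEADER = 1)"):
    """Compose a CREATE PIPE statement for Snowflake Snowpipe.

    With AUTO_INGEST = TRUE, Snowflake loads new files automatically as
    cloud-storage event notifications arrive on the external stage.
    Nothing here connects to Snowflake; it only builds the SQL string.
    """
    return (
        f"CREATE PIPE {pipe} AUTO_INGEST = TRUE AS "
        f"COPY INTO {table} FROM @{stage} "
        f"FILE_FORMAT = {file_format};"
    )

# Example object names only.
ddl = snowpipe_ddl("raw.customer_pipe", "raw.customers", "raw.customer_stage")
```

A setup script would run this DDL once via the Snowflake connector; from then on the pipe loads each newly staged file without a scheduled batch job.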

Srikanth Y Education Details

  • Christian Brothers University

Frequently Asked Questions about Srikanth Y

What company does Srikanth Y work for?

Srikanth Y works for UPS.

What is Srikanth Y's role at the current company?

Srikanth Y's current headline is "Actively looking for Data Engineer positions | AWS, Azure, SSIS, SSRS, Redshift, Glue, Lambda, Boto3, Power BI, Power Apps, DynamoDB, Apache Spark, Airflow, Kafka, Python, Tableau, Jenkins, Maven, Ansible."

What schools did Srikanth Y attend?

Srikanth Y attended Christian Brothers University.

Who are Srikanth Y's colleagues?

Srikanth Y's colleagues are Scott Lindsey, Pier Luigi Vincenzi, Ben Robertson, Caleb Kidd, Kevin Krysa, Sohail Abdullah, Roger Sanchez.
