Aravind K

Actively looking for new opportunities in the Big Data Engineering field | PySpark/Hadoop Developer | Big Data Engineer | AWS Data Engineer | Data Analytics @ Target
Aravind K's Location
Dallas-Fort Worth Metroplex, United States
About Aravind K

Over 7 years of IT experience in the Retail, Banking, Health Care, and Insurance domains. Experience in analysis, design, development, integration, testing, and maintenance of various applications using Java/J2EE technologies, along with around 5 years of Big Data/Hadoop experience.

  • Big Data Technologies: HDFS, MapReduce, Hive, Pig, Sqoop, Flume, Oozie, Avro, Hadoop Streaming, ZooKeeper, Kafka, Impala, Apache Spark, Ambari, Apache Ignite
  • Hadoop Distributions: Cloudera (CDH4/CDH5), Hortonworks
  • Languages: Java, C, SQL, Python, PL/SQL, Pig Latin, HQL
  • Cloud Computing Tools: Amazon AWS (S3, EMR, EC2, Lambda, VPC, Route 53, CloudWatch), GCP
  • Frameworks: Hibernate, Spring, Struts, JUnit
  • Operating Systems: Windows, UNIX, Linux, Ubuntu, CentOS
  • Application Servers: JBoss, Tomcat, WebLogic, WebSphere, Servlets
  • Reporting/ETL Tools: Tableau, Power View for Microsoft Excel, Informatica
  • Databases: Oracle, MySQL, DB2, Derby, PostgreSQL, NoSQL databases (HBase, Cassandra)

Aravind K's Current Company Details
Target
Aravind K Work Experience Details
  • Target
    Senior Data Engineer
    Target Oct 2020 - Present
    Minneapolis, MN, US
    • Wrote Spark applications in Scala to perform data cleansing, validation, transformation, and summarization activities according to requirements.
    • Created a data lake in Google Cloud Platform (GCP) to let business teams perform data analysis in BigQuery.
    • Automated the launch and autoscaling of Dataproc clusters and submitted Spark jobs to those clusters.
    • Worked extensively with partitioned tables, dynamic partitioning, and bucketed tables in Hive; designed both managed and external tables and optimized Hive queries.
    • Wrote Kafka producers for streaming real-time JSON messages to Kafka topics, processed them with Spark Streaming, and performed streaming inserts into BigQuery.
    • Performance-tuned Spark applications to improve job execution times and troubleshot failures.
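The pipeline code itself is not part of this profile; the following is a minimal stdlib sketch of the pattern the streaming bullets describe (produce JSON messages to a topic, then consume and aggregate them before insert). The in-memory topic and the `store_id`/`amount` fields are hypothetical stand-ins, not Target's actual schema:

```python
import json
from collections import defaultdict

def produce(topic, record):
    """Serialize a record to JSON bytes and append it to the topic
    (stands in for a Kafka producer's send call)."""
    topic.append(json.dumps(record).encode("utf-8"))

def consume_and_aggregate(topic):
    """Parse each JSON message and keep a running per-store total,
    the shape of work a streaming job does before writing to a sink
    such as BigQuery."""
    totals = defaultdict(float)
    for message in topic:
        event = json.loads(message.decode("utf-8"))
        totals[event["store_id"]] += event["amount"]
    return dict(totals)

topic = []  # in-memory stand-in for a Kafka topic
produce(topic, {"store_id": "T100", "amount": 19.99})
produce(topic, {"store_id": "T100", "amount": 5.00})
produce(topic, {"store_id": "T200", "amount": 12.50})

totals = consume_and_aggregate(topic)
```

In a real deployment the list would be a Kafka topic, the loop a Spark Streaming micro-batch, and the returned dict a streaming insert into a BigQuery table.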
  • Citi
    Big Data Developer
    Citi Nov 2018 - Sep 2020
    New York, NY, US
    • Developed Spark applications in Scala using DataFrames and the Spark SQL API for faster data processing.
    • Developed highly optimized Spark applications to perform data cleansing, validation, transformation, and summarization activities according to requirements.
    • Built a data pipeline consisting of Spark, Hive, Sqoop, and custom-built input adapters to ingest, transform, and analyze operational data.
    • Developed Spark jobs and Hive jobs to summarize and transform data.
    • Used Spark for interactive queries, processing of streaming data, and integration with popular NoSQL databases for large data volumes.
    • Converted Hive/SQL queries into Spark transformations using Spark DataFrames and Scala.
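The cleanse-validate-summarize flow named above can be sketched with plain Python collections; the records and column names below are hypothetical, and the final loop mirrors what a `filter(...).groupBy("account").agg(sum("amount"))` chain does in Spark SQL:

```python
from collections import defaultdict

rows = [  # hypothetical operational records
    {"account": "A1", "amount": 250.0, "status": "POSTED"},
    {"account": "A1", "amount": -40.0, "status": "POSTED"},
    {"account": "A2", "amount": 90.0,  "status": "PENDING"},
    {"account": None, "amount": 10.0,  "status": "POSTED"},  # fails validation
]

# Cleanse/validate: drop rows with a missing account key.
valid = [r for r in rows if r["account"] is not None]

# Summarize: per-account net amount, the group-and-aggregate step
# a Spark DataFrame pipeline would perform at scale.
net = defaultdict(float)
for r in valid:
    net[r["account"]] += r["amount"]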
  • Kaiser Permanente
    Hadoop Developer
    Kaiser Permanente Jan 2017 - Sep 2018
    Oakland, CA, US
    • Created data ingestion pipelines to collect health care and provider data from external sources such as FTP servers and S3 buckets.
    • Migrated an existing Teradata data warehouse to AWS S3-based data lakes.
    • Migrated existing traditional ETL jobs to Spark and Hive jobs on the new cloud data lake.
    • Wrote complex Spark applications to de-normalize datasets and create a unified data analytics layer for downstream teams.
    • Primarily responsible for fine-tuning long-running Spark applications, writing custom Spark UDFs, and troubleshooting failures.
    • Built a real-time pipeline using Kafka and Spark Streaming to deliver event messages from an external REST-based application to a downstream application team.
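De-normalizing datasets into a unified layer, as described above, amounts to joining reference attributes onto fact records so downstream teams read one wide table. A minimal sketch with hypothetical provider/claim data (not Kaiser's actual schema):

```python
providers = {  # hypothetical reference data from an external source
    "P1": {"name": "Dr. Rivera", "region": "CA"},
    "P2": {"name": "Dr. Chen",   "region": "OR"},
}
claims = [  # hypothetical fact records
    {"claim_id": "C100", "provider_id": "P1", "amount": 120.0},
    {"claim_id": "C101", "provider_id": "P2", "amount": 75.0},
]

# De-normalize: attach provider attributes to each claim, the
# inner-join-then-flatten a Spark job would run over full datasets.
unified = [
    {**c, **providers[c["provider_id"]]}
    for c in claims
    if c["provider_id"] in providers
]
```

The design trade-off is the usual one for analytics layers: attribute values are duplicated across rows, but consumers avoid a join at read time.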
  • Huntington National Bank
    Hadoop Developer
    Huntington National Bank Sep 2015 - Dec 2016
    Columbus, OH, US
    • Wrote Spark applications to perform data cleansing, validation, transformation, and summarization activities according to requirements.
    • Loaded data into Spark RDDs and performed in-memory computation to generate output per requirements.
    • Developed data pipelines using Spark, Hive, and Sqoop to ingest, transform, and analyze operational data.
    • Performance-tuned Spark applications.
    • Streamed data in real time using Spark with Kafka; responsible for handling streaming data from web server console logs.
    • Worked with file formats including text, SequenceFiles, Avro, Parquet, JSON, XML, and flat files using MapReduce programs.
    • Developed a daily process for incremental import of data from DB2 and Teradata into Hive tables using Sqoop.
    • Wrote Pig scripts to generate MapReduce jobs and performed ETL procedures on data in HDFS.
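The daily incremental import mentioned above follows watermark semantics (as in Sqoop's `--incremental append --check-column id --last-value N`): pull only rows past the stored high-water mark, then advance it. A sketch with a hypothetical source table:

```python
def incremental_import(source_rows, last_value, check_column="id"):
    """Return rows whose check column exceeds the stored watermark,
    plus the new watermark to persist for the next daily run."""
    new_rows = [r for r in source_rows if r[check_column] > last_value]
    new_last = max((r[check_column] for r in new_rows), default=last_value)
    return new_rows, new_last

table = [{"id": 1}, {"id": 2}, {"id": 3}, {"id": 4}]  # hypothetical DB2 rows
batch, watermark = incremental_import(table, last_value=2)
```

Persisting `watermark` between runs is what keeps the daily job from re-importing rows it has already loaded into Hive.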
  • Avineon
    Java Developer
    Avineon Jul 2014 - Aug 2015
    McLean, VA, US
    • Performed analysis of client requirements based on detailed design documents.
    • Developed user interfaces using JavaScript and HTML.
    • Implemented MVC architecture by creating model, view, and controller classes.
    • Performed unit testing, debugging, and bug fixing of application modules.
    • Wrote SQL queries extensively to fetch data from the database.
    • Defined web services using the XML-based Web Services Description Language (WSDL).
    • Built Java APIs/services backing user-interface screens using Spring MVC.
    • Integrated other systems through XML.
    • Worked with core Java concepts such as the Collections Framework, multithreading, and memory management.
    • Resolved issues with the JVM and multithreading; connected to the backend database using JDBC.
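The MVC split referenced above (the original work used Java/Spring MVC) separates state, rendering, and request handling; a toy sketch of that separation, with all class and field names hypothetical:

```python
class Model:
    """Holds application state (stand-in for a database-backed entity)."""
    def __init__(self):
        self.users = {}

class View:
    """Renders model data; in the original work this layer was HTML/JavaScript."""
    @staticmethod
    def render(user):
        return f"<p>{user['name']} ({user['email']})</p>"

class Controller:
    """Mediates between view and model, the role a Spring MVC controller plays."""
    def __init__(self, model):
        self.model = model
    def add_user(self, uid, name, email):
        self.model.users[uid] = {"name": name, "email": email}
    def show_user(self, uid):
        return View.render(self.model.users[uid])

app = Controller(Model())
app.add_user(1, "Ada", "ada@example.com")
html = app.show_user(1)
```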

Aravind K Education Details

  • Osmania University
    Computer Science

Frequently Asked Questions about Aravind K

What company does Aravind K work for?

Aravind K works for Target.

What is Aravind K's role at the current company?

Aravind K's current role is Senior Data Engineer at Target.

What schools did Aravind K attend?

Aravind K attended Osmania University.
