Raghu K
- Overall, 7+ years of technical IT experience in all phases of the Software Development Life Cycle (SDLC), with skills in data analysis, design, development, testing, and deployment of software systems.
- 5+ years of industrial experience in Big Data analytics and data manipulation using Hadoop ecosystem tools: MapReduce, HDFS, YARN/MRv2, Pig, Hive, HBase, Spark, Kafka, Flume, Sqoop, Oozie, Avro, AWS, Spring Boot, Spark integration with Cassandra, Solr, and ZooKeeper.
- Experience in developing data pipelines using AWS services including EC2, S3, Redshift, Glue, Lambda functions, Step Functions, CloudWatch, SNS, DynamoDB, and SQS.
- Proficiency in multiple databases, including MongoDB, Cassandra, MySQL, Oracle, and MS SQL Server.
- Worked on different file formats such as delimited files, Avro, JSON, and Parquet (see the sketch below).
- Docker container orchestration using ECS, ALB, and Lambda.
- Created snowflake schemas by normalizing dimension tables as appropriate, including a sub-dimension named Demographic as a subset of the Customer dimension.
- Hands-on experience with test-driven development (TDD), behavior-driven development (BDD), and acceptance-test-driven development (ATDD) approaches.
- Managed databases and Azure data platform services (Azure Data Lake Storage (ADLS), Data Factory (ADF), Data Lake Analytics, Stream Analytics, Azure SQL DW, HDInsight/Databricks, NoSQL DB) alongside SQL Server, Oracle, and data warehouses; built multiple data lakes.
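The file-format bullet above is the kind of work a short example can make concrete. Below is a minimal Spark sketch in Scala, not taken from the resume: all bucket names and paths are hypothetical, and the Avro reader assumes the external spark-avro package is on the classpath.

```scala
import org.apache.spark.sql.SparkSession

// Minimal sketch of reading the file formats listed above with Spark.
// All paths are hypothetical.
object FileFormatSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("file-format-sketch").getOrCreate()

    // Delimited file with a header row and an inferred schema
    val csvDf = spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("s3a://example-bucket/raw/customers.csv")

    // JSON and Parquet readers ship with Spark
    val jsonDf    = spark.read.json("s3a://example-bucket/raw/events.json")
    val parquetDf = spark.read.parquet("s3a://example-bucket/curated/events/")

    // Avro requires org.apache.spark:spark-avro as an added dependency
    val avroDf = spark.read.format("avro").load("s3a://example-bucket/raw/events.avro")

    // Example downstream step: persist one input as Parquet for later jobs
    csvDf.write.mode("overwrite").parquet("s3a://example-bucket/staging/customers/")

    spark.stop()
  }
}
```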
Data Engineer, Verizon
Feb 2021 - Present, Basking Ridge, NJ, US
- Involved in designing and deploying multi-tier applications using AWS services (EC2, Route 53, S3, RDS, DynamoDB, SNS, SQS, IAM), focusing on high availability, fault tolerance, and auto-scaling with AWS CloudFormation.
- Supported continuous storage in AWS using Elastic Block Store, S3, and Glacier; created volumes and configured snapshots for EC2 instances.
- Used the DataFrame API in Scala to work with distributed collections of data organized into named columns, developing predictive analytics with Apache Spark's Scala APIs.
- Developed Scala scripts using both DataFrames/SQL/Datasets and RDD/MapReduce in Spark for data aggregation and queries, writing results back into the OLTP system through Sqoop (see the sketch below).
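As a rough illustration of the Scala DataFrame work described above, the sketch below aggregates hypothetical order data with Spark's DataFrame API and stages the result for a separate Sqoop export. The paths and column names (orders, customer_id, amount) are assumptions, not details from the resume.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object OrderAggregationSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("order-aggregation-sketch").getOrCreate()
    import spark.implicits._

    // Hypothetical input: order events landed in HDFS as Parquet
    val orders = spark.read.parquet("hdfs:///data/staging/orders")

    // DataFrame API: distributed rows organised into named columns
    val dailyTotals = orders
      .filter($"status" === "COMPLETED")
      .groupBy($"customer_id", to_date($"order_ts").as("order_date"))
      .agg(sum($"amount").as("daily_total"), count("*").as("order_count"))

    // Staged back to HDFS; a separate Sqoop export could push this to the OLTP system
    dailyTotals.write.mode("overwrite").parquet("hdfs:///data/export/daily_totals")

    spark.stop()
  }
}
```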
Big Data Engineer, Costco Travel
Aug 2019 - Jan 2021, Issaquah, WA, US
- Designed and developed Azure Data Factory (ADF) pipelines extensively for ingesting data from different source systems, both relational and non-relational, to meet business functional requirements.
- Designed and developed event-driven architectures using blob triggers and Data Factory.
- Created pipelines, data flows, and complex data transformations and manipulations using ADF and PySpark with Databricks (see the sketch below).
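The Databricks transformation step in such a pipeline might look like the following sketch. The resume names PySpark; Scala is used here only to keep this document's examples in one language (Databricks supports both). The storage account, container, and column names are hypothetical, and on Databricks itself the SparkSession is provided rather than built by hand.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

// Sketch of a transformation step an ADF pipeline might invoke on Databricks.
object AdfNotebookStepSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("adf-step-sketch").getOrCreate()

    // Blob-trigger scenario: ADF passes the newly landed folder as a parameter
    val inputPath = "abfss://raw@examplestore.dfs.core.windows.net/bookings/2021/01/"

    val bookings = spark.read.option("header", "true").csv(inputPath)

    // Typical cleanup: normalise types, drop malformed rows, derive a partition column
    val curated = bookings
      .withColumn("booking_ts", to_timestamp(col("booking_ts")))
      .filter(col("booking_ts").isNotNull)
      .withColumn("booking_date", to_date(col("booking_ts")))

    curated.write.mode("append")
      .partitionBy("booking_date")
      .parquet("abfss://curated@examplestore.dfs.core.windows.net/bookings/")

    spark.stop()
  }
}
```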
Big Data Developer, Cisco
Sep 2017 - Jul 2019, San Jose, CA, US
- Created sophisticated visualizations, calculated columns, and custom expressions; developed map charts, cross tables, bar charts, treemaps, and complex reports involving property controls and custom expressions.
- Investigated market sizing, competitive analysis, and positioning for product feasibility; worked on business forecasting, segmentation analysis, and data mining.
- Extensively used Agile methodology, the organization standard, to implement data models.
- Used a microservice architecture with Spring Boot based services interacting through a combination of REST and Apache Kafka message brokers (see the sketch below).
- Created several types of data visualizations using Python and Tableau; extracted large datasets from AWS using SQL queries to create reports.
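As a sketch of how services in that architecture might publish events, the snippet below uses the standard Kafka Java client from Scala. The broker address, topic, and payload are hypothetical; a Spring Boot service would more likely go through Spring Kafka's KafkaTemplate, so this only illustrates the underlying producer call.

```scala
import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerConfig, ProducerRecord}
import org.apache.kafka.common.serialization.StringSerializer

object EventPublisherSketch {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, classOf[StringSerializer].getName)
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, classOf[StringSerializer].getName)

    val producer = new KafkaProducer[String, String](props)
    try {
      // Fire-and-forget publish; a real service would handle the returned Future
      producer.send(new ProducerRecord("order-events", "order-42", """{"status":"CREATED"}"""))
    } finally {
      producer.close()
    }
  }
}
```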
Hadoop Developer, Meta More Solutions
Jul 2016 - Aug 2017
- Developed Hive scripts to perform transformation logic and load data from the staging zone to the final landing zone.
- Involved in loading transactional data into HDFS using Flume for fraud analytics.
- Developed a Python utility to validate HDFS tables against source tables.
- Designed and developed UDFs to extend the functionality of both Pig and Hive (see the sketch below).
- Imported and exported data between MySQL and HDFS using Sqoop on a regular basis.
- Developed a process for Sqooping data from multiple sources such as SQL Server, Oracle, and Teradata.
- Responsible for creating the source-to-destination field mapping document.
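A custom Hive UDF of the kind mentioned above might look like the following Scala sketch against the classic org.apache.hadoop.hive.ql.exec.UDF API (hive-exec on the classpath). The masking rule and class name are hypothetical, not taken from the resume.

```scala
import org.apache.hadoop.hive.ql.exec.UDF
import org.apache.hadoop.io.Text

// Hypothetical UDF: mask all but the last four characters of an account id.
// Hive resolves the evaluate method by reflection.
class MaskAccountId extends UDF {
  def evaluate(input: Text): Text =
    if (input == null) null
    else {
      val s = input.toString
      val masked = "*" * math.max(0, s.length - 4) + s.takeRight(4)
      new Text(masked)
    }
}

// Registered in Hive with, for example:
//   ADD JAR /tmp/udfs.jar;
//   CREATE TEMPORARY FUNCTION mask_account AS 'MaskAccountId';
```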
Software Engineer, Global Logics
Aug 2014 - Jun 2016, Renton, WA, US
- Worked on the Hortonworks HDP 2.5 distribution.
- Responsible for building scalable distributed data solutions using Hadoop.
- Involved in importing data from MS SQL Server, MySQL, and Teradata into HDFS using Sqoop.
- Played a key role in dynamic partitioning and bucketing of data stored in Hive (see the sketch below).
- Wrote HiveQL queries integrating different tables to create views that produce result sets.
- Collected log data from web servers and integrated it into HDFS using Flume.
- Worked on loading and transforming large sets of structured and unstructured data.
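The dynamic-partitioning pattern mentioned above can be sketched as HiveQL; it is issued here through a Hive-enabled SparkSession only so the example stays in Scala like the others, and the original work would have run equivalent statements directly in Hive. Database, table, and column names are hypothetical.

```scala
import org.apache.spark.sql.SparkSession

object DynamicPartitionSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("dynamic-partition-sketch")
      .enableHiveSupport()
      .getOrCreate()

    // Let Hive derive partition values from the SELECT instead of literals
    spark.sql("SET hive.exec.dynamic.partition = true")
    spark.sql("SET hive.exec.dynamic.partition.mode = nonstrict")

    // Bucketing would be declared in Hive itself with
    // CLUSTERED BY (user_id) INTO 8 BUCKETS; it is omitted here because
    // Spark's writer handles Hive bucketing differently.
    spark.sql("""
      CREATE TABLE IF NOT EXISTS curated.web_logs (
        user_id STRING, url STRING, response_code INT
      )
      PARTITIONED BY (log_date STRING)
      STORED AS ORC
    """)

    // log_date comes last in the SELECT, so each row lands in its own partition
    spark.sql("""
      INSERT OVERWRITE TABLE curated.web_logs PARTITION (log_date)
      SELECT user_id, url, response_code, log_date
      FROM staging.web_logs_raw
    """)

    spark.stop()
  }
}
```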
Frequently Asked Questions about Raghu K
What company does Raghu K work for?
Raghu K works for Verizon.
What is Raghu K's role at the current company?
Raghu K's current role is Data Engineer at Verizon.