Prince R

Sr. Data Engineer at Homesite Insurance, actively looking for new opportunities
Prince R's Location
Kent, Washington, United States
About Prince R

Prince R is a Sr. Data Engineer at Homesite Insurance and is actively looking for new opportunities.

Prince R's Current Company Details
Homesite Insurance

Sr. Data Engineer at Homesite Insurance, actively looking for new opportunities
Prince R Work Experience Details
  • Homesite Insurance
    Senior Data Engineer
    Homesite Insurance Nov 2020 - Present
    Boston, MA, US
    - Developed data pipelines using Flume, Pig, and Python MapReduce to ingest customer behavioral data and financial histories into HDFS for analysis.
    - Involved in HBase setup and storing data into HBase for further analysis.
    - Used Pig as an ETL tool for transformations, event joins, and pre-aggregations before storing the data in HDFS.
    - Wrote Spark RDD jobs for processing unstructured data.
    - Loaded log data into HDFS using Flume; worked extensively on creating jobs to power data for search and aggregation.
    - Worked on Hive to expose data for further analysis and to generate and transform files from different analytical formats to text files.
    - Responsible for creating Hive tables, loading data, and writing Hive queries.
    - Used forward engineering to create a physical data model with DDL that best suits the requirements.
    - Maintained and monitored clusters; loaded data into the cluster from dynamically generated files using Flume and from relational database management systems using Sqoop.
    - Involved in all phases of the Big Data implementation, including requirement analysis, design, development, building, testing, and deployment of a Hadoop cluster in fully distributed mode.
    - Mapped DB2 data types to Hive data types and performed validations.
    - Performed loading and retrieval of unstructured data.
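As an illustration of the RDD-style log processing described above, the sketch below uses plain Python built-ins as a stand-in for a live Spark cluster; on a cluster the same logic would run via `rdd.map`/`rdd.filter`. The field layout and names are hypothetical.

```python
# Hypothetical pipe-delimited behavioral log lines (stand-in for HDFS input).
RAW_LOGS = [
    "2020-11-03|cust-101|page_view|12.50",
    "2020-11-03|cust-102|purchase|89.99",
    "bad line with no delimiter",
]

def parse_line(line):
    """Turn one pipe-delimited log line into a dict, or None if malformed."""
    parts = line.split("|")
    if len(parts) != 4:
        return None
    date, customer, event, amount = parts
    return {"date": date, "customer": customer,
            "event": event, "amount": float(amount)}

# Same shape as rdd.map(parse_line).filter(lambda r: r is not None).collect()
records = [r for r in map(parse_line, RAW_LOGS) if r is not None]
```

Malformed lines are dropped rather than raising, which mirrors the usual approach when ingesting messy unstructured data at scale.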
  • Merck
    Senior Data Engineer
    Merck May 2018 - Oct 2020
    Rahway, New Jersey, US
    - Created pipelines in ADF (Azure Data Factory) to pull data from various sources (on-prem, SAP HANA, SQL Server, MongoDB) into Azure Cloud (ADLS).
    - Created notebooks in Azure Databricks using PySpark.
    - Used Azure Data Factory for orchestrating data pipelines across Blob, ADLS Gen2, HDInsight, and SQL Data Warehouse.
    - Ingested data into one or more Azure services (Azure Data Lake, Azure Storage, Azure SQL, Azure DW) and processed the data in Azure Databricks.
    - Built pipelines to move hashed and un-hashed data from Azure Blob to Data Lake.
    - Performed data ingestion and conversion from ORC to Parquet using Azure Data Factory.
    - Created an Azure Data Factory pipeline to insert flat-file and ORC file data into Azure SQL.
    - Set up a Spark cluster with AKS (Azure Kubernetes Service) on a Linux virtual machine.
    - Developed and maintained machine learning model pipelines.
    - Created pipelines to move data from on-premises servers to Azure Data Lake.
    - Migrated machine learning models from Dev to on-prem in AKS using build and release pipelines.
    - Debugged errors and connectivity issues in existing pipelines.
    - Analyzed, strategized, and implemented Azure migration of applications and databases to the cloud.
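Moving "hashed and un-hashed" copies of data, as described above, typically means salting and hashing identifying fields before they land in the lake. A minimal sketch using only the standard library (the salt, field names, and values are illustrative, not from any real pipeline):

```python
import hashlib

def hash_field(value, salt="demo-salt"):
    """Return a salted SHA-256 digest of a string field (hex, 64 chars)."""
    return hashlib.sha256((salt + value).encode("utf-8")).hexdigest()

row = {"customer_id": "cust-102", "amount": 89.99}

# The hashed copy keeps non-identifying columns intact.
hashed_row = {**row, "customer_id": hash_field(row["customer_id"])}
```

In practice the salt would come from a secret store rather than source code, and the same transformation would run inside a Databricks notebook over whole columns.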
  • Drug Plastics
    Data Engineer
    Drug Plastics Jan 2016 - Apr 2018
    Boyertown, PA, US
    - Developed Spark scripts using Scala per requirements.
    - Loaded data into Spark RDDs and performed in-memory data computation to generate the output response.
    - Performed different types of transformations and actions on RDDs to meet the business requirements.
    - Developed a data pipeline using Kafka, Spark, and Hive to ingest, transform, and analyze data.
    - Worked on analyzing the Hadoop cluster and different Big Data analytic tools, including HBase and Sqoop.
    - Involved in loading data from the UNIX file system to HDFS.
    - Responsible for managing data coming from various sources.
    - Exported the analyzed data to relational databases using Sqoop for visualization and to generate reports for the BI team.
    - Used Amazon Elastic Compute Cloud (EC2) infrastructure for computational tasks and Simple Storage Service (S3) as a storage mechanism.
    - Used AWS utilities such as EMR, S3, and CloudWatch to run and monitor Hadoop and Spark jobs on AWS.
    - Created S3 buckets, managed S3 bucket policies, and used S3 and Glacier for storage and backup on AWS.
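The distinction between RDD transformations (lazy, per-element) and actions (which collapse the dataset to a result) mentioned above can be sketched without a cluster; plain Python lists stand in for RDDs here, and the sample data is invented:

```python
from functools import reduce

# Hypothetical (category, amount) pairs, standing in for an RDD.
orders = [("electronics", 120.0), ("grocery", 35.5), ("electronics", 80.0)]

# Transformation-style steps (lazy in Spark, eager lists here):
#   rdd.filter(...).map(...)
electronics = [amt for cat, amt in orders if cat == "electronics"]

# Action-style step: collapse to a single value, like rdd.reduce(add).
total = reduce(lambda a, b: a + b, electronics)
```

In Spark, nothing executes until the action runs; the list version above evaluates eagerly but keeps the same filter → map → reduce shape.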
  • UBS
    Hadoop Developer
    UBS Feb 2014 - Dec 2015
    Zurich, CH
    - Loaded and transformed large sets of structured and semi-structured data from various downstream systems.
    - Developed ETL pipelines using Spark and Hive for performing various business-specific transformations.
    - Built applications and automated Spark pipelines for bulk loads as well as incremental loads of various datasets.
    - Experienced in Agile methodologies, Scrum stories, and sprints in a Python-based environment, along with data analytics, data wrangling, and Excel data extracts.
    - Worked closely with the team's data analysts and consumers to shape the datasets per requirements.
    - Automated the data pipeline to ETL all datasets, including full and incremental loads.
    - Built input adapters for data dumps from FTP servers using Apache Spark.
    - Wrote Spark applications to perform operations such as data inspection, cleaning, loading, and transformation of large sets of structured and semi-structured data.
    - Developed Spark with Python and Spark SQL for testing and processing of data.
    - Wrote and executed various MySQL database queries from Python using the Python MySQL Connector and MySQLdb packages; wrote scripts in Python for extracting data from HTML files.
    - Reported Spark job stats; made monitoring and data quality checks available for each dataset.
    - Used SQL programming skills to work with relational SQL databases.
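Running parameterized queries from Python, as described above, follows the DB-API 2.0 pattern regardless of driver. The sketch below uses the standard library's `sqlite3` as a stand-in so it runs without a MySQL server; table and column names are invented.

```python
import sqlite3  # stands in for mysql.connector; both follow DB-API 2.0

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE trades (sym TEXT, qty INTEGER)")
cur.executemany("INSERT INTO trades VALUES (?, ?)",
                [("UBSG", 100), ("NESN", 250)])

# Parameterized query: values are bound, never string-formatted into SQL.
cur.execute("SELECT sym, qty FROM trades WHERE qty > ?", (150,))
rows = cur.fetchall()
conn.close()
```

With MySQL Connector/Python the only notable surface difference is the placeholder style (`%s` instead of `?`); the cursor/execute/fetch flow is identical.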
  • Homesite Insurance
    Datastage Developer
    Homesite Insurance Jun 2013 - Jan 2014
    Boston, MA, US
    - Optimized the performance of Informatica mappings by analyzing job logs and identifying bottlenecks (source/target/stages).
    - Created UNIX shell scripts to invoke the Informatica workflows and Oracle stored procedures.
    - Implemented Slowly Changing Dimension Type 1 and Type 2 for inserting and updating target tables to maintain history.
    - Responded promptly to business user queries and changes.
    - Designed and developed jobs in DataStage to load data from flat files, Oracle, and MS SQL Server sources.
    - Developed custom ETL objects to load data in a generic fashion.
    - Developed and redefined several complex jobs and job sequencers to process various feeds using different DataStage stages and properties.
    - Troubleshot issues and created automatic script/SQL generators.
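The Slowly Changing Dimension Type 2 logic mentioned above (close the current row, append a new one, keep history) can be sketched in a few lines; the key, attribute, and date values here are hypothetical, and a real implementation would live in ETL jobs over dimension tables rather than Python dicts.

```python
from datetime import date

def scd2_upsert(history, key, new_attrs, today):
    """SCD Type 2: expire the current row for `key`, append a new current row."""
    for row in history:
        if row["key"] == key and row["current"]:
            row["current"] = False       # expire old version
            row["end_date"] = today
    history.append({"key": key, **new_attrs,
                    "start_date": today, "end_date": None, "current": True})

# Illustrative dimension history for one customer.
history = [{"key": "C1", "city": "Boston",
            "start_date": date(2013, 6, 1), "end_date": None, "current": True}]
scd2_upsert(history, "C1", {"city": "Kent"}, date(2014, 1, 15))
```

Type 1, by contrast, would simply overwrite `city` in place and keep no prior version.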

