Nilay Kumar

Big Data Developer | Cloud Platform Engineer @ Hitachi Solutions America
Nilay Kumar's Location
Irvine, California, United States
About Nilay Kumar

Big Data Engineer with 6+ years of professional IT experience in data modeling, ingestion, processing, ETL, storage, data-driven quantitative analysis, data integration, and resource utilization in the Big Data ecosystem. Experience in project development, implementation, deployment, and maintenance using Hadoop- and Spark-related technologies on Cloudera, Hortonworks, Amazon EMR, and Azure HDInsight. Experienced in data architecture, including data ingestion pipeline design, Hadoop information architecture, data modeling, data mining, machine learning, and advanced data processing. Strong experience in business and data analysis, data profiling, data migration, data integration, data governance, metadata management, master data management, and configuration management. Working knowledge of Azure cloud components (HDInsight, Databricks, Data Lake, Blob Storage, Data Factory, Storage Explorer, SQL DB, SQL DWH, Cosmos DB). Experienced in developing data pipelines and datasets in Azure Data Factory for ETL across Azure SQL, Blob Storage, and Azure SQL Data Warehouse.
Experience working with AWS platform services (EMR, EC2, RDS, EBS, S3, Lambda, Glue, Elasticsearch, Kinesis, SQS, DynamoDB, Redshift, API Gateway, Athena, ECS).

TECHNICAL ACUMEN:
  • Big Data Ecosystem: HDFS, MapReduce, YARN, Spark, Kafka, Airflow, Hive, Impala, StreamSets, Sqoop, HBase, Flume, Pig, Ambari, Oozie, ZooKeeper, NiFi, Sentry, Ranger.
  • Hadoop Distributions: Apache Hadoop, Cloudera CDP, Hortonworks HDP, AWS (EMR, EC2, EBS, RDS, S3, Athena, Glue, Elasticsearch, Lambda, DynamoDB, Redshift, ECS, QuickSight), Azure (HDInsight, Databricks, Data Lake, Blob Storage, Data Factory (ADF), SQL DB, SQL DWH, Cosmos DB, Azure AD).
  • Programming Languages: Python, Scala, Java, Shell Scripting, Pig Latin, HiveQL.
  • NoSQL Databases: MongoDB 3.x, Hadoop HBase, Apache Cassandra, Redis.
  • Databases: Snowflake, AWS RDS, Teradata, Oracle, MySQL, Microsoft SQL Server, PostgreSQL.
  • Version Control: Git, SVN, Bitbucket.
  • ETL/BI: Snowflake, Informatica, SSIS, SSRS, SSAS, Tableau, Matplotlib, Power BI.
  • Operating Systems: Linux (Ubuntu, CentOS, RedHat), Windows.
  • Others: ARM Templates, Terraform, Docker, Kubernetes, Jenkins, Ansible, Splunk, Jira.
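The ETL work described above follows the usual extract-transform-load pattern: pull rows from a SQL source, apply cleansing rules, and stage the result in a file-based sink. As a minimal local sketch of that pattern (an illustration only, not code from any project listed here), the snippet below uses Python's stdlib `sqlite3` as a stand-in for an Azure SQL source and an in-memory CSV buffer as a stand-in for Blob Storage; the `sales` table and the cleansing rule are invented for the example:

```python
# Minimal local sketch of an extract-transform-load pipeline.
# sqlite3 stands in for the SQL source; a CSV buffer stands in for Blob Storage.
import csv
import io
import sqlite3

def run_pipeline(conn: sqlite3.Connection) -> str:
    # Extract: pull raw rows from the source table.
    rows = conn.execute("SELECT id, amount FROM sales").fetchall()
    # Transform: drop records that fail a simple cleansing rule
    # (NULL or negative amounts).
    clean = [(rid, amt) for rid, amt in rows if amt is not None and amt >= 0]
    # Load: stage the result as CSV, as a file-based sink would receive it.
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["id", "amount"])
    writer.writerows(clean)
    return buf.getvalue()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE sales (id INTEGER, amount REAL)")
    conn.executemany("INSERT INTO sales VALUES (?, ?)",
                     [(1, 10.0), (2, -3.0), (3, None), (4, 7.5)])
    print(run_pipeline(conn))
```

In a managed service such as Azure Data Factory, the extract and load steps would instead be expressed as linked services and datasets, with the transform logic in a pipeline activity.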

Nilay Kumar's Current Company Details
Hitachi Solutions America

Big Data Developer | Cloud Platform Engineer
Nilay Kumar Work Experience Details
  • Hitachi Solutions America
    Azure Data Platform Engineer
    Hitachi Solutions America Feb 2020 - Present
    Irvine, CA, US
    • Worked on the Azure cloud platform (HDInsight, Databricks, Data Lake, Blob Storage, Data Factory, SQL DB, SQL DWH, and Data Storage Explorer).
    • Worked with Azure services to manage applications smoothly in the cloud.
    • Involved in building HDInsight clusters and Storage Accounts in an end-to-end environment.
    • Created pipelines in Azure Data Factory using Linked Services, Datasets, and Pipelines to extract, transform, and load data between sources such as Azure SQL, Blob Storage, and Azure SQL Data Warehouse, including write-back.
    • Developed JSON scripts for deploying pipelines in Azure Data Factory (ADF) that use the Cosmos activity to load data from on-premises systems to Azure cloud storage and databases.
    • Worked on ingesting source files (CSV and other data formats) from an on-premises SQL Server to Azure Data Lake Store (ADLS).
    • Processed source files in Azure Data Lake Analytics (ADLA) using U-SQL and generated different types of output files (Parquet, CSV, etc.).
  • Cotiviti
    Aws Data Engineer
    Cotiviti May 2018 - Feb 2020
    South Jordan, UT, US
    • Experienced in using distributed computing architectures such as AWS (EC2, Redshift, EMR, Elasticsearch), Hadoop, Spark, and Python, with effective use of MapReduce, SQL, and Cassandra to solve big data problems.
    • Worked with Spark to improve performance and optimize existing algorithms in Hadoop using Spark Context, Spark SQL, DataFrames, RDDs, and Spark on YARN.
    • Developed and implemented various Spark jobs on AWS EMR to perform big data operations in AWS.
    • Installed the application on AWS EC2 instances and configured storage on S3 buckets.
    • Utilized Spark's in-memory capabilities to handle large datasets stored in the S3 data lake.
    • Worked on data ingestion, applying cleansing and transformations with AWS Lambda, AWS Glue, and Step Functions, and loaded the results into S3 buckets.
    • Created workflows using Airflow to automate extracting weblogs into the S3 data lake.
    • Developed and executed a migration strategy to move the data warehouse from an Oracle platform to AWS Redshift.
  • Capital One
    Aws Data Engineer
    Capital One Sep 2016 - Apr 2018
    McLean, VA, US
    • Built and supported several multi-server AWS environments using Amazon EC2, EMR, EBS, and Redshift, and deployed the Big Data Hadoop application on the AWS cloud.
    • Involved in migrating large amounts of data from an on-premises Cloudera cluster to EC2 instances deployed in an Elastic MapReduce (EMR) cluster.
    • Implemented solutions for ingesting data from various sources and processing data at rest using Big Data technologies: Hadoop, MapReduce, Hive, HBase, and cloud architecture.
    • Designed and developed end-to-end ETL processing from Oracle to AWS using Amazon S3, EMR, and Spark.
  • Infogain
    Hadoop Big Data | Etl Developer
    Infogain Jan 2015 - Aug 2016
    Los Gatos, CA, US
    • Worked with the Hortonworks distribution; installed, configured, and maintained a Hadoop cluster based on business requirements and optimized query performance using Hive.
    • Worked on big data components such as HDFS, MapReduce, YARN, Hive, HBase, Sqoop, Pig, and NiFi.
    • Used Sqoop to import and export data between HDFS and RDBMS, and used visualization tools to generate reports.
    • Created Hive external tables, loaded data into them, and queried the data using HQL.
    • Imported metadata into Hive using Sqoop and migrated existing tables and applications to Hive.
    • Imported data from data sources, performed transformations using Hive, and loaded the data into HDFS.
    • Created different UDFs and UDAFs to analyze partitioned, bucketed data and compute various metrics for dashboard reporting, storing them in different summary tables.
    • Successfully loaded files to Hive and HDFS from MongoDB, Cassandra, HBase, and MySQL.
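As background on the bucketed Hive tables mentioned in the Infogain role: Hive's default bucketing for string keys is commonly described as hashing the key with the Java `String.hashCode` algorithm, masking the sign bit, and taking the result modulo the bucket count. The sketch below reproduces that routing in plain Python as an illustration (it is not project code, and the sample keys are invented):

```python
# Sketch of Hive-style bucket routing for string keys (illustration only).

def java_string_hash(s: str) -> int:
    """Java String.hashCode reproduced with 32-bit signed overflow."""
    h = 0
    for ch in s:
        h = (31 * h + ord(ch)) & 0xFFFFFFFF
    # Reinterpret as a signed 32-bit integer, as Java would.
    return h - 0x100000000 if h >= 0x80000000 else h

def bucket_for(key: str, num_buckets: int) -> int:
    """Bucket index for a string key: (hash & Integer.MAX_VALUE) % buckets."""
    return (java_string_hash(key) & 0x7FFFFFFF) % num_buckets

if __name__ == "__main__":
    for key in ("abc", "order-42", "customer-7"):
        print(key, "->", bucket_for(key, 4))
```

Rows with the same key always land in the same bucket file, which is what makes bucketed joins and sampling efficient in Hive.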

Nilay Kumar Education Details

  • Gujarat Technological University (GTU)

Frequently Asked Questions about Nilay Kumar

What company does Nilay Kumar work for?

Nilay Kumar works for Hitachi Solutions America.

What is Nilay Kumar's role at the current company?

Nilay Kumar's current role is Big Data Developer | Cloud Platform Engineer.

What is Nilay Kumar's email address?

Nilay Kumar's email address is kn****@****ons.com

What schools did Nilay Kumar attend?

Nilay Kumar attended Gujarat Technological University (GTU).
