Jason E
• Overall 8+ years of IT experience across all phases of the software development life cycle.
• 5+ years of experience in Hadoop and the Big Data ecosystem.
• Strong knowledge of Hadoop architecture and components such as HDFS, YARN, JobTracker, TaskTracker, NameNode, DataNode, and MapReduce.
• Good experience with Hadoop ecosystem tools including MapReduce, HDFS, NiFi, Oozie, Hive, Sqoop, Pig, ZooKeeper, Flume, Spark Streaming, Spark SQL, HBase, and Cassandra.
• Expertise in Hadoop 2.0 and YARN architecture.
• Experience working with Hadoop clusters on Cloudera CDH and Hortonworks HDP.
• Expertise in writing Hadoop jobs for analyzing data using MapReduce, Hive, and Pig.
• Experience importing and exporting data with Sqoop between HDFS and relational database systems (RDBMS).
• Implemented Kafka, Spark Streaming, and HBase to establish real-time pipelines.
• Developed and deployed Apache NiFi across various environments; wrote QA scripts in Python for tracking files.
• Experienced in building data pipelines with Kafka and Akka that handle terabytes of data.
• Expertise in writing custom UDFs and UDAFs to extend Hive and Pig core functionality.
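The MapReduce jobs mentioned above all follow the same map/shuffle/reduce pattern; the stdlib-only Python sketch below illustrates that flow with a word count, no cluster required. The function names and sample lines are illustrative, not taken from any actual job.

```python
from collections import defaultdict

def map_phase(records):
    # Map: emit a (word, 1) pair for every word in every input line
    for line in records:
        for word in line.split():
            yield (word.lower(), 1)

def shuffle(pairs):
    # Shuffle: group all values by key, as the framework does between phases
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: aggregate each key's values (here, a simple sum)
    return {key: sum(values) for key, values in groups.items()}

lines = ["big data on Hadoop", "big data with Spark"]
counts = reduce_phase(shuffle(map_phase(lines)))
```

A real Hadoop job splits these phases across nodes, but the per-key grouping contract is the same.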
Northwestern Mutual (northwesternmutual.com, 31,224 employees)
Big Data/Hadoop Developer, Northwestern Mutual
Mar 2018 - Present, Milwaukee, WI
• Worked on a Hadoop cluster scaled from 4 nodes in development to 8 nodes in pre-production and up to 24 nodes in production.
• Involved in the complete implementation lifecycle; specialized in writing custom MapReduce, Pig, and Hive programs.
• Exported analyzed data to relational databases using Sqoop for visualization and to generate reports for the BI team.
• Worked in AWS Cloud and on-premise environments with infrastructure provisioning and configuration.
• Extensively used Hive/HQL queries to search for particular strings in Hive tables in HDFS.
• Used Apache NiFi to create a pipeline that consumes data from a source, processes it, and stores it into HBase tables on AWS via Kafka.
• Retrieved data files through transmission protocols such as Sqoop, NDM, SFTP, and DMS, then validated them with Spark control jobs written in Scala.
• Applied Linux and Hadoop system administration skills, networking, and shell scripting, with familiarity in open-source configuration management and deployment tools such as Chef.
• Worked with Puppet for application deployment.
• Used Storm for reliable real-time data processing on enterprise Hadoop.
• Implemented Kafka for streaming data; filtered and processed the data.
• Imported/exported structured and unstructured data from sources such as RDBMS, event logs, and message queues into HDFS using tools such as Sqoop and Flume.
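The Sqoop export step above is normally driven from a shell command; as a minimal sketch, the Python snippet below just assembles such a command. The JDBC URL, table, and HDFS path are hypothetical placeholders, while the flags (`--connect`, `--table`, `--export-dir`, `--num-mappers`) are standard Sqoop export arguments.

```python
def build_sqoop_export(jdbc_url, table, export_dir, num_mappers=4):
    # Assemble a `sqoop export` command that pushes analyzed HDFS data
    # back to an RDBMS table (connection details here are hypothetical).
    return [
        "sqoop", "export",
        "--connect", jdbc_url,
        "--table", table,
        "--export-dir", export_dir,
        "--num-mappers", str(num_mappers),
    ]

cmd = build_sqoop_export(
    "jdbc:mysql://reports-db:3306/bi",    # hypothetical endpoint
    "daily_metrics",                      # hypothetical target table
    "/warehouse/analyzed/daily_metrics",  # hypothetical HDFS export dir
)
```

Building the argument list in code (rather than string concatenation) keeps paths with spaces or special characters safe when the command is eventually run.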
Hadoop/Scala Developer, Mastercard
Sep 2016 - Feb 2018, St. Louis, MO
• Analyzed the Hadoop stack and big data analytic tools including Pig, Hive, the HBase database, and Sqoop.
• In-depth understanding of classic MapReduce and YARN architectures.
• Developed MapReduce programs for refined queries on big data.
• Hands-on experience training, evaluating, and predicting on data with machine learning using Spark MLlib and TensorFlow; regular contributor to machine learning projects on GitHub.
• Created Azure HDInsight and deployed a Hadoop cluster in the cloud platform.
• Designed and modified database tables and used HBase queries to insert and fetch data.
• Collected log data and JSON data into HDFS using Flume and processed the data with Hive/Pig.
• Used Hive queries to import data into the Microsoft Azure cloud and analyzed the data with Hive scripts.
• Used Ambari on the Azure HDInsight cluster to record and manage NameNode and DataNode logs.
• Created Hive tables and worked on them for data analysis to meet requirements.
• Migrated SQL Server/Teradata data to SQL Azure; developed the necessary stored procedures and created complex views using joins for robust, fast data retrieval in SQL Server using T-SQL.
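The train/evaluate/predict workflow mentioned for Spark MLlib follows a common pattern regardless of library. The stdlib-only sketch below shows that pattern with a trivial majority-class "model" standing in for a real MLlib estimator; the labels are made-up sample data, not anything from the actual projects.

```python
from collections import Counter

def train_majority(labels):
    # Trivial stand-in for a real estimator: learn the most common label
    return Counter(labels).most_common(1)[0][0]

def accuracy(predictions, labels):
    # Fraction of predictions that match the held-out labels
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

# Hypothetical labeled data, already split into train/test sets
train_labels = [1, 1, 0, 1]
test_labels = [1, 0, 1]

model = train_majority(train_labels)       # "training"
preds = [model for _ in test_labels]       # "prediction"
acc = accuracy(preds, test_labels)         # "evaluation"
```

In MLlib the same three steps become `fit` on a training DataFrame, `transform` on a test DataFrame, and an evaluator over the predictions.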
Big Data Engineer, Center For Family Health And Development Rain Light
Dec 2015 - Aug 2016, Bronx, NY
• Worked with Hadoop ecosystem components including HBase, Sqoop, ZooKeeper, Oozie, Hive, and Pig on the Cloudera Hadoop distribution.
• Experience in data cleansing and data mining.
• Responsible for processing ingested raw data using Kafka and Hive.
• Loaded all data from relational DBs into Hive using Sqoop; received four flat files from different vendors, all in different formats, e.g. text, EDI, and XML.
• Ingested data into Hadoop/Hive/HDFS from different data sources.
• Loaded data into HBase tables using Java MapReduce.
• Created Hive external tables to stage data, then moved the data from staging to main tables.
• Imported data with Sqoop into Hive and HBase from an existing SQL Server.
• Worked on batch and stream processing of data using Spark and Spark Streaming.
• Wrote Hive join queries to fetch information from multiple tables, and multiple MapReduce jobs to collect output from Hive.
• Analyzed the Hadoop cluster and big data analytic tools including Pig, the HBase database, and Sqoop.
• Used Hive to analyze partitioned and bucketed data and compute metrics for dashboard reporting.
• Migrated data from existing RDBMS (Oracle and SQL Server) to Hadoop using Sqoop for processing.
• Wrote Perl scripts covering data-feed handling, implementing MarkLogic, and communicating with web services through the SOAP::Lite module and WSDL.
• Developed code for importing and exporting data into HDFS and Hive using Sqoop.
• Installed and configured Hadoop; responsible for maintaining the cluster and managing and reviewing Hadoop log files.
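The vendor files above arrived in several formats (text, EDI, XML); before staging in Hive, such feeds are typically normalized into one schema. The sketch below shows that idea for two of the formats, mapping a delimited text line and an XML record into the same dict. The field names and delimiter are illustrative assumptions, not the actual vendor layouts.

```python
import xml.etree.ElementTree as ET

def from_text(line):
    # Hypothetical pipe-delimited vendor layout: id|name|amount
    rec_id, name, amount = line.strip().split("|")
    return {"id": rec_id, "name": name, "amount": float(amount)}

def from_xml(doc):
    # The same logical record expressed as XML by another vendor
    root = ET.fromstring(doc)
    return {
        "id": root.findtext("id"),
        "name": root.findtext("name"),
        "amount": float(root.findtext("amount")),
    }

a = from_text("42|ACME|19.99")
b = from_xml("<rec><id>42</id><name>ACME</name><amount>19.99</amount></rec>")
```

Once every feed lands in the common shape, a single Hive staging table and one load path can serve all vendors.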
Hadoop Developer, Wells Fargo
Sep 2013 - Nov 2015, San Francisco, CA
• Installed and configured Hadoop ecosystem components and Cloudera Manager using the CDH distribution.
• Developed multiple MapReduce jobs in Java for complex business requirements, including data cleansing and preprocessing.
• Developed Oozie bundles to schedule Pig, Sqoop, and Hive jobs to create data pipelines.
• Imported data from various sources, performed transformations using Hive and Pig, and loaded the data into HDFS.
• Developed Spark Streaming applications to consume data from Kafka topics.
• Developed Sqoop scripts to import/export data between Oracle and HDFS and into Hive tables.
• Developed a Kafka producer that brings data streams from a JMS client and passes them to the Kafka consumer.
• Analyzed Hadoop clusters using big data analytic tools including MapReduce, Pig, and Hive.
• Developed and wrote Pig scripts to store unstructured data into HDFS.
• Created tables in Hive and wrote scripts and queries to load data into Hive tables from HDFS.
• Scripted complex HiveQL queries on Hive tables for analytical functions.
• Optimized Hive tables using techniques like partitioning and bucketing to improve the performance of HiveQL queries.
• Programmed the Kafka producer and consumer with the connection parameters and methods from Oracle Sonic JMS through to the data lake on HDFS.
• Compared Hive/HBase vs. RDBMS; imported data to Hive and created internal and external tables, partitions, indexes, views, queries, and reports for BI data analysis.
• Developed custom Java record readers, partitioners, and serialization techniques.
• Used different data formats (text and Avro) while loading data into HDFS.
• Created tables in HBase and loaded data into them.
• Developed scripts to load data from HBase into the Hive metastore and performed MapReduce jobs.
• Created custom UDFs in Pig and Hive.
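The JMS-to-Kafka bridge described above is essentially a hand-off between two messaging systems: pull from the JMS client, publish to a topic, consume downstream. The stdlib sketch below mimics that flow with an in-memory queue standing in for the Kafka topic; the message payloads are made up, and a real implementation would use a Kafka client library rather than `queue.Queue`.

```python
import queue

def jms_messages():
    # Stand-in for messages pulled from a JMS client
    yield from ["txn:1001", "txn:1002", "txn:1003"]

def produce(topic, messages):
    # Producer side: publish each JMS message onto the "topic"
    for msg in messages:
        topic.put(msg)

def consume(topic):
    # Consumer side: drain the "topic" and collect the records
    records = []
    while not topic.empty():
        records.append(topic.get())
    return records

topic = queue.Queue()           # stands in for a Kafka topic
produce(topic, jms_messages())
records = consume(topic)
```

The value of the bridge is the decoupling: the producer never needs to know who consumes, which is what lets the same stream feed both Spark Streaming and HDFS sinks.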
Java Developer, Apollo Hospitals
Oct 2011 - Aug 2013, Chennai, India
• Coded JSP pages for presenting data in the View layer of the MVC architecture.
• Used J2EE design patterns such as Factory Method, MVC, and Singleton to make modules and code more organized, flexible, and readable for future upgrades.
• Worked with JavaScript to perform client-side form validation.
• Used Struts tag libraries as well as the Struts Tiles framework.
• Used JDBC with the Oracle thin (Type-4) driver to access the database for application optimization and efficiency.
• Actively involved in tuning SQL queries for better performance.
• Worked with XML to store and read exception messages through the DOM.
• Wrote generic functions to call Oracle stored procedures, triggers, and functions.
• Developed JSPs, JSF, and servlets to dynamically generate HTML and display data on the client side.
• Used the Hibernate framework for persistence to an Oracle database.
• Wrote and debugged Ant scripts for building the entire web application.
• Implemented Java Message Service (JMS) messaging using the JMS API.
• Coded servlets, a SOAP client, and Apache CXF REST APIs for delivering data from the application to external and internal consumers.
• Created a SOAP web service using JAX-WS to enable clients to consume it.
• Experienced in designing and developing multi-tier scalable applications using Java and J2EE design patterns.
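The "exception messages through DOM" bullet above maps onto standard DOM parsing in any language; as a minimal sketch, the Python snippet below reads a hypothetical exception-message XML the same way Java DOM code would (load the tree, walk the elements, build a code-to-message lookup). The XML content and error codes are invented for illustration.

```python
from xml.dom.minidom import parseString

# Hypothetical exception-message catalog stored as XML
XML = """<exceptions>
  <exception code="E101">Invalid patient ID</exception>
  <exception code="E204">Database connection timed out</exception>
</exceptions>"""

def load_messages(xml_text):
    # Build a code -> message lookup from the DOM tree
    dom = parseString(xml_text)
    messages = {}
    for node in dom.getElementsByTagName("exception"):
        code = node.getAttribute("code")
        text = node.firstChild.nodeValue
        messages[code] = text
    return messages

messages = load_messages(XML)
```

Keeping messages in XML rather than hard-coded strings is what allows them to be changed (or localized) without recompiling the application.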
Education
• Bachelor's in Information Technology
Frequently Asked Questions about Jason E
What company does Jason E work for?
Jason E works for Northwestern Mutual.
What is Jason E's role at the current company?
Jason E's current role is Big Data/Hadoop Developer at Northwestern Mutual.
What schools did Jason E attend?
Jason E holds a Bachelor's in Information Technology.
Who are Jason E's colleagues?
Jason E's colleagues are Evan Drake, Karen Tran, Jesse Gable, Martin Choromanski, Jeannette Kelley, Stephanie Carter, Xinyuan Wu.