8+ years of experience in IT, spanning software design, implementation, development, and support of business applications for the health, insurance, and telecom industries. 4+ years of experience with Linux and Big Data Hadoop, including Hadoop ecosystem components such as MapReduce, Sqoop, Flume, Kafka, Pig, Hive, Spark, Storm, HBase, Oozie, and ZooKeeper. Experienced with the AWS platform.
Sr. Hadoop/Spark Developer, Charter — May 2017 - Present, Englewood, CO
• Involved in the design and deployment of a Hadoop cluster and of Big Data analytic tools including Pig, Hive, Cassandra, Oozie, Sqoop, Kafka, Spark, and Impala on the Hortonworks distribution.
• Designed the column families in Cassandra.
• Ingested data from RDBMS sources, performed data transformations, and exported the transformed data to Cassandra per business requirements.
• Developed Spark code in Scala and Spark SQL for faster processing and testing.
• Experienced with NoSQL column-oriented databases such as Cassandra and their integration with the Hadoop cluster.
• Created a Kafka application that monitors consumer lag within Apache Kafka clusters; used in production by multiple companies.
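The consumer-lag monitoring mentioned above boils down to a per-partition calculation: lag is the broker's log-end offset minus the consumer group's last committed offset. A minimal sketch in plain Java — the offset maps here are stand-ins for values a real monitor would fetch from a live cluster via the Kafka AdminClient, and the class and method names are illustrative:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the lag calculation a Kafka lag monitor performs.
// In production the offsets would be fetched from the cluster; here they
// are supplied as plain maps keyed by partition number.
public class LagSketch {

    // Lag for one partition: how far the consumer is behind the log end.
    public static long lag(long logEndOffset, long committedOffset) {
        return Math.max(0L, logEndOffset - committedOffset);
    }

    // Total lag for a consumer group across all partitions of a topic.
    public static long totalLag(Map<Integer, Long> logEnd, Map<Integer, Long> committed) {
        long total = 0L;
        for (Map.Entry<Integer, Long> e : logEnd.entrySet()) {
            long c = committed.getOrDefault(e.getKey(), 0L);
            total += lag(e.getValue(), c);
        }
        return total;
    }

    public static void main(String[] args) {
        Map<Integer, Long> logEnd = new HashMap<>();
        Map<Integer, Long> committed = new HashMap<>();
        logEnd.put(0, 120L); committed.put(0, 100L);  // 20 messages behind
        logEnd.put(1, 80L);  committed.put(1, 80L);   // caught up
        System.out.println("total lag = " + totalLag(logEnd, committed)); // total lag = 20
    }
}
```

A real monitor repeats this calculation on a schedule and alerts when total lag grows over time, which signals that consumers are falling behind producers.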
-
Hadoop Developer, Leidos — Feb 2016 - Apr 2017, Baltimore, Maryland
• Developed customized UDFs in Java to extend Hive and Pig Latin functionality.
• Responsible for installing, configuring, supporting, and managing Hadoop clusters.
• Imported and exported data between HDFS and an Oracle 10.2 database using Sqoop.
• Installed and configured Pig and wrote Pig Latin scripts.
• Designed and implemented Hive queries and functions for evaluating, filtering, loading, and storing data.
• Created HBase tables and column families to store user event data.
• Wrote automated HBase test cases for data quality checks using HBase command-line tools.
• Developed a data pipeline using HBase, Spark, and Hive to ingest, transform, and analyze customer behavioral data.
• Used Spark Streaming APIs to perform on-the-fly transformations and actions for building the common learner data model, which receives data from Kafka in near real time and persists it into Cassandra.
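The custom Hive UDF work above follows a standard contract: a class exposing an evaluate() method that Hive invokes once per input row. A hedged sketch of that shape in plain Java — a real Hive UDF would extend org.apache.hadoop.hive.ql.exec.UDF and typically use Hadoop Text types; the masking use case and the class name here are illustrative:

```java
// Illustrative sketch of a Hive-style UDF that masks all but the last
// four characters of an ID value. A real Hive UDF would extend
// org.apache.hadoop.hive.ql.exec.UDF; the per-row logic is the same.
public class MaskIdSketch {

    // Hive calls evaluate() once per row; null in, null out.
    public static String evaluate(String id) {
        if (id == null) return null;
        if (id.length() <= 4) return id;
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < id.length() - 4; i++) sb.append('*');
        return sb.append(id.substring(id.length() - 4)).toString();
    }

    public static void main(String[] args) {
        System.out.println(evaluate("123456789")); // *****6789
    }
}
```

Packaged into a jar, such a function is registered in Hive with ADD JAR and CREATE TEMPORARY FUNCTION, after which it can be called in queries like any built-in function.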
-
Hadoop Developer, Shivam Med — Mar 2014 - Nov 2015, Hyderabad, India
• Wrote ETL jobs in Java and Talend to read from web APIs using REST and HTTP calls and load the results into HDFS.
• Developed ETL test scripts based on technical specifications, data design documents, and source-to-target mappings.
• Hands-on experience loading data from the UNIX file system and Teradata into HDFS.
• Installed and configured Flume, Hive, Pig, Sqoop, and Oozie on the Hadoop cluster.
• Developed Pig scripts for the analysis of semi-structured data.
• Developed Java MapReduce programs that transform log data into a structured form to find user location, age group, and time spent.
• Collected and aggregated large amounts of web log data from sources such as web servers, mobile devices, and network devices using Apache Flume, and stored the data in HDFS for analysis.
• Used Informatica as an ETL tool to create source/target definitions, mappings, and sessions to extract, transform, and load data into staging tables from various sources.
• Developed ETL procedures to ensure conformity, compliance with standards, and lack of redundancy, translating business rules and functional requirements into ETL procedures.
• Supported data analysts in running MapReduce programs.
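The MapReduce log-analysis jobs described above reduce to a map phase that extracts a key (for example, an age group) from each raw log line and a reduce phase that aggregates per key. A minimal in-memory sketch of that shape in plain Java — the comma-separated log format is hypothetical, and a real job would implement Mapper and Reducer against the Hadoop API:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// In-memory sketch of the map/reduce shape used to count log events per
// age group. Assumes a hypothetical comma-separated log line:
//   userId,ageGroup,location,secondsSpent
public class AgeGroupCount {

    // "Map" step: extract the age-group key from one log line.
    public static String mapKey(String logLine) {
        return logLine.split(",")[1];
    }

    // "Reduce" step: count occurrences of each key.
    public static Map<String, Long> countByAgeGroup(List<String> lines) {
        Map<String, Long> counts = new HashMap<>();
        for (String line : lines) {
            counts.merge(mapKey(line), 1L, Long::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        List<String> logs = List.of(
            "u1,18-25,Hyderabad,120",
            "u2,26-35,Baltimore,45",
            "u3,18-25,Denver,300");
        // Counts per age group: 18-25 -> 2, 26-35 -> 1
        System.out.println(countByAgeGroup(logs));
    }
}
```

In a real cluster run, the framework performs the grouping between the two phases by shuffling all values for a key to the same reducer; here a single map plays that role.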
-
Java Developer, Zen3 Info Solutions — Mar 2013 - Feb 2014, Hyderabad, India
• Developed web components using JSP, Servlets, and JDBC, and coded JavaScript for AJAX and client-side data validation.
• Designed and developed mappings using transformations such as Source Qualifier, Expression, Lookup (connected and unconnected), Aggregator, Router, Rank, Filter, and Sequence Generator.
• Imported data from various sources, then transformed and loaded it into data warehouse targets using Informatica PowerCenter.
• Made substantial contributions to simplifying ETL development and maintenance by creating reusable Source, Target, Mapplet, and Transformation objects.
• Experienced in developing extract, transform, and load (ETL) processes and in maintaining and supporting the enterprise data warehouse and its corresponding marts.
• Prepared the DR plan and recovery process for the GDW application.
• Developed JSP pages using custom tags, the Tiles framework, and the Struts framework.
• Used user interface technologies including JSP, HTML, CSS, and JavaScript to develop the application's GUI.
-
Java Developer, Read Mind Info Services — Nov 2011 - Feb 2013, Hyderabad, India
• Interacted with business users and developed custom reports based on defined criteria; gathered requirements and collected information, then analyzed it to prepare a detailed work plan and task breakdown structure.
• Experienced with the full SDLC and participated in all of its phases.
• Implemented applications using Java, J2EE, JSP, Servlets, JDBC, RAD, XML, HTML, XHTML, Hibernate, Struts, Spring, and JavaScript on Windows environments.
• Developed action Servlets and JSPs for the presentation tier in the Struts MVC framework.
• Developed a PL/SQL view function in an Oracle 9i database for the get-available-date module.
• Used Oracle SQL 4.0 as the database and wrote SQL queries in the DAO layer.
• Used RESTful services to interact with the client by providing RESTful URL mappings.
• Developed ETL processes to load data from flat files, SQL Server, and Access into the target Oracle database, applying business logic in transformation mappings to insert and update records on load.
• Created a UI tool using Java, XML, XSLT, DHTML, and JavaScript.
-
Java Developer, CMS Info Systems — Jun 2010 - Oct 2011, Hyderabad, India
• Developed the presentation layer using HTML, JSP, Ajax, CSS, and jQuery.
• Worked with Struts MVC objects such as ActionServlet, controllers, validators, Web Application Context, handler mappings, message resource bundles, and form controllers, and used JNDI look-ups for J2EE components.
• Gained skills in web-based REST APIs, SOAP APIs, and Apache for real-time data streaming.
• Programmed Oracle SQL and T-SQL stored procedures, functions, triggers, and packages as back-end processes to create and update staging, log, and audit tables and to create primary keys.
• Extensively used transformations such as Aggregator, Router, Joiner, Expression, Lookup, Update Strategy, and Sequence Generator.
• Developed mappings, sessions, and workflows using Informatica Designer and Workflow Manager, based on source-to-target mapping documents, to transform and load data into dimension tables.
Frequently Asked Questions about Deepak B
What company does Deepak B work for?
Deepak B works for Charter
What is Deepak B's role at the current company?
Deepak B's current role is Sr. Hadoop and Spark Developer.