Archana B Email and Phone Number
• 10 years of overall experience with a strong emphasis on design, development, implementation, testing, and deployment of software applications.
• Over 6 years of comprehensive IT experience in Big Data and Big Data analytics: Hadoop, HDFS, MapReduce, YARN, the Hadoop ecosystem, and shell scripting.
• Experienced in developing applications using Java/J2EE technologies.
• Good understanding of Hadoop architecture and its components, such as HDFS, YARN, Resource Manager, Node Manager, Job Tracker, Task Tracker, Name Node, Data Node, and MapReduce.
Sr. Hadoop Developer, Humana
Jan 2023 - Present, Texas, United States
• Developed Hive scripts in HiveQL to de-normalize and aggregate the data.
• Created HBase tables and column families to store user event data.
• Developed Python scripts to update content in the database and manipulate files.
• Involved in converting Hive/SQL queries into Spark transformations using Spark RDDs and PySpark.
• Imported and exported data between DB2, AWS, and Hive using Sqoop for analysis, visualization, and report generation.
• Wrote automated HBase test cases for data quality checks using HBase command-line tools.
• Used Hive and Impala to query data in HBase.
• Analyzed the functional specs provided by the client and developed a detailed solution design document with the architect and the team.
• Wrote parsers in Python to extract useful data from unstructured data.
• Developed and implemented core API services using Python and Spark (PySpark).
• Converted CSV files into Parquet format, loaded the Parquet files into data frames, and queried them using Spark SQL.
• Used PySpark to process and analyze the data.
• Migrated data from Amazon AWS to databases such as MySQL and Vertica using Spark DataFrames.
• Built a continuous ETL pipeline using Kafka, Spark Streaming, and HDFS.
• Performed ETL on data in different formats such as JSON, Parquet, and database tables, then ran ad-hoc queries using Spark SQL.
• Loaded datasets from source CSV files into Hive and Cassandra using Spark/PySpark.
• Connected Tableau and SQuirreL SQL clients to Spark SQL (Spark Thrift Server) via a data source and ran queries.
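The Humana bullets mention Python parsers for pulling useful fields out of unstructured data. A minimal sketch of that pattern; the log format and field names here are hypothetical, not taken from the resume:

```python
import re

# Hypothetical weblog line format; the real data sources are not described above.
LOG_PATTERN = re.compile(
    r"(?P<ip>\d+\.\d+\.\d+\.\d+) - \[(?P<ts>[^\]]+)\] \"(?P<method>\w+) (?P<path>\S+)\""
)

def parse_line(line):
    """Extract structured fields from one raw line; return None if it doesn't match."""
    m = LOG_PATTERN.search(line)
    return m.groupdict() if m else None

raw = '10.0.0.1 - [12/Mar/2023:10:00:00] "GET /index.html"'
print(parse_line(raw))
```

Lines that fail to match are returned as `None`, so a downstream job can count or quarantine malformed records instead of crashing on them.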
Sr. Hadoop/Spark Developer, Verizon
Oct 2019 - Dec 2022, United States
• Extracted and updated data in HDFS using the Sqoop import and export command-line utility.
• Developed data pipelines using Flume, Sqoop, and Pig to extract data from weblogs and store it in HDFS.
• Developed transformations using custom MapReduce, Pig, and Hive.
• Performed map-side joins in both Pig and Hive.
• Optimized joins in Hive using techniques such as sort-merge join and map-side join.
• Controlled parallelism at the relation and script level in Pig.
• Implemented partitioning and bucketing techniques in Hive.
• Developed Spark programs using the Scala API to compare the performance of Spark with Hive and SQL.
• Built an ingestion framework using Apache NiFi to ingest files, including financial data, from SFTP into HDFS.
• Worked with a senior engineer on configuring Kafka for streaming data.
• Used Spark Streaming to consume ongoing information from Kafka and store the stream in HDFS.
• Developed and configured Kafka brokers to pipeline server log data into Spark Streaming.
• Developed scripts to create external tables and update partitioning information on a daily basis.
• Converted MapReduce algorithms into Spark transformations and actions by creating RDDs and pair RDDs.
• Built reusable Hive UDF libraries for business requirements, enabling users to apply the UDFs in Hive queries.
• Involved in converting Hive/SQL queries into Spark functionality and analyzing them using the Scala API.
• Built Spark scripts utilizing Scala shell commands as required.
• Responsible for developing scalable distributed data solutions using Hadoop.
• Loaded cache data into HBase using Sqoop.
• Built Spark DataFrames to process huge amounts of structured data.
• Used JSON to represent complex data structures within MapReduce jobs.
• Stored and preprocessed logs and semi-structured content on HDFS using MapReduce and imported them into the Hive warehouse.
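One Verizon bullet describes a script that registers daily partitions on external Hive tables. A hedged Python sketch of how such a script might build the DDL; the table name, partition column, and HDFS path are assumptions for illustration:

```python
from datetime import date, timedelta

def add_partition_stmt(table, base_path, day):
    """Build a Hive DDL statement registering one daily partition.

    IF NOT EXISTS makes the statement idempotent, so a daily cron rerun is safe.
    """
    ds = day.strftime("%Y-%m-%d")
    return (
        f"ALTER TABLE {table} ADD IF NOT EXISTS "
        f"PARTITION (ds='{ds}') LOCATION '{base_path}/ds={ds}'"
    )

# Example: register the previous day's partition for a hypothetical weblogs table.
yesterday = date(2022, 6, 2) - timedelta(days=1)
print(add_partition_stmt("weblogs", "/data/raw/weblogs", yesterday))
```

In practice the generated statement would be handed to `beeline` or the Hive CLI by the scheduler; generating the DDL as a string keeps the script easy to dry-run and test.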
Big Data/Hadoop Developer, EY
Oct 2017 - Sep 2019, New York, United States
• Extensively involved in installation and configuration of the Cloudera Hadoop distribution: Name Node, Secondary Name Node, Job Tracker, Task Trackers, and Data Nodes.
• Developed MapReduce programs in Java and used Sqoop to pull data from an Oracle database.
• Responsible for building scalable distributed data solutions using Hadoop; wrote various Hive and Pig scripts.
• Worked with a senior engineer on configuring Kafka for streaming data.
• Moved data from HDFS to Cassandra using MapReduce and the BulkOutputFormat class.
• Experienced with scripting languages such as Python and shell scripts.
• Developed Python scripts to find vulnerabilities in SQL queries through SQL injection testing, permission checks, and performance analysis.
• Hands-on experience with data warehouses and SQL databases such as Oracle.
• Installed the Oozie workflow engine to run multiple Hive and Pig jobs independently, triggered by time and data availability.
• Developed Hive queries in Spark SQL for analyzing and processing data; used Scala to perform transformations and apply business logic.
• Experienced with handling administration activities using Cloudera Manager.
• Expertise in partitioning and bucketing concepts in Hive.
• Used the Oozie scheduler to automate the pipeline workflow and orchestrate the MapReduce jobs that extract data on a timely basis; responsible for loading data from the UNIX file system to HDFS.
• Developed and configured Kafka brokers to pipeline server log data into Spark Streaming.
• Developed a suite of unit test cases for Mapper, Reducer, and Driver classes using an MR unit-testing library.
• Analyzed weblog data using HiveQL and integrated Oozie with the rest of the Hadoop stack.
• Utilized cluster coordination services through ZooKeeper.
• Worked on ingestion of files into HDFS from remote systems using MFT.
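The EY bullets mention Python scripts that probe SQL queries for injection vulnerabilities. A minimal, self-contained illustration of the vulnerable versus parameterized pattern such a script would flag, using an in-memory SQLite database (the schema and rows are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("alice", "admin"), ("bob", "analyst")])

payload = "' OR '1'='1"  # classic injection input

# Vulnerable: string concatenation lets the payload rewrite the WHERE clause.
vulnerable = conn.execute(
    "SELECT name FROM users WHERE name = '" + payload + "'"
).fetchall()

# Safe: a bound parameter is treated purely as data, never as SQL.
safe = conn.execute("SELECT name FROM users WHERE name = ?", (payload,)).fetchall()

print(len(vulnerable), len(safe))  # injection returns every row; the bound query returns none
```

A scanner built on this idea runs known payloads against each query template and reports any query where the concatenated form returns more rows than the parameterized form.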
Java/Software Developer, Micro Focus
May 2014 - Sep 2017, India
• Involved in requirements analysis and the design of an object-oriented domain model.
• Implemented test scripts to support test-driven development and continuous integration.
• Experience in importing and exporting data into big data stores, HDFS, and Hive using Sqoop.
• Developed MapReduce programs to clean and aggregate the data.
• Worked through the complete SDLC: requirements, specification, design, implementation, and testing.
• Developed Spring and Hibernate data-layer components for the application.
• Developed profile-view web pages (add, edit) using HTML, CSS, jQuery, and JavaScript.
• Built the application using Maven scripts.
• Developed the mechanism for logging and debugging with Log4j.
• Involved in developing database transactions through JDBC.
• Used Git for version control.
• Responsible for troubleshooting issues in the execution of MapReduce jobs by inspecting and reviewing log files.
• Used Oracle as the database; involved in writing SQL scripts and PL/SQL code for procedures and functions.
• Developed front-end applications that interact with mainframe applications using J2C connectors.
• Hands-on experience exporting results into relational databases using Sqoop for visualization and for generating reports for the BI team.
• Designed, developed, and implemented JSPs in the presentation layer for submission, application, and reference implementation.
• Deployed web, presentation, and business components on the Apache Tomcat application server.
• Involved in post-production support and testing; used JUnit for unit testing of the module.
• Worked in an Agile methodology.
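One bullet above describes MapReduce programs that clean and aggregate data. The original work was in Java, but the map/reduce logic itself can be sketched in a few lines of plain Python; the CSV record format here is invented for illustration:

```python
from itertools import groupby
from operator import itemgetter

def mapper(lines):
    """Clean raw records and emit (key, value) pairs; malformed rows are dropped."""
    for line in lines:
        parts = line.strip().split(",")
        if len(parts) != 2:
            continue  # cleaning step: skip rows with the wrong number of fields
        key, value = parts[0].strip(), parts[1].strip()
        if value.isdigit():
            yield key, int(value)

def reducer(pairs):
    """Aggregate values per key, as a Hadoop reducer does over sorted mapper output."""
    for key, group in groupby(sorted(pairs), key=itemgetter(0)):
        yield key, sum(v for _, v in group)

rows = ["a,1", "b,2", "a,3", "bad_row", "b,x"]
print(dict(reducer(mapper(rows))))  # {'a': 4, 'b': 2}
```

The same two functions, reading stdin and writing tab-separated pairs to stdout, would also work as Hadoop Streaming scripts, since Streaming sorts mapper output by key before the reduce phase.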
Archana B Education Details
Electrical, Electronic and Communications Engineering Technology/Technician
Frequently Asked Questions about Archana B
What company does Archana B work for?
Archana B works for Humana
What is Archana B's role at the current company?
Archana B is actively looking for C2C positions as a Hadoop Developer or Data Engineer.
What schools did Archana B attend?
Archana B attended Jawaharlal Nehru Technological University Hyderabad (Jntuh).
Who are Archana B's colleagues?
Archana B's colleagues are Ajay Kumar, Kristen Rainey, Preethi, Ashumi Dharia, Angela Wright, Ryan Massa-Mckinley, and G Gees.