Niharika Rao Email and Phone Number
Niharika Rao is actively looking for Senior Data Engineer roles (MS Azure | AWS | GCP | Hadoop Developer | SQL | Snowflake | Python | NoSQL | ZooKeeper | Java | Data Warehouse Engineer | ETL Developer) and currently works at Johnson & Johnson.
Senior Data Engineer, Johnson & Johnson
Oct 2022 - Present | New Brunswick, New Jersey, United States
As a Senior Data Engineer, I led the migration of legacy data platforms to cloud-based systems, utilizing AWS services such as S3, Glue, and Redshift to enhance data accessibility and scalability. I developed and optimized complex ETL pipelines for ingesting and transforming data from diverse sources, ensuring seamless integration and high-quality data flow. I implemented data governance frameworks and real-time streaming pipelines using Amazon Kinesis, improving business insights with real-time analytics. I collaborated with cross-functional teams to operationalize machine learning models, leveraging Amazon SageMaker and Databricks for scalable data processing and model deployment. Additionally, I spearheaded the development of secure, compliant data lakes using AWS Lake Formation and led a team of engineers in automating CI/CD pipelines, streamlining deployment processes and reducing errors. My work enabled advanced data-driven decision-making across the organization, utilizing technologies such as Snowflake, Kafka, IICS, and Ataccama to drive performance improvements and optimize data architecture.
Senior Data Engineer, Charter Communications
Mar 2021 - Oct 2022 | St. Louis, Missouri, United States
I was instrumental in designing and deploying advanced data integration and analytics solutions utilizing Apache Hadoop and related big data technologies. I focused on optimizing the performance of Hadoop ecosystems, particularly Hortonworks, and implemented secure, compliant data lakes in S3 integrated with AWS Lake Formation and Glue for metadata management. My role extended to creating custom UDFs in Hive and Pig to meet specific business requirements, optimizing Spark applications for large-scale data processing, and leading a Spark Center of Excellence (COE) initiative aimed at data simplification. I spearheaded the architecture and deployment of a multi-terabyte data warehouse on AWS Redshift, achieving a 50% improvement in query performance and a 30% reduction in operational costs. My work involved developing complex data pipelines using Kafka, HBase, Spark, and Hive for ingesting, transforming, and analyzing customer behavior data. I also ensured the migration of existing databases and applications to a big data platform, improving system performance and scalability while maintaining stringent security and compliance standards.
Senior Data Engineer, Ascension
Aug 2019 - Feb 2021 | Austin, Texas, United States
I was at the forefront of transforming large datasets into actionable insights using Hadoop and big data technologies. My responsibilities included designing and optimizing MapReduce jobs for efficient data processing, creating an automated build and deployment process that enhanced user experience, and implementing a continuous integration system that streamlined application development and deployment. I played a key role in architecting a big data analytics platform that processed customer interface preferences using Hadoop, Hive, Pig, and Cloudera. I also designed and developed Azure-based cloud solutions, including relational servers and databases, and utilized services such as Azure Data Lake Storage (ADLS) and Synapse Analytics for managing and analyzing data. My work on Azure Data Factory (ADF) pipelines facilitated seamless data extraction, transformation, and loading (ETL) from various sources, ensuring data accuracy and availability. Additionally, I developed Spark jobs in both Python and Scala for faster data processing and utilized OpenShift and Kubernetes to create DevOps pipelines, ensuring efficient deployment of microservices architectures.
Big Data Developer, Grapesoft Solutions
Sep 2016 - Nov 2018 | Hyderabad, Telangana, India
I implemented and managed data ingestion and processing solutions using Hadoop, focusing on high-volume, structured and unstructured datasets. My role included efficiently transferring data between relational databases and HDFS using Sqoop, managing streaming log data with Flume, and optimizing the ingestion process to handle large-scale data environments. I was responsible for developing robust ETL processes to transform and analyze data using Pig and Hive, and I implemented prototypes for big data analysis utilizing Spark's RDD and DataFrame APIs. In addition to technical responsibilities, I managed and led a diverse team, coordinating efforts to ensure timely delivery of complex projects. My expertise in Hadoop ecosystems allowed me to optimize data processing tasks, enhance system performance, and deliver actionable insights from large datasets. I also played a key role in automating data workflows, ensuring smooth operations and high data integrity across various business applications.
Hadoop Developer, Brio Technologies
Jun 2015 - Sep 2016 | Hyderabad, Telangana, India
I was deeply involved in the end-to-end setup and management of Hadoop clusters, ensuring high availability, performance, and security of big data environments. My responsibilities included automating the installation and configuration of Hadoop clusters, implementing Kerberos security for user authentication, and performing critical maintenance tasks such as data node commissioning and decommissioning, cluster monitoring, and data backup management. I collaborated with the systems engineering team to expand Hadoop environments, ensuring they met the growing data processing needs of the business. My role also involved troubleshooting system failures, identifying root causes, and implementing solutions to prevent future issues. Additionally, I worked on data analysis and feature extraction using Apache Spark and its machine learning libraries, which allowed for efficient processing and analysis of large datasets. My contributions ensured that Brio Technologies maintained a robust and scalable big data infrastructure, capable of supporting complex analytical tasks and meeting business demands.
Niharika Rao Education Details
CMR College of Engineering & Technology
Frequently Asked Questions about Niharika Rao
What company does Niharika Rao work for?
Niharika Rao works for Johnson & Johnson
What is Niharika Rao's role at the current company?
Niharika Rao's current role is Senior Data Engineer; her profile headline reads "Actively looking for Senior Data Engineer roles | MS Azure | AWS | GCP | Hadoop Developer | SQL | Snowflake | Python | NoSQL | ZooKeeper | Java | Data Warehouse Engineer | ETL Developer".
What schools did Niharika Rao attend?
Niharika Rao attended CMR College of Engineering & Technology.
Who are Niharika Rao's colleagues?
Niharika Rao's colleagues are Roberto Rivera, Carlos Mario Gonzalez, Jeanne Kuhta, Eubell Avila, Stephan Becker (PMP), Jun Iijima, and Michael Kovalenko.