Niveditha Reddy

Niveditha Reddy Email and Phone Number

Actively Seeking New Opportunities | Data Engineer | Big Data | SQL | Azure | HDFS | AWS | Hadoop | PySpark | Kafka | Scala @ Homesite Insurance
Niveditha Reddy's Location
Charlotte, North Carolina, United States
About Niveditha Reddy

Niveditha Reddy is a Data Engineer (Big Data | SQL | Azure | HDFS | AWS | Hadoop | PySpark | Kafka | Scala) at Homesite Insurance, currently seeking new opportunities.

Niveditha Reddy's Current Company Details
Homesite Insurance
Data Engineer | Big Data | SQL | Azure | HDFS | AWS | Hadoop | PySpark | Kafka | Scala
Niveditha Reddy Work Experience Details
  • Homesite Insurance
    Sr Data Engineer/Administrator
    Homesite Insurance Nov 2019 - Present
    Boston, MA, US
    • Developed data pipelines using Spark and PySpark.
    • Analyzed SQL scripts and designed solutions to implement them in PySpark.
    • Developed data processing tasks in PySpark: reading data from external sources, merging data, performing data enrichment, and loading into target data destinations.
    • Used pandas, NumPy, and Spark in Python for developing data pipelines.
    • Performed data cleaning, feature scaling, and feature engineering using the pandas and NumPy packages in Python.
    • Part of a team conducting logical data analysis and data modeling JAD sessions; communicated data-related standards.
    • Worked collaboratively to manage build-outs of large data clusters and real-time streaming with Spark.
    • Implemented the Kafka-to-Hive streaming process flow and batch loading of data into MongoDB using Apache NiFi.
    • Implemented end-to-end data flows using Apache NiFi.
    • Responsible for loading data pipelines from web servers using Kafka and the Spark Streaming API.
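The data cleaning and feature scaling mentioned above would normally be done with pandas and NumPy; a minimal stdlib sketch of min-max scaling, with hypothetical premium values, looks like this:

```python
def min_max_scale(values):
    """Scale a numeric column to the [0, 1] range (min-max scaling)."""
    lo, hi = min(values), max(values)
    span = hi - lo
    if span == 0:
        return [0.0 for _ in values]  # constant column: map everything to 0
    return [(v - lo) / span for v in values]

# Hypothetical premium amounts from an insurance dataset.
premiums = [100.0, 250.0, 400.0]
print(min_max_scale(premiums))  # → [0.0, 0.5, 1.0]
```

In pandas the same operation would be a vectorized expression over a DataFrame column, but the arithmetic is identical.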
  • Change Healthcare
    Data Engineer
    Change Healthcare Jul 2017 - Oct 2019
    Nashville, Tennessee, US
    • Experienced in development using the Cloudera distribution.
    • Designed and developed ETL integration patterns using Python on Spark.
    • Optimized PySpark jobs to run on secured clusters for faster data processing.
    • Developed Spark scripts using Python and Scala shell commands as required.
    • Used Python for SQL/CRUD operations in the database and for file extraction, transformation, and generation.
    • Developed Spark applications in Python (PySpark) on a distributed environment to load large numbers of CSV files with different schemas into Hive ORC tables.
    • Designed and developed Apache NiFi jobs to move files from transaction systems into the data lake raw zone.
    • Analyzed user requirements and implemented the use cases using Apache NiFi.
    • Proficient working experience with big data tools such as Hadoop, Azure Data Lake, and AWS Redshift.
    • Read and wrote multiple data formats (JSON, ORC, Parquet) on HDFS using PySpark.
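Loading many CSV files with differing schemas into one Hive ORC table, as described above, amounts to aligning each file's columns against the union of all schemas. A stdlib sketch of that alignment step (in production this would be a PySpark job; the file contents are hypothetical):

```python
import csv
import io

def load_csvs_union_schema(csv_texts):
    """Read CSV files with differing headers; align rows on the union of columns."""
    rows, columns = [], []
    for text in csv_texts:
        for row in csv.DictReader(io.StringIO(text)):
            for col in row:
                if col not in columns:
                    columns.append(col)  # preserve first-seen column order
            rows.append(row)
    # Columns missing from a given file are filled with None.
    return columns, [[r.get(c) for c in columns] for r in rows]

file_a = "id,name\n1,alice\n"
file_b = "id,city\n2,boston\n"
cols, data = load_csvs_union_schema([file_a, file_b])
print(cols)  # → ['id', 'name', 'city']
print(data)  # → [['1', 'alice', None], ['2', None, 'boston']]
```

PySpark offers the same idea natively via `mergeSchema`/`unionByName`-style options; the sketch just makes the column-union logic explicit.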
  • Silicon Valley Bank
    Big Data Engineer
    Silicon Valley Bank Jan 2016 - Jun 2017
    Santa Clara, CA, US
    • Used Sqoop to import and export data between Oracle/PostgreSQL and HDFS for analysis.
    • Migrated existing MapReduce programs to Spark models using Python.
    • Migrated data from the data lake (Hive) into an S3 bucket.
    • Performed data validation between the data lake and the S3 bucket.
    • Used the Spark DataFrame API on the Cloudera platform to perform analytics on Hive data.
    • Designed batch processing jobs using Apache Spark, achieving roughly ten-fold speedups over the equivalent MapReduce jobs.
    • Used Kafka for real-time data ingestion; created separate topics and read data from them.
    • Converted HQL queries into Spark transformations using the Spark RDD API with Python and Scala.
    • Moved data from the S3 bucket to the Snowflake data warehouse for generating reports.
    • Wrote Hive queries for data analysis to meet business requirements.
    • Migrated an existing on-premises application to AWS.
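Converting an HQL aggregation into Spark RDD transformations, as in the bullets above, maps a WHERE clause onto a filter and a GROUP BY/SUM onto map and reduceByKey steps. A plain-Python sketch of that shape (PySpark's RDD API would be used in practice; the records are hypothetical):

```python
from collections import defaultdict

def reduce_by_key(pairs):
    """Plain-Python stand-in for Spark's reduceByKey with addition."""
    totals = defaultdict(float)
    for key, value in pairs:
        totals[key] += value
    return dict(totals)

# Hypothetical transactions: (account, amount). The HQL equivalent:
#   SELECT account, SUM(amount) FROM txns WHERE amount > 0 GROUP BY account
txns = [("a1", 100.0), ("a2", 50.0), ("a1", 25.0), ("a2", -10.0)]
pairs = [(acct, amt) for acct, amt in txns if amt > 0]  # WHERE clause → filter
print(reduce_by_key(pairs))  # → {'a1': 125.0, 'a2': 50.0}
```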
  • Sam's Club
    Hadoop Developer
    Sam's Club Mar 2014 - Dec 2015
    Bentonville, Arkansas, US
    • Installed and configured Hadoop MapReduce and HDFS; developed multiple MapReduce jobs in Java for data cleansing and preprocessing.
    • Loaded data from the UNIX file system into HDFS.
    • Installed and configured Hive and wrote Hive UDFs.
    • Imported and exported data between HDFS and Hive using Sqoop.
    • Used Cassandra CQL and Java APIs to retrieve data from Cassandra tables.
    • Responsible for cluster maintenance: adding and removing cluster nodes, monitoring and troubleshooting, managing and reviewing data backups and Hadoop log files.
    • Worked hands-on with the ETL process.
    • Imported data from various sources, performed transformations using Hive and MapReduce, and loaded it into HDFS.
    • Extracted data from Teradata into HDFS using Sqoop.
    • Analyzed data with Hive queries and Pig scripts to understand user behavior.
    • Exported the analyzed patterns back into Teradata using Sqoop.
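A data-cleansing Hive UDF of the kind mentioned above is just a per-row function registered with Hive (in Java it would extend Hive's UDF base class). A hypothetical cleansing rule, sketched in Python to show the per-row contract:

```python
def normalize_phone(raw):
    """Per-row cleansing rule: keep digits only, format 10-digit US numbers."""
    digits = "".join(ch for ch in raw if ch.isdigit())
    if len(digits) == 10:
        return f"({digits[:3]}) {digits[3:6]}-{digits[6:]}"
    return None  # reject rows that do not parse cleanly

rows = ["479-277-8000", "(479) 277 8000", "bad-value"]
print([normalize_phone(r) for r in rows])
# → ['(479) 277-8000', '(479) 277-8000', None]
```

Hive applies such a function independently to every row of a column, so the function must be pure and handle malformed input without raising.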
  • Netxcell Limited
    Data Analyst
    Netxcell Limited Jun 2012 - Feb 2014
    Hyderabad, Andhra Pradesh, India
    • Documented the complete process flow covering program development, logic, testing, implementation, application integration, and coding.
    • Recommended structural changes and enhancements to systems and databases.
    • Conducted design reviews and technical reviews with other project stakeholders.
    • Participated in the complete project life cycle, from requirements through production support.
    • Created test plan documents for all back-end database modules.
    • Used MS Excel, MS Access, and SQL to write and run various queries.
    • Worked extensively on creating tables, views, and SQL queries in MS SQL Server.
    • Worked with internal architects to assist in developing current- and target-state data architectures.
    • Coordinated with business users to design new reporting solutions effectively on top of existing functionality.
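The table, view, and query work described above (MS SQL Server in the original) can be sketched against SQLite from Python's standard library; the call-records schema here is hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Hypothetical reporting schema: raw call records plus a summary view.
cur.execute("CREATE TABLE calls (caller TEXT, duration_sec INTEGER)")
cur.executemany("INSERT INTO calls VALUES (?, ?)",
                [("alice", 120), ("bob", 30), ("alice", 60)])
cur.execute("""CREATE VIEW call_summary AS
               SELECT caller, SUM(duration_sec) AS total_sec
               FROM calls GROUP BY caller""")

cur.execute("SELECT caller, total_sec FROM call_summary ORDER BY caller")
summary = cur.fetchall()
print(summary)  # → [('alice', 180), ('bob', 30)]
conn.close()
```

The view keeps the aggregation logic in the database, so every report queries `call_summary` instead of repeating the GROUP BY.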

