Preethi Reddy

Preethi Reddy Email and Phone Number

Sr. Data Engineer | Actively looking for C2C/C2H position as Data Engineer | Big Data | AWS | GCP | Azure | Hadoop | ETL | SQL | NoSQL | Snowflake | Databricks | BI | Python | Spark | Scala @ CareFirst BlueCross BlueShield
Preethi Reddy's Location
United States
About Preethi Reddy

Preethi Reddy is a Sr. Data Engineer | Actively looking for C2C/C2H position as Data Engineer | Big Data | AWS | GCP | Azure | Hadoop | ETL | SQL | NoSQL | Snowflake | Databricks | BI | Python | Spark | Scala at CareFirst BlueCross BlueShield.

Preethi Reddy's Current Company Details
CareFirst BlueCross BlueShield

Sr. Data Engineer | Actively looking for C2C/C2H position as Data Engineer | Big Data | AWS | GCP | Azure | Hadoop | ETL | SQL | NoSQL | Snowflake | Databricks | BI | Python | Spark | Scala
Preethi Reddy Work Experience Details
  • CareFirst BlueCross BlueShield
    Azure Data Engineer
    CareFirst BlueCross BlueShield Dec 2020 - Present
    Baltimore, MD, US
    • Certified Data Engineer Associate with around 10 years of experience as a Data Engineer; extensive work designing, developing, and implementing data models for enterprise-level applications and BI solutions.
    • Experience designing and building the data management lifecycle, covering data ingestion, integration, consumption, and delivery, as well as reporting, analytics, and system-to-system integration.
    • Proficient in Big Data environments, with hands-on experience using Hadoop ecosystem components for large-scale processing of structured and semi-structured data.
    • Strong experience with all project phases, including requirement analysis, design, coding, testing, support, and documentation.
    • Extensive experience with Azure cloud technologies, including Azure Data Lake Storage, Azure Data Factory, Azure SQL, Azure SQL Data Warehouse, Azure Synapse Analytics, Azure Analysis Services, Azure HDInsight, and Databricks.
    • Solid knowledge of AWS services such as EMR, Redshift, S3, and EC2, including configuring servers for auto-scaling and elastic load balancing.
    • Experience monitoring web services with Hadoop and Spark to control applications and analyze their operation and performance.
    • Experienced in Python data manipulation for loading and extraction, and with Python libraries such as NumPy, Pandas, and SciPy for data analysis and numerical computation.
    • Good knowledge of and experience with NoSQL databases (HBase, Cassandra, MongoDB) and SQL databases (Teradata, Oracle, PostgreSQL, SQL Server).
    • Experience developing and designing scalable systems with Hadoop technologies in various environments, analyzing data using MapReduce, Hive, and Pig.
    • Hands-on use of Spark and Scala to compare the performance of Spark with Hive and SQL, and of Spark SQL to manipulate DataFrames in Scala.
  • Kaiser Permanente
    Azure Data Engineer
    Kaiser Permanente Aug 2019 - Dec 2020
    Oakland, California, US
    • Built complex distributed systems involving large-scale data handling, metrics collection, data pipelines, and analytics.
    • Extracted, transformed, and loaded data from source systems into Azure data storage services using a combination of Azure Data Factory, T-SQL, Spark SQL, and U-SQL (Azure Data Lake Analytics).
    • Developed a deep understanding of the data sources, implemented data standards, and maintained data quality and master data management.
    • Ingested data into one or more Azure services (Azure Data Lake, Azure Storage, Azure SQL, Azure SQL Data Warehouse) and processed it in Azure Databricks.
    • Analyzed, designed, and built modern data solutions using Azure PaaS services to support data visualization; assessed the current production state of applications and the impact of new implementations on existing business processes.
    • Created pipelines in ADF using linked services, datasets, and pipelines to extract, transform, and load data between sources such as Azure SQL, Blob Storage, and Azure SQL Data Warehouse, including write-back.
    • Developed Spark applications using PySpark and Spark SQL for data extraction, transformation, and aggregation across multiple file formats, analyzing and transforming the data to uncover insights into customer usage patterns.
    • Designed and developed scalable data solutions using Azure Synapse Analytics, effectively handling large data volumes and delivering actionable insights.
  • Citibank (Banamex USA)
    Big Data Engineer
    Citibank (Banamex USA) Jan 2017 - Aug 2019
    US
    • Implemented Spark jobs in Scala, using DataFrames and the Spark SQL API for faster data processing.
    • Ingested data from RDBMS sources, performed data transformations, and exported the transformed data to Cassandra per business requirements.
    • Designed and developed ETL processes using Informatica for dimension and fact file creation.
    • Performed wide and narrow transformations and actions (filter, lookup, join, count, etc.) on Spark DataFrames.
    • Worked with Parquet files and Impala using PySpark, and with Spark Streaming using RDDs and DataFrames.
    • Uploaded master and transactional data from flat files; prepared test cases and performed subsystem testing.
    • Aggregated log data from various servers and made it available to downstream analytics systems using Apache Kafka; improved Kafka performance and implemented security.
    • Developed batch and streaming processing applications using Spark APIs for functional pipeline requirements.
    • Worked with Spark to create structured data from pools of unstructured data.
    • Implemented intermediate functionality, such as counting events or records from Flume sinks or Kafka topics, by writing Spark programs in Java and Python.
    • Documented requirements, including available code to be implemented using Spark, Hive, and HDFS.
    • Experienced in transferring streaming data from different data sources into HDFS and NoSQL databases.
    Environment: Apache Hadoop, HDFS, MapReduce, Sqoop, Flume, Pig, Hive, HBase, Oozie, Scala, Spark, Kafka, Linux.
  • Sakshath Technologies®
    ETL Data Engineer/Developer
    Sakshath Technologies® Jul 2015 - Jun 2016
    Bangalore, Karnataka, IN
    • Gathered requirements from end users and business analysts and developed strategies for ETL processes.
    • Developed mappings, reusable objects, and transformations using the Mapping Designer and Transformation Developer in Informatica PowerCenter.
    • Extensively used Informatica client tools: Source Analyzer, Warehouse Designer, Transformation Developer, Mapping Designer, Mapplet Designer, and Informatica Repository.
    • Designed and developed ETL mappings to extract data from flat files and Oracle and load it into the target database.
    • Developed SQL queries for interfaces that extract data at regular intervals to meet business requirements.
    • Used various transformations, including Unconnected/Connected Lookup, Aggregator, Expression, Joiner, Sequence Generator, and Router.
    • Used PowerCenter/PowerConnect to load data from source systems such as flat files and Excel files into staging tables and then into the target database.
    • Developed complex mappings using multiple sources and targets across different databases and flat files.
    • Designed and developed mappings using Source Qualifier, Aggregator, Joiner, Lookup, Sequence Generator, Stored Procedure, Expression, Filter, and Rank transformations, and validated the data.
    • Documented technical specifications, business requirements, and functional specifications for the development of Informatica extraction, transformation, and loading (ETL) mappings that load data into various tables.
  • Mindtree
    SQL Developer
    Mindtree Jun 2013 - Jun 2015
    Bangalore, Karnataka, IN
    • Proficient with SQL, PL/SQL, and database objects such as stored procedures, functions, and triggers, using inline views and global temporary tables to optimize performance.
    • Performed data analysis and mapping, database normalization, performance tuning, query optimization, data extraction, transfer, and loading (ETL), and cleanup.
    • Created SSIS packages using SSIS Designer to export heterogeneous data from OLE DB sources (Oracle) and Excel spreadsheets to SQL Server.
    • Made extensive use of triggers to implement business logic and to audit changes to critical tables in the database.
    • Experience developing external tables, views, joins, clustered indexes, and cursors.
    • Defined data warehouses (star and snowflake schemas), fact tables, cubes, dimensions, and measures using SQL Server Analysis Services.
    • Used execution plans, SQL Profiler, and Database Engine Tuning Advisor to optimize queries and enhance database performance.
    • Worked on data warehouse design, analyzing various approaches for maintaining dimensions and facts while building a data warehousing application.
    • Generated various reports using Reporting Services (SSRS).
    • Optimized query performance by creating indexes.
    Environment: Oracle, SSIS, MySQL, Microsoft Office Suite.
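The semi-structured ingestion mentioned in the CareFirst role above typically means flattening raw JSON-like records into typed rows before loading them into a structured table. A minimal plain-Python sketch of that pattern, with entirely hypothetical field names:

```python
import json

def flatten_record(raw: str) -> dict:
    """Parse one semi-structured JSON record and flatten it into a typed
    row suitable for loading into a structured table.
    All field names here are hypothetical illustrations."""
    rec = json.loads(raw)
    return {
        "member_id": str(rec["member"]["id"]),
        "plan": rec["member"].get("plan", "UNKNOWN"),
        "claim_amount": float(rec.get("claim", {}).get("amount", 0.0)),
    }

raw = '{"member": {"id": 42, "plan": "PPO"}, "claim": {"amount": "125.50"}}'
row = flatten_record(raw)
```

In a real pipeline the same function would run per record inside a Spark or Databricks job; the flattening logic itself is framework-independent.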
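The extract-transform-load orchestration described in the Kaiser Permanente role can be pictured, independently of Azure Data Factory itself, as a linear chain of stages. A plain-Python sketch of the pattern only, not the ADF API; the stage and field names are hypothetical:

```python
def run_pipeline(stages, records):
    """Run records through a linear sequence of stages, mirroring how an
    ADF pipeline chains activities. Each stage takes and returns an iterable."""
    for stage in stages:
        records = stage(records)
    return list(records)

# Hypothetical transform stages: drop incomplete rows, then reshape amounts.
def drop_nulls(records):
    return (r for r in records if r.get("patient_id") is not None)

def to_cents(records):
    return ({**r, "amount_cents": int(round(r["amount"] * 100))} for r in records)

source = [{"patient_id": 1, "amount": 12.5},
          {"patient_id": None, "amount": 3.0}]
loaded = run_pipeline([drop_nulls, to_cents], source)
```

The same shape applies whether the stages are ADF activities, PySpark transformations, or plain functions; only the execution engine changes.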
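The "events or records count from Kafka topics" work in the Citibank role boils down to a per-key count over a stream of messages. A minimal stdlib-Python illustration of that aggregation, with no actual Kafka client and hypothetical topic names:

```python
from collections import Counter

def count_events(messages):
    """Count records per topic, the kind of intermediate events/records
    count computed over Kafka topics. Each message is modeled as a
    (topic, payload) pair; payloads are ignored for counting."""
    counts = Counter()
    for topic, _payload in messages:
        counts[topic] += 1
    return dict(counts)

stream = [("logins", "u1"), ("logins", "u2"), ("payments", "p9")]
totals = count_events(stream)
```

In Spark this would be a `reduceByKey` or grouped aggregation over a DStream or DataFrame; the counting logic is identical.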
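A Connected Lookup transformation, as listed in the Sakshath Technologies role, is conceptually a keyed enrichment against a reference table. A small Python sketch of the idea, with hypothetical table and column names:

```python
def lookup_join(rows, reference, key, default=None):
    """Enrich each source row with a value looked up in a reference table,
    analogous to a Connected Lookup transformation in Informatica.
    'reference' maps key values to the looked-up value; misses get 'default'."""
    for row in rows:
        enriched = dict(row)
        enriched["looked_up"] = reference.get(row[key], default)
        yield enriched

# Hypothetical reference data: department code -> department name.
dept_names = {"D1": "Claims", "D2": "Billing"}
rows = [{"emp": "a", "dept": "D1"}, {"emp": "b", "dept": "D9"}]
result = list(lookup_join(rows, dept_names, "dept", default="UNKNOWN"))
```

The `default` argument plays the role of the lookup's default-on-no-match behavior; an Unconnected Lookup differs mainly in being called per expression rather than per row flow.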
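The Mindtree role's index-driven query optimization and execution-plan inspection can be shown end to end with SQLite (the role used SQL Server and Oracle; SQLite is used here only to keep the sketch self-contained and runnable):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL)")
conn.executemany("INSERT INTO orders (customer, total) VALUES (?, ?)",
                 [("acme", 10.0), ("acme", 20.0), ("globex", 5.0)])

# Without an index, the equality predicate below forces a full table scan;
# with it, the planner can seek directly on customer.
conn.execute("CREATE INDEX idx_orders_customer ON orders(customer)")

# EXPLAIN QUERY PLAN is SQLite's analogue of a SQL Server execution plan.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT SUM(total) FROM orders WHERE customer = ?",
    ("acme",)).fetchall()
total = conn.execute(
    "SELECT SUM(total) FROM orders WHERE customer = ?", ("acme",)).fetchone()[0]
```

The plan rows report that the query searches `orders` using `idx_orders_customer`, the same confirmation one would look for in SQL Server's Execution Plan or the Database Engine Tuning Advisor.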

Frequently Asked Questions about Preethi Reddy

What company does Preethi Reddy work for?

Preethi Reddy works for CareFirst BlueCross BlueShield.

What is Preethi Reddy's role at the current company?

Preethi Reddy's current role is Sr. Data Engineer | Actively looking for C2C/C2H position as Data Engineer | Big Data | AWS | GCP | Azure | Hadoop | ETL | SQL | NoSQL | Snowflake | Databricks | BI | Python | Spark | Scala.
