Ravinder Kaur


Azure Data Engineer @ CVS Health
Irving, TX, US
Ravinder Kaur's Location
Irving, Texas, United States
About Ravinder Kaur

I'm actively looking for new opportunities as a Data Engineer. I currently work as an Azure Data Engineer at CVS Health. I have expertise in programming languages including Python, SQL, COBOL, Java, JavaScript, HTML, and CSS. I hope you will agree, after reviewing my profile and résumé, that I am the kind of qualified and competitive applicant you are seeking. I look forward to describing how my skills and abilities can help your organization. Please contact me at 469-851-0821 or via email at ravinderkaaur97@gmail.com to arrange a convenient meeting time. #DataEngineer #Azure

Ravinder Kaur's Current Company Details
CVS Health
Azure Data Engineer
Irving, TX, US
Ravinder Kaur's Work Experience Details
  • CVS Health
    Azure Data Engineer
    CVS Health, Feb 2021 - Present
    Woonsocket, RI, US
    • As a Data Engineer, offered technical skills and aptitude in Hadoop technologies as they pertain to analytics development.
    • Strong expertise across the complete SDLC, ITIL service management, and production support, handling key responsibilities as Module Lead, Team Lead, and Tech Lead over the years.
    • Implemented data pipelines using Python.
    • Analyzed, designed, and built modern data solutions using Azure PaaS services to support data visualization; assessed the current production state of the application and determined the impact of new implementations on existing business processes.
    • Designed the AI/ML data pipeline for regular monitoring and performance evaluation of deployed ML models.
    • Built the code base for natural language processing and AI/ML frameworks.
    • Extracted, transformed, and loaded data from source systems to Azure data storage services using a combination of Azure Data Factory, T-SQL, Spark SQL, and U-SQL (Azure Data Lake Analytics); ingested data into one or more Azure services (Azure Data Lake, Azure Storage, Azure SQL, Azure DW) and processed it in Azure Databricks.
    • Strictly adhered to SDLC procedures throughout the project.
    • Built a continuous delivery pipeline with Git and Jenkins.
    • Worked with time series data and applied statistical and machine learning algorithms such as regression, filtering, correlation, and neural networks.
  • FedEx
    Azure Data Engineer
    FedEx, Jan 2020 - Jul 2021
    Memphis, TN, US
    • Proposed designs that take cost/spend into account in Azure and offered recommendations for right-sizing data infrastructure.
    • Served as tech lead Azure Data Engineer building solutions for performance optimization.
    • Led larger teams during the project phase, guiding team members and enabling knowledge sharing across the team.
    • Installed and configured Apache Hadoop big data components such as HDFS, MapReduce, YARN, Hive, Ambari, and NiFi.
    • Designed and developed Azure Data Factory (ADF) pipelines extensively, ingesting data from relational and non-relational source systems to suit business functional requirements.
    • Created Python scripts to automate the data sampling procedure; guaranteed data integrity by examining the data for completeness, duplication, correctness, and consistency.
    • Extensive experience with SparkContext, Spark SQL, RDD transformations, actions, and DataFrames.
    • Performed data transformations such as filtering, deduplication, null-value checking, aggregation, data cleansing, and validation based on requirements using Python (PySpark).
    • Created bespoke ETL solutions, batch processing, and a real-time data ingestion pipeline to move data into and out of Hadoop using PySpark and shell scripting.
    • Built pipelines, data flows, and complex data transformations and manipulations in Databricks using ADF and PySpark.
    • Performed data pre-processing and cleaning in Python for feature engineering, including data imputation methods for missing values in the dataset.
    • Continuously monitored, automated, and improved data engineering solutions.
    • Collaborated with application architects and DevOps.
    • Converted Hive/SQL queries into Spark transformations using Spark SQL and Scala.
    • Handled client requests for SQL objects, schedule adjustments, business logic updates, and ad-hoc queries, and analyzed and resolved data sync problems.
  • Peraton
    Data Engineer
    Peraton, May 2018 - Dec 2019
    Reston, VA, US
    • Served on the team in charge of analyzing business needs and designing and implementing the business solution.
    • Moved historical data to the AWS cloud and set up an automated mechanism there for incoming clickstream aggregation.
    • Converted Hive/SQL queries into Spark transformations using Spark RDDs, Python, and Scala.
    • Built the objects in Redshift while modeling and creating the database for the data warehouse.
    • Proposed and implemented improvements to increase process efficiency and effectiveness, providing input to solution designs to ensure consistent, secure, and fault-tolerant AWS solutions; used AWS services such as EC2 and S3 for data-set processing and storage; maintained a Hadoop cluster on AWS EMR.
    • Used star/snowflake schemas for the design of the data warehouse.
    • Implemented a data warehousing system based on Redshift.
    • Used Erwin's reverse-engineering technique and converted the target database structure.
    • Automated Glue ETL tasks by leveraging S3 events and AWS Lambda functions.
    • Developed streaming pipelines using Apache Spark with Python.
    • Built data ingestion pipelines and moved terabytes of data from existing data warehouses to the cloud, scheduled through AWS Step Functions, using EMR, S3, and Spark.
    • Created 3NF business-area data models using the Erwin tool, along with an analysis of information needs and denormalized physical implementation data.
    • Performed comprehensive data analysis, writing and querying SQL in Toad.
    • Extensive experience using Python's Boto3 package for accessing AWS services.
    • Transferred data from the old database to the new database using Oracle and SQL Server, and populated the database using the ETL tool Informatica.
    • Created database objects such as tables, views, materialized views, procedures, and packages using PL/SQL, SQL*Plus, and SQL*Loader, and managed exceptions.
  • Ford Motor Company
    Data Analyst
    Ford Motor Company, Apr 2017 - Apr 2018
    Dearborn, MI, US
    • Worked on HDFS data collection and loading via the Kafka REST API; performed reads, writes, and transformations from several data sources using optimized Spark SQL queries, stored the results in HDFS, and created interactive queries using aggregate functions written in Spark SQL.
    • Used the Pandas and NumPy Python libraries for data transformation.
    • Dealt with HDFS file formats including Avro and SequenceFile and compression formats such as Snappy and gzip, and executed several MapReduce tasks in Hive for data cleaning and pre-processing.
    • Transformed, cleaned, and filtered imported data using the Spark DataFrame API before loading it into Hive.
    • Built ad-hoc tables using Azure Functions to provide structure and schema for data in Azure Blob Storage; for each update in a Cosmos DB table, performed data validation, filtering, sorting, and transformation, and loaded the changed data into a Postgres database.
    • Built pipelines in Azure Data Factory to extract data from outside sources, combine it, perform data enrichment, and load it into data warehouses.
    • Created Python scripts for data analysis, schema validation, and data profiling.
    • Developed Python scripts to read JSON documents for data cleaning, analysis, and schema validation before the enhanced data was imported into a database.
    • Handled Azure PaaS solutions such as SQL Azure, Azure Blob Storage, Azure Web Apps, and Web Roles.
    • Developed complex SQL queries to upload, retrieve, manipulate, and manage sensitive data in Teradata.
    • Experienced with shell scripting; designed a fully automated continuous integration system using Git, Jenkins, and bespoke Python tools and scripts compatible with the Linux environment.
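The filtering, deduplication, and null-value checks mentioned across the roles above can be sketched in a few lines of Pandas. This is a minimal illustration only: the column names (order_id, amount) and sample rows are hypothetical placeholders, not drawn from any actual pipeline.

```python
import pandas as pd
import numpy as np

# Hypothetical raw extract; columns and values are illustrative only.
raw = pd.DataFrame({
    "order_id": [1, 1, 2, 3],
    "amount": [100.0, 100.0, np.nan, 250.0],
})

cleaned = (
    raw.drop_duplicates(subset="order_id")  # deduplicate on the business key
       .dropna(subset=["amount"])           # drop rows failing the null check
       .reset_index(drop=True)
)

print(cleaned)
```

The same filter/dedup/null-check steps translate directly to PySpark (`dropDuplicates`, `filter(col(...).isNotNull())`) when run at cluster scale.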
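The Python scripts for reading JSON documents and validating their schema before a database load might look like this stdlib-only sketch; the required keys and sample documents are illustrative assumptions, not the actual schema.

```python
import json

# Hypothetical schema check before loading records into a database:
# every record must carry these keys, and "amount" must be numeric.
REQUIRED_KEYS = {"order_id", "amount"}

def validate_record(record: dict) -> bool:
    """Return True when the record has all required keys and a numeric amount."""
    if not REQUIRED_KEYS <= record.keys():
        return False
    return isinstance(record["amount"], (int, float))

documents = [
    json.loads('{"order_id": 1, "amount": 100.0}'),
    json.loads('{"order_id": 2}'),                    # missing "amount"
    json.loads('{"order_id": 3, "amount": "bad"}'),   # non-numeric amount
]

valid = [d for d in documents if validate_record(d)]
print(len(valid))  # only the first record passes
```

In practice a production pipeline would log or quarantine the rejected documents rather than silently dropping them.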

Frequently Asked Questions about Ravinder Kaur

What company does Ravinder Kaur work for?

Ravinder Kaur works for CVS Health.

What is Ravinder Kaur's role at the current company?

Ravinder Kaur's current role is Azure Data Engineer.
