Daideep Patel


Actively Looking for Data Engineer/Data Analyst Roles
Daideep Patel's Location
Jersey City, New Jersey, United States
About Daideep Patel

Daideep Patel is a Data Engineer at Amplify Loyalty Solutions, actively looking for Data Engineer/Data Analyst roles.

Daideep Patel's Current Company Details

Amplify Loyalty Solutions

Actively Looking for Data Engineer/Data Analyst Roles
Daideep Patel Work Experience Details
  • Amplify Loyalty Solutions
    Data Engineer
    Amplify Loyalty Solutions Jul 2022 - Present
    United States
    • Work independently on the development, testing, implementation, and maintenance of systems of moderate-to-large size and complexity.
    • Developed and implemented data pipelines using AWS services such as S3, Glue, and Athena to process data.
    • Created data pipelines using AWS Glue to load data into various layers in S3, with reporting through AWS Athena.
    • Created Athena data sources on S3 buckets for querying data.
    • Developed basic to complex SQL queries to research, analyze, and troubleshoot data.
    • Built data ETL workflows with programming/scripting languages such as Python.
    • Designed and assisted in developing complex data migration processes for the data warehouse.
    • Created automated data pipelines via SQL- and Python-based ETL frameworks.
    • Participated in the reference-architecture build-out of a data lake and data warehouse solution.
    • Performed data analysis using SQL and was involved in critical problem-solving situations.
    • Working exposure to AWS cloud technologies: EC2, S3, Lambda, Glue, EMR, and Athena.
    • Performed data cleansing, data imputation, and data preparation using pandas and NumPy.
    • Built end-to-end data pipelines for data transfers using Python and AWS, including Boto3.
    • Prototyped pipelines using Databricks notebooks, Snowflake, and PySpark.
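The data cleansing and imputation work described above can be sketched with pandas and NumPy. This is a minimal illustration, not the actual pipeline: the column names, fill strategy, and sample records are all hypothetical.

```python
import numpy as np
import pandas as pd

# Hypothetical raw records with duplicates, gaps, and bad values,
# standing in for data landed in an S3 raw layer.
raw = pd.DataFrame({
    "customer_id": [1, 2, 2, 3, 4],
    "points": [120.0, np.nan, np.nan, -5.0, 300.0],
    "tier": ["gold", None, None, "silver", "gold"],
})

def clean(df: pd.DataFrame) -> pd.DataFrame:
    df = df.drop_duplicates(subset="customer_id")              # de-duplicate
    df = df[df["points"].isna() | (df["points"] >= 0)].copy()  # drop invalid negatives
    df["points"] = df["points"].fillna(df["points"].median())  # impute numeric gaps
    df["tier"] = df["tier"].fillna("unknown")                  # impute categorical gaps
    return df.reset_index(drop=True)

cleaned = clean(raw)
```

A real job would read the raw frame from S3 (e.g. via `pd.read_parquet` or an AWS Glue DynamicFrame) and write the cleaned layer back, but the cleansing steps look the same.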
  • Blue Kc-Comp
    Data Engineer
    Blue Kc-Comp Feb 2022 - Jun 2022
    New York, United States
    • Analyzed, designed, and built modern data solutions using Azure services to support data visualization.
    • Extracted, transformed, and loaded data from different Azure resources into Snowflake using Azure Data Factory.
    • Working experience with Azure DevOps.
    • Created pipelines in ADF using Linked Services and Datasets to extract, transform, and load data from different sources such as ADO, Blob Storage, and Azure Data Lake.
    • Built data pipelines in Airflow on GCP for ETL jobs using different Airflow operators.
    • Experience with public-cloud managed services for data warehousing/analytics in Microsoft Azure.
    • Designed and implemented database solutions in Azure SQL and Snowflake.
    • Used the Cloud Shell SDK in GCP to configure the Dataproc, Storage, and BigQuery services.
    • Wrote complex SQL queries for data analysis to meet business requirements.
    • Worked with project managers to understand and design workflow architectures per requirements, and with data scientists to assist with feature engineering.
    • Created Source-to-Target Mappings (STM) for the required tables by understanding the business requirements for the reports.
    • Hands-on experience with Big Data application phases: data ingestion, data analytics, and data visualization.
    • Developed dashboards for continuous monitoring of real-time data using the cloud dashboard tools Looker and AWS QuickSight.
    • Built a POC to explore AWS Glue capabilities for data cataloging and data integration.
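The "complex SQL queries for data analysis" mentioned above often take the form of windowed aggregates. A sketch follows, using Python's built-in sqlite3 purely as a local stand-in for a warehouse like Snowflake or BigQuery; the table and column names are invented for illustration.

```python
import sqlite3

# In-memory database standing in for the warehouse.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (region TEXT, month TEXT, revenue REAL);
    INSERT INTO orders VALUES
        ('east', '2022-01', 100), ('east', '2022-02', 150),
        ('west', '2022-01', 200), ('west', '2022-02', 120);
""")

# Running total of revenue per region over time -- a typical
# windowed analysis query pushed down to the warehouse engine.
rows = conn.execute("""
    SELECT region, month,
           SUM(revenue) OVER (PARTITION BY region ORDER BY month) AS running_revenue
    FROM orders
    ORDER BY region, month
""").fetchall()
```

The same `SUM(...) OVER (PARTITION BY ...)` syntax is supported by Snowflake and BigQuery, so the query ports with little change.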
  • Capital One
    Data Analytics Engineer
    Capital One Jul 2021 - Jan 2022
    Malvern, Pennsylvania, United States
    • Work independently on the development, testing, implementation, and maintenance of systems of moderate-to-large size and complexity.
    • Migrated applications from an internal data center to the AWS cloud platform.
    • Used and developed Python scripts to migrate data to AWS.
    • Worked with standard Python packages such as boto and boto3 for AWS.
    • Implemented business logic in backend Python programming to achieve optimal results.
    • Wrangled data and created data pipelines using fast, efficient Python code.
    • Familiar with DBMS table design, loading, and data modeling, with experience in SQL.
    • Good knowledge of the various stages of the SDLC (Software Development Life Cycle).
    • Wrote CloudFormation templates and deployed AWS resources.
    • Worked extensively with AWS services, with a wide and in-depth understanding of each of them.
    • Created and executed HQL scripts that create external tables in a raw-layer database in Hive.
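A minimal sketch of the Python data-wrangling pipeline pattern described above: extract records, drop incomplete rows, and normalize types. The record shape and rules are hypothetical; a real migration job would read from and write to AWS (e.g. via boto3) rather than operate on an in-memory string.

```python
import csv
import io

# Hypothetical source extract, standing in for a file pulled from S3.
RAW_CSV = """account_id,balance
A1, 100.50
A2,
A3, 250.00
"""

def extract(text: str):
    """Parse CSV text into dict records."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(records):
    """Drop rows with missing balances and normalize types."""
    out = []
    for r in records:
        raw_balance = r["balance"].strip()
        if not raw_balance:
            continue  # skip incomplete rows rather than load bad data
        out.append({"account_id": r["account_id"], "balance": float(raw_balance)})
    return out

migrated = transform(extract(RAW_CSV))
```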
  • Vanguard
    Data Engineer
    Vanguard Feb 2020 - May 2021
    Malvern, Pennsylvania, United States
    • Work independently on the development, testing, implementation, and maintenance of systems of moderate-to-large size and complexity.
    • Migrated applications from an internal data center to the AWS cloud platform.
    • Used and developed Python scripts to migrate data to AWS.
    • Worked with standard Python packages such as boto and boto3 for AWS.
    • Implemented business logic in backend Python programming to achieve optimal results.
    • Wrangled data and created data pipelines using fast, efficient Python code.
    • Familiar with DBMS table design, loading, and data modeling, with experience in SQL.
    • Good knowledge of the various stages of the SDLC (Software Development Life Cycle).
    • Wrote CloudFormation templates and deployed AWS resources.
    • Worked extensively with AWS services, with a wide and in-depth understanding of each of them.
    • Created and executed HQL scripts that create external tables in a raw-layer database in Hive.
  • Health Savings Administrators
    Data Analyst
    Health Savings Administrators Jul 2019 - Dec 2019
    United States
    • Extracted and mined data for analysis to aid in solving business problems.
    • Experience supporting a cloud-based data warehouse environment such as Snowflake.
    • Full understanding of the fact/dimension data warehouse design model, including star and snowflake design methods.
    • Worked on a data migration project from Teradata to Snowflake.
    • Formulated SQL queries, aggregate functions, and database schemas to automate information retrieval.
    • Experience with SQL and relational and multi-dimensional databases.
    • Used Databricks as a platform for high-scale analytics using Spark.
    • Converted SQL queries into Spark transformations using PySpark concepts.
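The fact/dimension star-schema analysis described above can be sketched with pandas: join a fact table to its dimension and aggregate, which is the DataFrame equivalent of a SQL GROUP BY over a star join (and the same shape a PySpark version would take). The tables and column names here are invented.

```python
import pandas as pd

# Hypothetical star-schema tables: a fact table keyed to one dimension.
dim_product = pd.DataFrame({
    "product_key": [1, 2],
    "category": ["HSA", "FSA"],
})
fact_contributions = pd.DataFrame({
    "product_key": [1, 1, 2],
    "amount": [500.0, 250.0, 100.0],
})

# Star join + aggregate: SELECT category, SUM(amount) ... GROUP BY category
report = (
    fact_contributions
    .merge(dim_product, on="product_key")
    .groupby("category", as_index=False)["amount"]
    .sum()
    .sort_values("category", ignore_index=True)
)
```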
  • Capital One Bank
    Data Analyst
    Capital One Bank Feb 2019 - May 2019
    Richmond, Virginia, United States
    • Extracted and mined data for analysis to aid in solving business problems.
    • Experience supporting a cloud-based data warehouse environment such as Snowflake.
    • Full understanding of the fact/dimension data warehouse design model, including star and snowflake design methods.
    • Worked on a data migration project from Teradata to Snowflake.
    • Formulated SQL queries, aggregate functions, and database schemas to automate information retrieval.
    • Experience with SQL and relational and multi-dimensional databases.
    • Used Databricks as a platform for high-scale analytics using Spark.
    • Converted SQL queries into Spark transformations using PySpark concepts.
  • Shivaasha Technologies And Services Private Limited
    Data Developer
    Shivaasha Technologies And Services Private Limited Dec 2013 - May 2016
    Gujarat, India
    • Designed and developed web applications using Python, SQL, PHP, and CMS systems.
    • Used Python scripts to update content in the database and manipulate files.
    • Familiar with DBMS table design, loading, and data modeling, with experience in SQL.
    • Analyzed data from multiple data sources and developed a process to integrate the data into a single, consistent view; designed and developed a data management system using MySQL.
    • Troubleshot, fixed, and deployed many Python bug fixes for the two main applications that were a primary source of data for both customers and the internal customer-service team.
    • Developed tools using Python, shell scripting, and XML to automate menial tasks.
    • Carried out various mathematical operations for calculation purposes using Python libraries.
    • Implemented code in Python to retrieve and manipulate data.
    • Wrote multiple MapReduce programs for data extraction, transformation, and aggregation from more than 20 sources with multiple file formats, including XML, JSON, CSV, and other compressed file formats.
    • Implemented Spark Core in Scala to process data in memory.
    • Performed job functions using Spark APIs in Scala for real-time analysis and fast querying.
    • Created Spark applications in Scala using cache, map, reduceByKey, and other functions to process data.
    • Created Oozie workflows for Hadoop-based jobs, including Sqoop, Hive, and Pig.
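The multi-format extraction and aggregation described above (XML, JSON, and CSV sources unified for one computation) can be sketched in plain Python with standard-library parsers. The payloads and schemas here are invented; the original work ran this pattern at scale with Hadoop MapReduce and Spark.

```python
import csv
import io
import json
import xml.etree.ElementTree as ET

# Hypothetical payloads in the three formats mentioned above.
CSV_SRC = "id,value\n1,10\n2,20\n"
JSON_SRC = '[{"id": 3, "value": 30}]'
XML_SRC = '<rows><row id="4" value="40"/></rows>'

def from_csv(text):
    return [{"id": int(r["id"]), "value": int(r["value"])}
            for r in csv.DictReader(io.StringIO(text))]

def from_json(text):
    return [{"id": r["id"], "value": r["value"]} for r in json.loads(text)]

def from_xml(text):
    return [{"id": int(e.get("id")), "value": int(e.get("value"))}
            for e in ET.fromstring(text).iter("row")]

# Map each source to a common record shape, then reduce (aggregate).
records = from_csv(CSV_SRC) + from_json(JSON_SRC) + from_xml(XML_SRC)
total = sum(r["value"] for r in records)
```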
  • Ilensys Technologies India
    Data Analyst/Data Developer
    Ilensys Technologies India Jun 2011 - Oct 2013
    India
    • Implemented the slowly changing dimension scheme (Type II and Type I) for most of the dimensions.
    • Implemented standard naming conventions for the fact and dimension entities and attributes of the logical and physical models.
    • Reviewed the logical model with business users, the ETL team, DBAs, and the testing team to provide information about the data model and business requirements.
    • Worked in Mercury Quality Center to track defects logged against the logical and physical models.
    • Participated in requirement-gathering sessions with business users and sponsors to understand and document the business requirements and the goals of the project.
    • Created and reviewed the conceptual model for the EDW (Enterprise Data Warehouse) with business users.
    • Analyzed the source system (JD Edwards) to understand the source data and JDE table structure, along with a deeper understanding of business rules and data-integration checks.
    • Identified various facts and dimensions from the source system and business requirements to be used in the data warehouse.
    • Worked as an onsite project coordinator once the database design was finalized, implementing the data warehouse according to the implementation standards.
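The slowly changing dimension Type II scheme mentioned above preserves history by expiring the current dimension row and appending a new version, rather than overwriting in place (as Type I would). A minimal sketch, with an invented customer dimension and attribute:

```python
from datetime import date

# Current dimension rows: each carries effective dates and a current flag.
dim_customer = [
    {"customer_id": 7, "city": "Trenton",
     "valid_from": date(2011, 1, 1), "valid_to": None, "is_current": True},
]

def apply_scd2(dim, customer_id, new_city, change_date):
    """Type II update: expire the current row and append a new version,
    so history is preserved instead of overwritten."""
    for row in dim:
        if row["customer_id"] == customer_id and row["is_current"]:
            if row["city"] == new_city:
                return dim  # attribute unchanged: nothing to version
            row["valid_to"] = change_date   # close out the old version
            row["is_current"] = False
    dim.append({"customer_id": customer_id, "city": new_city,
                "valid_from": change_date, "valid_to": None, "is_current": True})
    return dim

apply_scd2(dim_customer, 7, "Newark", date(2012, 6, 1))
```

In a warehouse this is typically a MERGE statement against the dimension table, but the row lifecycle is the same.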

Frequently Asked Questions about Daideep Patel

What company does Daideep Patel work for?

Daideep Patel works for Amplify Loyalty Solutions

What is Daideep Patel's role at the current company?

Daideep Patel's current role at Amplify Loyalty Solutions is Data Engineer; the profile headline reads "Actively Looking for Data Engineer/Data Analyst Roles."
