Monica A

Monica A Email and Phone Number

Sr. Data Engineer | Open to Relocation | AWS, Azure, GCP | Databricks | Python | SQL | Hadoop | Spark | Kafka | MongoDB | Tableau | Elasticsearch @ Navy Federal Credit Union
Vienna, Virginia, United States
Monica A's Location
Melissa, Texas, United States
About Monica A

• IT professional with 7+ years of experience as a Data Engineer and ETL Developer, implementing data models for enterprise-level applications.
• Created, monitored, and restored Azure SQL databases; migrated Microsoft SQL Server to Azure SQL Database.
• Experience with Azure Cloud, Azure Data Factory, Azure Data Lake Storage, Azure Synapse Analytics, Azure Analysis Services, big data technologies (Apache Spark), and Databricks.
• Developed ETL pipelines into and out of the data warehouse using a mix of Python and Snowflake SnowSQL, writing SQL queries against Snowflake (see the Python sketch below).
• Connected Azure to an on-premises data center using Azure ExpressRoute for single- and multi-subscription setups.
• Strong understanding of systems that handle very large volumes of data and run in a highly distributed fashion on Cloudera and Hortonworks Hadoop distributions and Amazon AWS.
• Experience with Hortonworks Ambari, building and maintaining multi-node development and production Hadoop clusters with various Hadoop components (Hive, Pig, Sqoop, Oozie, Flume, HCatalog, HBase, ZooKeeper).
• Expertise in all phases of the Software Development Life Cycle (SDLC), including Agile and Waterfall methodologies.
• Created and managed reporting and analytics infrastructure for internal business clients using AWS services including Athena, Redshift, Redshift Spectrum, EMR, and QuickSight.
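The Snowflake bullet above refers to ETL pipelines driven from Python with SnowSQL-style queries. A minimal sketch of that pattern using the snowflake-connector-python library, assuming purely hypothetical account, stage, and table names (none taken from the profile), might look like this:

```python
# Minimal sketch only: account, credentials, stage, and table names are hypothetical placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account",   # hypothetical account identifier
    user="etl_user",             # hypothetical user
    password="***",
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="STAGING",
)
try:
    cur = conn.cursor()
    # Load staged CSV files into a raw table.
    cur.execute(
        "COPY INTO orders_raw FROM @etl_stage/orders/ "
        "FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)"
    )
    # Transform the raw rows into a daily aggregate table.
    cur.execute(
        """
        INSERT INTO analytics.orders_daily
        SELECT order_date, COUNT(*) AS order_count, SUM(amount) AS total_amount
        FROM orders_raw
        GROUP BY order_date
        """
    )
finally:
    conn.close()
```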

Monica A's Current Company Details
Navy Federal Credit Union

Sr. Data Engineer | Open to Relocation | AWS, Azure, GCP | Databricks | Python | SQL | Hadoop | Spark | Kafka | MongoDB | Tableau | Elasticsearch
Vienna, Virginia, United States
Website:
navyfederal.org
Employees:
12347
Monica A Work Experience Details
  • Navy Federal Credit Union
    Data Engineer
    Navy Federal Credit Union Dec 2022 - Present
    United States
    • Understood requirements, built code, and guided other developers during development to deliver stable, high-standard code within the limits of Confidential and client processes, standards, and guidelines.
    • Extracted, transformed, and loaded data from source systems to Azure data storage services using a combination of Azure Data Factory, T-SQL, Spark SQL, and U-SQL (Azure Data Lake Analytics); ingested data into one or more Azure services (Azure Data Lake, Azure Storage, Azure SQL, Azure DW) and processed it in Azure Databricks.
    • Created pipelines in ADF using linked services, datasets, and pipelines to extract, transform, and load data between sources such as Azure SQL, Blob Storage, Azure Synapse SQL, and write-back tools, in both directions.
    • Worked on transformations of the data required by the analytics team for visualization and business analysis, using Azure Data Factory, Databricks, Azure Analysis Services, Synapse, Data Lake, Logic Apps, and Function Apps.
    • Developed and deployed stored procedures on Azure Synapse Analytics (SQL DW).
    • Performed ETL operations in Azure Databricks by connecting to relational database source systems through JDBC connectors (see the PySpark sketch after this experience list).
    • Good understanding of Hadoop Distributed File System (HDFS) architecture.
    • Authored Azure Data Factory pipelines to manage a regular process of data movement as part of a wider enterprise analytical solution.
    • Migrated data from on-premises servers to Azure Data Lake Storage Gen2.
    • Designed, created, loaded, and queried data warehouse schemas and tables.
    • Involved in migrating the client's data warehouse architecture from on-premises into the Azure cloud.
    • Created storage accounts as part of the end-to-end environment for running jobs.
    • Implemented Azure Data Factory operations and deployments for moving data from on-premises into the cloud.
  • Charter Communications
    Data Engineer
    Charter Communications Aug 2019 - Nov 2022
    United States
    • Imported data from various sources into HDFS using Sqoop, applied transformations using Hive and Apache Spark, and loaded the results into Hive tables or AWS S3 buckets.
    • Extensively used AWS Athena to query structured data in S3, feed other systems such as Redshift, and generate reports (see the S3 curation sketch after this experience list).
    • Developed pipelines for migrating data from Oracle databases to the AWS data lake, using Glue and Lambda as needed.
    • Extensively used Databricks notebooks for interactive analysis with the Spark APIs.
    • Configured Apache Presto and Apache Drill on an AWS EMR (Elastic MapReduce) cluster to integrate different databases such as MySQL and Hive, allowing operations such as joins and inserts across many data sources to be compared from a single platform.
    • Proposed and implemented improvements to increase process efficiency and effectiveness, providing input to solution designs to ensure consistent, secure, and fault-tolerant AWS solutions; used AWS services such as EC2 and S3 for data set processing and storage, and maintained a Hadoop cluster on AWS EMR.
    • Involved in development of the new AWS Fargate API, which is comparable to the ECS run task API.
    • Implemented CI/CD processes using AWS CodeCommit, CodeBuild, CodeDeploy, CodePipeline, Jenkins, Bitbucket Pipelines, and Elastic Beanstalk.
    • Processed raw data at scale on the Hadoop big data platform, loading disparate data sets from various environments.
    • Used Spark to improve the performance of and optimize existing Hadoop algorithms using SparkContext, Spark SQL, DataFrames, RDDs, and Spark on YARN.
    • Implemented advanced Spark procedures such as text analytics and processing using Spark's in-memory computing capabilities.
    • Performed data cleansing and applied transformations using Databricks and Spark data analysis.
  • Fifth Third Bank
    AWS Data Engineer
    Fifth Third Bank Sep 2017 - Jul 2019
    United States
    • Imported data from various sources into HDFS using Sqoop, applied transformations using Hive and Apache Spark, and loaded the results into Hive tables or AWS S3 buckets.
    • Extensively used AWS Athena to query structured data in S3, feed other systems such as Redshift, and generate reports.
    • Developed pipelines for migrating data from Oracle databases to the AWS data lake, using Glue and Lambda as needed.
    • Extensively used Databricks notebooks for interactive analysis with the Spark APIs.
    • Worked with the Azure cloud platform (HDInsight, Databricks, Data Lake, Blob, Data Factory, Synapse, SQL DB, and SQL DWH).
    • Configured Apache Presto and Apache Drill on an AWS EMR (Elastic MapReduce) cluster to integrate different databases such as MySQL and Hive, allowing operations such as joins and inserts across many data sources to be compared from a single platform.
    • Proposed and implemented improvements to increase process efficiency and effectiveness, providing input to solution designs to ensure consistent, secure, and fault-tolerant AWS solutions; used AWS services such as EC2 and S3 for data set processing and storage, and maintained a Hadoop cluster on AWS EMR.
    • Involved in development of the new AWS Fargate API, which is comparable to the ECS run task API.
    • Implemented CI/CD processes using AWS CodeCommit, CodeBuild, CodeDeploy, CodePipeline, Jenkins, Bitbucket Pipelines, and Elastic Beanstalk.
    • Processed raw data at scale on the Hadoop big data platform, loading disparate data sets from various environments.
    • Developed ETL data flows using Hadoop and Spark ecosystem components in Scala.
    • Implemented Spark applications in Scala for faster testing and processing of data.
    • Used Spark to improve the performance of and optimize existing Hadoop algorithms using SparkContext, Spark SQL, DataFrames, RDDs, and Spark on YARN.
  • Change Healthcare
    Data Engineer
    Change Healthcare Jun 2015 - Aug 2017
    United States
    • Wrote MapReduce code to parse data from various sources and stored the parsed data in HBase and Hive.
    • Imported data from relational data sources such as Oracle and Teradata into HDFS using Sqoop.
    • Used Scala to convert Hive/SQL queries into RDD transformations in Apache Spark.
    • Wrote ETL jobs using Spark data pipelines to process data from different sources and transform it for multiple targets.
    • Created Scala applications for loading and streaming data into NoSQL databases (MongoDB) and HDFS.
    • Created streams using Spark, processed real-time data into RDDs and DataFrames, and built analytics using Spark SQL.
    • Developed distributed high-performance systems with Spark and Scala.
    • Participated in client meetings, explained design views, and supported requirements gathering.
    • Designed data models for dynamic and real-time data used by various applications with OLAP and OLTP needs.
    • Hands-on experience importing and exporting data between relational databases and HDFS, Hive, and HBase using Sqoop.
    • Experienced in writing Python ETL frameworks and PySpark jobs to process massive amounts of data daily.
    • Used Python to extract, transform, and load source data from transaction systems and to generate reports, insights, and key conclusions.
    • Effectively communicated plans, project status, project risks, and project metrics to the project team, and planned test strategies within the project scope.
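The Azure Databricks work above mentions ETL over JDBC connectors into the data lake. A minimal PySpark sketch of that JDBC-to-lake pattern, assuming purely hypothetical connection details, table names, and storage paths, might look like this:

```python
# Minimal sketch only: the JDBC URL, credentials, table names, and ADLS path are
# hypothetical placeholders, not details taken from the profile.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("jdbc_to_lake").getOrCreate()

# Read a relational source table over JDBC (e.g., an on-premises SQL Server).
source_df = (
    spark.read.format("jdbc")
    .option("url", "jdbc:sqlserver://onprem-host:1433;databaseName=sales")
    .option("dbtable", "dbo.transactions")
    .option("user", "etl_user")
    .option("password", "***")
    .load()
)

# Aggregate the raw transactions into a daily summary for the analytics layer.
daily_df = (
    source_df
    .withColumn("txn_date", F.to_date("txn_timestamp"))
    .groupBy("txn_date")
    .agg(F.count("*").alias("txn_count"), F.sum("amount").alias("total_amount"))
)

# Write the curated result to the data lake as Delta (placeholder ADLS Gen2 path).
daily_df.write.format("delta").mode("overwrite").save(
    "abfss://curated@examplelake.dfs.core.windows.net/transactions_daily"
)
```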
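The AWS roles above describe landing raw data in S3 and curating it so Athena or Hive can query it. A minimal PySpark sketch of that S3 curation pattern, with hypothetical bucket names and columns, might look like this:

```python
# Minimal sketch only: bucket names, paths, and column names are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("s3_curation").getOrCreate()

# Read raw JSON events landed in S3.
raw_df = spark.read.json("s3://example-raw-bucket/events/")

# Light cleanup plus a date column to partition by.
curated_df = (
    raw_df
    .filter(F.col("event_type").isNotNull())
    .withColumn("event_date", F.to_date("event_timestamp"))
)

# Write partitioned Parquet back to S3 so Athena or Hive can query it via an external table.
(
    curated_df.write.mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3://example-curated-bucket/events/")
)
```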

Monica A Education Details
Karunya Institute of Technology and Sciences

Frequently Asked Questions about Monica A

What company does Monica A work for?

Monica A works for Navy Federal Credit Union.

What is Monica A's role at the current company?

Monica A's current role is Sr. Data Engineer | Open to Relocation | AWS, Azure, GCP | Databricks | Python | SQL | Hadoop | Spark | Kafka | MongoDB | Tableau | Elasticsearch.

What schools did Monica A attend?

Monica A attended Karunya Institute of Technology and Sciences.

Who are Monica A's colleagues?

Monica A's colleagues are Monica Thomas, Salle Mickey, Tommy Tibbs, Allison Miller, Samantha Elmer, Rachel Hargis, Nicole Butler.
