Monica A
Proficient IT professional with 6+ years of expertise as a Data Engineer and ETL Developer, experienced in implementing data models for enterprise-level applications. SQL development professional with strong verbal and written communication skills and solid analytical skills for building relational database solutions for businesses. A self-starter who can work independently or contribute to a collaborative solution.
Navy Federal Credit Union
Website: navyfederal.org
Employees: 12,347
Data Engineer, Navy Federal Credit Union | Aug 2022 - Present
• Worked on data transfer from on-premises SQL servers to cloud databases (Azure Synapse Analytics (DW) & Azure SQL DB).
• Created pipelines in Azure Data Factory using Linked Services, Datasets, and Pipelines to extract, transform, & load data from a variety of sources, including Azure SQL, Blob Storage, Azure SQL Data Warehouse, and write-back tools.
• Created infrastructure using ARM templates & automated deployments with Azure DevOps pipelines.
• Integrated data storage options with Spark, notably Azure Data Lake Storage and Blob Storage.
• Ingested data into Azure services (Azure Data Lake, Azure Storage, Azure SQL, Azure DW) & processed it in Azure Databricks.
• Worked directly with the Big Data Architecture Team, which created the foundation of the Enterprise Analytics initiative in a Hadoop-based data lake.
• Responsible for building scalable distributed data solutions using Hadoop.
• Analyzed large data sets to determine the optimal way to aggregate & report on them.
• Developed simple to complex MapReduce jobs using Hive to cleanse & load downstream data.
• Created partitioned tables in Hive & managed & reviewed Hadoop log files.
• Involved in creating Hive tables, loading them with data, & writing Hive queries that run internally as MapReduce jobs.
• Developed & scheduled data load jobs in the Data Studio tool / Cosmos UI using Scope scripts over both structured & non-structured streams.
• Used Hive to analyze partitioned & bucketed data & compute various metrics for reporting.
• Loaded & transformed large sets of structured, semi-structured, & unstructured data & managed data coming from different sources.
• Parsed high-level design specifications into simple ETL coding & mapping standards.
• Designed & customized data models for the data warehouse, supporting data from multiple sources in real time.
• Involved in building the ETL architecture & source-to-target mappings to load data into the data warehouse.
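The extract-aggregate-load pattern behind these warehouse loads can be sketched with Python's stdlib; sqlite3 stands in for Azure SQL / Hive, and the table and column names are hypothetical:

```python
import sqlite3

# In-memory database stands in for the source SQL server (hypothetical schema).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE transactions (member_id INTEGER, amount REAL, category TEXT)")
conn.executemany(
    "INSERT INTO transactions VALUES (?, ?, ?)",
    [(1, 120.0, "loan"), (1, 30.0, "fee"), (2, 200.0, "loan")],
)

# Aggregate into a reporting table, as a downstream warehouse load might.
conn.execute(
    """CREATE TABLE category_totals AS
       SELECT category, SUM(amount) AS total, COUNT(*) AS n
       FROM transactions GROUP BY category"""
)

# Read the aggregated target table back to verify the load.
rows = dict(
    (cat, (total, n))
    for cat, total, n in conn.execute("SELECT category, total, n FROM category_totals")
)
print(rows)
```

In a real pipeline the same SELECT/GROUP BY would run in Hive or Synapse; only the aggregation logic, not the engine, is illustrated here.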
AWS Data Engineer, Witty Technical Solutions | Sep 2018 - Jul 2021 | Abu Dhabi Emirate, United Arab Emirates
• Involved in importing data from various sources into HDFS using Sqoop, applying transformations using Hive & Apache Spark, & then loading the data into Hive tables or AWS S3 buckets.
• Extensively used AWS Athena to import structured data from S3 into other systems such as Redshift & to generate reports.
• Developed pipelines for migrating data from Oracle DB to an AWS data lake, using AWS Glue and Lambda as needed.
• Created Apache Presto and Apache Drill configurations on an AWS EMR (Elastic MapReduce) cluster to integrate different databases such as MySQL and Hive, allowing operations such as joins and inserts across many data sources to be controlled from a single platform.
• Proposed and implemented improvements to increase process efficiency and effectiveness, providing input to solution designs to ensure consistent, secure, and fault-tolerant AWS solutions; used AWS services such as EC2 and S3 for data-set processing and storage. Experienced in maintaining a Hadoop cluster on AWS EMR.
• Created reports in Looker based on Snowflake connections.
• Involved in the development of the new AWS Fargate API, which is comparable to the ECS RunTask API.
• Experience implementing CI/CD processes using AWS CodeCommit, CodeBuild, CodeDeploy, CodePipeline, Jenkins, Bitbucket Pipelines, and Elastic Beanstalk.
• Processed raw data at scale in a Hadoop big-data platform, loading disparate data sets from various environments.
• Defined virtual warehouse sizing in Snowflake for different types of workloads.
• Developed ETL data flows using Hadoop & Spark ecosystem components in Scala.
• Implemented Spark using Scala for faster testing & processing of data.
• Explored Spark to improve the performance & optimization of existing Hadoop algorithms using Spark Context, Spark SQL, DataFrames, RDDs, and Spark on YARN.
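The incremental-import pattern that Sqoop automates (pulling only rows past the last-seen key from a relational source) can be sketched in plain Python; sqlite3 stands in for the Oracle source, and the table and checkpoint names are made up:

```python
import sqlite3

# sqlite3 stands in for the relational source (hypothetical schema).
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
src.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 10.0), (2, 20.0), (3, 30.0)])

def incremental_import(conn, last_seen_id):
    """Fetch only rows past the checkpoint, like
    sqoop import --incremental append --check-column id --last-value N."""
    rows = conn.execute(
        "SELECT id, total FROM orders WHERE id > ? ORDER BY id", (last_seen_id,)
    ).fetchall()
    new_checkpoint = rows[-1][0] if rows else last_seen_id
    return rows, new_checkpoint

# First run imports everything; a later run picks up only the delta.
batch, ckpt = incremental_import(src, 0)
src.execute("INSERT INTO orders VALUES (4, 40.0)")
delta, ckpt = incremental_import(src, ckpt)
print(len(batch), len(delta))
```

Sqoop persists the checkpoint between runs; here it is just returned so the delta behavior is visible in one script.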
AWS Data Engineer, Etisalat Facilities Management L.L.C. | Aug 2019 - Jun 2021 | Dubai, UAE
• Involved in importing data from various sources into HDFS using Sqoop, applying transformations using Hive & Apache Spark, & then loading the data into Hive tables or AWS S3 buckets.
• Extensively used AWS Athena to import structured data from S3 into other systems such as Redshift & to generate reports.
• Developed pipelines for migrating data from Oracle DB to an AWS data lake, using AWS Glue and Lambda as needed.
• Extensively utilized Databricks notebooks for interactive analysis using Spark APIs.
• Worked with the Azure cloud platform (HDInsight, Databricks, Data Lake, Blob, Data Factory, Synapse, SQL DB, and SQL DWH).
• Created Apache Presto and Apache Drill configurations on an AWS EMR (Elastic MapReduce) cluster to integrate different databases such as MySQL and Hive, allowing operations such as joins and inserts across many data sources to be controlled from a single platform.
• Proposed and implemented improvements to increase process efficiency and effectiveness, providing input to solution designs to ensure consistent, secure, and fault-tolerant AWS solutions; used AWS services such as EC2 and S3 for data-set processing and storage. Experienced in maintaining a Hadoop cluster on AWS EMR.
• Experience implementing CI/CD processes using AWS CodeCommit, CodeBuild, CodeDeploy, CodePipeline, Jenkins, Bitbucket Pipelines, and Elastic Beanstalk.
• Processed raw data at scale in a Hadoop big-data platform, loading disparate data sets from various environments.
• Developed ETL data flows using Hadoop & Spark ecosystem components in Scala.
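The federated-query idea behind the Presto/Drill setup above (one engine joining tables that live in different systems) can be sketched with sqlite3's ATTACH; the two database files stand in for MySQL and Hive, and all names are hypothetical:

```python
import os
import sqlite3
import tempfile

# Two separate database files stand in for two source systems.
tmp = tempfile.mkdtemp()
users_db = os.path.join(tmp, "users.db")
events_db = os.path.join(tmp, "events.db")

with sqlite3.connect(users_db) as conn:
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'amal'), (2, 'bea')")

with sqlite3.connect(events_db) as conn:
    conn.execute("CREATE TABLE events (user_id INTEGER, kind TEXT)")
    conn.execute("INSERT INTO events VALUES (1, 'login'), (1, 'click'), (2, 'login')")

# One engine, two attached sources: a cross-source join, as Presto would federate it.
engine = sqlite3.connect(users_db)
engine.execute(f"ATTACH DATABASE '{events_db}' AS ev")
joined = engine.execute(
    """SELECT u.name, COUNT(*) FROM users u
       JOIN ev.events e ON e.user_id = u.id
       GROUP BY u.name ORDER BY u.name"""
).fetchall()
print(joined)  # [('amal', 2), ('bea', 1)]
```

Presto does this with connectors over the network rather than attached files, but the single-query-over-many-sources shape is the same.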
Data Engineer, Saras Analytics | Jun 2017 - Aug 2019 | Hyderabad, Telangana, India
• Wrote MapReduce code to parse data from various sources & stored the parsed data in HBase & Hive.
• Imported data from relational sources such as Oracle and Teradata to HDFS using Sqoop.
• Used Scala to convert Hive/SQL queries into RDD transformations in Apache Spark.
• Wrote ETL jobs using Spark data pipelines to process data from different sources & transform it for multiple targets.
• Created Scala apps for loading/streaming data into NoSQL databases (MongoDB) & HDFS.
• Created streams using Spark, processed real-time data into RDDs & DataFrames, & created analytics using Spark SQL.
• Developed distributed high-performance systems with Spark and Scala.
• Involved in client meetings, presenting views to support requirements gathering.
• Designed data models for dynamic & real-time data used by various applications with OLAP & OLTP needs.
• Hands-on experience importing & exporting data between relational databases and HDFS, Hive, & HBase using Sqoop.
• Experienced in writing Python as an ETL framework & PySpark to process massive amounts of data daily.
• Used Python to extract, transform, & load source data from transaction systems & generated reports, insights, & key conclusions.
• Effectively communicated plans, project status, risks, & metrics to the project team & planned test strategies within the project scope.
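The map-then-reduce parsing step above can be sketched without a cluster: a map function parses each raw record into a key/value pair, and a reduce step sums per key. The "source,status" record format is made up for illustration:

```python
from collections import defaultdict

# Hypothetical raw records, as a MapReduce parser might see them.
raw_lines = ["oracle,ok", "teradata,ok", "oracle,error", "oracle,ok"]

def mapper(line):
    # Parse one record into a ((source, status), 1) pair, like a map task.
    source, status = line.split(",")
    return (source, status), 1

def reduce_by_key(pairs):
    # Sum counts per key, like the shuffle + reduce phase.
    totals = defaultdict(int)
    for key, count in pairs:
        totals[key] += count
    return dict(totals)

counts = reduce_by_key(map(mapper, raw_lines))
print(counts)  # {('oracle', 'ok'): 2, ('teradata', 'ok'): 1, ('oracle', 'error'): 1}
```

On Hadoop the mapper and reducer run as distributed tasks with the framework doing the grouping; the per-record logic is what the engineer actually writes.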
Data Engineer, Zelis Healthcare India Private Limited | Jun 2016 - Aug 2018 | Hyderabad, Telangana, India
• Wrote MapReduce code to parse data from various sources & stored the parsed data in HBase & Hive.
• Imported data from relational sources such as Oracle and Teradata to HDFS using Sqoop.
• Used Scala to convert Hive/SQL queries into RDD transformations in Apache Spark.
• Wrote ETL jobs using Spark data pipelines to process data from different sources & transform it for multiple targets.
• Created Scala apps for loading/streaming data into NoSQL databases (MongoDB) & HDFS.
• Created streams using Spark, processed real-time data into RDDs & DataFrames, & created analytics using Spark SQL.
• Developed distributed high-performance systems with Spark and Scala.
• Involved in client meetings, presenting views to support requirements gathering.
• Designed data models for dynamic & real-time data used by various applications with OLAP & OLTP needs.
• Created data sharing between two Snowflake accounts.
• Validated data from SQL Server against Snowflake to ensure an apples-to-apples match.
• Hands-on experience importing & exporting data between relational databases and HDFS, Hive, & HBase using Sqoop.
• Experienced in writing Python as an ETL framework & PySpark to process massive amounts of data daily.
• Used Python to extract, transform, & load source data from transaction systems & generated reports, insights, & key conclusions.
• Effectively communicated plans, project status, risks, & metrics to the project team & planned test strategies within the project scope.
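One common shape for the SQL Server vs Snowflake validation mentioned above is a cheap reconciliation check: compare row counts and column sums on both sides. A minimal sketch, with sqlite3 standing in for both systems and a hypothetical claims table:

```python
import sqlite3

def table_fingerprint(conn, table):
    """Row count and column sum used as a cheap reconciliation check
    between a source and target copy of the same table."""
    count, total = conn.execute(
        f"SELECT COUNT(*), COALESCE(SUM(amount), 0) FROM {table}"
    ).fetchone()
    return count, round(total, 2)

# Hypothetical source and target copies of the same table.
source, target = sqlite3.connect(":memory:"), sqlite3.connect(":memory:")
for conn in (source, target):
    conn.execute("CREATE TABLE claims (id INTEGER, amount REAL)")
    conn.executemany("INSERT INTO claims VALUES (?, ?)", [(1, 99.5), (2, 0.5)])

match = table_fingerprint(source, "claims") == table_fingerprint(target, "claims")
print(match)  # True
```

Counts and sums catch dropped or duplicated rows quickly; stricter checks (per-row hashes, column-level comparisons) follow the same compare-two-fingerprints pattern.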
Data Analyst, Ibing Software Solutions Private Limited | Apr 2013 - Mar 2017 | Hyderabad, Telangana, India
• Performed data analysis & developed complex SQL queries based on requirements to generate data for mock-up reports.
• Performed statistical analysis using SQL, Python, & Excel.
• Participated in project reviews & team meetings to provide report updates.
• Created dashboards & report deliverables in Tableau, utilizing advanced features, capabilities, & designs.
• Created pivot tables in Excel; VLOOKUPs & other Excel functionality were utilized for data presentation.
• Experienced in installing, upgrading, patching, configuring, and administering Red Hat Linux.
• Analyzed & designed data in a SQL Server database environment.
• Collaborated with business stakeholders, accountants, & programmers to meet business needs for reporting & data analysis.
• Implemented ETL (extract, transform & load) in SAS to import data from multiple sources such as mainframe, flat files, and spreadsheets to perform data analysis & validations & build tabular reports.
• Analyzed business workflows & system needs.
• Created dashboard reports & ad-hoc reports on a weekly & monthly basis.
• Worked closely with the team of data analysts in defining sources & content for the data warehouse component.
• Worked on integration of data from various sources & created comprehensive data mappings & dataflow diagrams.
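The pivot-table step above (rows by one field, columns by another, values summed) can be sketched in plain Python as a stand-in for the Excel pivot; the sales records are made up:

```python
from collections import defaultdict

# Hypothetical (region, month, revenue) records to pivot.
records = [
    ("East", "Jan", 100), ("East", "Feb", 150),
    ("West", "Jan", 200), ("East", "Jan", 50),
]

# Pivot: rows = region, columns = month, values = summed revenue.
pivot = defaultdict(lambda: defaultdict(int))
for region, month, revenue in records:
    pivot[region][month] += revenue

table = {region: dict(months) for region, months in pivot.items()}
print(table)  # {'East': {'Jan': 150, 'Feb': 150}, 'West': {'Jan': 200}}
```

Excel's pivot table, SQL's GROUP BY over two columns, and this nested-dict accumulation all express the same aggregation.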
Monica A Education Details
• University Of Central Missouri

About Monica A
• Current employer: Navy Federal Credit Union
• Current role: Data Engineer at Navy Federal Credit Union
• Colleagues: Brenda Van Zant, Ly'kesha Terrell Benson, Philip Arnold, Kristie Johnson, Ricky Harris, Kara Sherman, Krysti Krysti