Mounika V


Lead Data Engineer @ Citi
Irving, TX, US
Mounika V's Location
Irving, Texas, United States
About Mounika V

• 7+ years of IT experience in analysis, design, development, testing, project planning, and implementation of data warehouse, data integration, and big data applications.
• Certified developer in IBM InfoSphere DataStage and Informatica PowerCenter.
• Strong experience with data warehouse concepts: ETL, OLAP, star schema, snowflake schema, and fact and dimension tables.
• Experience with Hadoop ecosystem tools (HDFS, Pig, Sqoop, Hive, Flume), NoSQL, Airflow, Python, and cloud computing for scalable, distributed, high-performance computing.
• Experience with AWS services such as EC2, Simple Storage Service (S3), Auto Scaling, EBS, Glacier, VPC, ELB, RDS, IAM, and CloudWatch.
• Experience in data migration to Snowflake.
• Strong experience with Oracle 10g/9i, SQL Server, DB2, and Teradata: PL/SQL, SQL*Loader, SQL Assistant, stored procedures, cursors, constraints, triggers, indexes (B-tree and bitmap), views, inline views, and materialized views.
• Strong working experience in UNIX shell scripting.
• Domain experience in insurance, banking, financial services, and retail.
• Exceptional analytical and problem-solving skills; a team player who communicates effectively at all levels of the development process; ambitious and hardworking, with a commitment to excellence.

Mounika V's Current Company Details
Citi
Lead Data Engineer
Irving, TX, US
Website: citigroup.com
Employees: 196,387
Mounika V Work Experience Details
  • Citi
    Lead Data Engineer
    Citi
    Irving, TX, US
  • Virtusa
    Lead Software Engineer
    Virtusa Nov 2024 - Present
    United States
  • Flysoft Inc
    Data Engineer
    Flysoft Inc Feb 2024 - May 2024
    New Jersey, United States
    • Designed and implemented scalable ETL pipelines using Python, SQL, and Apache Spark, reducing data processing times by 40%.
    • Managed and optimized data warehouses on MySQL, PostgreSQL, and MongoDB, improving query performance by 30%.
    • Developed and maintained data models and schemas, ensuring data integrity and consistency across systems and increasing data accuracy by 25%.
    • Utilized Hadoop and Hive for big data processing and storage, enabling 50% more efficient data retrieval and reporting.
    • Developed an internal testing tool framework using Python.
    • Worked closely with the business to transform business requirements into technical requirements as part of design reviews and daily project scrums; wrote custom MapReduce programs with custom input formats.
    • Worked with Google Cloud Platform (GCP) to strategize, architect, and implement solutions for migrating data hosted on the on-prem platform to GCP.
    • Used data stored within Docker containers to create detailed metrics and logs on a Datadog dashboard, ensuring real-time monitoring and visibility of critical system performance indicators.
    • Integrated geospatial data sources with GCP services such as BigQuery GIS, Google Maps Platform, and Google Earth Engine.
    • Built RESTful APIs with Python to enable real-time data access and integration with third-party applications, enhancing data availability and accessibility.
    • Designed and implemented the enterprise infrastructure and platforms required to set up data engineering pipelines using the tools available on GCP.
    • Implemented robust error handling and logging mechanisms in Python scripts, ensuring reliability and traceability in data processing workflows.
    • Optimized Python code and SQL queries to enhance ETL job performance, reducing processing times and resource consumption.
    • Developed a mapping document to map columns from source to target.
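As an illustrative sketch only (not code from the actual project), the error-handling and logging pattern described above could look like the following in Python; the `transform_row` function and its field names are hypothetical:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("etl")

def transform_row(row):
    """Normalize one source row; raises on missing or malformed fields."""
    return {
        "customer_id": int(row["customer_id"]),
        "amount": round(float(row["amount"]), 2),
    }

def run_pipeline(rows):
    """Transform rows, logging and quarantining failures instead of aborting.

    Returns (transformed_rows, [(row_index, error_message), ...]).
    """
    out, errors = [], []
    for i, row in enumerate(rows):
        try:
            out.append(transform_row(row))
        except (KeyError, ValueError) as exc:
            log.warning("row %d rejected: %s", i, exc)
            errors.append((i, str(exc)))
    return out, errors
```

Catching per-row errors and logging them, rather than letting one bad record abort the batch, is one common way to keep a pipeline both reliable and traceable.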
  • Ibm
    Senior Data Engineer
    Ibm Nov 2020 - Jun 2022
    Bengaluru, Karnataka, India
    • Conducted comprehensive manual testing of web and mobile applications, identifying and reporting bugs, which resulted in a 40% reduction in production issues.
    • Developed detailed test plans, test cases, and test scripts based on functional specifications and requirements, improving test coverage by 30%.
    • Collaborated with development teams to understand application features and identify potential areas of concern, leading to early detection of 25% more defects.
    • Ensured the implementation and integration of Snowflake as a cloud-based data warehousing solution, enhancing data storage and analytics capabilities.
    • Parsed JSON documents with Python scripts for database loading.
    • Worked on partitioning, bucketing, join optimizations, and query optimizations in Hive.
    • Worked closely with the business, transforming business requirements into technical requirements.
    • Ensured the scalability and performance of the monitoring solution by optimizing data ingestion and processing pipelines for both on-premises and cloud-based data sources.
    • Implemented automated data collection processes for both Docker and GCP environments, ensuring a continuous and accurate data feed into Datadog for seamless metric tracking.
    • Designed and configured Datadog dashboards to aggregate and visualize metrics and logs from both Docker and GCP data sources, providing comprehensive monitoring for stakeholders.
    • Replicated the metrics and logging infrastructure by integrating data stored in Google Cloud Storage into Datadog, expanding monitoring capabilities to include cloud-based datasets.
    • Created Python merge jobs for data extraction and loading into MySQL.
    • Ensured compliance with data security and privacy regulations while working with sensitive data in SQL*Plus and SQL Navigator.
    • Implemented advanced Snowflake functionality, such as data sharing and multi-cluster configurations, to improve the efficiency of complex analytical queries.
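A minimal sketch of the JSON-parsing-for-database-loading pattern mentioned above, using Python's standard library with SQLite standing in for MySQL; the `events` table and its columns are illustrative, not from the actual project:

```python
import json
import sqlite3

def load_json_records(db, json_text):
    """Parse a JSON array of documents and bulk-load them into a table.

    Uses named placeholders so each parsed dict maps directly to columns.
    Returns the number of records loaded.
    """
    records = json.loads(json_text)
    db.execute("CREATE TABLE IF NOT EXISTS events (id INTEGER, name TEXT)")
    db.executemany(
        "INSERT INTO events (id, name) VALUES (:id, :name)",
        records,
    )
    db.commit()
    return len(records)
```

For example, `load_json_records(sqlite3.connect(":memory:"), '[{"id": 1, "name": "a"}]')` parses the document and inserts one row; a MySQL driver would follow the same shape with its own parameter style.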
  • Ibm
    Senior Data Engineer
    Ibm Feb 2019 - Oct 2020
    Bengaluru, Karnataka, India
    • Collaborated with development teams to understand application features and identify potential areas of concern, leading to early detection of 25% more defects.
    • Designed and implemented highly scalable and fault-tolerant data architectures on AWS, leveraging Amazon S3 for storage, Amazon Redshift for data warehousing, and AWS Glue for ETL processes.
    • Ensured the implementation and integration of Snowflake as a cloud-based data warehousing solution, enhancing data storage and analytics capabilities.
    • Ensured compliance with data security and privacy regulations while working with sensitive data in SQL*Plus and SQL Navigator.
    • Implemented advanced Snowflake functionality, such as data sharing and multi-cluster configurations, to improve the efficiency of complex analytical queries.
    • Monitored and optimized Amazon EMR cluster performance, tuning configurations and resource allocation for optimal efficiency.
    • Implemented scalable and resilient storage solutions with Amazon S3, utilizing its object storage capabilities for efficient data storage and retrieval.
    • Implemented monitoring and alerting solutions for Apache Kafka clusters with Amazon MSK to ensure timely detection and resolution of issues.
    • Engineered high-performance computing solutions on Amazon EC2, fine-tuning virtual machine configurations for optimal performance and scalability.
  • Ibm
    Senior Data Engineer
    Ibm Feb 2017 - Jan 2019
    Bengaluru, Karnataka, India
    • Analyzed data in the Hadoop cluster using big data tools including Pig, Hive, and Sqoop; maintained Spark applications in Scala and Python in conjunction with data development and software engineering teams.
    • Collected large amounts of log data using Apache Flume and Sqoop and aggregated it with Pig and Hive in HDFS for further analysis; analyzed and wrote Hadoop MapReduce jobs using the Java API, Pig, and Hive.
    • Developed Spark SQL applications to perform complex data operations on structured and semi-structured data stored as Parquet; used Pig for data transformations, event joins, bot-traffic filtering, and pre-aggregations before storing the data in HDFS.
    • Extracted, transformed, and loaded data from source systems to Azure data storage services using a combination of Azure Data Factory, T-SQL, Spark SQL, and U-SQL (Azure Data Lake Analytics); ingested data into Azure services (Azure Data Lake, Azure Storage, Azure SQL, Azure DW) and processed it in Azure Databricks.
    • Implemented real-time data processing pipelines with Python, Apache Kafka, and Apache Spark, allowing for immediate insights and decision-making.
    • Developed Python-based data validation scripts to monitor and ensure data quality throughout the ETL process, reducing errors and maintaining data integrity.
    • Created pipelines in ADF using linked services, datasets, and pipelines to extract, transform, and load data between sources such as Azure SQL, Blob Storage, and Azure SQL Data Warehouse, including write-back in both directions.
    • Developed Python scripts for interacting with relational (SQL) and NoSQL databases, enabling efficient data querying, indexing, and storage.
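The kind of Python-based data validation script mentioned above might be sketched as follows; the required fields and rules here are hypothetical examples, not the project's actual rules:

```python
def validate_record(rec):
    """Return a list of data-quality problems for one record (empty = clean)."""
    problems = []
    for field in ("id", "amount"):
        if field not in rec:
            problems.append(f"missing {field}")
    if "amount" in rec and not isinstance(rec["amount"], (int, float)):
        problems.append("amount is not numeric")
    return problems

def partition_records(records):
    """Split a batch into (clean, rejected) before loading.

    Clean records continue through the ETL flow; rejected ones are kept
    with their problem list for review, preserving data integrity.
    """
    clean, rejected = [], []
    for rec in records:
        problems = validate_record(rec)
        if problems:
            rejected.append((rec, problems))
        else:
            clean.append(rec)
    return clean, rejected
```

Running validation as a gate in front of the load step is what lets bad records be caught and quarantined instead of silently corrupting downstream tables.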
  • Ascend Avenue Solutions Pvt Ltd
    Etl Informatica| Datastage Developer
    Ascend Avenue Solutions Pvt Ltd Jun 2015 - Jan 2017
    Hyderabad, Telangana, India
    • Assisted in the development of data pipelines and ETL processes using Python and SQL, contributing to a 30% reduction in data processing errors.
    • Designed, developed, and implemented ETL (Extract, Transform, Load) processes using Informatica PowerCenter to extract data from various source systems, transform it according to business requirements, and load it into target data warehouses or databases.
    • Conducted data cleaning, wrangling, and transformation tasks, improving data readiness for analysis and reporting by 40%.
    • Contributed to the development of machine learning models using Scikit-Learn and TensorFlow for predictive analysis, increasing prediction accuracy by 15%.
    • Integrated data from multiple heterogeneous sources such as databases, flat files, XML, and web services into centralized data warehouses, ensuring data consistency, accuracy, and quality.
    • Optimized ETL processes for performance, including tuning Informatica mappings, sessions, and workflows to minimize load times and maximize efficiency.
    • Implemented data quality checks and validation processes to ensure the integrity and accuracy of data loaded into target systems, handling data cleansing, filtering, and transformation as needed.
    • Developed robust error-handling mechanisms in ETL processes to capture and log errors, ensuring issues were identified and resolved efficiently.
    • Designed and managed Informatica workflows, sessions, and tasks, ensuring proper scheduling, execution, and monitoring of ETL jobs.
    • Participated in the design and implementation of data visualization solutions using Power BI, boosting user engagement by 20%.
    • Collaborated with data architects and business analysts to design and maintain data models, including creating and maintaining data dictionaries and metadata repositories.
    • Participated in data migration projects, moving data from legacy systems to new platforms, ensuring smooth transitions and minimal disruption to business operations.

Mounika V Education Details
  • Southern Arkansas University
  • Jawaharlal Nehru Technological University Hyderabad (JNTUH)

Frequently Asked Questions about Mounika V

What company does Mounika V work for?

Mounika V works for Citi.

What is Mounika V's role at the current company?

Mounika V's current role is Lead Data Engineer.

What schools did Mounika V attend?

Mounika V attended Southern Arkansas University and Jawaharlal Nehru Technological University Hyderabad (JNTUH).

Who are Mounika V's colleagues?

Mounika V's colleagues are Coan Ng, Khasim Sayed, Bangtan Boys, Rafael Warde, Jorge Reyes, Aravindan N, Wendy Rosenholtz.

