I am a Data Engineer with over 8 years of experience. My skill set spans Python, R, SQL, Excel, Hive, PySpark, and Spark SQL. Proficient in end-to-end ETL and ELT processes, I have architected numerous data pipelines covering both data ingestion and transformation, and I build end-to-end cloud data solutions on AWS that drive business intelligence and innovation.

In business intelligence, I have developed connections for Tableau, Power BI, and Google Analytics. By integrating Docker into continuous integration and continuous deployment (CI/CD) pipelines, I automate the building, testing, and deployment of data applications. I have a track record of integrating diverse data sources while ensuring consistency and accuracy, and I am proficient in SQL across multiple dialects, including Redshift and Oracle, with a demonstrated ability to create reports that deliver valuable insights.

With a solid foundation in data governance and modeling, I ensure compliance with standards and regulatory requirements. I bring proven expertise in designing and managing complex relational databases, particularly MySQL and PostgreSQL. Real-time analytics is a forte, demonstrated through extensive experience with Spark Streaming, Kafka, Hadoop, MapReduce, Pig, and Hive, along with proficiency in Scala's functional programming features. If you are seeking a professional with a proven track record in data engineering, let's connect and explore opportunities in this dynamic field.
-
Senior Data Engineer, U.S. Bank | Jul 2021 - Present | United States
- Developed comprehensive data engineering projects, specializing in the design and implementation of end-to-end solutions across Azure services. Managed data ingestion into Azure Data Lake, Azure Storage, Azure SQL, and Azure Data Warehouse, followed by efficient processing in Azure Databricks.
- Integrated Azure Data Factory and Azure Synapse Analytics with external services, enhancing functionality by incorporating machine learning models and connecting with third-party data providers. Deployed data processing algorithms and workflows within Azure HDInsight to improve overall performance and reduce processing time.
- Contributed to the development of Informatica mappings, preparing design documents, technical design documents, and unit acceptance testing documents. Designed scalable and reliable data integration solutions using Informatica PowerCenter to accommodate growing data volumes and meet business requirements.
- Developed and maintained data warehousing solutions using Python, ensuring structured storage and easy access for analytical purposes. Implemented CI/CD pipelines with Jenkins to automate testing and deployment of Python-based data engineering solutions, enhancing reliability and speed.
- Designed and implemented fault-tolerant Spark and Kafka architectures to guarantee data integrity and system reliability. Conducted regular code reviews to ensure adherence to best practices and efficiency of Spark code. Developed reusable Spark scripts and functions for data processing across pipelines, along with Kafka stream processors for data transformation, enrichment, and cleansing.
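The Kafka stream-processing work described above centers on per-record transformation, enrichment, and cleansing. A minimal sketch of that kind of handler in plain Python follows; the field names (`customer_id`, `amount`) and the metadata stamp are illustrative assumptions, not the actual production schema:

```python
import json
from datetime import datetime, timezone
from typing import Optional


def cleanse_and_enrich(raw_message: bytes) -> Optional[dict]:
    """Parse a raw event, drop malformed records, trim fields, and
    stamp processing metadata (field names are illustrative)."""
    try:
        event = json.loads(raw_message)
    except json.JSONDecodeError:
        return None  # malformed payloads are filtered out
    if not event.get("customer_id"):
        return None  # required key missing: reject the record
    return {
        "customer_id": str(event["customer_id"]).strip(),
        "amount": float(event.get("amount", 0.0)),
        "processed_at": datetime.now(timezone.utc).isoformat(),
    }
```

In a real deployment this function would sit inside a Kafka consumer loop, with rejected records routed to a dead-letter topic for inspection.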
-
Senior Data Engineer, Signet Jewelers | Mar 2019 - Jul 2021 | Ohio, United States
- Implemented diverse data engineering projects, harnessing the power of AWS services to transform, move, and manage large datasets. Utilized AWS EMR for seamless data transformation and movement, created dynamic visualizations with Amazon QuickSight, and implemented serverless functions using AWS Lambda for event-driven data processing. Maintained comprehensive documentation for AWS Glue jobs and AWS Lambda functions.
- Designed and implemented complex data workflows using Apache Airflow, orchestrating end-to-end data processing tasks to ensure timely and accurate execution.
- Implemented data integration workflows using Informatica PowerCenter to move, transform, and load data from various sources into a data warehouse. Ensured data quality and integrity through data validation and cleansing routines within the Informatica environment.
- Maintained detailed documentation for Snowflake data processes, ensuring clarity on data flows, transformations, and integrations for future reference and knowledge transfer. Implemented rigorous controls and encryption protocols in Snowflake to safeguard sensitive data and comply with data privacy regulations such as GDPR and HIPAA.
- Integrated Kafka with log management systems for centralized logging and auditing of data streams. Implemented advanced procedures such as text analytics and processing using in-memory computing with Apache Spark, written in Scala. Conducted Kafka security audits and implemented encryption mechanisms for data in transit.
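The validation and cleansing routines mentioned for the Informatica workflows reduce, at their core, to splitting incoming records into loadable and rejected sets before a warehouse load. A stdlib-only sketch of that step (the required field names are assumptions for illustration):

```python
def validate_rows(rows, required=("order_id", "amount")):
    """Split records into clean and rejected sets ahead of a warehouse
    load, mirroring a validation/cleansing step (fields illustrative)."""
    clean, rejected = [], []
    for row in rows:
        if all(row.get(field) not in (None, "") for field in required):
            clean.append(row)
        else:
            rejected.append(row)  # quarantined for review or reprocessing
    return clean, rejected
```

Rejected rows would typically be written to an error table so data stewards can correct and replay them.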
Data Engineer, Thomson Reuters | Aug 2017 - Jan 2019 | Chennai, Tamil Nadu, India
- Drove comprehensive data engineering projects, ensuring the security of sensitive data in Azure Data Lake and Azure Databricks while adhering to regulatory standards and data governance policies. Developed and implemented disaster recovery plans for Azure Data Factory and Azure SQL, incorporating regular backups to ensure data integrity and availability during unexpected events.
- Utilized advanced data partitioning and indexing techniques in Azure Synapse Analytics to enhance query performance and reduce latency. Employed metadata-driven approaches in Informatica to automate data integration processes, facilitating adaptability to changing data structures and minimizing development time.
- Maintained clear and concise documentation for Python scripts and codebases, fostering collaboration and ensuring code transparency. Developed Python scripts and applications with scalability in mind, enabling efficient handling of growing data volumes. Designed and deployed data table structures, reports, and queries in SQL Server, receiving and importing data from various formats.
- Utilized Hadoop custom scripting and automation tools to streamline repetitive tasks, enhancing operational efficiency and reducing the likelihood of errors. Worked extensively with the Spark ecosystem, running Spark SQL and Scala queries on formats such as text and CSV files.
- Implemented security measures in Power BI Service through authentication and authorization methods, ensuring the secure handling of sensitive information. Passionate about delivering robust and secure data solutions that align with business needs.
- Successfully migrated data pipelines from Azure to the Snowflake multi-cluster data warehouse, handling semi-structured data connected to analytical engines for reporting.
- Designed and developed Tableau visualizations, including dashboards with calculations, parameters, calculated fields, groups, sets, and hierarchies.
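The Spark SQL queries over text and CSV files mentioned above are essentially filter-and-aggregate operations. A small stdlib analogue of one such query, run over an in-memory CSV for illustration (the column names are assumptions; in production this would execute as Spark SQL over distributed files):

```python
import csv
import io


def total_amount(csv_text: str, min_amount: float) -> float:
    """Stdlib equivalent of:
        SELECT SUM(amount) FROM events WHERE amount >= :min
    (table and column names are illustrative)."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return sum(float(row["amount"]) for row in reader
               if float(row["amount"]) >= min_amount)
```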
Hadoop Developer, Merck | Jun 2015 - Jul 2017 | Mumbai, Maharashtra, India
- Led and maintained disaster recovery plans for AWS S3 and AWS Lambda, ensuring data resilience in unforeseen incidents. Designed and developed ETL processes in AWS Glue to migrate data from external sources such as AWS S3, Parquet, and text files into AWS Redshift.
- Conducted periodic reviews of AWS SNS message formats and structures for efficiency and relevance. Designed and implemented custom configurations for specialized processing tasks within AWS EMR.
- Developed custom functions and scripts in R and Python to address specific data analysis challenges, enhancing code reusability and overall project efficacy. Transformed and cleaned raw data into usable formats using Python libraries such as Pandas and NumPy, ensuring data accuracy and quality.
- Tuned SQL queries to bring down runtime, worked on indexes, and devised SQL scripts to check and validate dataflow in various applications. Implemented cost optimization strategies, selecting appropriate NoSQL database configurations and services to balance performance requirements with cost-effectiveness. Conducted performance optimization tasks, including query optimization and index management, to enhance PostgreSQL database efficiency and response times.
- Performed data validation and quality assurance checks within Tableau to ensure the accuracy and reliability of visualizations. Utilized version control systems to manage Tableau workbooks and dashboards, enabling efficient collaboration, change tracking, and version consistency. Built Power BI reports using multiple visualization types, including line charts, tables, matrices, KPIs, scatter plots, and box plots. Passionate about delivering data solutions that align with business needs and drive meaningful insights.
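The Pandas cleanup work described above typically combines deduplication, whitespace stripping, and missing-value imputation. A minimal sketch over a generic DataFrame (median imputation is an illustrative choice, not the documented production rule):

```python
import pandas as pd


def clean_frame(df: pd.DataFrame) -> pd.DataFrame:
    """Drop duplicate rows, strip whitespace from string columns,
    and fill missing numeric values with each column's median."""
    df = df.drop_duplicates().copy()
    for col in df.select_dtypes(include="object").columns:
        df[col] = df[col].str.strip()
    for col in df.select_dtypes(include="number").columns:
        df[col] = df[col].fillna(df[col].median())
    return df
```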
Bhargavi D
Education: Jawaharlal Nehru Technological University