Pruthak Brahmbhatt Email and Phone Number
Experienced data engineer with over 4 years of expertise in building and optimizing high-throughput ETL pipelines and high-performance data lakes.
• Proficient in advanced big data technologies, including MapReduce, Hive, YARN, Spark, Spark SQL, and DynamoDB. Skilled in working with popular Hadoop distribution platforms such as Cloudera, and experienced in writing UDFs in Java and Scala for HiveQL and Spark. Adept at optimizing Hive and Impala queries for better performance and troubleshooting complex issues within the Hadoop ecosystem.
• Strong background in Python object-oriented programming (OOP) and comprehensive experience across all stages of the Software Development Life Cycle (SDLC). Proficient in writing subqueries, stored procedures, triggers, cursors, and functions in MySQL and PostgreSQL databases.
• Proven ability to deploy clusters on EMR, manage data storage in S3, and construct data pipelines using AWS services such as S3, EC2, EMR, IAM, and CloudWatch. Familiar with development tools like Bugzilla and Jira, and experienced with databases including MySQL, Oracle, SQL Server, NoSQL, and PostgreSQL.
• Proficient in version control using SVN and Git, with strong knowledge of Agile and Waterfall methodologies, unit testing, and Test-Driven Development (TDD).
• Highly skilled in AWS services for deployment, data security, and troubleshooting; scripting in Shell and Bash on operating systems including Linux and Windows.
• Excellent communication, interpersonal, and analytical skills, with a strong ability to understand complex systems, provide solutions, maintain detailed documentation, and work both independently and as a motivated team player.
-
Data Engineer | Imetadex | Feb 2024 - Present | West Berlin, New Jersey, United States
• Collaborated with engineering teams to design and develop Python-based ETL processes for data mapping and transformation.
• Managed the full lifecycle of the Learning Management System (LMS), including system configuration, user setup, and ongoing administration to ensure smooth functionality and a good user experience.
• Integrated SQL, R, Python, and Excel functionality into the LMS, enabling advanced data analysis and reporting on course performance, learner progress, and engagement metrics.
• Migrated data between HDFS and AWS S3 and optimized AWS Glue and Lambda jobs for data integration.
• Developed Spark- and Python-based data pipelines to load and transform data for analytics using Redshift and PostgreSQL.
• Tuned Spark jobs, scheduled jobs using Control-M, and deployed clusters on AWS EMR.
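The "Python-based ETL processes for data mapping and transformation" bullet above typically reduces to a rename-and-normalize step. A minimal sketch in plain Python (the field names and mapping table are hypothetical illustrations, not taken from the actual Imetadex pipeline):

```python
# Hypothetical ETL mapping/transform step: rename source fields and
# normalize a numeric column. All field names are illustrative only.

FIELD_MAP = {"usr_id": "user_id", "ts": "event_time", "amt": "amount"}

def transform(record: dict) -> dict:
    """Rename source fields per FIELD_MAP and coerce amount to float."""
    out = {FIELD_MAP.get(k, k): v for k, v in record.items()}
    if "amount" in out:
        out["amount"] = float(out["amount"])  # source delivers strings
    return out

rows = [{"usr_id": 7, "ts": "2024-02-01", "amt": "10.5"}]
print([transform(r) for r in rows])
# -> [{'user_id': 7, 'event_time': '2024-02-01', 'amount': 10.5}]
```

In a real pipeline the same `transform` would run inside a Spark or Glue job over millions of records; keeping it a pure function makes it easy to unit test.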
Data Analyst | Supa Save Supermarket | Apr 2022 - Dec 2023 | Kitwe, Copperbelt Province, Zambia
• Designed and implemented Oracle DB changes, building PL/SQL packages, procedures, functions, and views to support new business rules and integrate new source systems.
• Performed SQL tuning and data modeling using Erwin 9.5, creating conceptual, logical, and physical data models while considering partitioning management options.
• Enhanced the LMS application and Moody's Risk Origin application, developing and modifying BIRT reports and implementing Oracle VPD rules.
• Contributed to rewriting the LMS application in the Hadoop ecosystem by creating Hive partitioned external tables, writing Sqoop queries, and developing PySpark jobs for data processing.
• Built and enhanced Oozie jobs, monitored them via Hue, developed Impala views, and gained experience with distributed data streaming using Kafka and NoSQL databases such as MongoDB.
• Created data pipelines for migrating data from Oracle to SQL Server Management Studio (SSMS), migrated stored procedures for further investigation and analysis, and built dashboards in BI tools such as Power BI and Tableau.
• Collaborated directly with the CEO of Supa Save supermarket to discuss and develop strategic business ideas, demonstrating strong communication and interpersonal skills to convey and implement innovative strategies.
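The Oracle-to-SQL-Server migration work mentioned above is usually done in fixed-size batches rather than row by row. A minimal sketch of that pattern, with plain Python lists standing in for real database cursors (the function and batch size are hypothetical, not the actual migration code):

```python
# Hypothetical chunked-migration loop: read rows from a source and
# write them to a sink in fixed-size batches. Lists stand in for
# real Oracle/SQL Server cursors here.

def migrate_in_batches(source_rows, write_batch, batch_size=2):
    """Copy rows in fixed-size batches; return the number of batches."""
    batches = 0
    for i in range(0, len(source_rows), batch_size):
        write_batch(source_rows[i:i + batch_size])  # one bulk insert
        batches += 1
    return batches

sink = []
n = migrate_in_batches([1, 2, 3, 4, 5], sink.extend)
print(n, sink)  # -> 3 [1, 2, 3, 4, 5]
```

Batching keeps transactions small and lets a failed run resume from the last committed batch instead of restarting the whole copy.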
-
Data and Reporting Analyst | Nico Coutlis Transport | Mar 2021 - Mar 2022 | Kitwe, Copperbelt, Zambia
• Analyzed and designed Oracle and Informatica requirements; enhanced ADS Loader workflows for various vendors by modifying Informatica mappings and sessions and Oracle packages, procedures, and functions.
• Supported ETL jobs in system, integration, and UAT environments; carried out design, construction, review, and defect analysis for Oracle Loader and data movement packages.
• Performed data modeling to set up new logistics order loads using Oracle's logistics transportation management tool. Created the database from scratch, including order, customer, vehicle, finance management, cash flow, and inventory management tables, along with views, stored procedures, functions, packages, and indexes.
• Performed data migration by applying various DMLs to sync current data with new business logic on key attributes; prepared visuals and dashboards in BI tools for management to build business strategies, optimize budgets, and streamline transportation.
• Designed new logistics data models and developed BI dashboards for transportation optimization.
• Collaborated directly with the CEO of Nico Coutlis Transport to discuss and develop strategic business ideas, demonstrating strong communication and interpersonal skills to convey and implement innovative strategies.
-
Data Analyst | Akash Technolabs | Jan 2020 - Oct 2020 | Ahmedabad, Gujarat
• Developed and optimized Hive and Spark scripts for data processing; fetched data from Teradata into Hive and HDFS using TDCH and loaded data of various formats into HDFS.
• Integrated over 20 data sources, developed Python modules for handling different file formats, and contributed to database design and schema development for MySQL, Cassandra, and MongoDB.
• Developed Python workers for ETL processes; designed and implemented backend functionality for RESTful APIs using Django, Flask, and Tastypie; created shell scripts to automate production deployment.
• As an entry-level PHP Laravel web developer, contributed to reducing the cart abandonment rate by 8%.
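"Python modules for handling different file formats," as in the entry above, commonly means dispatching on the file extension to the right parser. A small sketch under that assumption (the function name and supported formats are illustrative, not the actual Akash Technolabs code):

```python
# Hypothetical per-format loader: dispatch on file extension to parse
# JSON-lines or CSV text into a list of dict records.
import csv
import io
import json

def load_records(name: str, text: str) -> list:
    """Parse `text` as JSON lines or CSV based on the file extension."""
    if name.endswith(".json"):
        return [json.loads(line) for line in text.splitlines() if line.strip()]
    if name.endswith(".csv"):
        return list(csv.DictReader(io.StringIO(text)))
    raise ValueError(f"unsupported format: {name}")

print(load_records("events.csv", "a,b\n1,2"))  # -> [{'a': '1', 'b': '2'}]
```

New formats (Parquet, Avro, XML) then only need a new branch or a registry entry, which is what makes this layout easy to extend to 20+ sources.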
Pruthak Brahmbhatt Education Details
-
3.69 GPA
Frequently Asked Questions about Pruthak Brahmbhatt
What company does Pruthak Brahmbhatt work for?
Pruthak Brahmbhatt works for Imetadex
What is Pruthak Brahmbhatt's role at the current company?
Pruthak Brahmbhatt's current role is Data Engineer | Data Pipelines, ETL, Business Insights.
What is Pruthak Brahmbhatt's email address?
Pruthak Brahmbhatt's email address is pr****@****ail.com
What schools did Pruthak Brahmbhatt attend?
Pruthak Brahmbhatt attended Sacred Heart University, Durham College, Madhav University Freedom Education.