Chaitanya M.

Chaitanya M. Email and Phone Number

Data Engineer | Big Data Developer | Software Engineer @ UBS
Zurich, Switzerland
Chaitanya M.'s Location
United States
About Chaitanya M.

I am a Data Engineer with over a decade of experience designing and implementing robust data solutions across industries including finance and telecommunications. My expertise spans the full Software Development Life Cycle (SDLC), with a specialization in big data technologies and cloud environments.

In recent roles, I have enhanced data operations by deploying cloud solutions, primarily on Microsoft Azure, to optimize data storage, integration, and analysis. My Azure work centers on building and maintaining scalable data pipelines, using Azure Data Factory, Azure Synapse, and Azure Databricks to move data efficiently from on-premises systems to the cloud, including high-performance solutions that integrate with Azure's compute, storage, and networking services.

I also have a deep understanding of Hadoop ecosystems and real-time data processing with Spark and Kafka, complemented by strong analytical skills and a strategic approach to complex data challenges.

I am passionate about using cloud technologies to drive business transformation and operational excellence. My goal is to keep innovating in this space, delivering solutions that exceed business expectations while contributing to a culture of technical excellence.

Chaitanya M.'s Current Company Details
UBS

Data Engineer | Big Data Developer | Software Engineer
Zurich, Switzerland
Website:
ubs.com
Employees:
81,377
Chaitanya M. Work Experience Details
  • UBS
    Azure Data Engineer
    UBS, Dec 2022 - Present
    New York, United States
    • At UBS, I led pivotal cloud transformation initiatives, overseeing the migration and optimization of data management systems to Microsoft Azure.
    • I developed robust, end-to-end data pipelines with Azure Data Factory and Azure Synapse, significantly improving data orchestration from on-premises systems to cloud platforms.
    • These efforts delivered a 30% improvement in data retrieval times and a 15% reduction in storage costs.
    • Collaborating closely with cross-functional teams, I integrated data systems with ERP platforms, streamlining manufacturing processes and improving inventory management.
    • I implemented advanced analytics solutions on Azure Databricks, enabling deeper insight into customer usage patterns and supporting strategic, data-driven decision-making.
    • These contributions advanced UBS's technological capabilities and its position in financial services innovation.
  • The Bank of New York Mellon (BNY Mellon), N.A.
    Data Engineer
    The Bank of New York Mellon (BNY Mellon), N.A., Mar 2021 - Nov 2022
    New York, United States
    • At BNY Mellon, I led the integration and optimization of data solutions on Azure, significantly enhancing the firm's data analytics and management capabilities.
    • I designed and implemented comprehensive data integration processes using Azure Data Factory, Azure Synapse, and Azure Databricks, streamlining the movement and analysis of large datasets.
    • I collaborated closely with internal teams to ensure seamless data flow and integration across cloud platforms, supporting the firm's digital transformation strategy.
    • I advanced the company's data visualization capabilities by deploying Azure Databricks dashboards and Power BI, providing actionable insights for strategic business decisions.
    • I optimized data storage and computation in Azure, improving system performance and cost efficiency.
    • I automated data pipelines, increasing data processing speeds and enabling more efficient data handling and reporting.
    • I helped strengthen data security by applying Azure's security frameworks to safeguard sensitive financial data.
    • I contributed to migrating legacy systems to modern cloud-based solutions, reducing infrastructure complexity and setting new benchmarks for data governance and compliance.
  • Comcast
    Data Engineer
    Comcast, Jan 2020 - Feb 2021
    Colorado, United States
    • At Comcast, I enhanced data processing and management systems, focusing on high-performance ETL operations and automation.
    • I developed and maintained over 3,000 data ingestion and processing jobs, ensuring performance and reliability across production, testing, and development environments, and resolved complex ingestion failures to improve system stability.
    • I designed and implemented Spark programs in both PySpark and the Scala API, significantly optimizing data extraction and improving the performance of Comcast's ETL tooling.
    • I used Apache Spark with Scala and Spark SQL to speed up testing and processing of data, streamlining data operations.
    • I carried out extensive data migration work using Apache Pig scripts and Hive to manage large datasets, and built data pipelines in Pig and Hive that supported complex transformations and analytics for business decision-making.
    • As an administrator of Comcast's data ingestion tools, I managed access and training for new users, ensuring effective use of the data processing infrastructure.
    • Throughout my tenure, I applied strong problem-solving skills and deep knowledge of big data technologies to advance Comcast's data strategy and help the company leverage its data assets.
  • AT&T
    Data Engineer/Hadoop Developer
    AT&T, Jul 2017 - Dec 2020
    Texas, United States
    • At AT&T, I contributed to cloud transformation efforts by helping lead the migration of AT&T's 5G core network to Microsoft Azure, a pivotal shift in network management and operational efficiency.
    • I helped deploy a network cloud platform supporting real-world 5G services, improving network reliability and customer service metrics.
    • I led cloud architecture optimization, reducing engineering and development costs and enabling the rapid development and launch of new services built on 5G and edge computing.
    • Working with over 100 data sources, I implemented complex ETL jobs on Spark, handling datasets up to petabytes in size and improving data availability and system performance by 40%.
    • I advanced data security and compliance across cloud platforms, implementing encryption and data management practices aligned with regulatory standards.
    • I improved data ingestion processes, cutting integration time by over 75% and accelerating decision-making.
    • I developed and supported real-time data processing with Spark Streaming and Kafka, making streaming data available for analysis and deeper customer insight.
  • Regions Bank
    Hadoop Developer
    Regions Bank, Jan 2014 - Jun 2017
    Nashville, Tennessee, United States
    • At Regions Bank, I made substantial contributions to the bank's big data processing capabilities, focusing on data architecture and streamlined operations.
    • I led the development and optimization of end-to-end big data solutions on the Hadoop ecosystem, significantly improving data processing efficiency and system scalability.
    • I architected and implemented robust data flow systems integrated with internal and client-facing platforms, enabling high-performance analytics that supported strategic business decisions.
    • I optimized Hive scripts and applied HDFS compression mechanisms effectively, improving query performance and reducing storage costs.
    • I developed Spark applications in Scala with Spark SQL and Spark Streaming for faster processing on Cloudera's Hadoop YARN, and migrated complex MapReduce programs and Hive scripts to Spark RDD transformations and actions for a more streamlined, cost-effective pipeline.
    • I managed the transfer of large datasets with Sqoop from Netezza and other RDBMS sources into Hadoop, enabling more integrated, near-real-time analytics.
    • I provided production support, ensuring the stability and reliability of the data processing environment and resolving issues promptly to maintain operational efficiency.
  • Citadel
    Hadoop Developer
    Citadel, Oct 2011 - Aug 2012
    India
    • At Citadel, I helped apply cloud technologies to high-performance computing for medical research, in a collaborative project with Harvard University and Google Cloud.
    • The study developed digital twins of therapeutic devices for treating circulatory diseases, using Google Cloud's computational capacity.
    • I helped demonstrate that public cloud resources can run large-scale, high-fidelity simulations previously confined to supercomputers.
    • I managed the integration of complex simulations that replicated supercomputer workloads on the public cloud at nearly 80% of the efficiency of dedicated supercomputer facilities, reducing the time and cost of traditional research methods.
    • I supported the deployment of sophisticated algorithms and computational models that enabled detailed analyses of heart disease treatments, demonstrating the practicality and scalability of cloud resources for high-stakes medical research.
    • I also optimized cloud-based data management and analytics platforms, improving the performance of data-heavy simulations by moving them to Google Cloud and enhancing the firm's quantitative research capabilities.
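The end-to-end pipelines described in the UBS role above typically move data incrementally rather than reloading everything. As an illustration only (the table rows, column names, and timestamps below are hypothetical, not taken from the profile), the watermark-based extraction pattern that an Azure Data Factory pipeline commonly orchestrates can be sketched in plain Python:

```python
# Hypothetical stand-ins for a source table and a stored watermark;
# in Azure Data Factory this state would live in a control table or pipeline variable.
SOURCE_ROWS = [
    {"id": 1, "modified": "2024-01-01T09:00:00"},
    {"id": 2, "modified": "2024-01-02T10:30:00"},
    {"id": 3, "modified": "2024-01-03T11:45:00"},
]

def incremental_extract(rows, last_watermark):
    """Return rows changed since the watermark, plus the new watermark value.

    ISO-8601 timestamps compare correctly as strings, so no parsing is needed.
    """
    fresh = [r for r in rows if r["modified"] > last_watermark]
    new_watermark = max((r["modified"] for r in fresh), default=last_watermark)
    return fresh, new_watermark

batch, watermark = incremental_extract(SOURCE_ROWS, "2024-01-01T12:00:00")
print(len(batch), watermark)  # two rows are newer than the watermark
```

Each run copies only the fresh rows and persists the new watermark, which is what keeps retrieval times and storage costs down in repeated loads.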
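The Spark ETL work described in the Comcast role follows the classic map, filter, and reduce-by-key pattern. A minimal pure-Python sketch of that flow (the job names and record format are invented for illustration; a real job would distribute this across a Spark cluster rather than run it in-process):

```python
from collections import defaultdict

# Invented ingestion-log records in "job,status,rows_loaded" form.
records = [
    "jobA,success,120",
    "jobB,failure,0",
    "jobA,success,95",
    "jobC,failure,0",
]

# map: parse each CSV line into a tuple
parsed = [tuple(line.split(",")) for line in records]
# filter: keep only failed runs, keyed by job name
failures = [(job, 1) for job, status, _rows in parsed if status == "failure"]
# reduceByKey: sum failure counts per job
failure_counts = defaultdict(int)
for job, n in failures:
    failure_counts[job] += n

print(dict(failure_counts))  # {'jobB': 1, 'jobC': 1}
```

In PySpark the same three steps would be `rdd.map(...).filter(...).reduceByKey(...)`; the logic is identical, only the execution is distributed.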
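The Spark Streaming and Kafka work mentioned in the AT&T role aggregates events over time windows. A simplified sketch of tumbling-window counting in plain Python (the event timestamps and IDs are hypothetical; a real streaming job adds micro-batching, watermarking, and fault tolerance on top of this core idea):

```python
from collections import Counter

# Hypothetical event stream: (epoch_seconds, user_id) pairs.
events = [(0, "u1"), (12, "u2"), (34, "u1"), (61, "u3"), (75, "u1")]

def tumbling_window_counts(stream, window_s=60):
    """Count events per fixed-size, non-overlapping time window."""
    counts = Counter()
    for ts, _user in stream:
        counts[ts // window_s] += 1  # integer division assigns the window index
    return dict(counts)

print(tumbling_window_counts(events))  # {0: 3, 1: 2}
```

Window index 0 covers seconds 0-59 and index 1 covers 60-119, so three events land in the first minute and two in the second.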
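The Sqoop transfers in the Regions Bank role parallelize an import by splitting the source table's key range across mappers. A rough sketch of that splitting idea (simplified; actual Sqoop derives the min/max bounds from a query against the source database):

```python
def split_ranges(min_id, max_id, num_mappers):
    """Split an inclusive numeric key range into contiguous chunks,
    one per mapper, the way a parallel import divides its work."""
    span = max_id - min_id + 1
    size = -(-span // num_mappers)  # ceiling division
    ranges = []
    lo = min_id
    while lo <= max_id:
        hi = min(lo + size - 1, max_id)
        ranges.append((lo, hi))
        lo = hi + 1
    return ranges

print(split_ranges(1, 100, 4))  # [(1, 25), (26, 50), (51, 75), (76, 100)]
```

Each mapper then issues its own bounded query (e.g. `WHERE id BETWEEN 26 AND 50`), which is what lets a single import saturate several database connections at once.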

Chaitanya M. Education Details
The University of Memphis

Frequently Asked Questions about Chaitanya M.

What company does Chaitanya M. work for?

Chaitanya M. works for UBS.

What is Chaitanya M.'s role at the current company?

Chaitanya M.'s current role is Data Engineer | Big Data Developer | Software Engineer.

What schools did Chaitanya M. attend?

Chaitanya M. attended The University of Memphis.

Who are Chaitanya M.'s colleagues?

Chaitanya M.'s colleagues are Bardh Osmani, Renato Yamaguti, Hanseong Kim, Brandon Mays, Acma Manish Meghwani, Magdalena Milewska, Karina Joyner.
