Sai Chandan R

Sai Chandan R Email and Phone Number

Senior Data Engineer at UBS | Big Data | Python | GCP | PySpark | Spark SQL | Azure | AWS | Hadoop | Snowflake | ETL | SQL | Airflow | Agile
Zurich, Zurich, Switzerland
Sai Chandan R's Location
Milwaukee, Wisconsin, United States
About Sai Chandan R

Sai Chandan R is a Senior Data Engineer at UBS, working with Big Data, Python, GCP, PySpark, Spark SQL, Azure, AWS, Hadoop, Snowflake, ETL, SQL, and Airflow in an Agile environment.

Sai Chandan R's Current Company Details
UBS

Senior Data Engineer at UBS | Big Data | Python | GCP | PySpark | Spark SQL | Azure | AWS | Hadoop | Snowflake | ETL | SQL | Airflow | Agile
Zurich, Zurich, Switzerland
Website:
ubs.com
Employees:
81,377
Sai Chandan R Work Experience Details
  • UBS
    GCP Data Engineer
    UBS Dec 2021 - Present
    Weehawken, New Jersey, United States
    • Spearheaded the development of data pipelines using GCP BigQuery, Cloud Dataflow, and Apache Beam, ensuring efficient ETL processing and real-time analytics, improving data availability and decision-making.
    • Designed and implemented data migration workflows from on-premises databases to GCP Cloud Storage (GCS buckets) and BigQuery, leveraging Cloud Storage Transfer Service, reducing latency and enhancing system performance.
    • Developed Cloud Functions to automate data transformation processes, optimizing the scalability of data operations across diverse environments.
    • Built and maintained large-scale data lakes in GCP Dataproc using Spark and Scala, facilitating high-speed data processing and reducing processing time for large datasets.
    • Engineered real-time data streaming solutions using Cloud Pub/Sub and Cloud Dataflow, ensuring timely data delivery across distributed systems and improving system responsiveness.
    • Managed Snowflake environments for advanced SQL querying and analytics, integrating with GCP Cloud SQL and PostgreSQL to handle complex data needs for high-level reporting.
    • Created automated workflows for data orchestration using Cloud Composer, enhancing pipeline reliability, reducing manual intervention, and increasing overall efficiency.
    • Architected ETL pipelines with Cloud Spanner and Cloud SQL, ensuring high availability and scalability of data for cross-platform analytics.
    • Developed and executed data transformation processes using Python and Spark SQL in Databricks, streamlining data extraction and analysis for business intelligence applications.
    • Collaborated with business stakeholders to design custom dashboards and data visualizations using GCP BigQuery and Salesforce SOQL, improving data accessibility and user insights.
    • Utilized the bq command-line tool and gsutil for large-scale data ingestion and export, ensuring smooth data transfer and integration across systems.
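The transform-and-aggregate stage these pipelines describe can be sketched in plain Python. This is a minimal, library-free illustration of a keyed aggregation of the kind Beam/Dataflow pipelines perform; the record shape and field names (`user_id`, `amount`) are illustrative assumptions, not details from this profile:

```python
from typing import Any, Dict, Iterable


def clean_record(raw: Dict[str, Any]) -> Dict[str, Any]:
    """Normalize one raw event: trim the key, coerce the amount to float."""
    return {
        "user_id": str(raw.get("user_id", "")).strip(),
        "amount": float(raw.get("amount", 0) or 0),
    }


def total_by_user(records: Iterable[Dict[str, Any]]) -> Dict[str, float]:
    """Group-and-sum step, analogous to a keyed aggregation in a Beam pipeline."""
    totals: Dict[str, float] = {}
    for rec in records:
        cleaned = clean_record(rec)
        totals[cleaned["user_id"]] = totals.get(cleaned["user_id"], 0.0) + cleaned["amount"]
    return totals
```

In a real Dataflow job the cleaning and aggregation would be expressed as Beam transforms; the point here is only the clean-then-aggregate shape of the ETL step.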
  • Centene Corporation
    AWS Data Engineer
    Centene Corporation Jul 2019 - Nov 2021
    Missouri, United States
    • Led development of ETL pipelines on AWS using services such as EC2, S3, RDS, and DynamoDB to process and store healthcare data, ensuring scalable and secure data operations across the organization.
    • Designed and implemented data migration workflows from on-premises databases to AWS S3 and Snowflake, improving data accessibility and enhancing analytical capabilities for business teams.
    • Built and deployed real-time data streaming solutions using AWS Kinesis and SQS, optimizing the flow of data from various sources for real-time analytics and improving operational decision-making.
    • Automated infrastructure provisioning using CloudFormation, creating reusable templates for VPC, EC2, S3, and other resources, streamlining environment setup and reducing deployment time.
    • Managed and optimized data warehouses in Snowflake, leveraging AWS RDS for integration and ensuring effective data retrieval for business intelligence reports and dashboards.
    • Utilized AWS Lambda with Python and shell scripting for serverless processing tasks, automating data validation and transformation processes and increasing overall system efficiency.
    • Orchestrated and monitored data pipelines using AWS CodePipeline, integrating CodeBuild and CodeDeploy for continuous integration and deployment, ensuring faster release cycles and minimal downtime.
    • Configured and maintained AWS CloudWatch for monitoring and alerting on critical data processes, enhancing system reliability and allowing for proactive issue resolution.
    • Collaborated with cross-functional teams to develop containerized applications using Docker and deployed them on AWS ECS, improving scalability and management of a microservices-based architecture.
    • Integrated Bitbucket for version control and utilized Jenkins and Bamboo for continuous integration, streamlining development workflows and reducing errors during deployment.
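The data-validation step a Lambda consumer performs on a Kinesis/SQS batch can be sketched without any AWS SDK. This is a hedged, stdlib-only sketch; the required field names are assumed for illustration and are not taken from this profile:

```python
import json
from typing import Dict, List, Tuple

# Assumed event schema for illustration only.
REQUIRED_FIELDS = {"member_id", "event_type", "timestamp"}


def validate_events(raw_messages: List[str]) -> Tuple[List[Dict], List[str]]:
    """Split a batch of JSON messages into valid events and rejects,
    mirroring the validation step a streaming consumer might perform
    before loading records downstream."""
    valid: List[Dict] = []
    rejected: List[str] = []
    for msg in raw_messages:
        try:
            event = json.loads(msg)
        except json.JSONDecodeError:
            rejected.append(msg)
            continue
        if isinstance(event, dict) and REQUIRED_FIELDS.issubset(event):
            valid.append(event)
        else:
            rejected.append(msg)
    return valid, rejected
```

In production the rejects would typically be routed to a dead-letter queue rather than discarded.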
  • Empower Retirement
    Azure Data Engineer
    Empower Retirement Apr 2017 - Jun 2019
    Greenwood Village, Colorado, United States
    • Designed and developed scalable ETL pipelines using Azure Data Factory and Azure Synapse Analytics, facilitating the ingestion, transformation, and integration of retirement account data across multiple platforms for reporting and analytics.
    • Architected data lake solutions using Azure Data Lake and Cosmos DB, optimizing large-scale data storage for structured and unstructured data, improving data accessibility and query performance.
    • Implemented Azure Databricks for processing large datasets, utilizing Hadoop, Hive, and MapReduce to handle distributed computing, improving data processing times and enhancing operational efficiency.
    • Developed real-time data streaming solutions using Azure Event Hub and Azure Service Bus, enabling real-time monitoring of retirement transactions, enhancing system responsiveness and data accuracy.
    • Configured Azure Synapse Analytics (Data Warehouse) for optimized data storage and retrieval, improving query performance and facilitating complex analytical operations across the retirement platform.
    • Built and deployed data models in Power BI for financial reporting and analysis, integrating with Azure SQL to deliver business intelligence dashboards, improving decision-making for stakeholders.
    • Automated infrastructure deployment using Azure DevOps pipelines, managing continuous integration and continuous deployment (CI/CD) for data projects, reducing manual intervention and increasing system reliability.
    • Created and managed Azure Virtual Machines (VMs) for hosting data services, ensuring high availability and scalability of resources for data processing applications.
    • Developed and executed SQL queries for complex data transformations and analytics, optimizing queries on Teradata and Azure SQL, ensuring efficient data retrieval and analysis for business insights.
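Incremental ingestion of the kind Azure Data Factory pipelines orchestrate usually revolves around a watermark: load only the windows between the last successful load and now. A minimal sketch of that window calculation, in plain Python with an assumed one-hour window size:

```python
from datetime import datetime, timedelta
from typing import List, Tuple


def load_windows(
    last_watermark: datetime,
    now: datetime,
    window: timedelta = timedelta(hours=1),  # assumed window size
) -> List[Tuple[datetime, datetime]]:
    """Compute the [start, end) windows still to be loaded since the last
    successful watermark -- the pattern an incremental copy typically follows.
    Only complete windows are returned, so a partially elapsed window waits
    for the next run."""
    windows: List[Tuple[datetime, datetime]] = []
    start = last_watermark
    while start + window <= now:
        windows.append((start, start + window))
        start += window
    return windows
```

The caller would advance the stored watermark to the end of the last window only after that window loads successfully, which makes re-runs idempotent.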
  • Walkingtree Technologies
    Data Analyst
    Walkingtree Technologies Aug 2014 - Dec 2016
    India
    • Interpreted complex data sets using Informatica 8.1, identifying patterns and trends that supported strategic business decisions.
    • Utilized DataFlux for data cleansing and standardization, ensuring high data quality and integrity for accurate reporting.
    • Extracted, transformed, and loaded data from Oracle 9i databases, creating comprehensive datasets to support analytical processes.
    • Performed thorough data validation and testing using Quality Center 8.2, ensuring accuracy and reliability in analytical outcomes.
    • Developed and executed complex SQL queries for data retrieval and manipulation, supporting business intelligence and reporting needs.
    • Leveraged TOAD to manage and analyze database objects, optimizing performance and ensuring efficient data retrieval.
    • Designed and implemented PL/SQL scripts for ETL processes, streamlining data extraction, transformation, and loading workflows.
    • Integrated data from flat files into the data warehouse, facilitating seamless data flow for enhanced analysis.
    • Analyzed large datasets in Teradata, providing actionable insights that improved business operations and efficiencies.
    • Created detailed reports and dashboards to present data findings, enabling stakeholders to make informed decisions based on accurate information.
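The cleansing-and-standardization work described above (the kind of rules a DataFlux job applies) can be illustrated with a small Python sketch. The specific rules here (strip punctuation, collapse whitespace, title-case, dedupe) are assumptions for illustration, not the actual rules used:

```python
import re
from typing import Iterable, List


def standardize_name(raw: str) -> str:
    """Strip punctuation noise, collapse whitespace, and title-case a name --
    a typical standardization rule (rules here are illustrative assumptions)."""
    cleaned = re.sub(r"[^A-Za-z\s'-]", "", raw)
    cleaned = re.sub(r"\s+", " ", cleaned).strip()
    return cleaned.title()


def dedupe_keep_order(values: Iterable[str]) -> List[str]:
    """Remove duplicates after standardization while preserving first-seen order."""
    seen, out = set(), []
    for v in values:
        key = standardize_name(v)
        if key not in seen:
            seen.add(key)
            out.append(key)
    return out
```

Standardizing before deduplicating is what lets variants like `" john  SMITH "` and `"John Smith"` collapse to one record.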

Sai Chandan R Education Details

JNTUH College of Engineering, Hyderabad

Frequently Asked Questions about Sai Chandan R

What company does Sai Chandan R work for?

Sai Chandan R works for UBS.

What is Sai Chandan R's role at the current company?

Sai Chandan R's current role is Senior Data Engineer at UBS.

What schools did Sai Chandan R attend?

Sai Chandan R attended JNTUH College of Engineering, Hyderabad.

Who are Sai Chandan R's colleagues?

Sai Chandan R's colleagues are Philipp Beck, Esra Genccan, Cyril Moser, Joël Gauchat, Sam Postlethwaite, CFA, Bartlomiej Sedlak, and Yuriy Korolev.
