Chandu G
With over 9 years of dedicated experience in data engineering, I bring a wealth of expertise in big data implementations and cutting-edge technologies. My professional journey has been defined by a deep specialization in Spark technologies and a robust understanding of various big data ecosystems, including Hadoop, Airflow, Snowflake, Teradata, and Kafka. I have consistently leveraged my skills in Scala, Java, Python, SQL, T-SQL, and R to develop and optimize data solutions that drive significant business value.

One of my key achievements has been the successful migration of complex on-premises applications to cloud platforms such as AWS and Google Cloud. My proficiency in cloud services encompasses EC2, S3, AWS Glue, Google Cloud Dataflow, and Azure Data Factory. I have designed and executed efficient ETL processes using a range of tools including Informatica, Talend, SSIS, and AWS Glue, focusing on creating scalable and reliable data pipelines.

My expertise in Snowflake is extensive, covering advanced concepts like Resource Monitors, Role-Based Access Controls, Data Sharing, and Cross-Platform Database Replication. I have optimized Snowflake environments by tuning queries, managing Virtual Warehouses, and implementing Snowpipe for continuous data ingestion. My work with Snowflake has also involved designing data marts with dimensional data modeling, optimizing multi-cluster warehouses, and ensuring effective data governance practices.

In the realm of data analysis and visualization, I am adept at using tools such as Tableau, Power BI, and Data Studio to generate insightful reports and dashboards. I have applied Python's multi-threading capabilities to enhance data processing and leveraged Spark for advanced analytics.

My background also includes significant experience in automating data workflows and optimizing data pipelines with Python and SQL, as well as using Terraform for infrastructure management and Kubernetes for managing containerized applications.

I have a proven track record of working in Agile environments, collaborating with cross-functional teams to gather requirements, design data solutions, and deliver high-quality results. My approach combines technical acumen with a strategic mindset, ensuring that data management practices are aligned with business objectives and compliance standards.

Driven by a passion for leveraging data to solve complex challenges, I am committed to advancing data engineering practices and contributing to innovative projects.
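The multi-threaded data processing mentioned above can be illustrated with a minimal, hypothetical sketch using only Python's standard library; the `clean_record` transform and the record shape are invented for illustration:

```python
from concurrent.futures import ThreadPoolExecutor

def clean_record(record: dict) -> dict:
    """Hypothetical per-record transform: trim strings and drop null fields."""
    return {k: v.strip() if isinstance(v, str) else v
            for k, v in record.items() if v is not None}

def process_batch(records: list, workers: int = 4) -> list:
    """Apply the transform concurrently. Threads pay off when each record
    involves I/O (API calls, database lookups) rather than pure CPU work."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(clean_record, records))

rows = process_batch([{"name": "  Ada ", "dept": None, "id": 1}])
# rows == [{"name": "Ada", "id": 1}]
```

Because of the GIL, this pattern suits I/O-bound enrichment steps; CPU-bound transforms are usually pushed down into Spark instead.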
Walgreens
- Website: walgreens.com
- Employees: 95,752
Data Engineer | Walgreens | Nov 2022 - Present | Chicago, Illinois, United States

With a robust background in data engineering and architecture, I specialize in designing, optimizing, and managing complex data systems to drive business insights and efficiencies. My expertise spans multiple cloud platforms and data technologies, including Snowflake, AWS, Google Cloud Platform, and Databricks.

I have substantial experience with Snowflake, leveraging features such as Snowpipe, STREAMS, and TASKS to streamline data integration and processing. My work includes designing and implementing data storage solutions that ensure efficient access and retrieval of critical information, as well as integrating data from multiple sources using AWS S3 and Google Cloud Platform.

Skilled in developing and optimizing ETL pipelines, I efficiently extract, transform, and load data from various source systems into Snowflake. My expertise extends to handling nested JSON data, using Spark, PySpark, and SQL for data transformation, and implementing robust data processing solutions across platforms.

I excel in designing and developing data models and schemas for warehousing and analytics, using tools such as Snowflake and Talend. My proficiency in Power BI enables me to create insightful reports and visualizations, helping stakeholders derive actionable insights from complex datasets.

I am experienced in managing and optimizing databases including Oracle and SQL Server. My role involved migrating data from RDBMS to Hadoop, designing database solutions, and ensuring compliance with platform-specific security and performance controls.

- Successfully orchestrated the integration of the IQ tool with the Dynamic Framework, enhancing data processing efficiency.
- Implemented role-based access control in Snowflake, integrated with AWS services such as S3, EC2, Athena, and Glue.
- Enhanced data analytics capabilities by implementing Spark-based solutions on Google Cloud Platform and developed scalable data pipelines with Pentaho.
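The nested-JSON handling described above would in practice run inside Spark or a Snowflake `FLATTEN` query, but the core transformation can be sketched in plain Python; the sample payload is hypothetical:

```python
def flatten_json(obj: dict, parent: str = "", sep: str = "_") -> dict:
    """Recursively flatten nested dicts into a single-level dict whose
    keys join the original path, e.g. {"a": {"b": 1}} -> {"a_b": 1}."""
    flat = {}
    for key, value in obj.items():
        name = f"{parent}{sep}{key}" if parent else key
        if isinstance(value, dict):
            flat.update(flatten_json(value, name, sep))
        else:
            flat[name] = value
    return flat

row = flatten_json({"order": {"id": 7, "customer": {"name": "Ada"}}, "total": 9.5})
# row == {"order_id": 7, "order_customer_name": "Ada", "total": 9.5}
```

The flattened keys map naturally onto warehouse column names, which is what makes this shape convenient before loading into Snowflake.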
Data Engineer | Vizio | Jan 2021 - Oct 2022 | Dallas, Texas, United States

I am a seasoned Data Engineer with extensive experience in designing and implementing scalable data solutions using GCP's big data stack, including BigQuery, Cloud Dataflow, and Cloud Data Fusion. My expertise spans building and optimizing data marts in BigQuery for analytical reporting, developing complex data pipelines with Airflow using Bash and BigQuery operators, and crafting Python programs with Apache Beam for real-time data streaming from Pub/Sub to BigQuery.

I have led multiple data migration projects, moving data seamlessly from on-premises infrastructure to the cloud, utilizing tools like Sqoop, Cloud Data Fusion, and custom Python hooks. My deep understanding of cloud services extends to AWS, GCP, and Azure, where I have orchestrated data flows between these environments, ensuring efficient and reliable data integration.

In addition to my cloud expertise, I have a strong foundation in SQL optimization, having reduced operation times by 35% through the design and tuning of SQL queries and ETL processes. I am proficient in using visualization tools like Tableau to develop insightful reports and dashboards, helping stakeholders make data-driven decisions.

My experience also includes working with Apache SOLR for indexing and querying large datasets, handling production deployments with automation tools like Chef, and providing robust production support to resolve defects swiftly. I have implemented disaster recovery and backup strategies for Snowflake environments, ensuring business continuity and data integrity.

I have developed custom code in Python to tag tables and columns using Cloud Data Catalog, built applications for user provisioning, and designed star schemas in BigQuery to optimize data storage and retrieval. My hands-on experience with Matillion has enabled me to design, develop, and maintain complex ETL solutions that support large-scale data warehousing across cloud platforms.
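The Pub/Sub-to-BigQuery streaming mentioned above would in practice be an Apache Beam `DoFn`; the parsing logic at its heart can be sketched with the standard library alone. The message fields (`device_id`, `type`, `ts`) are hypothetical, not taken from the actual pipeline:

```python
import json
from datetime import datetime, timezone

def message_to_row(payload: bytes) -> dict:
    """Parse a Pub/Sub-style JSON payload into a flat BigQuery row,
    normalising the epoch-seconds timestamp to an ISO-8601 UTC string."""
    event = json.loads(payload.decode("utf-8"))
    return {
        "device_id": event["device_id"],             # hypothetical field names
        "event_type": event.get("type", "unknown"),  # default for missing type
        "event_ts": datetime.fromtimestamp(event["ts"], tz=timezone.utc).isoformat(),
    }

row = message_to_row(b'{"device_id": "tv-42", "type": "play", "ts": 0}')
# row == {"device_id": "tv-42", "event_type": "play",
#         "event_ts": "1970-01-01T00:00:00+00:00"}
```

In a real Beam job this function would sit inside a `ParDo` between the Pub/Sub read and the BigQuery write; keeping it a pure function makes it easy to unit-test outside the pipeline.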
Data Engineer | Bank of America | Jan 2017 - May 2019 | Hyderabad, Telangana, India

As a highly skilled Data Engineer with a deep focus on Azure, I bring extensive experience in architecting, developing, and implementing end-to-end data solutions that drive business transformation and efficiency. My expertise spans the entire Azure ecosystem, where I have successfully executed complex migration strategies and modernized traditional systems to leverage cloud-native capabilities.

I am experienced in designing and implementing sophisticated data transformations and pipelines using Azure Data Factory and PySpark within the Databricks environment. My hands-on experience includes orchestrating seamless data ingestion from diverse source systems, leveraging Azure Data Factory, and managing scalable, resilient data pipelines using Kubernetes, Apache Airflow, and Luigi.

In the realm of data storage and processing, I have implemented optimized solutions with ORC, Parquet, and text file formats, ensuring efficient compression and retrieval of large-scale datasets. My expertise also extends to real-time data streaming, where I have utilized Spark with Kafka to handle high-velocity data from web server logs and other sources, ensuring timely and accurate data processing.

I am proficient in implementing machine learning models and data science workflows using Spark MLlib within Azure Databricks, enabling advanced analytics and predictive modeling capabilities. My work also includes developing and deploying ETL pipelines in Azure Synapse Analytics, migrating on-premises processes to the cloud, and ensuring that data integration and transformation workflows are both robust and scalable.

I have a strong foundation in DevOps practices, having implemented CI/CD pipelines using Jenkins, GitLab CI/CD, and Azure DevOps to automate and streamline the development lifecycle. My experience in data visualization and reporting is demonstrated through the development of dynamic dashboards and visualizations using Power BI and SQL Server Reporting Services (SSRS).
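The web-server-log streaming described above hinges on a per-line parse that, in the Spark/Kafka pipeline, would run inside a streaming transformation. A stand-alone sketch of that core step, using the Common Log Format as an assumed input shape:

```python
import re
from typing import Optional

# Common Log Format: host ident authuser [timestamp] "method path proto" status size
LOG_PATTERN = re.compile(
    r'(?P<host>\S+) \S+ \S+ \[(?P<ts>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) \S+" (?P<status>\d{3}) (?P<size>\d+|-)'
)

def parse_log_line(line: str) -> Optional[dict]:
    """Parse one Common Log Format line into a dict, or None if malformed."""
    m = LOG_PATTERN.match(line)
    if not m:
        return None
    rec = m.groupdict()
    rec["status"] = int(rec["status"])
    rec["size"] = 0 if rec["size"] == "-" else int(rec["size"])
    return rec

rec = parse_log_line('10.0.0.1 - - [10/Oct/2023:13:55:36 +0000] '
                     '"GET /index.html HTTP/1.1" 200 2326')
# rec["status"] == 200 and rec["path"] == "/index.html"
```

Returning `None` for malformed lines (rather than raising) lets a streaming job filter bad records into a dead-letter path instead of failing the batch.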
Data Engineer | Amazon | Aug 2013 - Dec 2016 | Hyderabad, Telangana, India

I am an accomplished Data Engineer and IT professional with deep expertise in cloud computing, data warehousing, ETL processes, and big data technologies. My background includes extensive experience with AWS, Azure, and Snowflake, where I've designed and implemented scalable, high-performance data solutions that align with business requirements and goals.

Throughout my career, I have demonstrated a strong ability to conduct gap analyses, perform data modeling, and optimize data architectures. My proficiency in using tools like MS Visio for business flow diagrams and generating use cases and sequence diagrams ensures that I can clearly elucidate business processes from multiple perspectives. I have effectively used user stories to capture and describe business requirements, ensuring that application development aligns with these needs.

I have played a key role in data warehouse management, performing data analysis to optimize internal schemas for better performance and designing complex ETL processes using Oracle Data Integrator (ODI) and other tools. My technical skill set includes a strong command of SQL for database access and manipulation, and I have collaborated with subject matter experts (SMEs) to design and establish robust business architectures.

In the realm of cloud computing, I have leveraged AWS services like Aurora, DynamoDB, Lambda, Kinesis, and EMR to develop and manage data pipelines for real-time processing and analysis. I have successfully migrated on-premises applications to AWS, utilizing EC2, S3, and Redshift for data processing and storage, and performed end-to-end architecture and implementation assessments of various AWS services.

I am proficient in big data technologies, having built Spark/PySpark-based ETL pipelines and developed Spark Streaming applications to process data from Kafka topics into HBase.
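Writing Kafka events into HBase, as described above, usually involves a deliberate row-key design so that time-ordered writes do not hot-spot a single region. A common pattern, sketched here with hypothetical field names rather than the actual schema, prefixes the key with a hash-derived salt:

```python
import hashlib

def hbase_row_key(user_id: str, event_ts_ms: int, buckets: int = 16) -> bytes:
    """Build a salted HBase row key: a small hash-derived prefix spreads
    sequential writes across regions, while user id plus a zero-padded
    millisecond timestamp keeps one user's events adjacent and ordered
    within a bucket."""
    salt = int(hashlib.md5(user_id.encode()).hexdigest(), 16) % buckets
    return f"{salt:02d}|{user_id}|{event_ts_ms:013d}".encode()

key = hbase_row_key("user-123", 1700000000000)
```

Readers scanning one user's history must fan out over at most `buckets` prefixes, which is the usual trade-off accepted in exchange for balanced write load.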
Chandu G Education Details
- University of North Texas | Computer Science
- Jawaharlal Nehru Technological University, Anantapur | Computer Science
Frequently Asked Questions about Chandu G
What company does Chandu G work for?
Chandu G works for Walgreens.
What is Chandu G's role at the current company?
Chandu G's current role is Senior Data Engineer and Snowflake Developer (AWS, Azure, GCP, Kafka, Hadoop, SQL, Spark, Python, ETL, DBT, PySpark, CI/CD pipelines, data analysis); he is actively looking for Data Engineer roles on C2C and C2H.
What schools did Chandu G attend?
Chandu G attended the University of North Texas and Jawaharlal Nehru Technological University, Anantapur.
Who are Chandu G's colleagues?
Chandu G's colleagues are Sayma Kaiser, Daniel Miranda, Sue Woelfel, Chelsea Salinas, Catherine Selim, Jeff Rubin, Vickie Frey.