Chandan Reddy

Chandan Reddy Email and Phone Number

Actively looking out for opportunities | Data Engineer | Big Data | Cloud | Hadoop | ETL | Talend | SQL | Snowflake | Airflow | Databricks | BI | Tableau @ State of Montana
Helena, Montana, United States
Chandan Reddy's Location
Bridgewater, New Jersey, United States
About Chandan Reddy

• Around 12 years of experience in development, design, analysis, and implementation using SQL, the Hadoop ecosystem, R, Java, Python, and cloud services.
• Involved in the entire project life cycle, including data extraction, data cleaning, statistical modelling, and data visualization with large sets of structured and unstructured data.
• Developed applications and tools using Python throughout the SDLC, along with maintenance of web frameworks via Django.
• Extensively worked with Python 3.5/2.7 (NumPy, Pandas, Matplotlib, NLTK, and Scikit-learn).
• Experience with visualization tools such as Tableau for creating dashboards.
• Skilled in data parsing, manipulation, and preparation, including ETL, describing data contents, and computing descriptive statistics.
• Experience using Kubernetes to deploy, scale, load-balance, and manage Docker containers across multiple namespaced versions, with a good understanding of managing Docker containers and Kubernetes clusters.
• Strong SQL programming skills, with experience in functions, packages, and triggers.
• Worked in an Agile/Scrum environment to continuously develop Python applications according to organizational requirements.
• Proficiency in multiple databases, including MongoDB, Cassandra, MySQL, Oracle, and MS SQL Server.
• Expertise in developing reports and dashboards using various Tableau visualizations.
• Experience with Hadoop ecosystem components such as HDFS, YARN, MapReduce, Spark, Pig, Sqoop, Hive, Impala, HBase, Kafka, and crontab tooling.
• Expert in creating Hive UDFs in Java to analyze data sets with complex aggregate requirements.
• Experience developing ETL applications on large volumes of data using tools such as MapReduce, Spark (Scala), PySpark, Spark SQL, and Pig.
• Experience using Sqoop to import and export data between RDBMSs, HDFS, and Hive.
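The data-preparation bullet above (parsing raw input, then computing descriptive statistics) can be illustrated with a minimal stand-alone sketch using only the Python standard library; this is a generic example, not code from any project described here.

```python
import statistics

def describe(values):
    """Compute basic descriptive statistics for a numeric column."""
    return {
        "count": len(values),
        "mean": statistics.mean(values),
        "median": statistics.median(values),
        "stdev": statistics.stdev(values) if len(values) > 1 else 0.0,
        "min": min(values),
        "max": max(values),
    }

# Parse a raw comma-separated record, coerce to float, then describe it.
raw = "12.0, 15.5, 9.0, 22.25, 18.0"
column = [float(tok) for tok in raw.split(",")]
summary = describe(column)
```

In practice the same summary would typically come from `pandas.DataFrame.describe()` on larger datasets; the hand-rolled version just makes the computation explicit.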

Chandan Reddy's Current Company Details
State of Montana

Helena, Montana, United States
Website:
mt.gov
Employees:
2626
Chandan Reddy Work Experience Details
  • State of Montana
    Senior Data Engineer
    State of Montana Apr 2021 - Present
    United States
    • Conducted in-depth exploratory data analysis and used visualization techniques to uncover patterns within datasets.
    • Leveraged Python libraries including pandas, NumPy, smtplib, ggplot2, Matplotlib, and Statsmodels to experiment and derive real-world insights for organizational applications.
    • Executed Spark implementations through Python and Spark SQL over PuTTY, enabling interactive queries, streaming data processing, and integration with databases.
    • Built data pipelines using Python and PySpark.
    • Developed SQL scripts for aggregating data and scheduling data cleaning and loading processes.
    • Established an ETL framework using Spark and Hive, including daily runs, error handling, and comprehensive logging, to turn raw data into actionable insights.
    • Used DataStage Director for scheduling, validating, running, and monitoring jobs.
    • Used DataStage Designer to build processes for extracting, integrating, and loading data into staging tables.
    • Responsible for daily development of ETL pipelines in and out of the data warehouse, and for significant regulatory and financial reports built with advanced SQL queries in Snowflake.
    • Wrote, tested, and debugged SQL transformations using dbt (data build tool).
    • Configured EMR clusters for data ingestion, using dbt to transform data within Redshift.
    • Used Kubernetes to deploy, scale, load-balance, and manage Docker containers across multiple namespaced versions.
    • Orchestrated Docker images and containers through Kubernetes, creating master and node configurations.
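The ETL control flow described in this role (daily runs, error handling, comprehensive logging) can be sketched in simplified pure Python; the actual framework ran on Spark and Hive, so the functions below are hypothetical stand-ins for illustration only.

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
log = logging.getLogger("daily_etl")

def extract():
    # Stand-in for reading raw records (the real pipeline read from Hive).
    return [{"id": 1, "amount": "10.5"}, {"id": 2, "amount": "bad"}, {"id": 3, "amount": "7.0"}]

def transform(rows):
    """Cast fields, routing unparseable rows to an error bucket instead of failing the run."""
    clean, errors = [], []
    for row in rows:
        try:
            clean.append({"id": row["id"], "amount": float(row["amount"])})
        except ValueError:
            errors.append(row)
            log.warning("rejecting row %s: bad amount %r", row["id"], row["amount"])
    return clean, errors

def load(rows, target):
    # Stand-in for a warehouse write; here the target is just a list.
    target.extend(rows)
    log.info("loaded %d rows", len(rows))

def run_daily(target):
    """One daily run: extract, transform, load, and report rejects."""
    clean, errors = transform(extract())
    load(clean, target)
    return len(clean), len(errors)

warehouse = []
loaded, rejected = run_daily(warehouse)
```

The key design choice mirrored here is that bad records are quarantined and logged rather than aborting the whole daily load.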
  • Truist
    Data Engineer
    Truist Apr 2019 - Mar 2021
    Charlotte, North Carolina, United States
    • Orchestrated end-to-end systems for data analytics, data automation, and integration with custom visualization tools, using Hadoop and MongoDB.
    • Gathered data from diverse sources and shaped datasets for analysis through ETL tasks.
    • Crafted Neo4j queries and leveraged Spark to extract data from Neo4j databases as part of the ETL pipeline.
    • Designed Tableau visualizations, including bar graphs, scatter plots, and geographical maps, to generate detailed summary reports and dynamic dashboards.
    • Used the Snowflake cloud data warehouse and AWS S3 buckets, integrating data from various source systems, including nested JSON-formatted data within Snowflake tables.
    • Led the data ingestion process through DataStage, loading data into HDFS.
    • Applied ETL/ELT concepts using dbt and Snowflake for data integration, consolidation, enrichment, and aggregation.
    • Implemented Airflow to build complex data pipelines and to schedule and monitor workflows.
    • Loaded data into Snowflake from Python-based functional specifications, automating tasks through Airflow workflows written in Python.
    • Used GitLab CI and Jenkins for continuous integration (CI) and end-to-end automation, managing all build and continuous deployment (CD) processes.
    • Applied cross-validation techniques to assess model performance on varying data batches, tuning parameters for efficiency.
    • Designed and executed Talend jobs for file transfer between servers using Talend FTP components.
    • Developed ETL/Talend jobs, covering both design and code, to process data into target databases.
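Handling nested JSON, mentioned in the Snowflake bullet above, usually means flattening nested keys into columns (Snowflake does this natively with `LATERAL FLATTEN`). A minimal pure-Python illustration of the idea, with a made-up record:

```python
import json

def flatten(obj, prefix=""):
    """Flatten nested JSON into dotted column names, one dict level at a time."""
    out = {}
    for key, value in obj.items():
        name = f"{prefix}.{key}" if prefix else key
        if isinstance(value, dict):
            out.update(flatten(value, name))  # recurse into nested objects
        else:
            out[name] = value
    return out

# Hypothetical semi-structured record of the kind landed in a VARIANT column.
record = json.loads('{"id": 7, "customer": {"name": "Acme", "address": {"state": "NC"}}}')
row = flatten(record)
```

Dotted names like `customer.address.state` map naturally onto warehouse column names after the flatten step.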
  • Cerner Corporation
    Data Engineer
    Cerner Corporation Nov 2017 - Apr 2019
    Kansas City, Missouri, United States
    • Gathered required data from multiple sources and created datasets for analysis, alongside ETL tasks.
    • Developed and maintained scalable data pipelines for both streaming and batch requirements, and built out new API integrations to support continuing increases in data volume and complexity.
    • Produced data building blocks, data models, and data flows for varying client demands such as dimensional data, standard and ad hoc reporting, data feeds, dashboard reporting, and data science research and exploration.
    • Collaborated with the enterprise DevSecOps team and other internal organizations on CI/CD best practices, using JIRA, Jenkins, Confluence, and related tools.
    • Used Kubernetes as the runtime environment of the CI/CD system to build, test, and deploy, and to orchestrate the deployment, scaling, and management of Docker containers.
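A common pattern behind serving both streaming and batch requirements, as described above, is micro-batching a stream into fixed-size groups for bulk loading. A generic sketch with Python generators (the stream source here is invented; a real one might be a Kafka consumer):

```python
def batch(iterable, size):
    """Group a stream of records into fixed-size batches for bulk loading."""
    bucket = []
    for item in iterable:
        bucket.append(item)
        if len(bucket) == size:
            yield bucket
            bucket = []
    if bucket:  # flush the final partial batch
        yield bucket

# A generator stands in for a streaming source (e.g. a Kafka consumer).
stream = (f"event-{i}" for i in range(7))
batches = list(batch(stream, 3))
```

Because `batch` is itself a generator, it never materializes the whole stream, so the same code path serves unbounded streaming input and finite batch files alike.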
  • Broadridge
    Data Engineer
    Broadridge Oct 2015 - Nov 2017
    Lake Success, New York, United States
    • Spearheaded the implementation, administration, and management of robust Hadoop infrastructures, ensuring seamless operations and optimal performance.
    • Evaluated Hadoop infrastructure requirements and contributed to the design and deployment of solutions, including high-availability and big data clusters; monitored clusters and troubleshot Hadoop-related issues.
    • Handled diverse datasets for gathering and pre-processing, adapting to varied data sources.
    • Moved data bidirectionally between Teradata and HDFS.
    • Maintained a robust understanding of the Hadoop ecosystem, including HDFS, MapReduce, and HBase.
    • Leveraged Google Cloud services to perform big data analytics across platforms.
    • Participated in all phases of the software development lifecycle, from scope definition and design to implementation, deployment, and testing, with regular design and code reviews to maintain quality.
    • Developed scripts to load data into Google BigQuery and run queries for data export.
    • Hands-on experience with GCP, BigQuery, and GCS buckets, analyzing data stored in Google Cloud Storage using BigQuery.
    • Crafted MapReduce jobs to generate reports on daily activities, extracting data from multiple sources and storing output back in HDFS.
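The daily-activity MapReduce reports mentioned above follow the classic map/shuffle/reduce shape. A single-process pure-Python sketch of that shape (real jobs ran on Hadoop; the sample records are invented):

```python
from collections import defaultdict

def map_phase(records):
    """Emit (user, 1) pairs, mirroring a MapReduce mapper."""
    for rec in records:
        yield rec["user"], 1

def reduce_phase(pairs):
    """Group by key and sum counts, mirroring shuffle + reducer."""
    totals = defaultdict(int)
    for key, value in pairs:
        totals[key] += value
    return dict(totals)

# Hypothetical daily activity log entries.
activity = [
    {"user": "a", "action": "login"},
    {"user": "b", "action": "query"},
    {"user": "a", "action": "logout"},
]
report = reduce_phase(map_phase(activity))
```

The same count-by-key job distributes cleanly because each mapper output pair is independent and the reducer only needs the pairs sharing a key.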
  • Avon Technologies
    Data Engineer
    Avon Technologies Jan 2014 - Sep 2015
    Hyderabad, Telangana, India
    • Developed data mapping specifications and crafted and executed comprehensive system test plans; the mappings describe extracting data from the internal data warehouse, transforming it, and transmitting it to external entities.
    • Led data profiling and the formulation of diverse data quality rules, leveraging Informatica Data Quality to enhance data integrity and reliability.
    • Built and distributed tailored interactive reports and dashboards, with report scheduling on Tableau Server.
    • Analyzed business and system requirements and documented functional and supplementary requirements in Quality Center, informed by a thorough understanding of the data mapping specifications.
    • Tested complex ETL mappings and sessions against business user requirements and rules to load data from source flat files and RDBMS tables to target tables.
    • Owned diverse data mapping activities, ensuring seamless transitions from source systems to Teradata.
    • Created a test environment for the staging area and loaded it with data from multiple sources.
    • Analyzed a diverse array of data sources, including flat files, ASCII data, EBCDIC data, and relational data from Oracle, DB2 UDB, and MS SQL Server.
    • Built S3 buckets, managed their policies, and used them for storage and backup on AWS.
    • Imported legacy data from SQL Server and Teradata into Amazon S3.
    • Created new servers in AWS using EC2 instances and configured security groups and Elastic IPs for the instances.
    • Delivered data files in formats such as Excel, tab-delimited, comma-separated, and pipe-delimited text.
    • Developed a PDF parser in Java and Python for parsing structured data.
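Delivering tab-, comma-, and pipe-delimited files, as in the last bullets above, is a one-parameter change with the standard `csv` module. A small generic sketch (the sample rows are invented):

```python
import csv
import io

def export_delimited(rows, delimiter):
    """Write dict rows to a delimited text payload (tab, comma, or pipe)."""
    buffer = io.StringIO()
    writer = csv.DictWriter(
        buffer,
        fieldnames=list(rows[0]),   # header order follows the first row's keys
        delimiter=delimiter,
        lineterminator="\n",
    )
    writer.writeheader()
    writer.writerows(rows)
    return buffer.getvalue()

rows = [{"id": "1", "name": "alpha"}, {"id": "2", "name": "beta"}]
pipe_text = export_delimited(rows, "|")
tab_text = export_delimited(rows, "\t")
```

Keeping the delimiter as a parameter means one export path serves every downstream consumer's preferred format.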
  • Tata Consultancy Services
    Data Analyst
    Tata Consultancy Services May 2012 - Jan 2014
    Hyderabad, Telangana, India
    • Developed comprehensive test cases aligned with business requirements, referencing source-to-target detailed mapping documents and transformation rules documents.
    • Conducted extensive data validation through SQL queries and back-end testing.
    • Used SQL in a UNIX environment to query databases effectively.
    • Formulated distinct test cases for inbound and outbound ETL processes as well as reporting.
    • Collaborated closely with the design and development teams to implement project requirements.
    • Created and manually executed test scripts to validate and ensure the accuracy of expected results.
    • Designed and implemented ETL processes using Informatica, focusing on dimension and fact file creation.
    • Conducted diverse testing methodologies, including black-box (functional, regression, data-driven) and white-box (unit and integration) testing, covering positive and negative scenarios.
    • Tracked defects and reviewed, analyzed, and compared results using Quality Center.
    • Participated actively in MR/CR review meetings, contributing to issue resolution.
    • Defined the scope for system and integration testing.
    • Prepared and submitted summarized audit reports, taking corrective actions based on findings.
    • Uploaded master and transactional data from flat files and devised corresponding test cases for sub-system testing.
    • Documented and published test results, troubleshooting and escalating issues as necessary.
    • Tested the functionality of email notifications on ETL job failures, aborts, and data-related problems.
    • Identified, assessed, and communicated potential risks to testing scope, product quality, and schedule.
    • Created and executed test cases for ETL jobs, focusing on uploading master data to the repository.
    • Worked with Java MapReduce to append data, and used JSP and Struts to generate aggregate data.
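Back-end data validation of the kind described above typically reconciles row counts and control totals between source and target. A self-contained sketch using in-memory SQLite as a stand-in (real testing ran against RDBMSs such as Oracle or Teradata; tables and values here are invented):

```python
import sqlite3

# In-memory stand-in databases for a source-to-target reconciliation check.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE source (id INTEGER, amount REAL)")
cur.execute("CREATE TABLE target (id INTEGER, amount REAL)")
cur.executemany("INSERT INTO source VALUES (?, ?)", [(1, 10.0), (2, 20.0), (3, 30.0)])
cur.executemany("INSERT INTO target VALUES (?, ?)", [(1, 10.0), (2, 20.0), (3, 30.0)])

def check_counts():
    """Row-count reconciliation between source and target."""
    src = cur.execute("SELECT COUNT(*) FROM source").fetchone()[0]
    tgt = cur.execute("SELECT COUNT(*) FROM target").fetchone()[0]
    return src == tgt

def check_sums():
    """Control-total check on a numeric column."""
    src = cur.execute("SELECT SUM(amount) FROM source").fetchone()[0]
    tgt = cur.execute("SELECT SUM(amount) FROM target").fetchone()[0]
    return src == tgt
```

Count and control-total checks catch dropped or duplicated rows cheaply before any row-by-row comparison is attempted.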

Chandan Reddy Education Details

Osmania University, Hyderabad

Frequently Asked Questions about Chandan Reddy

What company does Chandan Reddy work for?

Chandan Reddy works for the State of Montana.

What is Chandan Reddy's role at the current company?

Chandan Reddy's current role is Actively looking out for opportunities | Data Engineer | Big Data | Cloud | Hadoop | ETL | Talend | SQL | Snowflake | Airflow | Databricks | BI | Tableau.

What schools did Chandan Reddy attend?

Chandan Reddy attended Osmania University, Hyderabad.

Who are Chandan Reddy's colleagues?

Chandan Reddy's colleagues are Kendall Coniglio, Mckenna Sheridan, Terry Johnson, Pamela Seal, Corene Andrews, Sara N., Lois Vallance.
