I am a customer-focused, results-driven IT professional with over 16 years of hands-on experience building complex, scalable big data pipelines for enterprise data lakes and warehouses. My work enables organizations to develop robust business intelligence solutions for effective data-driven decision-making.

My expertise spans a wide range of technologies and tools, including Snowflake, Databricks, Hadoop, HDFS, Hive, Presto, AWS Glue, AWS Athena, Apache Spark, Python, PySpark, Spark SQL, SQL, RDBMS, MPP OLAP, Teradata, ETL/ELT, dbt, Unix, shell scripting, AWS EMR, S3, AWS Cloud, GitHub, Jenkins, Airflow, Azkaban, CircleCI, Power BI, Tableau, and JIRA.

I have extensive experience processing large-scale enterprise data and am deeply involved in data modeling, SQL query optimization, and performance improvement. I have a strong understanding of data warehousing concepts, including fact and dimension tables, star and snowflake schemas, OLTP versus OLAP workloads, and partitioning techniques.

Throughout my career, I have actively participated in the full Software Development Life Cycle (SDLC) for numerous projects, covering requirements gathering, development, unit testing, and quality assurance. I have worked extensively with MPP OLAP data warehousing systems such as Teradata and Redshift, as well as the Snowflake cloud data warehouse and the Databricks data lakehouse.

I have a proven track record of pulling data from transactional databases such as Oracle, Salesforce, MySQL, Postgres, and Cassandra, and I am skilled in data analytics and visualization tools such as Tableau, Anaconda, Jupyter Notebook, R, and ggplot. My experience also includes building streaming data pipelines with Apache Kafka and AWS Kinesis, and collaborating closely with data scientists and analysts to meet their data needs.

I have a foundational understanding of data science concepts, including machine learning algorithms based on decision trees and linear regression. In addition to my technical expertise, I have over 5 years of experience as a manager and technical lead, taking on responsibilities such as technical evaluation, team building, and code review alongside hands-on development. I am well versed in agile methodologies, conducting daily stand-ups, sprint grooming, planning, and retrospective sessions using JIRA.
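The fact/dimension and star-schema concepts mentioned above can be illustrated with a minimal sketch. All table names and values here are hypothetical, and SQLite stands in for a real warehouse only so the example is self-contained:

```python
import sqlite3

# In-memory database: a hypothetical star schema with one fact table
# (fact_orders) surrounded by dimension tables (dim_date, dim_product).
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.executescript("""
CREATE TABLE dim_date    (date_id INTEGER PRIMARY KEY, year INTEGER, month INTEGER);
CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, category TEXT);
CREATE TABLE fact_orders (order_id INTEGER PRIMARY KEY,
                          date_id INTEGER REFERENCES dim_date(date_id),
                          product_id INTEGER REFERENCES dim_product(product_id),
                          amount REAL);

INSERT INTO dim_date    VALUES (1, 2023, 1), (2, 2023, 2);
INSERT INTO dim_product VALUES (10, 'laptop'), (20, 'monitor');
INSERT INTO fact_orders VALUES (100, 1, 10, 1200.0),
                               (101, 1, 20, 300.0),
                               (102, 2, 10, 1100.0);
""")

# A typical OLAP rollup: revenue by month and category, joining the
# fact table to its dimensions.
rows = cur.execute("""
    SELECT d.month, p.category, SUM(f.amount) AS revenue
    FROM fact_orders f
    JOIN dim_date d    ON f.date_id = d.date_id
    JOIN dim_product p ON f.product_id = p.product_id
    GROUP BY d.month, p.category
    ORDER BY d.month, p.category
""").fetchall()

for month, category, revenue in rows:
    print(month, category, revenue)
```

The same pattern scales up directly: in an MPP warehouse such as Teradata, Redshift, or Snowflake, the fact table would additionally be partitioned or distributed on the join keys so the rollup runs in parallel.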
-
Staff Engineer, Data | Indigo | Texas, United States
Staff Data and Analytics Engineer | Indigo | Sep 2021 - Mar 2024 | Boston, MA, US

IndigoAG - harnessing nature to help farmers sustainably feed the planet. Indigo develops biological and digital technologies that improve grower profitability, environmental sustainability, and consumer health. These technologies underpin its pioneering business model, which spans agriculture's full value chain.

I am part of Indigo's Data Platform team and work on the data analytics side of crop marketing. My day-to-day work includes data modeling, building ETL and ELT data pipelines, developing data marts, collaborating with data scientists, data analysts, and business intelligence teams, creating solution architecture and data flow diagrams, and writing documentation. Our technology stack includes the Snowflake cloud data warehouse, Apache Spark for the data lake, dbt for data modeling and ETL, CircleCI for CI/CD, Airflow for workflow orchestration, AWS cloud services, GitHub for version control, Confluence for collaborative documentation, and JIRA for agile sprint methodologies.
Lead Data Engineer | PlayStation | Dec 2019 - Sep 2021 | San Mateo, California, US

Gaming is the biggest and most profitable entertainment industry today, and PlayStation has been a pioneer in the business for the last 25 years. With over 100 million PlayStation 4 (PS4) hardware consoles sold and more than 100 million registered users on the PlayStation Network, big data challenges are inevitable. I am part of the Customer and Content Data Engineering team, responsible for building complex, scalable, and optimal big data pipelines using our technology stack: Apache Spark, Python, PySpark, Spark SQL, Athena, Airflow, Git, and Jenkins, backed by AWS cloud technology.
Senior Data Engineer | Grubhub | Sep 2017 - Dec 2019 | Chicago, Illinois, US

Grubhub is the nation's leading online and mobile food ordering company, dedicated to connecting hungry diners with local takeout restaurants. It serves about 20 million active diners, connecting them to 80,000+ local restaurants and processing 500,000+ orders per day. With the exponential growth the company is now seeing, big data problems are inevitable. Viewed through the 4 V's of big data:

Volume - As the nation's leading food delivery platform, serving about 20 million diners across 1,600+ cities, huge data volume is a given.
Velocity - The number of orders processed daily stands at ~500,000. Each order touches multiple facets of data, such as order management, restaurant connections, diners, and delivery, which means data accumulates at a high rate. To study the dynamic nature of the data, real-time and near-real-time analytics is taking the lead over daily batch processes.
Variety - As a highly data-driven company, top-level management wants a 360-degree view of every entity in Grubhub's ecosystem. To achieve this, the doors are open to all varieties of data, from geo information to clickstream data, in formats ranging from text and raw images to structured and unstructured records.
Veracity - Forecasting metrics is challenging due to continuous shifts in daily data trends, so exploring historical data is essential to resolve inconsistency in forecasting stable metrics.

Given all four V's above, Grubhub is a perfect big data case study, and we adopt cutting-edge technologies as we evolve.
Big data, Hadoop, HDFS, Hive, Sqoop, Pig, Presto, Apache Spark, Python, PySpark, Spark SQL, Spark DataFrames, AWS EMR, AWS S3, AWS Redshift, ETL, Unix, SQL, OLAP, RDBMS, MySQL, cloud, GitHub, Jenkins, Azkaban, JIRA, and Redash/Tableau are a few of the tools and technologies we use on a daily basis.
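The daily batch side of a pipeline like the one described above can be sketched in plain Python. The field names and figures here are hypothetical, and in practice this rollup would run as a PySpark job on EMR rather than in-process:

```python
from collections import defaultdict
from datetime import date

# Extract: hypothetical raw order events, as they might land in S3
# from the ordering platform.
raw_orders = [
    {"order_id": 1, "city": "Chicago",  "day": date(2019, 6, 1), "total": 25.40},
    {"order_id": 2, "city": "Chicago",  "day": date(2019, 6, 1), "total": 18.75},
    {"order_id": 3, "city": "New York", "day": date(2019, 6, 1), "total": 32.10},
]

# Transform: roll events up to one row per (day, city), the grain a
# daily batch job would typically load into the warehouse.
agg = defaultdict(lambda: {"orders": 0, "revenue": 0.0})
for o in raw_orders:
    key = (o["day"], o["city"])
    agg[key]["orders"] += 1
    agg[key]["revenue"] += o["total"]

# Load: here we just materialize the rows; a real job would write them
# to Redshift, or to S3 partitioned by day for Hive/Presto to query.
daily_city_metrics = [
    {"day": day, "city": city, **vals}
    for (day, city), vals in sorted(agg.items())
]
for row in daily_city_metrics:
    print(row)
```

The same grouping expressed as a Spark DataFrame would be a `groupBy("day", "city")` with `count` and `sum` aggregations, which is what lets the job scale past what one machine can hold in memory.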
DW Architecture, Design, and Development - NDW Fusion | Comcast | Nov 2016 - Aug 2017 | Philadelphia, PA, US

Comcast Corp, a Fortune 50 company, operates its business through five reportable segments: Cable Communications, Cable Networks, Broadcast Television, Filmed Entertainment, and Theme Parks. The company provides video, high-speed internet, and phone services to residential and business customers in the United States. Given its vast variety of product and service offerings to millions of customers, an enormous amount of data is generated by its daily operations, spanning state-of-the-art billing systems, CRM, support services, etc. This data is fed into NDW, Comcast's enterprise data warehouse. The objective of NDW is to act as a single source of truth by consolidating historical data for the organization's reporting and analytical needs. The Fusion program is an enhancement initiative on top of the existing NDW. Data ingestion, performance engineering, ETL design and development, data quality management, DW storage forecasting and space optimization, and evaluating open-source tools such as Presto are a few of my daily duties here.

Teradata, big data, Hadoop, HDFS, Hive, Sqoop, Pig, Presto, Apache Spark, Python, PySpark, Spark SQL, Spark DataFrames, AWS EMR, AWS S3, AWS Redshift, ETL, Unix, SQL, OLAP, and RDBMS are a few of the tools and technologies we use on a daily basis.
Solution Architect | Technical Product Owner | Dell | Mar 2015 - Oct 2016 | Round Rock, Texas, US

As a Solution Architect for Marketing Applications within Dell's Enterprise Business Intelligence organization, I had the opportunity to technically lead a 20-member team of data architects, ETL architects, and ETL developers. As a team, we focused on building marketing BI applications to widen the reach of Dell's marketing contacts. Lead generation, lead nurturing using Eloqua, analyzing buying-power characteristics based on D&B firmographics data, and sending data extracts to Merkle and Epsilon (third-party digital marketing firms) for email campaigns were a few of our main focus areas. We started building a 360-degree customer dashboard on Hadoop big data to help Dell's sales reps with a one-click overview of an existing customer's transaction history. We also actively fed data to the Octane recommendation engine and predictive analytics based on R nearest-neighbor analysis. As a solution architect, I created high-level data flow diagrams and solution architecture design documents, and held interlocks with solution architects from other domains to discuss, design, and develop organization-level business intelligence methodologies. Beyond daily duties, I participated in forecasting Teradata space needs and assessing whether the existing DW infrastructure needed upgrading, and I performed technical evaluations and PoC creation with new tools such as Teradata Aster, Presto, Cloudera CDH, and Tableau.

Teradata, big data, Hadoop, HDFS, Hive, Sqoop, Pig, Presto, Apache Spark, Python, PySpark, SQL, MySQL, cloud, and Control-M are a few of the tools and technologies we used on a daily basis.
Business Intelligence / Data Warehousing Engineer | AT&T | Sep 2009 - Feb 2015 | Dallas, TX, US

My responsibilities included:
- Design and development of ETLs using Teradata utilities such as TPT, MLoad, and FastExport.
- Formulation and execution of application development procedures.
- Preparation and maintenance of unit test cases and plans.
- Installation and management of ETL modules in adherence to design specifications.
- Testing and scheduling ETL workflows for Teradata sources.
- Analyzing ETL workflows to ensure process performance.
- Writing Tivoli TWS files for scheduling jobs and processes.
- Providing technical guidance to IT teams throughout the SDLC.
- Providing Tier 1, 2, and 3 development support to the QA and operations teams.
- Production deployment activities and checklists.
- Providing post-deployment support and addressing performance issues while the application was live.
- Leading an 8-person team, distributing work, and holding scheduled status meetings with clients.
- Team building, internal cross-training, and conducting technical interviews.
ETL Developer | Tata Consultancy Services | Sep 2007 - Aug 2009 | Mumbai, Maharashtra, IN

- Designed and developed Informatica ETL mappings and workflows to extract data from third parties.
- Designed and developed ETLs using Teradata utilities such as TPT, MLoad, and FastExport.
- Formulated and executed application development procedures.
- Prepared and maintained unit test cases and plans.
- Installed and managed ETL modules in adherence to design specifications.
- Tested and managed ETL workflows for Oracle, DB2, and Teradata sources.
- Created and updated plans for ETL mappings and database application code.
- Analyzed ETL workflows to ensure process performance.
- Wrote Tivoli TWS files for scheduling jobs and processes.
- Provided technical guidance to IT teams throughout the SDLC.
- Provided development support to the QA team.
- Production deployment activities and checklists.
- Provided post-deployment support and addressed performance issues while the application was live.
Naresh Yegireddi - Education
-
Motilal Nehru National Institute of Technology | Electrical Engineering - Power Electronics
GMR Institute of Technology | Electrical and Electronics