Yashwanth Boddu

Yashwanth Boddu Email and Phone Number

Data Engineer at Union Pacific Railroad
1400 Douglas Street, Omaha, NE 68179, US
Yashwanth Boddu's Location
Omaha, Nebraska, United States
About Yashwanth Boddu

Experienced professional with 6 years in design, development, testing, and architecture, building enterprise-scale applications and semantic layers using big data technologies. Strong functional knowledge of healthcare domains: providers, members, and claims.

Yashwanth Boddu's Current Company Details
Union Pacific Railroad

Data Engineer at Union Pacific Railroad
1400 Douglas Street, Omaha, NE 68179, US
Website:
up.com
Employees:
10
Yashwanth Boddu Work Experience Details
  • Union Pacific Railroad
    Big Data Engineer (Azure)
    Union Pacific Railroad Feb 2023 - Present
    Omaha, NE, US
  • Molina Healthcare
    Data Engineer/Hadoop Developer
    Molina Healthcare Dec 2017 - Dec 2022
    Long Beach, California, US
    • Responsible for building scalable, distributed data solutions using Hadoop. Handled importing of data from various sources using Attunity, and loaded it into the core data platform using Talend.
    • Worked on data loads with Talend to ensure all data was available to the pipeline, transformed the data using Spark SQL and Scala, validated it in the UAT environment, and deployed the pipeline to production.
    • Validated the pipeline against source data using Microsoft SQL Server Management Studio before the production release.
    • Experienced in loading and transforming large sets of structured and semi-structured data.
    • Coordinated with the business team to verify data correctness and implemented the agreed business logic.
    • Interfaced with business analysts and project managers to determine requirements and programming specifications for software that solves the business need; worked effectively with design teams to ensure solutions elevated the client-side experience.
    • Used JIRA for project management, Git for source code management, Jenkins for continuous integration, and Crucible for code reviews.
    • Attended grooming sessions and proposed business logic; worked on provider data integrity by writing transformations (called "rules" in the project), getting approval from product owners, implementing the rules in production, and providing a user interface to the business with Power BI.
    • Worked with Eclipse to orchestrate execution of other scripts using JARs and Talend.
    • Prepared and maintained technical design documents for the code and subsequent revisions; prepared workflow diagrams describing the process flow for deployment by system admins.
    • Responsible for code migration from on-prem Cloudera Spark 2 to Azure Databricks cloud on Spark 3.
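The provider data-integrity "rules" mentioned above can be illustrated with a minimal sketch. This is a hypothetical example, not the production code: the real rules ran as Spark SQL/Scala transformations on the cluster, and the field names here (such as `npi`) are invented for illustration.

```python
# Hypothetical sketch of a data-integrity "rule": each rule flags
# records that fail a check before they are promoted to production.
# Field names are invented; the real pipeline used Spark SQL and Scala.

def rule_npi_is_ten_digits(record):
    """A provider NPI must be exactly ten digits."""
    npi = record.get("npi", "")
    return npi.isdigit() and len(npi) == 10

def apply_rules(records, rules):
    """Split records into (passed, failed) against every rule."""
    passed, failed = [], []
    for rec in records:
        if all(rule(rec) for rule in rules):
            passed.append(rec)
        else:
            failed.append(rec)
    return passed, failed

providers = [
    {"npi": "1234567890", "name": "Dr. A"},
    {"npi": "12345", "name": "Dr. B"},  # fails: NPI too short
]
ok, bad = apply_rules(providers, [rule_npi_is_ten_digits])
```

Keeping each rule as a small, named predicate mirrors the approval workflow described above: a product owner can review one rule at a time before it ships.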
  • AT&T
    Hadoop Developer
    AT&T Jan 2017 - Dec 2017
    Dallas, TX, US
    • Helped the team design and develop the database; developed solutions for real-time and batch event/log collection from various data sources.
    • Actively participated in team agile planning and sprint execution.
    • Responsibilities included Hadoop development and implementation, loading from disparate data sets, and pre-processing using Hive and Pig.
    • Performed systems analysis and programming tasks to maintain and control the use of computer systems software.
    • Identified and suggested new technologies and tools for enhancing product value and increasing team productivity.
    • Analyzed vast data stores to uncover insights while maintaining security and data privacy.
    • Performed high-speed querying; managed and deployed HBase.
    • Streamed data in real time using Spark with Kafka.
    • Collaborated with the team and performance engineers to enhance supportability and identify performance bottlenecks.
    • Wrote high-performance, reliable, and maintainable code, including MapReduce jobs; demonstrable knowledge of database structures, file systems, theories, principles, and practices; familiar with data-loading tools such as Flume and Sqoop.
    • Developed code fixes and enhancements for inclusion in future code releases and patches; stress-tested server code to validate code changes; integrated live, virtual, and constructive programs into a cohesive product.
    • Performed smoke, functionality, integration, system, and regression tests based on analysis, non-functional specifications, and end-user needs.
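The MapReduce jobs named above follow a standard map/shuffle/reduce pattern. As a toy illustration only: the production jobs were Java MapReduce on a Hadoop cluster, but the same three phases can be sketched in plain Python with a word count.

```python
# Toy illustration of the MapReduce pattern: a word count expressed as
# explicit map, shuffle, and reduce phases. The real jobs were Java
# MapReduce on Hadoop; this just shows the shape of the computation.
from collections import defaultdict

def map_phase(lines):
    """Emit (word, 1) pairs, like a Hadoop Mapper."""
    for line in lines:
        for word in line.split():
            yield word.lower(), 1

def shuffle_phase(pairs):
    """Group values by key, like the framework's shuffle/sort step."""
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    """Sum the counts per word, like a Hadoop Reducer."""
    return {word: sum(counts) for word, counts in grouped.items()}

logs = ["error disk full", "error timeout", "info disk ok"]
counts = reduce_phase(shuffle_phase(map_phase(logs)))
```

On a real cluster the shuffle is distributed across nodes by the framework; here it is a single in-memory dictionary, which is the part Hadoop exists to scale out.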
  • Swift Pace Solutions Inc
    Software Developer
    Swift Pace Solutions Inc Jun 2016 - Jan 2017
    Irving, Texas, US
    • Designed the system using J2EE patterns such as Iterator, Adapter, Singleton, Business Delegate, Data Access Object, and Factory.
    • Developed and enhanced the application using the Spring Framework.
    • Developed user interfaces using JavaScript and JavaScript frameworks: Ember.js, AngularJS, and Backbone.js.
    • Experience using Hibernate and JDBC to access the database.
    • Experience with RESTful web services to retrieve customer address-history information; mainly responsible for developing the RESTful API using Spring, including controllers that return responses in JSON or XML based on the request type.
    • Collaborated with the product development group to design and implement highly interactive, data-intensive web applications for legal professionals.
    • Set up and benchmarked Hadoop/HBase clusters for internal use.
    • Developed Java MapReduce programs to analyze sample log files stored in the cluster, and to cleanse data in HDFS obtained from heterogeneous sources so it was suitable for ingestion into a Hive schema for analysis.
    • Developed multiple scripts for analyzing data using Hive and Pig, and integrated them with HBase.
    • Used Sqoop to import data into HDFS and Hive from other data systems; created reports for the BI team by exporting data with Sqoop.
    • Migrated ETL processes from Oracle to Hive to test ease of data manipulation; pre-processed data and created fact tables using Hive; exported the resulting data set to SQL Server for further analysis.
    • Created Hive scripts to extract, transform, load (ETL), and store data; automated all jobs, from pulling data out of databases to loading it into SQL Server, using shell scripts.
    • Organized code with the help of GitHub.
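The JSON-vs-XML content negotiation described above can be reduced to a small sketch. The real controllers were written with the Spring Framework; this plain-Python version uses only the standard library, and the function and field names are invented for illustration.

```python
# Sketch of content negotiation: serialize the same record as JSON or
# XML depending on the requested media type. The production version
# used Spring controllers; names here are hypothetical.
import json
import xml.etree.ElementTree as ET

def render_customer(customer, accept="application/json"):
    """Serialize a customer dict as JSON or XML based on the Accept type."""
    if accept == "application/xml":
        root = ET.Element("customer")
        for key, value in customer.items():
            ET.SubElement(root, key).text = str(value)
        return ET.tostring(root, encoding="unicode")
    return json.dumps(customer)

cust = {"id": 42, "city": "Irving"}
as_json = render_customer(cust)                    # JSON by default
as_xml = render_customer(cust, "application/xml")  # XML when requested
```

In Spring this dispatch is handled for you by message converters keyed on the `Accept` header; the sketch just makes the branch explicit.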

Yashwanth Boddu Education Details

  • Wayne State University
    Wayne State University
    Computer Engineering

Frequently Asked Questions about Yashwanth Boddu

What company does Yashwanth Boddu work for?

Yashwanth Boddu works for Union Pacific Railroad.

What is Yashwanth Boddu's role at the current company?

Yashwanth Boddu's current role is Data Engineer at Union Pacific Railroad.

What schools did Yashwanth Boddu attend?

Yashwanth Boddu attended Wayne State University.

Who are Yashwanth Boddu's colleagues?

Yashwanth Boddu's colleagues are M Jackson, Edward Sparks, Christopher Ragone, Chuck Smoot, Fady Ehab, ਘੈਂਟ Jatt, Jake Gradoville.
