Professional Summary

With over 10 years of comprehensive experience across the full Software Development Life Cycle (SDLC), I specialize in designing, developing, and maintaining high-performance multi-tiered web applications. My expertise spans the Hadoop ecosystem, including HDFS, MapReduce, Hive, Pig, Sqoop, HBase, and Spark, along with real-time analytics and data streaming. I have hands-on experience installing, configuring, and administering Hadoop clusters on major distributions such as Hortonworks and Cloudera.

My technical skills extend to developing scalable data solutions with Apache Spark and Scala for efficient data processing and transformation. I am adept at using Kafka for real-time data ingestion and Flume for data movement across systems. My achievements include leading the migration of legacy MapReduce jobs to PySpark, improving performance by 40%; developing large-scale ETL workflows that significantly increased data processing speed; and building real-time streaming data pipelines that drive actionable insights for critical business applications.

Beyond my technical capabilities, I bring strong problem-solving and analytical skills that enable me to make balanced, independent decisions. My experience with cloud platforms such as AWS and Azure further strengthens my ability to implement effective data movement solutions and manage complex ETL processes. I thrive in collaborative environments and have a proven track record as an agile developer, consistently delivering high-quality results while fostering teamwork. My commitment to continuous learning keeps me abreast of industry trends and best practices.
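The summary mentions migrating legacy MapReduce jobs to PySpark. As a hedged illustration (not the author's actual code), the shape of such a job, a map phase that splits records into keys and a reduce phase that aggregates per key, can be sketched in plain Python; in PySpark the same logic would become a `flatMap` followed by `reduceByKey`.

```python
from collections import Counter
from itertools import chain

def mapreduce_word_count(lines):
    """Toy word count in map/reduce shape: flat-map each line into words
    (map phase), then aggregate a count per word (reduce phase)."""
    words = chain.from_iterable(line.lower().split() for line in lines)
    return dict(Counter(words))

print(mapreduce_word_count(["big data", "big wins"]))
# {'big': 2, 'data': 1, 'wins': 1}
```

In PySpark this is roughly `rdd.flatMap(str.split).map(lambda w: (w, 1)).reduceByKey(operator.add)`, which is where the parallelism and the cited performance gains come from.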
Data Analyst, Bristol Myers Squibb, Worcester, MA, US
Sr. Data Engineer, CVS Health, Sep 2022 - Present, Woonsocket, Rhode Island, United States

As a Sr. Data Engineer at CVS Health, I lead the migration of on-premises applications to AWS, leveraging services such as EC2 and S3 for efficient data processing and storage. I specialize in developing real-time analytics platforms using Spark Streaming and Kafka to enhance patient monitoring, ensuring HIPAA compliance while reducing operational costs by 30%. My role includes optimizing data ingestion pipelines for structured and semi-structured data, migrating legacy systems to Snowflake, and implementing automated ETL workflows with AWS Glue and Amazon Redshift. Additionally, I maintain Hadoop clusters and design scalable data solutions to support business intelligence initiatives.
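The core of a Spark Streaming + Kafka monitoring pipeline like the one described is a windowed aggregation over a stream of events. As a minimal sketch in plain Python (the event names and window size here are hypothetical, and the real pipeline would use Spark's windowing over Kafka topics), a tumbling-window count per alert type looks like this:

```python
from collections import defaultdict

def tumbling_window_counts(events, window_secs=60):
    """Group (timestamp, key) events into fixed non-overlapping windows
    and count occurrences per key within each window."""
    counts = defaultdict(int)
    for ts, key in events:
        window_start = (ts // window_secs) * window_secs
        counts[(window_start, key)] += 1
    return dict(counts)

# Hypothetical patient-monitoring alerts at t=5s, 30s, and 70s.
events = [(5, "hr_alert"), (30, "hr_alert"), (70, "spo2_alert")]
print(tumbling_window_counts(events))
```

In Spark Structured Streaming the equivalent would be a `groupBy(window("timestamp", "60 seconds"), "key").count()` over a Kafka source, with the engine handling late data and state.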
Data Engineer, 3M, Apr 2020 - Aug 2022, St. Paul, Minnesota, United States

As a Data Engineer at 3M, I analyzed user stories and participated in agile development processes. I developed Spark jobs in Scala and Python for both interactive and batch analysis, optimizing existing algorithms in Hadoop. My role involved migrating data from on-premises systems to Microsoft Azure, implementing data ingestion workflows using NiFi and Kafka, and using various Python libraries for ETL processes. I also designed and developed Oozie workflows, created Azure Stream Analytics jobs, and built visualizations in Power BI. My experience includes working with big data technologies, cloud services, and machine learning modules to enhance data processing and analytics capabilities.
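A Python ETL step of the kind mentioned above typically extracts raw records, drops or repairs bad ones, and converts units or types before loading. This stdlib-only sketch is illustrative (the CSV schema and the Celsius-to-Fahrenheit transform are invented for the example, not taken from the actual 3M pipelines):

```python
import csv
import io

RAW = "device_id,temp_c\nA1,21.5\nA2,\nA3,30.0\n"

def etl(raw_csv):
    """Extract rows from CSV text, drop records with a missing reading,
    convert Celsius to Fahrenheit, and return the cleaned rows."""
    out = []
    for row in csv.DictReader(io.StringIO(raw_csv)):
        if not row["temp_c"]:          # transform: skip incomplete records
            continue
        temp_f = float(row["temp_c"]) * 9 / 5 + 32
        out.append({"device_id": row["device_id"], "temp_f": round(temp_f, 1)})
    return out

print(etl(RAW))
# [{'device_id': 'A1', 'temp_f': 70.7}, {'device_id': 'A3', 'temp_f': 86.0}]
```

The same extract/validate/transform/load structure carries over directly to pandas or PySpark jobs at scale.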
Data Engineer, U.S. Bank, Aug 2018 - Mar 2020, Minneapolis, Minnesota, United States

As a Data Engineer at U.S. Bank, I administered and maintained Cloudera Hadoop clusters on Linux. My responsibilities included analyzing the Hadoop stack and using big data analytic tools such as Pig, Hive, HBase, and Sqoop. I developed multiple MapReduce programs to extract and transform data from over 20 sources in various formats (XML, JSON, CSV). I developed custom Hadoop applications in an AWS environment and created Oozie workflows for Hadoop jobs. Additionally, I performed data validation using MapReduce, imported data from MySQL into HDFS using Sqoop, and transferred data between AWS S3 and Redshift using Informatica. My experience also included writing HiveQL queries, developing ETL workflows, and generating reports using SSRS.
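Ingesting from 20+ sources in XML, JSON, and CSV usually means normalizing every format into one common record schema before validation. A hedged stdlib sketch of that idea (the `id`/`amount` schema is hypothetical, standing in for whatever the bank's canonical record looked like):

```python
import json
import xml.etree.ElementTree as ET

def from_json(payload):
    """Normalize a JSON payload into the common {id, amount} record."""
    d = json.loads(payload)
    return {"id": str(d["id"]), "amount": float(d["amount"])}

def from_xml(payload):
    """Normalize an XML payload into the same common record."""
    root = ET.fromstring(payload)
    return {"id": root.findtext("id"), "amount": float(root.findtext("amount"))}

print(from_json('{"id": 7, "amount": "12.50"}'))
print(from_xml("<txn><id>8</id><amount>3.25</amount></txn>"))
```

In the MapReduce programs described, per-format parsers like these would live in the mapper, so downstream validation and transformation only ever see one schema.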
Data Engineer, Wells Fargo, Oct 2016 - Jul 2018, Charlotte, North Carolina, United States

As a Data Engineer at Wells Fargo, I collaborated with the Business Intelligence (BI) team to gather report requirements and used Sqoop to export data into HDFS and Hive. My responsibilities included data collection and treatment, analyzing internal and external data for entry and classification errors. I performed data mining using techniques such as cluster analysis and decision trees to identify customer segments and analyze purchasing behavior. I developed multiple MapReduce jobs in Java for data cleaning and pre-processing, managed the Flume infrastructure, and served as an administrator for Pig, Hive, and HBase. Additionally, I created Hive tables, wrote HiveQL queries for trend analysis, and automated data loading with Oozie. My role also involved importing log files into HDFS and optimizing data processing workflows.
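HiveQL trend-analysis queries are standard SQL aggregations run over Hive tables. As an illustrative sketch (using SQLite so the example is self-contained; the table and columns are invented, and in Hive the identical GROUP BY would run distributed over HDFS data):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE purchases (month TEXT, segment TEXT, amount REAL)")
con.executemany(
    "INSERT INTO purchases VALUES (?, ?, ?)",
    [("2017-01", "retail", 100.0), ("2017-01", "retail", 50.0),
     ("2017-02", "retail", 80.0), ("2017-02", "premier", 200.0)],
)

# Month-over-month spend per customer segment: the shape of a trend query.
rows = con.execute(
    "SELECT month, segment, SUM(amount) FROM purchases "
    "GROUP BY month, segment ORDER BY month, segment"
).fetchall()
print(rows)
# [('2017-01', 'retail', 150.0), ('2017-02', 'premier', 200.0), ('2017-02', 'retail', 80.0)]
```

The segments themselves would come from the cluster-analysis step mentioned above, with the query tracking how each segment's purchasing behavior moves over time.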
Hadoop Developer, Tanzanite Technologies, Dec 2013 - Aug 2016, India

As a Hadoop Developer at Tanzanite Technologies, I evaluated both functional and non-functional requirements. I installed and configured Hadoop MapReduce and HDFS, developing multiple MapReduce jobs in Java for data cleaning and pre-processing. My responsibilities included writing Pig Latin scripts, managing Hadoop log files, and importing data from MySQL to HDFS using Sqoop. I developed Hive queries for data analysis, created Hive tables, and constructed job flows. Additionally, I worked with NoSQL stores such as HBase and Solr, designed a custom file-system plug-in for Hadoop, and extracted social media feeds using Python scripts. My role also included setting up benchmarked Hadoop clusters for internal use.
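A social-media extraction script of the kind mentioned typically fetches a JSON feed, filters out unusable posts, and projects just the fields to be stored in HDFS. A hedged sketch with a stubbed payload (real APIs differ in shape and require HTTP calls and authentication, which are omitted here):

```python
import json

# Stubbed feed payload; the field names are hypothetical, not a real API's schema.
FEED = json.dumps({"posts": [
    {"user": "alice", "text": "hadoop rocks", "likes": 3},
    {"user": "bob", "text": "", "likes": 1},
]})

def extract_posts(payload):
    """Keep only posts with non-empty text, projecting the fields to store."""
    return [
        {"user": p["user"], "text": p["text"]}
        for p in json.loads(payload)["posts"]
        if p["text"]
    ]

print(extract_posts(FEED))
# [{'user': 'alice', 'text': 'hadoop rocks'}]
```

The cleaned records would then be written to HDFS (e.g. as newline-delimited JSON) for downstream Hive or Pig analysis.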
Bharath Chandra: Education Details
Bachelor's Degree
Frequently Asked Questions about Bharath Chandra
What company does Bharath Chandra work for?
Bharath Chandra works for Bristol Myers Squibb
What is Bharath Chandra's role at the current company?
Bharath Chandra's current role is Data Analyst.
What schools did Bharath Chandra attend?
Bharath Chandra attended Manipal Institute Of Technology.
Who are Bharath Chandra's colleagues?
Bharath Chandra's colleagues are Priscilla D., Anthony Smith, Brian Fletcher, Jennifer Eberhardt, MBA, Ken Carpenter, Carolin Stammer, and Manuela Yabar-Alvarez.