Experienced Senior Engineering Consultant with a demonstrated history of working in the insurance industry. Skilled in Cloudera, Apache Spark, Apache Kafka, Apache Flume, and Apache Tez. Strong engineering professional with a Bachelor of Engineering.
Confidential
-
Senior Hadoop Engineering Consultant, Confidential, Oct 2017 - Present
SafeAuto was ahead of the curve many years ago in using data analytics and BI to transform its business strategy, which has made it one of the most profitable insurers today. As a big data consultant, I was responsible for the delivery of batch data integration and automation in a client consulting environment.
- Consulted in the areas of data and analytics, specifically using Hadoop, Spark, Hive, and related tools.
- Hands-on with major components of the Hadoop ecosystem: Spark, HDFS, Hive, HBase, ZooKeeper, Sqoop, Oozie, Flume, and Kafka.
- Responsible for defining and understanding the key business problems to be solved.
- Gathered, integrated, and prepared data for consumption in machine learning and advanced analytics use cases.
- Identified and translated business requirements into data analysis and data acquisition requirements.
- Assisted in the acquisition, transformation, and preparation of data for analysis and mining.
-
Big Data Hadoop Engineer, Vanguard, Feb 2017 - Aug 2017, Valley Forge, PA, US
This project required work across a variety of functional stakeholders (marketing, category, supply chain, etc.) to translate business needs into data and visualization requirements, and then back into pipelines delivered to data analyst and BI teams. There was a strong focus on data governance processes, with the objective of replacing manual reporting tools with a refined and optimized data cleaning and delivery system.
- Developed Spark scripts in Scala per requirements.
- Loaded data into Spark RDDs and performed in-memory computation to generate the output response.
- Performed different types of transformations and actions on RDDs to meet business requirements.
- Developed a data pipeline using Kafka, Spark, and Hive to ingest, transform, and analyze data.
- Analyzed the Hadoop cluster and various big data analytic tools, including HBase and Sqoop.
- Loaded data from the UNIX file system into HDFS; responsible for managing data coming from various sources.
- Worked on loading and transforming large sets of structured, semi-structured, and unstructured data.
- Performed cluster coordination services through ZooKeeper.
-
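The RDD workflow described above — load data, apply lazy transformations, then trigger an action — can be sketched with plain Python built-ins standing in for Spark's API (a minimal illustration of the pattern only, not the production Scala code; the sample records and field layout are invented, and generators play the role of lazy transformations):

```python
# Minimal stand-in for the Spark RDD pattern: transformations are lazy
# (generators), and nothing is computed until an "action" consumes them.

records = ["101,active,250", "102,inactive,0", "103,active,475"]

# Transformations (lazy, analogous to rdd.map / rdd.filter)
parsed = (line.split(",") for line in records)
active = (r for r in parsed if r[1] == "active")
amounts = (int(r[2]) for r in active)

# Action (forces evaluation, analogous to rdd.reduce / rdd.collect)
total = sum(amounts)
print(total)  # 725
```

In real Spark the same chain stays distributed and in-memory across the cluster; the key point the sketch preserves is that no work happens until the final action runs.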
Big Data Engineer, AT&T, Jul 2015 - Dec 2016, Dallas, TX, US
AT&T uses big data to stay agile and responsive to ever-changing consumer demands. To do this, it has established a self-sufficient Hadoop ecosystem on an AWS cloud platform, through which it captures powerful first-party data and mines it for insights that enable it to build compelling content.
- Scheduled the Oozie workflow engine to run multiple Hive jobs.
- Worked in a Hadoop big data ecosystem on Amazon AWS using EMR, EC2, SQS, S3, DynamoDB, Redshift, and CloudFormation.
- Work experience with cloud infrastructure such as Amazon Web Services.
- Developed parser and loader MapReduce applications to retrieve data from HDFS and store it in HBase and Hive.
- Imported unstructured data into HDFS using Flume.
- Hands-on with major components of the Hadoop ecosystem: Spark, HDFS, Hive, HBase, ZooKeeper, Sqoop, Oozie, Flume, and Kafka.
- Experience importing and exporting data with Sqoop between Oracle and MySQL databases and HDFS or the data lake.
- Experience developing shell scripts, Oozie scripts, and Python scripts.
-
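Running multiple Hive jobs under Oozie, as above, is driven by a workflow definition. A minimal sketch of such a workflow.xml follows — the workflow name, script name, and transitions are illustrative assumptions, and the exact schema version varies by Oozie release:

```xml
<workflow-app name="daily-hive-jobs" xmlns="uri:oozie:workflow:0.5">
  <start to="hive-load"/>
  <action name="hive-load">
    <hive xmlns="uri:oozie:hive-action:0.5">
      <job-tracker>${jobTracker}</job-tracker>
      <name-node>${nameNode}</name-node>
      <!-- illustrative script name -->
      <script>load_staging.hql</script>
    </hive>
    <ok to="end"/>
    <error to="fail"/>
  </action>
  <kill name="fail">
    <message>Hive job failed</message>
  </kill>
  <end name="end"/>
</workflow-app>
```

Additional Hive actions chain by pointing each action's `<ok to="..."/>` transition at the next action instead of `end`.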
Big Data Administrator, GSS Infotech, Aug 2012 - Jun 2015, US
GSS Infotech is a large IT client that provides services related to cloud computing, remote infrastructure management, virtualization, and application management.
- Built a Hadoop cluster using the Hortonworks distribution with NameNode and ResourceManager, and configured policies for all components.
- Troubleshot Oozie workflows, Hive queries, and Spark jobs.
- Created YARN queues for each customer and configured capacity for each queue.
- Configured views in Ambari on a separate Ambari server.
- Troubleshot port-opening issues with the firewall team for data transfer and Kerberos configuration.
- Performed daily housekeeping of local file systems and HDFS, and created scripts for automated housekeeping.
- Coordinated with SME teams (Unix, VMware) on OS configuration and Unix server hang issues.
- Responsible for cluster maintenance: adding and removing cluster nodes, cluster monitoring and troubleshooting, managing and reviewing data backups, and reviewing Hadoop log files.
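Per-customer YARN queues with fixed capacities, as described above, are defined in capacity-scheduler.xml for the Capacity Scheduler. A minimal sketch (queue names and percentages are illustrative; sibling capacities must sum to 100):

```xml
<configuration>
  <!-- Child queues under root; names here are hypothetical customers -->
  <property>
    <name>yarn.scheduler.capacity.root.queues</name>
    <value>customerA,customerB</value>
  </property>
  <!-- Guaranteed share of cluster resources per queue, in percent -->
  <property>
    <name>yarn.scheduler.capacity.root.customerA.capacity</name>
    <value>60</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.customerB.capacity</name>
    <value>40</value>
  </property>
</configuration>
```

On a Hortonworks/Ambari cluster these properties are typically edited through the Ambari YARN Queue Manager view rather than by hand.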
V. Krishna Education Details
-
Osmania University - Electronics and Communications Engineering