Murali Krishna

Murali Krishna Email and Phone Number

Allen, TX, US
Murali Krishna's Location
Allen, Texas, United States
Murali Krishna's Contact Details

Murali Krishna work email

Murali Krishna personal email

n/a
About Murali Krishna

I’m curious and passionate about technology, which is why I'm always exploring the latest technologies and learning them.

⏩ Excellent understanding of Hadoop and MapReduce architecture and the internal workings of its daemons: NameNode, DataNode, Resource Manager, etc.
⏩ Strong knowledge of the Hadoop ecosystem and services such as Hive, Flume, Pig, Sqoop, Oozie, Zookeeper, HBase, Kafka.
⏩ Expertise in creating ETL transforms and Hive tables and designing the ETL process.

Skills that exemplify my abilities:
✔️ #Hadoop/#Big Data Technologies: #HDFS, #MapReduce, #Hive, #Sqoop, #Impala, #Oozie, #Cassandra, #MongoDB, #Control-M, #Kafka, #SparkSQL
✔️ #Hadoop Distributions: #Cloudera, #Hortonworks
✔️ #NoSQL Databases: #HBase, #MongoDB
✔️ Development Methodologies: #Agile, #Waterfall

Thank you for spending your valuable time looking at my profile. You can reach me at Muralik3434@gmail.com.

Murali Krishna's Current Company Details
Ashley Furniture Industries

Data Engineer
Allen, TX, US
Employees:
10,546
Murali Krishna Work Experience Details
  • Ashley Furniture Industries
    Data Engineer
    Ashley Furniture Industries
    Allen, TX, US
  • Bank Of America
    Sr. Data Engineer
    Bank Of America Nov 2024 - Present
    Charlotte, NC, US
    • Working on migrating source systems, pipelines, and existing data nodes from Oracle Exadata to Hadoop.
    • Developed and optimized HQL queries to load and transition the data, which reduced runtime by 30%.
  • Broadridge Financial Solutions
    Sr. Data Engineer
    Broadridge Financial Solutions Sep 2023 - Aug 2024
    • Analyzed the requirements to set up a cluster.
    • Installed and configured Hadoop, MapReduce, and HDFS; developed multiple MapReduce jobs in Java.
    • Worked with Azure Data Factory to design and build CI/CD pipelines and ETL pipelines combining data analytics, machine learning, and application development.
    • Worked with Azure HDInsight to run multiple MapReduce applications, comparing metrics before deployment and improving performance efficiency.
    • Developed MapReduce programs in Java for parsing the raw data and populating staging tables.
    • Developed Spark scripts using Scala shell commands as per the requirements.
    • Used the Spark API over Cloudera Hadoop YARN to perform analytics on data in Hive.
    • Developed Scala scripts and UDFs using both DataFrames/SQL and RDD/MapReduce in Spark for data aggregation and queries, writing data back into the OLTP system through Sqoop.
    • Imported and exported data into HDFS and Hive using Sqoop.
    • Experienced in analyzing data with Hive and Pig.
    • Wrote Pig scripts to process the data.
    • Created complex SQL scripts involving JOINs, filters, data type conversions, and functions like TRIM, ROUND, ISNULL, COALESCE, COUNT, etc.
    • Developed Pig Latin scripts to extract the data from the web server output files and load it into HDFS.
    • Integrated bulk data into the Cassandra file system using MapReduce programs.
    • Involved in HBase setup and storing data into HBase for further analysis.
    • Experienced in working with Azure HDInsight and setting up environments on Azure VM instances.
    • Involved in creating Hive tables, loading them with data, and writing Hive queries in HiveQL that run internally as MapReduce jobs.
    • Extracted data from MySQL into HDFS using Sqoop.
    • Performed advanced procedures like text analytics and processing using the in-memory computing capabilities of Spark with Scala.
    • Used HiveQL to analyze the partitioned and bucketed data and compute various metrics for reporting.
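The complex SQL scripts mentioned in this role (JOINs, filters, TRIM, ROUND, COALESCE, COUNT) can be illustrated with a minimal sketch using an in-memory SQLite database; the staging tables, columns, and data here are hypothetical, purely to show the pattern:

```python
import sqlite3

# Hypothetical staging tables, standing in for the kind of staging
# data described above (not the actual schemas used in the role).
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE stg_orders (order_id INTEGER, cust_id INTEGER, amount REAL);
CREATE TABLE stg_customers (cust_id INTEGER, name TEXT);
INSERT INTO stg_orders VALUES (1, 10, 19.994), (2, 10, NULL), (3, 20, 5.5);
INSERT INTO stg_customers VALUES (10, '  alice '), (20, 'bob');
""")

# JOIN + TRIM + COALESCE (null-safe sum) + ROUND + COUNT in one query.
rows = cur.execute("""
SELECT TRIM(c.name)                           AS customer,
       COUNT(o.order_id)                      AS n_orders,
       ROUND(SUM(COALESCE(o.amount, 0)), 2)   AS total
FROM stg_orders o
JOIN stg_customers c ON c.cust_id = o.cust_id
GROUP BY TRIM(c.name)
ORDER BY customer
""").fetchall()
print(rows)  # [('alice', 2, 19.99), ('bob', 1, 5.5)]
```

COALESCE guards the NULL amount so the SUM stays numeric, and TRIM normalizes the padded customer name before grouping.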
  • Ashley Furniture Industries
    Data Engineer
    Ashley Furniture Industries Mar 2022 - Aug 2023
    Arcadia, WI, US
    • Good understanding of Semarchy xDM tools; created multiple models based on the pipeline design.
    • Worked with Azure Cloud Services to design pipelines for data migration and increase data efficiency.
    • Loaded large sets of structured, semi-structured, and unstructured data into the ETL pipeline and transformed them.
    • Worked on Swagger API and auto-generated documentation for all REST calls.
    • Used REST clients (Postman and ARC) to test REST API services.
    • Developed data models and created the mapping Java project for Semarchy reporting.
    • Integrated multiple Java enrichers and validators and created multiple plugins for data enrichment and validation.
    • Created forms, collections, views, and business views for multiple models in Semarchy.
    • Worked on data load and data extraction through integration job processing.
    • Worked closely with team members and stakeholders to solve business challenges.
    • Developed a RESTful Java API, created JavaScript to call the API, and created ETL and mapping tables for business analysis.
    • Worked closely on setting up different environments and updating configurations.
    • Set up transformation configurations for running Spark jobs in Azure Databricks.
    • Worked with the DevOps team to schedule pipelines in Azure Data Factory using Kubernetes.
  • Apple
    Data Engineer
    Apple Nov 2021 - Mar 2022
    Cupertino, California, US
    ➡️ Designed, tested, and maintained data management and processing systems.
    ➡️ Developed data analysis and content validation frameworks for verifying logs using Python and Selenium, reducing the manual validation effort by 50%.
    ➡️ Good understanding of REST APIs; created a PySpark script to call the API and created ETL for business analysis.
    ➡️ Worked closely with team members and stakeholders to solve business challenges.
    ➡️ Built scalable and fault-tolerant systems, evaluated the workflows, and increased the efficiency of ETL pipelines.
    ➡️ Integrated a variety of programming languages and tools, such as Python, Scala, MapReduce, etc.
    ➡️ Optimized SQL query design for the highest-priority data science ETL pipeline, migrated its HiveQL queries to PySpark, and redesigned intermediate table loading processes, reducing runtime by 40%.
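The log-validation frameworks mentioned in this role can be sketched in plain Python; the log format, function name, and rules below are hypothetical (the original work also involved Selenium), shown only to illustrate the validate-and-report pattern:

```python
import re

# Hypothetical log shape: "LEVEL|ISO-timestamp|message".
LINE_RE = re.compile(
    r"^(DEBUG|INFO|WARN|ERROR)\|"
    r"\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\|.+$"
)

def validate_logs(lines):
    """Return (number of valid lines, 1-based indices of invalid lines)."""
    bad = [i for i, line in enumerate(lines, start=1)
           if not LINE_RE.match(line)]
    return len(lines) - len(bad), bad

sample = [
    "INFO|2024-01-15T10:00:00|pipeline started",
    "garbled line without delimiters",
    "ERROR|2024-01-15T10:05:12|stage 3 failed",
]
print(validate_logs(sample))  # (2, [2])
```

Reporting the offending line numbers, rather than just a pass/fail flag, is what replaces manual eyeballing of the logs.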
  • Altech Solutions Llc.
    Data Engineer
    Altech Solutions Llc. Jan 2021 - Nov 2021
    Los Angeles, CA, US
    ➡️ Heavily involved in developing Spark programs using Python, Scala, and Spark SQL for the project.
    ➡️ Loaded and transformed large sets of structured, semi-structured, and unstructured data in various formats like text, zip, XML, and JSON.
    ➡️ Implemented MapReduce programs on log data to transform it into a structured form and find user information.
    ➡️ Configured and created application log files using Log4j, required to trace application messages.
    ➡️ Created a custom Log4j appender for logging Spark application logs into Kafka.
    ➡️ Used Jsoup functions to parse the raw HTML data and extract it into a dataframe.
    ➡️ Worked extensively on development of transformation notebooks in Databricks using Scala and Python.
    ➡️ Loaded the transformed data into Hive tables.
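The Jsoup-style parsing of raw HTML into tabular rows described in this role can be sketched in Python using only the standard library; the markup and class name below are hypothetical (the original work used Jsoup, a Java library):

```python
from html.parser import HTMLParser

# Hypothetical extractor: pulls <td> cell text out of raw HTML table
# markup, one list per <tr>, analogous to the Jsoup parsing described.
class CellExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_cell = False
        self.rows, self.current = [], []

    def handle_starttag(self, tag, attrs):
        if tag == "td":
            self.in_cell = True
        elif tag == "tr":
            self.current = []

    def handle_endtag(self, tag):
        if tag == "td":
            self.in_cell = False
        elif tag == "tr" and self.current:
            self.rows.append(self.current)

    def handle_data(self, data):
        if self.in_cell and data.strip():
            self.current.append(data.strip())

raw = ("<table><tr><td>user1</td><td>42</td></tr>"
       "<tr><td>user2</td><td>7</td></tr></table>")
p = CellExtractor()
p.feed(raw)
print(p.rows)  # [['user1', '42'], ['user2', '7']]
```

The resulting list of rows maps directly onto a dataframe constructor in either ecosystem.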
  • Oclc
    Data Engineer
    Oclc Jan 2020 - Dec 2020
    Dublin, OH, US
    ➡️ Heavily involved in developing Spark programs using Python and Spark SQL for the project.
    ➡️ Designed the ETL process and created the high-level design document, including the logical data flows, source data extraction process, database staging, job scheduling, and error handling.
    ➡️ In-depth understanding of Hadoop architecture and its components, such as HDFS, Application Master, Node Manager, Resource Manager, NameNode, DataNode, and MapReduce concepts.
    ➡️ Responsible for creating a Kafka interface for storing log information from various projects, visualizing the logs in a Kibana dashboard, and storing them in HDFS for future analysis in case an error occurs.
  • Landis+Gyr
    Hadoop Developer
    Landis+Gyr Oct 2018 - Jan 2020
    Cham, Zug, CH
    ➡️ Created data import and export jobs to copy data to and from HDFS using Sqoop.
    ➡️ Involved in converting Hive/SQL queries into Spark transformations using Spark RDDs, Python, and Scala.
    ➡️ Responsible for troubleshooting issues in the execution of MapReduce jobs by inspecting and reviewing log files.
    ➡️ Effectively used Oozie to develop automatic workflows of Sqoop, MapReduce, and Hive jobs.

Murali Krishna Education Details

  • The University Of Texas At San Antonio
    Electrical And Electronics Engineering

Frequently Asked Questions about Murali Krishna

What company does Murali Krishna work for?

Murali Krishna works for Ashley Furniture Industries.

What is Murali Krishna's role at the current company?

Murali Krishna's current role is Data Engineer.

What is Murali Krishna's email address?

Murali Krishna's email address is mu****@****clc.org

What schools did Murali Krishna attend?

Murali Krishna attended The University Of Texas At San Antonio.

Who are Murali Krishna's colleagues?

Murali Krishna's colleagues are Jonothan Davidson, Ryan Preston, Ven Gao, Matthew Dyer, Lucky Nicholas, Michelle Pointer, Julissa Ventura.
