Miki L.


Marlabs Inc.
Miki L.'s Location
United States
About Miki L.

I'm an experienced big data engineer who has worked across domains such as E-Commerce, Hospitality, and Finance. I have strong knowledge of and hands-on experience with big data technologies including Hadoop, MapReduce, Spark, Scala, Hive, Pig, and AWS.

Miki L.'s Current Company Details

Marlabs Inc.
Miki L. Work Experience Details
  • IBM
    Data Engineer
    IBM Jan 2021 - Aug 2022
    New Jersey, United States
    • Developed end-to-end big data processing pipelines using Spark and Scala.
    • Designed, developed, implemented, tested, and maintained data ingestion and ETL integration pipelines.
    • Applied Git for version control.
  • Dr. Leonard's Healthcare Corp.
    Big Data Engineer
    Dr. Leonard's Healthcare Corp. Jul 2019 - Jun 2020
    New Jersey, United States
    • Developed Spark code using Scala and Spark SQL for faster processing and testing.
    • Utilized Spark SQL with the DataFrame API for efficient structured data processing.
    • Implemented Spark RDD transformations to map business analysis logic and applied actions on top of the transformations.
    • Stored clickstream data in HDFS and processed it in batch with Spark.
    • Stored batch-processed data in Redshift.
    • Stored and queried user session information from batch processing as Hive external tables in HDFS.
    • Used the Spark API over Cloudera Hadoop YARN to perform analytics on data in Hive.
    • Designed, developed, and maintained ETL pipelines with Spark SQL and Scala to ingest millions of rows of raw data from different sources into AWS Redshift and S3.
    • Deployed services on AWS and used Lambda functions to trigger the data pipelines.
  • Cryptovc
    Data Engineer
    Cryptovc May 2018 - Jun 2019
    New Jersey, United States
    • Migrated different formats of data from HDFS to the Spark DataFrame API via ETL pipelines.
    • Applied Spark SQL with the DataFrame API for efficient structured data processing.
    • Used Spark to count the number of transactions in files containing Bitcoin blockchain data.
    • Converted raw data to columnar formats such as ORC and Parquet to reduce processing time and improve network transfer efficiency.
    • Applied Hive to analyze historical Bitcoin and altcoin blockchain data.
    • Scheduled and managed the whole data pipeline workflow with AWS Data Pipeline.
  • Urban Rent
    Data Engineer
    Urban Rent Aug 2017 - May 2018
    New Jersey, United States
    • Imported and exported data between RDBMS and HDFS with Sqoop scripts.
    • Defined gross operating income as a fact metric to measure business performance in the real estate market and built the database around it.
    • Loaded the data as Spark DataFrames and analyzed it with Scala and Spark SQL for scalable storage and fast queries.
    • Created and normalized a star-schema database with data pipelines based on the defined metric, and stored the structured data for business analysis.
    • Presented results in layman's terms through Tableau dashboards based on ad-hoc data studies.
  • Credit Rating
    Data Scientist
    Credit Rating May 2016 - Jul 2017
    • Acquired and manipulated financial statement data, ratios, bond credit ratings, and CDS data for companies in the target industry.
    • Created a MySQL database and inserted and cleaned data for backend operations.
    • Applied Sqoop commands to import, transfer, and store the MySQL database in HDFS.
    • Applied MapReduce in Hadoop to split text into words and count word frequencies in financial news, and built a sentiment model using NLTK (NLP).
  • Strategy Stock Automation
    Big Data Scientist
    Strategy Stock Automation Mar 2014 - Apr 2016
    • Worked on ETL processes, including data processing and data storage.
    • Applied Spark with Scala for batch data processing and stored the output in HBase for scalable storage and fast queries.
    • Designed and created Hive tables and worked on performance optimizations such as partitioning and bucketing in Hive.
    • Implemented Hive and analyzed large data sets by running HiveQL for comprehensive data analysis.
    • Migrated MapReduce jobs and Hive queries to Spark transformations and actions to improve performance.
    • Utilized Sqoop to move data between an Oracle database and HDFS.
    • Configured Sqoop incremental import jobs to import updated input data.
    • Implemented Requests and Beautiful Soup to scrape real-time data by parsing Yahoo Finance HTML.
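
The Urban Rent entry above describes defining gross operating income as a fact metric and normalizing a star-schema database around it. A minimal sketch of that pattern in Python with SQLite follows; the table and column names are illustrative, not from the original pipeline:

```python
import sqlite3

# Star schema sketch: one fact table keyed to a dimension table.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Dimension table: one row per property.
cur.execute("CREATE TABLE dim_property (property_id INTEGER PRIMARY KEY, city TEXT)")
# Fact table: the defined metric (gross operating income) plus a foreign key.
cur.execute("""CREATE TABLE fact_income (
    property_id INTEGER REFERENCES dim_property(property_id),
    month TEXT,
    gross_operating_income REAL)""")

cur.executemany("INSERT INTO dim_property VALUES (?, ?)",
                [(1, "Newark"), (2, "Jersey City")])
cur.executemany("INSERT INTO fact_income VALUES (?, ?, ?)",
                [(1, "2018-01", 12000.0), (1, "2018-02", 12500.0),
                 (2, "2018-01", 9800.0)])

# Roll the fact metric up by a dimension attribute.
rows = cur.execute("""
    SELECT p.city, SUM(f.gross_operating_income)
    FROM fact_income f JOIN dim_property p USING (property_id)
    GROUP BY p.city ORDER BY p.city
""").fetchall()
print(rows)  # [('Jersey City', 9800.0), ('Newark', 24500.0)]
```

The same fact/dimension split is what the Spark SQL and Redshift work described above would operate on at scale.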
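The Credit Rating entry mentions counting word frequencies in financial news with MapReduce. The core map/reduce logic can be sketched in plain Python; this is a single-process stand-in for a Hadoop job, not the original code:

```python
import re
from collections import Counter

def map_words(line):
    # Map step: emit (word, 1) pairs for each token in a line.
    return [(w, 1) for w in re.findall(r"[a-z']+", line.lower())]

def word_count(lines):
    # Reduce step: sum the emitted counts per word.
    counts = Counter()
    for line in lines:
        for word, n in map_words(line):
            counts[word] += n
    return counts

news = ["Bond yields rise as rating outlook dims",
        "Rating agency cuts outlook on bond issuer"]
counts = word_count(news)
print(counts["rating"], counts["bond"], counts["outlook"])  # 2 2 2
```

In the Hadoop version, the map step runs per input split and the framework's shuffle groups the pairs by word before the reducers sum them.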
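The last entry describes scraping real-time quotes by parsing Yahoo Finance HTML with Requests and Beautiful Soup. A self-contained sketch of the parsing half is below, using the stdlib `html.parser` in place of Beautiful Soup and a made-up HTML snippet in place of a live page:

```python
from html.parser import HTMLParser

class QuoteParser(HTMLParser):
    """Collect text from <span> tags whose class is 'price' (illustrative markup)."""
    def __init__(self):
        super().__init__()
        self.in_price = False
        self.prices = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs.
        if tag == "span" and ("class", "price") in attrs:
            self.in_price = True

    def handle_endtag(self, tag):
        if tag == "span":
            self.in_price = False

    def handle_data(self, data):
        if self.in_price:
            self.prices.append(float(data))

# Stand-in for a page body fetched with requests.get(url).text
html = '<div><span class="price">132.05</span><span class="price">98.40</span></div>'
parser = QuoteParser()
parser.feed(html)
print(parser.prices)  # [132.05, 98.4]
```

Beautiful Soup would replace the handler class with a one-liner such as a `find_all` over the fetched document; the event-driven parser above shows the same extraction with no third-party dependency.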

Miki L. Skills

E-Commerce, Cloudera, Pandas, Scikit-Learn, Amazon Elastic MapReduce, Amazon Web Services, Apache Spark Streaming, Apache Pig, MongoDB, AWS Step Functions, TensorFlow, Microsoft SQL Server, Scala, Apache Kafka, Finance, HBase, Tableau, Hadoop, Apache Spark, PySpark, Apache Sqoop, Hospitality, Hive, SQL, Python, HiveQL, MySQL, MapReduce

Frequently Asked Questions about Miki L.

What is Miki L.'s current company?

Miki L. currently works at Marlabs Inc.

What is Miki L.'s email address?

Miki L.'s email address is ma****@****abs.com

What skills is Miki L. known for?

Miki L. has skills like E-Commerce, Cloudera, Pandas, Scikit-Learn, Amazon Elastic MapReduce, Amazon Web Services, Apache Spark Streaming, Apache Pig, MongoDB, AWS Step Functions, TensorFlow, Microsoft SQL Server.
