Taylor Hart


Data ETL Developer and Consultant @ Krypton Concurrency
Saint Petersburg, FL, US
About Taylor Hart

As a (big) data professional, I have a strong passion for developing data models and ETL pipelines, optimizing databases, and working with a variety of AWS cloud components. With experience as a BI developer, I am also skilled at creating dashboards and reports. As a data analyst, I have communicated business requirements with stakeholders and am well suited to building data systems that both perform efficiently and incorporate input from relevant parties. With both scripting and data-manipulation coding experience, I can perform advanced analytics on big data stored in a variety of database implementations. Automating these processes through scheduling is a cornerstone of my data expertise. Hard skills: Python, SQL, Data Modeling, Orchestration, Spark Streaming.

Taylor Hart's Current Company Details
Krypton Concurrency

Data ETL Developer and Consultant
Saint Petersburg, FL, US
Employees:
1
Taylor Hart Work Experience Details
  • Krypton Concurrency
    Data ETL Developer and Consultant
    Krypton Concurrency
    Saint Petersburg, FL, US
  • Ads.com
    Data Engineer
    Ads.com Nov 2023 - Oct 2024
    As a Data Engineer with Ads.com, my responsibilities included developing ETL orchestration with Dagster, creating SQL BI dashboards, and ingesting OLAP data with Python to gather business insights.
    • Optimized SQL queries and database performance to render data dashboards efficiently for analysis by business stakeholders
    • Integrated data from multiple sources into the AWS environment to reduce overall cloud storage costs
    • Developed Python scripting to ingest data from the reporting database and output a CSV file, delivering business insights that boosted sales
    • Developed Dagster assets to automate replication between databases and reduce replication errors
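    The database-to-CSV reporting step described above can be sketched in plain Python using the standard library; SQLite stands in for the actual reporting database, and the `sales` table and its columns are hypothetical, not taken from the profile:

    ```python
    import csv
    import io
    import sqlite3

    def export_report_to_csv(conn: sqlite3.Connection, query: str) -> str:
        """Run a reporting query and return its result set as CSV text."""
        cur = conn.execute(query)
        buf = io.StringIO()
        writer = csv.writer(buf)
        writer.writerow([col[0] for col in cur.description])  # header row
        writer.writerows(cur.fetchall())
        return buf.getvalue()

    # Hypothetical reporting table for demonstration only.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE sales (region TEXT, revenue REAL)")
    conn.executemany("INSERT INTO sales VALUES (?, ?)",
                     [("east", 1200.0), ("west", 950.0)])
    csv_text = export_report_to_csv(conn, "SELECT region, revenue FROM sales")
    ```

    A production script would swap in the real database driver and write the buffer to a file or object store rather than keeping it in memory.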
  • Truist
    Cloud Data Engineer
    Truist Sep 2021 - Oct 2023
    Charlotte, North Carolina, US
    As a Data Engineer with Truist, my primary responsibility was to develop ETL and batch-streaming pipelines with PySpark and to optimize cloud storage.
    • Implemented solutions for data ingestion and processing using PySpark to deliver business insights across non-technical teams
    • Worked with Spark, Python, AWS, GCP, and SQL to bring more efficient data solutions to a legacy system
    • Migrated virtual machines from GCP to AWS using CloudEndure Migration in order to lower the cost of data storage
    • Integrated data from multiple sources into the AWS environment in order to gain discounts within the AWS storage system
    • Configured data with various ETL tools and instantiated a data lake with AWS S3 for secure, effective, and easy data storage
    • Orchestrated workflows with AWS Step Functions and Lambda to automate data transfer between database storage and reporting layers
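    The Step Functions/Lambda orchestration mentioned above can be sketched as a plain Python handler; the event shape and the in-memory "stores" below are hypothetical stand-ins for the actual database and reporting layers, not the real Truist setup:

    ```python
    # Hypothetical stand-ins for the storage and reporting layers.
    STORAGE: dict[str, list[dict]] = {"daily_loads": [{"id": 1}, {"id": 2}]}
    REPORTING: dict[str, list[dict]] = {}

    def handler(event: dict, context=None) -> dict:
        """Lambda-style handler: copy one table from storage to reporting.

        A Step Functions state machine would invoke this once per table,
        passing the table name in the event payload.
        """
        table = event["table"]
        rows = STORAGE.get(table, [])
        REPORTING[table] = list(rows)  # replicate into the reporting layer
        return {"table": table, "rows_transferred": len(rows)}

    result = handler({"table": "daily_loads"})
    ```

    Returning a small summary dict mirrors how Lambda outputs feed the next state in a Step Functions execution.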
  • Georgia Pacific
    Cloud Data Engineer
    Georgia Pacific Oct 2019 - Aug 2021
    As a Data Engineer with Georgia Pacific, my responsibilities included developing efficient ETL pipelines with Spark, Python, and Kafka.
    • Developed Python scripts for database updates and file manipulation to deliver meaningful insights
    • Configured Spark Streaming applications and developed Sqoop techniques to transfer data quickly
    • Designed Kafka brokers and consumers for data processing integration
    • Implemented data pipelines and prototypes using the Spark ecosystem to efficiently extract, transform, and load data from APIs, relational databases, NoSQL databases, and flat files
    • Managed data in Hadoop data lakes and implemented Jenkins CI systems to store and process large amounts of data while keeping costs low
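    As a rough illustration of the producer/consumer pattern behind the Kafka work described above, here is a minimal in-process sketch using the standard-library `queue` module; the topic name and payloads are invented, and a real deployment would use a Kafka client library against durable, partitioned brokers:

    ```python
    import queue

    # A "topic" modeled as an in-process queue; Kafka brokers provide the
    # same publish/consume contract durably and at scale.
    topic: queue.Queue = queue.Queue()

    def produce(q: queue.Queue, records: list[dict]) -> None:
        """Publish records to the topic, ending with a sentinel."""
        for record in records:
            q.put(record)
        q.put(None)  # sentinel marks end of stream for this demo

    def consume(q: queue.Queue) -> list[dict]:
        """Drain the topic, transforming each record as it arrives."""
        out = []
        while (record := q.get()) is not None:
            out.append({**record, "processed": True})
        return out

    produce(topic, [{"event": "page_view"}, {"event": "click"}])
    consumed = consume(topic)
    ```

    The transform-on-consume step is where a real consumer would enrich or route each event before handing it to the Spark pipeline downstream.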
  • Ford Motor Company
    Big Data Engineer
    Ford Motor Company Oct 2017 - Oct 2019
    Dearborn, Michigan, Us
    At Ford Motor Company, my role as a Data Engineer included transferring data between relational databases, increasing parallelization with streaming pipelines, and optimizing data storage.
    • Implemented incremental imports into Hive tables, allowing parallel database calls during importing to reduce data transfer and processing time
    • Instantiated Hive tables to hold transformed results in tabular format and decrease query load times
    • Designed and amended Hive queries in HiveQL so that business analysts proficient in SQL could provide business insights
    • Programmed Hive UDFs and oversaw ETL to HDFS to maintain data integrity
    • Gained extensive practice importing real-time logs to HDFS with Flume for real-time monitoring
    • Constructed UNIX shell scripts to automate the build process and run regular jobs such as file transfers, reducing manual intervention in routine tasks
    • Utilized Cloudera Manager to set up the Cloudera cluster and performed analysis to identify bottlenecks and optimization opportunities
    • Committed incremental imports to Hive with Sqoop to efficiently transfer data between relational databases and Hadoop
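    The incremental-import idea above, which Sqoop implements with its append mode against a check column, comes down to watermark-based fetching: remember the highest key already imported and pull only newer rows. A minimal sketch, assuming a hypothetical `source_table` with an `id` check column (SQLite stands in for the source RDBMS):

    ```python
    import sqlite3

    def incremental_import(conn: sqlite3.Connection,
                           last_seen_id: int) -> tuple[list[tuple], int]:
        """Fetch only rows newer than the watermark, as an incremental
        import tool would, and return the rows plus the new watermark."""
        rows = conn.execute(
            "SELECT id, payload FROM source_table WHERE id > ? ORDER BY id",
            (last_seen_id,),
        ).fetchall()
        new_watermark = rows[-1][0] if rows else last_seen_id
        return rows, new_watermark

    # Hypothetical source table for demonstration only.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE source_table (id INTEGER PRIMARY KEY, payload TEXT)")
    conn.executemany("INSERT INTO source_table VALUES (?, ?)",
                     [(1, "a"), (2, "b"), (3, "c")])

    first_batch, watermark = incremental_import(conn, last_seen_id=0)
    second_batch, watermark = incremental_import(conn, last_seen_id=watermark)
    ```

    Persisting the watermark between runs is what makes repeated imports cheap: each call transfers only the delta, which is also why parallel per-partition calls shorten the import window.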
  • Suncoast Credit Union
    Data Engineer
    Suncoast Credit Union Sep 2016 - Oct 2017
    Tampa, FL, US
    As a Data Engineer with Suncoast, my role revolved around working with Hive and Tableau to effectively store and display big data.
    • Wrote shell scripts to synchronize workflows that pull data from various databases into Hadoop
    • Transferred data for consumption purposes into HBase and Hive tables
    • Responsible for carrying out upgrades, patches, and bug fixes in the Hadoop cluster within a cluster environment
    • Performed Hive queries to study data in the Hive warehouse with Hive Query Language
    • Coordinated with the DevOps team working with Hadoop to provide large-scale solutions
    • Utilized Hadoop administration tools for installation and management of single-node and multi-node Hadoop clusters
    • Pulled metadata of Hive tables with HiveQL
    • Amended Hive views on top of the source data tables and configured a secured provisioning framework for users to access the data through Hive-based views
    • Launched shell scripts to aid the process of loading data
    • Collected log data from multiple sources, synchronized it with HDFS using Flume, and staged data within HDFS for further examination
    • Defined Hive scripts to take data from
    • Formulated Hive queries to spot emerging trends via comparative analysis between Hadoop data and historical numerical markers
    • Implemented Tableau Desktop connections to the Hortonworks Hive framework containing structured and unstructured data
    • Formed Hive tables with data and drafted Hive queries in HiveQL
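    The Flume-style log staging mentioned above can be sketched as collecting lines from several named sources into one staging file per source; `tempfile` stands in for the HDFS staging directory, and the source names and log lines are invented for illustration:

    ```python
    import pathlib
    import tempfile

    def stage_logs(sources: dict[str, list[str]],
                   staging_dir: pathlib.Path) -> int:
        """Collect log lines from multiple named sources into one staging
        file per source, the way a Flume agent sinks events into HDFS.
        Returns the total number of lines staged."""
        total = 0
        for name, lines in sources.items():
            target = staging_dir / f"{name}.log"
            target.write_text("\n".join(lines) + "\n")
            total += len(lines)
        return total

    staging = pathlib.Path(tempfile.mkdtemp())
    count = stage_logs(
        {"web": ["GET /", "GET /about"], "app": ["login ok"]},
        staging,
    )
    ```

    Once the files land in the staging area, downstream Hive scripts can point external tables at the directory for further examination.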

Frequently Asked Questions about Taylor Hart

What company does Taylor Hart work for?

Taylor Hart works for Krypton Concurrency.

What is Taylor Hart's role at the current company?

Taylor Hart's current role is Data ETL Developer and Consultant.
