Karthik S


Senior Big Data Engineer | Big Data | AWS | Azure | Hadoop | Talend | ETL | SQL | Snowflake | Databricks | BI | GCP | Actively looking for Data Engineer roles @ John Deere
Karthik S's Location
Charlotte, North Carolina, United States
About Karthik S

Karthik S is a Senior Big Data Engineer at John Deere, working across Big Data, AWS, Azure, GCP, Hadoop, Talend, ETL, SQL, Snowflake, Databricks, and BI, and is actively looking for Data Engineer roles.

Karthik S's Current Company Details
John Deere

Senior Big Data Engineer
Karthik S Work Experience Details
  • John Deere
    Senior Data Engineer
    Oct 2020 - Present
    Moline, IL, US
    - Involved in the complete big data flow of the application, from upstream data ingestion into HDFS through processing and analysis of the data in HDFS.
    - Developed JSON scripts for deploying pipelines in Azure Data Factory (ADF) that process data using the Cosmos activity.
    - Collaborated with team members and stakeholders in the design and development of the data environment.
    - Experienced in designing RESTful services using Java-based APIs such as Jersey.
    - Used Airflow operational services for batch processing and for scheduling workflows dynamically.
    - Developed customized UDFs in Python to extend Hive and Pig Latin functionality.
    - Created pipelines in ADF using linked services, datasets, and pipelines to extract, transform, and load data between sources such as Azure SQL, Blob Storage, Azure SQL Data Warehouse, and a write-back tool.
    - Worked on performance tuning of Databricks Delta tables.
    - Staged API and Kafka data (in JSON format) into Snowflake, flattening it for different functional services (a sketch of this step follows this entry).
    - Demonstrated expert-level technical capability in Azure batch and interactive solutions, Azure Machine Learning solutions, and operationalizing end-to-end Azure cloud analytics solutions.
    - Day-to-day responsibilities included developing ETL pipelines in and out of the data warehouse and building major regulatory and financial reports using advanced SQL queries in Snowflake.
    - Created Cassandra tables to store data arriving in various formats from different sources.
    - Designed and developed data integration programs in a Hadoop environment with the NoSQL data store Cassandra for data access and analysis.
    - Implemented a one-time data migration of multi-state-level data from SQL Server to Snowflake using Python and SnowSQL.
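    The Snowflake staging step above lends itself to a short illustration. Below is a minimal sketch in Python with the snowflake-connector-python package: JSON files are copied from a stage into a raw VARIANT table, then a nested array is flattened into rows for downstream services. The connection parameters, stage, table, and column names (json_stage, raw_events, flat_events, v, items) are hypothetical, not taken from the profile.

    ```python
    import snowflake.connector

    # Hypothetical connection parameters; real values would come from a vault
    conn = snowflake.connector.connect(
        account="my_account", user="etl_user", password="***",
        warehouse="ETL_WH", database="ANALYTICS", schema="STAGING",
    )
    cur = conn.cursor()

    # Land raw JSON (API or Kafka output) into a single VARIANT column
    cur.execute("CREATE TABLE IF NOT EXISTS raw_events (v VARIANT)")
    cur.execute("""
        COPY INTO raw_events
        FROM @json_stage
        FILE_FORMAT = (TYPE = 'JSON')
    """)

    # Flatten the nested array so each element becomes its own row
    # (flat_events is assumed to exist with matching columns)
    cur.execute("""
        INSERT INTO flat_events
        SELECT v:id::STRING, f.value:name::STRING, f.value:qty::NUMBER
        FROM raw_events, LATERAL FLATTEN(input => v:items) f
    """)
    conn.close()
    ```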
  • Edward Jones
    Big Data Engineer
    Jun 2018 - Oct 2020
    St. Louis, MO, US
    - Responsible for analyzing large data sets and deriving customer usage patterns by developing new MapReduce programs in Java.
    - Developed Spark/Scala and Python code for a regular-expression (regex) project in the Hadoop/Hive environment, on Linux and Windows, against big data resources.
    - As part of a data migration, wrote many SQL scripts to reconcile mismatched data and loaded history data from Teradata SQL into Snowflake.
    - Used Spark SQL to load JSON data, create schema RDDs, load them into Hive tables, and handle structured data (a sketch of the JSON-to-Hive step follows this entry).
    - Performed structural modifications using MapReduce and Hive and analyzed data using visualization/reporting tools (Tableau).
    - Designed a Kafka producer client using Confluent Kafka and produced events into Kafka topics.
    - Evaluated big data technologies and prototyped solutions to improve the data processing architecture.
    - Performed data modeling, development, and administration of relational and NoSQL databases.
    - Used Python in Spark to extract data from Snowflake and upload it to Salesforce on a daily basis.
    - Used Python to write an event-based service on AWS Lambda to deliver real-time data to One-Lake (a data lake solution in Cap-One Enterprise).
    - Exported data into Snowflake by creating staging tables to load data from different files in Amazon S3.
    - Developed reusable objects such as PL/SQL program units and libraries, database procedures and functions, and database triggers to be used by the team while satisfying business rules.
    - Experienced with data analytics, data reporting, ad-hoc reporting, graphs, scales, pivot tables, and OLTP reporting.
    - Wrote scripts against Oracle, SQL Server, and Netezza databases to extract data for reporting and analysis, and imported and cleansed high-volume data from sources such as DB2, Oracle, and flat files into SQL Server.
    - Subscribed to Kafka topics with a Kafka consumer client and processed the events in real time using Spark.
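    The JSON-to-Hive bullet above can be sketched briefly. This is a minimal PySpark example, not the original code; in modern Spark the "schema RDD" is a DataFrame, and the HDFS path, view name, column names, and table name here are hypothetical.

    ```python
    from pyspark.sql import SparkSession

    # Hive support lets saveAsTable persist into the Hive metastore
    spark = (
        SparkSession.builder
        .appName("json-to-hive")
        .enableHiveSupport()
        .getOrCreate()
    )

    # Read semi-structured JSON from HDFS; Spark infers the schema
    events = spark.read.json("hdfs:///data/raw/events/")
    events.createOrReplaceTempView("events_raw")

    # Reshape with Spark SQL before persisting
    daily = spark.sql("""
        SELECT user_id, event_type, to_date(event_ts) AS event_date
        FROM events_raw
        WHERE event_type IS NOT NULL
    """)

    # Persist as a managed Hive table for downstream queries
    daily.write.mode("overwrite").saveAsTable("analytics.daily_events")
    ```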
  • Micron Technology
    Big Data Engineer
    Mar 2016 - May 2018
    Boise, Idaho, US
    - Analyzed existing application programs and tuned SQL queries using execution plans, Query Analyzer, SQL Profiler, and the Database Engine Tuning Advisor to enhance performance.
    - Migrated an on-premises database structure to the Confidential Redshift data warehouse.
    - Responsible for ETL and data validation using SQL Server Integration Services.
    - Wrote various data normalization jobs for new data ingested into Redshift.
    - Implemented and managed ETL solutions and automated operational processes.
    - Used Kafka capabilities such as distribution, partitioning, and the replicated commit log for messaging systems by maintaining feeds, and created applications that monitor consumer lag within Apache Kafka clusters (a sketch of a lag check follows this entry).
    - Managed security groups on AWS, focusing on high availability, fault tolerance, and auto-scaling using Terraform templates, along with continuous integration and continuous deployment with AWS Lambda and AWS CodePipeline.
    - Used Hive SQL, Presto SQL, and Spark SQL for ETL jobs, choosing the right technology for each job.
    - Created various complex SSIS/ETL packages to extract, transform, and load data.
    - Defined facts and dimensions and designed the data marts using Ralph Kimball's dimensional data mart modeling methodology in Erwin.
    - Defined and deployed monitoring, metrics, and logging systems on AWS.
    - Published interactive data visualization dashboards, reports, and workbooks on Tableau and SAS Visual Analytics.
    - Created data quality scripts using SQL and Hive to validate successful data loads and data quality, and built various data visualizations using Python and Tableau.
    - Used ZooKeeper to store the offsets of messages consumed for a specific topic and partition by a specific consumer group in Kafka.
    - Worked on data pre-processing and cleaning to support feature engineering, and performed data imputation for missing values in the dataset using Python.
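    The consumer-lag monitoring mentioned above boils down to comparing each partition's committed offset with its high watermark. A minimal sketch using the confluent-kafka Python client, with hypothetical broker, group, topic, and partition-count values:

    ```python
    from confluent_kafka import Consumer, TopicPartition

    # Hypothetical broker and group names
    consumer = Consumer({
        "bootstrap.servers": "broker:9092",
        "group.id": "etl-consumers",
        "enable.auto.commit": False,
    })

    partitions = [TopicPartition("sensor-readings", p) for p in range(3)]

    # committed() returns the group's acknowledged offsets; the high
    # watermark is the newest offset the broker holds. Lag is the gap.
    for tp in consumer.committed(partitions, timeout=10):
        low, high = consumer.get_watermark_offsets(tp, timeout=10)
        committed = tp.offset if tp.offset >= 0 else low
        print(f"partition {tp.partition}: lag = {high - committed}")

    consumer.close()
    ```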
  • Hitachi Vantara
    Database Developer
    Apr 2013 - Dec 2015
    Santa Clara, California, US
    - Involved in the complete software development lifecycle (SDLC).
    - Worked on different data flow and control flow tasks: For Loop containers, Sequence containers, Script tasks, Execute SQL tasks, and package configuration.
    - Made extensive use of expressions, variables, and row counts in SSIS packages.
    - Created SSIS packages to pull data from SQL Server and export it to Excel spreadsheets, and vice versa.
    - Created batch jobs and configuration files to build automated processes with SSIS.
    - Performed data validation and cleansing of staged input records before loading into the data warehouse.
    - Automated the extraction of files such as flat and Excel files from sources such as FTP and SFTP (Secure FTP); a Python sketch of an equivalent SFTP pull follows this entry.
    - Deployed and scheduled reports using SSRS to generate daily, weekly, monthly, and quarterly reports.
    - Loaded data from sources such as OLE DB and flat files into a SQL Server database using SSIS packages, and created data mappings to load data from source to destination.
    - Built SSIS packages to fetch files from remote locations such as FTP and SFTP, decrypt them, transform them, mart them into the data warehouse, and provide proper error handling and alerting.
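    SSIS packages are built in a designer rather than in code, but the SFTP extract-then-validate flow they implement can be approximated in Python with paramiko. This is an analogous sketch, not the original packages; the host, credentials, file names, and the order_id key column are all assumed for illustration.

    ```python
    import csv
    import paramiko

    # Hypothetical host, credentials, and paths
    transport = paramiko.Transport(("sftp.example.com", 22))
    transport.connect(username="etl_user", password="***")
    sftp = paramiko.SFTPClient.from_transport(transport)

    # Pull the daily flat file, as the SSIS FTP/SFTP task did
    sftp.get("/outbound/daily_orders.csv", "/tmp/daily_orders.csv")
    sftp.close()
    transport.close()

    # Basic validation/cleansing pass before staging the load:
    # keep only rows that carry the (assumed) key column
    with open("/tmp/daily_orders.csv", newline="") as fh:
        rows = [r for r in csv.DictReader(fh) if r.get("order_id")]
    print(f"{len(rows)} valid rows ready to stage")
    ```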

Frequently Asked Questions about Karthik S

What company does Karthik S work for?

Karthik S works for John Deere.

What is Karthik S's role at the current company?

Karthik S's current role at John Deere is Senior Big Data Engineer.
