Advaitha C

Advaitha C Email and Phone Number

Data Engineer @ AT&T
Advaitha C's Location
United States
About Advaitha C

Advaitha C is a Data Engineer at AT&T.

Advaitha C's Current Company Details
AT&T

Data Engineer
Advaitha C Work Experience Details
  • AT&T
    Data Engineer
    AT&T Apr 2022 - Present
    Dallas, TX, US
    • Developed Spark applications using Spark SQL in Databricks for data extraction, transformation, and aggregation from multiple file formats, analyzing and transforming the data to uncover insights into customer usage.
    • Moved data between the AWS S3 ecosystem and Snowflake in both directions.
    • Coordinated with report analysts to integrate the data mart and the BusinessObjects reporting universe.
    • Built data pipelines in Python using Airflow.
    • Implemented data ingestion using Spark, loading data from CSV, Parquet, and XML files.
    • Responsible for performance tuning of Spark applications: setting the right batch interval time, the correct level of parallelism, and memory tuning.
    • Used partitioning and bucketing in Hive for performance optimization.
    • Developed Hive queries to process data for visualization, data transformation, and data analysis.
    • Converted existing Sqoop and Hive jobs to Spark SQL applications that read data from RDBMSs using JDBC and write it to Hive tables.
    • Used the Snowflake JDBC connector to create temp tables using Scala.
    • Developed a shell script to perform data profiling on ingested data with the help of Hive bucketing.
    • Created and worked on Sqoop jobs with incremental load to populate Hive external tables.
    • Used Spark DataFrame persistence and caching mechanisms to reduce data-processing overhead and improve query performance.
    • Loaded and transformed large sets of semi-structured data such as XML, JSON, Avro, and Parquet.
    • Optimized existing Hadoop algorithms using SparkContext, Spark SQL, DataFrames, and pair RDDs.
    • Responsible for production ETL workload job monitoring using Control-M, along with ETL fixes and enhancements raised by business stakeholders and users.
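    The pipeline work above pairs Airflow orchestration with Spark-based ingestion from S3 into Snowflake. A minimal sketch of such an Airflow DAG follows; the dag_id, task names, and the extract/load callables are illustrative assumptions, not details taken from this profile.

      # Minimal Airflow 2.x DAG sketch; all names here are hypothetical.
      from datetime import datetime

      from airflow import DAG
      from airflow.operators.python import PythonOperator

      def extract_from_s3():
          # Placeholder: pull raw CSV/Parquet/XML files from S3.
          print("extracting raw files from S3")

      def load_to_snowflake():
          # Placeholder: write the transformed data into Snowflake.
          print("loading transformed data into Snowflake")

      with DAG(
          dag_id="s3_to_snowflake_example",
          start_date=datetime(2022, 4, 1),
          schedule="@daily",  # Airflow 2.4+ keyword; older versions use schedule_interval
          catchup=False,
      ) as dag:
          extract = PythonOperator(task_id="extract", python_callable=extract_from_s3)
          load = PythonOperator(task_id="load", python_callable=load_to_snowflake)
          extract >> load  # extract runs before load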
  • Leozues Dinesmart Technologies Pvt. Ltd.
    Data Analyst
    Leozues Dinesmart Technologies Pvt. Ltd. Jan 2019 - Oct 2021
    • Developed a Slowly Changing Dimension algorithm for the Change Data Capture (CDC) mechanism using Spark SQL, Spark DataFrames, Scala, Hive, and AWS EC2 to capture changes from NoSQL databases; the processed changes are stored in AWS S3.
    • Converted HQL scripts to Snowflake; the respective jar files in HQL were converted to JavaScript UDFs. Good experience with Snowflake micro-partitions, data clustering, and performance tuning.
    • Developed Python/PySpark scripts and UDFs using both DataFrames/SQL and RDDs in Spark for data aggregation, queries, and writing data back into AWS S3, including Spark broadcasts and effective, efficient joins and transformations.
    • Worked on performance tuning of Spark applications: setting the right batch interval time, broadcasting, and memory tuning.
    • Involved in every stage of the data product development life cycle, from coordinating initial requirements with business owners, through development with subject matter experts, to production deployments.
    • Used Informatica for ETL and reporting.
    • Involved in loading and transforming large sets of structured, semi-structured, and unstructured data from relational databases into HDFS using Sqoop imports.
    • Developed Sqoop scripts to import and export data from relational sources and handled incremental loading of customer and transaction data by date.
    • Developed simple and complex MapReduce programs in Java for data analysis on different data formats.
    • Developed and implemented core API services using Scala and Spark.
    • Optimized MapReduce jobs to use HDFS efficiently by using various compression mechanisms.
    • Worked on partitioning Hive tables and running the scripts in parallel to reduce script run time.
    • Worked on data serialization formats, converting complex objects into sequences of bits using Avro, Parquet, JSON, and CSV.
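    The CDC algorithm described above compares an incoming snapshot against the existing dimension to isolate inserts and updates. Below is a hedged PySpark sketch of that comparison step (the profile cites Scala; PySpark is used here to keep all examples in one language); the table names, the customer_id key, and the tracked name column are assumptions for illustration.

      # Hedged PySpark CDC sketch: all table/column names are hypothetical.
      from pyspark.sql import SparkSession, functions as F

      spark = SparkSession.builder.appName("cdc_sketch").getOrCreate()

      current = spark.table("dim_customer")       # existing dimension rows (assumed name)
      incoming = spark.table("staging_customer")  # fresh source snapshot (assumed name)

      joined = incoming.alias("new").join(
          current.alias("old"),
          F.col("new.customer_id") == F.col("old.customer_id"),
          "left",
      )

      # Rows missing from the dimension are inserts; rows whose tracked
      # attribute differs are updates (a Type 2 SCD would also close the
      # superseded row with an end date).
      changes = joined.withColumn(
          "change_type",
          F.when(F.col("old.customer_id").isNull(), F.lit("insert"))
           .when(F.col("new.name") != F.col("old.name"), F.lit("update")),
      ).filter(F.col("change_type").isNotNull())

      changes.select("new.*", "change_type").write.mode("append").parquet(
          "s3a://example-bucket/cdc-output/"  # hypothetical output location
      )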
  • Marriott International
    Hadoop Developer
    Marriott International Feb 2017 - Nov 2018
    Bethesda, MD, US
    • Imported legacy data from SQL Server and Teradata into Hadoop.
    • As part of the data migration, wrote many SQL scripts to reconcile mismatched data and worked on loading the history data from Teradata SQL to Hadoop.
    • Wrote Python code to manipulate and organize data frames so that all attributes in each field were formatted identically.
    • Developed SQL scripts to upload, retrieve, manipulate, and handle sensitive data (National Provider Identifier data, i.e., name, address, SSN, phone number) in Teradata, SQL Server Management Studio, and Snowflake databases for the project.
    • Implemented UNIX scripts to define the use-case workflow, process the data files, and automate the jobs.
    • Created bashrc files and all other XML configurations to automate the deployment of Hadoop.
    • Created and worked on Sqoop jobs with incremental load to populate Hive external tables.
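    One bullet above describes Python code that reformats every attribute in a data frame identically. A small pandas sketch of that kind of normalization follows; the columns and formatting rules are hypothetical examples, not the project's actual schema.

      # Hedged pandas sketch: normalize every text field to one consistent
      # format. Column names and rules are illustrative assumptions.
      import pandas as pd

      df = pd.DataFrame({
          "name": ["  Alice ", "BOB", "carol"],
          "phone": ["(555) 123-4567", "555.987.6543", "5550001111"],
      })

      # Trim whitespace and apply one consistent casing to every string column.
      for col in df.select_dtypes(include="object").columns:
          df[col] = df[col].str.strip().str.title()

      # Strip punctuation from phone numbers so they share one canonical format.
      df["phone"] = df["phone"].str.replace(r"\D", "", regex=True)

      print(df)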
  • T. Rowe Price
    Hadoop Developer
    T. Rowe Price Jul 2015 - Jan 2017
    Baltimore, Maryland, US
    • Responsible for loading customers' data and event logs into HBase using the Java API.
    • Created HBase tables to store variable formats of input data coming from different portfolios.
    • Involved in adding huge volumes of data in rows and columns to store data in HBase.
    • Used Hive to find correlations between customers' browser logs on different sites and analyzed them to build risk profiles for such sites.
    • Performed end-to-end performance tuning of Hadoop clusters and Hadoop MapReduce routines against very large data sets.
    • Created and maintained technical documentation for launching Hadoop clusters and for executing Hive queries and Pig scripts.
    • Created user accounts and gave users access to the Hadoop cluster.
    • Implemented secure authentication for the Hadoop cluster using the Kerberos authentication protocol.
    • Developed Pig Latin scripts to extract data from the web server output files and load it into HDFS.
    • Developed Pig UDFs to pre-process the data for analysis.
    • Successfully loaded files to Hive and HDFS from MongoDB.
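    The HBase loading above was done with the Java client API; to keep these sketches in one language, here is a rough Python analogue built on the Thrift-based happybase library. The table name, column family, and row-key scheme are illustrative assumptions.

      # Hedged happybase sketch of writing event logs into HBase; the
      # profile used the Java API, so treat this only as an analogue.
      import happybase

      connection = happybase.Connection("hbase-thrift-host")  # hypothetical host

      # One column family per logical group of event attributes.
      connection.create_table("event_logs", {"event": dict()})
      table = connection.table("event_logs")

      # HBase rows are keyed; each cell lives under family:qualifier.
      table.put(
          b"customer-42|2015-07-01T00:00:00",
          {
              b"event:type": b"login",
              b"event:source": b"web",
          },
      )

      # Scan back a row range for one customer.
      for key, data in table.scan(row_prefix=b"customer-42|"):
          print(key, data)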
  • Wati
    Software Engineer
    Wati Jun 2012 - Aug 2013
    Manhattan Beach, California, US
    • Wrote SQL queries, stored procedures, and triggers to perform back-end database operations.
    • Developed user interface screens using HTML, CSS, and JSP.
    • Involved in unit testing using JUnit.
    • Used Maven for building the application and Tomcat Server for deployments.
    • Used CVS for source code management.
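    The first bullet above mentions SQL triggers for back-end operations. This self-contained illustration of the trigger pattern uses Python's sqlite3 module for brevity (the original stack was Java/JSP against a production database); the schema is a hypothetical example.

      # Hedged trigger sketch: an insert into orders is automatically
      # recorded in an audit table. Table and column names are made up.
      import sqlite3

      conn = sqlite3.connect(":memory:")
      cur = conn.cursor()
      cur.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
      cur.execute("CREATE TABLE order_audit (order_id INTEGER, logged_at TEXT)")

      cur.execute("""
          CREATE TRIGGER log_order AFTER INSERT ON orders
          BEGIN
              INSERT INTO order_audit VALUES (NEW.id, datetime('now'));
          END
      """)

      cur.execute("INSERT INTO orders (total) VALUES (?)", (42.50,))
      print(cur.execute("SELECT * FROM order_audit").fetchall())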

Advaitha C Education Details

  • Osmania University
    Bachelor of Engineering (BE)

Frequently Asked Questions about Advaitha C

What company does Advaitha C work for?

Advaitha C works for AT&T.

What is Advaitha C's role at the current company?

Advaitha C's current role is Data Engineer.

What schools did Advaitha C attend?

Advaitha C attended Osmania University.
