Dynamic Data Engineer | Transforming Raw Data into Strategic Assets | Data Solutions | Data Pipeline Design

As a results-driven Data Engineer with over 6 years of experience, I am dedicated to transforming data into actionable insights that drive business success. My journey has equipped me with a diverse skill set, from designing and implementing sophisticated data pipelines to optimizing data workflows with technologies such as Spark, Scala, and Azure.

I have a proven track record of enhancing operational efficiency, reducing processing times by up to 40% through automation and streamlined ETL processes. My expertise in SQL and NoSQL databases, including Snowflake and MongoDB, enables me to build scalable solutions that improve data accessibility and help organizations harness the full potential of their data.

In previous roles, I have architected high-performance data pipelines that process over 2 million events per minute with Kafka, delivering real-time insights that let stakeholders make informed decisions quickly. My experience with visualization tools like Tableau and Power BI has allowed me to create intuitive dashboards that translate complex data into compelling narratives, fostering a data-driven culture within organizations.

A strong advocate of Agile methodologies, I thrive in collaborative environments where I can mentor others and continuously improve processes. My commitment to excellence extends beyond technical skills: I am passionate about articulating complex concepts to non-technical partners, ensuring everyone understands the value and impact of our data initiatives.

Looking ahead, I am eager to tackle new challenges that push the boundaries of data engineering and to explore innovative solutions that align with evolving business needs.
Let’s connect and discuss how we can leverage data as a strategic asset to drive transformative change and achieve remarkable outcomes together!
Verizon
- Website: verizon.com
- Employees: 151,940
Data Engineer, Verizon
Feb 2023 - Present | Irving, Texas, United States
• Developed Spark jobs using Scala/PySpark and Spark SQL for faster data processing, reducing processing time by 30%.
• Developed Scala scripts using DataFrames/Datasets/SQL and RDD/MapReduce in Spark for data aggregation and queries, writing data back into the OLTP system through Sqoop.
• Created graphical reports, tabular reports, scatter plots, geographical maps, dashboards, and parameters in Tableau and Microsoft Power BI, improving insights delivery for stakeholders by 20%.
• Built data warehouse structures, creating facts, dimensions, and aggregate tables through dimensional modeling with Star and Snowflake schemas.
• Created Spark clusters and configured high-concurrency clusters using Azure Databricks to speed up the preparation of high-quality data.
• Extracted, transformed, and loaded data from source systems into Azure data storage services using a combination of Azure Data Factory, Spark SQL, and U-SQL (Azure Data Lake Analytics).
• Ingested data into one or more Azure services (Azure Data Lake, Azure Storage, Azure SQL, Azure DW) and processed it in Azure Databricks.
• Developed JSON scripts for deploying pipelines in Azure Data Factory (ADF) that process data using the SQL activity.
• Used Git for version control and source code management.
• Created Airflow DAGs to sync files from Box, analyze data quality, and alert on missing files.
• Implemented a CI/CD pipeline using Jenkins and Airflow for containers with Docker and Kubernetes.
• Designed and documented resolutions for operational problems, following standards and procedures, using JIRA.
• Developed SQL and PL/SQL stored procedures for efficient data querying and retrieval.
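The bullets above describe a classic extract-transform-load flow. As a minimal pure-Python sketch of that staging pattern (all source, field, and target names here are hypothetical illustrations, not the actual ADF/Databricks pipeline):

```python
# Minimal ETL sketch: extract -> transform -> load staging pattern.
# All record and field names are illustrative, not the real pipeline's.

def extract(source_rows):
    """Pull raw records from a source system (here, an in-memory list)."""
    return list(source_rows)

def transform(rows):
    """Normalize field types and drop records missing a customer id."""
    cleaned = []
    for row in rows:
        if not row.get("customer_id"):
            continue  # reject incomplete records
        cleaned.append({
            "customer_id": row["customer_id"],
            "amount": float(row.get("amount", 0)),
        })
    return cleaned

def load(rows, target):
    """Append transformed amounts to the target store, keyed by id."""
    for row in rows:
        target.setdefault(row["customer_id"], []).append(row["amount"])
    return target

raw = [
    {"customer_id": "c1", "amount": "19.99"},
    {"customer_id": "", "amount": "5.00"},   # dropped: no id
    {"customer_id": "c1", "amount": "3.5"},
]
warehouse = load(transform(extract(raw)), {})
```

In a real Spark/ADF pipeline each stage would be a distributed job rather than an in-memory function, but the stage boundaries are the same.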
Data Engineer, Deloitte
Oct 2021 - Jul 2022 | United States
• Involved in the full Software Development Life Cycle (SDLC): business requirements analysis, preparation of technical design documents, data analysis, logical and physical database design, coding, testing, implementation, and deployment to business users.
• Developed ETL pipelines into and out of the data warehouse using a combination of Python and Snowflake, writing SQL queries against Snowflake.
• Created reports in Power BI to visualize SharePoint list items, improving report delivery time by 15%.
• Developed Tableau reports on business information to analyze patterns in the business.
• Designed, set up, maintained, and administered Azure SQL Database, Azure Analysis Services, Azure SQL Data Warehouse, and Azure Data Factory.
• Optimized Hive tables using techniques like partitioning and bucketing to provide better performance for HiveQL queries.
• Used tools like Jira and GitHub to update documentation and code.
• Worked on SQL queries in dimensional and relational data warehouses; performed data analysis and data profiling using complex SQL queries on various systems.
• Followed Agile methodology, participating in daily stand-ups and backlog grooming.
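The Hive optimization bullet relies on partitioning: rows are physically grouped by a partition key so a filtered query only touches matching groups. A pure-Python sketch of the idea (Hive/Spark actually do this at the storage layer; the column names are invented for illustration):

```python
# Sketch of Hive-style partition pruning: group rows by a partition key,
# then answer a filtered query by scanning only the matching partition.
from collections import defaultdict

def partition_by(rows, key):
    """Group rows into partitions keyed by the given column."""
    parts = defaultdict(list)
    for row in rows:
        parts[row[key]].append(row)
    return parts

events = [
    {"dt": "2022-01-01", "user": "a"},
    {"dt": "2022-01-01", "user": "b"},
    {"dt": "2022-01-02", "user": "c"},
]
parts = partition_by(events, "dt")

# A query filtered on dt reads one partition instead of every row.
scanned = parts["2022-01-02"]
```

Bucketing works similarly but hashes the key into a fixed number of groups, which helps joins and sampling rather than range filters.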
Data Engineer, Cognizant
Nov 2019 - Oct 2021 | Pennsylvania, United States
• Worked with business users to gather and define business requirements and analyze possible technical solutions.
• Developed Spark jobs to clean data obtained from various feeds, making it suitable for ingestion into Hive tables for analysis.
• Created custom input formats in Spark to handle various file formats, increasing processing efficiency.
• Worked on AWS Redshift for data warehousing and querying large datasets; utilized AWS Athena for serverless querying of structured data in S3, optimizing query performance with partitioning and parallel execution.
• Created graphical reports, tabular reports, scatter plots, geographical maps, dashboards, and parameters in Tableau and Microsoft Power BI, improving insights delivery for stakeholders by 20%.
• Developed and managed ETL jobs using AWS Glue to automate data extraction, transformation, and loading, leveraging PySpark for complex transformations.
• Leveraged Amazon S3 as a data lake to store raw and processed data, facilitating easy access for Glue jobs and downstream analytics.
• Designed and maintained end-to-end ETL pipelines for real-time and batch processing using AWS services, including Redshift Spectrum and S3, ensuring delivery of data to business intelligence tools like Tableau for visualization.
• Developed multiple Spark jobs in Scala and Python for data cleaning and preprocessing.
• Ran PySpark jobs on a Kubernetes cluster for faster data processing.
• Designed and developed Tableau visualizations, preparing dashboards using calculations, parameters, calculated fields, groups, sets, and hierarchies.
• Performed data integration and extract, transform, load (ETL) processes.
• Worked in a Snowflake environment to remove redundancy and loaded real-time data from various sources into HDFS using Kafka.
• Implemented SQL and PL/SQL stored procedures.
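The Athena bullet's "optimizing query performance with partitioning" comes down to the S3 key layout: encoding the partition column in the object key lets the engine prune whole prefixes before reading any data. A small sketch of that layout (bucket paths and file names are hypothetical):

```python
# Sketch of the Hive/Athena partitioned layout on S3: the partition column
# ("dt" here) is encoded in the object key, so a filtered query can discard
# non-matching objects by prefix alone. Keys below are invented examples.
keys = [
    "raw/events/dt=2021-06-01/part-0000.parquet",
    "raw/events/dt=2021-06-01/part-0001.parquet",
    "raw/events/dt=2021-06-02/part-0000.parquet",
]

def prune(keys, dt):
    """Keep only objects under the requested partition prefix."""
    prefix = f"raw/events/dt={dt}/"
    return [k for k in keys if k.startswith(prefix)]
```

Athena performs this pruning from the table's partition metadata rather than by listing keys, but the effect is the same: a query filtered on `dt` never scans the other days' files.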
Data Engineer, Credit One Bank
Mar 2018 - Oct 2019 | Las Vegas, Nevada, United States
• Designed and developed Spark jobs using Scala for end-to-end data pipelines, handling batch processing of large datasets.
• Used Spark and Spark SQL to read Parquet data and create Hive tables via the Scala API.
• Developed complex, maintainable, easy-to-use Python and Scala code that satisfies application requirements for data processing and analytics using built-in libraries.
• Developed Spark code in Python and Spark SQL for faster testing and processing of data, loading data into Spark RDDs and performing in-memory computation to generate output with lower memory usage.
• Designed, developed, tested, and maintained Tableau reports based on user requirements.
• Developed ETL processes using Spark SQL, RDDs, and DataFrames.
• Migrated MapReduce programs to Spark transformations using Scala, reducing execution time by 40%.
• Worked with various data feeds such as JSON, CSV, and XML, and implemented a data lake.
• Analyzed SQL scripts and designed solutions to implement them using PySpark.
• Used SQL queries and other tools to perform data analysis and profiling.
• Followed Agile methodology, participating in daily Scrum meetings, sprint planning, showcases, and retrospectives.
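The MapReduce-to-Spark migration mentioned above boils down to replacing explicit map/shuffle/reduce phases with chained in-memory transformations. A pure-Python word-count sketch of both shapes (illustrative only; the real migration was in Scala against Spark's RDD API):

```python
# The same word count written two ways: explicit MapReduce phases versus
# one chained in-memory transformation (the shape Spark's API encourages).
from collections import Counter
from itertools import chain

lines = ["spark spark scala", "scala python"]

# MapReduce style: map to (word, 1) pairs, then group-and-sum by key.
mapped = [(word, 1) for line in lines for word in line.split()]
groups = {}
for word, one in mapped:
    groups[word] = groups.get(word, 0) + one

# Chained-transformation style: one pass, no shuffle bookkeeping in user code.
counts = Counter(chain.from_iterable(line.split() for line in lines))
```

Spark still shuffles under the hood for a grouped aggregation, but keeping intermediate results in memory instead of writing them to disk between phases is where most of the speedup comes from.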
Sriram A: Education
The University of Texas at Arlington
Frequently Asked Questions about Sriram A
What company does Sriram A work for?
Sriram A works for Verizon.
What is Sriram A's role at the current company?
Sriram A's current role is Data Engineer.
What schools did Sriram A attend?
Sriram A attended The University of Texas at Arlington.
Who are Sriram A's colleagues?
Sriram A's colleagues are Dustin Moore, Julie Anne Kellett, Cynthia Sibley, Bill Maher, Rahul Kumar, Lakshman Sagar, and Christopher Carey.