D G Email and Phone Number
D G is a Lead Data Architect at Fidelity Investments. Profile headline: Actively Seeking New Opportunities | Data Engineer | Big Data | SQL | AWS | Hadoop | Azure | PySpark | Kafka | Yarn | HDFS | Scala | ETL.
Fidelity Investments
- Website: fidelity.com
- Employees: 52,767
Senior Data Engineer
Fidelity Investments · Oct 2021 - Present · Charlotte, North Carolina, United States
• Involved in the complete Big Data flow of the application, from upstream data ingestion into HDFS through processing and analyzing the data in HDFS.
• Demonstrated expert-level technical capability in Azure Batch and Interactive solutions, Azure Machine Learning solutions, and operationalizing end-to-end Azure Cloud Analytics solutions.
• Day-to-day responsibilities include developing ETL pipelines in and out of the data warehouse and building major regulatory and financial reports using advanced SQL queries in Snowflake.
• Developed JSON scripts for deploying pipelines in Azure Data Factory (ADF) that process data using the Cosmos activity.
• Designed and developed data integration programs in a Hadoop environment with the NoSQL data store Cassandra for data access and analysis.
• Implemented a one-time migration of multi-state data from SQL Server to Snowflake using Python and SnowSQL.
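A one-time SQL Server-to-Snowflake migration like the one described above is typically staged through CSV files that Snowflake then ingests with PUT/COPY INTO. A minimal sketch of the staging step, assuming rows arrive as an iterable of tuples (the batch size and column names are illustrative; connection handling and the COPY command itself are omitted):

```python
import csv
import io


def stage_rows_to_csv(rows, columns, batch_size=10000):
    """Yield CSV-formatted batches suitable for staging into Snowflake.

    `rows` is any iterable of tuples; each yielded string is the contents
    of one staging file (header plus up to `batch_size` data rows).
    """
    batch = []
    for row in rows:
        batch.append(row)
        if len(batch) == batch_size:
            yield _to_csv(columns, batch)
            batch = []
    if batch:  # flush the final partial batch
        yield _to_csv(columns, batch)


def _to_csv(columns, batch):
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(columns)
    writer.writerows(batch)
    return buf.getvalue()
```

Chunking keeps memory bounded for large source tables and lets Snowflake load the staged files in parallel.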
Big Data Engineer
Edward Jones · Jun 2018 - Oct 2021 · St Louis, Missouri, United States
• Designed and developed a real-time stream processing application using Spark, Kafka, Scala, and Hive to perform streaming ETL and apply machine learning.
• Led cross-functional Agile teams in the successful delivery of multiple software projects, including web applications and mobile apps, using Scrum methodology.
• Used Airflow to schedule Hive, Spark, and MapReduce jobs.
• Developed reusable objects such as PL/SQL program units and libraries, database procedures and functions, and database triggers for team use, satisfying business rules.
• Developed Spark code using Scala and Spark SQL/Streaming for faster data processing.
• Developed Spark programs in Python, applying functional-programming principles to process complex structured data sets.
• Extracted, transformed, and loaded data sources to generate CSV data files with Python and SQL queries.
• Worked with the Hadoop ecosystem; implemented Spark using Scala and used the DataFrame and Spark SQL APIs for faster data processing.
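The functional-programming style of record processing mentioned above can be sketched without a Spark cluster: pure transforms composed into a pipeline, each taking and returning a list of records. The field names (`symbol`, `amount`) and the individual rules are illustrative assumptions, not the actual production logic:

```python
from functools import reduce


def compose(*funcs):
    """Compose transforms left to right: compose(f, g)(x) == g(f(x))."""
    return lambda records: reduce(lambda acc, f: f(acc), funcs, records)


def drop_nulls(records):
    # Discard records with no amount; they cannot be processed downstream.
    return [r for r in records if r.get("amount") is not None]


def normalize_symbol(records):
    # Uppercase ticker symbols so joins and group-bys are consistent.
    return [{**r, "symbol": r["symbol"].upper()} for r in records]


def to_cents(records):
    # Store money as integer cents to avoid floating-point drift.
    return [{**r, "amount": round(r["amount"] * 100)} for r in records]


pipeline = compose(drop_nulls, normalize_symbol, to_cents)
```

Because each step is a pure function, the same transforms map directly onto Spark operations (`filter`, `map`) when scaled out.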
Big Data Engineer
Micron Semiconductors · Mar 2016 - May 2018 · Boise, Idaho, United States
• Used Terraform to migrate legacy and monolithic systems to Amazon Web Services; developed environments for different applications on AWS by provisioning EC2 instances with Docker, Bash, and Terraform.
• Managed AWS infrastructure as code using Terraform and used scripts to automate instances.
• Used Hive SQL, Presto SQL, and Spark SQL for ETL jobs, choosing the right technology for each job.
• Defined facts and dimensions and designed data marts using Ralph Kimball's dimensional data mart modeling methodology in Erwin.
• Defined and deployed monitoring, metrics, and logging systems on AWS.
• Worked on publishing interactive data visualization dashboards, reports, and workbooks in Tableau and SAS Visual Analytics.
• Created data-quality scripts using SQL and Hive to validate successful data loads and the quality of the data; created various data visualizations using Python and Tableau.
• Implemented data streaming capability using Kafka and Talend for multiple data sources.
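Post-load data-quality checks like the ones described above usually assert on row counts and required columns. A minimal plain-Python stand-in for such a script (the source describes Hive/SQL checks; the column names and check rules here are illustrative assumptions):

```python
def validate_load(rows, expected_count, required_columns):
    """Return a list of data-quality failures for a completed load.

    An empty list means the load passed. `rows` is a list of dicts as
    fetched from the target table; `expected_count` comes from the source.
    """
    failures = []
    # Reconciliation: target row count must match the source extract.
    if len(rows) != expected_count:
        failures.append(f"row count {len(rows)} != expected {expected_count}")
    # Completeness: required columns must be populated on every row.
    for i, row in enumerate(rows):
        missing = [c for c in required_columns if row.get(c) in (None, "")]
        if missing:
            failures.append(f"row {i}: missing {missing}")
    return failures
```

In practice the same checks are expressed as SQL aggregates so they run inside the warehouse instead of pulling rows to the client.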
Database Developer
Hitachi Vantara India Pvt Ltd · Jul 2013 - Dec 2015 · India
• Involved in the complete Software Development Lifecycle (SDLC).
• Worked on different Data Flow and Control Flow tasks, including the For Loop container, Sequence container, Script task, Execute SQL task, and package configurations.
• Made extensive use of expressions, variables, and Row Count in SSIS packages.
• Created SSIS packages to pull data from SQL Server and export it to Excel spreadsheets, and vice versa.
• Created batch jobs and configuration files to automate processes using SSIS.
• Performed data validation and cleansing of staged input records before loading into the data warehouse.
• Automated extraction of files such as flat and Excel files from sources including FTP and SFTP (Secure FTP).
• Interacted with business analysts and stakeholders to gather business requirements.
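The validate-and-cleanse step before a warehouse load, done above in SSIS data flows, can be sketched in plain Python: trim string fields, then route records with a missing key to a reject pile rather than the warehouse. The `customer_id` key field is an illustrative assumption:

```python
def cleanse_staged(records):
    """Split staged input records into (clean, rejects) before loading.

    Clean rows have whitespace-trimmed string fields and a non-empty
    key; rejects are kept aside for review, mirroring an SSIS error path.
    """
    clean, rejects = [], []
    for rec in records:
        # Trim every string field; leave other types untouched.
        rec = {k: v.strip() if isinstance(v, str) else v for k, v in rec.items()}
        if not rec.get("customer_id"):
            rejects.append(rec)  # missing key: do not load
            continue
        clean.append(rec)
    return clean, rejects
```

Routing rejects to a separate output instead of failing the whole batch matches the usual staged-load pattern: the load completes, and the bad rows are investigated afterwards.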
Frequently Asked Questions about D G
What company does D G work for?
D G works for Fidelity Investments.
What is D G's role at the current company?
D G's current role is Lead Data Architect at Fidelity Investments (headline: Actively Seeking New Opportunities | Data Engineer | Big Data | SQL | AWS | Hadoop | Azure | PySpark | Kafka | Yarn | HDFS | Scala | ETL).
Who are D G's colleagues?
D G's colleagues are Brandon Webster, Kerri Van Briggle, Conner Siekman, Tim Humble, Lauren Henson, Charlene Beebe, Jack Giuliotti.