Abhishek D
Data Engineer with over 10 years of experience, transitioning from a deep-rooted background in data engineering to a specialized focus on Python-based solutions. Proficient in implementing data-driven solutions and big data analytics across major cloud platforms, including AWS, Azure, and GCP. Skilled in advanced Python programming, using libraries such as Pandas, NumPy, and Scikit-Learn for complex data analysis, machine learning, and predictive modeling. Adept at deploying Python applications to the cloud, with hands-on experience in serverless services such as AWS Lambda, Azure Functions, and GCP Cloud Functions.
Charles Schwab
- Website: schwab.com
- Employees: 33,019
Senior Data Engineer, Charles Schwab
Dec 2022 - Present | Texas, United States
- Led comprehensive data engineering initiatives, applying both Agile and Waterfall methodologies to manage projects from conception through deployment and ongoing optimization, ensuring alignment with compliance and data security standards.
- Used Python libraries such as NumPy and Pandas for precision-oriented data analysis, creating detailed visualizations with Matplotlib to uncover behavioral trends and transactional outcomes.
- Designed and managed robust data infrastructure on AWS EC2, integrating AWS CloudWatch and CloudTrail for continuous monitoring and enhancing data protection within S3 repositories.
- Directed the migration of legacy databases and ETL processes to AWS cloud solutions, with a focus on Redshift and Snowflake, leveraging asynchronous Python task frameworks such as Celery for efficient data handling.
- Improved data migration to analytics platforms such as Snowflake, Oracle, and DB2 using Sqoop, and developed SQL and PL/SQL scripts to support data analysis within Snowflake.
- Led healthcare data engineering projects using Spark, PySpark, Hive, and Hadoop, handling a variety of data formats including JSON, CSV, Parquet, and Avro.
- Designed and implemented data ingestion pipelines using Apache Flume, Apache Sqoop, and custom ETL scripts to load large datasets into Hadoop from various sources.
- Managed and maintained Hadoop clusters, ensuring high availability, performance, and security of the distributed data processing environment.
- Simplified the ingestion of complex datasets from disparate sources using PySpark, with Oozie for pipeline orchestration, ensuring timely and accurate data availability for clinical analysis.
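As an illustration of the Pandas/NumPy analysis work described above, here is a minimal sketch; the column names, function, and sample data are hypothetical, not taken from any Schwab system.

```python
import numpy as np
import pandas as pd

def summarize_transactions(df: pd.DataFrame) -> pd.DataFrame:
    """Aggregate per-customer transaction behavior (hypothetical schema)."""
    summary = df.groupby("customer_id").agg(
        total_amount=("amount", "sum"),
        avg_amount=("amount", "mean"),
        n_txns=("amount", "size"),
    )
    # Flag customers whose average ticket is above the overall median.
    summary["above_median_avg"] = summary["avg_amount"] > summary["avg_amount"].median()
    return summary

txns = pd.DataFrame({
    "customer_id": ["a", "a", "b", "b", "c"],
    "amount": [10.0, 30.0, 5.0, 5.0, 100.0],
})
result = summarize_transactions(txns)
```

The same summary frame could then be handed to Matplotlib for the behavioral-trend charts mentioned above.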
Data Engineer, Cigna
Sep 2021 - Nov 2022 | Texas, United States
- Managed the Software Development Life Cycle (SDLC) from requirement gathering through data analysis and application deployment, leveraging Python for sophisticated data handling while safeguarding patient data privacy.
- Introduced modern data management practices in the healthcare sector using Azure PaaS services, including Azure Data Factory and Azure Data Lake Analytics, for effective data acquisition, transformation, and integration, with Python for processing tasks.
- Oversaw data integration and processing across Azure platforms such as Data Lake and Azure SQL, using Python with Azure Databricks for advanced orchestration and analysis while ensuring regulatory compliance.
- Used Azure SQL Data Warehouse and Python to enhance data visualization capabilities, enabling professionals to access and interpret complex datasets easily.
- Built and maintained extensive data warehouses on platforms such as Azure SQL Data Warehouse, using Python to manage and query diverse data types, including genomic data.
- Developed efficient ETL pipelines with Azure Data Factory, applying Python to automate data movement and transformation across repositories while ensuring data integrity and security.
- Championed the adoption of star-schema design in data warehousing, using Python to transform disparate data sources into coherent SQL tables and optimizing data structures for quick access to information.
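The star-schema work mentioned above can be sketched locally with Pandas; the flat extract, table names, and columns here are hypothetical stand-ins, not Cigna data.

```python
import pandas as pd

def to_star_schema(flat: pd.DataFrame):
    """Split a denormalized extract into a dimension and a fact table."""
    # Dimension: one row per distinct product, with a surrogate key.
    dim_product = (
        flat[["product", "category"]]
        .drop_duplicates()
        .reset_index(drop=True)
    )
    dim_product["product_key"] = dim_product.index + 1

    # Fact: measures plus the foreign key into the dimension.
    fact_sales = flat.merge(dim_product, on=["product", "category"])[
        ["product_key", "sale_date", "amount"]
    ]
    return dim_product, fact_sales

flat = pd.DataFrame({
    "product": ["plan-a", "plan-a", "plan-b"],
    "category": ["hmo", "hmo", "ppo"],
    "sale_date": ["2022-01-01", "2022-01-02", "2022-01-02"],
    "amount": [100.0, 150.0, 200.0],
})
dim, fact = to_star_schema(flat)
```

In a warehouse setting the same split would be materialized as SQL tables, with the surrogate key as the join column.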
Data Engineer, Nike
Oct 2018 - Aug 2021 | Beaverton, Oregon, United States
- Designed and implemented data backup and recovery strategies on Linux systems, using snapshot technologies to ensure data durability and availability in case of failures.
- Integrated Hadoop with other big data ecosystem tools, such as Apache Spark, Apache HBase, and Apache Kafka, to create comprehensive data processing and analytics solutions.
- Directed extensive data warehousing initiatives for financial institutions, employing Amazon Redshift and AWS Data Exchange to merge and integrate vast arrays of financial transaction data and customer information, replacing Snowflake Cloud and IICS.
- Developed an API gateway using Flask to interface between financial databases and third-party service providers, ensuring secure data exchanges and streamlined operations; the gateway was key to expanding the client's service offerings by integrating new payment and reporting services without compromising security.
- Led the integration of AWS QuickSight for a key financial project, enabling enhanced data visualization and analytics; developed custom dashboards and reports that gave stakeholders deep insights into performance, market trends, and customer behavior.
- Boosted financial data processing efficiency using Amazon Redshift and Amazon Kinesis for rapid transformation of data from sources including Amazon S3 and Kinesis Streams, critical for real-time financial market analysis and decision-making.
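A minimal sketch of a Flask route like the API gateway described above; the endpoint path, the in-memory account store, and the payload fields are all hypothetical placeholders for the real databases and third-party services.

```python
from flask import Flask, jsonify, abort

app = Flask(__name__)

# Stand-in for a database or downstream service call (hypothetical data).
_ACCOUNTS = {"123": {"account_id": "123", "balance": 2500.00}}

@app.route("/api/v1/accounts/<account_id>", methods=["GET"])
def get_account(account_id):
    """Proxy a third-party balance lookup to the internal store."""
    account = _ACCOUNTS.get(account_id)
    if account is None:
        abort(404)
    return jsonify(account)

# Exercise the route without running a server, via Flask's test client.
client = app.test_client()
ok = client.get("/api/v1/accounts/123")
missing = client.get("/api/v1/accounts/999")
```

In a real gateway this route would also handle authentication and request validation before touching the backing database.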
Data Engineer, Vanguard
Aug 2016 - Sep 2018 | Philadelphia, Pennsylvania, United States
- Alteryx-to-Azure migration: orchestrated the conversion of complex Alteryx workflows into Azure Data Factory, ensuring seamless data flow and consistency in a real-time production environment.
- Designed and implemented scalable data solutions using Azure Blob Storage and Azure SQL, ensuring high-performance data storage and retrieval.
- Implemented medium-to-large-scale BI solutions on Azure using Azure Data Platform services (Azure Data Lake, Data Factory, Data Lake Analytics, Stream Analytics, Azure SQL DW, HDInsight/Databricks, NoSQL DB).
- Leveraged Azure Data Factory (ADF) pipelines for end-to-end data movement and transformation, orchestrating data workflows across diverse platforms; read and wrote multiple data formats, such as JSON, ORC, and Parquet, in Data Factory using SQL.
- Built ADF pipelines to integrate, cleanse, and load data from disparate sources into Azure SQL databases, automating data processing tasks.
- Enhanced database security by implementing PL/SQL routines for data encryption and access control, complying with industry-standard data protection regulations.
- Deployed a scalable financial data processing platform using Azure Service Fabric, enabling dynamic scaling of resources for large-scale financial audits and compliance checks; the platform significantly improved resilience and fault tolerance, reducing downtime by over 50% during critical audit periods.
- Managed and optimized Teradata and SQL Server databases, applying performance tuning and query optimization for enhanced efficiency.
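The ADF pipelines above moved data between formats such as JSON and Parquet with a cleansing step; as a small local analogue (Python stdlib only, no ADF, with hypothetical field names), here is a JSON-to-CSV conversion that normalizes a numeric column on the way through.

```python
import csv
import io
import json

raw = '[{"id": 1, "amount": " 10.5 "}, {"id": 2, "amount": "20"}]'

def json_to_csv(json_text: str) -> str:
    """Convert a JSON array of records to CSV, cleansing the amount field."""
    records = json.loads(json_text)
    for rec in records:
        # Cleanse: strip whitespace and normalize amounts to float.
        rec["amount"] = float(str(rec["amount"]).strip())
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["id", "amount"])
    writer.writeheader()
    writer.writerows(records)
    return buf.getvalue()

csv_text = json_to_csv(raw)
```

Columnar formats such as Parquet and ORC follow the same read-transform-write pattern, just through format-aware libraries rather than the stdlib.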
Data Analyst, Byte Alpha Systems
Nov 2013 - Jun 2016 | India
- Assisted in the development and maintenance of web applications using Python frameworks such as Django and Flask, learning the architecture and design patterns behind scalable, efficient web services.
- Participated in data analysis and manipulation tasks using Pandas and NumPy, under the guidance of more experienced developers, contributing to complex data analysis projects.
- Supported the data engineering team with data ingestion, cleaning, and transformation in Python, gaining familiarity with tools such as Apache Airflow for automating and streamlining data workflows.
- Contributed to the development and testing of RESTful APIs in Python, learning to integrate applications with front-end technologies and external services.
- Practiced Test-Driven Development (TDD) with oversight, using testing tools such as PyTest to ensure code quality and reliability from the early stages of development.
- Gained exposure to big data and machine learning frameworks by assisting on projects using Apache Spark, Hadoop, Scikit-Learn, and TensorFlow, focusing on the application of Python in data processing and analytics.
- Applied version control, primarily Git, in daily development practice, learning the importance of code management and collaboration in a team environment.
- Participated in Agile and Scrum meetings and processes, learning the dynamics of collaborative work and CI/CD practices using tools such as Jenkins and Docker.
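A small TDD-style sketch of the PyTest practice mentioned above; the function and test are hypothetical, not from any Byte Alpha project. In TDD the test is written first against the desired behavior, then the implementation makes it pass; under pytest the test function would live in a test module and be discovered automatically.

```python
def normalize_email(raw: str) -> str:
    """Lowercase and trim an email address."""
    return raw.strip().lower()

def test_normalize_email():
    # Written before the implementation: states the desired behavior.
    assert normalize_email("  Alice@Example.COM ") == "alice@example.com"
    assert normalize_email("bob@example.com") == "bob@example.com"

test_normalize_email()  # pytest would collect and run this by name convention
```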
Abhishek D Education Details
Computer Science
Frequently Asked Questions about Abhishek D
What company does Abhishek D work for?
Abhishek D works for Charles Schwab.
What is Abhishek D's role at the current company?
Abhishek D's current role is Senior Data Engineer | Azure | AWS | Hive | Hadoop | Spark | Tableau | Kubernetes | Jira | SQL | Terraform.
What schools did Abhishek D attend?
Abhishek D attended Sathyabama Institute Of Science & Technology, Chennai.
Who are Abhishek D's colleagues?
Abhishek D's colleagues are Matthew Mcgoldrick; Tracy O'Brien; Jennifer Price, CISM, CFE, CAMS; Gus Mask, CFP®; Mariajose D'Santiago; Chezney C.; and Shoaib M.