Mohammed Abdul

Mohammed Abdul Email and Phone Number

Data Engineer at Citizens | Actively Seeking New Opportunities | Big Data | SQL | AWS | Hadoop | Azure | PySpark | Kafka | HDFS | ETL | Python | Scala
Mohammed Abdul's Location
Greater Chicago Area, United States
About Mohammed Abdul

A Data Engineer with over seven years of experience in ETL, Big Data, Python/Scala, relational database management systems (RDBMS), and enterprise-level cloud computing and applications.

Mohammed Abdul's Current Company Details
Citizens

Data Engineer at Citizens | Actively Seeking New Opportunities | Big Data | SQL | AWS | Hadoop | Azure | PySpark | Kafka | HDFS | ETL | Python | Scala
Mohammed Abdul Work Experience Details
  • Citizens
    Big Data Engineer
    Citizens Apr 2021 - Present
    Providence, Rhode Island, US
    • Performs analytic queries using Spark SQL and tunes queries for better performance.
    • Manages Azure cloud services, including creating and managing Azure resource groups and Azure Elastic Pools.
    • Creates HDInsight clusters and storage accounts, providing an end-to-end environment for running jobs, including Snowflake.
    • AKS/DevOps: works with the development team on Azure deployments and Azure cloud infrastructure engineering (VNets, resource groups, VMs, SQL PaaS, hybrid connectors).
    • Develops deployment automation to provide a fully functional cloud stack in Azure that supports the client's environment.
    • Builds, migrates, and tests Azure environments and integrations; sets up multi-level network environments using the Azure cloud.
    • Translates business requirements into working logical and physical data models for data warehouses, data marts, and OLAP applications using Erwin Data Modeler.
    • Extracts, transforms, and loads data from source systems into Azure data storage using a combination of Azure Data Factory, T-SQL, Spark, PySpark, SQL, and U-SQL.
    • Performs data extraction from APIs to flat files using PySpark.
    • Works with Azure Data Factory, Azure Data Lake, Azure Blob Storage, and Databricks, with an understanding of Snowflake and Logic Apps.
    • Azure networking, ARM, resource groups, Azure Site Recovery, Azure Backup, storage, and Azure serverless.
    • Extracts data from SharePoint to flat files using Python scripting and moves them to the raw-data zone.
    • Good experience working with Azure Blob and Data Lake Storage and loading data from Azure Synapse Analytics (SQL DW).
    • Good experience with Linux-based scripting; follows PEP 8 guidelines in Python.
  • Philips
    Big Data Engineer
    Philips Mar 2019 - Mar 2021
    Amsterdam, Noord-Holland, NL
    • Writes complex workflow jobs using shell/Bash scripting and schedules them via Oozie, providing a multi-program scheduler that manages Hadoop, Hive, Sqoop, and Spark jobs.
    • Integrated the Hadoop cluster with the Spark engine to perform batch operations, and used the Spark API over Hortonworks Hadoop YARN to perform analytics on data in Hive.
    • Involved in configuring the Hadoop cluster and in Hadoop installation, commissioning, decommissioning, load balancing, troubleshooting, monitoring, and debugging; developed Google Cloud big data applications following an Agile software development life cycle.
    • Designed pipelines with Apache Beam, Kubeflow, and Dataflow, and orchestrated jobs on GCP.
    • Created a DevOps strategy implementing CI/CD with tools such as version control, Jenkins, and Maven, plus configuration and deployment tools such as Chef and Ansible.
    • Utilized Kubernetes and Docker as the runtime environment of the CI/CD system to build, test, and deploy.
    • Developed Ansible playbooks to automate agent installation of Nagios on Windows and Linux servers.
    • Developed and demonstrated a POC to migrate on-prem workloads to Google Cloud Platform using GCS, BigQuery, Cloud SQL, and Cloud Dataproc.
    • Develops Python code using an API gateway to move data from on-premises servers to Data Lake Storage Gen2 and Databricks.
    • Developed Spark and PySpark programs using the Scala API, Pandas, NumPy, and Python on Azure HDInsight for data aggregation and validation; compared the performance of Spark with Hive and SQL, and implemented Spark using Scala and Spark SQL for faster testing and processing of data.
    • Developed analytical components using Scala, PySpark, and Spark Streaming.
    • Writes software scripts to ingest data into Hadoop, and scripts for scalable, maintainable extract, transform, and load (ETL) jobs.
  • TWC IT Solutions
    Data Engineer
    TWC IT Solutions Jun 2017 - Feb 2019
    Eindhoven, Noord-Brabant, NL
    • Performed analytic queries using Spark SQL and tuned queries for better performance.
    • Pruned ingested data to remove duplicates by applying window functions, and performed complex transformations to derive various metrics.
    • Developed Sqoop jobs/scripts to import and export data from MySQL and Oracle.
    • Heavily involved in setting up the CI/CD pipeline using Jenkins, Maven, Nexus, GitHub, and AWS.
    • Experience migrating SQL databases to Azure Data Lake, Azure Data Lake Analytics, Azure SQL Database, Databricks, and Azure SQL Data Warehouse; controlled and granted database access; migrated on-premises databases to Azure Data Lake Store using Azure Data Factory.
    • Created UDFs in Spark for use in Spark SQL.
  • Hilltop Holdings
    Data Engineer
    Hilltop Holdings Mar 2015 - May 2017
    Dallas, Texas, US
    • Analyzed and understood the data generated by source systems, including CSV files, RDBMS tables, and CDRs generated by network elements.
    • Designed and developed data-ingestion processes using Sqoop and Spark SQL; imported data from an Oracle database into HDFS and Hive.
    • Wrote Spark SQL applications for data analysis and processing.
    • Wrote cron jobs to pull data from remote file systems (OLTP servers) to the HDFS staging area.
    • Created Hive managed and external tables using partitioning and bucketing, and performed operations on Hive tables.
    • Wrote scripts and cron jobs to pull processed transaction data from the mediation system to the HDFS landing area (data lake).
    • Created and maintained logical and physical data models, and maintained metadata supporting various enterprise applications.
    • Designed and developed mappings using Oracle SQL, MongoDB, and MySQL.
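Several of the bullets above describe concrete, scriptable patterns. The Citizens role's "data extraction from APIs to flat files", for instance, reduces to fetch-then-serialize. A minimal stdlib-only Python sketch of that step (the real version used PySpark; `fetch_records` is a hypothetical stub standing in for the actual API call, which the profile does not describe):

```python
import csv
import io
import json

def fetch_records():
    # Hypothetical stand-in for the real API call (e.g. an HTTP GET
    # returning JSON); endpoint and auth are not part of this sketch.
    payload = '[{"id": 1, "amount": 10.5}, {"id": 2, "amount": 7.25}]'
    return json.loads(payload)

def records_to_flat_file(records, fileobj):
    """Write a list of JSON records to a flat (CSV) file object."""
    writer = csv.DictWriter(fileobj, fieldnames=sorted(records[0].keys()))
    writer.writeheader()
    writer.writerows(records)

buf = io.StringIO()
records_to_flat_file(fetch_records(), buf)
print(buf.getvalue().splitlines()[0])  # prints "amount,id"
```

In production the file object would be a path in the raw-data zone rather than an in-memory buffer.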
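The Philips role's "data aggregation and validation" work is, at its core, a reconciliation check: aggregate the ingested rows, then confirm the totals still match the source. A dependency-free Python sketch of that pattern (the real jobs ran as PySpark on HDInsight; field names here are illustrative):

```python
from collections import defaultdict

def aggregate_by_key(rows, key_field, value_field):
    """Sum value_field per key_field, mimicking a GROUP BY / Spark groupBy().agg()."""
    totals = defaultdict(float)
    for row in rows:
        totals[row[key_field]] += row[value_field]
    return dict(totals)

def validate_totals(rows, aggregated, value_field):
    """Reconciliation check: grand total of the aggregates must equal the source total."""
    return abs(sum(aggregated.values()) - sum(r[value_field] for r in rows)) < 1e-9

rows = [
    {"region": "EU", "sales": 100.0},
    {"region": "EU", "sales": 50.0},
    {"region": "US", "sales": 75.0},
]
agg = aggregate_by_key(rows, "region", "sales")
print(agg)                                  # prints {'EU': 150.0, 'US': 75.0}
print(validate_totals(rows, agg, "sales"))  # prints True
```

The same check scales to Spark by comparing `df.agg(sum(...))` on the source frame against the summed aggregate frame.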
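The TWC deduplication bullet describes the standard window-function pattern: partition by a business key, order by a timestamp, keep row number 1. In PySpark that is `row_number().over(Window.partitionBy(...).orderBy(...))`; a self-contained Python sketch of the same semantics (record shapes are assumptions, not from the source):

```python
from itertools import groupby
from operator import itemgetter

def dedup_latest(rows, key, order_by):
    """Keep one row per key: the equivalent of
    ROW_NUMBER() OVER (PARTITION BY key ORDER BY order_by DESC) = 1."""
    rows = sorted(rows, key=itemgetter(key, order_by))
    out = []
    for _, group in groupby(rows, key=itemgetter(key)):
        out.append(list(group)[-1])  # last row in ascending order = latest
    return out

events = [
    {"id": "a", "ts": 1, "status": "new"},
    {"id": "a", "ts": 3, "status": "done"},
    {"id": "b", "ts": 2, "status": "new"},
]
print(dedup_latest(events, "id", "ts"))  # one row per id, latest ts wins
```

Sorting then grouping keeps the logic deterministic, which matters when re-running the prune over the same ingest batch.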
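The Hilltop cron-driven pulls to an HDFS staging area follow a simple idempotent pattern: list the source, skip what has already landed, copy the rest, so the job is safe to re-run on every tick. A sketch using local directories in place of the OLTP mount and HDFS staging path (both hypothetical; the real target would be written via `hdfs dfs -put` or an HDFS client):

```python
import shutil
from pathlib import Path

def pull_new_files(source_dir, staging_dir):
    """Copy files from source to staging, skipping names already landed,
    so a cron entry like '0 * * * * pull.py' can re-run safely."""
    source, staging = Path(source_dir), Path(staging_dir)
    staging.mkdir(parents=True, exist_ok=True)
    pulled = []
    for f in sorted(source.iterdir()):
        target = staging / f.name
        if f.is_file() and not target.exists():
            shutil.copy2(f, target)  # copy2 preserves timestamps for auditing
            pulled.append(f.name)
    return pulled
```

A second invocation returns an empty list, which is the property that makes hourly scheduling harmless.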

Mohammed Abdul Education Details

  • Jawaharlal Nehru Technological University
    Electronics and Communications Engineering
  • Trine University
    Master of Science

Frequently Asked Questions about Mohammed Abdul

What company does Mohammed Abdul work for?

Mohammed Abdul works for Citizens.

What is Mohammed Abdul's role at the current company?

Mohammed Abdul's current role is Data Engineer at Citizens.

What schools did Mohammed Abdul attend?

Mohammed Abdul attended Jawaharlal Nehru Technological University, Trine University.
