• Azure Data Engineer with 8+ years of experience building data-intensive applications and tackling challenging architectural and scalability problems.
• Experience building data pipelines with Azure Data Factory and Azure Databricks, loading data into Azure Data Lake, Azure SQL Database, and Azure SQL Data Warehouse, and controlling and granting database access.
• Experience developing Spark applications using Spark SQL, PySpark, and Delta Lake in Databricks for data extraction, transformation, and aggregation from multiple file formats, analyzing and transforming data to uncover insights into customer usage patterns.
• Good understanding of Spark architecture and MPP architecture, including Spark Core, Spark SQL, DataFrames, Spark Streaming, driver and worker nodes, stages, executors, and tasks.
• Productionized models in cloud environments, including automated processes, CI/CD pipelines, monitoring/alerting, and troubleshooting; presented models and results to technical and non-technical audiences.
• Experience in database design and development with business intelligence using SQL Server 2014/2016, Integration Services (SSIS), DTS packages, SQL Server Analysis Services (SSAS), DAX, OLAP cubes, Star Schema, and Snowflake Schema.
• Excellent communication skills and work ethic; a proactive team player with a positive attitude.
• Strong skills in visualization tools: Power BI and Microsoft Excel (formulas, pivot tables, charts, and DAX commands).
• Highly skilled in automation tools, continuous integration workflows, managing binary repositories, and containerizing application deployments and test environments.
• Experienced with the configuration management tool Chef, developing Chef cookbooks to configure, deploy, and manage web applications, config files, databases, mount points, and packages across existing infrastructure.
• Experience creating Docker containers leveraging existing Linux containers and AMIs, in addition to creating Docker containers from scratch.
• Experienced application security researcher with broad hands-on experience.
• Exceptional communication skills and business insight to comprehend and convey technical issues; proven ability to collaborate, lead, and work toward the organization's growth.
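The extraction-and-aggregation work described above ran on PySpark and Delta Lake; as a minimal illustration of the same pattern, here is a plain-Python sketch that parses usage events from two file formats into a common shape and aggregates them per customer. The field names (`customer_id`, `action`) and the sample data are hypothetical.

```python
import csv
import io
import json
from collections import Counter

def load_events(csv_text: str, json_text: str) -> list:
    """Parse usage events from CSV and JSON sources into one list of dicts."""
    events = [dict(row) for row in csv.DictReader(io.StringIO(csv_text))]
    events += json.loads(json_text)
    return events

def usage_by_customer(events: list) -> Counter:
    """Aggregate event counts per customer: the core of a usage-pattern report."""
    return Counter(e["customer_id"] for e in events)

# Hypothetical sample inputs standing in for real multi-format sources.
csv_text = "customer_id,action\nc1,login\nc2,login\nc1,upload\n"
json_text = '[{"customer_id": "c2", "action": "download"}]'
counts = usage_by_customer(load_events(csv_text, json_text))
```

In a Databricks job the same two steps would typically be `spark.read` calls into DataFrames followed by a `groupBy().count()`.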
-
Senior Data Specialist, Mitra Ai
Toronto, ON, Canada
-
Senior Data Engineer, Service Canada (MPBSD)
Mar 2021 - Present | Greater Toronto Area, Canada
• Involved in requirements gathering, analysis, design, development, change management, and deployment.
• Developed real-time streaming applications using PySpark, Apache Flink, Kafka, and Hive on a distributed Hadoop cluster.
• Used Apache Spark with Python to develop and run Big Data analytics and machine learning applications; implemented machine learning use cases with Spark ML and MLlib.
• Extracted data from heterogeneous sources and applied complex business logic to normalize raw network data for BI teams to detect anomalies.
• Designed and developed Flink pipelines to consume streaming data from Kafka, applying business logic to transform and serialize raw data.
• Developed a common Flink module for serializing and deserializing Avro data by applying schemas.
• Developed a Spark Streaming pipeline to batch real-time data, detect anomalies by applying business logic, and write the anomalies to an HBase table.
• Implemented a layered architecture for Hadoop to modularize design; developed framework scripts to enable quick development; designed reusable shell scripts for Hive, Sqoop, Flink, and Pig jobs; standardized error handling, logging, and metadata management processes.
• Indexed processed data and created dashboards and alerts in Splunk for support teams to act on.
• Responsible for operations and support of the big data analytics platform, Splunk, and Tableau visualizations.
• Designed, developed, and managed a dashboard control panel for customers and administrators using Tableau, PostgreSQL, and REST API calls.
• Designed and developed applications using Apache Spark, Scala, Python, NiFi, S3, and AWS EMR on the AWS cloud to format, cleanse, validate, apply schemas, and build data stores on S3.
• Developed CI/CD pipelines to automate build and deployment to Dev, QA, and production environments.
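The anomaly-detection bullet above describes rule-based flagging of network records before writing them to HBase. A minimal sketch of that rule logic in plain Python (the production version ran in Spark Streaming); the record fields, thresholds, and row-key scheme here are all assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class NetworkRecord:
    host: str
    bytes_out: int
    error_rate: float

# Hypothetical thresholds; a real job would load these from configuration.
MAX_BYTES_OUT = 1_000_000
MAX_ERROR_RATE = 0.05

def detect_anomalies(batch: list) -> list:
    """Flag records breaching either threshold. In the real pipeline the
    flagged rows would be written to an HBase table keyed by host."""
    anomalies = []
    for rec in batch:
        reasons = []
        if rec.bytes_out > MAX_BYTES_OUT:
            reasons.append("bytes_out")
        if rec.error_rate > MAX_ERROR_RATE:
            reasons.append("error_rate")
        if reasons:
            anomalies.append({"row_key": rec.host, "reasons": reasons})
    return anomalies

batch = [
    NetworkRecord("h1", 2_000_000, 0.01),
    NetworkRecord("h2", 10_000, 0.02),
    NetworkRecord("h3", 5_000, 0.20),
]
flagged = detect_anomalies(batch)
```

In Spark Streaming the same predicate would be applied per micro-batch via `foreachBatch`, with only the flagged subset written out.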
-
Azure Engineer, Rogers Communications
Jan 2018 - Feb 2021 | Greater Toronto Area, Canada
• Designed and implemented data acquisition and ingestion; managed Hadoop infrastructure and other analytics tools (Splunk, Tableau).
• Developed a Spark batch job to automate creation and metadata updates of external Hive tables built on top of datasets residing in HDFS.
• Responsible for continuous monitoring and management of Elastic MapReduce (EMR) clusters through the AWS console.
• Working knowledge of build automation and CI/CD pipelines.
• Developed Python scripts to automate data ingestion pipelines for multiple data sources and deployed Apache NiFi on AWS.
• Developed a common Spark data-serialization module for converting complex objects into byte sequences using Avro, Parquet, JSON, and CSV formats.
• Worked on ER modeling, dimensional modeling (Star Schema, Snowflake Schema), data warehousing, and OLAP tools.
• Populated HDFS and PostgreSQL with large volumes of data using Apache Kafka.
• Designed and developed a REST API (Commerce API) providing connectivity to PostgreSQL through Java services.
• Designed a batch audit process in batch/shell script to monitor each ETL job and report status, including table name, start and finish time, number of rows loaded, and job status.
• Developed Spark jobs in PySpark to perform ETL from SQL Server to Hadoop.
• Designed and developed Tableau visualizations, preparing dashboards using calculations, parameters, calculated fields, groups, sets, and hierarchies.
• Developed mapping documents to map columns from source to target.
• Created Azure Data Factory (ADF) pipelines using Azure PolyBase and Azure Blob Storage.
• Performed ETL using Azure Databricks; migrated on-premises Oracle ETL processes to Azure Synapse Analytics.
• Worked on Python scripting to automate generation of scripts. Data curation done using Azure Data -
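The batch audit process above reported table name, start/finish times, rows loaded, and status for each ETL job. A minimal Python sketch of that audit record (the original was a shell script; the function name and the stub loader are hypothetical):

```python
from datetime import datetime, timezone

def run_with_audit(table_name: str, load_fn) -> dict:
    """Run one ETL load and return an audit row with the fields the
    batch audit process reported: table, start/finish, rows loaded, status."""
    start = datetime.now(timezone.utc)
    try:
        rows = load_fn()          # the real load step (e.g. a Sqoop/Spark job)
        status = "SUCCESS"
    except Exception:
        rows = 0
        status = "FAILED"
    finish = datetime.now(timezone.utc)
    return {
        "table_name": table_name,
        "start_time": start.isoformat(),
        "finish_time": finish.isoformat(),
        "rows_loaded": rows,
        "status": status,
    }

# Stub loader standing in for a real ETL job; returns a row count.
audit = run_with_audit("dim_customer", lambda: 1250)
```

Each audit row would then be appended to a log table or file so support teams can see per-job status at a glance.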
Business Intelligence Engineer, Qualitytree Solutions
Jan 2015 - Mar 2017 | Hyderabad, Telangana, India
• Applied change data capture (insert, update, and delete records) to SQL Server tables.
• Generated several SSRS reports from the new system.
• Extensive work creating ETL packages using SSIS.
• Integrated TFS with Visual Studio 2005 and 2008.
• Restricted data for users using row-level security and user filters.
• Developed enterprise solutions using fast Agile SDLC methodologies (Scrum).
• Created reports according to the requirement documents provided by customers.
• Created database objects: schemas, tables, indexes, views, user-defined functions, triggers, stored procedures, constraints, and roles.
• Used various SSIS transformations to load data from flat files, web services, XML files, and database objects into SQL databases.
• Generated daily, weekly, and monthly reports per customer requirements using subscriptions; prepared tabular reports, crosstab reports, and sub-reports.
• Partitioned cubes to increase cube performance (SSAS).
• Designed reports with SSRS 2008, including cascading parameters, gauges, and interactive charts.
• Built enterprise-grade, web-based reporting with SQL Server Reporting Services (SSRS), creating reports that extract content from various data sources.
• Created custom Visual Basic and Access VBA code to automate business processes and enterprise ActiveX controls (spreadsheet controls), and enhanced the Access GUI.
• Worked on tabular data warehouses and DAX operations for SSAS 2012 and 2008 OLAP databases.
• High level of responsibility for the development and management of SSAS, SSRS, and SSIS projects.
• Created KPI reports for cubes and optimized reports.
• Created star-schema dimensional cubes using SQL Server Analysis Services (SSAS).
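The change-data-capture bullet above describes applying insert, update, and delete records to SQL Server tables. As an illustration of that merge semantics, here is a plain-Python sketch with the target table modeled as a dict keyed by primary key; the change-record shape (`op`, `key`, `row`) is an assumption, not the actual CDC format used.

```python
def apply_cdc(target: dict, changes: list) -> dict:
    """Apply change-data-capture records (insert/update/delete) to a target
    table modeled as {primary_key: row}; mirrors a MERGE against SQL Server."""
    for change in changes:
        op, key = change["op"], change["key"]
        if op in ("insert", "update"):
            target[key] = change["row"]   # upsert the new row image
        elif op == "delete":
            target.pop(key, None)         # delete is a no-op if key is absent
        else:
            raise ValueError(f"unknown CDC operation: {op}")
    return target

table = {1: {"name": "Ann"}, 2: {"name": "Bob"}}
changes = [
    {"op": "update", "key": 1, "row": {"name": "Anne"}},
    {"op": "delete", "key": 2},
    {"op": "insert", "key": 3, "row": {"name": "Cal"}},
]
result = apply_cdc(table, changes)
```

In T-SQL the same three cases collapse into one `MERGE ... WHEN MATCHED / WHEN NOT MATCHED` statement driven by the CDC capture table.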
Ali Mahmood
Milton, ON