Jaz M

Jaz M Email and Phone Number

Citizen @ Intact
Toronto, Ontario, Canada
Jaz M's Location
Whitby, Ontario, Canada
About Jaz M

I am an experienced Data Engineer with over 5 years of expertise in designing, developing, and optimizing data pipelines and architecture. My skills span data modeling, ETL processes, data warehousing, and data quality solutions. I have a proven track record of migrating SQL databases to Azure Data Lake and various Azure services. I am also well-versed in setting up and maintaining CI/CD pipelines, with proficiency in automation tools such as Git, Terraform, and Ansible. My hands-on experience includes AWS and a range of Big Data technologies, making me an asset for driving data-driven insights and informed decision-making.

Jaz M's Current Company Details
Intact

Citizen
Toronto, Ontario, Canada
Website:
intactfc.com
Employees:
10937
Jaz M Work Experience Details
  • Intact
    Big Data Engineer
    Intact Aug 2021 - Present
    Toronto, ON
    o Analyzing the data flow from different sources to target and providing the corresponding design architecture in the Azure environment.
     o Creating an Application Interface Document for the downstream to create a new interface to transfer and receive files through Azure Data Share.
     o Creating pipelines, data flows, and complex data transformations and manipulations using ADF and PySpark with Databricks.
     o Creating a Source-to-Target Mapping for SSIS package development; designing ETL/SSIS packages to process data from various sources into target databases; creating SQL Server configurations and performance-tuning stored procedures and SSIS packages.
     o Creating high-level technical design documents and application design documents per the requirements, and delivering clear, well-communicated, and complete design documents.
     o Attending requirement calls and working with business analysts and solution architects to understand client requirements.
     o Acting as technical liaison between the customer and the team on all AWS technical aspects.
     o Designed and implemented a test environment on AWS.
     o Creating DA specs and mapping data flows, and providing the details to developers along with HLDs.
     o Ingested data in mini-batches and performed RDD transformations on those mini-batches using Spark Streaming to perform streaming analytics in Databricks.
     o Improved performance by optimizing compute time for processing the streaming data, and saved the company cost by optimizing cluster run time.
     o Performing ongoing monitoring, automation, and refinement of data engineering solutions; preparing complex SQL views and stored procedures in Azure SQL DW and Hyperscale.
     o Loaded files from ADLS into the target Azure data warehouse using U-SQL scripts.
     o Designed and developed a new solution to process NRT data using Azure Stream Analytics, Azure Event Hub, and Service Bus Queue.
  • Manulife
    Data Engineer
    Manulife Jan 2020 - Jul 2021
    Waterloo, ON
    o Used AWS EMR to transform and move large amounts of data into and out of other AWS data stores and databases, such as Amazon Simple Storage Service (Amazon S3) and Amazon DynamoDB.
     o Conducted data blending and data preparation using Alteryx and SQL for Tableau consumption, and published data sources to Tableau Server.
     o Developed Kibana dashboards based on Logstash data, and integrated different source and target systems into Elasticsearch for near-real-time log analysis and monitoring of end-to-end transactions.
     o Designed and developed a security framework providing fine-grained access to objects in AWS S3 using AWS Lambda and DynamoDB.
     o Set up and worked on Kerberos authentication principals to establish secure network communication on the cluster, and tested HDFS, Hive, Pig, and MapReduce cluster access for new users.
     o Designed and set up an enterprise data lake to support various use cases, including analytics, processing, storage, and reporting of voluminous, rapidly changing data.
     o Used Spark SQL's Scala and Python interfaces, which automatically convert RDD case classes to schema RDDs.
     o Imported data from sources such as HDFS/HBase into Spark RDDs and performed computations using PySpark to generate the output response.
     o Responsible for maintaining quality reference data in the source by performing operations such as cleaning and transformation, and for ensuring integrity in a relational environment by working closely with stakeholders and the solution architect.
     o Developed a reusable framework, to be leveraged for future migrations, that automates ETL from RDBMS systems to the data lake using Spark Data Sources and Hive data objects.
     o Performed end-to-end architecture and implementation assessments of various AWS services, including Amazon EMR, Redshift, and S3.
  • Veeva Systems
    Hadoop Developer
    Veeva Systems Sep 2017 - Nov 2019
    Toronto, ON
    o Designed workflows by scheduling Hive processes for log-file data streamed into HDFS using Flume.
     o Developed schemas to handle reporting requirements using Tableau.
     o Actively participated in weekly meetings with the technical teams to review the code.
     o Used the JUnit framework for unit testing of the application.
     o Wrote Hive queries to meet the business requirements.
     o Have a deep and thorough understanding of ETL tools and how they can be applied in a Big Data environment.
     o Participated in the requirement-gathering and analysis phase of the project, documenting the business requirements by conducting workshops/meetings with various business users.
     o Implemented test scripts to support test-driven development and continuous integration.
     o Used Sqoop to extract data from SQL Server and MySQL databases into HDFS.
     o Developed MapReduce (YARN) jobs for cleaning, accessing, and validating the data.
     o Wrote MapReduce jobs using Pig Latin and optimized the existing Hive and Pig scripts.
     o Wrote Hive scripts in HiveQL to de-normalize and aggregate the data.
     o Automated workflows using Java to export data from databases into Hadoop.
     o Involved in moving log files generated from various sources to HDFS for further processing through Flume.
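As a rough illustration of the mini-batch streaming pattern mentioned in the Intact role above (transform and aggregate each small batch as it arrives rather than the stream as a whole), here is a minimal plain-Python sketch. The record fields, sources, and sample data are hypothetical; real work of this kind would use Spark Streaming on Databricks rather than stdlib code.

```python
from collections import defaultdict

def process_micro_batch(batch, running_totals):
    """Apply a filter/aggregate step to one mini-batch and fold the result
    into running totals, mimicking the micro-batch model (illustrative)."""
    for record in batch:
        if record.get("amount") is None:   # drop malformed events
            continue
        running_totals[record["source"]] += record["amount"]
    return running_totals

# Two mini-batches arriving from a stream (hypothetical sample data)
totals = defaultdict(float)
process_micro_batch([{"source": "hub", "amount": 10.0},
                     {"source": "queue", "amount": 5.0}], totals)
process_micro_batch([{"source": "hub", "amount": 2.5},
                     {"source": "queue", "amount": None}], totals)
```

The key design point is that state (here `totals`) persists across batches, so each batch only pays for its own records.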
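The Manulife role mentions Spark SQL deriving a schema automatically from case classes. A toy analogue of that inference step, in plain Python with made-up sample rows (Spark itself inspects case-class or Row fields; this sketch only conveys the idea):

```python
def infer_schema(records):
    """Derive a column -> type-name mapping from sample rows, loosely
    analogous to Spark SQL inferring a schema from objects (illustrative)."""
    schema = {}
    for row in records:
        for col, val in row.items():
            # keep the first type seen for each column
            schema.setdefault(col, type(val).__name__)
    return schema

rows = [{"id": 1, "name": "a", "score": 0.5},
        {"id": 2, "name": "b", "score": 1.5}]
schema = infer_schema(rows)
```

In Spark the inferred schema is then used to register the data as a table for SQL queries; here it is just a dictionary.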
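The Veeva role mentions MapReduce jobs for cleaning and validating data. A minimal stdlib sketch of the map (validate/emit key-value pairs) and reduce (group and sum) phases, with hypothetical input records; a real job would run on YARN over HDFS data:

```python
from itertools import groupby
from operator import itemgetter

def map_phase(lines):
    """Parse raw lines into (key, count) pairs, dropping invalid records."""
    for line in lines:
        parts = line.strip().split(",")
        if len(parts) == 2 and parts[1].isdigit():   # validation step
            yield parts[0], int(parts[1])

def reduce_phase(pairs):
    """Group pairs by key (after the shuffle's sort) and sum the counts."""
    out = {}
    for key, group in groupby(sorted(pairs, key=itemgetter(0)),
                              key=itemgetter(0)):
        out[key] = sum(v for _, v in group)
    return out

raw = ["clicks,3", "views,10", "bad record", "clicks,2"]
result = reduce_phase(map_phase(raw))
```

The sort before `groupby` stands in for the shuffle that a MapReduce framework performs between the two phases.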

Jaz M Education Details
  • University Of Mumbai

Frequently Asked Questions about Jaz M

What company does Jaz M work for?

Jaz M works for Intact

What is Jaz M's role at the current company?

Jaz M's current role is Citizen.

What schools did Jaz M attend?

Jaz M attended University Of Mumbai.

Who are Jaz M's colleagues?

Jaz M's colleagues are Guillaume Lambert, Krista Colwell, Emma Smith-Ritchie, Sylvie Guérard, Christy Lewis, Elanie Tordjman, Remi Jiang.
