Michael Mitchell

Michael Mitchell Email and Phone Number

AWS Developer @ JJ Tech Inc
Denver, CO, US
Michael Mitchell's Location
Houston, Texas, United States
Michael Mitchell's Contact Details

Work email:
Personal email: n/a
About Michael Mitchell

MS Fabric trained by SQL School and Pragmatic Works. Experienced engineer providing companies with expertise in Transact-SQL, database design, OLTP (working database) and OLAP cubes (reporting database), data visualization, data science, and cloud computing. I have fundamental skills in Tableau, Alteryx, SQL, Spotfire, Snowflake, Snowsight, the SnowSQL client, SnowPro (Python), ETL, ELT, data warehousing, the data lakehouse, Azure Databricks, Microsoft Synapse, and Azure Data Factory (PaaS). I have also worked in software development environments with Snowflake (SaaS), Teradata, Oracle, and the Microsoft stack to query and report from hot and cold archived data in the data warehouse and OLAP environment.

Pluralsight training profile: https://app.pluralsight.com/profile/michael-mitchell-aa
Udemy training: https://www.udemy.com/user/michael-mitchell-59/

Microsoft domain knowledge: MS Excel, Power Pivot, Power BI, Power Query, Power BI Cloud, data gateways, Microsoft Report Builder, SSRS, SSIS, SSAS, DAX, MDX, Azure Data Factory, Azure Synapse, Cosmos DB, Azure Databricks, Azure Functions, Data Lake, Delta Lake, Lakehouse, MS Fabric, dedicated SQL pools, storage accounts, SQL Server Agent, SSMS, Visual Studio, Event Hubs & IoT Hub, Stream Analytics, DB migrations, Cosmos DB migrations, SQL Server optimization, OLAP cubes, SQL database administration, data warehousing, DQS & MDS, and database diagrams.

Non-Microsoft knowledge: fundamentals of Snowflake, MySQL, Tableau, Spotfire, BigQuery, Alteryx, GCP, Redshift, Informatica, and dbt.

T-SQL: stored procedures, functions, CTEs, window functions, cursors, subqueries, MERGE, temp tables, joins, set operators, derived tables, triggers, and PIVOT & UNPIVOT.

Over the past 15+ years, I have collaborated with operations and IT, relieving pressure on IT departments by delivering reporting and data analytics for operations managers.
• SQL Developer utilizing the Microsoft Business Intelligence stack (T-SQL, SSAS MDX, SSAS Tabular, SSIS for ETL, SSRS for reports, Power Query, Excel, Power BI Desktop & Power BI Pro) to assist clients with data visualization.
• Power Platform: Power Apps, Power Automate, dataflows & datasets, Power BI Desktop, Power BI Pro, DAX Studio, Tabular Editor 2 & 3, Bravo, & DAX Formatter.
• 200+ certifications obtained on LinkedIn, Udemy, Super Data Science, Maven Analytics, Pluralsight, and other third-party sites in R, Python, SQL, the SQL Server stack, Visual Studio, statistics, Tableau & Tableau Prep, Power BI, Spotfire, etc.

Michael Mitchell's Current Company Details
JJ Tech Inc
AWS Developer
Denver, CO, US
Michael Mitchell Work Experience Details
  • JJ Tech Inc
    AWS Developer
    JJ Tech Inc
    Denver, CO, US
  • IS&T IT Services
    Data Engineer
    IS&T IT Services Aug 2024 - Present
    Houston, Texas, US
    Creating data pipelines, ingesting data, creating serverless tables and Spark notebooks, building dbt transformations, and troubleshooting issues for project managers working with shipping data analytics. Tracing errors across Production, Reporting, UAT, Dev, and Staging, covering data quality, source data, and vendor URL data using Postman APIs. Troubleshooting and debugging Azure Data Factory pipelines in the Production, UAT, and Dev environments, reading errors and correcting incorrect outputs. Data migration from ERP and SaaS systems using 50+ API connections to ingest data into the Bronze data lake layer, transforming it in the Silver layer with PySpark, and delivering clean, transformed data to the Gold layer for Power BI reporting. Using metadata from operations and IT to create a new enriched metadata layer, metadata repository, and enriched data dictionary stored in an Azure SQL Database for 50+ PSLs. Extracting data from USCG APIs using IMO numbers and activity IDs, landing the data as Parquet in Azure Data Lake, and using Azure Synapse serverless SQL to expose the Parquet files as external tables and views in the Bronze layer. Utilizing Python, Pandas, PySpark, REST APIs, Postman, SQL Server Management Studio, Azure Data Studio, Visual Studio Code for Python and SQL, user-defined URLs for Postman GET tests, Azure SQL Database, storage accounts, Azure Data Factory, Azure Synapse, Azure Key Vault, Azure Storage Explorer, the Azure Synapse data lakehouse, serverless SQL in Azure Synapse, Azure Delta Lake, Azure DevOps, an Azure logical server, dedicated SQL pools, Databricks sources, Apache Spark, ARM templates, Azure Log Analytics, Microsoft Fabric, ELT, ETL, Power BI, PowerShell, data governance, HTTP requests, the Azure CLI, Git, medallion architecture, source control, control tables, upserts, and Spark SQL.
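    A minimal PySpark sketch of the Bronze-to-Silver step described above; the lake paths, container names, and business-key columns are illustrative assumptions, not the project's real ones:

      from pyspark.sql import SparkSession
      from pyspark.sql import functions as F

      # Hypothetical ADLS paths; the real container/folder names are not given above.
      BRONZE = "abfss://bronze@examplelake.dfs.core.windows.net/uscg/activity"
      SILVER = "abfss://silver@examplelake.dfs.core.windows.net/uscg/activity"

      spark = SparkSession.builder.appName("bronze-to-silver").getOrCreate()

      raw = spark.read.parquet(BRONZE)                       # raw Parquet landed from the APIs

      clean = (
          raw.dropDuplicates(["imo_number", "activity_id"])  # assumed business key
             .filter(F.col("imo_number").isNotNull())
             .withColumn("ingested_at", F.current_timestamp())
      )

      # Write as Delta so the Gold layer and Power BI get transactional reads.
      clean.write.format("delta").mode("overwrite").save(SILVER)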
  • M & M Hydraulic Fracturing
    Azure Cosmos DB & Azure Synapse & Google BigQuery, Snowflake, & AWS Redshift & Azure Functions
    M & M Hydraulic Fracturing Feb 2017 - Present
    Cosmos DB: key-value, graph, and document stores. Working with databases, collections, documents, & JSON scripts. Increased & decreased allocated Request Units (scale up & scale down) & storage allocation. Partitioning container items by dividing them with the partition key. Creation of Azure Cosmos DB accounts, databases, notebooks, containers (collections, tables, and graphs), and items (documents, rows, nodes, and edges); querying data from the tables, importing data into the tables, using APIs to connect to resources, etc. Using the Data Migration Tool to import data into Azure Cosmos DB from MongoDB, SQL Server, Excel, DynamoDB, Cosmos containers, etc. Choosing Cosmos items based on API usage: document & collection, row & table, or node & edge for graphs. Working in different containers: collections, tables, and graphs. Querying item JSON data using SQL: SELECT, FROM, WHERE, GROUP BY, and the HAVING clause. Cosmos DB Migration Tool: importing data from multiple sources (JSON, MongoDB, SQL, CSV, Azure Table, DynamoDB, HBase, Azure Cosmos DB, etc.); bulk-inserting multiple JSON files into multiple collections; identifying source and target destinations, connection strings, collections, partition keys, collection throughput (RUs), & ID fields. Cosmos DB notebooks: worked in a notebook (Python, Spark) environment to code, query, visualize, and analyze data in Cosmos DB; used Jupyter notebooks for data cleaning transformations, data visualization, and statistical modeling; utilized the Pandas DataFrame to create a two-dimensional data frame to import and analyze data. Azure Functions: connecting to Azure data sources and messaging solutions to react to specific events; reacting to DML operations (insert, update, & delete); working with trigger events (creating a Cosmos DB trigger); retrieving the logs to monitor the DML operations.
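    A small sketch of the Cosmos DB querying described above, using the azure-cosmos Python SDK (v4); the account URL, database, container, and partition key value are hypothetical:

      from azure.cosmos import CosmosClient

      client = CosmosClient("https://example-account.documents.azure.com:443/",
                            credential="<primary-key>")
      container = (client.get_database_client("frac_db")          # assumed database
                         .get_container_client("job_documents"))  # assumed container

      # Parameterized SQL over JSON items; pinning the partition key keeps the
      # query on one physical partition, which saves Request Units (RUs).
      items = container.query_items(
          query="SELECT c.id, c.basin FROM c WHERE c.category = @cat",
          parameters=[{"name": "@cat", "value": "completion"}],
          partition_key="completion",
      )
      for item in items:
          print(item)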
  • M&M Hydraulic Fracturing
    Azure Databricks, Azure Data Studio, SSAS Tabular Mode Cube, DAX, MDX, & Azure Event Hubs & IoT
    M&M Hydraulic Fracturing Feb 2017 - Present
    Azure Databricks: creation of cluster nodes (CPU, memory) used to create databases and tables in the Spark environment on the Scala runtime. Single- and multi-node clusters; main & worker node compute on the Scala runtime in the Spark environment with autoscaling options. Data Science & Engineering notebooks: working with notebooks (sessions) in Python, SQL (the Spark SQL environment), and Scala (the ML and data science environment). Uploading files from multiple data sources: Excel, Azure Data Lake, Azure Storage, etc. Creating Spark databases (collections of Parquet files) & tables in the Spark environment; auto-partitioned files are used to store big or unlimited data in DBFS. Utilizing Python notebooks for extracting, transforming, and loading data; loading small and large files to DBFS; retrieving data from DBFS (storage), the Hive warehouse, table storage, or a previous notebook; using DBFS to store files and tables. Using Spark SQL to query the data in SQL notebooks, with cells executed by the Spark job. Reading data from sources, writing it into a table, and querying the data using Python notebooks, or creating a data frame from DBFS in a Python notebook (read, load, and aggregate data), aggregating data using a temp view, and reading data in SQL using the %sql magic command. Importing & exporting IPYNB files into the notebooks. Spark notebooks used to create Spark databases, tables, temporary views, big data loads, queries, aggregations, incremental loads, and data analytics; ETL from a storage container or data lake storage, then creating a data frame, temp views, and a Spark database and table, and loading it to a SQL Server logical server and database. Data lakehouse: using features of both the data lake and the data warehouse in the lakehouse, with Delta Lake for metadata caching and indexing, transaction support, version control, time travel, etc. The Delta Engine is used to accelerate Delta Lake performance for SQL and to help improve SQL queries.
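    A sketch of the notebook pattern described above (the file path, columns, and table name are assumptions); in a Databricks notebook the spark session is predefined:

      # Read a file from DBFS, register a temp view, and aggregate it with Spark SQL.
      df = spark.read.option("header", True).csv("dbfs:/FileStore/frac_jobs.csv")
      df.createOrReplaceTempView("frac_jobs")

      by_basin = spark.sql("""
          SELECT basin, COUNT(*) AS jobs
          FROM frac_jobs
          GROUP BY basin
      """)

      # Persist as a Delta table so later notebooks (or %sql cells) can query it.
      by_basin.write.format("delta").mode("overwrite").saveAsTable("frac_jobs_by_basin")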
  • M&M Hydraulic Fracturing
    Azure Synapse, Azure Databricks, Azure Data Factory, Stream Analytics, Azure Data Lake
    M&M Hydraulic Fracturing Feb 2017 - Present
    Created data pipelines using Databricks, Data Factory, Azure Synapse, data lakes, etc. Power BI Desktop, Power BI Pro, paginated reports, DAX, row-level security, reports, dashboards, apps, pipelines, etc. Daily projects in Dev, Test, and Prod using the SQL stack: T-SQL, SSIS, SSRS, Tabular, Multidimensional, MDX, DAX, and DMX, plus NoSQL, BigQuery, Redshift, and Power Query. SQL operations: DDL, DML, DQL, etc.; stored procedures, window functions, MERGE, joins, CTEs, etc. SQL Developer: SSAS, SSIS, SSRS, T-SQL, DAX, MDX, DMX, and Azure data engineering. Created views for SELECT statements, functions for SELECT and DML statements, stored procedures for SELECT, DDL, & DML statements, and cursors for row-by-row scanning of tables. Use of the MERGE statement for DML operations. Creation of DDL and DML triggers (INSERT, UPDATE, DELETE) and table locking. Creating single, multiple, and remote joins using linked servers. Created a SQL logical server, SQL Database (DTU), firewall, & Azure Synapse columnstore and SQL pool (massively parallel processing) in the cloud using DWUs; T-SQL commands go to the control node, which uses MPP, the Data Movement Service, and compute nodes to run in parallel. Use of the different distributions in Azure Synapse: round-robin (staging tables for loads), hash-distributed, and replicated tables. Azure Data Factory: used to design data pipelines, publish to the ADF store, and trigger and schedule pipelines using automation; used to move data from a source to a destination. Created ADF entities for pipeline design: linked services (connection strings), datasets, activities (operations on the dataset), data flows (codeless ETL) with Spark clusters for data load optimization, Power Query for codeless ETL, and PolyBase for big data transfer to Synapse. Executing ADF pipelines using DIUs and Apache Spark clusters; copying data using bulk insert (lock), the COPY command, PolyBase, & upsert methods; adding DIUs for more processing units & a higher degree of copy parallelism.
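    A sketch of the MERGE-based upsert mentioned above, driven from Python with pyodbc; the server, database, table, and key names are placeholders:

      import pyodbc

      conn = pyodbc.connect(
          "DRIVER={ODBC Driver 18 for SQL Server};"
          "SERVER=example-logical-server.database.windows.net;"
          "DATABASE=frac_dw;UID=etl_user;PWD=<secret>"
      )

      merge_sql = """
      MERGE dbo.DimWell AS tgt
      USING staging.Well AS src
          ON tgt.WellId = src.WellId              -- assumed business key
      WHEN MATCHED THEN
          UPDATE SET tgt.Basin = src.Basin, tgt.Operator = src.Operator
      WHEN NOT MATCHED BY TARGET THEN
          INSERT (WellId, Basin, Operator)
          VALUES (src.WellId, src.Basin, src.Operator);
      """

      with conn:                  # pyodbc commits on clean exit from the block
          conn.execute(merge_sql)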
  • M & M Hydraulic Fracturing Research
    SQL Server Developer, Data Warehouse, SSIS, and Data Modeling
    M & M Hydraulic Fracturing Research Feb 2017 - Present
    Knowledge of data warehousing, data marts, staging areas, ETL, star schemas, snowflake schemas, database diagrams, entities, attributes, dimensions, and fact tables. Knowledge of slowly changing dimensions (Types 0, 1, 2, & 3), data lineage, traditional & columnstore indexes, audit dimensions, and table partitioning. SSAS & MDX: creating data connections, data source views, cubes, dimensions, measures, hierarchies, etc. Using MDX to query cubes with the SELECT, FROM, WHERE, HAVING, & FILTER clauses; creating column and row axes for dimensions, measures, and calculations; creating session- and query-scoped members using the CREATE MEMBER, WITH MEMBER, and WITH SET clauses; using calculations in Visual Studio to create an expression that is reusable in the cube. Creating MDX statements via dimensions, hierarchies, levels, & members, sets (UNION), tuples (cross joins), NON EMPTY, & aggregations using tuples. Creating hierarchies with the parent and children, ancestors and ascendants, descendants, first child, last child, and sibling functions; creating and joining sets and using functions. SQL Server Integration Services: working daily with the connection manager, data viewer, and data and control flow tasks. Data flow task utilization consists of aggregations, multicast, conditional splits, data conversions, row sampling, percentage sampling, data sort, union all, term extraction, backup tasks, etc.; control flow tasks consist of bulk inserts, sequence containers, execute process and package tasks, enumerators, and transfer tasks. SSAS tabular models: importing data into Power Pivot, SSMS, or a tabular model in Visual Studio to create a tabular data model; utilization of the data view, diagram view, and Power Pivot to relate data from the dimension and fact tables. Heavy utilization of the DAX logical, statistical, filter, and time intelligence functions. DAX filters: RELATED (* to 1), RELATEDTABLE (1 to *), the table functions SUMX and COUNTX, FILTER, CALCULATE, ALL, and time intelligence.
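    A minimal pandas sketch of the Type 2 slowly changing dimension handling noted above (columns and values invented for illustration): close the current row when a tracked attribute changes, then append the new version.

      import pandas as pd

      dim = pd.DataFrame({
          "well_id": [101], "operator": ["OldCo"],
          "valid_from": [pd.Timestamp("2016-01-01")],
          "valid_to": [pd.NaT], "is_current": [True],
      })
      incoming = {"well_id": 101, "operator": "NewCo"}
      now = pd.Timestamp.now()

      current = (dim["well_id"] == incoming["well_id"]) & dim["is_current"]
      if not dim.loc[current, "operator"].eq(incoming["operator"]).all():
          dim.loc[current, ["valid_to", "is_current"]] = [now, False]  # expire old row
          new_row = {**incoming, "valid_from": now, "valid_to": pd.NaT,
                     "is_current": True}
          dim = pd.concat([dim, pd.DataFrame([new_row])], ignore_index=True)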
  • M & M Hydraulic Fracturing
    Power BI Desktop & Power BI Report Builder, Git, SSAS, SSRS Paginated Reports, Power Apps & Automate
    M & M Hydraulic Fracturing Feb 2017 - Present
    High utilization of Power BI Report Server, SSRS, the Power BI Service, Power BI Desktop, DAX, and SQL Server Analysis Services Tabular. Monitoring SQL Server Analysis Services Tabular in the Dev, Test, and Production environments; deploying and processing SQL Server tabular models in all environments daily using automation, and also during weekly and monthly releases. Daily updates on Power BI reports in the Power BI Service. Worked in Power BI Pro as a creator building, designing, and maintaining reports, datasets, and dashboards; as an analyzer exploring and manipulating data to extract key insights; and as a collaborator publishing reports and apps, sharing reports, and making sure workspaces deliver data. Finally, worked to create roles, row-level security, and app and workspace environments. Publishing reports from Power BI Desktop to the Power BI cloud. Managing deployment pipelines in Power BI to update documents in the Dev, Test, and Production environments. Working with Power BI gateways to migrate on-premises data into a cloud environment. Working daily to create dataflows that connect to raw data sources, apply Power Query steps, and store data as entities; using Power BI Desktop to connect to a dataflow to create a data source and dataset; using the common data model to map tables, attributes, metadata, and relationships across the company. Daily updates to the Model.bim file using Visual Studio, Tabular Editor 2, and DAX Studio. Power BI data analysis 1: working to create key business metrics and key visuals for comparison analysis (column and bar charts), trend analysis (line and area charts), and ranking analysis. Power BI data analysis 2: working to analyze contribution data (pie charts and treemaps), figuring out the variance between items, correlating the relationship between two variables (scatter and bubble charts), and creating groupings and buckets for data analysis (column graphs and histograms).
  • M&M Hydraulic Fracturing Research
    SQL Server SSAS Multidimensional, SSAS Tabular, MDX, DAX, & Online Analytical Processing
    M&M Hydraulic Fracturing Research Feb 2017 - Present
    SSAS: using SSAS to make a data connection, create impersonations, create a data source view via metadata, create measures and measure groups, add cube dimensions from data warehouse dimensions, and drag dimension attributes into the dimensions section of the cube. Dimensions: working in the structure of each dimension, adding attributes, creating attribute relationships on hierarchies to help with the aggregation performance of the cube, creating translations for language assessments, and working with the browser option in conjunction with the name and key options. Dimension usage: defining the relationships between the dimension and measure group on the fact and dimension tables' primary and foreign keys through the regular, referenced, fact, many-to-many, and dimension options. Calculations: creating calculations and named sets to use in the cube. KPIs: using the value, goal, status, and trend to create KPIs in Visual Studio and Excel. Partitions: creating partitions based on row counts or table space, by year or any other metric, to help with aggregation performance. Perspectives: creating perspectives, similar to views in Transact-SQL. Using English translations and browser exploration to query. Using MDX to query cubes with the SELECT, FROM, WHERE, HAVING, & FILTER clauses; creating column and row axes for dimensions and measures; creating session- and query-scoped members using the CREATE MEMBER, WITH MEMBER, and WITH SET clauses. Creating MDX statements via dimensions, hierarchies, levels, & members, sets (UNION), tuples (cross joins), NON EMPTY, & aggregations using tuples. Creating hierarchies with the parent and children, ancestors and ascendants, descendants, first child, last child, and sibling functions; creating and joining sets and using functions. Data mining: using the Microsoft analysis algorithms (clustering, decision trees, linear & logistic regression, Naive Bayes, and neural networks).
  • M&M Hydraulic Fracturing Research
    SQL Developer (Transact-SQL, SSIS, SSRS, SSAS, & Database Administration), MongoDB, and Data Quality
    M&M Hydraulic Fracturing Research Feb 2017 - Present
    Utilizing Data Quality Services to connect to multiple data sources to create domains (unique values & rules), knowledge bases, and cleansing projects. Working with data deduplication, matching policies, deduplication projects, etc. Using advanced DQS to create term-based relations, composite domains, regular expressions, and domain data types. Strong utilization of DQS activity monitoring, backup and restore, and configuration tools. Using Master Data Services (MDS) in conjunction with Data Quality Services, which is used to cleanse and deduplicate data, and with Integration Services for ETL. MDS usage to standardize data definitions for key business entities, create a master data hub, and manage data governance. Using MDS for modeling entities, attributes, and hierarchies; data validation to ensure data correctness; data matching; role-based security; loading batch data through staging tables; API changes; subscription views; and notifications. Developing an MDS model, members, domain-based attributes, attribute groups, entity dependencies, data integration, and resources for data modeling. Statistical modeling using SPSS, Gretl, & SAS to run statistical analyses, mainly simple linear regression, multiple linear regression, and logistic regression on models. Performance tuning: clustered and non-clustered indexes, virtual machines, the query optimizer, statistics, tempdb, locking, blocking, & deadlocks. Using SQL Server DB admin services to monitor baseline CPU, performance counters, data collection, SQL Profiler, Extended Events, DMVs, forced plans, SQL Server Agent, page splits, and database backups & restores. SSAS data mining: creating data mining structures using the Microsoft analysis algorithms (clustering, decision trees, linear & logistic regression, Naive Bayes, and neural networks). Knowledge of data mining concepts and procedures (extraction, cleansing, data transformation, clustering, regression, classification, and data visualization).
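    The regressions described above were run in SPSS, Gretl, and SAS; an equivalent sketch in Python with statsmodels, on made-up well data, would look like this:

      import pandas as pd
      import statsmodels.formula.api as smf

      df = pd.DataFrame({
          "proppant_lbs": [2.1e6, 3.4e6, 1.8e6, 4.0e6, 2.9e6],   # illustrative values
          "stage_count":  [28, 41, 22, 50, 35],
          "ip90_boed":    [410, 640, 350, 720, 560],
      })

      # Multiple linear regression: 90-day initial production vs. design inputs.
      model = smf.ols("ip90_boed ~ proppant_lbs + stage_count", data=df).fit()
      print(model.summary())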
  • M&M Hydraulic Fracturing Research
    SQL Developer, Data Science, Python, R, Tableau, & Seaborn
    M&M Hydraulic Fracturing Research Feb 2017 - Present
    Utilizing data tools associated with the Microsoft suite: SQL Server, Azure, NoSQL, Power Query, Power Pivot, Power BI, Power View, Power Map, pivot tables, Data Analysis Expressions (DAX), Multidimensional Expressions (MDX), and DMX. MDX is utilized to query data in SQL Server Analysis Services, tabular SSAS, and Microsoft Excel, alongside DAX expressions. SQL DB admin: backup, restore, SQL Server Agent, SQL Profiler, audit logins, Performance Monitor, blocks, deadlocks, partitions, DBCC, etc. Utilizing data visualization tools like Tableau, Power BI, Spotfire, SSRS, Python (Seaborn and Matplotlib), and ggplot to create visualizations. Tableau Desktop & Public: working with dimensions and measures on discrete and continuous fields. T-SQL: working with parameters, predicates, and the query editor to create SQL code that retrieves data from tables. Utilization of constraints, CASE statements, data types, sorting and filtering, table joins and advanced joins, modifying tables, functions, aggregates, variables and batches, control flow, containers, parameters, stored procedures, etc. Advanced T-SQL querying with subqueries in the SELECT, FROM, WHERE, and HAVING clauses, correlated subqueries, derived tables, set and conditional operators, window functions, grouping sets, handling transactions, views, query plans, and indexing. Python: utilization of Jupyter notebooks for research with for and while loops, lists, slicing, tuples, functions, packages, NumPy, arrays, matrices, dictionaries, DataFrames, Pandas, scikit-learn, Seaborn, Matplotlib, and data visualizations. Strong utilization of databases like SQL Server & SQL Server Management Studio, MongoDB, MongoDB Compass, Cosmos DB, MySQL, PostgreSQL, and other relational and non-relational databases; DB admin for SQL Server only. Strong understanding of CRUD operations in SQL & non-SQL databases. Utilization of MongoDB for working with JSON and collections in the shell window and MongoDB Compass Community.
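    A quick sketch of the Seaborn/Matplotlib visualization work mentioned above, on an illustrative dataset (the column names are assumptions):

      import pandas as pd
      import seaborn as sns
      import matplotlib.pyplot as plt

      df = pd.DataFrame({
          "basin":   ["Permian", "Permian", "Eagle Ford", "Eagle Ford", "Bakken", "Bakken"],
          "stages":  [45, 52, 30, 34, 38, 42],
          "ip_boed": [700, 810, 460, 520, 590, 630],
      })

      # Scatter plus a per-basin regression fit: correlation analysis in one call.
      sns.lmplot(data=df, x="stages", y="ip_boed", hue="basin", ci=None)
      plt.title("90-day IP vs. stage count")
      plt.show()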
  • M&M Hydraulic Fracturing Research
    SQL Server Reporting Services & Data Visualization Analyst (Power BI, Spotfire, & Tableau)
    M&M Hydraulic Fracturing Research Feb 2017 - Present
    Engineering & IT professional utilizing SQL Server (Transact-SQL) to store, query, and update data and create reports via Visual Studio (SSIS, SSAS, SSRS, T-SQL, MDX, DAX, & DMX). The goal is to help clients identify data, authorize access, and administer the database. I have experience working with Structured Query Language DDL and DML, and work daily creating database schemas and multiple instances on SQL Server. Working daily with SQL Server Management Studio, SQL Server Data Tools (SSDT), SQL Server Reporting Services, and relational database management systems. Creating simple reports using the Report Wizard, data sources and shared datasets, the List data region, and mail-merge reports. SSRS: used to create tabular and matrix-style reports via the Report Wizard and via Report Builder with the Tablix and the toolbox. Utilizing image controls, built-in functions, grouping and subtotals (manually and via the wizard), running totals, built-in fields, and bound & unbound parameters. Using parameterized reports, parameterized sorting, cascading parameters, drilldown reports, report actions, matrix reports, advanced controls, charts, configuring Reporting Services, and deploying reports. Working with authentication modes (ad-hoc queries and stored procedures), implementing caching, snapshots, subreports, subscriptions, & web services. Power BI: ETL of data from various sources into Power BI Desktop for cleaning and shaping via the Power Query editor, building data models with dimension and fact tables with cardinality, filter flow, and primary and foreign keys, using DAX to create calculated columns and measures, and delivering visuals to end users. DAX: knowledge of calculated columns & measures used in Power BI; knowledge of implicit & explicit measures, filter context, and the DAX date & time, text, conditional & logical, RELATED, math & stats, count, CALCULATE, ALL, FILTER, & time intelligence functions.
  • M&M Hydraulic Fracturing Research
    SQL Business Intelligence | Petroleum Engineer | Oil Market Expert | Technical Writer | SQL Developer
    M&M Hydraulic Fracturing Research Feb 2017 - Present
    Specializing in IT programs, business intelligence, data science, & data analytics for all industry applications. Researching methods to assist the Accounting, Finance, Engineering, HR, and Operations departments through data warehouse management, database creation and maintenance, and utilization of data science programs. Expertise in oil & gas industry insights, hydraulic fracturing, oilfield completions, energy market technicals & fundamentals, and (knowledge of) DAT & SONAR trucking & shipping market statistics. Energy specialist using statistical and regression analysis on data analytics platforms like Tableau, Power BI Pro & Power BI Desktop, RStudio, Python (IDLE), PyCharm, & TIBCO Spotfire X to understand trends and correlations across multiple data sources. Creating databases to store valuable data from fracture spreadsheets to create illustrative graphs and charts that identify trends in various basins. Also using business intelligence tools to clean data, extract, transform, & load government data, and combine multiple data sources from different departments into relational databases for management teams.
    Python platforms: IDLE, PyCharm, Anaconda, NumPy, Jupyter, Matplotlib, Python generators, Seaborn & Tkinter
    R: RStudio & ggplot
    Business intelligence languages: DAX, MDX, & M language
    SQL: SQL Server Management Studio, SQL Server Data Tools, the SSIS toolbox, and SQL Server Integration, Reporting, Admin, and Analysis Services
    SQL Server Developer
    ETL & ELT: Informatica, Snowflake, SSAS, & S3 buckets
    Microsoft Azure, Microsoft Dynamics, MySQL & Teradata
    Business intelligence tools: Power BI, Tableau, Looker, & Spotfire
    Visual Studio: Business Intelligence
    Mapping: Google Maps, ArcGIS, & Mapbox
    Excel: advanced Excel, Power Pivot, Power Query, Power View, pivot tables & charts, Excel data analysis, & Excel regression analysis
    Python: for and while loops, lists, tuples, dictionaries, arrays, matrices, data frames
  • M&M Hydraulic Fracturing Research
    Advanced Tableau, R, MongoDB, MySQL, PostgreSQL, Qlik Sense, Docker, SPSS, & Machine Learning
    M&M Hydraulic Fracturing Research Feb 2017 - Present
    R & advanced R: working with variables, operators, for & while loops, if statements, vectorized operations, functions, and packages in R. Designing matrices & data frames using rbind, cbind, matrix, named dimensions, named vectors, column and row names, matrix operations, and merging and filtering data frames. Using R to visualize data with matplot, ggplot, and qplot. Building advanced visualizations in ggplot using data, aesthetics, geometries, statistics, facets, coordinates, and themes.
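    The layered-visualization work above was done in R's ggplot; the same grammar-of-graphics idea, sketched in Python with plotnine on invented data, stacks the identical layers:

      import pandas as pd
      from plotnine import ggplot, aes, geom_point, stat_smooth, facet_wrap, theme_minimal

      df = pd.DataFrame({
          "stages":  [20, 30, 40, 25, 35, 45],
          "ip_boed": [350, 480, 610, 400, 520, 660],
          "basin":   ["Permian"] * 3 + ["Bakken"] * 3,
      })

      plot = (
          ggplot(df, aes(x="stages", y="ip_boed"))  # data + aesthetics
          + geom_point()                            # geometry
          + stat_smooth(method="lm")                # statistics layer
          + facet_wrap("~basin")                    # facets
          + theme_minimal()                         # theme
      )
      plot.save("stages_vs_ip.png")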
  • JJ Tech Inc
    Data Engineer
    JJ Tech Inc Oct 2022 - Aug 2024
  • Vidhwan
    SQL Server Analysis Services: Tabular Semantic Model, SQL, DAX, R, & Power BI & Excel
    Vidhwan May 2022 - Oct 2022
    Managing a tabular cube semantic model for faster query execution against a SQL database. The tabular model work consists of managing the data model metadata, including calculated columns, measures, hierarchies, relationships, perspectives, calculated tables, and KPIs, across over 100 healthcare tables, plus creating and managing partitions in the data model. Tabular-mode row-level storage used to design a virtual OLAP cube bringing in data from multiple data sources, with the data queried via DAX or MDX in SSMS in browse mode. Testing and validating DAX queries against SQL queries, comparing row counts in SQL vs. Excel, and deploying and processing the tabular models. Deploying metadata and processing data from Visual Studio to the Dev, SIT, & Production servers on-premises. Transferring data from the SQL Server database to the tabular model using processing options on tables, databases, & partitions; full utilization of the processing options: Process Full, Process Default, Process Clear, Process Recalc, Process Data, Process Defrag, & Process Add. Adding data to the models, rebuilding dictionaries, & rebuilding derived structures (calculated columns, tables, & hierarchies).
  • Insight Global
    SQL Developer | Power BI Developer | T-SQL, SSIS, SSRS, MDX, DMX, DAX, Azure
    Insight Global Jun 2021 - Mar 2022
    Atlanta, Georgia, US
    Worked as a software engineer in a consulting capacity, managing changes in the Dev, Test, UAT, & Production environments for SSAS Multidimensional, DAX Studio, Tabular, SSRS, SSIS, T-SQL, Excel, and Power BI Desktop & Power BI Pro.
  • Primerica
    Sales Analyst
    Primerica Aug 2019 - May 2021
    Duluth, Georgia, US
    I used oil and gas data analytics and data repositories, mainly completion, production, and permitting activity, to locate areas with low activity and high unemployment rates and find candidates. Our LinkedIn group also provided reports and insights on oil and gas market activity. I worked with Primerica to help stranded oil and gas workers find alternative sources of income, learn about finance, income replacement for families, and investments, and obtain professional certifications in the financial industry.
  • FreightWaves
    Senior Oil & Gas Market Analyst; SONAR Platform Research
    FreightWaves Jul 2018 - Dec 2018
    Chattanooga, Tennessee, US
    Analyzing the trucking market using the SONAR platform on a daily basis. Oil and gas market analysis, energy futures analysis, energy technical market analysis, trucking metrics, economic analysis, oil & gas software design, etc. Daily market analysis on a third-party platform covering technicals, fundamentals, permitting, drilling, completion, and production in the Lower 48; monitoring monthly changes in completion design trends in major basins. Data science & data analysis: integration of R (a vector-oriented language), SQL Server 2017, Power BI, M, DAX, Tableau, Python (an object-oriented language), and other languages significant to the industry; worked daily with the data science team studying segments of the economy. FreightWaves market expert following and contributing to oil & gas. Took classes on business writing to transition from technical training writing to formal structured writing.
  • C&J Energy Services
    Engineering Support Specialist & Business Intelligence
    C&J Energy Services Jun 2012 - Oct 2017
    Houston, Texas, US
  • C&J Energy Services
    District Engineer, Hydraulic Fracturing, and Acidizing
    C&J Energy Services Dec 2010 - Jun 2012
    Houston, Texas, US
  • BJ Services
    Field Application Engineer
    BJ Services Dec 2006 - Dec 2010
    The Woodlands, Texas, US
  • Precision Drilling
    Field Service Engineer
    Precision Drilling Jul 2003 - Dec 2006
    Calgary, AB, CA

Michael Mitchell Skills

Petroleum Engineering, Microsoft Excel, Business Intelligence, Data Visualization, Petroleum, Gas, Oil and Gas, Energy, Project Engineering, Project Management, Business Analytics, Data Analysis, Data Analytics, Business Modeling, Financial Modeling, Financial Statements, Technical Analysis, Experienced Business Analyst, Commodity Markets, Statistics, Thomson Reuters Eikon, Microeconomics, Macroeconomics, Energy Derivatives, Equity Derivatives, FX Derivatives, Financial Futures, Derivatives Trading, Investments, Options, Corporate Finance, Predictive Analytics, Statistical Data Analysis, Bloomberg Terminal, Portfolio Management, Frac Pro, Meyers, Intermarket Analysis, Decision Analytics, Energy Modeling and Valuation, Navport Data Analytics, Platts, Rigdata, Thomson Reuters Datastream, Completion Engineer, Technical Writing, Risk Management, Demand Forecasting, Economic Forecasting, Data Stream, Market Intelligence, Data Modeling, Data Mining, Data Warehousing, Pivot Tables, Microsoft SQL Server, Microsoft Power BI, SQL, R, Microsoft Project, Microsoft Access, Tableau, Microsoft Visio, Microsoft PowerPoint, JSON, SQL Server Reporting Services, SQL Server Integration Services, Transact-SQL, Microsoft Azure, SQL Server Management Studio, Visual Studio, Python, Microsoft Power Query, C#, MySQL, NoSQL, PostgreSQL, Looker, Microsoft Predictive Analytics, Microsoft Excel Data Analysis, Microsoft Statistical Analysis, Microsoft Regression Analysis, ArcGIS Explorer, SQL Server Analysis Services, Spotfire, DAX, Microsoft Power Pivot, Cluster Analysis, Data Science, M Language, SQL Data Admin and SQL Tuning, Alteryx, Multidimensional Expressions, Online Transaction Processing, OLAP Cube Studio, SPSS

Michael Mitchell Education Details

  • University of Louisiana at Lafayette
    Petroleum Engineering
  • SQL School (SQL Server Dev, DBA, MSBI & Power BI) Training Institute
  • Noble Work Foundation
    Data Engineering
  • Codecademy
    Business Intelligence
  • Intellipaat
    Information Technology Project Management
  • Pluralsight
    Information Technology

Frequently Asked Questions about Michael Mitchell

What company does Michael Mitchell work for?

Michael Mitchell works for JJ Tech Inc.

What is Michael Mitchell's role at the current company?

Michael Mitchell's current role is AWS Developer.

What is Michael Mitchell's email address?

Michael Mitchell's email address is mi****@****ica.com

What schools did Michael Mitchell attend?

Michael Mitchell attended the University of Louisiana at Lafayette, SQL School (SQL Server Dev, DBA, MSBI & Power BI) Training Institute, Noble Work Foundation, Codecademy, Intellipaat, and Pluralsight.

What skills is Michael Mitchell known for?

Michael Mitchell has skills like Petroleum Engineering, Microsoft Excel, Business Intelligence, Data Visualization, Petroleum, Gas, Oil and Gas, Energy, Project Engineering, Project Management, Business Analytics, and Data Analysis.
