Rizwan Mian, PhD Email and Phone Number
Rizwan Mian, PhD work email
Rizwan Mian, PhD personal email
Rizwan Mian, PhD phone numbers
CtoC from own corp, EST, hybrid with some flexibility to travel.

Generative AI has taken the world by storm, and the Azure cloud is making it available to industry. With architectural expertise in both, I solve business use cases on the topic, making a material difference. My vertical experience lies in Healthcare, Finance, Insurance, and Telco. I build advanced analytics systems that optimize the top and/or bottom line of a business. I have added value to the business and the community, as shown by my LinkedIn recommendations.

In support of my candidacy, and to de-risk any concerns, I list my public and independently assessed data points:
- about 25 public references
- many cloud and data certificates from different vendors, including Microsoft, Amazon, and Coursera
- many public GitHub projects
- many LinkedIn skill tests
- PhD in Cloud Computing

Specialties:
- Business Analytics
- Data Quality
- Coder
- Retail investing
- Risk handling
- Holistic financial management
- Digital Marketing
- Entrepreneurial
- Resourceful
- Open Source
- Analytical
- Strategist
Old World Industries
- Website: oldworldind.com
- Employees: 386
Lead Gen and AI Agentic Architect and Developer | Azure and Solutions Architect
Old World Industries, New York, NY, US
Gen and AI Agentic Architect and Developer | Azure and Solutions Architect
BMO, New York, NY, US
Gen / AI Architect | Azure / Solutions Architect (Contract)
Agilisium Consulting, May 2024 - Present, Los Angeles, CA, US
Resident Gen AI and Azure/Solutions Architect at our key healthcare client, Abbott, in Chicago. Integral member of the Enterprise Architecture team, with visibility into about 80 Gen/AI use cases.
- Developed the Enterprise Architecture (EA) Point of View (PoV) on Web/M365 Copilots and the Azure AI ecosystem.
- Built a decision tree and rules of thumb to map use cases to the relevant Copilot and/or Azure AI service.
- Co-authored EA standards and checklists for Gen AI when promoting applications from PoC to pilot.
- Co-built a buy/build selection matrix: a weighted analytical model that makes recommendations with a confidence score (a sketch follows this entry).
- Defined selection criteria for choosing among Gen AI techniques: prompt engineering, RAG, fine-tuning, and full training.
- Key part of pre-sales, showcasing our solutions to clients: live demos, walkthroughs, and Q&A on our products, plus technical assistance. Re-architecting our AWS solutions for the Azure cloud.
- Member of the steering committee driving our generic and life-science-specific solution development.
- Co-supervising development of ML models to predict kidney graft loss and hospital readmissions, aiming to publish the work at a peer-reviewed venue. Co-supervising the fine-tuning and development of our life-science-specific LLMs.
Gen AI models: Azure OpenAI, AWS Bedrock, Hugging Face, agentic frameworks, Copilots (M365, Studio), tuned LLMs.
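A weighted buy/build matrix of this kind reduces to a few lines of Python. The sketch below is illustrative only: the criteria, weights, and the gap-based confidence score are hypothetical stand-ins, not the client's model.

```python
# Minimal buy/build selection sketch; criteria and weights are hypothetical.
CRITERIA_WEIGHTS = {          # weights must sum to 1.0
    "time_to_market": 0.35,
    "total_cost": 0.25,
    "customization": 0.25,
    "vendor_lock_in": 0.15,
}

def recommend(buy_scores: dict, build_scores: dict) -> tuple[str, float]:
    """Return (recommendation, confidence) from per-criterion scores in [0, 1]."""
    buy = sum(w * buy_scores[c] for c, w in CRITERIA_WEIGHTS.items())
    build = sum(w * build_scores[c] for c, w in CRITERIA_WEIGHTS.items())
    confidence = abs(buy - build) / max(buy, build)   # how decisively one side wins
    return ("buy" if buy >= build else "build"), round(confidence, 2)

print(recommend(
    {"time_to_market": 0.9, "total_cost": 0.5, "customization": 0.4, "vendor_lock_in": 0.3},
    {"time_to_market": 0.3, "total_cost": 0.6, "customization": 0.9, "vendor_lock_in": 0.8},
))
```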
Cloud Data Architect | Data Scientist | Analytics Engineer | Data Quality Developer
Data++, Aug 2008 - Present
Top 5 coding / advanced analytical systems, ordered by dollar impact. At an international/major:
1. Bank in Manhattan: built an analytical validator for the daily credit posts made by credit agencies such as Equifax. It is coded in Java and deployed over Hadoop with the Oozie orchestrator. The analyzer quantifies daily data quality as a single number (a sketch of the idea follows this entry) and forecasts the quality of future posts, allowing early fixes and avoiding penalties due to bad quality.
2. Manufacturer in Chicago: built a prediction service using AzureML to forecast the failure of heavy machinery. With this REST service, the client optimizes warranty offerings and schedules proactive maintenance. Also linked on-prem MSSQL with publicly available permit data; this linkage flags under-equipped machinery in the field, an upgrade candidate and hence revenue.
3. Insurer in Canada: built ML models to classify duplicate records and record types and to score similarities. Coded in Python with NumPy, NLTK, and scikit-learn. The models are deployed in Docker containers; the workflows and CI/CD are orchestrated by Airflow and Jenkins, respectively.
4. Startup in the UK: built a code analyzer (coco) in Perl to quantify product quality and optimize release cycles. Coco profiles code to emit footprints under regression tests; the results are fed into MySQL for data mining and spotting failure patterns. Coco is scaled with a variant of NASA's network queuing system, using the idle capacity of workstations. The profiled code and coverage results are displayed inline in web pages. Also optimized the number and type of tests for code quality, using statistical methods to estimate confidence levels and intervals.
5. Telco in Canada: created KPIs to summarize data ingest/consume workloads with Python on Hadoop. Wireframed and led development of a MicroStrategy dashboard. Also created a filtering engine in Python to classify true/false alerts in Oozie workflows: with Visual Basic and Python respectively, the alert corpora are copied to MySQL and mined with time series to build the engine. Saves 100 DevOps days per year.
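Condensing daily data quality into a single number can be illustrated with a short pandas sketch. This is a generic illustration under assumed rules (completeness and range validity, equally weighted), not the Java/Hadoop validator described above; the column names and valid ranges are invented.

```python
import pandas as pd

def quality_score(df: pd.DataFrame, valid_ranges: dict) -> float:
    """Blend completeness and range validity into one score in [0, 1].

    valid_ranges maps column -> (low, high); equal weighting of the two
    rules is an assumption made for this sketch.
    """
    completeness = 1.0 - df.isna().mean().mean()    # share of non-null cells
    validity = sum(
        df[col].between(lo, hi).mean()              # share of in-range values
        for col, (lo, hi) in valid_ranges.items()
    ) / len(valid_ranges)
    return round(0.5 * completeness + 0.5 * validity, 4)

posts = pd.DataFrame({"score": [710, 640, None, 1200], "util": [0.3, 0.9, 0.4, 1.7]})
print(quality_score(posts, {"score": (300, 850), "util": (0.0, 1.0)}))  # 0.75
```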
Adjunct Professor
Seneca College, Aug 2021 - Present, Toronto, Ontario, CA
Cloud Computing, Databases
Gen AI | AI Architect | Azure/Solutions Architect | MLOps (Contract)
UnitedHealth Group, Jun 2023 - Apr 2024, US
Lead AI & Azure Architect in the Medicare & Retirement AI team, working with Gen AI / LLM models:
- Live Call Assist: in real time, augment the attendant's dashboard with the caller's information using AI/GenAI: intent prediction, next best action, infobots, sentiment, conversation summary with bullet points, journey summary, relevant job aids, knowledge documents, etc. (a summarization sketch follows this entry).
- Smart Routing: mine intelligence from the mix of the callers' Interactive Voice Response (IVR) inputs and audio instructions to augment the caller profile: geolocation, call intent, personalized messages and offers, speaker recognition from audio, automatic data population, etc.
- Personalized Web: similar to the Amazon shopping experience, interactively personalize UHG health plans for prospects visiting UHG websites and portals, using information inferred from user lifestyle, customer segmentation, and behaviour observed on previous visits.
- Competitor Intelligence: scrape UHG's competitors' plans from their websites and PDFs, mine the package data, and compare with UHG offerings.
- Proposed a unified architecture on Azure to support all the use cases. For each use case, estimated workload volumes and specified configurations to support the operations and non-functional requirements (NFRs). Validated the architecture and configurations against Azure's Well-Architected Framework.
- Defined MLOps for pre-trained and self-trained ML models; the pre-trained models leverage Azure Cognitive Services and GenAI models.
- Built a UHG-specific dollar-cost model for the future pricing of our use cases. Optimized costs by introducing the most cost-relevant services.
- Secured Azure subscriptions for development and defined appropriate Azure roles and accesses. Also secured the production environments; this went through rigorous scrutiny, including bringing our dev usage into compliance with enterprise firewall requirements.
- Seeded and oversaw infrastructure-as-code (IaC) and CI/CD, in Terraform and GitHub Actions respectively, for the use cases.
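A call-summary step like the one in Live Call Assist can be sketched against the Azure OpenAI chat API. This is a hedged illustration, not the UHG implementation: it assumes the v1 `openai` Python SDK, and the endpoint, deployment name, and prompt are placeholders.

```python
import os
from openai import AzureOpenAI  # assumes the v1 `openai` SDK is installed

# Endpoint, key, and deployment name are placeholders supplied via environment.
client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

def summarize_call(transcript: str) -> str:
    """Return a bullet-point summary and a one-line intent for a transcript."""
    response = client.chat.completions.create(
        model="gpt-4o-call-assist",  # hypothetical deployment name
        messages=[
            {"role": "system",
             "content": "Summarize the member call in 3-5 bullet points, "
                        "then state the caller's intent in one line."},
            {"role": "user", "content": transcript},
        ],
        temperature=0.2,
    )
    return response.choices[0].message.content
```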
Sr. Solutions Architect | Data Architect | Analytics Architect | Azure SME | MLOps (Contract)
TD, Feb 2022 - Apr 2023, Toronto, Ontario, CA
TD/AMCB is among the top 10 banks in North America and the 23rd-largest bank in the world [S&P Global, 2021-12-08].
- Member of the Enterprise Architecture (EA) and Data-as-a-Service (DaaS) teams.
- Defined MLOps over the Azure cloud and Databricks ecosystem, covering all stages of ML development from data sourcing to model deployment, and operations from promoting models to detecting drift and automated retraining (a drift-detection sketch follows this entry). Investigated effective ML model-management platforms, interpretability, and monitoring tools in production.
- Defined optimal access, balancing maximizing data exposure for ML against minimizing data disclosure for compliance and privacy needs.
- Developed the enterprise AI strategy: minimizing the time-to-market KPI for ML use cases and advocating a hierarchy of pre-trained/existing models to optimize that KPI. The hierarchy roughly maps to Azure Cognitive Services, Azure Marketplace, AutoML, and self-training methodologies.
- Equally importantly, co-led the Advanced Analytics front in the Rahona program, a multi-year initiative to migrate Hadoop datastores and workloads to the Azure cloud. Authored the Architecture Blueprint (ABP) to migrate an ML team to the Azure cloud.
- Defined the Auto AI/ML capabilities post-Rahona and automated the operations based on the Monitor, Analyze, Plan and Execute (MAPE) loop. Assessed a combination of tools to provide a comprehensive AI model-management platform in Rahona. Mapped R capabilities to the Azure/Databricks-native ML services to facilitate migration to the cloud.
- Coded an interactive knowledge base with a chatbot; the content is sourced by mining the corpora of architectural documentation. The end state includes ChatGPT.
- Solutioning over the Azure cloud.
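Drift detection of the kind mentioned above is often done with a Population Stability Index (PSI). The snippet below is a generic NumPy sketch of that metric, not TD's implementation, and the 0.2 alert threshold is a common rule of thumb rather than a sourced value.

```python
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between a baseline and a live sample.

    Bin edges come from the baseline; a small epsilon avoids log(0).
    """
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf          # cover the full real line
    e_frac = np.histogram(expected, edges)[0] / len(expected)
    a_frac = np.histogram(actual, edges)[0] / len(actual)
    eps = 1e-6
    return float(np.sum((a_frac - e_frac) * np.log((a_frac + eps) / (e_frac + eps))))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)
live = rng.normal(0.3, 1.2, 10_000)                # a drifted distribution
score = psi(baseline, live)
print(f"PSI = {score:.3f}", "-> retrain" if score > 0.2 else "-> stable")
```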
Azure Data Architect | Data Quality Design Lead | Data Engineer | DevOps (Contract)
CIBC, Nov 2020 - Jan 2022, Toronto, Ontario, CA
The Canadian Imperial Bank of Commerce is one of the "Big Five" banks in Canada.
- Hands-on data quality design and team lead for the data pipelines in the Azure cloud. Data quality is a regulatory requirement for some Lines of Business (LoBs); for others, bad data leads to revenue loss and/or missed opportunities, because issues are fixed reactively and the processes are manual.
- Worked in concert with other platform leads to build a cohesive platform. Pitched our offerings to the VPs and the core businesses.
- Coded a quality-assessment service and a portable data library that plugs into data-engineering artifacts. The crown jewel of our analytical work is the numeric data quality scores computed as the data is ingested, plus the domain-specific quality checks desired by LoBs at consumption time. The checks include SLAs, data profiling, and business validation; non-conforming content is flagged for later review.
- Acting Product Owner and interim Scrum Master. Mapped the Scrum stories to actionable tasks and often single-handedly coded the work in our lean team.
- Combed the tools, solutions, and studies on data quality to drive the design. Architected and designed quality assessment in both the ingestion and consumption functions using the open-source AWS Deequ library (a PyDeequ-style sketch follows this entry).
- Separately, undertook a PoC to develop and test Spark code offline and then deploy it to the Azure Databricks cluster when ready. The work is a hybrid of DevOps and software development.
- Provided weekly team status to the wider group and volunteered weekly progress to the executives. Knowledge sharing and peer development via Confluence articles and information sessions. Recruited two data engineers. 100% remote work.
Keywords: Azure, Databricks, Deequ, Data Architect, Hadoop-to-Azure migration, Hands-On, Coding, Scala, PySpark.
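Deequ-style checks of the sort described here look roughly like the following snippet, patterned on the PyDeequ documentation. It assumes a running SparkSession with the Deequ jars on the classpath and `SPARK_VERSION` set; the column names and thresholds are hypothetical, and the actual CIBC service is not reproduced here.

```python
# Generic PyDeequ sketch; column names and thresholds are hypothetical.
# Assumes a SparkSession built with the Deequ jars, per the PyDeequ docs.
from pydeequ.checks import Check, CheckLevel
from pydeequ.verification import VerificationSuite, VerificationResult

def run_ingestion_checks(spark, df):
    check = Check(spark, CheckLevel.Error, "ingestion checks")
    result = (
        VerificationSuite(spark)
        .onData(df)
        .addCheck(
            check.hasSize(lambda n: n > 0)    # SLA: the daily feed arrived
            .isComplete("account_id")         # no nulls in the key column
            .isUnique("account_id")           # no duplicate keys
            .isNonNegative("balance")         # business validation
        )
        .run()
    )
    return VerificationResult.checkResultsAsDataFrame(spark, result)
```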
Senior Data Engineer | Data Scientist
DXC Technology, Mar 2020 - Nov 2020, Ashburn, Virginia, US
Virtual Clarity, a DXC Technology company, is a niche IT-as-a-Service firm that evaluates and transforms legacy and enterprise IT into a cloud-powered platform and infrastructure. Working at the confluence of data mining, cloud, and AI.
- Developed the methodology and built the architecture from the ground up, building individual components and then stringing them together in the AWS cloud.
- Existing inventories and configurations of applications and systems get out of sync over time. Reverse-engineered the configuration management database (CMDB): data mining and preprocessing of machine data to identify software and hardware in different data centers, and their interdependencies, using an AWS Jupyter server and SageMaker. This enables effort and cost estimates. Created a Python module to populate results into an Excel template.
- Classified business-powering vs. housekeeping software based on filters, descriptions, and frequency.
- Collapsed software-title clones (a data quality problem) into groups of manageable size using clustering and classification. Insourced the open-source csvdedup ML library and trained a classification model for the job. The groups are then mapped to their AWS or Azure cloud equivalents, which facilitates development of the migration roadmap.
- Extended the open-source pcap-converter project to capture TCP conversations from a binary pcap file and map them to their respective textual netflows.
- Feasibility study of a hybrid survey combining Excel and LimeSurvey.
Sr. Data Scientist | AI Developer
Co-operators, Nov 2018 - Jan 2020, Guelph, ON, CA
Project: onboarding a new acquisition (about half a million records) into a master database of over six million. Besides typical ETL, this includes consolidation, classification, and deduplication with AI.
- Implemented the ETL work and business logic in Python, as they integrate well with the Pythonic scikit-learn libraries.
- As the AI SME, led and took end-to-end technical ownership of the solution throughout the project lifecycle.
- Architected the data pipeline to ingest daily client data, process it in the Python framework, and report duplicates in a BI-friendly CSV format. Compared against the Oracle database system to identify duplicates. The workflow was orchestrated with the Airflow engine.
- Beyond primitive string comparison, the logic identifies duplicate records (a data quality problem) with supervised learning that tolerates missing or misspelt fields. The data arrives as nested CSV files and data structures.
- Instead of coding all the permutations, developed an AI classification model to identify individual, joint, and company records; they merit different business actions, so the distinction is important.
- Collapsed variants of a record into one for uploading, allowing minor differences in names (e.g. John vs. Jon) using Natural Language Processing (NLP) and string-distance methods while flagging substantial differences (a string-distance sketch follows this entry).
- Feature engineering to distinguish rural from urban addresses and to generate similarity scores between addresses and between email addresses. Leveraged the pandas and NumPy libraries to write high-level, clean, and readable code.
- Wrote production-quality code using test-driven development with unit and integration tests. Continuous testing and integration with Jenkins. Used the integrated Bitbucket for Git, code review, merging, and Jira tickets.
- Standard agile and DevOps: Scrum/sprints, unit and integration testing, CI/CD. High-quality quick-starts and manuals in Confluence with checklists, tables, graphs, and figures.
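String-distance matching of the "John vs. Jon" kind can be sketched with Python's standard library alone; the thresholds below are illustrative guesses, not the production values.

```python
from difflib import SequenceMatcher

def name_similarity(a: str, b: str) -> float:
    """Normalized similarity in [0, 1] between two names."""
    return SequenceMatcher(None, a.strip().lower(), b.strip().lower()).ratio()

def classify_pair(a: str, b: str, dup_at: float = 0.85, review_at: float = 0.6) -> str:
    """Label a record pair; thresholds are illustrative, not production values."""
    score = name_similarity(a, b)
    if score >= dup_at:
        return "duplicate"
    return "review" if score >= review_at else "distinct"

print(classify_pair("John Smith", "Jon Smith"))      # duplicate
print(classify_pair("John Smith", "Joanna Smythe"))  # review or distinct
```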
Cloud Engineer | Hadoop Developer
Loblaw Companies Limited, Sep 2018 - Oct 2018, Brampton, Ontario, CA
- Brought in at the last stage of a high-risk, high-value project. Bootstrapped the architecture to define the basic building blocks, then dived straight into coding.
- Automated the spin-up and configuration of HDInsight Spark, Hive, and Hadoop clusters in the Azure cloud, and the mounting of data archives, to support on-demand data analytics and data science.
- Showcased demos to users and used feedback to direct further effort. 50/50 split between advisory and delivery: transforming thoughts into doable Scrum tasks with acceptance criteria.
Financial Modeler | Quantitative Developer
InvestNow (Stealth), Jan 2018 - Oct 2018
Modeled "financial freedom" as a constrained optimization problem with Linear Programming (LP), familiar work from my PhD. The key sub-models are cash inflow, cash outflow, and risk management. The fundamental idea is to maximize inflow, minimize outflow, and hedge any foreseeable risks, managed as constraints (a small LP sketch follows this entry). The foundational step is to establish the current financial state, where my earlier work (www.budgetnow.ca) and experience played a pivotal role. I extrapolate the current state to establish my goal state of "freedom", which is inflation-adjusted. The tools used are Excel, IBM CPLEX, Python, PHP, MySQL, and HTML.
Iteratively, I expand the inflow streams with multiple income sources beyond the primary earnings, including stock investing. Using Python, I have automated stock sales and purchases with algorithmic trading and exchange APIs. I implement standard and novel algorithms that are back-tested against stock performance on the NYSE. The trading kernel is based on a rule-based engine implemented using Azure Databricks; the data and results are stored in Azure Synapse and visualized using Power BI. All of this coding work is commonly known as quantitative development.
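The maximize-inflow/minimize-outflow formulation can be sketched with SciPy's linprog in place of CPLEX. The income streams, caps, and time budget below are invented for illustration, not the actual model.

```python
from scipy.optimize import linprog

# Decision variables: hours/month in salaried work (x0) and in a side
# business (x1). Rates, caps, and the time budget are invented values.
rates = [60.0, 45.0]                 # net inflow per hour for each stream
c = [-r for r in rates]              # linprog minimizes, so negate to maximize

A_ub = [[1, 1]]                      # total hours cannot exceed the time budget
b_ub = [200]
bounds = [(0, 160),                  # risk constraint: cap each single stream
          (0, 80)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.x, -res.fun)               # optimal hours per stream, max monthly inflow
```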
Data Engineer | Data Scientist | Hadoop Developer
Freedom Mobile, Apr 2018 - Aug 2018
- Quick-start architecture for a data pipeline: extracting data from Teradata, building graphs, and executing SNA algorithms to generate KPIs, everything in the Hadoop cluster.
- Social Network Analysis (SNA): SNA identifies communities, and the influencers in them, for use cases such as targeted advertisement, support priority, and trending opinions. The challenging part is identifying "useful" communities, essentially a clustering problem in ML; optimal community detection is NP-complete in general. Mapped SNA research terms to business KPIs.
- Rapid SNA prototyping in Python/Jupyter notebooks with NetworkX (a minimal sketch follows this entry), then community detection at scale in Hadoop. This includes monthly processing of the last 90 days of call logs, about 100 million records (5 GB). PySpark and GraphX are used in an HWX cluster; the KPIs are stored in a Hive database. Explored Databricks as an alternative.
- Other use cases: first-hand exposure to churn prediction, add-a-line likelihood, and item-specific sales forecasting at the store level.
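Community detection of this kind prototypes in a few lines of NetworkX; the toy call graph below stands in for the Teradata call logs, and the degree-based influencer rule is one simple heuristic among many.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Toy call graph: an edge means two subscribers called each other.
calls = [("a", "b"), ("b", "c"), ("a", "c"),   # one community
         ("d", "e"), ("e", "f"), ("d", "f"),   # another community
         ("c", "d")]                            # weak bridge between them
G = nx.Graph(calls)

communities = greedy_modularity_communities(G)
for i, members in enumerate(communities):
    # Influencer heuristic: highest-degree subscriber within the community.
    influencer = max(members, key=G.degree)
    print(f"community {i}: {sorted(members)}, influencer: {influencer}")
```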
Data Engineer | Big Data Specialist
Bell/Data++, Mar 2016 - Jan 2018, Montreal, Quebec, CA
- Filtering engine: developed a disruptive filtering engine to slash false positives in application alerts. The status of an alert, without inspection, is unknown; when inspected, about 70% of the alerts need no attention (false positives) but burn valuable DevOps cycles, two days in a week. Built an analytical model based on heuristics to adjust the threshold values for filtering (a threshold sketch follows this entry).
- KPI development: quantified the Hadoop ecosystem by generating ingestion and consumption KPIs. They summarize the functioning of data ingestion and its usage by end users; for example, they include the average response time of a report. These KPIs are developed in-house and are not available out of the box.
- Performance baselines: established baselines for both the applications and the system. The application baselines quantify and standardize the "performance" of applications across releases and months, while the system baselines quantify the "performance" of the Hadoop ecosystem.
- Upgrades: developed sanity validation tests for the applications to ensure no impact, and upgraded the clusters in dev, pre-prod, and prod. Co-led node migration from pre-prod to prod.
- Load balancers: set up and validated F5 and HAProxy load balancers in dev, pre-prod, and prod with Kerberos.
- Use-case development: supported DevOps in implementing use cases, and implemented them when required.
- Peer education: new-hire quick-starts; co-mentored multiple new hires; developed a co-learning environment using a wiki.
- Data dictionary: annotated application data with "business" metadata, e.g. peak call rate, to enrich it with context for non-Hadoop specialists such as managers and analysts.
- GitLab: set up GitLab access for the entire team, quick-starts, showcases, and team activity.
- Support: on rotational support; switched with peers when needed.
Major development tools: Cloudera, Hadoop, HDFS, Impala, Sqoop, Kerberos, MySQL, Python, Perl, Microsoft Excel, IntelliJ, GitLab, Linux, Windows.
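A heuristic threshold filter of this kind can be sketched as follows. The alert fields and the percentile rule are assumptions for illustration, not the Bell engine; the 0.7 default echoes the observed 70% false-positive share mentioned above.

```python
# Heuristic alert-filter sketch; fields and the percentile rule are assumptions.
def fit_threshold(history: list[dict], quantile: float = 0.7) -> float:
    """Pick a severity threshold so that roughly `quantile` of past alerts
    (the observed false-positive share) fall below it."""
    severities = sorted(a["severity"] for a in history)
    return severities[int(quantile * (len(severities) - 1))]

def filter_alerts(alerts: list[dict], threshold: float) -> list[dict]:
    """Keep only alerts that exceed the learned severity threshold."""
    return [a for a in alerts if a["severity"] > threshold]

history = [{"severity": s} for s in (1, 2, 2, 3, 3, 3, 5, 7, 8, 9)]
t = fit_threshold(history)
print(t, filter_alerts([{"severity": 4}, {"severity": 9}], t))
```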
Systems Data Architect | Hadoop Consultant
Hortonworks, Jul 2014 - Feb 2016, Santa Clara, California, US
- As a trusted advisor, built relationships and provided technical guidance, vision, and leadership to business stakeholders based on the latest industry trends and technology platforms. Acted as the interface to a third-party cloud, Azure, and managed partnerships as part of strategic alliances and project implementation. Engaged clients and partners to gather requirements, coordinated partners for compliance with those requirements, and designed solution architectures based on them.
- Resident Architect at JPMorgan Chase in New York City: architected and built a centralized data-governance platform using Hadoop to support multiple applications. Augmented the Hadoop ecosystem with custom engineering and data quality work. Participant in the consortium of the Data Governance Initiative; co-developed the roadmap.
- Consultant Architect at a mid-sized financial client, Symcor: facilitated installation and configuration of the Hortonworks Data Platform (HDP). Led ETL from the client's data sources, developed Hive schemas, wrote Pig scripts, created HBase Java clients, and generated reports presented in Excel and Jasper.
- Consultant Architect at a major Canadian telco, Rogers: mentored the Kerberization of development and production clusters. Facilitated productionizing of workloads, hands-on where needed. Enabled the business team to query data in the Hadoop cluster, which provided exposure to higher management and showed the business value of Hadoop.
- Set up Linux HDInsight in Azure for an industrial client, Komatsu. Ingested on-site MSSQL Server data into HDInsight. Architected and co-developed a prediction model to forecast the failure of machine components given historical warranty claims; with this model, the client intends to optimize its warranty offerings. Separately, co-developed an application that links client data with publicly available permit data and identifies under-equipped machinery in the field, a candidate for upgrade and hence revenue.
Postdoctoral Research Fellow | Big Data Engineer and Scientist
York University / IBM Canada, Dec 2013 - Jul 2014, Toronto, ON, CA
- Architected a data pipeline on the Cloudera platform.
- Worked on a highway traffic monitoring and analysis project (CVST) for the Greater Toronto Area (GTA), dealing with large amounts of traffic data (i.e., Big Data).
- Built a data platform, Godzilla, to ingest real-time data (using HBase over HDFS) coming from multiple sources, including sensors, video cameras, mobile phones, and Twitter.
- Mined the data in Godzilla (using Mahout) to identify anomalies and clusters and to build prediction models and recommender systems.
- Conducted a proof-of-concept exercise by first building a MySQL data warehouse (data service) using a star schema with three months of traffic data (300 GB). Provided a web interface to the data service using phpMyAdmin.
- Analyzed data with Weka for outlier detection, cluster identification, and correlation evaluation.
- Extracted key performance indicators (KPIs) and visualized weekly patterns of interest to the Ministry of Transportation (MTO).
- Set up multiple Cloudera clusters (CDH4) over an OpenStack-compliant cloud (SAVI). Implemented MapReduce job bursting between Cloudera clusters, with an intelligent controller managing the bursting based on metrics extracted from the Hadoop master (JobTracker). Developed cost and performance models to drive the controller agent.
- Installed and managed the Cloudera Hadoop distribution (CDH5) over SAVI. Sqoop-ed data from the data service to the HBase/HDFS cluster.
- Co-supervised two MSc students.
Major development tools: Cloudera (CDH4 & CDH5), Hadoop, HDFS, HBase, Pig, Hive, RapidMiner/Weka, Sqoop, Solr, MySQL, CloverETL, OpenStack, Java, Microsoft Excel, Eclipse, CVS, Linux, Windows.
PhD: Minimizing the Cost of Executing Data-Intensive Workloads in Public Clouds
Queen's University, Canada, Aug 2009 - Nov 2013, Kingston, ON, CA
- Research area: lowering the deployment cost of data-intensive applications in public clouds.
- Consulted for Gnowit on the use of cloud computing to scale its online media-monitoring platform from Canada to the USA. Explored elasticity strategies for Gnowit such that the number of VMs in use can increase and decrease dynamically with current need (e.g., measured by VM utilization).
- Installed the Cloudera Hadoop distribution (CDH) on a local cluster, including HDFS, Hive, Pig, and Mahout. Used Sqoop to transfer data from TPC database benchmarks into CDH.
- Compared the scalability of executing an analytical database in MySQL against CDH, and analyzed the results using IBM Many Eyes and IBM Cognos.
- Experimented with machine learning over Hadoop with Radoop and Mahout.
- Explored the dollar cost of executing data-intensive workloads in local CDH versus Amazon's Elastic MapReduce (EMR).
- Developed performance and cost models to quantify the dollar cost of deploying data-intensive applications in Amazon EC2 clouds (a toy cost function follows this entry). The cost model accounts for any workload type (analytical, transactional, or mixed) and models the costs of all the resources used in the workload execution.
- Developed a data service using MySQL in the Amazon cloud.
- Developed a prediction model to forecast the behaviour of a workload execution in a multi-partition database system in the Amazon cloud, using R and Weka to build the model.
- Developed algorithms that search for the minimal dollar-cost deployment.
- Systematic study of Big Data processing platforms, parallel database systems (Vertica, Teradata, Greenplum, and Aster Data), and provisioning techniques in public clouds.
- Career consultant at Queen's University for both undergraduate and graduate students.
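A deployment cost model of this shape can be written as a toy function: total dollars as a sum of resource usage times unit price. The prices below are placeholders, not the thesis's calibrated values.

```python
# Toy dollar-cost model for a cloud deployment; unit prices are placeholders.
PRICES = {
    "vm_hour": 0.096,            # $ per VM-hour
    "storage_gb_month": 0.023,   # $ per GB-month stored
    "egress_gb": 0.09,           # $ per GB transferred out
}

def deployment_cost(vm_hours: float, storage_gb_months: float, egress_gb: float) -> float:
    """Sum resource usage times unit price: the general shape of such models."""
    return (vm_hours * PRICES["vm_hour"]
            + storage_gb_months * PRICES["storage_gb_month"]
            + egress_gb * PRICES["egress_gb"])

# e.g. 4 VMs for 30 days, 500 GB stored, 200 GB served out
print(f"${deployment_cost(4 * 24 * 30, 500, 200):.2f}")
```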
Cloud Engineer (Contract)
Citi, Sep 2008 - Apr 2009, New York, New York, US
- Worked in a global grid-computing team that aimed to provide a centralized grid across Citigroup; such grids are the forerunners of cloud computing.
- Identified and evaluated advanced and emerging technologies in grid and cloud computing.
- Built a centralized compute grid, a Symphony cloud, as part of a team. The cloud efficiently harnessed the computation of commodity servers but bottlenecked on data access for data-intensive applications. Evaluated the suitability of the cloud with centralized data caches (GemFire, kdb) to provide efficient and scalable data access. Grid monitoring with Ganglia.
- Developed a comprehensive test plan for data-intensive applications on the Symphony cloud.
- Mentored and trained colleagues in the use of the cloud and the testing framework.
- Improved the software development process by setting up a source control system (CVS).
Major development tools: Symphony, ETL, Ganglia, Java/C++, GemFire, kdb, Microsoft Excel, CVS, Eclipse, Linux, Windows.
Software Engineer
Platform Computing, Canada, Sep 2006 - Mar 2008, Armonk, New York, US
- Worked on six projects in the Symphony team. Symphony (C/C++, Java, .NET) is a service-oriented architecture consisting of middleware, SOAM (500k LOC), built on top of a grid resource-allocation manager, EGO (700k LOC).
- Implemented the Service Replay Debugger, which enables customers to reproduce in detail an error that occurred in a distributed application. This is being filed as a patent.
- The Resource Manager (resmgr) is the component that binds SOAM and EGO. Led the resmgr refactoring project by liaising in cross-team communications.
- Worked on one project in the Load Sharing Facility (LSF). LSF (1.8M LOC) is a computational batch-job scheduling system. LSF is a 14-year-old mature product written primarily in C; any development or defect resolution required reverse engineering and ensuring the solution was backward compatible across all 10 platforms and 6 compilers.
- Authored FAQs, used Scrum methods, and served as a rotational chair of the team meetings.
Major development tools: C/C++, .NET, Java, C#, Microsoft Visual Studio 2003, Eclipse, Purify, CVS, Linux, Windows.
Software Engineer
Picdar, Nov 2005 - Apr 2006
- Compared the latency of fetching images given metadata indices in various databases. Explored different methods for populating and searching the databases to avoid experimental bias, random population being one such method. Repeated the experiments and applied statistical significance tests to draw conclusions and ensure the sanity of the results. In hindsight, this process also allowed analysis of the quality of the search indices.
Major development tools: PostgreSQL, Oracle, Java, JDBC, JUnit, CVS, JBoss, Mac OS.
Software Engineer
Transitive, Feb 2002 - Mar 2004, Los Gatos, CA, US
- Developed a dynamic binary translator (QuickTransit) using effective compiler technologies in C/C++. QuickTransit decodes executables at runtime into an intermediate representation (IR), performs optimizations on the IR, and then translates the IR to allow execution on the target hardware. QuickTransit became part of Rosetta, which allowed old Mac applications to run on the then-new Intel-based Macs.
- QuickTransit translation aims to be bit-wise accurate, so the nature of the product requires rigorous testing. The testing is therefore automated and consists of independently executable jobs, which are beyond the capacity of a single computing resource. Harnessed the computing power of a compute farm for these jobs by co-developing a Perl test harness that uses a network queuing system (specifically GNQS) to schedule and manage testing jobs on a computational cluster. The test harness is used in production for development, code reviews, regression testing, and releases of the DBT.
- Explored visualization techniques to facilitate understanding of QuickTransit's virtual memory. QuickTransit generated and modified the target binary, and the DBT's virtual memory was bulky and complex. Modelled the virtual memory as a 3D world that presented the bigger picture with the ability to zoom in on a particular area of memory.
- Developed the CodeCoverage quality-assurance management tool in Perl, used by both management and engineers.
- Developed the Coding Standard Inspection (CSI) tool to standardize code development.
- Automated product testing by developing a distributed Perl test harness, significantly improving engineers' productivity.
- Managed engineers' development worlds and released code baselines on a rotational basis.
- Responsible for overseeing all computer-memory-related testing for QuickTransit.
Major development tools: C++, Perl, OpenGL, GDB, XML, Mpatrol, Vmalloc, CVS, Source Navigator, Linux.
Software Engineer
Cisco Systems, Aug 2001 - Jul 2002
- Worked as a key team player on a complex telecommunication and VoIP product, the H.323 Signalling Interface.
- Liaised with a Cisco partner in Germany on software development.
- Used C++ and Tcl/Tk to develop call-forwarding and redirection features on the core product.
- Multi-tasked across day-to-day activities, cross-team assignments, and Cisco certification (CCNA).
Major development tools: C++, Tcl/Tk, GDB, CVS, ClearCase, Solaris.
Rizwan Mian, PhD Skills
Rizwan Mian, PhD Education Details
- Stanford University: Machine Learning
- Queen's University: Computer Science (Processing Large Data Sets in Cloud Computing)
- York University: Computer Science
- The University of Manchester: Masters
- The University of Manchester: Bachelors (Hons)
Frequently Asked Questions about Rizwan Mian, PhD
What company does Rizwan Mian, PhD work for?
Rizwan Mian, PhD works for Old World Industries.
What is Rizwan Mian, PhD's role at the current company?
Rizwan Mian, PhD's current role is Lead Gen and AI Agentic Architect and Developer | Azure and Solutions Architect.
What is Rizwan Mian, PhD's email address?
Rizwan Mian, PhD's email address is vi****@****ail.com
What is Rizwan Mian, PhD's direct phone number?
Rizwan Mian, PhD's direct phone number is +164753*****
What schools did Rizwan Mian, PhD attend?
Rizwan Mian, PhD attended Stanford University, Queen's University, York University, and The University of Manchester (Masters and Bachelors).
What are some of Rizwan Mian, PhD's interests?
Rizwan Mian, PhD is interested in reading poetry, travelling, learning art, speed reading, sharpening memory, browsing interdisciplinary research, and politics and current affairs.
What skills is Rizwan Mian, PhD known for?
Rizwan Mian, PhD has skills such as Software Development, Java, Linux, C++, Databases, Distributed Systems, Perl, Cloud Computing, Programming, C, Unix, and Algorithms.
Who are Rizwan Mian, PhD's colleagues?
Rizwan Mian, PhD's colleagues are Adiba Khan, Thomas Dadej, Linda Ackerson, Wai Ling Tham, Christine Hearne, Todd Wolfe, and Russel Healey.