Ahmed M.
Interested in autonomous navigation, specifically the accurate spatial mapping of static and dynamic environments, using machine vision, inertial navigation, sensor fusion, SLAM, machine learning, and robotic perception.
Chief Technology Officer, Conelabs | Aug 2023 - Present | Waterloo, Ontario, Canada
Senior Software Engineer, Localization and Mapping Team Lead, Luxolis | Oct 2022 - Jul 2023 | Seoul, South Korea
PhD Candidate, Carleton University | Apr 2020 - Feb 2023 | Ottawa, Ontario, Canada
Research topic: Enhanced Indoor Visual Navigation Using Sensor Fusion and Semantic Information.
1. Sensor fusion for robust indoor visual navigation:
• Developed and tested a hybrid vision-inertial fusion scheme that applies Kalman filtering to make indoor visual navigation robust against partial occlusions. The proposed system maintains stereo-vision-level accuracy using a single camera aided by inertial sensors, while achieving higher frame rates.
• Further enhanced the fusion filter by integrating UWB positioning observations. The filter design treats the platform's physical motion limits as a non-holonomic constraint, further improving overall accuracy and robustness.
• A limitation of UWB positioning indoors is that anchors must be placed at previously known positions. Proposed an automatic UWB grid-expansion technique that allows anchors to be deployed on the fly in unstructured indoor environments.
2. Enhanced visual SLAM using semantic segmentation and layout estimation:
• Inspired by neuroscience observations of how humans naturally navigate indoors, developed an improved visual SLAM system that uses semantic segmentation and indoor layout estimation to optimize the map representation and increase positioning accuracy, imitating the human brain's approach to navigation and spatial representation.
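The hybrid vision-inertial fusion above can be illustrated with a minimal one-dimensional Kalman filter: inertial acceleration drives the prediction step, and a camera position fix drives the update. This is a hypothetical toy sketch, not the thesis implementation; the constant-velocity state model and all noise values are assumptions.

```cpp
#include <cassert>
#include <cmath>

// Minimal 1-D Kalman filter fusing an IMU (prediction) with a camera
// position fix (update). Illustrative only; q and r are assumed values.
struct Kalman1D {
    double x = 0.0, v = 0.0;            // state: position, velocity
    double P[2][2] = {{1, 0}, {0, 1}};  // state covariance
    double q = 0.01;                    // process noise (IMU drift, assumed)
    double r = 0.25;                    // measurement noise (camera, assumed)

    // Prediction: integrate IMU acceleration a over timestep dt.
    void predict(double a, double dt) {
        x += v * dt + 0.5 * a * dt * dt;
        v += a * dt;
        // P = F P F^T + Q, with F = [[1, dt], [0, 1]]
        double p00 = P[0][0] + dt * (P[0][1] + P[1][0]) + dt * dt * P[1][1] + q;
        double p01 = P[0][1] + dt * P[1][1];
        double p10 = P[1][0] + dt * P[1][1];
        double p11 = P[1][1] + q;
        P[0][0] = p00; P[0][1] = p01; P[1][0] = p10; P[1][1] = p11;
    }

    // Update: fuse a camera position measurement z (H = [1, 0]).
    void update(double z) {
        double s = P[0][0] + r;        // innovation covariance
        double k0 = P[0][0] / s;       // Kalman gain (position)
        double k1 = P[1][0] / s;       // Kalman gain (velocity)
        double y = z - x;              // innovation
        x += k0 * y;
        v += k1 * y;
        double p00 = (1 - k0) * P[0][0];
        double p01 = (1 - k0) * P[0][1];
        double p10 = P[1][0] - k1 * P[0][0];
        double p11 = P[1][1] - k1 * P[0][1];
        P[0][0] = p00; P[0][1] = p01; P[1][0] = p10; P[1][1] = p11;
    }
};
```

With repeated predict/update cycles the estimate converges to the camera fix while the covariance shrinks, which is the mechanism that lets a single camera plus IMU approach stereo-level accuracy.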
Research Assistant, Carleton University | Jan 2019 - Feb 2023 | Ottawa, Ontario, Canada
Lab: Embedded Multi-Sensor Systems Lab (EMS-Lab)
Multi-sensor synchronized logger system: developed an embedded platform that performs real-time synchronized logging and visualization of data from multiple navigation sensors. The platform also serves as a testbed for future research in indoor navigation.
EMS-Lab (DND-funded projects):
• Led a team of five graduate and one undergraduate student developing pedestrian tracking and SLAM visualization using ROS and C++ (Qt5).
• Coordinated with the Stereolabs, Xsens, and GeoSLAM technical support teams on the data streams from the ZED 2 cameras, MTi-100 IMU, and GeoSLAM ZEB Revo RT to match our project requirements.
• Implemented a Linux driver for the Xsens Awinda motion capture system.
• Designed and developed a multi-threaded C++ (Qt5) Remote-Control Software (RCS) engine for online monitoring and control of the logger system developed in-house at EMS-Lab.
• Developed a real-time multi-threaded logger system for sensor fusion on NVIDIA Jetson TX2 and Nano in C++, collecting data from GPS, IMU, camera, laser scanner, and UWB nodes.
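The multi-threaded logger described above (several sensor streams draining into one synchronized log) follows a classic producer-consumer pattern, which can be sketched as below. This is a hypothetical minimal version, not the EMS-Lab code; the sensor names and sample counts are placeholders.

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>
#include <thread>
#include <vector>

// One record from one sensor stream (real code would carry a timestamp
// and payload; seq stands in for both here).
struct Sample { std::string sensor; int seq; };

// Thread-safe queue shared between sensor threads and the writer thread.
class SampleQueue {
    std::queue<Sample> q_;
    std::mutex m_;
    std::condition_variable cv_;
    bool closed_ = false;
public:
    void push(Sample s) {
        { std::lock_guard<std::mutex> lk(m_); q_.push(std::move(s)); }
        cv_.notify_one();
    }
    void close() {
        { std::lock_guard<std::mutex> lk(m_); closed_ = true; }
        cv_.notify_all();
    }
    // Blocks for the next sample; returns false once closed and drained.
    bool pop(Sample& out) {
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [&] { return !q_.empty() || closed_; });
        if (q_.empty()) return false;
        out = std::move(q_.front());
        q_.pop();
        return true;
    }
};

// Spawns one producer thread per sensor and a single writer thread;
// returns how many records the writer drained (all of them, if correct).
inline int run_logger(const std::vector<std::string>& sensors, int n_samples) {
    SampleQueue q;
    std::vector<std::thread> producers;
    for (const auto& name : sensors)
        producers.emplace_back([&q, name, n_samples] {
            for (int i = 0; i < n_samples; ++i) q.push({name, i});
        });
    int logged = 0;
    std::thread writer([&] {
        Sample s;
        while (q.pop(s)) ++logged;  // a real logger would write to disk here
    });
    for (auto& t : producers) t.join();
    q.close();
    writer.join();
    return logged;
}
```

A single writer thread keeps the log ordered and the disk access serialized, while each sensor thread only blocks briefly on the queue mutex, which is the usual reason to structure a real-time logger this way.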
PhD Student, Carleton University | Jan 2019 - Apr 2020 | Ottawa, Ontario, Canada
Research Associate, Military Technical College | Apr 2015 - Jan 2019
Research Assistant (Embedded/Computer Vision), Military Technical College | Sep 2010 - Apr 2015
• Weather conditions can severely impair driving, and haze-removal algorithms in the literature were far from real-time. Developed an embedded real-time haze-removal system that sped up the original algorithm from 1 frame/minute to 10 frames/sec, implemented in C++ under Linux on Texas Instruments' OMAP-L138 embedded platform.
• Video surveillance systems require massive storage, yet most stored frames contain idle scenes. To reduce unnecessary stored data, proposed and implemented an embedded visual surveillance system that uses motion detection to trigger recording of the camera feed. Developed in C++ with OpenCV under Linux on the DM6446 EVM embedded platform.
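The motion-triggered recording idea above can be sketched with simple frame differencing: compare successive grayscale frames pixel by pixel and trigger storage when enough pixels change. This hedged example uses synthetic frames instead of a camera feed, and the thresholds are illustrative assumptions, not the original system's parameters.

```cpp
#include <cassert>
#include <cstdint>
#include <cstdlib>
#include <vector>

// Frame-differencing motion trigger on grayscale frames. Reports motion
// when more than area_frac of the pixels change by more than pixel_thresh.
// Both thresholds are assumed values for illustration.
inline bool motion_detected(const std::vector<uint8_t>& prev,
                            const std::vector<uint8_t>& curr,
                            int pixel_thresh = 25,
                            double area_frac = 0.01) {
    if (prev.size() != curr.size() || prev.empty()) return false;
    size_t changed = 0;
    for (size_t i = 0; i < prev.size(); ++i)
        if (std::abs(int(curr[i]) - int(prev[i])) > pixel_thresh) ++changed;
    return changed > area_frac * prev.size();
}
```

Gating the recorder on this predicate is what lets a surveillance system skip idle scenes: frames are only written while motion persists, which is the storage saving the bullet above describes.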
Research Engineer, Military Technical College | Jul 2008 - Sep 2010
Development Team Lead, Arab Organization for Industrialization | Jan 2014 - Jan 2019 | Heliopolis, Cairo, Egypt
Senior Research Engineer, Arab Organization for Industrialization | Jan 2010 - Jan 2014 | Heliopolis, Cairo, Egypt
Ahmed M. Education
Carleton University | Simultaneous Localization and Mapping (SLAM)
Military Technical College | Embedded Computer Systems
Military Technical College | Computer Engineering
Frequently Asked Questions about Ahmed M.
What company does Ahmed M. work for?
Ahmed M. works for Conelabs.
What is Ahmed M.'s role at the current company?
Ahmed M.'s current role is Co-Founder and CTO at ConeLabs (Techstars '23), revolutionizing building inspection for engineers, by engineers.
What schools did Ahmed M. attend?
Ahmed M. attended Carleton University and the Military Technical College.