Designing, implementing and scaling state-of-the-art algorithms across the perception stack, enabling seamless XR experiences under the Magic Leap - Google Partnership.
Focused on deploying detection and localization pipelines on highly constrained hardware, while also developing the next generation of multi-layered HD maps under the Magna - Lyft Level 5 Partnership.

A Human-Centric Network of Co-Robots
As a part of the DASC Lab, over the course of 1.5 years, I implemented and scaled multiple optimization-based control and coverage algorithms developed by Will and Dr. Dimitra Panagou, translating theoretical guarantees into a deployable and simulation-validated system. AstroNet was one such research initiative focused on enabling autonomous coverage and discrete navigation strategies for NASA’s free-flying Astrobee robots operating aboard the ISS, with the goal of reducing reliance on human teleoperation.
As shown in the embedded video, my primary focus in this project was building a high-fidelity ISS simulation environment using ROS and Gazebo to evaluate coverage strategies under realistic microgravity dynamics and sensing constraints. This included developing an optimized real-time control pipeline capable of executing discrete coverage policies within the ROS ecosystem while maintaining stable closed-loop behavior.
To support end-to-end evaluation, I designed a distributed architecture integrating MATLAB (core control and coverage algorithms), ROS (middleware), Gazebo (physics simulation), and an immersive VR visualization interface via an Oculus Rift. This framework enabled closed-loop validation of guidance, navigation, and control (GNC) strategies prior to hardware deployment, significantly improving experimental throughput and reproducibility.
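To give a concrete sense of that closed loop, here is a minimal rospy sketch of the pattern: subscribe to the simulated Astrobee pose from Gazebo, drive toward the current waypoint of a discrete coverage policy, and publish a velocity command. The topic names, gains, and waypoints are illustrative assumptions, not the project's actual interfaces.

```python
# Minimal sketch (not the project's actual code) of the closed-loop pattern used to
# run coverage policies against the Gazebo ISS sim. Topic names, message types, and
# gains here are illustrative assumptions.
import rospy
import numpy as np
from geometry_msgs.msg import Twist, PoseStamped

class CoverageNode:
    def __init__(self, waypoints):
        self.waypoints = waypoints      # discrete coverage policy, precomputed upstream
        self.idx = 0
        self.pose = None
        rospy.Subscriber("/astrobee/pose", PoseStamped, self.pose_cb)   # hypothetical topic
        self.cmd_pub = rospy.Publisher("/astrobee/cmd_vel", Twist, queue_size=1)

    def pose_cb(self, msg):
        self.pose = np.array([msg.pose.position.x, msg.pose.position.y, msg.pose.position.z])

    def step(self):
        if self.pose is None:
            return
        error = self.waypoints[self.idx] - self.pose
        if np.linalg.norm(error) < 0.1:                       # waypoint reached, advance policy
            self.idx = min(self.idx + 1, len(self.waypoints) - 1)
        cmd = Twist()
        cmd.linear.x, cmd.linear.y, cmd.linear.z = 0.5 * error   # simple proportional drive
        self.cmd_pub.publish(cmd)

if __name__ == "__main__":
    rospy.init_node("coverage_controller")
    node = CoverageNode(waypoints=[np.array([1.0, 0.0, 0.5]), np.array([1.0, 1.0, 0.5])])
    rate = rospy.Rate(20)                                     # 20 Hz control loop
    while not rospy.is_shutdown():
        node.step()
        rate.sleep()
```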

Autonomous Aerial and Ground Robot Swarm
This was one of the more interesting robotics projects I worked on: a collaboration with TARDEC, guided by Dr. Dimitra Panagou, in which I used a mix of CrazyFlie drones and Aion R1 rovers for multi-robot autonomous reconnaissance missions. The goal was to get a swarm of heterogeneous robots to work together seamlessly, even in unpredictable environments, while avoiding collisions and keeping track of their positions.
My main focus was on multi-robot localization and collision avoidance, helping the drones and rovers navigate in sync without stepping on each other's toes. I was also solely responsible for the system architecture, developing a pub-sub architecture in ROS so the swarm of robots could communicate with each other. The project gave me hands-on experience with distributed algorithms, sensor fusion, and resilient swarm coordination.

As you can see from the embedded videos, we had the robots follow generic routines, such as tracking a set path or forming a defined shape, while the control algorithms ran in real time: a set of 4 drones and 3 rovers was able to form basic shapes without colliding or running into each other. For the experiments shown, localization was done indoors using a Vicon motion-capture system, while outdoors the rovers relied on SLAM using their onboard sensors (lidar + odometry).
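For flavor, below is a minimal sketch of that pub-sub pattern: every robot publishes its pose on a namespaced topic, and each agent subscribes to its teammates' poses to add a simple repulsive avoidance term to its nominal command. The namespaces, topics, and repulsion rule are assumptions for illustration; the project's actual controllers and message definitions differed.

```python
# Illustrative sketch of the ROS pub-sub pattern used for swarm communication.
# Robot namespaces, topic names, and the simple repulsion term are assumptions,
# not the actual controllers from the project.
import rospy
import numpy as np
from geometry_msgs.msg import PoseStamped, Twist

ROBOTS = ["cf1", "cf2", "cf3", "cf4", "r1", "r2", "r3"]   # hypothetical namespaces

class SwarmAgent:
    def __init__(self, name):
        self.name = name
        self.poses = {r: None for r in ROBOTS}
        for r in ROBOTS:
            rospy.Subscriber("/%s/pose" % r, PoseStamped, self.pose_cb, callback_args=r)
        self.cmd_pub = rospy.Publisher("/%s/cmd_vel" % name, Twist, queue_size=1)

    def pose_cb(self, msg, robot):
        self.poses[robot] = np.array([msg.pose.position.x, msg.pose.position.y])

    def avoidance_velocity(self, min_dist=0.5):
        """Sum of repulsive terms from any neighbor closer than min_dist."""
        me = self.poses[self.name]
        v = np.zeros(2)
        if me is None:
            return v
        for r, p in self.poses.items():
            if r == self.name or p is None:
                continue
            d = me - p
            dist = np.linalg.norm(d)
            if 1e-6 < dist < min_dist:
                v += (min_dist - dist) * d / dist      # push away from the neighbor
        return v

    def step(self, nominal_velocity):
        cmd = Twist()
        cmd.linear.x, cmd.linear.y = nominal_velocity + self.avoidance_velocity()
        self.cmd_pub.publish(cmd)

if __name__ == "__main__":
    rospy.init_node("swarm_agent")
    agent = SwarmAgent("cf1")
    rate = rospy.Rate(50)
    while not rospy.is_shutdown():
        agent.step(np.array([0.2, 0.0]))   # nominal velocity from the shape routine
        rate.sleep()
```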

WiFi based Indoor Positioning System
This project was closer to a master's thesis than anything else. Ever since I learned about SLAM, I wanted to see whether robots/devices could localize themselves in indoor environments using signals from low-cost sensors such as WiFi/BLE beacons, which are already installed in most buildings. Working under Dr. Maani Ghaffari, I explored how radio waves, specifically WiFi, can be used to accurately estimate the 3-DOF position of a device inside a GPS-denied environment. After much exploration, I created a deep neural network that could differentiate between Line-Of-Sight (LOS) and Non-Line-Of-Sight (NLOS) packets with ~95% accuracy. Without this classifier, the robot often lost track of its position in the environment, as seen in the illustration below (FastSLAM without classifier).
The algorithm was initially developed and tested using the open-source PyLayers simulator. Once we confirmed that the core algorithm worked, we moved to the Fetch robot, using the Friis free-space model together with our classifier as the measurement model in FastSLAM, and the robot's wheel odometry as its motion model. A variant using particle filter based SLAM was also implemented to evaluate performance. In our experiments, we achieved remarkably accurate localization, with ~1.6 m RMSE in a 35 m x 170 m indoor environment. The animation below shows how FastSLAM performs when the classifier is used to inform its state.
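As a rough illustration of how the pieces fit together, here is a sketch of turning an RSSI reading into a range via a Friis-style free-space model and using it to reweight FastSLAM particles, with NLOS packets gated out by the classifier. The reference RSSI, noise level, and gating logic are placeholder assumptions, not the values we used.

```python
# Minimal sketch of using a Friis-style free-space model as a range measurement for
# particle weighting in FastSLAM. The constants (reference RSSI, noise sigma) and the
# LOS gating are illustrative assumptions, not the project's values.
import numpy as np

RSSI_AT_1M = -40.0      # assumed reference RSSI at d0 = 1 m (dBm)
PATH_LOSS_N = 2.0       # free-space path-loss exponent (Friis)
RANGE_SIGMA = 1.5       # assumed range noise (m)

def rssi_to_range(rssi_dbm):
    """Invert the free-space log-distance model to get a range estimate."""
    return 10.0 ** ((RSSI_AT_1M - rssi_dbm) / (10.0 * PATH_LOSS_N))

def update_particle_weights(particles, weights, ap_position, rssi_dbm, is_los):
    """Reweight particles with the RSSI-derived range; NLOS packets are skipped,
    mirroring how the LOS/NLOS classifier gates measurements."""
    if not is_los:
        return weights                                   # discard unreliable NLOS packet
    z = rssi_to_range(rssi_dbm)
    dists = np.linalg.norm(particles[:, :2] - ap_position, axis=1)
    likelihood = np.exp(-0.5 * ((dists - z) / RANGE_SIGMA) ** 2)
    weights = weights * likelihood
    return weights / np.sum(weights)
```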

Unsupervised Learning based on GMM
Another project with Will and Dr. Dimitra Panagou, in which we used AR glasses and a camera to provide assistive camera views via the robot. A Vicon motion-capture system tracked ground-truth head motions, which were then used to fit a Gaussian Mixture Model (GMM) via online Expectation Maximization (EM), defining the user's visual interest function. My role on this project was solely enablement: I set up the entire software stack, including the AR glasses, the camera, and the Vicon system, implemented the algorithm in C++, and optimized the hell out of it using CUDA.
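To sketch the online EM piece, here is a generic incremental (stochastic-approximation) update for a GMM fitted one head-motion sample at a time. This is not the exact update rule or the C++/CUDA implementation from the project; the step size and component count are assumptions.

```python
# A minimal sketch of an online (incremental) EM update for a GMM over gaze/head
# direction samples. This is a generic stochastic-approximation variant, not the exact
# rule used in the project, and all hyperparameters are assumptions.
import numpy as np

def gaussian_pdf(x, mean, cov):
    d = x - mean
    inv = np.linalg.inv(cov)
    norm = np.sqrt(((2 * np.pi) ** len(x)) * np.linalg.det(cov))
    return np.exp(-0.5 * d @ inv @ d) / norm

def online_em_step(x, weights, means, covs, eta=0.05):
    """One E-step + stochastic M-step for a single new sample x."""
    K = len(weights)
    # E-step: responsibilities of each component for this sample
    resp = np.array([weights[k] * gaussian_pdf(x, means[k], covs[k]) for k in range(K)])
    resp /= resp.sum()
    # M-step: move the parameters a small step toward this sample
    for k in range(K):
        weights[k] = (1 - eta) * weights[k] + eta * resp[k]
        step = eta * resp[k] / max(weights[k], 1e-6)
        diff = x - means[k]
        means[k] = means[k] + step * diff
        covs[k] = covs[k] + step * (np.outer(diff, diff) - covs[k])
    weights /= weights.sum()
    return weights, means, covs

# e.g. 3 components over 2D gaze directions
w = np.ones(3) / 3
mu = [np.zeros(2), np.array([1.0, 0.0]), np.array([0.0, 1.0])]
cov = [np.eye(2) * 0.1 for _ in range(3)]
w, mu, cov = online_em_step(np.array([0.9, 0.1]), w, mu, cov)
```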
In the embedded video, you can see how the drone autonomously navigates to the highest-priority task that the operator is not currently focusing on. The core idea: say an operator has multiple tasks, task 1, task 2, and task 3 (ordered by priority). If the operator is focusing on task 2 at the moment, the drone automatically navigates to task 1 and provides a visual feed of how it is unfolding. For in-depth details on this project, I'd highly encourage you to refer to the paper we published with our findings.

Self Balancing Two-Wheeled Bot
Another one of those fun projects from the UMich curriculum: a team of two was responsible for building a self-balancing two-wheeled bot from scratch, designing everything from the mechanical structure to the cascaded PID controller. The goal was for the bot to autonomously navigate obstacle courses while maintaining its balance and position.
We used a combination of wheel odometry and Vicon-based motion capture to estimate the bot's position and orientation. We developed control algorithms to regulate the bot's balance, motion, and heading, along with higher-level logic to sequence its maneuvers, and the bot was able to navigate obstacle courses while staying balanced and on track, all within very strict time limits. In a competition of 12 teams, ours was the fastest and most stable!
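A bare-bones sketch of the cascaded structure, assuming illustrative gains and limits: the outer PID maps position error to a desired tilt angle, and the inner PID maps tilt error to a motor command.

```python
# Minimal sketch of the cascaded control idea: an outer loop turns position error into
# a desired tilt angle, and an inner loop turns tilt error into a wheel command.
# Gains and limits are illustrative assumptions, not the tuned values from the bot.
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

outer = PID(kp=0.8, ki=0.0, kd=0.3)    # position -> desired tilt
inner = PID(kp=12.0, ki=0.5, kd=0.4)   # tilt -> wheel duty cycle

def control_step(position, tilt, target_position, dt):
    desired_tilt = outer.update(target_position - position, dt)
    desired_tilt = max(-0.15, min(0.15, desired_tilt))       # cap lean angle (rad)
    duty = inner.update(desired_tilt - tilt, dt)
    return max(-1.0, min(1.0, duty))                         # saturate motor command
```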

Deep Learning for SLAM
In this project, I developed an online deep-learning framework for automatic data labeling of mobile camera input using structure-from-motion (SfM). A deep learning–based structure-from-motion algorithm generated high-quality pose labels from raw images, eliminating the need for manual annotation. These labeled poses were used to train PoseNet as a visual sensor model, while GPS and odometry data provided the action model for the robot. The approach integrated learning-based perception with probabilistic state estimation in a factor graph using GTSAM and iSAM.
PoseNet served as the learned observation model, regressing camera poses from images, which were fused with motion priors from odometry and GPS in real time. The system was first validated in GTSAM simulations to ensure consistency and robustness before deployment on a differential drive mobile robot. In real-world operation, camera inputs were processed online, and PoseNet outputs were integrated with motion data to provide continuously refined pose estimates.
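For a flavor of the factor-graph side, here is a minimal sketch assuming GTSAM's Python bindings: odometry enters as between factors, PoseNet-style absolute poses enter as (noisier) unary priors, and iSAM2 incrementally re-optimizes. The keys, noise sigmas, and the planar Pose2 simplification are assumptions; the real pipeline's configuration differed.

```python
# A minimal sketch (assuming GTSAM 4.x Python bindings) of fusing odometry "between"
# factors with PoseNet-style absolute pose measurements in an iSAM2 loop. Keys, noise
# values, and the 2D simplification are illustrative assumptions.
import numpy as np
import gtsam

odom_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.05, 0.05, 0.02]))
meas_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.5, 0.5, 0.1]))   # PoseNet is noisier

isam = gtsam.ISAM2()
graph = gtsam.NonlinearFactorGraph()
initial = gtsam.Values()

# Anchor the first pose with a prior.
graph.add(gtsam.PriorFactorPose2(0, gtsam.Pose2(0, 0, 0), odom_noise))
initial.insert(0, gtsam.Pose2(0, 0, 0))
isam.update(graph, initial)

def add_step(k, odom_delta, posenet_pose):
    """Add one timestep: an odometry factor k-1 -> k and a PoseNet absolute-pose prior on k."""
    new_graph = gtsam.NonlinearFactorGraph()
    new_values = gtsam.Values()
    new_graph.add(gtsam.BetweenFactorPose2(k - 1, k, odom_delta, odom_noise))
    new_graph.add(gtsam.PriorFactorPose2(k, posenet_pose, meas_noise))
    # Initialize the new pose by composing the previous estimate with the odometry delta.
    new_values.insert(k, isam.calculateEstimate().atPose2(k - 1).compose(odom_delta))
    isam.update(new_graph, new_values)
    return isam.calculateEstimate().atPose2(k)

pose1 = add_step(1, gtsam.Pose2(1.0, 0.0, 0.0), gtsam.Pose2(1.1, 0.05, 0.02))
```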
By combining deep learning with classical state estimation, this framework offered a scalable, hybrid approach for robust localization. It demonstrated that automatically labeled camera data could train effective sensor models, while probabilistic factor-graph optimization ensured globally consistent pose estimation for mobile robotic systems.

Geometric Deep Learning
A more trivial project (compared to the previous ones, of course), this one was also part of our curriculum at UMich. We were given the GTA 10k dataset and asked to develop a custom neural network to regress the 3D bounding boxes of vehicles using a simple 2D bounding box detector and geometric constraints. As a solution, I developed a 20-layer SE-ResNet + YOLO 3D bounding box regressor, incorporating ideas from Mousavian et al. It was run as a Kaggle competition, where I ranked 5th out of 41 teams, achieving ~73% accuracy with a centroid MSE of ~9.
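As a rough illustration of the Mousavian-style idea (not the actual SE-ResNet + YOLO network I submitted), here is a PyTorch sketch of a MultiBin-flavored head that regresses dimensions, orientation-bin confidences, and per-bin sin/cos offsets; recovering translation from the 2D box via geometric constraints is omitted. Layer sizes and bin count are assumptions.

```python
# Rough sketch (illustrative only) of a Mousavian-style regression head: the network
# predicts object dimensions plus a "MultiBin" orientation (bin scores and per-bin
# sin/cos offsets). Feature dimension and bin count are placeholder assumptions.
import torch
import torch.nn as nn

class Box3DHead(nn.Module):
    def __init__(self, feat_dim=512, n_bins=2):
        super().__init__()
        self.n_bins = n_bins
        self.dims = nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU(), nn.Linear(256, 3))
        self.bin_conf = nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU(), nn.Linear(256, n_bins))
        self.bin_angle = nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU(), nn.Linear(256, 2 * n_bins))

    def forward(self, feats):
        dims = self.dims(feats)                                   # (B, 3) dimension residuals
        conf = self.bin_conf(feats)                               # (B, n_bins) bin logits
        angles = self.bin_angle(feats).view(-1, self.n_bins, 2)   # (B, n_bins, 2) sin/cos
        angles = angles / torch.norm(angles, dim=2, keepdim=True) # normalize to the unit circle
        return dims, conf, angles

head = Box3DHead()
dims, conf, angles = head(torch.randn(4, 512))   # e.g. features cropped from the 2D detection
```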

Mask R-CNN based Pedestrian Tracking
Pedestrian tracking is a long-standing problem, and many ways of tracking pedestrians accurately have already been developed. The difficulty comes in when pedestrians are occluded or when there are multiple pedestrians in the scene. In this project, I developed a pedestrian detection and tracking system that combines deep learning–based instance segmentation with probabilistic multi-target tracking using a particle filter. A pre-trained Mask R-CNN was modified to detect and segment only pedestrians at the pixel level, allowing for precise localization even in partially occluded or crowded scenes.
We integrated a Probability Hypothesis Density (PHD) filter with dense optical flow and image segmentation cues to enable reliable multi-target tracking without requiring explicit data association for every object; the optical flow captured short-term motion dynamics between consecutive frames. By fusing motion information with segmentation outputs, the system was able to estimate and update the trajectories of multiple pedestrians simultaneously, even when they crossed paths or experienced temporary occlusion. The pipeline was evaluated on the Cityscapes and Berkeley BDD100K datasets, where it demonstrated strong performance in low- to medium-clutter urban scenes.
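The two perception cues can be sketched roughly as follows: pedestrian-only masks from a pre-trained torchvision Mask R-CNN (keeping only the COCO person class) and dense Farneback optical flow between consecutive frames. The score threshold is an assumption, and the PHD-filter update that fuses these cues is omitted.

```python
# Illustrative sketch of the two ingredients: pedestrian-only masks from a pre-trained
# Mask R-CNN (COCO "person" class id 1 in torchvision) and dense Farneback optical flow
# between consecutive frames. The threshold is an assumption; the PHD filter is omitted.
import cv2
import torch
import torchvision

model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True).eval()

def pedestrian_masks(frame_bgr, score_thresh=0.7):
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    tensor = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0
    with torch.no_grad():
        out = model([tensor])[0]
    keep = (out["labels"] == 1) & (out["scores"] > score_thresh)   # person class only
    return out["masks"][keep][:, 0] > 0.5                          # boolean pixel masks

def dense_flow(prev_bgr, curr_bgr):
    prev_gray = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2GRAY)
    return cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
```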

Autonomous Maritime Systems
The AUVSI RobotX Challenge is an international robotics competition focused on designing fully autonomous maritime systems capable of navigating complex marine environments without human intervention. For this project, I developed a robust software and perception stack to support high-fidelity autonomy on an unmanned surface vehicle (USV). I architected and implemented a scalable sensor-fusion framework that tightly integrates a spatial dual GPS/IMU module with high-resolution imaging from the FLIR Ladybug 3 and dense point-cloud data from a Velodyne HDL-32E, enabling accurate state estimation and situational awareness in real time.
To meet competition requirements, I implemented classical robotic control algorithms alongside a YOLO-based marker detection system for visual identification of competition objects and task elements, improving the vehicle’s ability to localize and interact with its environment. I also developed advanced calibration pipelines for LiDAR–LiDAR and camera–LiDAR extrinsic alignment using PnP and 3D correspondences, based on established multi-sensor calibration techniques, which significantly enhanced the coherence of multi-modal data streams for perception and planning. Finally, I deployed a SLAM framework in ROS leveraging gmapping, costmap_2d, and AMCL libraries, enabling reliable mapping, localization, and autonomous waypoint navigation within the competition arena and demonstrating an end-to-end autonomy solution capable of perception, mapping, and decision-making under real-world constraints.
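To make the camera–LiDAR step concrete, here is a minimal PnP sketch: given a handful of 3D points picked in the LiDAR frame and their corresponding pixels in the camera image, cv2.solvePnP recovers the LiDAR-to-camera rotation and translation. The point picks and intrinsics below are placeholders; the real pipeline also included LiDAR–LiDAR alignment and refinement.

```python
# Minimal sketch of the camera-LiDAR extrinsic step: PnP on a few 3D-2D correspondences.
# The points and intrinsics are placeholders, not values from the actual calibration.
import cv2
import numpy as np

# Corresponding points (e.g. calibration-target corners seen by both sensors).
lidar_points = np.array([[1.2, 0.4, 0.1], [1.3, -0.5, 0.1],
                         [1.2, 0.4, 0.9], [1.3, -0.5, 0.9]], dtype=np.float64)
pixel_points = np.array([[420.0, 310.0], [880.0, 305.0],
                         [415.0, 120.0], [885.0, 115.0]], dtype=np.float64)

K = np.array([[900.0, 0.0, 640.0],     # placeholder camera intrinsics
              [0.0, 900.0, 480.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)                     # assume the image was already undistorted

ok, rvec, tvec = cv2.solvePnP(lidar_points, pixel_points, K, dist)
R, _ = cv2.Rodrigues(rvec)             # 3x3 rotation, LiDAR frame -> camera frame
T = np.eye(4)
T[:3, :3], T[:3, 3] = R, tvec.ravel()  # 4x4 extrinsic used to project point clouds
```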

Particle Filter based SLAM
This was another one of those amazing projects that the curriculum at Michigan had to offer. We were given a simple differential drive mobile robot with all the drivers already implemented; our job was to implement particle filter based SLAM on it using a Scanse Sweep lidar. To enable this, I implemented the occupancy grid mapping and particle filter localization algorithms on a Raspberry Pi 3.
We also implemented Yamauchi's frontier-based exploration method on a secondary computer, a BeagleBone Black. This enabled the robot to freely explore and map out an entire room without any manual intervention.
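Yamauchi's method boils down to finding frontier cells, free cells that border unknown space, and sending the robot toward the nearest cluster. A toy sketch of the detection step (using the usual ROS occupancy encoding as an assumption) is below; the clustering and goal selection that ran on the BeagleBone are omitted.

```python
# Short sketch of Yamauchi-style frontier detection on an occupancy grid: a frontier
# cell is a free cell adjacent to at least one unknown cell. The -1/0/100 encoding
# follows the common ROS convention and is an assumption here.
import numpy as np

UNKNOWN, FREE, OCCUPIED = -1, 0, 100

def find_frontiers(grid):
    """Return (row, col) indices of frontier cells in a 2D occupancy grid."""
    frontiers = []
    rows, cols = grid.shape
    for r in range(rows):
        for c in range(cols):
            if grid[r, c] != FREE:
                continue
            neighbors = grid[max(0, r - 1):r + 2, max(0, c - 1):c + 2]
            if np.any(neighbors == UNKNOWN):
                frontiers.append((r, c))
    return frontiers

grid = np.full((5, 5), UNKNOWN)
grid[2, :3] = FREE
print(find_frontiers(grid))   # free cells bordering unknown space
```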

Multi-Agent Exploration using TurtleBots
This project focused on the development and large-scale evaluation of autonomous path planning and multi-robot exploration algorithms using Turtlebot platforms within the ROS ecosystem. I designed and implemented a complete navigation stack, including global and local planning components, enabling reliable autonomous exploration and waypoint navigation across structured and unknown environments.
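As a small illustration of the global-planning side, here is a compact grid A* sketch; the 4-connected grid, Manhattan heuristic, and cell encoding are assumptions for illustration rather than the project's actual planner.

```python
# Compact grid A* sketch in the spirit of the global planner; heuristic, connectivity,
# and grid encoding (0 = free, 1 = occupied) are assumptions, not the project's code.
import heapq

def astar(grid, start, goal):
    """4-connected A* over a 2D list-of-lists grid; returns a list of cells or None."""
    rows, cols = len(grid), len(grid[0])
    open_set = [(0, start)]
    came_from, g = {}, {start: 0}
    while open_set:
        _, current = heapq.heappop(open_set)
        if current == goal:
            path = [current]
            while current in came_from:
                current = came_from[current]
                path.append(current)
            return path[::-1]
        r, c = current
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                tentative = g[current] + 1
                if tentative < g.get((nr, nc), float("inf")):
                    g[(nr, nc)] = tentative
                    came_from[(nr, nc)] = current
                    h = abs(nr - goal[0]) + abs(nc - goal[1])   # Manhattan heuristic
                    heapq.heappush(open_set, (tentative + h, (nr, nc)))
    return None

print(astar([[0, 0, 0], [1, 1, 0], [0, 0, 0]], (0, 0), (2, 0)))
```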
To support algorithmic evaluation at scale, I also developed a Python-based multi-robot simulator for autonomous exploration and path planning, enabling rapid prototyping and benchmarking of multi-agent coordination strategies. The project, sponsored by the Defence Research and Development Organisation (DRDO), involved high-fidelity simulation of multi-robot exploration in Gazebo, bridging theoretical planning methods with realistic robotic constraints.
I conducted a comparative analysis of classical and multi-agent exploration algorithms across more than 50 grid maps (50×50 resolution) and validated performance experimentally using TurtleBots across 10 physical and simulated environments. The work provided quantitative insights into exploration efficiency, scalability, coordination overhead, and convergence behavior, contributing to a reproducible evaluation framework for decentralized and cooperative robotic navigation strategies.

JavaScript based Kinematic Simulator
This project was part of one of the courses I'd taken, Autonomous Robotics. I developed a web-based simulator capable of interfacing with ROS and actually controlling a Fetch robot. The project involved implementing a URDF parser to build robot models in 3D, forward/inverse kinematics, object following, and trajectory planning algorithms based on Rapidly-Exploring Random Trees (RRT). You should definitely check out the Demo to play around with the 3D model and see it in action!
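The RRT piece is easy to sketch; below is a bite-sized version in Python (the simulator itself is JavaScript) showing the core loop of sampling, finding the nearest tree node, and extending by a fixed step. The bounds, step size, and obstacle check are placeholder assumptions.

```python
# Bite-sized RRT sketch showing the planner's core loop: sample, find the nearest node,
# extend by a fixed step. The obstacle check, bounds, and step size are assumptions.
import math
import random

def rrt(start, goal, is_free, bounds=(0, 10), step=0.5, iters=2000, goal_tol=0.5):
    nodes = [start]
    parents = {0: None}
    for _ in range(iters):
        sample = (random.uniform(*bounds), random.uniform(*bounds))
        # nearest existing node to the sample
        nearest_i = min(range(len(nodes)), key=lambda i: math.dist(nodes[i], sample))
        nx, ny = nodes[nearest_i]
        theta = math.atan2(sample[1] - ny, sample[0] - nx)
        new = (nx + step * math.cos(theta), ny + step * math.sin(theta))
        if not is_free(new):
            continue
        nodes.append(new)
        parents[len(nodes) - 1] = nearest_i
        if math.dist(new, goal) < goal_tol:          # reached the goal region: backtrack
            path, i = [], len(nodes) - 1
            while i is not None:
                path.append(nodes[i])
                i = parents[i]
            return path[::-1]
    return None

path = rrt((1, 1), (9, 9), is_free=lambda p: not (4 < p[0] < 6 and p[1] < 7))
```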

Vehicle Dynamics for Sports Cars
One of my favorite projects at BITS Pilani: I was part of the Formula SAE team, where I led the design and development of a double wishbone push-rod suspension, made multiple weight optimizations to the bell-crank, and tuned the suspension dynamics for maximal cornering control at high speeds, capable of bearing up to 2 g of lateral force. I was fortunate enough to represent our team at FSAE Italy '14, where we ranked 4th out of 46 international teams in design. Unfortunately, our vehicle was damaged during transportation, so we couldn't participate in any of the dynamic events. Since I was also part of the marketing and sponsorship team, I created multiple marketing materials and brought in multiple investors for our initiative. To get an idea of how passionate we were about this project, just look at the video!

Single, Double and Cart Pole Pendulums
This project was a primer to the kinematic simulator, also part of the Autonomous Robotics course. I developed multiple web-based simulators for single, double, and cart-pole pendulums to test out control algorithms. I implemented PID control on top of several integration schemes: Euler, Verlet, Velocity Verlet, and 4th-order Runge-Kutta. This project was my primary exposure to control theory and numerical methods, setting me up to work on significantly more complex projects across autonomous robotics and manipulators. Definitely check out the linked demos to see the control algorithms in action! These one-pager websites are built primarily with JavaScript and HTML5 Canvas.
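As a taste of how those demos work under the hood, here is a small Python sketch (the originals are JavaScript) of a PID loop regulating a single pendulum whose dynamics are integrated with 4th-order Runge-Kutta. The physical constants and gains are illustrative.

```python
# Small sketch of the simulator pattern: integrate pendulum dynamics with RK4 while a
# PID loop regulates the angle. Constants and gains are illustrative assumptions.
import numpy as np

g, L, m, dt = 9.81, 1.0, 1.0, 0.01
kp, ki, kd = 40.0, 1.0, 8.0
integral = 0.0

def dynamics(state, torque):
    theta, omega = state
    return np.array([omega, (-g / L) * np.sin(theta) + torque / (m * L * L)])

def rk4_step(state, torque):
    k1 = dynamics(state, torque)
    k2 = dynamics(state + 0.5 * dt * k1, torque)
    k3 = dynamics(state + 0.5 * dt * k2, torque)
    k4 = dynamics(state + dt * k3, torque)
    return state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

state = np.array([0.3, 0.0])            # start 0.3 rad away from the target angle of 0
prev_error = -state[0]
for _ in range(1000):
    error = 0.0 - state[0]              # regulate the angle to zero
    integral += error * dt
    torque = kp * error + ki * integral + kd * (error - prev_error) / dt
    prev_error = error
    state = rk4_step(state, torque)
print(state)                            # angle should settle near zero
```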

App Streaming Sensor Data via WebSockets
This was one of my side projects, where I repurposed my old Android phone as a mobile sensor platform, streaming its built-in sensors (GPS, IMU, camera, and more) over a websocket to a web server. It was an attempt to reuse old hardware and start experimenting with robotics, SLAM algorithms, and real-time sensor data without needing the expensive equipment I was generally used to.
Alongside the app, I created a ROS package that makes it easy to subscribe to the sensor topics and integrate the phone's data into the SLAM algorithms I was playing around with. I intentionally made it modular so that more sensors and custom processing can be added in the future.
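A rough sketch of the receiving end is below: a small bridge that reads JSON IMU packets from the phone over a websocket (using the websocket-client package) and republishes them as sensor_msgs/Imu. The URL, JSON field names, and topic are assumptions for illustration rather than the app's actual schema.

```python
# Rough sketch of a websocket-to-ROS bridge for the phone's IMU stream. The URL, JSON
# schema, and topic name are assumptions about the app's format, not its real interface.
import json
import rospy
import websocket                     # pip install websocket-client
from sensor_msgs.msg import Imu

rospy.init_node("phone_imu_bridge")
pub = rospy.Publisher("/phone/imu", Imu, queue_size=10)

def on_message(ws, message):
    data = json.loads(message)       # e.g. {"ax": ..., "ay": ..., "az": ..., "gx": ...}
    msg = Imu()
    msg.header.stamp = rospy.Time.now()
    msg.header.frame_id = "phone"
    msg.linear_acceleration.x = data["ax"]
    msg.linear_acceleration.y = data["ay"]
    msg.linear_acceleration.z = data["az"]
    msg.angular_velocity.x = data["gx"]
    msg.angular_velocity.y = data["gy"]
    msg.angular_velocity.z = data["gz"]
    pub.publish(msg)

ws = websocket.WebSocketApp("ws://192.168.1.42:8080/imu", on_message=on_message)
ws.run_forever()
```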

Microsoft Kinect + Robot Arm
Another project from my robotics curriculum: we were tasked with designing and programming a robotic arm capable of detecting colored cubes, picking them up, and stacking them in a specific order. As part of a three-person team, I designed the mechanical structure for the RRR:R Dynamixel arm in SolidWorks, 3D printed it, and assembled it with the motors and electronics. The arm was controlled by a Raspberry Pi running a Python script that interfaced with the Dynamixel motors and a Microsoft Kinect to detect colored cubes via depth maps and plan trajectories using splines. I implemented the forward/inverse kinematics for the arm, while my teammates worked on block detection and trajectory planning.
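The forward-kinematics part can be sketched with a standard Denavit-Hartenberg chain, as below; the DH table here uses placeholder link lengths rather than the real arm's dimensions, and the inverse kinematics and spline trajectories are omitted.

```python
# Small sketch of forward kinematics using standard DH parameters for a 4-joint serial
# arm. The link lengths are placeholders, not the actual arm's dimensions.
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Standard Denavit-Hartenberg transform for one joint."""
    ct, st, ca, sa = np.cos(theta), np.sin(theta), np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

# Placeholder DH table: (d, a, alpha) per joint for an RRR:R-style chain.
DH_TABLE = [(0.06, 0.0, np.pi / 2),
            (0.0, 0.10, 0.0),
            (0.0, 0.10, 0.0),
            (0.0, 0.05, 0.0)]

def forward_kinematics(joint_angles):
    T = np.eye(4)
    for theta, (d, a, alpha) in zip(joint_angles, DH_TABLE):
        T = T @ dh_transform(theta, d, a, alpha)
    return T[:3, 3]                     # end-effector position in the base frame

print(forward_kinematics([0.0, 0.3, -0.6, 0.3]))
```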

Arduino and XBee Based Robot Arm
Another one of my undergrad projects, where I designed and developed a gesture-controlled robotic arm under the guidance of Dr. R.K. Mittal. I used two MPU 6050s fixed on a glove to estimate the motion of the user's hand. These were coupled with XBees and an Arduino UNO for wireless communication with the main laptop, enabling the robotic arm to mimic the user's hand gestures. The arm was large enough to span a 2 ft hemispherical workspace and could handle a 500 g payload.

Arduino-Powered Manipulator
This was my very first introduction to robotics, so it is really close to my heart. During this summer internship, I worked with Dr. D.K. Pratihar from IIT Kharagpur, designing and developing a portable whiteboard cleaner (an RR:PR manipulator) for white/black boards up to 4'×6'. I worked on the chip design and fabrication, as well as programming the servo motors to automatically clean the entire board. I also presented a whitepaper on this work at IEEE UPCON'15.
I also run a blog where I share interesting things I learn at work, offer tips for beginners, and write about tech / trading topics that others might find helpful. If you love what you see, definitely check out my entire archive on Medium!
Hey! 👋
I’m Sahib Dhanjal, a robotics engineer obsessed with building machines that can think, move, and navigate the world on their own. I spend my days (and many late nights) tinkering with hardware, experimenting with algorithms, and pushing robots to better understand and interact with their surroundings, and then writing about it.
Outside work, you’ll find me playing soccer, ultimate frisbee, or volleyball — or out running, hiking, and biking.
Always building. Always learning. Always moving.