Reconfigurable System-on-Chip Architectures for Robust Visual SLAM on Humanoid Robots

Visual SLAM uses optical sensors to map and reconstruct a robot's surroundings while simultaneously estimating the robot's pose relative to that map. Dense SLAM variants deliver a richer 3D scene reconstruction, at the cost of high computational requirements typically met by high-end (and power-hungry) CPUs and GPUs.
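The two halves of the SLAM loop described above, pose estimation and map building, can be sketched in miniature. This is a hedged toy illustration, not the method from the papers: it uses a 2D unicycle motion model and a sparse occupancy grid, whereas dense visual SLAM estimates pose from image data and fuses full depth maps into a volumetric model. The function names and grid representation are illustrative assumptions.

```python
import math

def update_pose(pose, v, w, dt):
    """Propagate a 2D pose (x, y, theta) with a unicycle motion model.

    Toy localization step: real visual SLAM estimates the pose from
    image features, not from known velocities.
    """
    x, y, theta = pose
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += w * dt
    return (x, y, theta)

def integrate_scan(grid, pose, ranges, angles):
    """Mark observed obstacle cells in a coarse occupancy grid.

    Toy mapping step: dense SLAM would instead fuse entire depth
    images into a volumetric scene model.
    """
    x, y, theta = pose
    for r, a in zip(ranges, angles):
        cx = int(round(x + r * math.cos(theta + a)))
        cy = int(round(y + r * math.sin(theta + a)))
        grid[(cx, cy)] = grid.get((cx, cy), 0) + 1
    return grid
```

Even in this toy form, the structure shows why dense SLAM is expensive: the mapping step runs once per measurement, and a dense pipeline replaces the handful of range readings here with every pixel of every depth frame.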

Our team has been researching algorithmic approximations that achieve real-time dense SLAM processing on MPSoC FPGAs at a much lower power footprint. Two recent papers, at DATE 2021 and in ACM Transactions on Embedded Computing Systems (TECS), present the MPSoC FPGA analysis and design that achieves > 30 fps for two dense SLAM algorithms at < 5 W power dissipation.
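One family of algorithmic approximations commonly used to fit such pipelines onto FPGAs is reduced-precision fixed-point arithmetic, which maps well to FPGA DSP blocks. The sketch below is a hypothetical illustration of that general idea, not the specific approximations from the DATE 2021 or TECS papers; the function names and the Q-format choice are assumptions.

```python
def to_fixed(value, frac_bits=8):
    """Quantize a float to fixed point with frac_bits fractional bits."""
    return int(round(value * (1 << frac_bits)))

def from_fixed(q, frac_bits=8):
    """Recover an approximate float from a fixed-point value."""
    return q / (1 << frac_bits)

def fixed_mul(a, b, frac_bits=8):
    """Multiply two fixed-point values, rescaling the double-width product."""
    return (a * b) >> frac_bits
```

With 8 fractional bits, `from_fixed(fixed_mul(to_fixed(1.5), to_fixed(2.0)))` recovers 3.0 exactly; less friendly operands incur a small quantization error, which is the accuracy-for-resources trade-off such designs must analyze.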

GitHub repo with FPGA source code:

Publication on ACM TECS: