Robotics StackExchange | Archived questions

How to use localization and mapping without motor encoders

Hi everyone

I'm currently developing a robot for a school project. The vehicle can drive, and now we are looking to implement some mapping and localization (maybe even SLAM). Unfortunately, we didn't check the requirements of most localization packages (mainly robot_localization) beforehand, and they require some form of odometry. Since we are on a tight budget, we only have a 6-axis IMU and a lidar at our disposal. Additionally, we have a small camera for object detection purposes. For mapping we can use the hector_mapping package, since it only needs laser scans, but localization is an issue right now.
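
For reference, this is roughly the hector_mapping launch we have in mind; it builds the map from laser scans alone (topic and frame names are from our setup, so treat them as placeholders):

    <launch>
      <!-- hector_mapping needs no odometry: point odom_frame at base_link
           and let it publish the map->base_link transform itself. -->
      <node pkg="hector_mapping" type="hector_mapping" name="hector_mapping">
        <param name="scan_topic" value="scan"/>
        <param name="map_frame"  value="map"/>
        <param name="base_frame" value="base_link"/>
        <param name="odom_frame" value="base_link"/>
        <param name="pub_map_odom_transform" value="true"/>
      </node>
    </launch>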

Does anyone know of a package that can localize our robot with IMU, laser scan, and camera data?

Thanks a lot for any answers, Me & the rest of the team

Asked by hakeahnig on 2023-05-26 01:58:26 UTC

Comments

Answers

This method is documented pretty well. If you have a camera, you can check out RTAB-Map (or ORB-SLAM). RTAB-Map provides nodelets and image pipelines for making use of camera data, and its odometry nodes provide visual odometry in place of wheel odometry. Use an EKF to fuse this with the IMU. If you have just an RGB camera with no depth image, I am not sure this will work.
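
As a minimal sketch of the visual odometry half (assuming ROS 1 and an RGB-D camera; the /camera/... topic names are placeholders for whatever your camera driver publishes):

    <launch>
      <!-- rtabmap_ros visual odometry: publishes nav_msgs/Odometry on "odom".
           publish_tf is off so the EKF (sketched under the next paragraph)
           can own the odom->base_link transform instead. -->
      <node pkg="rtabmap_ros" type="rgbd_odometry" name="rgbd_odometry">
        <remap from="rgb/image"       to="/camera/rgb/image_rect_color"/>
        <remap from="depth/image"     to="/camera/depth_registered/image_raw"/>
        <remap from="rgb/camera_info" to="/camera/rgb/camera_info"/>
        <param name="frame_id"   value="base_link"/>
        <param name="publish_tf" value="false"/>
      </node>
    </launch>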

There are plenty of lidar SLAM algorithms out there. Cartographer is great and well documented. The general idea you'll want to follow, with what you have, is to use one of the 2D (or visual) SLAM algorithms to build the map and publish the map tf, and use a robot_localization EKF to fuse lidar and/or visual odometry with the IMU data and publish the odom tf (see the sketch below). Together with move_base, this gives you a pretty solid autonomous system to work with.
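
Here's a minimal robot_localization sketch under those assumptions (the /odom and /imu/data topic names are guesses for your visual odometry and IMU driver outputs):

    <launch>
      <!-- EKF fuses visual odometry velocities with IMU yaw rate and forward
           acceleration, and publishes odom->base_link. The SLAM node
           (hector_mapping, Cartographer, ...) publishes map->odom on top. -->
      <node pkg="robot_localization" type="ekf_localization_node" name="ekf_odom">
        <param name="frequency"       value="30"/>
        <param name="two_d_mode"      value="true"/>
        <param name="odom_frame"      value="odom"/>
        <param name="base_link_frame" value="base_link"/>
        <param name="world_frame"     value="odom"/>
        <param name="odom0" value="/odom"/>
        <rosparam param="odom0_config">
          [false, false, false,   # x, y, z
           false, false, false,   # roll, pitch, yaw
           true,  true,  false,   # vx, vy, vz
           false, false, true,    # vroll, vpitch, vyaw
           false, false, false]   # ax, ay, az
        </rosparam>
        <param name="imu0" value="/imu/data"/>
        <rosparam param="imu0_config">
          [false, false, false,
           false, false, false,   # no absolute heading from a 6-axis IMU
           false, false, false,
           false, false, true,    # gyro yaw rate
           true,  false, false]   # x acceleration
        </rosparam>
      </node>
    </launch>

With world_frame set to odom, the EKF only ever publishes the continuous odom->base_link transform, which keeps the SLAM map correction separate from the fused odometry.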

Answered by chased11 on 2023-05-26 14:00:20 UTC

Comments