How do I do SLAM in the limo robot simulation environment?
Hello. I'm new to ROS, Gazebo, and the other robot-development tools and packages. I'm working on the following project: the goal is to write code that lets the agileX limo robot detect as many barcodes attached to the surrounding walls as possible. The actual environment will be a small, enclosed space resembling a maze. I'm approaching it with a wall-follower algorithm, but I only have one camera, and it only gives me a front view. I'll probably need a second camera or sensor mounted on the side of the robot. I have some ideas, but I'm not sure how to put them into action.
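For what it's worth, a common way around the "no side camera" problem is to use the 2D lidar scan for wall following and keep the camera for barcode detection only. Below is a minimal sketch of that steering logic, assuming a standard `sensor_msgs/LaserScan` layout; the topic, sector width, setpoint, and gain are all assumptions you'd tune for your robot. A rospy node would call `wall_follow_cmd` from its scan callback and publish the result as the angular velocity of a `geometry_msgs/Twist` on `/cmd_vel`.

```python
import math

def side_distance(ranges, angle_min, angle_increment, target_angle,
                  half_width=math.radians(10)):
    """Minimum finite range inside a sector centred on target_angle (radians)."""
    hits = []
    for i, r in enumerate(ranges):
        a = angle_min + i * angle_increment
        if abs(a - target_angle) <= half_width and math.isfinite(r):
            hits.append(r)
    return min(hits) if hits else float("inf")

def wall_follow_cmd(ranges, angle_min, angle_increment,
                    desired=0.5, kp=1.5):
    """P-controller: steer to keep the right-hand wall at `desired` metres.

    Returns an angular-z command clamped to [-1, 1]; the sign convention
    (negative z = clockwise, toward the right wall) is an assumption —
    check it against your robot's base controller.
    """
    d = side_distance(ranges, angle_min, angle_increment, -math.pi / 2)
    err = d - desired           # positive -> too far from the wall
    angular_z = -kp * err       # turn toward the wall when too far away
    return max(min(angular_z, 1.0), -1.0)
```

In a node, the scan callback would do roughly `twist.angular.z = wall_follow_cmd(msg.ranges, msg.angle_min, msg.angle_increment)` with a constant forward `twist.linear.x`, then publish the twist.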
To begin, I'd like to map the Gazebo simulation environment the limo robot runs in, using SLAM tools such as gmapping or frontier_exploration. The SLAM package for the limo robot can be found in the repo linked below:
I also have a repository containing the Gazebo environment package for the limo. You can see it here:
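Not an answer to the whole question, but as a starting point: gmapping is usually wired up with a small launch file that remaps the scan topic and sets the TF frame names. Here's a minimal sketch; the topic `/limo/scan` and the frame names are assumptions, so check them against your URDF and the limo repos (e.g. with `rostopic list` and `rosrun tf view_frames`).

```xml
<!-- slam_gmapping.launch: minimal sketch; topic and frame names are
     assumptions - verify them against your simulation first. -->
<launch>
  <node pkg="gmapping" type="slam_gmapping" name="slam_gmapping" output="screen">
    <remap from="scan" to="/limo/scan"/>
    <param name="base_frame" value="base_link"/>
    <param name="odom_frame" value="odom"/>
    <param name="map_frame"  value="map"/>
  </node>
</launch>
```

With that running, drive the robot around (teleop is fine) while watching the map build in RViz, then save the result with `rosrun map_server map_saver -f mymap`.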
I've educated myself about SLAM; I know its inner workings and theory. However, most tutorials cover the turtlebot3, which is not my setup. I've spent days trying to piece things together from online documentation and tutorials, but I never had the "aha!" moment when everything clicked and I could start writing code. Could you kindly guide/assist me?
Asked by Turuu on 2023-05-13 09:48:49 UTC
Comments
I'm a bit lost with your question. Is there a reason not to use SLAM Toolbox (http://wiki.ros.org/slam_toolbox)?
Asked by Bernat Gaston on 2023-05-15 06:05:21 UTC