- Turtlebot SLAM, RRT path planning and target detection using SIFT
- Clone the `rrt_exploration` package into your `catkin_ws`
- Install the ROS navigation stack; for kinetic run `sudo apt-get install ros-kinetic-navigation`
- Ensure that you have the ROS package gmapping; for kinetic run `sudo apt-get install ros-kinetic-gmapping`
- Clone `rrt_exploration_tutorials` for simulation
- Clone the ROS package `rrt_exploration` for the physical Turtlebot
- Clone the ROS package `urg_node`
- Clone `robot_explorer` from source
- Install OpenCV via `pip2 install opencv-python==3.3.0.10 opencv-contrib-python==3.3.0.10`
- Install the camera packages: `sudo apt install ros-kinetic-cv-camera ros-kinetic-usb-cam`
- Build the workspace by running `catkin_make` in your `~/catkin_ws`
- Source the workspace by running `source devel/setup.bash` in your `catkin_ws`
For more information visit: RRT wiki, Hokuyo Driver wiki
- Check for USB connectivity by running `ls -l /dev/ttyACM0`
- To publish to the scan topic using live Hokuyo sensor data run `rosrun urg_node urg_node`
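A quick way to verify the driver is publishing is to listen to the scan topic. Below is a minimal sketch (not part of this repo) that assumes `urg_node` publishes `sensor_msgs/LaserScan` on its default `/scan` topic:

```python
#!/usr/bin/env python
# Minimal scan listener sketch (not part of this repo).
# Assumes urg_node publishes sensor_msgs/LaserScan on /scan (its default).
import rospy
from sensor_msgs.msg import LaserScan

def on_scan(scan):
    valid = [r for r in scan.ranges if scan.range_min < r < scan.range_max]
    if valid:
        rospy.loginfo("%d beams, closest obstacle %.2f m", len(scan.ranges), min(valid))

rospy.init_node('scan_echo')
rospy.Subscriber('/scan', LaserScan, on_scan)
rospy.spin()
```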
- Controller config can be evaluated by running `jstest /dev/input/js0`
- Create `my_ps3_teleop.launch` to reflect the controller config
- To test teleop using the controller run `roslaunch turtlebot_teleop my_ps3_teleop.launch`
- Turtlebot hardware setup
- To check for the Turtlebot USB connection run `ls -al /dev | grep -i usb`; you should see the kobuki USB connection
- More about the kobuki robot
Simulates and stages the bot in RViz and Gazebo, and contains a wall-follower node which subscribes to the `rrt_exploration` topic `/robot_1/base_scan`. It also publishes to the topic `/robot_1/mobile_base/commands/velocity`, which drives the bot (see the sketch after the run instructions below).
- Run: `roslaunch robot_explorer wall_follow.launch`
- For more information: Wall Follower
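For illustration, the subscribe/publish skeleton of such a node could look like the sketch below. This is not the repo's actual `wall_follow` implementation: the topic names come from the description above, while the simple "steer toward a fixed wall distance" control law is an assumption.

```python
#!/usr/bin/env python
# Illustrative wall-follower skeleton, NOT the repo's wall_follow node.
# Topic names match the description above; the proportional control law
# and the choice of beam are assumptions for demonstration only.
import rospy
from sensor_msgs.msg import LaserScan
from geometry_msgs.msg import Twist

TARGET_DIST = 0.5  # desired wall distance in metres (assumed)

class WallFollower(object):
    def __init__(self):
        self.cmd_pub = rospy.Publisher('/robot_1/mobile_base/commands/velocity',
                                       Twist, queue_size=1)
        rospy.Subscriber('/robot_1/base_scan', LaserScan, self.on_scan)

    def on_scan(self, scan):
        right = scan.ranges[0]  # assumes the first beam faces the followed wall
        cmd = Twist()
        cmd.linear.x = 0.2
        cmd.angular.z = 0.8 * (right - TARGET_DIST)  # steer to hold TARGET_DIST
        self.cmd_pub.publish(cmd)

if __name__ == '__main__':
    rospy.init_node('wall_follower_sketch')
    WallFollower()
    rospy.spin()
```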
RRT path planning using goals provided by the service provider `fetch_goal.py`. The service provider posts 10 goals, each further from the origin than the last. The script can easily be modified to post targets around the map for exploration and target discovery (a sketch of the goal-posting pattern follows the list below).
- To run: `roslaunch rrt_exploration_tutorials single_simulated_house.launch`
- To run the service provider: `python fetch_goal.py`
- The green line is the robot's current trajectory
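The goal-posting pattern can be sketched as below. This is not the actual `fetch_goal.py`: it assumes goals are published as `geometry_msgs/PoseStamped` messages on `/robot_1/move_base_simple/goal`, whereas the real script may use a ROS service or actionlib interface, and the spacing and timing values are placeholders.

```python
#!/usr/bin/env python
# Sketch of the goal-posting idea in fetch_goal.py, NOT the actual script.
# Assumes goals go out as PoseStamped on /robot_1/move_base_simple/goal;
# the real script may use a ROS service or actionlib instead.
import rospy
from geometry_msgs.msg import PoseStamped

rospy.init_node('goal_poster_sketch')
pub = rospy.Publisher('/robot_1/move_base_simple/goal', PoseStamped, queue_size=1)
rospy.sleep(1.0)  # give the publisher time to connect

for i in range(10):
    goal = PoseStamped()
    goal.header.frame_id = 'map'
    goal.header.stamp = rospy.Time.now()
    goal.pose.position.x = 0.5 * (i + 1)  # each goal further from the origin
    goal.pose.orientation.w = 1.0
    pub.publish(goal)
    rospy.sleep(10.0)  # placeholder wait between goals
```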
- Scale-invariant feature detection which detects objects relative to the first frame received when the script is run
- To run with a camera, run each of the following (each in its own terminal): `roscore`, `rosrun usb_cam usb_cam_node`, `roslaunch rrt_exploration_tutorials single_simulated_house.launch`, `python SIFT_node.py`
- The `matching_script` can also be run directly, taking an image of an object and a target image as input; if the object is found, it outputs a graphical image of the object's location
- To run SIFT with a test image run `python matching_script.py`, which tests on the image `test_pic.jpg`
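The core of such a matcher in the OpenCV 3.3 contrib build installed above (which provides `cv2.xfeatures2d.SIFT_create`) could look like the following. This is a generic SIFT-matching sketch, not the repo's `matching_script.py`, and the filenames and match threshold are placeholders.

```python
# Generic SIFT matching sketch, NOT the repo's matching_script.py.
# Needs the opencv-contrib build installed above for xfeatures2d.
# Filenames and the match threshold are placeholders.
import cv2

obj = cv2.imread('object.jpg', cv2.IMREAD_GRAYSCALE)      # image of the object
scene = cv2.imread('test_pic.jpg', cv2.IMREAD_GRAYSCALE)  # target image

sift = cv2.xfeatures2d.SIFT_create()
kp1, des1 = sift.detectAndCompute(obj, None)
kp2, des2 = sift.detectAndCompute(scene, None)

# Lowe's ratio test on brute-force k-nearest-neighbour matches
bf = cv2.BFMatcher()
good = [m for m, n in bf.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]

if len(good) > 10:  # placeholder threshold for declaring the object found
    out = cv2.drawMatches(obj, kp1, scene, kp2, good, None, flags=2)
    cv2.imwrite('matches.jpg', out)  # graphical output of the match locations
    print('Object found: %d good matches' % len(good))
else:
    print('Object not found')
```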
To run with the Turtlebot you need to connect the PC to the Turtlebot and the Hokuyo laser scanner; refer to Working with hardware for details.
- Run: `roslaunch robot_explorer setup.launch`
- Run: `python fetch_goal.py`
- This launches gmapping, the Turtlebot navigation stack, RRT path planning, and the Hokuyo-driver-related nodes.
- This can drive the robot and perform mapping and localization, but had some trouble with path planning.
- There are ROS subscription issues for the path planner; I suspect some sort of namespace issue is preventing communication, despite publishing to the correct topics.
- A simple SIFT node that publishes camera data to the network is still needed for target detection and localization (a minimal sketch follows this list).
- Use images captured from a RealSense camera.
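A starting point for that missing camera node could be the sketch below. This is not existing repo code: it assumes a V4L2 camera at index 0 and publishes raw frames with `cv_bridge` so a SIFT subscriber elsewhere on the network can do the detection; the topic name and rate are placeholders.

```python
#!/usr/bin/env python
# Sketch of the missing SIFT camera node, NOT existing repo code.
# Assumes a V4L2 camera at index 0; topic name and rate are placeholders.
import cv2
import rospy
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

rospy.init_node('sift_camera_publisher')
pub = rospy.Publisher('camera/image_raw', Image, queue_size=1)
bridge = CvBridge()
cap = cv2.VideoCapture(0)  # assumed camera index
rate = rospy.Rate(10)      # assumed 10 Hz publish rate

while not rospy.is_shutdown():
    ok, frame = cap.read()
    if ok:
        pub.publish(bridge.cv2_to_imgmsg(frame, encoding='bgr8'))
    rate.sleep()
cap.release()
```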


