Example sequence of commands to run with FASTER:

```bash
roslaunch loomo_convert convert.launch
roslaunch global_mapper_ros global_mapper_node.launch quad:=LO01 depth_image_topic:=realsense_loomo/depth_new_encoding pose_topic:=state goal_topic:=move_base_simple/goal odom_topic:=odom
roslaunch faster faster_interface.launch quad:=LO01 is_ground_robot:=true
roslaunch faster faster.launch quad:=LO01
# These first 4 commands can also be run directly by using the faster.sh script in this repo
rosbag play --clock loomo_simple.bag -s 33 # the first 33 secs of the bag aren't useful
```
- In `acl-mapping/global-mapper/global_mapper_ros/launch/global_mapper_node.launch`
  - If running the mapper with a bag file of Loomo data, add the line: `<param name="/use_sim_time" value="true" />`
- In `acl-mapping/global-mapper/global_mapper_ros/src/global_mapper_ros/global_mapper_ros.cc`
  - If depth images from the Loomo are of `mono16` encoding, change line 598 to: `if (image_msg->encoding == "16UC1" || image_msg->encoding == "mono16")`
  - If the camera sets out-of-range values to 0 instead of NaN or Inf, in `DepthImageCallback` (around line 646) add: `else if (depth == 0) { continue; } // don't add points to pointcloud if value/depth is 0`
- In `faster/faster/launch/faster.launch`
  - `~pcloud` should be remapped to `realsense_loomo/points`
  - `~odom` should be remapped to `odom`
- In `faster/faster/params/faster.yaml` (beyond what is described in the FASTER readme)
  - I'm still unsure how to model the Loomo with just a `drone_radius`
  - `z_ground` has to be negative, or else the planner won't work
  - If `z_max` is too high, then the planner will generate paths that go over obstacles, which is possible for drones, but not for a ground robot like the Loomo.
- Copy `cvx_LO01.rviz` into `faster/faster/rviz_cfgs`. This is the Loomo rviz config that can be used for visualizing FASTER.
- Mapper expects image topics of format `<camera_name>/<depth_image_topic_name>` and `<camera_name>/camera_info`. `depth_image_topic` expects a `sensor_msgs/Image` message, `pose_topic` expects `snapstack_msgs/State`, and `odom_topic` expects `nav_msgs/Odometry`, which was not directly provided by the Loomo topics.
- Planner expects a camera frame with a name of format `<quad name>/camera`, i.e. `LO01/camera` (a tf sketch is shown below).
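One way to satisfy the `<quad name>/camera` requirement is to publish a static identity transform that aliases the camera's existing frame under the expected name. The following is only a minimal sketch of that idea, not code from this repo; the parent frame name `realsense_loomo_optical_frame` is an assumption and should be replaced with whatever frame the Loomo's depth data is actually stamped with.

```python
#!/usr/bin/env python
# Sketch: alias the Loomo camera frame under the name FASTER expects.
# The parent frame name below is an assumption, not taken from this repo.
import rospy
import tf2_ros
from geometry_msgs.msg import TransformStamped

if __name__ == "__main__":
    rospy.init_node("camera_frame_alias")

    t = TransformStamped()
    t.header.stamp = rospy.Time.now()
    t.header.frame_id = "realsense_loomo_optical_frame"  # assumed existing camera frame
    t.child_frame_id = "LO01/camera"                      # frame name the planner looks for
    t.transform.rotation.w = 1.0  # identity rotation, zero translation

    # Static transforms are latched, so sending once is enough.
    broadcaster = tf2_ros.StaticTransformBroadcaster()
    broadcaster.sendTransform(t)
    rospy.spin()
```

The same effect can be achieved with a `static_transform_publisher` node in a launch file; the Python version is shown only to keep the example self-contained.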
- `2020-03-10-18-47-09.bag` contains data from driving the Loomo around with autonomy code running.
  - Note the camera data being used is from the ZR300 Realsense built into the Loomo.
- `loomo_simple.bag` was created by filtering out unused topics and replacing the tf between the `map` and `LO01_odom` frames generated by AMCL with a static tf so the frames are on top of each other. This way, the Loomo's state estimation is entirely dependent on its odometry (a filtering sketch is shown below).
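For reference, a bag like this could be reproduced with the rosbag Python API along the lines of the sketch below. This is not the exact procedure used to create `loomo_simple.bag`; the kept-topic list and tf frame names are illustrative assumptions based on the notes above.

```python
#!/usr/bin/env python
# Sketch: keep only the topics of interest and drop the AMCL-generated
# map -> LO01_odom transform (replaced at runtime by a static tf instead).
# The KEEP set below is illustrative, not the exact list used for loomo_simple.bag.
import rosbag

KEEP = {"/tf", "/tf_static", "/odom", "/realsense_loomo/depth", "/realsense_loomo/camera_info"}

with rosbag.Bag("loomo_simple.bag", "w") as out:
    for topic, msg, t in rosbag.Bag("2020-03-10-18-47-09.bag").read_messages():
        if topic not in KEEP:
            continue
        if topic in ("/tf", "/tf_static"):
            # Strip the AMCL map -> LO01_odom transform from tf messages.
            msg.transforms = [tr for tr in msg.transforms
                              if not (tr.header.frame_id == "map" and
                                      tr.child_frame_id == "LO01_odom")]
            if not msg.transforms:
                continue
        out.write(topic, msg, t)
```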
- Within `convert.launch`, a node is already set up to visualize the depth images as pointclouds, but here are some notes about it.
  - `depth_image_proc` requires the depth image encodings to be either `16UC1` or `32FC1`, but the Realsense onboard the Loomo has `mono16` depth images. Within the `convert.py` script, the encoding field of the image messages is changed to `16UC1` and they are republished to a new topic, so this is not a universal solution to the issue, but it works in this case (a sketch is shown after this list).
  - The point cloud has a different coordinate system than the Loomo, so it will look like the point cloud is being projected onto the ceiling. This can be fixed with a static tf between the camera and depth frame (already done in `convert.launch`).
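For completeness, here is a minimal sketch of the kind of relabelling `convert.py` performs, based on the description above. It is not the repo's script; the input topic name is an assumption, and the output topic is chosen to match the `depth_image_topic` argument used in the mapper launch command at the top of this page.

```python
#!/usr/bin/env python
# Sketch: relabel mono16 depth images as 16UC1 and republish them so that
# depth_image_proc (and the mapper) will accept them. Topic names are assumptions.
import rospy
from sensor_msgs.msg import Image

pub = None

def relabel(msg):
    # mono16 and 16UC1 share the same memory layout, so only the label changes.
    if msg.encoding == "mono16":
        msg.encoding = "16UC1"
    pub.publish(msg)

if __name__ == "__main__":
    rospy.init_node("depth_encoding_relabel")
    pub = rospy.Publisher("realsense_loomo/depth_new_encoding", Image, queue_size=1)
    rospy.Subscriber("realsense_loomo/depth", Image, relabel)
    rospy.spin()
```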