University of Michigan code used for the first HSR Challenge 180815.
Contact: Brent Griffin (griffb at umich dot edu)
To run, first launch the navigation stack: roslaunch hsr_war_machine amcl.launch
TensorFlow must be sourced to run the challenge script: ./heimdall.py
Video Demonstration: https://youtu.be/4s14FmhO03o
The necessary segmentation models (e.g., "r.ckpt") are trained using train_osvos_models.py
at https://github.com/griffbr/VOSVS/tree/master/OSVOS_train. The code is set up for Toyota's Human Support Robot (HSR) using ROS messages, but it should be reconfigurable for other robot platforms.
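To illustrate the segmentation-based visual servo idea at a high level, here is a minimal sketch: compute the centroid of a binary segmentation mask and issue a proportional velocity command that drives it toward the image center. The function names, gains, and axis conventions below are illustrative assumptions, not the repository's actual implementation.

```python
import numpy as np

def mask_centroid(mask):
    """Return the (row, col) centroid of a binary mask, or None if the mask is empty."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return ys.mean(), xs.mean()

def servo_command(mask, image_shape, gain=0.002):
    """Proportional command driving the mask centroid toward the image center.

    Returns (v_row, v_col), a simplified stand-in for a robot base/gaze
    command; the gain and axis mapping are illustrative, not the HSR's.
    """
    centroid = mask_centroid(mask)
    if centroid is None:
        return 0.0, 0.0  # target not visible: command zero velocity
    cy, cx = centroid
    err_row = image_shape[0] / 2.0 - cy  # vertical pixel error
    err_col = image_shape[1] / 2.0 - cx  # horizontal pixel error
    return gain * err_row, gain * err_col

# Toy example: 100x100 mask with the object in the upper-left quadrant.
mask = np.zeros((100, 100), dtype=bool)
mask[20:30, 20:30] = True
v_row, v_col = servo_command(mask, mask.shape)
```

In the real system, the mask would come from the trained OSVOS model and the command would be published as a ROS message to the HSR.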
We have a paper detailing the vision and control methods used in the challenge:
Video Object Segmentation-based Visual Servo Control and Object Depth Estimation on a Mobile Robot
Brent Griffin, Victoria Florence, and Jason J. Corso
IEEE Winter Conference on Applications of Computer Vision (WACV), 2020
Please cite our paper if you find it useful for your research.
@inproceedings{GrFlCoWACV20,
  author = {Griffin, Brent and Florence, Victoria and Corso, Jason J.},
  booktitle = {IEEE Winter Conference on Applications of Computer Vision (WACV)},
  title = {Video Object Segmentation-based Visual Servo Control and Object Depth Estimation on a Mobile Robot},
  year = {2020}
}
This code is available for non-commercial research purposes only.