ROS package for Detic. The node runs on both CPU and GPU; the GPU is far more performant, but it also works fine on a CPU (which takes a few seconds to process a single image).
Example of a custom vocabulary. Left: default (lvis); right: custom (`'bottle,shoe'`).
Example of three-dimensional pose recognition for cups, bottles, and bottle caps.
Build from source:
```bash
cd <your catkin workspace>/src
git clone git@github.com:HiroIshida/detic_ros.git
rosdep update && rosdep install -iry .
cd ../
catkin build
```
Prerequisite: you need to install nvidia-container-toolkit beforehand; see https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html
Build the docker image:
```bash
git clone https://github.com/HiroIshida/detic_ros.git
cd detic_ros
docker build -t detic_ros .
```
Example of running the node on pr1040 (replace `pr1040` with your robot's hostname or `localhost`):
```bash
roslaunch detic_ros sample.launch \
    out_debug_img:=true \
    out_debug_segimg:=false \
    compressed:=false \
    device:=auto \
    input_image:=/kinect_head/rgb/image_color

# When you use docker:
python3 run_container.py -host pr1040 -mount ./launch -name sample.launch \
    out_debug_img:=true \
    out_debug_segimg:=false \
    compressed:=false \
    device:=auto \
    input_image:=/kinect_head/rgb/image_color
```
The minimum necessary arguments of `run_container.py` are `host`, `mount`, and `name`:
- `host`: hostname or IP address
- `mount`: path to a launch file, or to a launch-file directory, that will be mounted inside the container. In this example, the launch directory of this repository is mounted.
- `name`: name of the launch file, searched for within the mounted file or directory
You can also specify launch args as with the roslaunch command (e.g. `out_debug_img:=true`). These launch args must come after the three arguments above.
Another example runs three-dimensional object pose detection, using a point cloud filtered by the segmentation result:
```bash
roslaunch detic_ros sample_detection.launch \
    debug:=true \
    vocabulary:=custom \
    custom_vocabulary:=bottle,cup

# When you use docker:
python3 run_container.py -host pr1040 -mount ./launch -name sample_detection.launch \
    debug:=true \
    vocabulary:=custom \
    custom_vocabulary:=bottle,cup
```
Alternatively, you can use `rosrun detic_ros run_container.py` if you have catkin-built this package on the host computer. As in this example, by putting the required sub-launch files inside the directory that gets mounted, you can combine many nodes inside the container.
When launching `sample_detection.launch`, you must specify the following parameters:
- `input_image`
- `input_depth`
- `input_camera_info`
- `target_frame_id`
Example command:
```bash
python3 run_container.py -host xxx.xx.xxx.xx -mount ./launch -name sample_detection.launch \
    debug:=true \
    input_image:=/camera/color/image_raw \
    input_depth:=/camera/aligned_depth_to_color/image_raw \
    input_camera_info:=/camera/aligned_depth_to_color/camera_info \
    target_frame_id:=real_base_link
```
- The default configuration of `sample_detection.launch` is set for the PR2.
- On custom vocabulary: if you want to limit the detected instances to a custom vocabulary, set the launch args `vocabulary:='custom' custom_vocabulary:='bottle,shoe'`, or call the `~custom_vocabulary` service. To restore the default, call the `~default_vocabulary` service (see the sketch after this list).
- On model types: Detic is trained with several different model types. In this repository you can try all of the real-time models via the `model_type` parameter.
- On real-time performance: for higher recognition frequencies, turn off all debug output, run on a GPU, decompress topics locally, use smaller models (e.g. `res50`), and avoid having too many classes in the frame (e.g. by setting a custom vocabulary or a higher confidence threshold). `sample_detection.launch` with its default parameters handles all of this, yielding object bounding boxes at around 10 Hz.
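As a hypothetical sketch of switching the vocabulary at runtime: the two service names come from the list above, but the srv type, its request field, and the node name used to resolve the private services are all assumptions, so check this package's `srv/` directory and the actual node name before relying on this.

```python
# Hypothetical sketch of calling the vocabulary services at runtime.
# The srv type `CustomVocabulary`, its `vocabulary` field, and the node
# name `detic_segmentor` are assumptions; check this package's srv/
# definitions and the actual node name before using this.
import rospy
from detic_ros.srv import CustomVocabulary  # assumed srv type
from std_srvs.srv import Empty              # assumed type of ~default_vocabulary

rospy.init_node('vocabulary_switcher')

set_custom = rospy.ServiceProxy('/detic_segmentor/custom_vocabulary', CustomVocabulary)
set_custom(vocabulary=['bottle', 'shoe'])   # limit detection to these classes

reset = rospy.ServiceProxy('/detic_segmentor/default_vocabulary', Empty)
reset()                                     # back to the default (lvis) vocabulary
```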
An example of using the topics published by the node above is masked_image_publisher.py, which is helpful for understanding how to apply a `SegmentationInfo` message to an image. The test file for this example may also be helpful.
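For a rough picture of what such a consumer looks like, here is a minimal sketch of a node that blacks out everything except one detected class. The `SegmentationInfo` field names used below (`detected_classes`, `segmentation`) are assumptions inferred from the topic descriptions further down; check `msg/SegmentationInfo.msg` and `masked_image_publisher.py` for the authoritative definitions.

```python
#!/usr/bin/env python3
# Minimal sketch: mask an image using a synchronized SegmentationInfo.
# NOTE: the SegmentationInfo field names (detected_classes, segmentation)
# are assumptions; check msg/SegmentationInfo.msg for the real definition.
import message_filters
import rospy
from cv_bridge import CvBridge
from sensor_msgs.msg import Image
from detic_ros.msg import SegmentationInfo


class MaskedImageNode:
    def __init__(self):
        self.bridge = CvBridge()
        self.pub = rospy.Publisher('~masked_image', Image, queue_size=1)
        # Remap these private topics to the detic_ros node's outputs.
        sub_img = message_filters.Subscriber('~input_image', Image)
        sub_info = message_filters.Subscriber('~segmentation_info', SegmentationInfo)
        sync = message_filters.ApproximateTimeSynchronizer(
            [sub_img, sub_info], queue_size=10, slop=0.5)
        sync.registerCallback(self.callback)

    def callback(self, img_msg, info_msg):
        img = self.bridge.imgmsg_to_cv2(img_msg, desired_encoding='bgr8')
        seg = self.bridge.imgmsg_to_cv2(info_msg.segmentation)  # 32SC1 array
        # Image value i corresponds to detected_classes[i - 1]; 0 is background.
        try:
            label = list(info_msg.detected_classes).index('bottle') + 1
        except ValueError:
            return  # no bottle detected in this frame
        masked = img.copy()
        masked[seg != label] = 0  # black out everything but the bottle
        out = self.bridge.cv2_to_imgmsg(masked, encoding='bgr8')
        out.header = img_msg.header
        self.pub.publish(out)


if __name__ == '__main__':
    rospy.init_node('masked_image_example')
    MaskedImageNode()
    rospy.spin()
```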
For the service interface, see the definition of `srv/DeticSeg.srv`.
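A minimal client sketch might look like the following; the service name and the request/response field names (`image`, `seg_info`) are assumptions for illustration, so check `srv/DeticSeg.srv` and the node itself for the actual definitions.

```python
#!/usr/bin/env python3
# Minimal sketch of a DeticSeg service client.
# NOTE: the service name and the field names `image` / `seg_info`
# are assumptions; check srv/DeticSeg.srv for the actual layout.
import rospy
from sensor_msgs.msg import Image
from detic_ros.srv import DeticSeg

rospy.init_node('detic_seg_client')

service_name = '/detic_segmentor/segmentation_service'  # hypothetical name
rospy.wait_for_service(service_name)
segment = rospy.ServiceProxy(service_name, DeticSeg)

# Grab one frame and run segmentation on it.
img = rospy.wait_for_message('/kinect_head/rgb/image_color', Image)
resp = segment(image=img)
print(resp.seg_info.detected_classes)
```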
Subscribed topics:
- `~input_image` (`sensor_msgs/Image`)
  - Input image.

Published topics:
- `~debug_image` (`sensor_msgs/Image`)
  - Debug image.
- `~debug_segmentation_image` (`sensor_msgs/Image` with `32SC1` encoding)
  - Say the number of detected classes is 14; then `~segmentation_image` viewed as a grayscale image is almost completely dark and not useful for debugging. This topic therefore scales the values to [0 ~ 255] so that the grayscale image is human-friendly.
- `~segmentation_info` (`detic_ros/SegmentationInfo`)
  - Published when `use_jsk_msgs` is false. Includes the class name list, the confidence score list, and the segmentation image with `32SC1` encoding. The image is filled with 0 and positive integers indicating the segmented object number; these indices are one plus the indices of the class name and confidence score lists. For example, an image value of 2 corresponds to the second (index=1) item of the class name and score lists. Note that the image value 0 is always reserved for the 'background' instance (a small demonstration follows this list).
- `~segmentation` (`sensor_msgs/Image`)
  - Published when `use_jsk_msgs` is true. Contains the segmentation image with `32SC1` encoding.
- `~detected_classes` (`jsk_recognition_msgs/LabelArray`)
  - Published when `use_jsk_msgs` is true. Contains the names and ids of the detected objects, in the same order as `~score`.
- `~score` (`jsk_recognition_msgs/VectorArray`)
  - Published when `use_jsk_msgs` is true. Contains the confidence scores of the detected objects, in the same order as `~detected_classes`.
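To make the index convention concrete, here is a small self-contained sketch (plain NumPy, no ROS required); the detection result and segmentation values are made up for illustration.

```python
# Demonstrates the SegmentationInfo index convention:
# pixel value 0 = background, pixel value i (i >= 1) = detected_classes[i - 1].
import numpy as np

detected_classes = ['cup', 'bottle']   # made-up example detection result
scores = [0.91, 0.74]

# A tiny fake 32SC1 segmentation image (values 0, 1, 2).
seg = np.array([[0, 0, 1],
                [0, 2, 2],
                [1, 1, 0]], dtype=np.int32)

for value in np.unique(seg):
    if value == 0:
        continue  # 0 is reserved for the background instance
    name = detected_classes[value - 1]
    score = scores[value - 1]
    mask = (seg == value)
    print(f'{name} (score {score:.2f}): {mask.sum()} pixels')
```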
As for the rosparams, see node_config.py.
To batch-process a rosbag file:
```bash
rosrun detic_ros batch_processor.py path/to/bagfile
```
See the source code for the options.