On-line Segmentation Demo

This repository contains the code used in the experiments on R1 and iCub in the papers Fast Object Segmentation Learning with Kernel-based Methods for Robotics and Learn Fast, Segment Well: Fast Object Segmentation Learning on the iCub Robot.

Installation

To install this repository, we recommend using the Docker image that we make available together with this repository here; it can be built by executing the bash script build.sh.
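A minimal sketch of the installation flow (the clone URL is an assumption based on the repository's GitHub organization; the build requires a working Docker installation):

```shell
# Clone the repository and build the Docker image with the provided script.
git clone https://github.com/hsp-iit/online-segmentation-demo.git
cd online-segmentation-demo
bash build.sh
```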

Running the Demo

We provide the scripts to run the demo on the iCWT-TABLE-TOP-single-object-masks dataset.

Either from the environment where you have installed this repository, or from the Docker image that we provide (which you can run with the run.sh bash script), execute the bash script run_demo.sh. This will:

  • Run yarpserver.
  • Run iCWT_player_RF.py to stream the dataset to yarp ports.
  • Run OOSModule.py to run the on-line segmentation module and stream its outputs to a yarpview.

iCWT_player_RF.py will start streaming the training set of the flower2 object. If you want to change the streamed dataset, you can do so via RPC with the command yarp rpc /iCWTPlayer/cmd:i, and select the dataset with the command <dataset_partition>_<object>, where <dataset_partition> is either train or test, and <object> is one of the objects of the dataset, corresponding to the folder names from here.
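For example, switching the player to the test set of the flower2 object might look like this (an illustrative transcript; the `>>` lines are typed at the RPC prompt):

```shell
# Open an RPC connection to the dataset player...
yarp rpc /iCWTPlayer/cmd:i
# ...then, at the RPC prompt, select a dataset partition and object:
>> test_flower2
```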

OOSModule.py will start by streaming to the yarpview the output of the model without any known object. To train a new object via RPC, run the command yarp rpc /detection/command:i, and then use the command train <object_label>, where <object_label> is the label that you want to assign to the object that the module receives in input. If you want to forget that object, use the command forget <object_label>.
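A typical train/forget session might look like this (an illustrative transcript; the label flower2 is just an example, and the `>>` lines are typed at the RPC prompt):

```shell
# Open an RPC connection to the segmentation module...
yarp rpc /detection/command:i
# ...then train a new object under a label of your choice,
>> train flower2
# ...and forget it when it is no longer needed:
>> forget flower2
```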

We provide two different versions of the demo: one that trains from the dataset, as described above, and one that runs on the robot.

To run the demo on the robot, run OOSModule.py without the --train_from_dataset true flag, and connect the ports where you stream input images, masks and bounding boxes to the /detection/image:i, /detection/mask:i and /detection/gt_boxes ports.
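Assuming your camera and annotation pipeline publish on output ports of their own (the /mySource/... names below are placeholders for your setup, not ports created by this repository), the connections can be made with yarp connect:

```shell
# Connect your image, mask and bounding-box streams to the module's input ports.
# The source port names (/mySource/...) are placeholders for your own setup.
yarp connect /mySource/image:o /detection/image:i
yarp connect /mySource/mask:o /detection/mask:i
yarp connect /mySource/boxes:o /detection/gt_boxes
```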

Depending on the use case where you need to deploy this demo, you may want to set some parameters differently from those in the configuration files provided in this repository, or choose the version of the demo that best suits your use case. We set the parameters to reduce the training time as much as possible; if you need a more accurate model, we suggest increasing the number of Minibootstrap iterations (e.g. here). We specifically targeted the demo of the paper Learn Fast, Segment Well: Fast Object Segmentation Learning on the iCub Robot at applications where the data arrives in streams, and enforced this constraint in the code by filtering out noisy ground-truth masks, checking the overlap of each mask with the one in the subsequent frame, since the OnlineRPN may degrade performance with noisy ground-truth masks. If this happens, e.g. in the exemplar demo from the dataset, we suggest either enforcing the check on the ground-truth images or forgetting and retraining the object.
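The overlap check described above can be sketched as follows. This is a minimal illustration, not the repository's implementation: it assumes binary masks as NumPy boolean arrays, and the threshold value is a hypothetical choice.

```python
import numpy as np


def mask_iou(prev_mask: np.ndarray, curr_mask: np.ndarray) -> float:
    """Intersection-over-union between two binary masks."""
    inter = np.logical_and(prev_mask, curr_mask).sum()
    union = np.logical_or(prev_mask, curr_mask).sum()
    return float(inter) / float(union) if union > 0 else 0.0


def is_consistent(prev_mask: np.ndarray, curr_mask: np.ndarray,
                  iou_threshold: float = 0.5) -> bool:
    """Accept the current ground-truth mask only if it overlaps
    sufficiently with the mask from the previous frame; otherwise
    treat it as noisy and discard it."""
    return mask_iou(prev_mask, curr_mask) >= iou_threshold
```

A stable object produces consecutive masks with high IoU, so a jittery or spurious annotation is filtered out before it can corrupt the on-line training set.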

License

The code is released under the BSD 3-Clause License. See LICENCE for further details.

Citing the papers

If you find any part of this code useful, please consider citing the associated publications:

@ARTICLE{ceola2022tro,
  author={Ceola, Federico and Maiettini, Elisa and Pasquale, Giulia and Meanti, Giacomo and Rosasco, Lorenzo and Natale, Lorenzo},
  journal={IEEE Transactions on Robotics}, 
  title={Learn Fast, Segment Well: Fast Object Segmentation Learning on the iCub Robot}, 
  year={2022},
  volume={38},
  number={5},
  pages={3154-3172},
  doi={10.1109/TRO.2022.3164331}}
  
@INPROCEEDINGS{ceola2021oos,
  author={Ceola, Federico and Maiettini, Elisa and Pasquale, Giulia and Rosasco, Lorenzo and Natale, Lorenzo},
  booktitle={2021 IEEE International Conference on Robotics and Automation (ICRA)}, 
  title={Fast Object Segmentation Learning with Kernel-based Methods for Robotics}, 
  year={2021},
  pages={13581-13588},
  doi={10.1109/ICRA48506.2021.9561758}
}

Maintainer

This repository is maintained by:

@fedeceola
