This repository contains the code used in the evaluation for our TPAMI paper, "Leveraging Hand-Object Interactions in Assistive Egocentric Vision."
The models used in the evaluation are built on two base architectures, Fully Convolutional Networks (FCN) and Faster R-CNN, and are located in the Hand-Object-Models and Faster-RCNN folders, respectively.
The Hand-Object-Models folder contains the code and instructions for the models that use hand segmentation to localize and recognize the object of interest. The Faster-RCNN folder contains the code and instructions for the models that use a bounding box (either the object's whole bounding box or a bounding box around the object's center area) for object recognition.
For more details, please refer to the README in each folder.