
Sharing the code and datasets used in the evaluation for our TPAMI paper, "Leveraging Hand-Object Interactions in Assistive Egocentric Vision".

IAMLabUMD/tpami2021


Leveraging Hand-Object Interactions in Assistive Egocentric Vision

This repository contains the code used in the evaluation for our TPAMI paper, "Leveraging Hand-Object Interactions in Assistive Egocentric Vision".

The models used in the evaluation are built on two base architectures, Fully Convolutional Networks (FCN) and Faster R-CNN, and are located in the Hand-Object-Models and Faster-RCNN folders, respectively.

The Hand-Object-Models folder contains the code and instructions for the models that use hand segmentation to localize and recognize an object of interest. The Faster-RCNN folder contains the code and instructions for the models that use a bounding box (either a whole-object bounding box or a bounding box around the object's center area) for object recognition.

For more details, please refer to the README in each folder.
