
Recurrent attention network using spatial-temporal relations for action recognition in driving


PancakeAwesome/ran_two_stream_to_recognize_drives


Action Recognition using Visual Attention

We propose a soft-attention-based model for action recognition in videos. We use multi-layered Recurrent Neural Networks (RNNs) with Long Short-Term Memory (LSTM) units that are deep both spatially and temporally. The model learns to focus selectively on parts of the video frames and classifies videos after taking a few glimpses. In effect, it learns which regions of the frames are relevant to the task at hand and attaches higher importance to them. We evaluate the model on the UCF-11 (YouTube Action), HMDB-51, and Hollywood2 datasets and analyze how the model focuses its attention depending on the scene and the action being performed.
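To make the soft-attention mechanism concrete, here is a minimal NumPy sketch of a single glimpse step. It assumes the common setup for this kind of model: a CNN produces a K x K grid of D-dimensional spatial features per frame, and the LSTM hidden state scores each location to form an attention map; the next input is the attention-weighted average of the features. The names (`soft_attention_glimpse`, `W_att`) and all shapes are illustrative, not the repository's actual API.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def soft_attention_glimpse(features, h, W_att):
    """One soft-attention glimpse over a frame's spatial features.

    features: (K*K, D) spatial CNN features of one frame
    h:        (H,)     LSTM hidden state from the previous step
    W_att:    (H, K*K) hypothetical attention projection weights

    Returns the expected feature vector (D,) fed to the LSTM next,
    and the attention map (K*K,) used to visualize where the model looks.
    """
    scores = h @ W_att              # one relevance score per spatial location
    alpha = softmax(scores)         # attention weights, non-negative, sum to 1
    glimpse = alpha @ features      # attention-weighted average of features
    return glimpse, alpha
```

At classification time, such a model would run this step once per frame, feeding each glimpse into the LSTM and averaging the per-step class predictions over the sequence.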

Dependencies

