activity analysis package

A ROS activity analysis package for managing and representing skeleton and object data about humans in indoor environments.

activity_data

skeleton_publisher.py continuously logs data from the Openni2 Skeleton Tracker package.

For each detected human it logs, in ~/SkeltonDataset/no_consent/: RGB and depth images, the estimated human pose sequence, the robot's position, the date, the time and a UUID.

rosrun activity_data skeleton_publisher.py

It is recommended to use the launch file:

roslaunch activity_data record_data.launch
roslaunch activity_data record_data.launch log_to_mongo:=True max_frames:=1000 rec_rgb:=True X:=2 Y:=3

This uses topic_tools drop to reduce the rate of messages received on /skeleton_data/incremental_reduced to 10Hz.
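
A minimal sketch of a node listening to the reduced-rate topic is shown below. The exact message type published on /skeleton_data/incremental_reduced is not stated in this README, so the type-agnostic rospy.AnyMsg is used as a placeholder:

#!/usr/bin/env python
# Sketch: count messages arriving on the reduced-rate skeleton topic.
# rospy.AnyMsg is a placeholder; substitute the package's actual skeleton message type.
import rospy

def callback(msg):
    # AnyMsg exposes the wire-level connection header, including the real type name.
    rospy.loginfo("skeleton message received (type: %s)", msg._connection_header["type"])

if __name__ == "__main__":
    rospy.init_node("skeleton_listener")
    rospy.Subscriber("/skeleton_data/incremental_reduced", rospy.AnyMsg, callback)
    rospy.spin()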

record_skeleton_action

skeleton_action.py is an action server which, given a goal, records at a location in order to obtain a human detection. Once a detected human has more than a threshold number of recorded poses, the action tries to obtain consent in order to store their RGB-D data to disk.

It logs an RGB image to mongo and calls the consent server.

To run:

rosrun record_skeleton_action skeleton_action.py
rosrun actionlib axclient /record_skeletons
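
For use from code rather than axclient, a hedged sketch of an actionlib client follows; the RecordSkeletonsAction/RecordSkeletonsGoal names are assumptions, so substitute the .action definition actually shipped with record_skeleton_action:

#!/usr/bin/env python
# Sketch: send a goal to the /record_skeletons action server and wait for the result.
# RecordSkeletonsAction / RecordSkeletonsGoal are assumed names, not confirmed by this README.
import rospy
import actionlib
from record_skeleton_action.msg import RecordSkeletonsAction, RecordSkeletonsGoal

if __name__ == "__main__":
    rospy.init_node("record_skeletons_client")
    client = actionlib.SimpleActionClient("/record_skeletons", RecordSkeletonsAction)
    client.wait_for_server()
    client.send_goal(RecordSkeletonsGoal())  # fill in the goal fields (e.g. waypoint, duration) as required
    client.wait_for_result()
    rospy.loginfo("result: %s", client.get_result())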

Requires shapely and nav_goals_generator:

sudo apt-get install python-shapely
roslaunch nav_goals_generator nav_goals_generator.launch

consent_tsc

consent_for_images.py is an action server which handles obtaining consent from a recorded individual. It serves the latest detected images to the webserver for display, shows yes/no style buttons on screen, and returns the value of the consent given. It is required when the webserver and the recording action server are running on different machines.

rosrun consent_tsc consent_for_images.py
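
A hedged sketch of querying the consent result from code; the action namespace and the ConsentAction/ConsentGoal names are assumptions made for illustration, so use whatever is defined in the consent_tsc package:

#!/usr/bin/env python
# Sketch: request consent and log the returned value.
# The action namespace and ConsentAction / ConsentGoal names are assumptions.
import rospy
import actionlib
from consent_tsc.msg import ConsentAction, ConsentGoal

if __name__ == "__main__":
    rospy.init_node("consent_client")
    client = actionlib.SimpleActionClient("consent_for_images", ConsentAction)
    client.wait_for_server()
    client.send_goal(ConsentGoal())
    client.wait_for_result()
    rospy.loginfo("consent returned: %s", client.get_result())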

human_activities

Learning_action.py is an action server which uses an unsupervised, qualitative framework to learn common motion patterns from the collection of detected humans.

It first obtains all detected human pose sequences from mongo or from file, and abstracts the pose information using Qualitative Spatial Representations, as per QSRLib.

It then performs unsupervised clustering of the abstracted sequences, following recent literature.

The online_learning branch performs online LDA using variational Bayes, and outputs a gamma and a lambda for each increment (i.e. each run).

To run:

rosrun human_activities Learning_action.py
rosrun actionlib axclient /LearnHumanActivities

Configuration File:

activity_analysis/human_activities/config/config.ini

Requires the [LDA package](https://pypi.python.org/pypi/lda):

pip install lda
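
As a rough sketch of how this dependency is typically driven: the lda package fits a topic model over a matrix of non-negative integer counts. Treating each detected human's sequence of QSR codes as a "document" and each QSR code as a "term" is an assumption made for illustration, not something stated above:

# Sketch: fit an LDA topic model over a document-term count matrix.
# The 50 x 200 random count matrix stands in for real per-detection QSR code counts.
import numpy as np
import lda

X = np.random.randint(0, 5, size=(50, 200))  # 50 detections x 200 QSR codes (dummy counts)
model = lda.LDA(n_topics=10, n_iter=500, random_state=1)
model.fit(X)  # X must contain non-negative integer counts

topic_word = model.topic_word_  # distribution over QSR codes for each topic
doc_topic = model.doc_topic_    # topic proportions for each detection
print(topic_word.shape, doc_topic.shape)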
