Coral BodyPix

BodyPix is an open-source machine learning model which allows for person and body-part segmentation. This has previously been released as a Tensorflow.Js project.

This repo contains a set of pre-trained BodyPix Models (with both MobileNet v1 and ResNet50 backbones) that are quantized and optimized for the Coral Edge TPU. Example code is provided to enable inferencing on generic platforms as well as an optimized version for the Coral Dev Board.

[Images: Body-Part Segmentation (left) | Anonymous Population Flow (right)]

The above images show two possible applications of BodyPix. The left shows body-part segmentation (on an example video) with bounding boxes and PoseNet-style skeletons. The right shows anonymous population flow. Both are running on the Coral Dev Board; see below for information on enabling these modes on the Dev Board or on a generic platform.

What is Person/Body-Part Segmentation?

Image segmentation refers to grouping the pixels of an image into semantic areas, typically to locate objects and boundaries. For example, the Coral DeepLab model (available on the Coral Models Page) segments images into 20 object classes. In that model, as with all segmentation models, each pixel is classified as one of those objects or as background.

BodyPix extends this concept and segments people as well as twenty-four body parts (such as "right hand" or "torso front"). More information can be found on the Tensorflow.Js page. The model and its post-processing (contained as a custom OP in the Edge TPU TFLite Interpreter) have been optimized for the Edge TPU.
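As a rough illustration of what a body-part segmentation output can look like, here is a minimal Python sketch using a hypothetical per-pixel label map; this is not the exact output format of the Coral API, just the general idea:

import numpy as np

# Hypothetical label map: one class index per pixel, where 0 is background
# and 1..24 are the twenty-four body-part classes.
labels = np.random.randint(0, 25, size=(480, 640), dtype=np.uint8)

# Person segmentation: any pixel with a body-part class belongs to a person.
person_mask = labels > 0

# Body-part segmentation: colorize each class with a made-up palette.
palette = np.random.randint(0, 256, size=(25, 3), dtype=np.uint8)
palette[0] = 0                      # keep background black
overlay = palette[labels]           # (480, 640, 3) RGB image for display

print("person pixels:", int(person_mask.sum()))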

Examples in this repo

NOTE: BodyPix relies on the latest version of the Coral API and, for the Dev Board, the latest Mendel system image.

To install all the requirements, simply run

sh install_requirements.sh

bodypix.py

A generic BodyPix example intended to run on multiple platforms; it has not been optimized. It is not recommended for the Coral Dev Board, where performance is poor compared to the bodypix_gl_imx example. This example supports segmentation of a person, segmentation of body parts, and an anonymizer option that removes the person from the camera image.

Run the base demo (using the MobileNet v1 backbone with 640x480 input) like this:

python3 bodypix.py

To segment body parts (grouped as regions as opposed to displaying all 24) instead of the entire person, pass the --bodyparts flag:

python3 bodypix.py --bodyparts

In this repo we have included 11 BodyPix model files using different backbone networks and supporting different input resolutions. There are significant trade-offs between these versions: MobileNet is faster than ResNet but less accurate, and larger resolutions are slower but allow a wider field of view (so further-away people can be processed correctly).

The model can be selected with the --model flag. Both Edge TPU and CPU models can be found in the models folder.
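For example, to pick one of the MobileNet v1 Edge TPU models shipped in the models folder (the same file used in the Notes section at the end of this README):

python3 bodypix.py --model models/bodypix_mobilenet_v1_075_1024_768_16_quant_edgetpu_decoder.tflite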

You can change the camera resolution by using the --width and --height parameters. Note that in general the camera resolution should equal or exceed the input resolution of the network to get the full advantage of higher-resolution inference:

python3 bodypix.py --width 480 --height 360  # fast but low res
python3 bodypix.py --width 640 --height 480  # default
python3 bodypix.py --width 1280 --height 720 # slower but high res

If the camera and monitor are both facing you, consider adding the --mirror flag:

python3 bodypix.py --mirror

If your input camera supports encoded frames (h264 or JPEG) you can provide the corresponding flags to increase performance. Note these modes are mutually exclusive:

python3 bodypix.py --h264
python3 bodypix.py --jpeg

You can enable Anonymizer mode, which anonymizes the person, similar to the Coral PoseNet project. Unlike the PoseNet example, which indicates the pose skeleton, it indicates the entire outline of the person.

python3 bodypix.py --anonymize

bodypix_gl_imx.py

This example is optimized specifically for the iMX8MQ GPU and VPU found on the Coral Dev Board. It is intended to allow real-time processing and rendering on the platform (it can achieve 30 FPS even at 1280x720 resolution). The flags for input (models, camera configuration) are the same, but display modes are toggled with key presses instead of flags:

python3 bodypix_gl_imx.py

The following key presses can be used to toggle various modes:

Toggle PoseNet-style Skeletons: 's'
Toggle Bounding Boxes: 'b'
Toggle Anonymizer: 'a'
Toggle Aggregated Heatmap Generation: 'h'
Toggle Body Part Segmentation: 'p'
Reset: 'r'

Raspberry Pi notes:

  • Install pycoral. Use version 1.0.1
    pip3 install --extra-index-url https://github.com/google-coral/pycoral/releases/download/v1.0.1/pycoral-1.0.1-cp37-cp37m-linux_armv7l.whl pycoral
    
  • Rename posenet_lib/armv7a => posenet_lib/armv7l

bitmap => svg notes

TODO

  • The heatmap is currently a bitmap: 0 is definitely not a person, 1 is definitely a person, and values in between are some level of confidence that the pixel is a person.
  • Send that over a WebSocket.
  • node.js uses a lib to convert it to edge vertices.
    • Getting the vertices amounts to edge detection on a 2D array of pixels; you could probably write this yourself in JS (see the sketch after this list).
  • Using those edge vertices, match them to the pose keypoints. Shape + pose = person.
  • For person continuity over consecutive frames, you might need to store recent persons and track which incoming data belongs to each person by diffing against them.
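A minimal sketch of the edge-vertex step, written in Python for brevity (the TODO targets node.js) and assuming the heatmap arrives as a 2-D float array of confidences in [0, 1]; find_edge_vertices is a hypothetical helper, not part of this repo:

import numpy as np

def find_edge_vertices(heatmap, threshold=0.5):
    """Return (row, col) coordinates of boundary pixels.

    A pixel is a boundary pixel if it is "person" (confidence >= threshold)
    but has at least one 4-connected neighbour that is not.
    """
    mask = heatmap >= threshold                     # binarize the confidence map
    padded = np.pad(mask, 1, constant_values=False)
    interior = (
        padded[:-2, 1:-1] & padded[2:, 1:-1] &      # up / down neighbours
        padded[1:-1, :-2] & padded[1:-1, 2:]        # left / right neighbours
    )
    edges = mask & ~interior                        # person pixels on the outline
    return np.argwhere(edges)

if __name__ == "__main__":
    # Fake heatmap: a rectangle of high "person" confidence in the middle.
    heatmap = np.zeros((120, 160), dtype=np.float32)
    heatmap[40:80, 60:100] = 0.9
    vertices = find_edge_vertices(heatmap)
    print(f"{len(vertices)} edge vertices")         # these could be JSON-encoded
                                                    # and pushed over a WebSocket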

Notes

This command will list the supported formats/resolutions for the USB camera:

v4l2-ctl --list-formats-ext --device /dev/video1

Command to run the app:

python3 bodypix_gl_imx.py --jpeg --model models/bodypix_mobilenet_v1_075_1024_768_16_quant_edgetpu_decoder.tflite --videosrc /dev/video1 --width 1280 --height 720 --mirror
