loopbio/FreemooVR

FreemooVR

FreemooVR - virtual reality engine

FreemooVR is a virtual reality engine built on OpenSceneGraph. It supports arbitrary projection geometry and calibration methods for use in many kinds of scientific studies. It was described in the following paper:

John R Stowers*, Maximilian Hofbauer*, Renaud Bastien, Johannes Griessner⁑, Peter Higgins⁑,
Sarfarazhussain Farooqui⁑, Ruth M Fischer, Karin Nowikovsky, Wulf Haubensak, Iain D Couzin,
Kristin Tessmar-Raible✎, Andrew D Straw✎.
Virtual reality for freely moving animals. Nature Methods 2017. DOI: 10.1038/nmeth.4399

FreemooVR is a fork of, and the successor to, freemovr. It removes all ROS dependencies, improves the support, manipulation and display of OSG files, adds an extra 'o' to the name to enable the awesome cow logo, and makes the software a simpler and more extensible base upon which to build custom VR setups.

FreemooVR has a remote IPC interface (ZMQ) and a Python API that can be used to build custom VR assays and experiments. Virtual environments are designed in Blender and loaded into FreemooVR via the .osg format.

Calibration tooling was also consolidated around arbitrary geometry display models and integrated into the repository.

Installation (Ubuntu 18.04)

Display server

  • the software is tested on GCC-7.4, OSG-3.2, CMake-3.10, 0MQ-5, Cython-0.26
  • $ sudo apt install cmake pkg-config freeglut3-dev libpoco-dev libjansson-dev libopenscenegraph-dev libzmq3-dev libopenexr-dev
  • $ cmake .
  • $ make
  • $ ./bin/display_server --display-mode overview

Python interface

  • the basic Python interface requires only python-zmq to be installed.
    If you just want to play with stimuli and move things in the VR, this is all you need
  • the freemoovr Python package, which also includes utilities for arbitrary-geometry calibration, additionally depends on
    • numpy, scipy, matplotlib, OpenEXR, evdev, and PIL/Pillow

Theory of operation

A moving observer has a pose within a global coordinate frame. Objects within the global frame may also move or be updated (e.g. a moving grating). Six camera views with a fixed relationship to the observer are used to build a cube map, showing the scene surrounding the observer without regard to the projection surface.

This cube map is then projected onto a 3D shape model of the display surface. From there, this image is warped to the physical display output.
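The cube-map stage above can be sketched in pure Python. This is an illustrative fragment only, not the actual FreemooVR renderer (which does this on the GPU via OSG); the function name and the u/v conventions on each face are assumptions:

```python
import math

def cubemap_lookup(observer, surface_point):
    """Map a point on the display-surface model, as seen from the
    observer, to a cube-map face and 2D face coordinates.
    Illustrative sketch only; conventions are arbitrary."""
    # View direction from the observer to the surface point.
    d = [p - o for p, o in zip(surface_point, observer)]
    n = math.sqrt(sum(c * c for c in d))
    x, y, z = (c / n for c in d)

    # The dominant axis of the direction selects one of the six faces.
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:
        face = "+x" if x > 0 else "-x"
        u, v, m = (-z if x > 0 else z), y, ax
    elif ay >= az:
        face = "+y" if y > 0 else "-y"
        u, v, m = x, (-z if y > 0 else z), ay
    else:
        face = "+z" if z > 0 else "-z"
        u, v, m = (x if z > 0 else -x), y, az

    # Map from [-1, 1] to [0, 1] texture coordinates on that face.
    return face, (u / m + 1) / 2, (v / m + 1) / 2
```

Because the cube map is built around the observer alone, the same six renders serve any display surface; only this per-pixel lookup depends on the surface geometry.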

Running FreemooVR

A single executable, ./bin/display_server, runs the VR software ('display server').

The display server node runs locally on the computer(s) connected to the physical display. During a typical experiment, it will be running a stimulus plugin (typically StimulusOSG or StimulusOSG2).

A VR experiment updates one or many (see: MultiServerBaseZMQ) display servers on the basis of the observer's current position. Given the scenegraph and the calibrated screen layout, each display server will compute the images shown on the projectors.

  • please see $ ./bin/display_server --help
  • FreemooVR is controlled over ZMQ (see sample_code/*.py).
  • Press s to toggle statistics about the rendering.
  • Press c to take a screenshot.
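The ZMQ control loop can be sketched as follows. The endpoint, message schema, and field names below are hypothetical illustrations (the real protocol is defined in sample_code/*.py); a stand-in REP socket plays the display server so the fragment is self-contained:

```python
import threading
import zmq

def set_position(socket, x, y, z):
    # Hypothetical message schema; the real field names are defined
    # by the FreemooVR protocol (see sample_code/*.py).
    socket.send_json({"command": "set_position", "position": [x, y, z]})
    return socket.recv_json()  # REQ/REP: every request gets a reply

def serve_one_request(rep):
    """Stand-in for a display server, so this sketch runs anywhere."""
    msg = rep.recv_json()
    rep.send_json({"ok": True, "echo": msg["command"]})

ctx = zmq.Context()
endpoint = "inproc://freemoovr-demo"  # a real setup would use tcp://host:port

rep = ctx.socket(zmq.REP)
rep.bind(endpoint)  # bind before the client connects
server = threading.Thread(target=serve_one_request, args=(rep,))
server.start()

req = ctx.socket(zmq.REQ)
req.connect(endpoint)
reply = set_position(req, 0.1, 0.2, 0.0)  # observer moved: update the VR
server.join()
req.close(); rep.close(); ctx.term()
```

In an experiment this send/receive pair would sit inside the tracking loop, pushing the observer's pose to each display server on every update.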

Other tips

  • EXR files can be viewed using $ exrdisplay /path/to/file.exr
  • OSG files can be viewed with $ osgviewer /path/to/file.osgt
  • OSG animations can be played with $ osganimationviewer /path/to/file.osgt
  • $ ./bin/parseosg /path/to/file.osg displays the names of the nodes and animations in an OSG file

Developing and Testing

The specific details of tracking and calibration, and any additional coordinate systems they must share, are defined by the downstream software using FreemooVR.

Nevertheless, the coordinate system of the OSG file and the positions sent with ServerBaseZMQ.set_position() are internally consistent (see test_coord_system.py).

Showing OSG Files

There are two stimulus plugins for showing OSG files: StimulusOSG and StimulusOSG2. The two have slightly different features and limitations, stemming from how Blender/OSG represent animations, so which one you should use depends on what your OSG file contains.

When Blender exports an animation to an OSG file, it bakes the object's coordinates into the animation. This means that moving a node with a started animation renders incorrectly in StimulusOSG. StimulusOSG2 fixes this by inserting a new root into the scenegraph in a way that allows baked animations to be moved.

  • Use StimulusOSG2 if you want to move objects while they are running an animation you designed in Blender
  • Only one animated object is allowed per OSG file in StimulusOSG2.
    If you want to load multiple non-identical animated objects into the same scene, place them in multiple files (each one at the origin) and call .load_osg() multiple times
  • If you want to load multiple identical objects (animated or not),
    do this in StimulusOSG2 using the .clone() method on the object returned by .load_osg(). See example
  • Animated objects to be loaded by StimulusOSG2 must be defined and animated at the origin
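The loading rules above can be illustrated with a runnable sketch. Stimulus and SceneObject here are minimal stand-in classes (so the fragment runs without FreemooVR), and the method signatures and file names are assumptions based on the names in this README (.load_osg(), .clone(), .set_position()):

```python
class SceneObject:
    """Stand-in for the handle returned by .load_osg()."""
    def __init__(self, path):
        self.path = path
        self.position = (0.0, 0.0, 0.0)  # animated at the origin

    def clone(self):
        # Identical copies come from cloning, not re-loading the file.
        return SceneObject(self.path)

    def set_position(self, x, y, z):
        self.position = (x, y, z)

class Stimulus:
    """Stand-in for the StimulusOSG2 interface."""
    def load_osg(self, path):
        return SceneObject(path)

stim = Stimulus()

# Non-identical animated objects: one per OSG file, loaded separately.
fish = stim.load_osg("fish.osg")    # hypothetical file names
plant = stim.load_osg("plant.osg")

# Identical animated objects: clone the handle instead of re-loading.
school = [fish.clone() for _ in range(3)]
for i, f in enumerate(school):
    # Each file is animated at the origin, then moved after loading.
    f.set_position(i * 0.5, 0.0, 0.0)
```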

Conversely, if you have a very simple scene without animation, you should define all your virtual objects in a single OSG file and use StimulusOSG to move/show/hide them individually. See example

There are some additional features only available in StimulusOSG2, such as fading an object in or out. For an example of these, see multiple_objects_animation

Glossary

Display Coordinates - the native pixel indices on a physical display. These are 2D.

World Coordinates - the 3D coordinates in lab space of physical (or simulated) points. (May also be represented as a 4D homogeneous vector x,y,z,w with nonzero w.)

Physical Display - a physical device capable of emitting a large, rectangular block of pixels. It has display coordinates - the 2D locations of each pixel. (A physical display does not have world coordinates used for the VR mathematics; a virtual display, on the other hand, does have world coordinates.)

Virtual Display - a model of a physical display which relates world coordinates to display coordinates. The model consists of a linear pinhole projection model, a non-linear warping model for lens distortions, a viewport used to clip valid display coordinates, a 3D display-surface shape in world coordinates, and luminance masking/blending. Note that a physical display can have multiple virtual displays, for example, if a projector shines onto mirrors that effectively create multiple projections.
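The linear part of this model can be written down concretely. The following is a minimal sketch of the pinhole stage only (lens warping, viewport clipping and blending are omitted), with made-up intrinsics:

```python
def matvec(M, v):
    """Multiply a 3x3 matrix (list of rows) by a 3-vector."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def project(K, R, t, world_point):
    """Pinhole projection of a 3D world point to 2D display coordinates.
    K is the 3x3 intrinsic matrix; (R, t) are the world-to-display
    extrinsics."""
    # Transform the world point into the display's camera frame.
    cam = [c + ti for c, ti in zip(matvec(R, world_point), t)]
    # Apply the intrinsics and divide by depth (the homogeneous w).
    u, v, w = matvec(K, cam)
    return u / w, v / w

# Made-up example: identity pose, 800 px focal length,
# principal point at (512, 384).
K = [[800, 0, 512], [0, 800, 384], [0, 0, 1]]
R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
t = [0, 0, 0]
print(project(K, R, t, [0.1, -0.05, 2.0]))  # → (552.0, 364.0)
```

The non-linear warping model is then applied to (u, v), and the viewport clips the result to the region of the physical display that actually illuminates the surface.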

Viewport - the vertices of a polygon defining the projection region in display coordinates (x0,y0,x1,y1,...). It is used to limit the region of the physical display used to illuminate a surface. (The FreemooVR Viewport corresponds to a 2D polygon onto which the image of the projection screen is shown.)

Display Surface - a physical, 2D manifold in 3D space which is illuminated by a physical display (either by projection or direct illumination like an LCD screen).

About

A standalone, hackable, versatile, composable, perspective-correct VR engine for freely moving animals
