
motion-structure-used-in-perception

Python code and download links to the data of Bill et al., "Hierarchical structure is employed by humans during visual motion perception" (preprint).

This repository allows you to:

  • Generate figures 2, 3, 4 and 5 from the main paper,
  • Collect your own data,
  • Run the full analysis pipeline (if you are willing to dig into the code a bit).

In case of questions, please contact Johannes Bill (johannes_bill@hms.harvard.edu).


Installation

We assume an Ubuntu-based Linux installation. On macOS, you should be able to install sip and pyqt via Homebrew. In the cloned repository, we suggest using a virtual environment with Python 3.6+:

$ python3 -m pip install --user --upgrade pip   # Install pip (if not yet installed)
$ sudo apt-get install python3-venv             # May be needed for environment creation
$ python3.6 -m venv env                         # Create environment with the right python interpreter (must be installed)
$ source env/bin/activate                       # Activate env
$ python3 -m pip install --upgrade pip          # Make sure the local pip is up to date
$ pip3 install wheel                            # Install wheel first
$ pip3 install -r requirements.txt              # Install other required packages
$ deactivate                                    # Deactivate env

Usage

Always start your session by running source run_at_start.sh and end it with source run_at_end.sh. These scripts set up the virtual environment and the Python path. Below are some cookbook-style recipes.

Plot figures

Re-plotting the figures from the main paper is quick and easy:

$ source run_at_start.sh
$ cd plot
$ python3 plot_fig_2.py   # Plot Figure 2
$ python3 plot_fig_3.py   # Plot Figure 3
$ python3 plot_fig_4.py   # Plot Figure 4
$ python3 plot_fig_5.py   # Plot Figure 5
$ cd ..
$ source run_at_end.sh

All figures will be saved in ./plot/fig/ as png and pdf.

Collect your own data

MOT experiment

This experiment requires Python as well as MATLAB with Psychtoolbox. Please make sure to have at least 2GB of disk space available per participant. Questions on the data collection for the MOT experiment can also be directed to Hrag Pailian (pailian@fas.harvard.edu).

  1. Generate trials:
  • $ source run_at_start.sh
  • $ cd rmot/generate_stim
  • Adjust nSubjects=... in file generate_trials_via_script.sh to your needs.
  • Generate trials via $ ./generate_trials_via_script.sh (This may take a while depending on processor power.)
  • Resulting trials are written to:
    • data/rmot/myexp/trials for the Python data (will be needed for simulations and analyses)
    • data/rmot/myexp/matlab_trials for the data collection with MATLAB
  2. Run the experiment: For each participant n=1,..
  • Copy the content of data/rmot/myexp/matlab_trials/participant_n/ into rmot/matlab_gui/Trials/.
  • $ cd ../matlab_gui
  • Determine the participant's speed via repeated execution of Part_1_Thresholding.m (will prompt for speed on start).
  • Conduct the main experiment via Part_2_Test.m (will prompt for speed and n).
  • Copy the saved responses to data/rmot/myexp/responses/ and rename the file to Response_File_Test_Pn.mat.
  3. Convert the data back to Python format (a conceptual sketch follows after this list):
  • $ cd ../ana
  • For each participant n=1,.., run
    $ python3 convert_mat_to_npy.py data/myexp/responses/Response_File_Test_Pn.mat.
  • $ cd ../..
  • $ source run_at_end.sh
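
For reference, the conversion in step 3 is conceptually a round trip from MATLAB to NumPy. The sketch below only illustrates the idea; the fields and array layout actually handled by convert_mat_to_npy.py may differ.

# Conceptual sketch only -- convert_mat_to_npy.py may handle different fields.
import numpy as np
from scipy.io import loadmat

fname = "data/rmot/myexp/responses/Response_File_Test_P1.mat"   # example participant file
mat = loadmat(fname, squeeze_me=True)                           # dict of MATLAB variables
responses = {k: v for k, v in mat.items() if not k.startswith("__")}   # drop MATLAB metadata
np.save(fname.replace(".mat", ".npy"), responses, allow_pickle=True)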

Continue with the data analysis (see below).

Prediction experiment

This experiment is fully Python-based.

$ source run_at_start.sh
$ cd pred/gui
$ python3 play.py presets/example_trials/GLO.py -f -T 10   # EITHER: try out 10 trials (ca. 2 min)
$ ./run_full_experiment.sh -u 12345                        # OR: run the full experiment (ca. 75 min)
$ cd ../..
$ source run_at_end.sh

Continue with the data analysis (below).

If you run the full experiment, your data will be stored in /data/pred/myexp/. Please refer to /pred/gui/README.md for further information -- especially to ensure a stable frame rate before running a full experiment.

Data download

The data from the publication can be downloaded here:

For the analyses below, unzip the contents of these archives into the directories data/rmot/paper and data/pred/paper, respectively. Then execute steps 1 and 3 (replacing myexp with paper) from the description in Collect your own data >> MOT experiment.
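
If you prefer to do the unpacking from Python, something like the following works; the archive file names below are placeholders for whatever files you downloaded.

# Unpack the downloaded archives into the folders expected by the analysis.
# The archive names are placeholders -- use the files you actually downloaded.
import zipfile

for archive, target in [("rmot_paper.zip", "data/rmot/paper"),
                        ("pred_paper.zip", "data/pred/paper")]:
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(target)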

Data analysis

Remark: The following description of the data analysis still refers to the 1st version of the manuscript. The data and analyses are largely identical to the 2nd version, but do not yet include the Bayesian model comparison across motion structures and the alternative observer models in the MOT task presented in Figure 3. An updated description will be provided soon.

Use the following analysis chain to recreate the aggregate data files provided in /data from the raw data in /data/rmot/paper and /data/pred/paper -- or to analyze your own data (see above). The analysis requires some understanding of the Python code, so please do not expect a direct copy-and-paste workflow.

MOT experiment

$ source run_at_start.sh
$ cd rmot/ana
  1. Set up a data set labels (DSL) file to link human data to simulation data (an illustrative sketch of such a file follows after this recipe):
  • You can use DSLs_rmot_template.py as a template.
  • Adjust exppath and subjects. Make sure simpath exists.
  • For each participant, create an entry block and enter the participant's ["speed"] (from the thresholding step above).
  • The ["sim"] entries will be filled later.
  2. Set up the config_datarun.py file for simulations:
  • You can use config_datarun_template.py as a template.
  • Adjust the import to import from your DSL file and ensure that cfg["global"]["outdir"] exists.
  • Adjust cfg["observe"]["datadir"] to point to the (Python) trials.
  • You may want to reduce reps_per_trial from 25 to 1 to speed up the simulation (optional).
  3. Prepare the simulations in create_config_for_participant.py:
  • Adjust lines 8-11 to match your DSLs, config, and trial directory.
  4. Run observer models with different motion structure priors on the experiment trials:
  • For each participant and stimulus condition:
    • Adjust lines 6 and 7 in create_config_for_participant.py.
    • Run $ ./start_datarun_script.sh.
    • Enter the DSL of the simulation in your DSL file's ["sim"] entry of the respective participant and condition.
    • Warning: The simulations may take a while (we used the HMS cluster).
  • Collect all results via $ python3 load_human_and_sim_to_pandas.py (adjust line 7).
  • Copy the created pkl.zip file to the repository's /data/ directory.
  5. Plot the figure:
  • $ cd ../../plot
  • Adjust fname_data= to point to your data in plot_fig_2.py.
  • $ python3 plot_fig_2.py # Plot Figure 2
$ cd ..
$ source run_at_end.sh
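
For orientation, a DSL file (step 1 above) is essentially a small Python module that records, per participant, the thresholded speed and the labels of the corresponding simulations. The sketch below is purely illustrative; the names, paths, and exact structure in DSLs_rmot_template.py may differ.

# Illustrative sketch only -- see DSLs_rmot_template.py for the actual structure.
exppath  = "data/rmot/myexp/"         # where the human data lives
simpath  = "data/rmot/myexp/sim/"     # must exist before running simulations
subjects = ["P1", "P2"]

DSL = {
    "P1": {"speed": 1.25,             # from Part_1_Thresholding.m
           "sim": {}},                # filled with simulation DSLs after start_datarun_script.sh
    "P2": {"speed": 1.10,
           "sim": {}},
}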

Prediction experiment

$ source run_at_start.sh
$ cd pred/ana
  1. Run Kalman filters with different motion priors on the experiment trials (a generic sketch of such a filter step follows after this recipe):
  • In file config_datarun_MarApr2019.py, direct cfg["observe"]["datadir"] to the experiment data.
  • For each participant and stimulus condition:
    • In config_datarun_MarApr2019.py, enter GROUNDTRUTH= and datadsl=.
    • Run $ python3 run.py config_datarun_MarApr2019
    • Keep track of the data set labels (DSLs) linking experiment and simulation data, in a file similar to DSLs_predict_MarApr2019.py.
  2. Fit all observer models (for Fig. 3):
  • Update the parameters section in fit_noise_models_with_lapse_from_DSLfile.py, especially:
    exppath, outFilename, and import from your DSL file.
  • $ python3 fit_noise_models_with_lapse_from_DSLfile.py
  • Copy the outFilename file to the repository's /data/ directory.
  3. Bias-variance analysis (for Fig. 4):
  • Update the parameters section in estimate_bias_variance.py, especially:
    path_exp, outfname_data, and import from your DSL file.
  • $ python3 estimate_bias_variance.py
  • Copy the outfname_data file to the repository's /data/ directory.
  4. Plot the figures:
  • $ cd ../../plot
  • Adjust fname_data= to point to your data in plot_fig_3.py and plot_fig_4.py.
  • $ python3 plot_fig_3.py # Plot Figure 3
  • $ python3 plot_fig_4.py # Plot Figure 4
$ cd ..
$ source run_at_end.sh
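
For orientation, the observer models in step 1 are Kalman filters. Below is a generic, self-contained predict/update step in NumPy, meant only to illustrate the idea; it is not the repository's implementation, and the matrices F, Q, H, R are placeholders for whatever motion structure and noise model is assumed.

# Generic Kalman filter step (illustrative only, not the code in pred/ana).
# The motion-structure prior would enter through the dynamics F and noise Q.
import numpy as np

def kalman_step(mu, Sigma, y, F, Q, H, R):
    # Predict: propagate mean and covariance under the assumed motion structure
    mu_pred    = F @ mu
    Sigma_pred = F @ Sigma @ F.T + Q
    # Update: incorporate the observed dot locations y
    S = H @ Sigma_pred @ H.T + R                  # innovation covariance
    K = Sigma_pred @ H.T @ np.linalg.inv(S)       # Kalman gain
    mu_new    = mu_pred + K @ (y - H @ mu_pred)
    Sigma_new = (np.eye(len(mu)) - K @ H) @ Sigma_pred
    return mu_new, Sigma_new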

Miscellaneous

List of directories

  • data: Experiment data and simulation/analysis results
  • pckg: Python imports of shared classes and functions
  • plot: Plotting scripts for Figures 2, 3, 4 and 5
  • pred: Simulation and analysis scripts for the prediction task
  • rmot: Simulation and analysis scripts for the rotational MOT task

Fonts

If the 'Arial' font is not installed already:

$ sudo apt-get install ttf-mscorefonts-installer
$ sudo fc-cache
$ python3 -c "import matplotlib.font_manager; matplotlib.font_manager._rebuild()"

...and if you really want it all: the significance stars in Figure 3 use the font "FreeSans".
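
To verify that matplotlib actually picks up Arial after rebuilding the font cache, you can query the font manager; the call below raises a ValueError if the font cannot be found.

# Quick check that matplotlib resolves 'Arial' to an installed font file.
import matplotlib.font_manager as fm
print(fm.findfont("Arial", fallback_to_default=False))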
