Explanation of the procedure & code files
-
1 Use the model to get hypo_ch_res(trials, ch) for each encoding trial
Direct problem: angle --> Model --> channel_activity
For each trial of the encoding task --> hypo_ch_res(trials, ch)
-
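As a sketch of this forward (direct) model: a common choice is a set of half-wave-rectified cosine tuning curves raised to a power, one per channel. The 36 channels come from the text, but the exact basis shape and exponent below are assumptions.

```python
import numpy as np

def channel_basis(angles_deg, n_channels=36, power=5):
    """Hypothetical channel responses hypo_ch_res(trials, ch).

    Assumes half-wave-rectified cosine tuning curves raised to a power,
    with preferred angles evenly spaced over 360 degrees; the exact
    basis and exponent used by the scripts may differ.
    """
    prefs = np.arange(n_channels) * (360.0 / n_channels)   # channel centers
    diff = np.deg2rad(np.asarray(angles_deg)[:, None] - prefs[None, :])
    resp = np.cos(diff)
    resp[resp < 0] = 0.0          # half-wave rectification
    return resp ** power          # shape: (trials, ch)

angles = np.array([0.0, 90.0, 180.0])   # example trial angles
hypo_ch_res = channel_basis(angles)
```

Each row peaks at the channel whose preferred angle matches the trial angle, which is what the weight estimation in the next steps relies on.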
2 Raw encoding data --> Preprocess with SPM --> Apply ROI mask --> High-pass filter & z-score per voxel
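The high-pass & z-score step might look like the following minimal numpy sketch, which removes a low-order polynomial trend per voxel as a crude stand-in for the actual SPM filtering pipeline:

```python
import numpy as np

def highpass_and_zscore(data, polyorder=3):
    """Remove a low-order polynomial trend per voxel (a crude high-pass),
    then z-score each voxel's time course.

    data : (TRs, voxels). The real pipeline may use a proper
    frequency-domain filter instead of polynomial detrending.
    """
    t = np.arange(data.shape[0])
    coefs = np.polynomial.polynomial.polyfit(t, data, polyorder)
    detrended = data - np.polynomial.polynomial.polyval(t, coefs).T
    return (detrended - detrended.mean(0)) / detrended.std(0)

rng = np.random.default_rng(0)
bold = rng.normal(size=(120, 4)) + np.linspace(0, 8, 120)[:, None]  # toy drifting signal
clean = highpass_and_zscore(bold)
```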
-
3 Get the TRs of interest (2 consecutive TRs) and average them --> Enc_TRs: (trials, vx)
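A minimal sketch of this TR averaging, assuming a hypothetical per-trial index of the first TR of interest (how those indices are chosen is task-specific and not given here):

```python
import numpy as np

def average_trs(run_data, first_tr):
    """Average the two consecutive TRs of interest for each trial.

    run_data : (TRs, vx) preprocessed time series for one run
    first_tr : per-trial index of the first TR of interest (assumed input)
    Returns Enc_TRs of shape (trials, vx).
    """
    first_tr = np.asarray(first_tr)
    return (run_data[first_tr] + run_data[first_tr + 1]) / 2.0

run = np.arange(20, dtype=float).reshape(10, 2)   # toy (TRs=10, vx=2) data
enc_trs = average_trs(run, [2, 5])
```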
-
4 Estimate the weights of each voxel --> loop a linear model with Lasso regularization over each vx
for each vx in Enc_TRs:
    Enc_TRs(trials, 1) = hypo_ch_res(trials, ch) x weights(ch, 1)
Append the weights(ch, 1) of each voxel --> Weight_matrix(vx, ch) (WM)
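The per-voxel Lasso loop could be sketched with scikit-learn as below; the regularization strength `alpha` is illustrative, not the project's actual setting:

```python
import numpy as np
from sklearn.linear_model import Lasso

def estimate_weights(enc_trs, hypo_ch_res, alpha=0.1):
    """Fit one Lasso model per voxel: Enc_TRs[:, vx] ~ hypo_ch_res @ w.

    enc_trs     : (trials, vx) averaged encoding-task BOLD
    hypo_ch_res : (trials, ch) hypothetical channel responses
    Returns Weight_matrix of shape (vx, ch); alpha is illustrative.
    """
    n_vx, n_ch = enc_trs.shape[1], hypo_ch_res.shape[1]
    weight_matrix = np.empty((n_vx, n_ch))
    for vx in range(n_vx):
        model = Lasso(alpha=alpha, max_iter=10000)
        model.fit(hypo_ch_res, enc_trs[:, vx])
        weight_matrix[vx] = model.coef_
    return weight_matrix

rng = np.random.default_rng(0)
W = estimate_weights(rng.random((40, 3)), rng.random((40, 8)), alpha=0.01)
```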
-
1 Raw WM-task data --> Preprocess with SPM --> Apply ROI mask --> High-pass filter & z-score per voxel
-
2 Get the TRs of every trial --> WM_TRs: (trials, TR, vx)
-
3 Get the subset matrix of interest --> a smaller WM_TRs: (trials, TR, vx)
-
4 INVERTED ENCODING MODEL: Transform voxel activity into Channel activity
for each TR:
    ch_activity(ch, 1) = inv( WMt(ch, vx) x WM(vx, ch) ) x WMt(ch, vx) x WM_TR(vx, 1)
-
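The inversion formula above is ordinary least squares; here is a sketch using `numpy.linalg.pinv`, which equals inv(WMt x WM) x WMt when the weight matrix has full column rank and is numerically more stable:

```python
import numpy as np

def invert_model(WM, voxel_activity):
    """Estimate channel activity from voxel activity at one TR.

    WM             : (vx, ch) trained weight matrix
    voxel_activity : (vx,) z-scored WM-task BOLD at one TR
    pinv(WM) equals inv(WMt x WM) x WMt when WM has full column rank.
    """
    return np.linalg.pinv(WM) @ voxel_activity

# toy check: recover a known channel vector from synthetic voxel data
rng = np.random.default_rng(0)
WM = rng.normal(size=(30, 6))        # 30 voxels, 6 channels
true_ch = rng.normal(size=6)
ch_hat = invert_model(WM, WM @ true_ch)
```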
5 Solve the inverse problem : angle <-- Model <-- channel_activity
Using a smoother model (instead of a model of 36 ch, a model of 720 ch2)
Kind of population vector: ch_activity(ch, 1) --> sum( ch(x) x Model(ch2(x)) ) --> Angle_repr(ch2, 1)
Angle representation --> Roll to preferred location
Angle_reo_all(trial, TR, ch2)
Average the trials within each TR --> Angle representation matrix(angles, TR)
-
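A sketch of the population-vector projection onto the 720-point basis and the roll to the preferred location, assuming the same half-cosine basis as in training (the basis shape, exponent, and roll target are illustrative assumptions):

```python
import numpy as np

def angle_representation(ch_activity, target_deg, n_fine=720, power=5):
    """Project coarse channel activity onto a fine 720-point basis and
    roll so the trial's target angle lands at a fixed preferred position.

    ch_activity : (ch,) estimated channel responses for one trial/TR
    target_deg  : the trial's remembered angle
    """
    n_ch = ch_activity.shape[0]
    prefs = np.arange(n_ch) * (360.0 / n_ch)
    fine = np.arange(n_fine) * (360.0 / n_fine)
    # tuning curve of each coarse channel evaluated on the fine grid
    basis = np.cos(np.deg2rad(fine[None, :] - prefs[:, None]))
    basis[basis < 0] = 0.0
    basis = basis ** power                       # (ch, n_fine)
    repr_fine = ch_activity @ basis              # weighted sum -> (n_fine,)
    # roll so that target_deg sits at position n_fine // 2
    shift = n_fine // 2 - int(round(target_deg / (360.0 / n_fine)))
    return np.roll(repr_fine, shift)

ch = np.zeros(36)
ch[0] = 1.0                                      # toy: activity only in channel 0
rep = angle_representation(ch, 0.0)
```

After rolling every trial this way, averaging across trials within each TR gives the (angles, TR) representation matrix described above.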
6 Visual representation (heatmap for each TR)
Inside "scripts" you will find the two main scripts; they are the files described below, put together.
- Takes the paths of the files and the mask, depending on the subject and the method of analysis
- Specifications depending on close, far, or mixed distances
- All the functions that you need
- Trains the encoding model (always averaging 2 TRs)
- Tests on individual TRs of the WM task
- Trains the encoding model (always averaging 2 TRs)
- Tests on the average of 2 consecutive TRs of the WM task
- Skips the training
- Tests on individual TRs of the WM task
- Skips the training
- Tests on the average of 2 consecutive TRs of the WM task
- Averages the individual results into a population result
- by condition
- Plots one line per subject
- by condition
- Plots the 4 conditions per subject
- by condition
- Example of the T/NT distance distribution when rolling
- one subject example
- Example of the BOLD signal
- not important
For each subject, for each ROI, for each condition of the WM task (4)
Depending on the method of analysis:
- together --> train on all the encoding sessions at the same time
- bysess --> train and test session by session
-
1.1 STEP 1 : Create the model of the channels (36 channels)
-
1.2 STEP 2 : From the encoding task (beh & images) --> extract the portion of data (images) & generate the hypothetical channel coefficients (beh)
-
1.3 STEP 3 : Estimate the channel weights in each voxel
-
2.1 STEP 4 : Extract the encoded channel response from the WM task trials
-
2.2 STEP 5 : Visualization of the heatmap and the preferred location
After running the most convenient loop, you can average the results across individuals by running "combine_subjects.py". It returns the decoding value by condition and ROI, and it generates the population-average heatmap, the preferred location, and the whole ROI (360 degrees).