Example #1
#
# Paradigms define the events, the epoch time window, the bandpass filter, and
# other preprocessing parameters. They have defaults that you can read in the
# documentation, or you can simply set them as we do here. A single paradigm
# defines a method for going from continuous data to trial data of a fixed
# size. To learn more, see the tutorial "Exploring Paradigms".

fmin = 8
fmax = 35
paradigm = LeftRightImagery(fmin=fmin, fmax=fmax)
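
# Other preprocessing choices (epoch window, resampling, ...) can be set the
# same way. The extra parameter names below are assumptions to be checked
# against the paradigm documentation, so the call is left commented out:
# paradigm = LeftRightImagery(fmin=8, fmax=35, tmin=0.0, tmax=4.0, resample=128)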

##########################################################################
# Evaluation
# --------------------
#
# An evaluation defines how the training and test sets are chosen. This could
# be cross-validated within a single recording, or across days, sessions, or
# subjects. This is also the place to specify the number of parallel threads.

evaluation = CrossSessionEvaluation(paradigm=paradigm,
                                    datasets=datasets,
                                    suffix="examples",
                                    overwrite=False)
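
# The number of parallel jobs can be passed to the evaluation as well. The
# parameter name used below (`n_jobs`) may differ between MOABB versions, so
# the call is shown commented out:
# evaluation = CrossSessionEvaluation(paradigm=paradigm, datasets=datasets,
#                                     suffix="examples", overwrite=False,
#                                     n_jobs=2)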
results = evaluation.process(pipelines)

##########################################################################
# Results are returned as a pandas DataFrame, and from here you can process
# them however you like.

print(results.head())
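
# For example, the mean score of each pipeline across all subjects and
# sessions can be computed with standard pandas operations (`pipeline` and
# `score` are columns of the results DataFrame):
print(results.groupby("pipeline")["score"].mean())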
Example #2
#
# We define the paradigm (LeftRightImagery) and the dataset (BNCI2014001).
# The evaluation will return a dataframe containing a single AUC score for
# each subject / session of the dataset, and for each pipeline.
#
# Results are saved in a local database, so if you add a new pipeline, the
# evaluation will not run again unless a parameter has changed. Results can be
# overwritten if necessary.

paradigm = LeftRightImagery()
dataset = BNCI2014001()
# Restrict the example to the first 4 subjects to keep the run time short
dataset.subject_list = dataset.subject_list[:4]
datasets = [dataset]
overwrite = True  # set to False if we want to use cached results
evaluation = CrossSessionEvaluation(paradigm=paradigm,
                                    datasets=datasets,
                                    suffix="stats",
                                    overwrite=overwrite)

results = evaluation.process(pipelines)

##############################################################################
# MOABB plotting
# ----------------
#
# Here we plot the results using some of the convenience methods within the
# toolkit.  The score_plot visualizes all the data with one score per subject
# for every dataset and pipeline.

fig = moabb_plt.score_plot(results)
plt.show()
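
# Other convenience plots exist in moabb.analysis.plotting; for instance,
# `paired_plot` compares two pipelines score by score. The pipeline names
# below are placeholders and must match keys of the `pipelines` dict:
# fig = moabb_plt.paired_plot(results, "pipeline_1", "pipeline_2")
# plt.show()
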
X_all, labels_all, meta_all = [], [], []
for d in datasets:
    # Trials, labels and metadata for subject 2 of each dataset
    X, labels, meta = paradigm.get_data(dataset=d, subjects=[2])
    X_all.append(X)
    labels_all.append(labels)
    meta_all.append(meta)
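
# The arrays returned by get_data are shaped (n_trials, n_channels, n_samples).
# A quick check, for illustration only:
print(X_all[0].shape, len(labels_all[0]))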

##############################################################################
# Evaluation
# ----------
#
# The evaluation will return a dataframe containing a single AUC score for
# each subject / session of the dataset, and for each pipeline.

overwrite = True  # set to False to reuse cached results

evaluation = CrossSessionEvaluation(paradigm=paradigm,
                                    datasets=datasets,
                                    suffix='examples',
                                    overwrite=overwrite)
results = evaluation.process(pipeline)

print(results.head())

##############################################################################
# Plot Results
# ----------------
#
# Here we plot the results, indicating the score for each session and subject

sns.catplot(data=results,
            x='session',
            y='score',
            hue='subject')
plt.show()
        return arr.reshape(len(arr), -1)
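

# The pipeline below relies on a custom MyVectorizer transformer. A minimal
# sketch of such a class, assuming it simply flattens the MNE epochs into 2D
# arrays (the original definition may carry extra attributes), is:
from sklearn.base import BaseEstimator, TransformerMixin


class MyVectorizer(BaseEstimator, TransformerMixin):
    def fit(self, X, y=None):
        # Nothing to learn: this transformer only reshapes the data
        return self

    def transform(self, X, y=None):
        # X is an MNE Epochs object when the evaluation uses return_epochs=True
        arr = X.get_data()
        return arr.reshape(len(arr), -1)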


# We will define a pipeline based on this new class, using a scaler and a
# logistic regression. This pipeline is evaluated across sessions using the
# ROC-AUC metric.

mne_ppl = {}
mne_ppl["MNE LR"] = make_pipeline(
    MyVectorizer(), StandardScaler(), LogisticRegression(penalty="l1", solver="liblinear")
)

mne_eval = CrossSessionEvaluation(
    paradigm=paradigm,
    datasets=datasets,
    suffix="examples",
    overwrite=True,
    return_epochs=True,
)
mne_res = mne_eval.process(mne_ppl)
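
# As with the previous evaluations, the result is a pandas DataFrame with one
# score per subject, session and pipeline:
print(mne_res.head())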

##############################################################################
# Advanced MNE pipeline
# ---------------------
#
# In some cases, the MNE pipeline needs access to the original labels from
# the dataset. This is the case for MNE's XDAWN code. One can pass
# `mne_labels` to the evaluation in order to keep these labels.
# As an example, we will define a pipeline that computes an XDAWN filter,
# rescales the features, and then applies a logistic regression.
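
# The sketch below follows that description; the exact components (number of
# Xdawn components, the use of MNE's Vectorizer, the `mne_labels` flag) are
# assumptions to be checked against the documentation.
from mne.decoding import Vectorizer
from mne.preprocessing import Xdawn

adv_ppl = {}
adv_ppl["XDAWN LR"] = make_pipeline(
    Xdawn(n_components=5),  # spatial filtering applied to the MNE epochs
    Vectorizer(),  # flatten the filtered epochs to 2D arrays
    StandardScaler(),
    LogisticRegression(penalty="l1", solver="liblinear"),
)

adv_eval = CrossSessionEvaluation(
    paradigm=paradigm,
    datasets=datasets,
    suffix="examples",
    overwrite=True,
    return_epochs=True,
    mne_labels=True,  # keep the original dataset labels for the MNE estimators
)
adv_res = adv_eval.process(adv_ppl)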