# %% [markdown]
# To demonstrate the performance of the meta-model, we can create an
# alternate design of experiments.  Note that to get different random values,
# we set the `random_seed` argument to something other than the default value.

# %%
design2 = design_experiments(
    scope=road_scope,
    db=emat_db,
    n_samples_per_factor=10,
    sampler='lhs',
    random_seed=2,
)
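
# %% [markdown]
# As an aside (not part of the original walkthrough), the seed fully
# determines the draws: repeating the call with the same `random_seed`
# reproduces the design exactly, while a different seed yields different
# values.  We omit the `db` argument here, which we understand to be
# optional, so these throwaway designs are not stored in the database.

# %%
# Quick reproducibility check: same seed -> same design, new seed -> new draws.
check_a = design_experiments(scope=road_scope, n_samples_per_factor=10,
                             sampler='lhs', random_seed=2)
check_b = design_experiments(scope=road_scope, n_samples_per_factor=10,
                             sampler='lhs', random_seed=2)
check_c = design_experiments(scope=road_scope, n_samples_per_factor=10,
                             sampler='lhs', random_seed=3)
assert check_a.equals(check_b)
assert not check_a.equals(check_c)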

# %%
design2_results = mm.run_experiments(design2)
design2_results.head()

# %% [markdown]
# If you are using the default meta-model regressor, as we are doing here,
# you can directly access a cross-validation method that uses the experimental
# data to evaluate the quality of the regression model.  The `cross_val_scores`
# method provides a measure of how well the meta-model predicts the
# experimental outcomes, similar to an $R^2$ measure on a linear regression
# model.

# %%
mm.cross_val_scores()
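
# %% [markdown]
# To make the $R^2$ analogy concrete, the sketch below computes a comparable
# cross-validated score directly with scikit-learn.  This is illustrative
# only, not EMAT's internal implementation: it uses a plain linear regression
# as a stand-in regressor, and it assumes the scope's `get_parameter_names`
# and `get_measure_names` methods give the input and outcome column names in
# the results DataFrame.

# %%
# Illustrative sketch (not EMAT's internal code): cross-validated R^2 per
# outcome, fitting a plain linear regression on the numeric input factors.
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

X = design2_results[road_scope.get_parameter_names()].select_dtypes('number')
for measure in road_scope.get_measure_names():
    scores = cross_val_score(LinearRegression(), X, design2_results[measure], cv=5)
    print(f"{measure}: mean R^2 = {scores.mean():.3f}")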

# %% [markdown]
# ### Compare Core vs Meta Model Results
#
# We can generate a variety of plots to compare the distribution of meta-model outcomes
# on the new design against the original model's results.

# %%
from emat.analysis import contrast_experiments
contrast_experiments(road_scope, lhs_results, design2_results)
# %% [markdown]
# ## Contrasting Sets of Experiments
#
# A similar set of visualizations can be created to contrast two sets
# of experiments derived from the same (or substantially similar) scope.
# This is particularly valuable to evaluate the performance of meta-models
# that are derived from core models, as we can generate scatter plot
# matrices that show experiments from both the core and meta models.
#
# To demonstrate this capability, we'll first create a meta-model from
# the Road Test core model, then apply that meta-model to a design of
# 5,000 experiments to create a set of meta-model results to visualize.

# %%
mm = model.create_metamodel_from_design('lhs')
mm_design = mm.design_experiments(n_samples=5000)
mm_results = mm.run_experiments(mm_design)
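
# %% [markdown]
# As a quick check (an extra step, not in the original walkthrough), the
# results come back as one row per design point, so the row counts should
# line up with the 5,000 experiments we requested.

# %%
# Meta-model runs are cheap, so confirming the shape is nearly instantaneous.
assert len(mm_results) == len(mm_design) == 5000
mm_results.head()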

# %% [markdown]
# The `contrast_experiments` function in the `emat.analysis` sub-package can
# automatically create a scatter plot matrix, using a very similar interface
# to the `display_experiments` function.  The primary difference between these
# two functions is that `contrast_experiments` takes two sets of experiments
# as arguments instead of one.  The resulting plots are also not colorized
# based on the role of each factor in the scope; instead, colors are used
# to differentiate the two datasets.

# %%
from emat.analysis import contrast_experiments
contrast_experiments(road_scope, lhs_results, mm_results)