# %% [markdown]
# # Example 1

# %%
from emat.analysis import feature_scores
fs = feature_scores(demo_scope, experiment_results)
fs

# %% [raw] raw_mimetype="text/restructuredtext"
# Note that :func:`feature_scores` depends on the *scope* (to identify which features are
# inputs and which are outputs) and on the *experiment_results*, but not on the model itself.
#
# We can plot each of these input parameters using the `display_experiments` function,
# which can help visualize the underlying data and show why *B* is the most important
# feature in this example.

# %%
from emat.analysis import display_experiments
fig = display_experiments(demo_scope, experiment_results, render=False, return_figures=True)['Y']
fig.update_layout(
    xaxis_title_text =f"A (Feature Score = {fs.data.loc['Y','A']:.3f})",
    xaxis2_title_text=f"B (Feature Score = {fs.data.loc['Y','B']:.3f})",
    xaxis3_title_text=f"C (Feature Score = {fs.data.loc['Y','C']:.3f})",
)
from emat.util.rendering import render_plotly
render_plotly(fig, '.png')

# %% [markdown]
# One important thing to consider is that changing the range of the input parameters 
# in the scope can significantly impact the feature scores, even if the underlying 
# model itself is not changed.  For example, consider what happens to the feature
# scores when we expand the range of the uncertainties:

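# %%
# Hypothetical sketch of that comparison; the names below are assumptions.
# Suppose 'demo-scope-wide.yaml' is a copy of the scope file used above with
# each uncertainty's allowed range widened, and make_demo_model() is a
# hypothetical factory that rebuilds the same model against a given scope.
import emat

wide_scope = emat.Scope('demo-scope-wide.yaml')  # assumed scope file
wide_model = make_demo_model(scope=wide_scope)   # hypothetical factory
wide_results = wide_model.run_experiments(wide_model.design_experiments())
feature_scores(wide_scope, wide_results)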
# %% [markdown]
# # Example 2
#
# When writing experiments to the database, we must identify the design of
# experiments we are writing (all experiments exist within designs) and
# the source of the performance measure results (zero means actual results from a
# core model run, and non-zero values are ID numbers for metamodels). This allows many
# different possible sets of performance measures to be stored for the same set
# of input parameters.

# %%
db2.write_experiment_all(
    scope_name=s2.name,
    design_name='general',
    source=0,
    xlm_df=df2,
)

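# %% [markdown]
# We can read the experiments back from the database to confirm the write;
# `read_experiment_all` returns the inputs and the performance measures for
# a design together in a single DataFrame.

# %%
db2.read_experiment_all(scope_name=s2.name, design_name='general').head()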
# %%
display_experiments(s2, 'general', db=db2, rows=['time_savings'])

# %% [markdown]
# ## Multiple-Design Datasets
#
# The EMAT database is not limited to storing a single design of experiments.  Multiple designs
# can be stored for the same scope.  We'll add a set of univariate sensitivity tests to our
# database, and a "ref" design that contains a single experiment with all inputs set to their
# default values.

# %%
design_uni = model.design_experiments(sampler='uni')
model.run_experiments(design_uni)
model.run_reference_experiment()

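# %% [markdown]
# The database now holds several designs for this scope.  Assuming the model
# above is attached to the same `db2` database, we can list the stored design
# names directly:

# %%
db2.read_design_names(s2.name)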
# %% [markdown]
# # Example 3
#
# Suppose an error has been introduced into the model code that processes
# the lane width for the link.  You might expect that increasing the lane
# width would increase the effective capacity on the link, but as coded, any
# deviation from exactly 10.0 feet will result in substantial extra delay in
# the build travel time, regardless of any other factors.  Running the experiments
# with this broken input will invalidate the entire set of results, but here
# we'll assume that we don't know *a priori* that the lane width parameter is
# broken.
#
# Given this set of experimental results, we can display a scatter plot matrix
# to see the results.  This is a collection of two-dimensional plots, each
# showing a contrast between two factors, typically an input parameter (i.e.
# an uncertainty or a policy lever) and an output performance measure, although
# it is also possible to plot inputs against inputs or outputs against outputs.
#
# The `display_experiments` function in the `emat.analysis` sub-package can
# automatically create a scatter plot matrix that crosses every parameter with
# every measure, simply by providing the scope and the results.  By default,
# plots that display levers are shown in blue, plots that show uncertainties
# are in red.

# %%
from emat.analysis import display_experiments
display_experiments(scope, results)

# %% [markdown]
# The unexpected non-monotonic response function in the second row
# of figures should jump out at the analyst as problematic.
# If we are not expecting this kind of response, we should carefully
# review the model code and results to figure out what (if anything)
# is going wrong here.

# %% [markdown]
# This function also offers the opportunity to display only a particular
# subset of parameters or measures, using the `rows` and `columns`
# arguments.  The same colors are used as in the default full display,
# although if a plot contrasts an uncertainty with a lever, the variable on
# the X axis determines the color; a plot showing only measures is green.
# Because parameters and measures are all required to have unique names
# within a scope, it is not necessary to identify which is which, as
# `display_experiments` can figure it out automatically.

# %%
display_experiments(
    scope, results,
    rows=['build_travel_time'],  # an illustrative subset; names assumed
    columns=['lane_width'],      # the suspect input from this example
)

# %% [markdown]
# # Example 4
# %%
lhs_outcomes = m.read_experiment_measures(design_name='lhs')
lhs_outcomes.head()

# %% [markdown]
# ## Feature Scoring

# %%
m.get_feature_scores('lhs')

# %% [markdown]
# ## Visualization

# %%
from emat.analysis import display_experiments
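# `lhs_results` (inputs and measures together) is assumed to have been read
# from the database earlier in this workflow.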
display_experiments(road_scope, lhs_results, rows=['time_savings', 'net_benefits', 'input_flow'])

# %% [markdown]
# ## Scenario Discovery

# %% [markdown]
# Scenario discovery in exploratory modeling is focused on finding scenarios that are interesting to the user.
# The process generally begins with the identification of particular outcomes that are "of interest",
# and then seeks out what factor or combination of factors can result in
# those outcomes.
#
# There are a variety of methods to use for scenario discovery.  We illustrate a few here.
#

# %% [markdown]
# ### PRIM
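
# %% [markdown]
# As a minimal sketch of the PRIM workflow, we can mark the experiments that
# are "of interest" (here, assuming positive `net_benefits` is the outcome we
# care about) and ask PRIM to find a box in the input space that concentrates
# those cases.  The threshold below is an illustrative choice.

# %%
from emat.analysis.prim import Prim

of_interest = lhs_outcomes['net_benefits'] > 0  # assumed outcome of interest
discovery = Prim(
    m.read_experiment_parameters(design_name='lhs'),
    of_interest,
    threshold=0.8,  # illustrative density threshold
)
box = discovery.find_box()
box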