Code example #1
def plot_regret_with_min(dataset):
    """Plot the log-scale regret of the observations against the known minimum."""
    observations = dataset.observations.numpy()
    arg_min_idx = tf.squeeze(tf.argmin(observations, axis=0))

    suboptimality = observations - F_MINIMUM.numpy()
    ax = plt.gca()
    plot_regret(suboptimality,
                ax,
                num_init=num_initial_points,
                idx_best=arg_min_idx)

    ax.set_yscale("log")
    ax.set_ylabel("Regret")
    ax.set_ylim(0.001, 100000)
    ax.set_xlabel("# evaluations")
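
# Usage sketch (an addition, not from the original notebook): assuming `result` is
# the OptimizationResult returned by a Trieste optimization run and matplotlib.pyplot
# is imported as plt, the helper can be applied to the final dataset.
plot_regret_with_min(result.try_get_final_dataset())
plt.show()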
Code example #2
def plot_ask_tell_regret(ask_tell_result):
    """Plot the log-scale regret of the final dataset in an ask-tell optimization result."""
    observations = ask_tell_result.try_get_final_dataset().observations.numpy()
    arg_min_idx = tf.squeeze(tf.argmin(observations, axis=0))

    suboptimality = observations - SCALED_BRANIN_MINIMUM.numpy()
    ax = plt.gca()
    plot_regret(suboptimality,
                ax,
                num_init=num_initial_points,
                idx_best=arg_min_idx)

    ax.set_yscale("log")
    ax.set_ylabel("Regret")
    ax.set_ylim(0.001, 100)
    ax.set_xlabel("# evaluations")
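
# Usage sketch (an addition, not from the original notebook): assuming `ask_tell` is a
# Trieste AskTellOptimizer, its to_result() method packages the current state as an
# optimization result that the helper above can plot (method names may vary between
# Trieste versions).
plot_ask_tell_regret(ask_tell.to_result())
plt.show()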
Code example #3
)
fig.show()

# %% [markdown]
# We can also visualise how each successive point compares to the current best.
#
# We produce two plots. The left-hand plot shows the observations (crosses and dots), the current best (orange line), and the start of the optimization loop (blue line). The right-hand plot is the same as the previous two-dimensional contour plot, but without the resulting observations. The best point is shown in each plot (purple dot).

# %%
import matplotlib.pyplot as plt
from util.plotting import plot_regret, plot_bo_points

suboptimality = observations - SCALED_BRANIN_MINIMUM.numpy()
_, ax = plt.subplots(1, 2)
plot_regret(suboptimality,
            ax[0],
            num_init=num_initial_points,
            idx_best=arg_min_idx)
plot_bo_points(query_points,
               ax[1],
               num_init=num_initial_points,
               idx_best=arg_min_idx)

ax[0].set_yscale("log")
ax[0].set_ylabel("Regret")
ax[0].set_ylim(0.001, 100)
ax[0].set_xlabel("# evaluations")

# %% [markdown]
# We can visualise the model over the objective function by plotting the mean and 95% confidence intervals of its predictive distribution. As with the data before, we can get the model with `.try_get_final_model()`.

# %%
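from util.plotting_plotly import plot_gp_plotly

# A sketch of the visualisation cell (reconstructed by analogy with the plot_gp_plotly
# call in code example #4 below, so treat it as an assumption); depending on the
# Trieste version, the underlying GPflow model may need to be passed as
# result.try_get_final_model().model instead.
fig = plot_gp_plotly(
    result.try_get_final_model(),
    search_space.lower,
    search_space.upper,
    grid_density=30,
)
fig.show()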
Code example #4
    idx_best=arg_min_idx,
    fig=fig,
)
fig.show()

# %% [markdown]
# We can also visualise how each successive point compares to the current best.
#
# We produce two plots. The left-hand plot shows the observations (crosses and dots), the current best (orange line), and the start of the optimization loop (blue line). The right-hand plot is the same as the previous two-dimensional contour plot, but without the resulting observations. The best point is shown in each plot (purple dot).

# %%
import matplotlib.pyplot as plt
from util.plotting import plot_regret, plot_bo_points

_, ax = plt.subplots(1, 2)
plot_regret(observations, ax[0], num_init=num_initial_points, idx_best=arg_min_idx)
plot_bo_points(
    query_points, ax[1], num_init=num_initial_points, idx_best=arg_min_idx
)

# %% [markdown]
# We can visualise the model over the objective function by plotting the mean and 95% confidence intervals of its predictive distribution. As with the data before, we can get the model with `.try_get_final_models()` and indexing with `OBJECTIVE`.

# %%
from util.plotting_plotly import plot_gp_plotly

fig = plot_gp_plotly(
    result.try_get_final_models()[OBJECTIVE].model,
    search_space.lower,
    search_space.upper,
    grid_density=30,
)
Code example #5
plt.show()

# %% [markdown]
# Finally, we compare the regret from the non-batch strategy (left panel) to the regret from the batch strategy (right panel).
# In the following plots each marker represents a query point. The x-axis is the index of the query point (where the first queried point has index 0), and the y-axis is the observed value. The vertical blue line denotes the end of initialisation/start of optimisation. Green points satisfy the constraint, red points do not.

# %%
from util.plotting import plot_regret

mask_fail = constraint_data.observations.numpy() > Sim.threshold
batch_mask_fail = batch_constraint_data.observations.numpy() > Sim.threshold

fig, ax = plt.subplots(1, 2, sharey="all")
plot_regret(
    data[OBJECTIVE].observations.numpy(),
    ax[0],
    num_init=num_initial_points,
    mask_fail=mask_fail.flatten()
)
plot_regret(
    batch_data[OBJECTIVE].observations.numpy(),
    ax[1],
    num_init=num_initial_points,
    mask_fail=batch_mask_fail.flatten()
)
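
# Label the panels to match the description above (titles and labels added for
# clarity; they are not part of the original notebook).
ax[0].set_title("Non-batch")
ax[1].set_title("Batch")
ax[0].set_ylabel("Observed value")
ax[0].set_xlabel("Query point index")
ax[1].set_xlabel("Query point index")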

# %% [markdown]
# ## LICENSE
#
# [Apache License 2.0](https://github.com/secondmind-labs/trieste/blob/develop/LICENSE)
Code example #6
# For this particular problem (and random seed), we see that the `LocalPenalizationAcquisitionFunction` provides more effective batch optimization, finding a solution with an order of magnitude smaller regret than `BatchMonteCarloExpectedImprovement` under the same optimization budget.

# %%
from util.plotting import plot_regret

qei_observations = (
    qei_result.try_get_final_dataset().observations - BRANIN_MINIMUM
)
qei_min_idx = tf.squeeze(tf.argmin(qei_observations, axis=0))

local_penalization_observations = (
    local_penalization_result.try_get_final_dataset().observations
    - BRANIN_MINIMUM
)
local_penalization_min_idx = tf.squeeze(
    tf.argmin(local_penalization_observations, axis=0)
)

_, ax = plt.subplots(1, 2)
plot_regret(qei_observations.numpy(), ax[0], num_init=5, idx_best=qei_min_idx)
ax[0].set_yscale("log")
ax[0].set_ylabel("Regret")
ax[0].set_ylim(0.01, 100)
ax[0].set_xlabel("# evaluations")
ax[0].set_title("Batch-EI")

plot_regret(local_penalization_observations.numpy(),
            ax[1],
            num_init=5,
            idx_best=local_penalization_min_idx)
ax[1].set_yscale("log")
ax[1].set_xlabel("# evaluations")
ax[1].set_ylim(0.01, 100)
ax[1].set_title("Local Penalization")
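
# %% [markdown]
# As a quick numerical check of the comparison above (an illustrative addition, not
# part of the original notebook), we can print the smallest regret reached by each
# strategy.

# %%
print(f"Batch-EI minimum regret: {tf.reduce_min(qei_observations).numpy()}")
print(
    "Local penalization minimum regret: "
    f"{tf.reduce_min(local_penalization_observations).numpy()}"
)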
Code example #7
)
fig.update_layout(height=800, width=800)
fig.show()

# %% [markdown]
# We see that the DGP model does a much better job at understanding the structure of the function. The standard Gaussian process model has a large signal variance and small lengthscales, which do not result in a good model of the true objective. On the other hand, the DGP model is at least able to infer the local structure around the observations.
#
# We can also plot the regret curves of the two models side-by-side.

# %%

gp_suboptimality = gp_observations - F_MINIMIZER.numpy()
dgp_suboptimality = dgp_observations - F_MINIMIZER.numpy()

_, ax = plt.subplots(1, 2)
plot_regret(dgp_suboptimality, ax[0], num_init=num_initial_points, idx_best=dgp_arg_min_idx)
plot_regret(gp_suboptimality, ax[1], num_init=num_initial_points, idx_best=gp_arg_min_idx)

ax[0].set_yscale("log")
ax[0].set_ylabel("Regret")
ax[0].set_ylim(0.5, 3)
ax[0].set_xlabel("# evaluations")
ax[0].set_title("DGP")

ax[1].set_title("GP")
ax[1].set_yscale("log")
ax[1].set_ylim(0.5, 3)
ax[1].set_xlabel("# evaluations")

# %% [markdown]
# We might also expect that the DGP model will do better on higher-dimensional data. We explore this by testing a higher-dimensional version of the Michalewicz dataset.
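
# %% [markdown]
# For reference, below is a minimal sketch of the Michalewicz objective in d dimensions (the standard definition with steepness m = 10 on the domain [0, pi]^d). This is an illustrative stand-in only; Trieste also ships predefined Michalewicz objectives that the original notebook likely uses.

# %%
import math

import tensorflow as tf


def michalewicz(x: tf.Tensor, m: int = 10) -> tf.Tensor:
    """Michalewicz function evaluated on a batch of points x with shape [N, d]."""
    d = x.shape[-1]
    i = tf.range(1, d + 1, dtype=x.dtype)
    # f(x) = -sum_i sin(x_i) * sin(i * x_i^2 / pi)^(2m), minimised over [0, pi]^d
    terms = tf.sin(x) * tf.sin(i * x**2 / math.pi) ** (2 * m)
    return -tf.reduce_sum(terms, axis=-1, keepdims=True)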
Code example #8
# Rescale the observations using std_y and mean_y (undoing the output normalisation)
# and locate the best point found so far
predicted = observations * std_y + mean_y
arg_min_idx = tf.squeeze(tf.argmin(predicted, axis=0))

# Report the best grain boundary found (note the sign flip on the printed predicted
# value) and the step at which it was queried
print(f"grain boundary id: {int(query_points[arg_min_idx, :]) + 1}")
print(f"Predicted value: {-predicted[arg_min_idx, :]}")
print(f"Optimization step: {arg_min_idx}")



# %%
import matplotlib.pyplot as plt
from util.plotting import plot_regret

fig, ax = plt.subplots(figsize=(10, 5))
plot_regret(predicted, ax, num_init=num_initial_points, idx_best=arg_min_idx)
plt.gca().invert_yaxis()
plt.savefig('Step_Eng.jpg')


# %%
# id_gb = np.linspace(0,N-1,N)
# plt.scatter(id_gb, y0)

# %%

ls_list = [
    step.models[OBJECTIVE].model.kernel.lengthscales.numpy()  # type: ignore
    for step in result.history + [result.final_result.unwrap()]
]
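
# %%
# Follow-up sketch (an addition, not part of the original notebook): visualise how
# the kernel lengthscales collected above evolve over the optimization steps.
import numpy as np

ls_array = np.stack(ls_list)  # [num_steps, num_dims], or [num_steps] for a scalar lengthscale
plt.figure()
plt.plot(ls_array)
plt.xlabel("Optimization step")
plt.ylabel("Lengthscale")
plt.show()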