Example #1
import gpflow
import tensorflow as tf
from gpflow.models import VGP


def _vgp_matern(x: tf.Tensor, y: tf.Tensor) -> VGP:
    likelihood = gpflow.likelihoods.Gaussian()
    kernel = gpflow.kernels.Matern32(lengthscales=0.2)
    m = VGP((x, y), kernel, likelihood)
    # Optimize only the variational parameters (q_mu, q_sqrt); the kernel and
    # likelihood hyperparameters are left at their initial values.
    variational_variables = [m.q_mu.unconstrained_variable, m.q_sqrt.unconstrained_variable]
    gpflow.optimizers.Scipy().minimize(m.training_loss_closure(), variational_variables)
    return m
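
A minimal usage sketch (the synthetic data and the `model` variable below are illustrative only, not part of the original snippet):

import numpy as np

x = tf.convert_to_tensor(np.linspace(0.0, 1.0, 50).reshape(-1, 1))
y = tf.convert_to_tensor(np.sin(12.0 * x.numpy()) + 0.1 * np.random.randn(50, 1))
model = _vgp_matern(x, y)
print(model.elbo().numpy())  # ELBO after the variational parameters have been optimized
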
Example #2
# %%
gpr = GPR(data, kernel=gpflow.kernels.Matern52())

# %% [markdown]
# The log marginal likelihood of the exact GP model is:

# %%
gpr.log_marginal_likelihood().numpy()

# %% [markdown]
# Now we will create an approximate model which approximates the true posterior via a variational Gaussian distribution.<br>We initialize the distribution to be zero mean and unit variance.

# %%
vgp = VGP(data,
          kernel=gpflow.kernels.Matern52(),
          likelihood=gpflow.likelihoods.Gaussian())
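
# %% [markdown]
# As a quick sanity check (a sketch; the attribute names below assume GPflow's `q_mu`/`q_sqrt` parameterization of VGP), the variational distribution indeed starts at zero mean with an identity Cholesky factor:

# %%
vgp.q_mu.numpy()[:3], vgp.q_sqrt.numpy()[0, :3, :3]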

# %% [markdown]
# The log marginal likelihood lower bound (evidence lower bound or ELBO) of the approximate GP model is:

# %%
vgp.elbo().numpy()

# %% [markdown]
# Our initial guess for the variational distribution is clearly not correct, so the ELBO lies below the log marginal likelihood of the exact GPR model. We can optimize the variational parameters to obtain a tighter bound.

# %% [markdown]
# In fact, we only need to take **one step** in the natural gradient direction to recover the exact posterior:

# %%
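# A minimal sketch of that single step, using gpflow.optimizers.NaturalGradient
# (gamma=1.0 corresponds to a full step in the natural-gradient direction):
natgrad_opt = gpflow.optimizers.NaturalGradient(gamma=1.0)
variational_params = [(vgp.q_mu, vgp.q_sqrt)]
natgrad_opt.minimize(vgp.training_loss, var_list=variational_params)

# %% [markdown]
# After this single step, the ELBO should match the log marginal likelihood of the exact GPR model above.

# %%
vgp.elbo().numpy()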