fig = model.plot(forecast)
fig.gca().plot(df_validp.ds, df_validp['y'], 'r.', label='validation')  # overlay validation points in red
plt.legend()

# As you can see, the predictions are poor in some periods and fairly close in others. Let's take a closer look at the forecast period.

fig = model.plot(forecast)
fig.gca().plot(df_validp.ds, df_validp['y'], 'r.', label='validation')  # overlay validation points in red
plt.xlim(pd.to_datetime(["2013-06-15", "2013-08-15"]))
plt.ylim([10, 24])

# The model has found annual and weekly seasonalities. We can take a closer look at these components using `.plot_components()`.

model.plot_components(forecast)

# Now we can see which days of the week are associated with higher energy consumption (it's not surprising to see Saturday and Sunday) and also how the time of year affects consumption.

# ## Cross-validation

# We have created a model and forecast the future, but we still don't know how good the model is.
#
# So, as before, we need a training set and a validation set: we train the model on the training set and then measure the accuracy of its predictions on the validation set using error metrics (see the sketch below).
#
#
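# A minimal sketch of that evaluation step, assuming the `forecast` and `df_validp` frames from above overlap on the `ds` column; `yhat` is Prophet's point-forecast column.

import numpy as np

# align Prophet's predictions with the held-out observations on their timestamps
eval_df = forecast[['ds', 'yhat']].merge(df_validp[['ds', 'y']], on='ds')

# mean absolute error of the point forecast over the validation set
mae = np.mean(np.abs(eval_df['y'] - eval_df['yhat']))
print(f'validation MAE: {mae:.3f}')
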
# One issue with this approach is that even when we get a number for a model's prediction accuracy, how do we know that number is reliable? Say we are comparing two models: the mean absolute error for model A is 0.5 and for model B it is 0.45. How do we know that B is really better than A and didn't just get lucky on this particular data set?
#
# One way to make a more reliable comparison is to evaluate both models over multiple sections of the data. This approach is called cross-validation. In Prophet, we start by training the model on the data from the beginning up to a certain point (the cut-off) and then predict a number of time steps ahead (the horizon). We then move the cut-off forward by a certain period and repeat the process. Computing the metrics for each model over these multiple sections of the data gives a much more reliable comparison in the end, as sketched in the code below.
#
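# Prophet ships helpers for exactly this in `prophet.diagnostics`. Below is a minimal sketch using the fitted `model` from above; the `initial`, `period`, and `horizon` windows are illustrative choices, not values taken from this notebook (older installs expose the same functions under the `fbprophet` package name).

from prophet.diagnostics import cross_validation, performance_metrics

# rolling-origin evaluation: train on `initial`, predict `horizon` ahead,
# then slide the cut-off forward by `period` and repeat
df_cv = cross_validation(model,
                         initial='365 days',   # size of the first training window
                         period='90 days',     # how far the cut-off moves each fold
                         horizon='30 days')    # how far ahead each fold predicts

# aggregate error metrics (MSE, RMSE, MAE, MAPE, coverage, ...) by horizon
df_metrics = performance_metrics(df_cv)
df_metrics.head()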