Example 1
#The network requires a single neuron in the output layer with a linear activation to predict
#the number of rental bikes at the next time step.

#Once the network is specified, it must be compiled into an efficient symbolic representation
#using a backend mathematical library, such as TensorFlow or Theano.

#In compiling the network, we must specify a loss function and an optimization algorithm.
#We will use “mean_squared_error” as the loss function, as it closely matches the RMSE we are
#interested in, along with the efficient Adam optimization algorithm.
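
#Since RMSE is simply the square root of MSE, minimizing one minimizes the other. As a quick
#sanity check, the minimal sketch below (using hypothetical y_true and y_pred arrays, not the
#tutorial's data) recovers RMSE from the same squared-error quantity the loss optimizes.

import numpy as np

y_true = np.array([120.0, 135.0, 148.0])  # hypothetical actual bike counts
y_pred = np.array([118.0, 140.0, 150.0])  # hypothetical model predictions

mse = np.mean((y_true - y_pred) ** 2)  # what 'mean_squared_error' minimizes
rmse = np.sqrt(mse)                    # the metric we ultimately report
print('MSE: %.3f, RMSE: %.3f' % (mse, rmse))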

#Using the Keras Sequential API, the snippet below creates and compiles the network.

from keras.models import Sequential
from keras.layers import Dense, LSTM

model = Sequential()
model.add(
    LSTM(neurons,
         batch_input_shape=(batch_size, X.shape[1], X.shape[2]),
         stateful=True))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
#Once compiled, the network can be fit to the training data. Because the network is stateful,
#we must control when its internal state is reset. Therefore, we must manually manage the
#training process one epoch at a time, across the desired number of epochs.

#By default, the samples within an epoch are shuffled prior to being exposed to the network.
#Again, this is undesirable for the LSTM because we want the network to build up state as
#it learns across the sequence of observations. We can disable the shuffling of samples by
#setting “shuffle” to “False”.
#Below is a loop that manually fits the network to the training data.
# Manually run one epoch at a time so the LSTM's internal state can be
# reset at the end of each pass over the training data.
for i in range(nb_epoch):
    model.fit(X, y, epochs=1, batch_size=batch_size, shuffle=False)
    model.reset_states()
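
#After training, the same stateful network can make one-step forecasts. Below is a minimal
#sketch, assuming a hypothetical held-out array X_test with the same feature layout as X;
#for a stateful model its sample count must be divisible by batch_size.

model.reset_states()  # clear the state accumulated during training
yhat = model.predict(X_test, batch_size=batch_size)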