    ax[i].tricontourf(data.triang, rom.predict([param]))
    ax[i].set_title('Predicted snapshots at inlet velocity = {}'.format(param))

# We now calculate the approximation error, which measures how close our reduced solution is to the full-order solution/simulation, using the **k-fold cross-validation** strategy: we pass the number of splits to the `ReducedOrderModel.kfold_cv_error(n_splits)` method, which operates as follows:
#
# 1. Split the dataset (parameters/snapshots) into $k$ groups (folds).
# 2. Use $k-1$ groups to compute the reduced space, leaving one group out for testing.
# 3. Use the approximation/interpolation method to predict each snapshot in the testing group.
# 4. Compute the error for each snapshot in the testing group as the difference between the predicted and the original snapshot.
# 5. Average these errors to obtain a single error value for the testing fold.
# 6. Repeat the procedure, each time using a different group for testing and the remaining $k-1$ groups to compute the reduced space.
# 7. In the end we obtain $k$ error values, one per fold, which can themselves be averaged into a single error estimate.
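# The steps above can be sketched in plain NumPy. This is a hypothetical illustration, not the EZyRB implementation: a 1-nearest-neighbour lookup stands in for the real "reduced space + interpolation" pipeline, and the names (`kfold_cv_error`, `fit`, `predict`) are invented for the sketch.

```python
import numpy as np

def kfold_cv_error(params, snapshots, fit, predict, n_splits=5, seed=0):
    """Hypothetical sketch of k-fold CV error (NOT the EZyRB implementation)."""
    rng = np.random.default_rng(seed)
    # Step 1: split the dataset indices into k folds.
    folds = np.array_split(rng.permutation(len(params)), n_splits)
    errors = []
    for k in range(n_splits):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(n_splits) if j != k])
        # Step 2: build the model on the k-1 training folds.
        model = fit(params[train], snapshots[train])
        # Steps 3-4: predict each test snapshot and measure its relative error.
        fold_err = [np.linalg.norm(predict(model, p) - s) / np.linalg.norm(s)
                    for p, s in zip(params[test], snapshots[test])]
        # Step 5: average the errors over the testing fold.
        errors.append(np.mean(fold_err))
    # Steps 6-7: the loop repeats for every fold; one averaged error per fold.
    return np.array(errors)

# Toy stand-in for "reduced space + interpolation": 1-nearest-neighbour lookup.
fit = lambda P, S: (P, S)
predict = lambda model, p: model[1][np.argmin(np.abs(model[0] - p))]

params = np.linspace(0.1, 1.0, 20)
snapshots = np.column_stack([np.sin(2 * params), np.cos(2 * params)])
fold_errors = kfold_cv_error(params, snapshots, fit, predict, n_splits=5)
print(fold_errors.shape)  # one averaged error per fold -> (5,)
```

# Averaging `fold_errors` then gives the single error estimate of step 7.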

# In[7]:

errors = rom.kfold_cv_error(n_splits=5)
print('Average error for each fold:')
for e in errors:
    print('  ', e)
print('\nAverage error = {}'.format(errors.mean()))

# Another strategy for calculating the approximation error is **leave-one-out**, available through the `ReducedOrderModel.loo_error()` method. It is equivalent to setting the number of folds equal to the number of snapshots (e.g., in this case, `n_splits` = 500), and it operates as follows:
# 1. Combine all the snapshots except one.
# 2. Compute the reduced space.
# 3. Use the approximation/interpolation method to predict the removed snapshot.
# 4. Compute the error as the difference between the predicted snapshot and the original removed one.
# 5. Repeat this procedure for each snapshot in the database to obtain the full error vector.
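# Leave-one-out is just the k-fold procedure with single-element test folds. A minimal, self-contained NumPy sketch (again hypothetical, not the EZyRB implementation, with a 1-nearest-neighbour lookup standing in for the reduced-space prediction):

```python
import numpy as np

params = np.linspace(0.1, 1.0, 10)
snapshots = np.column_stack([np.sin(params), np.cos(params)])

loo_errors = []
for i in range(len(params)):
    # Step 1: combine all snapshots except the i-th one.
    train = np.delete(np.arange(len(params)), i)
    # Steps 2-3 (toy stand-in for "reduced space + interpolation"):
    # predict with the snapshot of the nearest remaining parameter.
    j = train[np.argmin(np.abs(params[train] - params[i]))]
    pred = snapshots[j]
    # Step 4: relative error between prediction and the held-out snapshot.
    loo_errors.append(np.linalg.norm(pred - snapshots[i])
                      / np.linalg.norm(snapshots[i]))

# Step 5: one error per snapshot in the database.
loo_errors = np.array(loo_errors)
print(loo_errors.shape)  # -> (10,)
```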
#
# It is worth mentioning that this strategy is more time-consuming: we have 500 snapshots, so the algorithm performs the order reduction and calculates the approximation error 500 times. For this reason, we have commented out the next line of code, in order to limit the computational effort needed to run this tutorial. Uncomment it only if you are a really brave person!

# In[8]: