Information Criteria - Examples¶
Example created by Wilson Rocha Lacerda Junior
Comparing different information criteria methods¶
Here we import the NARMAX model, the metric for model evaluation, and the methods to generate sample data for the tests. We also import pandas to display the results.
pip install sysidentpy
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sysidentpy.model_structure_selection import FROLS
from sysidentpy.basis_function._basis_function import Polynomial
from sysidentpy.metrics import root_relative_squared_error
from sysidentpy.utils.generate_data import get_siso_data
from sysidentpy.utils.display_results import results
from sysidentpy.utils.plotting import plot_residues_correlation, plot_results
from sysidentpy.residues.residues_correlation import (
compute_residues_autocorrelation,
compute_cross_correlation,
)
Generating sample data¶
The data is generated by simulating the following model: $y_k = 0.2y_{k-1} + 0.1y_{k-1}x_{k-1} + 0.9x_{k-1} + e_{k}$
If colored_noise is set to True:
$e_{k} = 0.8\nu_{k-1} + \nu_{k}$
where $x$ is a uniformly distributed random variable and $\nu$ is a Gaussian distributed variable with $\mu=0$ and $\sigma=0.1$.
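To make the data-generating process concrete, the following is a minimal simulation sketch of the white-noise case. The uniform interval $[-1, 1]$ for $x$ and the use of NumPy's default generator are assumptions made for illustration only; the get_siso_data call in the next cell takes care of this for you.
rng = np.random.default_rng()
n = 1000
x = rng.uniform(-1, 1, n)  # assumed input distribution
nu = rng.normal(0, 0.1, n)  # white Gaussian noise with sigma = 0.1
y = np.zeros(n)
for k in range(1, n):
    # y_k = 0.2*y_{k-1} + 0.1*y_{k-1}*x_{k-1} + 0.9*x_{k-1} + e_k, with e_k = nu_k
    y[k] = 0.2 * y[k - 1] + 0.1 * y[k - 1] * x[k - 1] + 0.9 * x[k - 1] + nu[k]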
In the next example we will generate 1000 samples with white noise, using 90% of the data to train the model.
x_train, x_valid, y_train, y_valid = get_siso_data(
n=1000, colored_noise=False, sigma=0.2, train_percentage=90
)
The idea is to show the impact of the information criteria on the selection of the number of terms that compose the final model. You will see why it is an auxiliary tool: letting the algorithm select the number of terms based only on the minimum value of the criterion is not a good idea when dealing with data highly corrupted by noise (even white noise).
Note: You may find different results when running the examples. This is because we are not setting a fixed random seed for the sample data. However, the main analysis remains the same.
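If you want reproducible sample data in your own runs, one option (a sketch, assuming get_siso_data draws from NumPy's global random state) is to fix the seed before generating the data:
np.random.seed(42)  # any fixed seed makes the generated samples reproducible
x_train, x_valid, y_train, y_valid = get_siso_data(
    n=1000, colored_noise=False, sigma=0.2, train_percentage=90
)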
AIC¶
basis_function = Polynomial(degree=2)
model = FROLS(
order_selection=True,
n_info_values=15,
extended_least_squares=False,
ylag=2,
xlag=2,
info_criteria="aic",
estimator="least_squares",
basis_function=basis_function,
)
model.fit(X=x_train, y=y_train)
yhat = model.predict(X=x_valid, y=y_valid)
rrse = root_relative_squared_error(y_valid, yhat)
print(rrse)
r = pd.DataFrame(
results(
model.final_model,
model.theta,
model.err,
model.n_terms,
err_precision=8,
dtype="sci",
),
columns=["Regressors", "Parameters", "ERR"],
)
print(r)
plot_results(y=y_valid, yhat=yhat, n=1000)
ee = compute_residues_autocorrelation(y_valid, yhat)
plot_residues_correlation(data=ee, title="Residues", ylabel="$e^2$")
x1e = compute_cross_correlation(y_valid, yhat, x_valid)
plot_residues_correlation(data=x1e, title="Residues", ylabel="$x_1e$")
xaxis = np.arange(1, model.n_info_values + 1)
plt.plot(xaxis, model.info_values)
plt.xlabel("n_terms")
plt.ylabel("Information Criteria")
FutureWarning: Passing a string to define the estimator will raise an error in v0.4.0. You'll have to use FROLS(estimator=LeastSquares()) instead of passing a string like 'least_squares'.
0.4013781100113083
      Regressors  Parameters             ERR
0        x1(k-2)  8.9927E-01  8.31470481E-01
1         y(k-1)  2.0168E-01  4.10001326E-02
2  x1(k-1)y(k-1)  9.2625E-02  2.71330347E-03
model.info_values
array([-2641.78942999, -2890.08785144, -2907.8115072 , -2907.62848919, -2907.33194129, -2906.49131563, -2905.28985273, -2904.19842636, -2902.82919405, -2901.13658068, -2899.20612214, -2897.22138478, -2895.23592902, -2893.24339593, -2891.25015639])
As can be seen above, the minimum value can make the algorithm choose a model with more terms than necessary (5 terms in the run used to write this example). However, if you check the plot, 3 terms is the best choice: increasing the number of terms beyond 3 does not lead to a better model, since the improvement in the criterion value is very small.
In this case, you should run the model again with the parameter n_terms=3! The ERR algorithm ordered the terms correctly, so you will get the exact same model structure.
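A minimal sketch of that refit, keeping the same data and imports from the cells above and fixing the model size with the n_terms argument instead of relying on order selection:
model = FROLS(
    order_selection=False,
    n_terms=3,  # fix the number of regressors suggested by the analysis above
    extended_least_squares=False,
    ylag=2,
    xlag=2,
    estimator="least_squares",
    basis_function=Polynomial(degree=2),
)
model.fit(X=x_train, y=y_train)
yhat = model.predict(X=x_valid, y=y_valid)
print(root_relative_squared_error(y_valid, yhat))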
AICc¶
basis_function = Polynomial(degree=2)
model = FROLS(
order_selection=True,
n_info_values=15,
extended_least_squares=False,
ylag=2,
xlag=2,
info_criteria="aicc",
estimator="least_squares",
basis_function=basis_function,
)
model.fit(X=x_train, y=y_train)
yhat = model.predict(X=x_valid, y=y_valid)
rrse = root_relative_squared_error(y_valid, yhat)
print(rrse)
r = pd.DataFrame(
results(
model.final_model,
model.theta,
model.err,
model.n_terms,
err_precision=8,
dtype="sci",
),
columns=["Regressors", "Parameters", "ERR"],
)
print(r)
plot_results(y=y_valid, yhat=yhat, n=1000)
ee = compute_residues_autocorrelation(y_valid, yhat)
plot_residues_correlation(data=ee, title="Residues", ylabel="$e^2$")
x1e = compute_cross_correlation(y_valid, yhat, x_valid)
plot_residues_correlation(data=x1e, title="Residues", ylabel="$x_1e$")
xaxis = np.arange(1, model.n_info_values + 1)
plt.plot(xaxis, model.info_values)
plt.xlabel("n_terms")
plt.ylabel("Information Criteria")
0.4013781100113083
      Regressors  Parameters             ERR
0        x1(k-2)  8.9927E-01  8.31470481E-01
1         y(k-1)  2.0168E-01  4.10001326E-02
2  x1(k-1)y(k-1)  9.2625E-02  2.71330347E-03
model.info_values
array([-2641.78496571, -2890.07444362, -2907.78466157, -2907.58369636, -2907.26467671, -2906.39703954, -2905.16401003, -2904.03644661, -2902.62649135, -2900.88855362, -2898.90815375, -2896.86884241, -2894.82416432, -2892.76774474, -2890.7059387 ])
As can be seen above, the minimum value can make the algorithm choose a model with 4 terms here. AICc, however, only shows major differences compared with AIC when the number of samples is small.
In this case, you should also run the model again with the parameter n_terms=3! The ERR algorithm ordered the terms correctly, so you will get the exact same model structure.
BIC¶
basis_function = Polynomial(degree=2)
model = FROLS(
order_selection=True,
n_info_values=15,
extended_least_squares=False,
ylag=2,
xlag=2,
info_criteria="bic",
estimator="least_squares",
basis_function=basis_function,
)
model.fit(X=x_train, y=y_train)
yhat = model.predict(X=x_valid, y=y_valid)
rrse = root_relative_squared_error(y_valid, yhat)
print(rrse)
r = pd.DataFrame(
results(
model.final_model,
model.theta,
model.err,
model.n_terms,
err_precision=8,
dtype="sci",
),
columns=["Regressors", "Parameters", "ERR"],
)
print(r)
plot_results(y=y_valid, yhat=yhat, n=1000)
ee = compute_residues_autocorrelation(y_valid, yhat)
plot_residues_correlation(data=ee, title="Residues", ylabel="$e^2$")
x1e = compute_cross_correlation(y_valid, yhat, x_valid)
plot_residues_correlation(data=x1e, title="Residues", ylabel="$x_1e$")
xaxis = np.arange(1, model.n_info_values + 1)
plt.plot(xaxis, model.info_values)
plt.xlabel("n_terms")
plt.ylabel("Information Criteria")
0.4013781100113083
      Regressors  Parameters             ERR
0        x1(k-2)  8.9927E-01  8.31470481E-01
1         y(k-1)  2.0168E-01  4.10001326E-02
2  x1(k-1)y(k-1)  9.2625E-02  2.71330347E-03
model.info_values
array([-2636.98925993, -2880.4875113 , -2893.410997 , -2888.42780892, -2883.33109095, -2877.69029522, -2871.68866225, -2865.79706581, -2859.62766344, -2853.13487999, -2846.40425139, -2839.61934396, -2832.83371813, -2826.04101497, -2819.24760536])
BIC did a better job in this case! The way it penalizes the number of terms ensures that the minimum value here matched exactly the expected number of terms in the model. Good, but not always the best method!
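You can check this programmatically: the number of terms suggested by the criterion is the (1-based) position of the minimum of model.info_values, the same indexing used for the x axis of the plots above. A quick sketch:
n_selected = int(np.argmin(model.info_values)) + 1  # +1 because index 0 corresponds to 1 term
print(f"BIC suggests {n_selected} terms")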
LILC¶
basis_function = Polynomial(degree=2)
model = FROLS(
order_selection=True,
n_info_values=15,
extended_least_squares=False,
ylag=2,
xlag=2,
info_criteria="lilc",
estimator="least_squares",
basis_function=basis_function,
)
model.fit(X=x_train, y=y_train)
yhat = model.predict(X=x_valid, y=y_valid)
rrse = root_relative_squared_error(y_valid, yhat)
print(rrse)
r = pd.DataFrame(
results(
model.final_model,
model.theta,
model.err,
model.n_terms,
err_precision=8,
dtype="sci",
),
columns=["Regressors", "Parameters", "ERR"],
)
print(r)
plot_results(y=y_valid, yhat=yhat, n=1000)
ee = compute_residues_autocorrelation(y_valid, yhat)
plot_residues_correlation(data=ee, title="Residues", ylabel="$e^2$")
x1e = compute_cross_correlation(y_valid, yhat, x_valid)
plot_residues_correlation(data=x1e, title="Residues", ylabel="$x_1e$")
xaxis = np.arange(1, model.n_info_values + 1)
plt.plot(xaxis, model.info_values)
plt.xlabel("n_terms")
plt.ylabel("Information Criteria")
0.4013781100113083
      Regressors  Parameters             ERR
0        x1(k-2)  8.9927E-01  8.31470481E-01
1         y(k-1)  2.0168E-01  4.10001326E-02
2  x1(k-1)y(k-1)  9.2625E-02  2.71330347E-03
model.info_values
array([-2639.95553475, -2886.42006095, -2902.30982147, -2900.29290822, -2898.16246507, -2895.48794417, -2892.45258602, -2889.52726441, -2886.32413686, -2882.79762824, -2879.03327446, -2875.21464186, -2871.39529085, -2867.56886251, -2863.74172773])
LILC also includes spurious terms. Like AIC, it fails to select the correct number of terms automatically, but you can still choose the right number based on the plot above!
FPE¶
basis_function = Polynomial(degree=2)
model = FROLS(
order_selection=True,
n_info_values=15,
extended_least_squares=False,
ylag=2,
xlag=2,
info_criteria="fpe",
estimator="least_squares",
basis_function=basis_function,
)
model.fit(X=x_train, y=y_train)
yhat = model.predict(X=x_valid, y=y_valid)
rrse = root_relative_squared_error(y_valid, yhat)
print(rrse)
r = pd.DataFrame(
results(
model.final_model,
model.theta,
model.err,
model.n_terms,
err_precision=8,
dtype="sci",
),
columns=["Regressors", "Parameters", "ERR"],
)
print(r)
plot_results(y=y_valid, yhat=yhat, n=1000)
ee = compute_residues_autocorrelation(y_valid, yhat)
plot_residues_correlation(data=ee, title="Residues", ylabel="$e^2$")
x1e = compute_cross_correlation(y_valid, yhat, x_valid)
plot_residues_correlation(data=x1e, title="Residues", ylabel="$x_1e$")
xaxis = np.arange(1, model.n_info_values + 1)
plt.plot(xaxis, model.info_values)
plt.xlabel("n_terms")
plt.ylabel("Information Criteria")
0.4013781100113083
      Regressors  Parameters             ERR
0        x1(k-2)  8.9927E-01  8.31470481E-01
1         y(k-1)  2.0168E-01  4.10001326E-02
2  x1(k-1)y(k-1)  9.2625E-02  2.71330347E-03
model.info_values
array([-2641.78942917, -2890.08784482, -2907.81148488, -2907.62843628, -2907.33183795, -2906.49113706, -2905.28956915, -2904.19800306, -2902.82859134, -2901.1357539 , -2899.20502169, -2897.21995607, -2895.2341125 , -2893.24112709, -2891.24736576])
FPE also failed to select the right number of terms automatically! But, as we pointed out before, information criteria are auxiliary tools: if you look at the plots, all the methods allow you to choose the right number of terms!
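To wrap up, here is a compact sketch (using the same data and settings as above) that runs all five criteria and prints where each one attains its minimum, so the suggested number of terms can be compared side by side:
for criterion in ["aic", "aicc", "bic", "lilc", "fpe"]:
    model = FROLS(
        order_selection=True,
        n_info_values=15,
        extended_least_squares=False,
        ylag=2,
        xlag=2,
        info_criteria=criterion,
        estimator="least_squares",
        basis_function=Polynomial(degree=2),
    )
    model.fit(X=x_train, y=y_train)
    print(criterion, int(np.argmin(model.info_values)) + 1)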
Important Note¶
Here we are dealing with a known model structure! With real data, we do not know the right number of terms, so the methods above stand as excellent tools to help you out!
If you check the metrics above, even the models with more terms show excellent values! But System Identification always searches for the best model structure: Model Structure Selection is the core of NARMAX methods! In this respect, these examples are meant to show the basic concepts and how the algorithms work!