Extended Least Squares¶
Example created by Wilson Rocha Lacerda Junior
Here we import the NARMAX model, the metric for model evaluation, and the methods to generate sample data for the tests. We also import pandas to display the results as a table.
pip install sysidentpy
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sysidentpy.model_structure_selection import FROLS
from sysidentpy.basis_function._basis_function import Polynomial
from sysidentpy.metrics import root_relative_squared_error
from sysidentpy.utils.generate_data import get_siso_data
from sysidentpy.utils.display_results import results
from sysidentpy.utils.plotting import plot_residues_correlation, plot_results
from sysidentpy.residues.residues_correlation import (
    compute_residues_autocorrelation,
    compute_cross_correlation,
)
Generating 1 input 1 output sample data¶
The data is generated by simulating the following model: $y_k = 0.2y_{k-1} + 0.1y_{k-1}x_{k-1} + 0.9x_{k-2} + e_{k}$
If colored_noise is set to True:
$e_{k} = 0.8\nu_{k-1} + \nu_{k}$
where $x$ is a uniformly distributed random variable and $\nu$ is a Gaussian-distributed variable with $\mu=0$ and standard deviation $\sigma$ defined by the user.
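For reference, a minimal NumPy sketch of this generating process could look like the code below. This is not the implementation of get_siso_data; the simulate_siso helper and the input range $[-1, 1)$ are assumptions made for illustration only.
import numpy as np

def simulate_siso(n=1000, sigma=0.2, colored_noise=True, seed=None):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1, 1, n)        # uniformly distributed input (range assumed)
    nu = rng.normal(0, sigma, n)     # white Gaussian noise with std sigma
    # e_k = 0.8*nu_{k-1} + nu_k when colored_noise is True, else e_k = nu_k
    e = nu + 0.8 * np.r_[0.0, nu[:-1]] if colored_noise else nu
    y = np.zeros(n)
    for k in range(2, n):
        y[k] = 0.2 * y[k - 1] + 0.1 * y[k - 1] * x[k - 1] + 0.9 * x[k - 2] + e[k]
    return x.reshape(-1, 1), y.reshape(-1, 1)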
In the next example we will generate 1000 samples with colored noise ($\sigma = 0.2$) and select 90% of the data to train the model.
x_train, x_valid, y_train, y_valid = get_siso_data(
    n=1000, colored_noise=True, sigma=0.2, train_percentage=90
)
Build the model¶
First, we will train a model without the Extended Least Squares algorithm for comparison purposes.
basis_function = Polynomial(degree=2)
model = FROLS(
    order_selection=False,
    n_terms=3,
    extended_least_squares=False,
    ylag=2,
    xlag=2,
    info_criteria="aic",
    estimator="least_squares",
    basis_function=basis_function,
)
model.fit(X=x_train, y=y_train)
yhat = model.predict(X=x_valid, y=y_valid)
rrse = root_relative_squared_error(y_valid, yhat)
print(rrse)
0.4864715504437732
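To inspect the model visually, the plotting and residue utilities imported at the beginning of the notebook can be used as in the sketch below (the keyword arguments shown, such as n and title, are illustrative):
plot_results(y=y_valid, yhat=yhat, n=1000)
ee = compute_residues_autocorrelation(y_valid, yhat)
plot_residues_correlation(data=ee, title="Residues", ylabel="$e^2$")
x1e = compute_cross_correlation(y_valid, yhat, x_valid)
plot_residues_correlation(data=x1e, title="Residues", ylabel="$x_1e$")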
Clearly something is wrong with the obtained model. See the basic_steps notebook to compare the results obtained using the same data but without colored noise. Let's take a look at what is wrong.
r = pd.DataFrame(
    results(
        model.final_model,
        model.theta,
        model.err,
        model.n_terms,
        err_precision=8,
        dtype="sci",
    ),
    columns=["Regressors", "Parameters", "ERR"],
)
print(r)
     Regressors  Parameters             ERR
0       x1(k-2)  9.1595E-01  7.59322318E-01
1        y(k-1)  2.6843E-01  7.29718483E-02
2 x1(k-1)y(k-1)  7.0081E-02  1.68008211E-03
Biased parameter estimation¶
As we can observe above, the model structure is exactly the same as the one that generated the data. You can see that the ERR ordered the terms in the correct way. This is an important note regarding the Error Reduction Ratio algorithm used here: it is very robust to colored noise!
That is a great feature! However, although the structure is correct, the model parameters are not OK: we have a biased estimation! The real parameter for $y_{k-1}$ is $0.2$, not the $\approx 0.27$ estimated above.
In this case, we are actually modeling using a NARX model, not a NARMAX. The MA part exists to allow an unbiased estimation of the parameters, and that is what the Extended Least Squares algorithm provides. Remember: if the data have only white noise, a NARX model is fine.
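To see where the bias comes from, write the regression in matrix form, $y = \Psi\theta + \xi$, where $\Psi$ is the matrix of regressors and $\xi$ is the residue vector (a standard argument; this notation is not used elsewhere in the notebook). The least squares estimate is
$\hat{\theta} = (\Psi^\top\Psi)^{-1}\Psi^\top y = \theta + (\Psi^\top\Psi)^{-1}\Psi^\top\xi$
If the noise is white, $\Psi$ and $\xi$ are uncorrelated and the second term vanishes on average. With colored noise, $e_k = 0.8\nu_{k-1} + \nu_k$, the regressor $y_{k-1}$ also depends on $\nu_{k-1}$, so $E[\Psi^\top\xi] \neq 0$ and the estimate is biased. The Extended Least Squares algorithm appends lagged residues (the MA terms) to $\Psi$ so that the remaining error becomes approximately white and the bias is removed.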
Before applying the Extended Least Squares Algorithm we will run several NARX models to check how different the estimated parameters are from the real ones.
import seaborn as sns
import matplotlib.pyplot as plt
parameters = np.zeros([3, 50])
for i in range(50):
    x_train, x_valid, y_train, y_valid = get_siso_data(
        n=3000, colored_noise=True, train_percentage=90
    )
    model.fit(X=x_train, y=y_train)
    parameters[:, i] = model.theta.flatten()
sns.set()
pal = sns.cubehelix_palette(3, rot=-0.5, dark=0.3)
ax = sns.kdeplot(parameters.T[:, 0])
ax = sns.kdeplot(parameters.T[:, 1])
ax = sns.kdeplot(parameters.T[:, 2])
# plotting a vertical line where the real values must lie
ax = plt.axvline(x=0.1, c="k")
ax = plt.axvline(x=0.2, c="k")
ax = plt.axvline(x=0.9, c="k")
Using the Extended Least Squares algorithm¶
As shown in the figure above, we have a problem estimating the parameter for $y_{k-1}$. Now we will use the Extended Least Squares algorithm.
In SysIdentPy, just set extended_least_squares to True (and define elag, the maximum lag of the noise terms) and the algorithm will be applied.
basis_function = Polynomial(degree=2)
parameters = np.zeros([3, 50])
for i in range(50):
    x_train, x_valid, y_train, y_valid = get_siso_data(
        n=3000, colored_noise=True, train_percentage=90
    )
    model = FROLS(
        order_selection=False,
        n_terms=3,
        extended_least_squares=True,
        ylag=2,
        xlag=2,
        elag=2,
        info_criteria="aic",
        estimator="least_squares",
        basis_function=basis_function,
    )
    model.fit(X=x_train, y=y_train)
    parameters[:, i] = model.theta.flatten()
sns.set()
pal = sns.cubehelix_palette(3, rot=-0.5, dark=0.3)
ax = sns.kdeplot(parameters.T[:, 0])
ax = sns.kdeplot(parameters.T[:, 1])
ax = sns.kdeplot(parameters.T[:, 2])
# plotting a vertical line where the real values must lie
ax = plt.axvline(x=0.1, c="k")
ax = plt.axvline(x=0.2, c="k")
ax = plt.axvline(x=0.9, c="k")
Great! Now we have an unbiased estimation of the parameters!
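As a quick check, you can compare the average of the estimated parameters over the 50 runs with the true values (a small sketch; it assumes the parameter ordering follows the regressor ordering shown in the results table above):
true_theta = np.array([0.9, 0.2, 0.1])  # x1(k-2), y(k-1), x1(k-1)y(k-1)
print("mean estimates:", parameters.mean(axis=1))
print("bias:", parameters.mean(axis=1) - true_theta)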
Note¶
The Extended Least Squares is an iterative algorithm. In SysIdentPy, the number of iterations is fixed at 30 because it is known from the literature that the algorithm converges quickly (usually within 10 to 20 iterations).
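Schematically, each iteration appends lagged residues to the regressor matrix and re-estimates the parameters. The sketch below illustrates the concept only and is not the SysIdentPy implementation; psi, y, and the lag_matrix helper are assumed/hypothetical.
# Simplified sketch of the Extended Least Squares idea (not the actual SysIdentPy code).
# psi: NARX regressor matrix, y: output vector; lag_matrix is a hypothetical helper
# that stacks xi(k-1), ..., xi(k-elag) as extra columns.
theta = np.linalg.lstsq(psi, y, rcond=None)[0]  # initial (possibly biased) NARX estimate
xi = y - psi @ theta                            # one-step-ahead residues
for _ in range(30):                             # fixed number of iterations
    psi_e = np.hstack([psi, lag_matrix(xi, elag=2)])    # add MA (residue) regressors
    theta_e = np.linalg.lstsq(psi_e, y, rcond=None)[0]  # re-estimate all parameters
    xi = y - psi_e @ theta_e                            # update the residues
theta_process = theta_e[: psi.shape[1]]         # keep only the process-term parameters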