Parameter Estimation

Example created by Wilson Rocha Lacerda Junior

Here we import the NARMAX model, the metric for model evaluation, and the methods to generate sample data for tests. We also import pandas to display the model results.

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sysidentpy.polynomial_basis import PolynomialNarmax
from sysidentpy.metrics import root_relative_squared_error
from sysidentpy.utils.generate_data import get_miso_data, get_siso_data

Generating 1 input 1 output sample data

The data is generated by simulating the following model:

\(y_k = 0.2y_{k-1} + 0.1y_{k-1}x_{k-1} + 0.9x_{k-1} + e_{k}\)

If colored_noise is set to True:

\(e_{k} = 0.8\nu_{k-1} + \nu_{k}\)

where \(x\) is a uniformly distributed random variable and \(\nu\) is a Gaussian distributed variable with \(\mu=0\) and \(\sigma=0.1\).
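
To make the difference equation concrete, below is a minimal stand-alone NumPy sketch of this simulation (the get_siso_data function imported above generates the data for us; the uniform range of \(x\) is an assumption made only for illustration).

# stand-alone sketch of the simulated system (illustrative only);
# get_siso_data handles the actual data generation below
rng = np.random.default_rng(seed=42)
n = 1000
x = rng.uniform(-1, 1, n)   # input: uniform (range assumed for illustration)
nu = rng.normal(0, 0.1, n)  # noise: Gaussian with mu=0 and sigma=0.1
y = np.zeros(n)
for k in range(1, n):
    # y_k = 0.2*y_{k-1} + 0.1*y_{k-1}*x_{k-1} + 0.9*x_{k-1} + e_k
    y[k] = 0.2 * y[k - 1] + 0.1 * y[k - 1] * x[k - 1] + 0.9 * x[k - 1] + nu[k]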

In the next example we will generate a dataset with 1000 samples, using white noise and selecting 90% of the data to train the model.

x_train, x_valid, y_train, y_valid = get_siso_data(n=1000,
                                                   colored_noise=False,
                                                   sigma=0.001,
                                                   train_percentage=90)

There are several methods available for parameter estimation:

  • least_squares

  • total_least_squares

  • recursive_least_squares

  • least_mean_squares

  • affine_least_mean_squares

  • least_mean_squares_sign_error

  • normalized_least_mean_squares

  • least_mean_squares_normalized_sign_error

  • least_mean_squares_sign_regressor

  • least_mean_squares_normalized_sign_regressor

  • least_mean_squares_sign_sign

  • least_mean_squares_normalized_sign_sign

  • least_mean_squares_normalized_leaky

  • least_mean_squares_leaky

  • least_mean_squares_fourth

  • least_mean_squares_mixed_norm

Polynomial NARMAX models are linear-in-the-parameters, so least squares based methods work well for most cases (use the extended least squares algorithm when dealing with colored noise).
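
For example, a minimal sketch of that colored-noise workflow, reusing only the calls shown in this example and simply flipping the colored_noise and extended_least_squares flags, could look like this:

# sketch: colored-noise data handled with the extended least squares algorithm
x_train_c, x_valid_c, y_train_c, y_valid_c = get_siso_data(n=1000,
                                                           colored_noise=True,
                                                           sigma=0.001,
                                                           train_percentage=90)
model = PolynomialNarmax(non_degree=2,
                         n_terms=3,
                         extended_least_squares=True,
                         ylag=2, xlag=2,
                         estimator='least_squares',
                         )
model.fit(x_train_c, y_train_c)
yhat = model.predict(x_valid_c, y_valid_c)
print(root_relative_squared_error(y_valid_c, yhat))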

However, the user can also choose recursive and stochastic gradient descent methods (in this case, the least mean squares algorithm and its variants) for that task.

Choosing the method is straightforward: pass any of the methods mentioned above to the estimator parameter, as in the comparison sketch after the note below.

  • Note: The adaptive filters have specific parameters that need to be tuned. In the following examples we will use the default values. More examples regarding parameter tuning will be available soon. For now, the user can read the method documentation for more information; a tuning sketch also follows the Least Mean Squares example below.
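
As a quick comparison of a few of the estimators above under their default settings, one could loop over the estimator names (a sketch built from the same calls used throughout this example):

# sketch: compare a few estimators (default hyperparameters) on the same data
for estimator_name in ['least_squares',
                       'total_least_squares',
                       'recursive_least_squares',
                       'least_mean_squares']:
    model = PolynomialNarmax(non_degree=2,
                             n_terms=3,
                             extended_least_squares=False,
                             ylag=2, xlag=2,
                             estimator=estimator_name,
                             )
    model.fit(x_train, y_train)
    yhat = model.predict(x_valid, y_valid)
    print(estimator_name, root_relative_squared_error(y_valid, yhat))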

Total Least Squares

# total least squares
model = PolynomialNarmax(non_degree=2,
                         n_terms=3,
                         extended_least_squares=False,
                         ylag=2, xlag=2,
                         estimator='total_least_squares',
                         )

model.fit(x_train, y_train)
yhat = model.predict(x_valid, y_valid)
rrse = root_relative_squared_error(y_valid, yhat)
print(rrse)

results = pd.DataFrame(model.results(err_precision=8,
                                     dtype='dec'),
                       columns=['Regressors', 'Parameters', 'ERR'])

print(results)
0.0018530175563774912
      Regressors Parameters         ERR
0        x1(k-2)     0.9000  0.95654752
1         y(k-1)     0.2001  0.04040759
2  x1(k-1)y(k-1)     0.1000  0.00304121

Recursive Least Squares

# recursive least squares
model = PolynomialNarmax(non_degree=2,
                         n_terms=3,
                         extended_least_squares=False,
                         ylag=2, xlag=2,
                         estimator='recursive_least_squares',
                         )

model.fit(x_train, y_train)
yhat = model.predict(x_valid, y_valid)
rrse = root_relative_squared_error(y_valid, yhat)
print(rrse)

results = pd.DataFrame(model.results(err_precision=8,
                                     dtype='dec'),
                       columns=['Regressors', 'Parameters', 'ERR'])

print(results)
0.001835947345516444
      Regressors Parameters         ERR
0        x1(k-2)     0.9001  0.95654752
1         y(k-1)     0.2001  0.04040759
2  x1(k-1)y(k-1)     0.1005  0.00304121

Least Mean Squares

# least mean squares
model = PolynomialNarmax(non_degree=2,
                         n_terms=3,
                         extended_least_squares=False,
                         ylag=2, xlag=2,
                         estimator='least_mean_squares',
                         )

model.fit(x_train, y_train)
yhat = model.predict(x_valid, y_valid)
rrse = root_relative_squared_error(y_valid, yhat)
print(rrse)

results = pd.DataFrame(model.results(err_precision=8,
                                     dtype='dec'),
                       columns=['Regressors', 'Parameters', 'ERR'])

print(results)
0.015706349502363732
      Regressors Parameters         ERR
0        x1(k-2)     0.8969  0.95654752
1         y(k-1)     0.1968  0.04040759
2  x1(k-1)y(k-1)     0.0749  0.00304121
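
The accuracy of the least mean squares variants depends strongly on the step size. Assuming this version exposes the LMS step size as a mu keyword argument (an assumption here; check the method documentation for the exact name and default), a tuning sketch could look like this:

# sketch: same LMS model with a smaller step size
# (mu as the step size keyword is an assumption; see the method docs)
model = PolynomialNarmax(non_degree=2,
                         n_terms=3,
                         extended_least_squares=False,
                         ylag=2, xlag=2,
                         estimator='least_mean_squares',
                         mu=0.01,
                         )
model.fit(x_train, y_train)
yhat = model.predict(x_valid, y_valid)
print(root_relative_squared_error(y_valid, yhat))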

Affine Least Mean Squares

# affine least mean squares
model = PolynomialNarmax(non_degree=2,
                         n_terms=3,
                         extended_least_squares=False,
                         ylag=2, xlag=2,
                         estimator='affine_least_mean_squares',
                         )

model.fit(x_train, y_train)
yhat = model.predict(x_valid, y_valid)
rrse = root_relative_squared_error(y_valid, yhat)
print(rrse)

results = pd.DataFrame(model.results(err_precision=8,
                                     dtype='dec'),
                       columns=['Regressors', 'Parameters', 'ERR'])

print(results)
0.0018472762495629913
      Regressors Parameters         ERR
0        x1(k-2)     0.8999  0.95654752
1         y(k-1)     0.2001  0.04040759
2  x1(k-1)y(k-1)     0.1000  0.00304121