TY - JOUR
T1 - Outlier-robust kernel hierarchical-optimization RLS on a budget with affine constraints
AU - Slavakis, Konstantinos
AU - Yukawa, Masahiro
N1 - Funding Information:
K. Slavakis was supported by NSF CIF award 1718796, and M. Yukawa by JSPS KAKENHI grant number JP18H01446.
Publisher Copyright:
© 2021 IEEE.
PY - 2021
Y1 - 2021
AB - This paper introduces a non-parametric learning framework to combat outliers in online, multi-output, and nonlinear regression tasks. A hierarchical-optimization problem underpins the learning task: search in a reproducing kernel Hilbert space (RKHS) for a function that minimizes a sample-average ℓp-norm (1 ≤ p ≤ 2) error loss defined on data contaminated by noise and outliers, under affine constraints defined as the set of minimizers of a quadratic loss on a finite number of faithful data devoid of noise and outliers (side information). To surmount the computational obstacles posed by the choice of loss and the potentially infinite-dimensional RKHS, approximations of the ℓp-norm loss, as well as a novel twist on the criterion of approximate linear dependency, are devised to keep the computational-complexity footprint of the proposed algorithm bounded over time. Numerical tests on datasets showcase the robust behavior of the advocated framework against different types of outliers, under a low computational load, while at the same time satisfying the affine constraints, in contrast to state-of-the-art methods, which are constraint-agnostic.
KW - Adaptive filtering
KW - Kernel
KW - Online learning
KW - Outliers
KW - RLS
UR - http://www.scopus.com/inward/record.url?scp=85114864917&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85114864917&partnerID=8YFLogxK
U2 - 10.1109/ICASSP39728.2021.9413415
DO - 10.1109/ICASSP39728.2021.9413415
M3 - Conference article
AN - SCOPUS:85114864917
VL - 2021-June
SP - 5335
EP - 5339
JO - Proceedings - ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing
JF - Proceedings - ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing
SN - 0736-7791
T2 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2021
Y2 - 6 June 2021 through 11 June 2021
ER -
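
The abstract's budget-keeping device is a twist on the approximate-linear-dependency (ALD) criterion: a new sample joins the kernel dictionary only if its feature-space image is not already well approximated by the span of the current dictionary. What follows is a minimal, generic Python sketch of the plain ALD test (in the spirit of Engel et al.'s kernel RLS), not the paper's novel variant or the authors' code; the Gaussian kernel, the threshold nu, and all names are illustrative assumptions.

import numpy as np

# Sketch of the plain ALD (approximate linear dependency) test used to keep
# kernel dictionaries -- and hence per-step complexity -- bounded over time.
# Illustrative reconstruction only: kernel choice, threshold, and class names
# are assumptions, not taken from the paper.

def gaussian_kernel(x, y, sigma=1.0):
    """Gaussian (RBF) kernel between two 1-D vectors."""
    d = x - y
    return np.exp(-(d @ d) / (2.0 * sigma ** 2))

class ALDDictionary:
    def __init__(self, nu=1e-2, sigma=1.0):
        self.nu = nu        # ALD threshold: larger values -> sparser dictionary
        self.sigma = sigma
        self.atoms = []     # stored dictionary samples
        self.Kinv = None    # inverse Gram matrix of the dictionary

    def ald_test(self, x):
        """Squared RKHS distance of phi(x) from the span of the dictionary."""
        if not self.atoms:
            return np.inf
        k = np.array([gaussian_kernel(x, a, self.sigma) for a in self.atoms])
        a = self.Kinv @ k                                  # best coefficients
        return gaussian_kernel(x, x, self.sigma) - k @ a   # residual delta

    def maybe_add(self, x):
        """Admit x into the dictionary iff it fails the ALD test."""
        if self.ald_test(x) <= self.nu:
            return False
        self.atoms.append(np.asarray(x, dtype=float))
        n = len(self.atoms)
        K = np.array([[gaussian_kernel(a, b, self.sigma)
                       for b in self.atoms] for a in self.atoms])
        # Recomputed from scratch for clarity; practical implementations
        # update the inverse incrementally. Jitter keeps the inverse stable.
        self.Kinv = np.linalg.inv(K + 1e-10 * np.eye(n))
        return True

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d = ALDDictionary(nu=0.05, sigma=0.5)
    for _ in range(200):
        d.maybe_add(rng.normal(size=2))
    print(f"dictionary size after 200 samples: {len(d.atoms)}")

Under these (assumed) settings the dictionary typically saturates well below the 200 presented samples: the bounded-dictionary behavior the title's "on a budget" refers to.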