TY - JOUR
T1 - Online nonlinear estimation via iterative L2-Space projections
T2 - Reproducing kernel of subspace
AU - Ohnishi, Motoya
AU - Yukawa, Masahiro
N1 - Funding Information:
Manuscript received December 11, 2017; revised April 10, 2018; accepted June 3, 2018. Date of publication June 11, 2018; date of current version June 22, 2018. The associate editor coordinating the review of this manuscript and approving it for publication was Dr. Elias Aboutanios. This work was supported in part by the JSPS Grants-in-Aid (15K06081, 15K13986, and 15H02757) and in part by the Scandinavia-Japan Sasakawa Foundation. This paper was presented in part at the 25th European Signal Processing Conference, Kos Island, Greece, August–September 2017. (Corresponding author: Motoya Ohnishi.) M. Ohnishi is with the Department of Electronics and Electrical Engineering, Keio University, Tokyo 108-8345, Japan (e-mail: ohnishi@ykw.elec.keio.ac.jp).
Publisher Copyright:
© 2018 IEEE.
PY - 2018/8/1
Y1 - 2018/8/1
N2 - We propose a novel online learning paradigm for nonlinear-function estimation tasks based on iterative projections in the $L^2$ space with a probability measure reflecting the stochastic property of input signals. The proposed learning algorithm exploits the reproducing kernel of the so-called dictionary subspace, based on the fact that any finite-dimensional space of functions has a reproducing kernel characterized by the Gram matrix. The $L^2$-space geometry provides the best decorrelation property in principle. The proposed learning paradigm differs significantly from the conventional kernel-based learning paradigm in two senses: first, the whole space is not a reproducing kernel Hilbert space; and second, the minimum mean squared error estimator gives the best approximation of the desired nonlinear function in the dictionary subspace. The algorithm preserves efficiency in computing the inner product as well as in updating the Gram matrix when the dictionary grows. Monotone approximation, asymptotic optimality, and convergence of the proposed algorithm are analyzed based on the variable-metric version of the adaptive projected subgradient method. Numerical examples show the efficacy of the proposed algorithm on real data in comparison with a variety of methods, including the extended Kalman filter and batch machine-learning methods such as the multilayer perceptron.
AB - We propose a novel online learning paradigm for nonlinear-function estimation tasks based on iterative projections in the $L^2$ space with a probability measure reflecting the stochastic property of input signals. The proposed learning algorithm exploits the reproducing kernel of the so-called dictionary subspace, based on the fact that any finite-dimensional space of functions has a reproducing kernel characterized by the Gram matrix. The $L^2$-space geometry provides the best decorrelation property in principle. The proposed learning paradigm differs significantly from the conventional kernel-based learning paradigm in two senses: first, the whole space is not a reproducing kernel Hilbert space; and second, the minimum mean squared error estimator gives the best approximation of the desired nonlinear function in the dictionary subspace. The algorithm preserves efficiency in computing the inner product as well as in updating the Gram matrix when the dictionary grows. Monotone approximation, asymptotic optimality, and convergence of the proposed algorithm are analyzed based on the variable-metric version of the adaptive projected subgradient method. Numerical examples show the efficacy of the proposed algorithm on real data in comparison with a variety of methods, including the extended Kalman filter and batch machine-learning methods such as the multilayer perceptron.
KW - Online learning
KW - kernel adaptive filter
KW - metric projection
KW - recursive least squares
UR - http://www.scopus.com/inward/record.url?scp=85048487538&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85048487538&partnerID=8YFLogxK
U2 - 10.1109/TSP.2018.2846271
DO - 10.1109/TSP.2018.2846271
M3 - Article
AN - SCOPUS:85048487538
SN - 1053-587X
VL - 66
SP - 4050
EP - 4064
JO - IEEE Transactions on Signal Processing
JF - IEEE Transactions on Signal Processing
IS - 15
ER -