Online model-selection and learning for nonlinear estimation based on multikernel adaptive filtering

Osamu Toda, Masahiro Yukawa

    Research output: Contribution to journal › Article › peer-review

    6 Citations (Scopus)


    We study the use of Gaussian kernels with a wide range of scales for nonlinear function estimation. The estimation task can then be split into two sub-tasks: (i) model selection and (ii) learning (parameter estimation) under the selected model. We propose a fully adaptive, all-in-one scheme that jointly carries out the two sub-tasks based on the multikernel adaptive filtering framework. The task is cast as an asymptotic minimization problem of an instantaneous fidelity function penalized by two types of block l1-norm regularizers. Those regularizers enhance the sparsity of the solution in two different block structures, leading to efficient model selection and dictionary refinement. The adaptive generalized forward-backward splitting method is derived to deal with the asymptotic minimization problem. Numerical examples show that the scheme achieves model selection and learning simultaneously, and demonstrate its striking advantages over the multiple kernel learning (MKL) method called SimpleMKL.
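    The abstract can be illustrated with a toy sketch, not the authors' algorithm: a filter holding one coefficient block per Gaussian kernel scale, updated online by a normalized gradient step on the instantaneous squared error followed by block soft-thresholding (the proximity operator of a block l1-norm, which can zero out an entire scale's block and thereby perform model selection). All class names, parameter values, and the normalized update are illustrative assumptions.

    ```python
    import numpy as np

    def gaussian_kernel(x, c, sigma):
        """Gaussian kernel of scale sigma evaluated between x and center c."""
        return np.exp(-np.sum((x - c) ** 2) / (2.0 * sigma ** 2))

    class MultikernelAdaptiveFilter:
        """Toy multikernel adaptive filter (illustrative sketch).

        A fixed dictionary of centers is shared across several Gaussian
        scales; each scale owns one coefficient block.  Each sample triggers
        a normalized gradient step on the instantaneous squared error,
        then a block soft-thresholding (proximal) step that shrinks every
        scale's block as a whole, promoting block sparsity.
        """

        def __init__(self, centers, sigmas, step=0.5, reg=1e-3):
            self.centers = np.asarray(centers)   # dictionary points
            self.sigmas = list(sigmas)           # one block per kernel scale
            self.step = step                     # step size in (0, 2)
            self.reg = reg                       # block-l1 regularization weight
            self.coef = np.zeros((len(self.sigmas), len(self.centers)))

        def _features(self, x):
            # k[q, m] = kernel value of x against center m at scale q
            return np.array([[gaussian_kernel(x, c, s) for c in self.centers]
                             for s in self.sigmas])

        def predict(self, x):
            return float(np.sum(self.coef * self._features(x)))

        def update(self, x, d):
            k = self._features(x)
            e = d - float(np.sum(self.coef * k))
            # normalized gradient step on the instantaneous squared error
            self.coef += self.step * e * k / (np.sum(k * k) + 1e-12)
            # block soft-thresholding: prox of the block l1-norm,
            # applied per scale so a whole block can vanish
            for q in range(len(self.sigmas)):
                norm = np.linalg.norm(self.coef[q])
                if norm > 0.0:
                    self.coef[q] *= max(0.0, 1.0 - self.step * self.reg / norm)
    ```

    Running the filter online on noisy samples of a nonlinear target lets the block shrinkage suppress scales that contribute little, while the surviving blocks fit the function.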

    Original language: English
    Pages (from-to): 236-250
    Number of pages: 15
    Journal: IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences
    Issue number: 1
    Publication status: Published - 2017 Jan 1


    Keywords

    • Adaptive filter
    • Convex projection
    • Proximity operator
    • Reproducing kernels

    ASJC Scopus subject areas

    • Signal Processing
    • Computer Graphics and Computer-Aided Design
    • Applied Mathematics
    • Electrical and Electronic Engineering


