TY - GEN
T1 - Learning languages by collecting cases and tuning parameters
AU - Sakakibara, Yasubumi
AU - Jantke, Klaus P.
AU - Lange, Steffen
N1 - Publisher Copyright:
© 1994, Springer Verlag. All Rights Reserved.
PY - 1994
Y1 - 1994
N2 - We investigate the problem of case-based learning of formal languages. Case-based reasoning and learning is currently a booming area of artificial intelligence. A formal framework for case-based learning of languages was recently developed by [JL93] in the setting of inductive inference. In this paper, we first show that any indexed class of recursive languages for which finiteness is decidable is case-based representable, but that many classes of languages, including the class of all regular languages, are not case-based learnable with a fixed universal similarity measure, even if both positive and negative examples are presented. Next, we consider a framework of case-based learning in which the learning algorithm is allowed to learn the similarity measure as well. To avoid trivial encoding tricks, we carefully examine to what extent the similarity measure may be learned. By allowing the learner to adjust only a few parameters of the similarity measure, we show that any indexed class of recursive languages whose finiteness problem is decidable is case-based learnable. This implies that all context-free languages are case-based learnable by collecting cases and learning the parameters of the similarity measure.
AB - We investigate the problem of case-based learning of formal languages. Case-based reasoning and learning is currently a booming area of artificial intelligence. A formal framework for case-based learning of languages was recently developed by [JL93] in the setting of inductive inference. In this paper, we first show that any indexed class of recursive languages for which finiteness is decidable is case-based representable, but that many classes of languages, including the class of all regular languages, are not case-based learnable with a fixed universal similarity measure, even if both positive and negative examples are presented. Next, we consider a framework of case-based learning in which the learning algorithm is allowed to learn the similarity measure as well. To avoid trivial encoding tricks, we carefully examine to what extent the similarity measure may be learned. By allowing the learner to adjust only a few parameters of the similarity measure, we show that any indexed class of recursive languages whose finiteness problem is decidable is case-based learnable. This implies that all context-free languages are case-based learnable by collecting cases and learning the parameters of the similarity measure.
UR - http://www.scopus.com/inward/record.url?scp=84981185842&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84981185842&partnerID=8YFLogxK
U2 - 10.1007/3-540-58520-6_88
DO - 10.1007/3-540-58520-6_88
M3 - Conference contribution
AN - SCOPUS:84981185842
SN - 9783540585206
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 532
EP - 546
BT - Algorithmic Learning Theory - 4th International Workshop on Analogical and Inductive Inference, AII 1994 and 5th International Workshop on Algorithmic Learning Theory, ALT 1994, Proceedings
A2 - Arikawa, Setsuo
A2 - Jantke, Klaus P.
PB - Springer Verlag
T2 - 4th International Workshop on Analogical and Inductive Inference, AII 1994 and 5th International Workshop on Algorithmic Learning Theory, ALT 1994
Y2 - 10 October 1994 through 15 October 1994
ER -