Abstract
Gold-style language learning is a formal theory of learning from examples by algorithmic devices called learning machines. Originally motivated by child language learning, it features the algorithmic synthesis (in the limit) of grammars for formal languages from information about those languages. In traditional Gold-style language learning, learning machines are not provided with negative information, i.e., information about the complements of the input languages. We investigate two approaches to providing small amounts of negative information and demonstrate in each case a strong resulting increase in learning power. Finally, we show that small packets of negative information also lead to increased speed of learning. This result agrees with a psycholinguistic hypothesis of McNeill correlating the availability of parental expansions with the speed of child language development.
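For context, a minimal sketch of the baseline criterion the abstract refers to: Gold's identification in the limit from positive data, usually written TxtEx. The notation below ($T$, $W_i$, TxtEx) is the standard one in this literature and is an assumption here, not quoted from the paper itself.

```latex
% A hedged sketch of Gold-style identification in the limit from
% positive data (TxtEx); notation is standard in this literature,
% not taken from the paper itself.
A \emph{text} $T$ for a language $L$ is an infinite sequence whose
content is exactly $L$; $T[n]$ denotes its first $n$ elements, and
$W_i$ denotes the language generated by grammar $i$.  A machine $M$
\textbf{TxtEx}-identifies $L$ iff, for every text $T$ for $L$,
\[
  (\exists i)\,(\exists n_0)\,(\forall n \ge n_0)\,
  \bigl[\, M(T[n]) = i \;\wedge\; W_i = L \,\bigr].
\]
$M$ identifies a class $\mathcal{L}$ of languages iff it identifies
every $L \in \mathcal{L}$.  A text carries no information about the
complement $\overline{L}$; the paper's variants additionally supply
small, bounded amounts of such negative data.
```

Under this reading, the paper's results compare the classes of languages learnable from texts alone against those learnable when such bounded negative data is also available.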
| Original language | English (US) |
|---|---|
| Pages (from-to) | 273-285 |
| Number of pages | 13 |
| Journal | Journal of Computer and System Sciences |
| Volume | 51 |
| Issue number | 2 |
| DOIs | |
| State | Published - Oct 1995 |
| Externally published | Yes |
All Science Journal Classification (ASJC) codes
- Theoretical Computer Science
- General Computer Science
- Computer Networks and Communications
- Computational Theory and Mathematics
- Applied Mathematics