Learning general concepts from training examples has long been a major topic of machine learning research. Explanation-Based Learning (EBL) is an attractive approach that formulates the weakest generalized sufficient description of a concept from a single positive example.
In this thesis, we propose a new learning method, EBL-N, which can learn from a near-miss example. EBL-N comprises two major components for handling a near-miss example: an explanation method that generates the near-miss explanation tree, and a learning method that formulates a necessary description of the concept from that tree. The proposed method can therefore learn not only a sufficient description but also a necessary description. In addition, EBL-N can generate a more general sufficient description than traditional EBL.
An experimental concept learning system, ACLS, has been implemented to demonstrate the usefulness and expressiveness of the proposed scheme.