Abductive diagnostic problem-solving systems use causal relations to infer plausible diagnostic hypotheses. An important but controversial issue for such models is what characteristics should define the most plausible hypotheses. While there are theoretical predictions relevant to this issue, there are almost no empirical data on which to base rational decisions. Accordingly, this study examines four different criteria of hypothesis plausibility in diagnosing the site of brain damage in 100 medical patients. The criteria examined are (1) naive minimal cardinality, (2) irredundancy, (3) most probable (Bayesian), and (4) minimal cardinality when adjacency relations are taken into account. Model performance when these different hypothesis plausibility criteria are used confirms the previously predicted inadequacy of minimal cardinality. It also indicates that irredundancy ('minimality'), the criterion most widely used in current AI models, is not useful in this setting because of the large number of alternative, implausible hypotheses it produces. The most interesting result is that a modified minimal cardinality criterion produces the best hypotheses when measured as the ratio of agreements with human experts per hypothesis generated. In addition, comparing the results of this study to two previous rule-based systems for a similar application indicates that abductive diagnostic systems can be very powerful as application programs. These results, useful in themselves, underscore the need for more systematic empirical studies of abductive problem-solving models.
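To make the contrast between the first two criteria concrete, the following is a minimal sketch (not taken from the paper) of set-covering abduction over hypothetical causal relations: each disorder is mapped to the manifestations it can cause, an explanation is a set of disorders covering all observed findings, and the two criteria select among the candidate covers. The disorder and manifestation names (`d1`, `m1`, etc.) are invented for illustration.

```python
from itertools import combinations

# Hypothetical causal relations: disorder -> set of manifestations it can cause.
causes = {
    "d1": {"m1", "m2"},
    "d2": {"m2", "m3"},
    "d3": {"m1", "m3"},
    "d4": {"m1", "m2", "m3"},
}
observed = {"m1", "m2", "m3"}  # findings to be explained

def covers(hypothesis):
    """True if the disorders in `hypothesis` jointly explain every observed finding."""
    return observed <= set().union(*(causes[d] for d in hypothesis)) if hypothesis else False

def irredundant(hypothesis):
    """A cover from which no disorder can be removed without losing coverage."""
    return covers(hypothesis) and not any(covers(hypothesis - {d}) for d in hypothesis)

# Enumerate all candidate hypotheses (feasible only for tiny examples).
all_hyps = [set(c) for r in range(1, len(causes) + 1)
            for c in combinations(causes, r)]

# Criterion (2): irredundancy admits every non-trimmable cover.
irredundant_covers = [h for h in all_hyps if irredundant(h)]

# Criterion (1): minimal cardinality keeps only the smallest covers.
min_size = min(len(h) for h in irredundant_covers)
minimal_cardinality = [h for h in irredundant_covers if len(h) == min_size]
```

On this toy instance minimal cardinality yields the single hypothesis {d4}, while irredundancy also admits {d1, d2}, {d1, d3}, and {d2, d3}, illustrating in miniature the paper's observation that irredundancy can produce a large number of alternative hypotheses.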
Number of pages: 16
Journal: Journal of Experimental and Theoretical Artificial Intelligence
State: Published - 1991