Expert Systems Shell-Based Programs for Medical Education
Expertise in medical diagnosis takes a long time to develop. Medical educators would like to find ways to minimize that time so that doctors in the early stages of their residencies can approximate the knowledge of expert diagnosticians, and thereby reduce instances of misdiagnosis.
One possibility for attaining this goal lies in artificial intelligence (AI) and the expert systems derived from AI strategies. This project used an AI-derived tool (KBIT, or Knowledge-Based Inference Tool) developed by one of the investigators to strengthen the diagnostic capabilities of medical students.
Project staff began by determining the ability of the tool to make highly reliable and valid distinctions between the diagnoses of experts and novices. They were aware that the experts' success resulted from superior knowledge and experience rather than greater cognitive skills. They also found that successful diagnosticians are characterized by the ability to recognize patterns, to match symptoms with typical disease patterns, and to discriminate among closely related patterns.
Through interviews with over a hundred expert practitioners in four medical problem areas (e.g., weakness, elevated creatinine), the investigators were able to extract the disease prototypes with which the experts were working and to represent the knowledge base and prototypes in computer programs built with the AI tool. They found that the programs produced highly reliable (.71 to .96, depending on the disease problem area) distinctions between novice and expert diagnosticians.
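The report does not describe KBIT's internal representation, but prototype-based matching of this kind is straightforward to sketch. The following Python fragment is a minimal illustration, not the project's code; the disease names, findings, and weights are hypothetical, loosely modeled on the "elevated creatinine" problem area.

    from dataclasses import dataclass, field

    @dataclass
    class DiseasePrototype:
        # A disease prototype: characteristic findings with expert-assigned weights.
        name: str
        findings: dict[str, float] = field(default_factory=dict)

    def match_score(prototype: DiseasePrototype, presenting: set[str]) -> float:
        # Fraction of the prototype's total finding weight present in the case.
        total = sum(prototype.findings.values())
        if total == 0:
            return 0.0
        matched = sum(w for f, w in prototype.findings.items() if f in presenting)
        return matched / total

    # Hypothetical prototypes for the "elevated creatinine" problem area.
    prerenal = DiseasePrototype("prerenal azotemia",
        {"volume depletion": 3.0, "BUN/Cr ratio > 20": 2.0, "low urine sodium": 2.0})
    atn = DiseasePrototype("acute tubular necrosis",
        {"muddy brown casts": 3.0, "recent hypotension": 2.0, "high urine sodium": 2.0})

    case = {"volume depletion", "BUN/Cr ratio > 20", "high urine sodium"}
    for p in sorted([prerenal, atn], key=lambda p: match_score(p, case), reverse=True):
        print(f"{p.name}: {match_score(p, case):.2f}")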
Students, on the basis of their experience and knowledge, were required to compile profiles of each type of illness containing the signs and symptoms associated with it. These profiles were later compared for accuracy, pattern match, and pattern discrimination with those developed by experts.
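As a further illustration, pattern match and pattern discrimination for a student profile might be scored as follows. The use of set-based profiles and Jaccard overlap is an assumption, not the report's method, and the two diagnoses and their findings (drawn loosely from the "weakness" problem area) are hypothetical.

    def overlap(student: set[str], expert: set[str]) -> float:
        # Jaccard similarity between a student's profile and an expert profile.
        if not student and not expert:
            return 1.0
        return len(student & expert) / len(student | expert)

    def discrimination(student: set[str], target: set[str], confusable: set[str]) -> float:
        # How much better the profile fits the target disease than a close relative.
        return overlap(student, target) - overlap(student, confusable)

    # Hypothetical expert profiles for two closely related diagnoses.
    expert_gbs  = {"ascending weakness", "areflexia", "recent infection"}
    expert_myas = {"fatigable weakness", "ptosis", "normal reflexes"}
    student     = {"ascending weakness", "areflexia", "ptosis"}

    print(f"pattern match vs. GBS: {overlap(student, expert_gbs):.2f}")
    print(f"discrimination: {discrimination(student, expert_gbs, expert_myas):.2f}")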
Staff then developed expert system programs that would allow the medical trainees to encounter the presenting symptoms of a large number of cases in specific problem areas. After arriving at a diagnosis, the students could compare their analytic processes with those of the experts, thus learning how the experts arrived at their diagnoses.
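A minimal sketch of such a case-review loop, assuming a simple text interface (the report does not describe the actual one), with invented placeholder case text and expert rationale:

    # Case content and expert rationales below are hypothetical placeholders.
    CASES = [
        {
            "presenting": "68-year-old with nausea, reduced urine output, "
                          "and elevated creatinine",
            "expert_diagnosis": "prerenal azotemia",
            "expert_reasoning": "Volume depletion plus a BUN/Cr ratio over 20 "
                                "fits the prerenal prototype better than ATN.",
        },
    ]

    def run_session(cases: list) -> None:
        for case in cases:
            print(case["presenting"])
            answer = input("Your diagnosis: ").strip().lower()
            if answer != case["expert_diagnosis"]:
                print("Expert diagnosis:", case["expert_diagnosis"])
            # Reveal the expert's analytic process after the trainee commits.
            print("Expert reasoning:", case["expert_reasoning"])

    run_session(CASES)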
The work was evaluated in two stages. First, the ability of the AI tool to represent cases in such a way as to distinguish between novices and experts had to be demonstrated. This effort involved a statistical comparison of the diagnostic accuracy of experts and medical students when presented with information in this form.
Testing the value of the instructional approaches based on the AI tool required comparing the diagnostic results obtained by untrained students, by students trained conventionally, and by those trained using strategies derived from the tool. The results of these comparisons all demonstrated the validity of these expert systems programs in improving diagnostic training in particular diseases.
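The report does not name the statistical procedure used in these comparisons. As one plausible sketch, a Welch t-test on per-student diagnostic accuracy scores might look like this; the scores below are invented placeholders, not project data.

    # Placeholder accuracy scores (fraction of cases correct per student); the
    # choice of Welch's t-test is also an assumption, not the report's method.
    from scipy import stats

    conventional = [0.55, 0.52, 0.58, 0.50, 0.57]
    ai_trained   = [0.68, 0.72, 0.65, 0.70, 0.74]

    t, p = stats.ttest_ind(ai_trained, conventional, equal_var=False)
    print(f"AI-trained vs. conventional: t = {t:.2f}, p = {p:.4f}")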
The AI-trained students' levels of diagnostic accuracy were statistically superior to those of control or conventionally trained students. This project succeeded in creating problem-specific, computer-based instruments to improve the training of medical students in medical diagnosis. The construct validity of the KBIT and the decision-making paradigm on which it is based apparently allow generalizability across medical problem areas. The KBIT thus has the potential to form a foundation for a new generation of educationally sound, "intelligent" instructional and assessment tools.
In designing these instruments it is important to define the problem in such a way as to allow for sufficiently fine discriminations. These discriminations include differences among varying levels of expertise (e.g., beginning vs. experienced residents) and differences between closely related diseases. Producing sufficiently fine-grained distinctions requires the patient development of large databases.
A major difficulty in disseminating this educational strategy is the medical community's general lack of understanding of the theoretical bases of artificial intelligence. Medical educators are likely to resist these techniques until more of them understand how they work and the basis for their validity.
Project Continuation and Recognition
With the college's support, work continues on additional disease-specific areas. The project directors gave a number of presentations, published several papers, and received the Thomas Hale Hamm New Investigator Award from the Research in Medical Education subgroup of the Association of American Medical Colleges.
Information about the project, including additional unpublished materials, is available from:
Texas College of Osteopathic Medicine
3500 Camp Bowie Blvd.
Fort Worth, TX 76107