In his widely published studies and meta-analyses of the literature on computer-assisted instruction (CAI), Richard E. Clark has decried the lack of carefully controlled research, challenging almost every study in which a computer-based intervention produced significant post-test proficiency gains over a non-computer-based intervention. We report on a randomized study in a medical school setting in which the confounders Clark found to plague most such research were carefully controlled. PlanAlyzer is a microcomputer-based, self-paced, case-based, event-driven system for medical education, developed and used in carefully controlled trials in a second-year medical school curriculum to test the hypothesis that students with access to the interactive programs could integrate their didactic knowledge more effectively and/or efficiently than students with access only to traditional, "nonintelligent" textual materials. PlanAlyzer presents cases, then elicits and critiques a student's approach to the diagnosis of two common medical disorders: anemias and chest pain. It uses text, hypertext, images, and critiquing theory. Students were randomized: one half became the experimental group, receiving the interactive PlanAlyzer cases in anemia, while the other half became the controls, receiving exactly the same content in text format. Later in each year the groups crossed over, the controls becoming the experimentals for a similar intervention with the cardiology PlanAlyzer cases. Preliminary results at the end of the first two full trials show that the programs achieved most of the proposed instructional objectives, along with significant gains in efficiency and economy: substituting PlanAlyzer for classroom instruction saved 96 faculty hours while maintaining high student achievement.
In terms of student proficiency and efficiency, the 328 students in the trials over two years were able to accomplish the project's instructional objectives, and the experimentals did so in 43% less time than the controls while achieving the same level of mastery. In spite of these significant efficiency findings, however, there were no significant proficiency differences (as measured by current factual and higher-order multiple-choice post-tests) between the experimental and control groups. Very careful controls were used to avoid what Clark has found to be the most common confounders of CAI research. Accordingly, this research confirmed Clark's rival hypothesis: the computer, in itself, does not appear to contribute to proficiency gains, at least as measured by our limited post-testing. Clark's position is that the computer is primarily a vehicle, much as a pill or a hypodermic needle is a vehicle for delivering a drug. The hypodermic needle can deliver the drug more efficiently than the pill can (just as, our research indicates, the computer can deliver subject-matter content more efficiently), but the same content is delivered. At the same time, we confirmed our own hypothesis with respect to the efficiency gains attributable to the computer. Going beyond Clark's research, however, we may be teaching processes (experience in problem solving, clinical reasoning, and pattern recognition) both more effectively and more efficiently with the computer, processes that our current post-tests do not adequately measure.
Our ongoing research suggests additional inquiry in several areas: better evaluation instruments to measure the clinical reasoning skills PlanAlyzer was designed to teach; the addition of more advanced cases, to determine whether these might transform the computer group's efficiency gains into proficiency gains; and the addition of enhanced graphic decision-support tools and other pedagogical enhancements, including cognitive feedback, to strengthen PlanAlyzer's power to teach complex concepts of medical decision-making.