For educational software to succeed in purpose and outcome, students need to be able to apply new skills in other learning environments. However, the degree to which CAI actually helps students has rarely been quantified. Unfortunately, there is scant standardization among software publishers; in contrast to standardized-test developers, they have not been held accountable for validating their own claims. Field-testing of software products and validation of learning claims by publishers remain the exception rather than the rule (Buckleitner, 1999). Of the publishers who do subject their software to rigorous scientific research, few publish the results. It is rare for educational software to be subjected to empirical testing with treatment and control groups and comparative analysis that identifies CAI as a factor, rather than merely a variable, in increasing student test scores. It is rarer still for any educational software publisher to show concrete evidence of long-term gains after a software treatment.
In 2003, Merit Software commissioned consultants at the Marshall University Graduate College in South Charleston, West Virginia, to conduct a treatment-versus-control-group research study using several of its reading and language arts programs in classrooms at Calhoun County Middle/High School in West Virginia. The purpose was to evaluate the effects of Merit programs on students in grades 6 and 8. The results showed that treatment-group students achieved greater growth on several sub-tests of West Virginia's standardized test (Jones, Staats, Bowling, Bickel, Cunningham, & Cadle, 2004).
Study results demonstrated that treatment-group students scored better than the control group in several sub-tests of the Stanford Achievement Test, Ninth Edition (SAT-9). With a suitable complement of controls in place, treatment-group students increased their SAT-9 Reading Vocabulary score by 13.1% of the total sample mean and their Reading Comprehension score by 10.5%. Membership in the experimental group also yielded an average gain of 11.1% for the SAT-9 Language Expression sub-test and an average gain of 8.3% for Spelling.
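The metric "percentage of the total sample mean" can be easy to misread: it expresses a raw-score gain relative to the mean score of all students sampled, not a percentile jump or a percentage-point change. A minimal sketch of the arithmetic, using invented numbers rather than figures from the study:

```python
# Illustrates reporting a score gain as a percentage of the total sample
# mean. All numeric values here are hypothetical, not taken from the
# Marshall University study.

def gain_as_pct_of_mean(gain_points: float, sample_mean: float) -> float:
    """Express a raw score gain as a percentage of the sample mean score."""
    return 100.0 * gain_points / sample_mean

# Hypothetical example: a 78.6-point gain measured against a
# sample mean of 600 scaled-score points.
pct = gain_as_pct_of_mean(78.6, 600.0)
print(f"Gain equals {pct:.1f}% of the sample mean")
```

Under these invented numbers, the gain works out to 13.1% of the sample mean; the actual scaled scores behind the study's 13.1% figure are not given in the text above.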
In 2004, Merit Software commissioned Marshall University consultants to conduct a second quantitative research study to follow up and extend the first. Researchers were to evaluate the long-term effects of Merit reading and writing software on students in grades 6, 7, and 8 in Calhoun County Middle/High School classrooms, determining whether the educational gains at Calhoun were short-lived or sustained. Results of the second study showed that treatment-group students outperformed control-group students on sub-tests of the 2004 West Virginia Educational Standards Test (WESTEST). Low-achieving students in the treatment group made continued advances in Reading and Language Arts; their WESTEST scores averaged 4.38 points higher than those of low-achieving students in the control group. Their WESTEST Science scores averaged 2.14 points higher, and their Social Studies scores averaged 8.23 points higher.
Though the second study was relatively short-term and non-intensive, it yielded statistically significant positive results (O’Byrne, Securro, Jones, & Cadle, 2005). The Merit treatment had its greatest impact on middle school students in the lowest class quartile, those below the reading and language arts competency level expected by state guidelines; these students were considered struggling readers and “at-risk.” The second study also confirmed that improved learning in reading and language arts correlated with educational gains in social studies and science.