Tony Gardner-Medwin has kindly given permission to post some fragments he has put together from an informal review by Ahlgren (1969) of early work involving confidence testing.
See here for the article, which consists of some collated remarks delivered in the symposium “Confidence on Achievement Tests — Theory, Applications” at the 1969 meeting of the AERA and NCME.
You can see a lot more about certainty-based marking / confidence-based marking (CBM) on Tony’s University College London website: http://www.ucl.ac.uk/lapt/. His site contains and links to many publications on CBM, as well as information on the LAPT (London Agreed Protocol for Teaching) software, which uses CBM in the presentation of learning resources.
The US Government ERIC (Education Resources Information Center) at http://www.eric.ed.gov/ has over a million education documents going back to 1966. There is a huge amount of relevant prior art here – if you can find it!
Here are 10 interesting documents I found from the earlier days of using computers for testing and assessment:
1. Proceedings of the Invitational Conference on Testing Problems, held in New York in 1953. Includes several papers describing test scoring machines which had been in active use for more than a decade at the time.
2. Detailed description of an early computer-based Instructional Management System (Conwell approach) including tests, with a sophisticated approach to objectives, learner characteristics, learning styles and categorization of learning.
3. Review of automated testing at the time by the Office of Naval Research. Considers test anxiety, validity and reliability, natural language processing, automated interpretation and more.
4. Description of a computer-assisted diagnostic assessment given to medical students at the University of Illinois. It was created in a program called Coursewriter and allowed students to answer questions, skip them and return to review them later, and receive a feedback printout 30 minutes after the test.
5. A 200-page survey of US military computer-based training systems in 1977. Lists about 60 authoring tools/procedures, includes mention of PLATO and TICCIT, and gives some coverage of computer assessment.
6. Description of testing at BYU, where computerization helped them deliver 300,000 tests per year.
7. Detailed description of software for computer adaptive testing for the US Armed Services Vocational Aptitude Battery tests: a technical description and user manual. Features include automatically calling a proctor if too many keying errors are made, ensuring that questions similar to previous ones are not selected at random, and holding demographic data within the system.
8. Reviews of 18 sets of microcomputer item banking software: AIMS (Academic Instructional Measurement System); CREATE-A-TEST; Exam Builder; Exams and Examiner; MicroCAT; Multiple Choice Files; P.D.Q. Builder; Quiz Rite; Teacher Create Series (5 programs); TAP (Testing Authoring Program); TestBank Test Rite; Testmaster; Testmaster Series; Tests, Caicreate, Caitake; Tests Made Easy; TestWorks; and the Sage. Several programs offered item banking by topic, random selection and password control.
9. Report from the University of Pittsburgh on the state of the art in computer-assisted test construction (using computers to generate items or select items to form a test), including a lot about levels of difficulty, use of IRT and test blueprints.
10. Description of using the MicroCAT computerized testing system within the US Navy. Explains features of the software, including a central proctor station which controls testing.
It’s great to see the huge variety and innovation in computer testing from decades ago. The 1953 material is unlikely to be useful prior art today, but some of the 1970s or 1980s material could be.
John Kleeman, June 6, 2012