Thanks to Phil Butcher of the Open University for information about this system, which was developed from a report by Professor William Dorn of the University of Denver, a Fulbright-Hays scholar at the UK Open University in 1972-3.
The online computer-based system enabled students at Open University study centres scattered around the UK to:
- receive questions
- answer those questions
- receive immediate, automatic feedback
and tutors in Milton Keynes to:
- monitor the performance of students
- monitor the responses
and both students and tutors to:
- initiate a conversation to discuss and clarify misunderstandings, using the same telephone line used to transmit the questions and answers.
The system was first introduced in 1977 and is described briefly in Bramer, M. (1980) 'Using computers in distance education: The first ten years of the British Open University', Computers and Education, 4, 293-301.
The US Government ERIC (Education Resources Information Center) at http://www.eric.ed.gov/ has over a million education documents going back to 1966. There is a huge amount of relevant prior art here – if you can find it!
Here are 10 interesting documents I found from the earlier days of using computers for testing and assessment:
- Proceedings of the Invitational Conference on Testing Problems in New York in 1953. Includes several papers describing test scoring machines which had already been in active use for more than a decade at the time.
- Detailed description of an early computer-based Instructional Management System (Conwell approach), including tests, with a sophisticated approach to objectives, learner characteristics, learning styles and the categorization of learning.
- Review of automated testing at the time by the Office of Naval Research. Considers test anxiety, validity and reliability, natural language processing, automated interpretation and more.
- Description of a computer-assisted diagnostic assessment given to medical students at the University of Illinois. It was created in a program called Coursewriter and allowed students to answer questions, skip them and come back to review them later, and receive a feedback printout 30 minutes after the test.
- 200-page survey of US military computer-based training systems in 1977. Lists about 60 authoring tools/procedures, includes mention of PLATO and TICCIT, and gives some coverage of computer assessment.
- Description of testing at BYU, where computerization helped them deliver 300,000 tests per year.
- Detailed description of software for computer adaptive testing for the US Armed Services Vocational Aptitude Battery tests: a technical description and user manual. Features include automatically calling the proctor if too many keying errors are made, ensuring that questions similar to previous ones are not selected at random, and holding demographic data within the system.
- Reviews of 18 sets of microcomputer item banking software: AIMS (Academic Instructional Measurement System); CREATE-A-TEST; Exam Builder; Exams and Examiner; MicroCAT; Multiple Choice Files; P.D.Q. Builder; Quiz Rite; Teacher Create Series (5 programs); TAP (Testing Authoring Program); TestBank Test Rite; Testmaster; Testmaster Series; Tests, Caicreate, Caitake; Tests Made Easy; TestWorks; and the Sage. Several programs offered item banking by topic, random selection and password control.
- Report from the University of Pittsburgh about the state of the art in computer-assisted test construction (using computers to generate items or select items to form a test); includes a lot about levels of difficulty, use of IRT and test blueprints.
- Description of using the MicroCAT computerized testing system within the US Navy. Explains features of the software, including a central proctor station which controls testing.
It’s great to see the huge variety and innovation in computer testing from decades ago. The 1953 material is unlikely to be useful prior art today, but some of the 1970s or 1980s material could be.
John Kleeman, June 6, 2012