SToMP project (1992 onwards)

The SToMP (Software Teaching of Modular Physics) project was funded in the UK in 1992 to create a learning environment for some common first year physics courses. From the mid 90s, this included an assessment system that supported the delivery of the following question types:

  • List (radio button)
  • List (check boxes)
  • Rank ordering
  • Pair matching
  • Numeric
  • Text
  • Random numeric
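Purely as an illustration of the kinds of marking rules these question types imply (this is not SToMP's actual code, which was written in Visual Basic, and all names here are invented), scoring for several of the types above could be sketched as:

```python
# Hypothetical sketches of scoring rules for question types like those
# listed above. Function names and conventions are assumptions for
# illustration, not taken from SToMP itself.

def score_radio(correct_index, response):
    """List (radio button): exactly one choice, right or wrong."""
    return 1.0 if response == correct_index else 0.0

def score_checkboxes(correct_set, response_set):
    """List (check boxes): full marks only when the selected set matches."""
    return 1.0 if set(response_set) == set(correct_set) else 0.0

def score_numeric(correct_value, response, tolerance=0.0):
    """Numeric: accept answers within a tolerance of the key."""
    return 1.0 if abs(response - correct_value) <= tolerance else 0.0

def score_rank_order(correct_order, response_order):
    """Rank ordering: partial credit for each item in the right position."""
    hits = sum(1 for a, b in zip(correct_order, response_order) if a == b)
    return hits / len(correct_order)

print(score_rank_order(["a", "b", "c", "d"], ["a", "c", "b", "d"]))  # 0.5
```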

Dick Bacon (r.bacon at …) has shared information about the project and given permission to share the documents below.

He advises that after about six months the project’s Open University representative, Steve Swithenby, pointed out that it really should have an assessment component, and during a six-week sabbatical in Sweden, Dick created a prototype in Visual Basic handling, he believes, single and multiple choice, pairing and ordering questions. Around 1994-95 the real version was produced; see details in the document below. Around 2002 this was replaced by a QTI v1.2 system that outlived the rest of the SToMP project. Dick retains the SToMP name for his current SToMPII QTI v2 system.

Here is a screenshot of the 1994-95 version. It shows a five-question test, with the questions in the formatted text viewer on the right. The response box has a tab for each question; the user has just got the first question wrong and has the feedback displayed. It is a self-test, so results are not recorded and the student can try again as many times as they want.

SToMP screenshot

Some links on SToMP

And here are copies of the SToMP manuals with great detail on the functionality:

  • Development manual for the whole of the SToMP materials, which gives context, dated December 1997 (see chapter 5 for the assessment system) [PDF] [Word]
  • A version of chapter 5 on assessments dated October 1994 [PDF] [Word]

Thanks Dick for sharing this important 1990s example of assessment technology.

Handbook of computer based training (1983-84)

There was no Internet in the 1980s outside the laboratory, but many of the ideas and processes used in Internet learning and assessment today were performed on other kinds of computers in the 1980s. This useful book, “A Handbook of Computer Based Training”, describes the state of the art in Computer Based Training in the early 1980s. It was written by Christopher Dean and Quentin Whitlock and published in the UK by Kogan Page and in the US by Nichols Publishing Company. This brief description will hopefully give you enough information to decide whether it’s worth tracking down a copy to verify prior art.

There are several editions of the book – it was updated into the 1990s. The description below is from the 1984 version (reprinted with some revisions from the 1983 edition). Google Books has a searchable copy of the original 1983 edition.

Much of the book describes technology which has been superseded, but there are two parts which might be interesting from a prior art perspective.

Chapters 1 through 4 of the book cover the design of learning sequences. They describe how one identifies training needs, analyses and breaks down tasks, and then develops training objectives – a common technique being to set up a module with an objective and a post-test. Modules are then combined into a learning plan, which can be individualized or adapted for different learners. There is discussion of pre-tests and entry requirements for a course, of giving people remedial or equalizing training, and of modular scheduling and branching – with learning adapting to the performance of individual learners.

Chapter 16 covers computer management of instruction, including:

  • Use of a network
  • Registering courses and students
  • Testing and recordkeeping
  • Directing the student through the course (based on topic scores)
  • Course maintenance
  • Reporting (with lots of example reports)
  • Running a CML system
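The core routing idea in that list – directing the student through the course based on topic scores – can be sketched as follows. This is a modern illustration under assumed conventions (the mastery threshold and data shapes are invented), not the book's own notation:

```python
# Illustrative sketch of CML-style routing: after a post-test, send the
# learner on, or back to remedial material, based on per-topic scores.
# The threshold and data structures are assumptions for illustration.

MASTERY = 0.8  # assumed mastery threshold

def next_activity(topic_scores, syllabus):
    """Return the first topic the learner has not yet mastered,
    or 'course complete' when every topic meets the threshold."""
    for topic in syllabus:
        if topic_scores.get(topic, 0.0) < MASTERY:
            return f"remedial module: {topic}"
    return "course complete"

scores = {"mechanics": 0.9, "optics": 0.6}
print(next_activity(scores, ["mechanics", "optics", "waves"]))
# -> remedial module: optics
```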

You can search through the book on Google Books.

Question Mark Professional (1993)

Question Mark Professional manual cover

Question Mark Professional, also known as Question Mark for DOS version 3, was launched in 1993 and marketed worldwide (with minor enhancements over the years) during the 1990s. The software was gradually superseded by Question Mark for Windows and Question Mark Perception.

Question Mark Professional consisted of:

  • Question Mark for DOS
  • Graphics Companion
  • Toolkit
  • Euro Pack (test delivery software in Dutch, French, German, Italian and Spanish)

There were various add-ons to the software including a Multimedia Editor and a Report Generator. Here are a few paragraphs from the user manual:

Question Mark™ is a computer program which you can use to create, give, mark and analyse objective tests on a computer. Using Question Mark, you enter questions (on any topic) into the computer; students answer your questions on-screen, and the computer can mark and analyse their answers.

Question Mark is suitable for use both by individual trainers and teachers, seeking to create tests for their students, and by courseware developers who want to create tests for distribution. Question Mark can also be used in any situation where you want to ask people questions and analyse their answers: questionnaires, opinion polls, recruitment tests, and many sorts of form-filling.

Question Mark is easy to learn, and you do not need to know about computers to use it. You should start using it productively after only a short learning period.

Using Question Mark, you can create tests with:

    • up to 500 questions (either all or some chosen randomly);
    • each question in one of 9 types – multiple choice (includes yes/no, true/false), numeric, fill in blanks, word answer, free format, matching/ranking, multiple response, logical and explanation;
    • a variety of ways to present the test including giving the student feedback on answers after each question, after the test or not at all, and a range of options including time limits, hints for wrong answers, and letting students pass over questions;
    • control over the way the screen looks when the test is delivered, including the capability to include your own graphics in questions;
    • flexible and intelligent computer marking methods, where you can define a variety of correct answers and scoring methods;
    • the ability to call up other tests to follow on from this test, with the test chosen depending on the student score.
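The last capability in the list – chaining to a follow-on test chosen by the student's score – can be sketched as below. The file names and score bands are invented for illustration; this is not Question Mark's actual logic:

```python
# Hypothetical sketch of score-dependent test chaining, in the spirit of
# the manual's description. Test names and thresholds are assumptions.

def follow_on_test(score_percent):
    """Pick the next test file based on the score just achieved."""
    if score_percent >= 80:
        return "ADVANCED.QM"
    elif score_percent >= 50:
        return "REVISION.QM"
    return "BASICS.QM"

print(follow_on_test(65))  # -> REVISION.QM
```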

Students can answer the tests without needing to be familiar with the use of computers; and there are measures to prevent students from getting access to the questions and answers illegally.

After the questions have been answered, there are sophisticated but easy‑to‑use ways to review and analyse the answers. You can collate and analyse answers from different students and different tests, sending output reports to screen, printer or disk.

Evidence of its release and availability:

Evidence of its functionality:

Remote monitoring and intervention (1977)

Thanks to Phil Butcher of the Open University for information about this system, which was developed from a report by Professor William Dorn of the University of Denver who was a Fulbright-Hay scholar at the UK Open University in 1972-3.

The online computer-based system enabled students at Open University study centres scattered around the UK to

  • receive questions
  • answer those questions
  • receive immediate, automatic, feedback

and tutors in Milton Keynes to

  • monitor the performance of students
  • monitor the responses

and for both students and tutors to

  • initiate a conversation to discuss and clarify misunderstandings using the same telephone line used to transmit the questions and answers.

The system was first introduced in 1977 and is described briefly in Bramer, M. (1980) ‘Using computers in distance education: The first ten years of the British Open University’, Computers and Education, 4, 293-301.

Certainty-based marking (1969)

Tony Gardner-Medwin has kindly given permission to post some fragments he’s put together of an informal review by Ahlgren (1969) of early work involving confidence testing.

See here for the article, which consists of some collated remarks delivered in the symposium “Confidence on Achievement Tests — Theory, Applications” at the 1969 meeting of the AERA and NCME.

You can see a lot more about certainty-based marking / confidence-based marking (CBM) on Tony’s University College London website, which contains and links to many publications on CBM, and information on the LAPT (London Agreed Protocol for Teaching) software, which uses CBM in the presentation of learning resources.

10 early descriptions of computer assisted testing (1953-85)

The US Government ERIC (Education Resources Information Center) has over a million education documents going back to 1966. There is a huge amount of relevant prior art here – if you can find it!

Here are 10 interesting documents I found from the earlier days of using computers for testing and assessment.

  • 1953 – Proceedings of the Invitational Conference on Testing Problems, held in New York in 1953. Includes several papers describing test scoring machines which had been in active use for more than a decade at the time.
  • 1970 – Detailed description of an early computer-based Instructional Management System (Conwell approach) including tests – with a sophisticated approach to objectives, learner characteristics, learning styles and categorization of learning.
  • 1971 – Review of automated testing at the time by the Office of Naval Research. Considers test anxiety, validity and reliability, natural language processing, automated interpretation and more.
  • 1974 – Description of a computer-assisted diagnostic assessment given to medical students at the University of Illinois. It was created in a program called Coursewriter and allowed students to answer questions, skip and come back to review them later, and receive a feedback printout 30 minutes after the test.
  • 1977 – 200-page survey of US military computer-based training systems in 1977. Lists about 60 authoring tools/procedures, includes mention of PLATO and TICCIT, and some coverage of computer assessment.
  • 1981 – Description of testing at BYU, where computerization helped them deliver 300,000 tests per year.
  • 1984 – Detailed description of software for computer adaptive testing for the US Armed Services Vocational Aptitude Battery tests: technical description and user manual. Features include automatically calling a proctor if too many keying errors are made, ensuring that questions similar to previous ones are not selected at random, and holding demographic data within the system.
  • 1985 – Reviews of 18 sets of microcomputer item banking software: AIMS (Academic Instructional Measurement System); CREATE-A-TEST; Exam Builder; Exams and Examiner; MicroCAT; Multiple Choice Files; P.D.Q. Builder; Quiz Rite; Teacher Create Series (5 programs); TAP (Testing Authoring Program); TestBank Test Rite; Testmaster; Testmaster Series; Tests, Caicreate, Caitake; Tests Made Easy; TestWorks; and the Sage. Several programs used item banking by topic, random selection and password control.
  • 1985 – Report from the University of Pittsburgh about the state of the art in computer-assisted test construction – using computers to generate items or select items to form a test – including a lot about levels of difficulty, use of IRT, and test blueprints.
  • 1985 – Description of using the MicroCAT computerized testing system within the US Navy. Explains features of the software, including a central proctor station which controls testing.

It’s great to see the huge variety and innovation in computer testing from decades ago. The 1953 material is unlikely to be useful prior art today but some of the 1970s or 1980s material could be.

John Kleeman, June 6, 2012

Question Mark for Web (1995-96)

QM Web Manual cover

Question Mark for Web (QM Web) is believed to be the world’s first commercial web testing and surveying product. QM Web version 1 was released in October 1995, and version 2 followed in September 1996.

To quote contemporary descriptive language: “Using QM Web, you create your questions and load them onto a Web or Intranet server. People anywhere in the world can then answer the questions using just a Web or Intranet browser (e.g. Netscape Navigator or Internet Explorer). Their answers can be sent back to you for analysis, or else marked on the server with immediate results.  If you need to gather information from people and analyze the results, using the Web or an Intranet, then QM Web is the answer for you. Typical uses of QM Web include:

  • Distance education. If you are presenting courses at a distance, you can add assessment – finding out what people know, checking what they have learnt, or offering a formal end of course exam to validate performance.
  • Self-assessment teaching material. Add interactivity to your training and educational material. Create quizzes that people can take to find out how much they know, or as practice tests prior to important exams.
  • Checking employee competence. Ask questions to prove your sales staff understand the products they are selling or that your employees know safety regulations.
  • Recruitment skill tests. Get a prospective employee to take a test to give you objective information on their capabilities, or gather information on the Web prior to selecting people for interview.
  • Surveys and questionnaires. Ask employee attitude surveys, get feedback from trainees on the quality of courses or ask your customers or people accessing your site what they think of your service.

QM Web gives you instant assessment, instant learning and instant results, anywhere in the world.”

QM Web took questions and assessments created in Question Mark Designer for Windows (a Windows assessment management system) and converted them into HTML for use on the web. A server-side program then allowed scoring, feedback and results reporting.
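That conversion pipeline – render a question as an HTML form, mark the submitted form data on the server – can be sketched as below. This is an illustration only, under assumed field names and markup; it is not QM Web's actual output or code:

```python
# Illustrative sketch of the Designer-to-HTML idea: render a multiple-
# choice question as a radio-button form, then mark submitted form data
# server-side. All names and markup here are assumptions.

def question_to_html(name, wording, choices):
    """Render one multiple-choice question as radio-button HTML."""
    rows = [f"<p>{wording}</p>"]
    for i, choice in enumerate(choices):
        rows.append(
            f'<input type="radio" name="{name}" value="{i}"> {choice}<br>'
        )
    return "\n".join(rows)

def score_submission(answer_key, form_data):
    """Server-side marking: count submitted answers that match the key."""
    return sum(
        1 for q, correct in answer_key.items()
        if form_data.get(q) == str(correct)
    )

html = question_to_html("q1", "2 + 2 = ?", ["3", "4", "5"])
print(score_submission({"q1": 1}, {"q1": "1"}))  # -> 1
```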

Evidence of its release and availability:

Evidence of its functionality:

QM Web functionality grew from its original release in 1995 to fuller capability in 1996. Manuals on this site have been uploaded with the permission of the copyright holder.

Feel free to ask me any questions about QM Web.

John Kleeman, May 29 2012

Question Mark Guardian (1995)

Questionmark Network Guardian manual cover

Question Mark Guardian (also called Network Guardian) was an add-on companion to Question Mark Designer for Windows which allowed you to control who had access to exams, tests, surveys and quizzes – letting you:

  • Maintain user records for testing
  • Limit access to particular users
  • Limit access to groups of users
  • Generate unique passwords
  • Provide time-windows for tests
  • Limit the number of times a test can be taken
  • Ensure tests are taken in sequence
  • Import user names and passwords

According to contemporary documentation, Question Mark Guardian helps you resolve your security problems. With Guardian you can:

  • Give users or groups of users access to certain question files, and prevent others from accessing the same questions;
  • Prevent people from giving a false name when they answer questions, by assigning a password to each user;
  • Stop users from being able to take a test more than once;
  • Stop people from running a test before it is released or after it has expired;
  • Ensure that users cannot attempt an advanced test without passing a simpler one first;
  • Keep a record of who does what and what their score is, thus preventing unscrupulous users from attempting a test and trying to hide this from you;
  • Assign users to groups within Question Mark Guardian so that you can look at results on a group basis.
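The gates in that list can be sketched as a sequence of checks before a test is released to a user. This is a hypothetical modern illustration of the same ideas; the field names, rule order and return convention are all invented, not Guardian's actual design:

```python
# Hypothetical sketch of Guardian-style access checks: password, time
# window, attempt limit, then prerequisite. All structures are assumed.
from datetime import datetime

def can_take_test(user, test, now=None):
    """Apply the gates in order; return (allowed, reason)."""
    now = now or datetime.now()
    if user["password"] != test["passwords"].get(user["name"]):
        return False, "bad password"
    if not (test["opens"] <= now <= test["expires"]):
        return False, "outside time window"
    if user["attempts"].get(test["name"], 0) >= test["max_attempts"]:
        return False, "attempt limit reached"
    prereq = test.get("prerequisite")
    if prereq and prereq not in user["passed"]:
        return False, f"must first pass {prereq}"
    return True, "ok"

allowed, reason = can_take_test(
    {"name": "ann", "password": "s3cret", "attempts": {}, "passed": ["basics"]},
    {"name": "advanced", "passwords": {"ann": "s3cret"},
     "opens": datetime(2000, 1, 1), "expires": datetime(2000, 12, 31),
     "max_attempts": 1, "prerequisite": "basics"},
    now=datetime(2000, 6, 1),
)
print(allowed, reason)  # -> True ok
```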

Evidence of its release and availability:

Evidence of its functionality:

The first version of Network Guardian was called version 2.1. The user manual was printed in May 1995, ISBN 1 872089 13 5. The user manual has been uploaded to this site with the permission of the copyright holder.

Feel free to post any questions on Question Mark (Network) Guardian, and I will try to answer.

John Kleeman, May 25, 2012

Question Mark Perception v2 (1999)

Question Mark Perception Server v2 manual cover

Question Mark Perception version 2 was released in July 1999 and was a significant milestone in this assessment management system, building on the release of version 1 and adding security, reporting, multimedia and integration amongst other capabilities.

The main function of the software was to enable users to create their own tests, surveys, questionnaires, and assessments for use via the web or an intranet.

Key capabilities:

  • Author questions and organize in topics
  • Select questions (including at random) and put in assessments
  • Control access to assessments securely, with a functional and innovative security model (see chapter 4 of the server manual for a description)
  • Deliver over the web
  • Support for a secure browser
  • Integrate with Learning Management and other systems
  • Report on the results
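The second capability – drawing questions at random from authored topics to build an assessment – can be sketched as below. The data structures and names are assumptions for illustration, not Perception's actual model:

```python
# Illustrative sketch of random selection from topic banks, in the
# spirit of the authoring/selection capabilities listed above.
import random

def build_assessment(topic_banks, picks, seed=None):
    """Draw picks[topic] questions at random from each topic bank."""
    rng = random.Random(seed)
    paper = []
    for topic, count in picks.items():
        paper.extend(rng.sample(topic_banks[topic], count))
    return paper

banks = {"algebra": ["A1", "A2", "A3"], "geometry": ["G1", "G2"]}
paper = build_assessment(banks, {"algebra": 2, "geometry": 1}, seed=0)
print(len(paper))  # -> 3
```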

Evidence of its release and availability:

Evidence of its functionality:

Feel free to post any questions on Perception version 2 and I will try to answer.

John Kleeman, May 23 2012