Welcome

Featured

Welcome to assessmentpriorart.org, a site that documents prior art in assessment software. The site contains some examples from the history of assessment software – and would welcome more. See the About page for more about the site, and the How to contribute page if you are interested in contributing.

Question Mark sold by Presence Corporation in USA (1995-96)

Eric Shepherd has helpfully found some price lists, newsletters and software reviews dating from 1995 and 1996. These provide useful evidence about testing and assessment software being offered for sale in the United States in this period (just before the founding of Questionmark Corporation in 1997). Here are the documents:

Triads (1992-current)

Overview

TRIADS is a powerful e-Assessment authoring and delivery system complemented by an interactive results analysis and reporting program. The unique characteristic of TRIADS is that it allows very flexible, visual question styles with sophisticated controls on question behaviour and scoring. Assessments may be run in summative or in formative mode with context-sensitive feedback either as an immediate response to a user action or as a summary at the end of each question. TRIADS can support scenarios that require concurrent, multiple interaction types overlaid on graphics and has some functionality that is not available in any other system.

History of Development

The direct precursor to TRIADS was initially developed by Professor Don Mackenzie as the University of Derby (UK) Interactive Assessment Delivery System, or ‘DIADS’, and used to support computer-delivered assessment in Earth Sciences within the University. The system was used for some components of the course-ware developed by the UK Earth Sciences Course-ware Consortium project under the leadership of Dr Bill Sowerbutts at the University of Manchester, UK, funded as part of the national TLTP initiative (http://www.naec.org.uk/organisations/the-teaching-and-learning-technology-programme). This application strongly influenced the development of the highly visual, course-ware style of interactions that now underpins all elements of its design.

Between 1994 & 1999 the system was further developed to provide assessments in a wider range of disciplines across the University and boosted nationally when used to underpin the TRIADS – Assessment of Learning Outcomes project. This was a collaboration between the Universities of Liverpool (Professor Chris Paul, Lead), Derby and the Open University funded as part of the national FDTL initiative (http://pcwww.liv.ac.uk/apboyle/triads/). The renaming of the system from DIADS to the Tripartite Interactive Assessment Delivery System (TRIADS) occurred at this time in recognition of the contribution made by the three Universities. The system was used by 16 disciplines across 32 academic departments in19 universities as part of the project. The state of development at this stage is summarised in Mackenzie (1999) .(http://caaconference.co.uk/pastConferences/1999/proceedings/mckenzie%20with%20images.pdf) which contains examples of some of the question styles available at this time. (See one example question from the paper below.)

[Image: example question style from Mackenzie (1999)]

In 1999 the University of Derby set up the Centre for Interactive Assessment Development (CIAD) in response to demand from a wide range of academic disciplines. This was a dedicated e-assessment design consultancy, production, monitoring and results reporting department to support all academic staff across the University.

Between 1999 and 2005 the wide-ranging functionality provided by the system attracted interest from a number of external organisations, including pilot e-assessments for schools’ examination bodies (e.g. Maughan, S. & Mackenzie, D., 2004) (http://caaconference.co.uk/pastConferences/2004/proceedings/Maughan_Mackenzie.pdf) and for parts of the UK National Health Service.

In 2005, the external interest in both e-assessment and e-learning prompted a move of development staff from the University’s e-learning unit and the core TRIADS development team into a new commercial e-learning development department of the University (Innovation for Learning, IfL, http://www.i4learn.co.uk). Production of e-assessments for external organisations was then concentrated in the wider commercial arm of the University. Competency assurance and continuing professional development assessments were developed by the team for medical staff in a range of NHS hospitals.

From 2011 onwards the TRIADS system has been developed and maintained by Prof. Don Mackenzie at Professional e-Assessment Services, which continues to support TRIADS e-assessment clients of the University and other clients, largely in the field of competency assessment of medical professionals up to Consultant level.

Functionality

All versions of DIADS and TRIADS are open-coded in Authorware Professional (previously from Macromedia, now © Adobe Systems). Authorware was initially selected because of its intuitive and rapid development environment for the production of a wide range of interactions suitable for e-learning course-ware.

Over time, and with use in a wide range of applications, the limitations of the standard Authorware interaction controls became apparent, and a number of interactions were re-written using a combination of the Authorware scripting language and JavaScript in order to give finer control over question behaviour, response checking and scoring.

TRIADS can deliver all the question types listed below individually with a variety of controls on their behaviour and scoring. Additionally TRIADS can deliver text/numeric entry + checkbox matrix + hotspot + draw line interactions concurrently within a single question. This concurrent functionality can potentially be extended to all interaction types, although it is difficult to imagine a scenario that would require all of them at the moment.

[Image: list of TRIADS question and interaction types]

All interactions can be layered and overlain on graphics

Code can be added to all question types to account for specialist scoring requirements and the system may incorporate biological and petrological microscope simulators if these are required for specialist assessments using these tools. The way in which these tools are used to answer a question can also be scored, if required.
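To give a flavour of what a concurrently delivered, independently scored question implies, here is a small illustrative sketch. This is not TRIADS code (TRIADS is built in Authorware and scripted in Authorware script and JavaScript); the structure and all names in it are invented purely to show the idea of one question containing several scored interaction components.

    # Illustrative sketch only; not TRIADS code. A question is modelled as a set
    # of concurrent interactions, each with its own response check and mark.
    from dataclasses import dataclass, field
    from typing import Any, Callable

    @dataclass
    class Interaction:
        kind: str                       # e.g. "text_entry", "hotspot", "draw_line"
        check: Callable[[Any], float]   # returns the mark awarded for this component

    @dataclass
    class Question:
        prompt: str
        components: list = field(default_factory=list)

        def score(self, responses) -> float:
            # Sum the marks awarded by each concurrent interaction.
            return sum(c.check(r) for c, r in zip(self.components, responses))

    # Example: a text entry and a hotspot answered together within one question.
    q = Question(
        prompt="Mark the intrusion on the map and name the rock type.",
        components=[
            Interaction("text_entry", lambda r: 1.0 if r.strip().lower() == "granite" else 0.0),
            Interaction("hotspot", lambda r: 1.0 if 100 <= r[0] <= 140 and 60 <= r[1] <= 90 else 0.0),
        ],
    )
    print(q.score(["Granite", (120, 75)]))   # 2.0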

The question display engine can be configured in a wide variety of modes which can include controlled random question delivery, menu option links to different question sets and the optional delivery of context-sensitive feedback as an immediate response to a user action or at the end of each question. Feedback can be held internally or externally in .rtf files or as links to web pages.
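Again purely as an illustration (the function names and options below are invented, not the TRIADS configuration format), the delivery options described above amount to controlled random selection from a question bank plus a switch governing when, and from where, feedback is shown:

    # Illustrative sketch of the delivery options described above; not TRIADS code.
    import random

    def select_questions(bank, n, seed=None):
        """Controlled random delivery: draw n questions from a bank. A fixed seed
        lets every candidate in a cohort receive the same 'random' selection."""
        return random.Random(seed).sample(bank, n)

    def feedback_for(question, answer, timing="end_of_question"):
        """Return feedback immediately, at the end of the question, or not at all."""
        if timing == "none":
            return None
        key = "correct" if question["check"](answer) else "incorrect"
        return question["feedback"][key]   # could equally be read from an .rtf file or a web page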

Assessments may be browser-delivered for intranet, internet and LMS use via a plug-in, or compiled as an executable for CD or LAN delivery.

Advantages

Whilst standard e-assessments can be produced and delivered using TRIADS, it comes into its own when producing simulation and scenario-based assessments where interactions are overlain on graphics and require a high degree of control on their behaviour and scoring.

TRIADS assessments are easily integrated into Authorware-based e-learning packages.

TRIADS questions are ideal for binding seamlessly into an e-learning narrative to generate a dialogue between the user and the e-tutoring element whilst tracking the scores.

There are very few limitations on the complexity of the assessment or nature of the questions that can be delivered using TRIADS.

General Limitations

Because of its wide ranging functionality, TRIADS is very much a tool for specialist Assessment Developers in commercial or university-based e-learning/assessment development departments.

It is not generally suitable for use by most individual tutors/assessment designers on an occasional basis. This was demonstrated in the TRIADS Assessment of Learning Outcomes project when usage of TRIADS fell significantly after project funding supporting developers in each institution ended.

TRIADS is open-coded in Authorware Professional (© Adobe Systems) and is thus vulnerable to commercial decisions made by Adobe. Authorware is still currently (2013) available from Adobe but is regarded as a ‘mature’ product and will not be developed further. Thus its longevity is in question. Re-development of a system as sophisticated as TRIADS using any other development tool will be very expensive and currently there appears to be no rapid development environment that gives the flexibility of delivery mode and level of interactivity provided by Authorware in a single package.

Web delivery of software developed using Authorware is provided via the Authorware Web Player 2004 plug-in. This works well, once installed, but its installation requires ‘Administrator’ security rights on client machines together with a rather specific browser setup. This creates barriers for individual users and difficulties in distribution across some secure corporate/institutional networks.

TRIADS is developed for Windows platforms only.

Caution

TRIADS has an unparalleled and innovatory range of assessment styles and any applicant who seeks to patent any kind of question delivery mechanism would need to ensure that their application did not infringe upon or plagiarise the work already done by the TRIADS team.

The TRIADSystem is © University of Derby and Prof. Don Mackenzie

 

 

Some related references

Boyle, A.P., Bryon, D.N. and Paul, C.R.C. (1997) Computer-based learning and assessment: A palaeontological case study with outcomes and implications. Computers & Geosciences, Volume 23, Issue 5, June 1997, pages 573-580 – abstract here

Mackenzie, D.M. (1997) Computer Aided Assessment at the University of Derby. A case study in: Brown, G., Bull, J. and Pendlebury, M. Assessing Student Learning in Higher Education. Chapter 13, pp 215-217. Publ. Routledge, London.

Mackenzie, D.M. (1997) Fully Interactive Assessment on the Web – A Shock to the System. Proceedings of the First Annual Computer Assisted Assessment Conference, 18th June, University of Loughborough.

Mackenzie, D.M., Regan, P.F., Wilkins, H. and Hutchinson, P. (1998) Integrating Fully Interactive Assessments with Web Resources – A flexible learning environment for students and an easier life for academics? Geological Society of America Annual Conference, Toronto. Abstracts p. A-389.

Mackenzie, D.M. (1999) Recent Developments in the Tripartite Interactive Assessment Delivery System. Proceedings of the Third Annual Computer Assisted Assessment Conference, University of Loughborough, 16th-17th June.

Mackenzie, D.M., Wilkins, H., O’Hare, D. and Boyle, A. (1999) Practical Implementation of Recent Developments in the Tripartite Interactive Assessment Delivery System. A Workshop for the Third Annual Computer Assisted Assessment Conference, University of Loughborough, 16th-17th June.

Boyle, A.P., Edwards, D.J., Mackenzie, D.M., Mills, B., O’Hare, D., Morris, E.C., Paul, C.R.C., Wilkins, H. and Williams, D.W. (2000) Developments in on-line assessment – experiences and evaluation of using TRIADS and its potential for assessment in electrical engineering. International Journal of Electrical Engineering Education, vol 37, part 1, pp 74-85.

Mackenzie, D.M., O’Hare, D., Paul, C., Boyle, A., Edwards, D., Williams, D. and Wilkins, H. (2004) Assessment for Learning: the TRIADS Assessment of Learning Outcomes Project and the development of a pedagogically friendly computer based assessment system. In O’Hare, D. and Mackenzie, D.M. (Eds) Advances in Computer Aided Assessment, SEDA Paper 116, pp 11-24. Staff and Educational Development Association Ltd., Birmingham.

Maughan, S. and Mackenzie, D.M. (2004) BioScope: The Assessment of Process and Outcomes using the TRIADSystem. Proceedings of the 8th International CAA Conference, Loughborough, 6th & 7th July 2004. ISBN 0-9539572-3-3.

Mackenzie, D.M. (2005) Online Assessment: quality production and delivery for higher education. Keynote Address in Enhancing Practice, Assessment Workshop Series No. 5 in Reflections on Assessment, Volume II, pp 22-29. Quality Assurance Agency for Higher Education, Gloucester. ISBN 1 84482 266 4.

SToMP project (1992 onwards)

The SToMP (Software Teaching of Modular Physics) project was funded in the UK in 1992 to create a learning environment for some common first year physics courses. From the mid 90s, this included an assessment system that supported the delivery of the following question types:

  • List (radio button)
  • List (check boxes)
  • Rank ordering
  • Pair matching
  • Numeric
  • Text
  • Random numeric

Dick Bacon (r.bacon at surrey.ac.uk) has shared information about the project and given permission to share the documents below.

He advises that after about six months the project’s Open University representative, Steve Swithenby, pointed out that they really should have an assessment component, and during a six-week sabbatical in Sweden, Dick created a prototype in Visual Basic handling, he believes, single and multiple choice, pairing and ordering questions. Around 1994-95 the real version was produced; see details in the document below. Around 2002 this was replaced by a QTI v1.2 system that outlived the rest of the SToMP project. Dick retains the SToMP name for his current SToMPII QTI v2 system (see http://www.stomp.ac.uk/stompii_demon.htm).

Here is a screenshot of the 1994-95 version. It shows a five-question test, the questions being in the formatted text viewer on the right. The response box has a tab for each question, and the user has just got the first question wrong and has the feedback displayed. It is a self-test, so results are not recorded and the student can try again as many times as they want.

[Image: SToMP screenshot]

Some links on SToMP

And here are copies of the SToMP manuals with great detail on the functionality:

  • Development manual for the whole of the SToMP materials, which gives context, dated December 1997 (see chapter 5 for the assessment system) [PDF] [Word]
  • A version of chapter 5 on assessments, dated October 1994 [PDF] [Word]

Thanks to Dick for sharing this important 1990s example of assessment technology.

Handbook of computer based training (1983-84)

There was no Internet in the 1980s outside the laboratory, but many of the ideas and processes used in Internet learning and assessment today were being performed on other kinds of computers in the 1980s. This useful book, “A Handbook of Computer Based Training”, describes the state of the art in computer based training in the early 1980s. It was written by Christopher Dean and Quentin Whitlock and published in the UK by Kogan Page and in the US by Nichols Publishing Company. This brief description will hopefully give you enough information to see if it’s worth tracking down a copy to verify prior art.

There are several editions of the book – it was updated into the 1990s; the description below is from the 1984 version (re-printed with some revisions from the 1983 edition). Google Books has a searchable copy of the 1983 original edition here.

Much of the book describes technology which has been superseded, but there are two parts which might be interesting from a prior art perspective.

Chapters 1 through 4 of the book cover the design of learning sequences. They describe how one identifies training needs, analyses and breaks down tasks, and then develops training objectives – a common technique being to set up a module with an objective and a post-test. Modules are then combined in a learning plan, which can be individualized or adapted for different learners. There is discussion of pre-tests and entry requirements for a course, of giving people remedial or equalizing training, and of modular scheduling and branching – with learning adapting to the performance of individual learners.
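To make the module/post-test idea concrete, here is a small hypothetical sketch of the kind of branching logic the book describes. This is not code from the book; the module structure, field names and threshold are invented for illustration.

    # Hypothetical sketch of modular branching driven by a post-test score.
    def post_test_score(module, answers):
        """Score a post-test as the fraction of objective-linked items answered correctly."""
        key = module["post_test_key"]
        return sum(a == k for a, k in zip(answers, key)) / len(key)

    def next_step(module, answers, mastery_threshold=0.8):
        """Route the learner onwards, or to remedial material, based on the post-test."""
        if post_test_score(module, answers) >= mastery_threshold:
            return module["next_module"]
        return module["remedial_module"]

    module = {
        "post_test_key": ["b", "d", "a", "c"],
        "next_module": "Module 5",
        "remedial_module": "Module 4R (remedial)",
    }
    print(next_step(module, ["b", "d", "a", "a"]))   # 3/4 correct -> remedial branch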

Chapter 16 covers computer management of instruction, including:

  • Use of a network
  • Registering courses and students
  • Testing and recordkeeping
  • Directing the student through the course (based on topic scores)
  • Course maintenance
  • Reporting (with lots of example reports)
  • Running a CML system

You can search through the book on Google here.

Question Mark Professional (1993)

[Image: Question Mark Professional manual cover]

Question Mark Professional, also known as Question Mark for DOS version 3, was launched in 1993 and marketed worldwide (with minor enhancements over the years) during the 1990s. The software was gradually superseded by Question Mark for Windows and Question Mark Perception.

Question Mark Professional consisted of:

  • Question Mark for DOS
  • Graphics Companion
  • Toolkit
  • Euro Pack (test delivery software in Dutch, French, German, Italian and Spanish)

There were various add-ons to the software including a Multimedia Editor and a Report Generator. Here are a few paragraphs from the user manual:

Question Mark™ is a computer program, which you can use to create, give, mark and analyse objective tests on a computer. Using Question Mark, you enter questions (on any topic) into the computer; students answer your questions on-screen, and the computer can mark and analyse their answers.

Question Mark is suitable for use both by individual trainers and teachers, seeking to create tests for their students, and by courseware developers who want to create tests for distribution. Question Mark can also be used in any situation where you want to ask people questions and analyse their answers: questionnaires, opinion polls, recruitment tests, and many sorts of form-filling.

Question Mark is easy to learn, and you do not need to know about computers to use it. You should start using it productively after only a short learning period.

Using Question Mark, you can create tests with:

    • up to 500 questions (either all or some chosen randomly);
    • each question in one of 9 types – multiple choice (includes yes/no, true/false), numeric, fill in blanks, word answer, free format, matching/ranking, multiple response, logical and explanation;
    • a variety of ways to present the test including giving the student feedback on answers after each question, after the test or not at all, and a range of options including time limits, hints for wrong answers, and letting students pass over questions;
    • control over the way the screen looks when the test is delivered, including the capability to include your own graphics in questions;
    • flexible and intelligent computer marking methods, where you can define a variety of correct answers and scoring methods;
    • the ability to call up other tests to follow on from this test, with the test chosen depending on the student score.

Students can answer the tests without needing to be familiar with the use of computers; and there are measures to prevent students from getting access to the questions and answers illegally.

After the questions have been answered, there are sophisticated but easy-to-use ways to review and analyse the answers. You can collate and analyse answers from different students and different tests, sending output reports to screen, printer or disk.
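As a rough illustration of two of the features quoted above – word-answer questions that accept a variety of correct answers, and calling up a follow-on test chosen by the student’s score – here is a hypothetical sketch. It is not Question Mark’s file format or code, and the test names are invented.

    # Hypothetical sketch; not Question Mark code.
    def mark_word_answer(response, accepted_answers, marks=1):
        """Award marks if the response matches any of the defined correct answers."""
        return marks if response.strip().lower() in {a.lower() for a in accepted_answers} else 0

    def choose_next_test(score, branches):
        """branches: list of (minimum score, test name), highest threshold first."""
        for minimum, test_name in branches:
            if score >= minimum:
                return test_name
        return None

    score = 10 * mark_word_answer("Photosynthesis ", ["photosynthesis"])
    print(choose_next_test(score, [(8, "ADVANCED"), (5, "REVISION"), (0, "BASICS")]))   # ADVANCED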

Evidence of its release and availability:

Evidence of its functionality:

Remote monitoring and intervention (1977)

Thanks to Phil Butcher of the Open University for information about this system, which was developed from a report by Professor William Dorn of the University of Denver who was a Fulbright-Hay scholar at the UK Open University in 1972-3.

The online computer-based system enabled students at Open University study centres scattered around the UK to

  • receive questions
  • answer those questions
  • receive immediate, automatic, feedback

and tutors in Milton Keynes to

  • monitor the performance of students
  • monitor the responses

and for both students and tutors to

  • initiate a conversation to discuss and clarify misunderstandings using the same telephone line used to transmit the questions and answers.

The system was first introduced in 1977 and is described briefly in Bramer, M. (1980) Using computers in distance education: The first ten years of the British Open University, Computers and Education, 4, 293-301.

Certainty-based marking (1969)

Tony Gardner-Medwin has kindly given permission to post some fragments he’s put together of an informal review by Ahlgren (1969) of early work involving confidence judgements.

See here for the article, which consists of some collated remarks delivered in the symposium “Confidence on Achievement Tests — Theory, Applications” at the 1969 meeting of the AERA and NCME.

You can see a lot more about certainty-based marking / confidence-based marking (CBM) on Tony’s University College London website: http://www.ucl.ac.uk/lapt/. His site contains and links to many publications on CBM, and information on the LAPT (London Agreed Protocol for Teaching) software, which uses CBM in the presentation of learning resources.
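For readers unfamiliar with CBM, the essence is that the mark for an answer depends on the confidence the student declares as well as on correctness, so that confidently wrong answers are penalised most heavily. The sketch below uses the mark scheme commonly described for LAPT (1, 2 or 3 marks for a correct answer at confidence levels 1-3; 0, -2 or -6 for a wrong one); treat the exact numbers as illustrative rather than definitive.

    # Illustrative certainty-based marking: (mark if correct, mark if wrong) per level.
    CBM_MARKS = {
        1: (1, 0),    # low certainty
        2: (2, -2),   # medium certainty
        3: (3, -6),   # high certainty: biggest reward, biggest penalty
    }

    def cbm_mark(correct, certainty):
        reward, penalty = CBM_MARKS[certainty]
        return reward if correct else penalty

    print(cbm_mark(True, 3))    # 3
    print(cbm_mark(False, 3))   # -6: confidently wrong is penalised most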

7 early descriptions of remote proctoring (1997-2007)

Here are 7 interesting documents I found that discuss remote monitoring of computer-based assessments.

I am very appreciative of the opportunity to contribute to this blog.  My name is Matthew Jaeh and I am the VP of Operations for ProctorU Inc.

ProctorU is an online proctoring service that allows examinees to test anywhere while still ensuring exam security for the testing organization. Using webcams, screen-sharing technology, and proven authentication techniques, the staff at ProctorU’s three dedicated centers provides live monitoring of test-takers taking their examinations from home, work, or anywhere else. Examinees connect to their monitor via one-on-one video sessions, and the monitor provides pre-exam assistance and technical support at no additional charge. The monitor maintains security during the session by monitoring audio, video and the candidate’s screen throughout the entire examination.

10 early descriptions of computer assisted testing (1953-85)

The US Government ERIC (Education Resources Information Center) at http://www.eric.ed.gov/ has over a million education documents going back to 1966. There is a huge amount of relevant prior art here – if you can find it!

Here are 10 interesting documents I found from the earlier days of using computers for testing and assessment:

  • 1953 – Proceedings of the Invitational Conference on Testing Problems in New York in 1953. Includes several papers describing test scoring machines which had been in active use for more than a decade at the time.
  • 1970 – Detailed description of an early computer-based Instructional Management System (Conwell approach) including tests – with a sophisticated approach to objectives, learner characteristics, learning styles and categorization of learning.
  • 1971 – Review of automated testing at the time by the Office of Naval Research. Considers test anxiety, validity and reliability, natural language processing, automated interpretation and more.
  • 1974 – Description of a computer-assisted diagnostic assessment given to medical students at the University of Illinois. It was created in a program called Coursewriter and allowed students to answer questions, skip and come back to review them later, and receive a feedback printout 30 minutes after the test.
  • 1977 – 200-page survey of US military computer-based training systems in 1977. Lists about 60 authoring tools/procedures, includes mention of PLATO and TICCIT, and some coverage of computer assessment.
  • 1981 – Description of testing at BYU where computerization helped them deliver 300,000 tests per year.
  • 1984 – Detailed description of software for computer adaptive testing for the US Armed Services Vocational Aptitude Battery tests. Technical description and user manual. Features include automatic calling of a proctor if too many keying errors are made, ensuring that questions similar to previous ones are not selected at random, and holding demographic data within the system.
  • 1985 – Reviews of 18 sets of microcomputer item banking software: AIMS (Academic Instructional Measurement System); CREATE-A-TEST; Exam Builder; Exams and Examiner; MicroCAT; Multiple Choice Files; P.D.Q. Builder; Quiz Rite; Teacher Create Series (5 programs); TAP (Testing Authoring Program); TestBank Test Rite; Testmaster; Testmaster Series; Tests, Caicreate, Caitake; Tests Made Easy; TestWorks; and the Sage. Several programs used item banking by topic, random selection and password control.
  • 1985 – Report from the University of Pittsburgh about the state of the art in computer-assisted test construction – using computers to generate items or select items to form a test – includes a lot about levels of difficulty, use of IRT, and test blueprints.
  • 1985 – Description of using the MicroCAT computerized testing system within the US Navy. Explains features of the software including a central proctor station which controls testing.

It’s great to see the huge variety and innovation in computer testing from decades ago. The 1953 material is unlikely to be useful prior art today but some of the 1970s or 1980s material could be.

John Kleeman, June 6, 2012

Question Mark for Web (1995-96)

[Image: QM Web manual cover]

Question Mark for Web (QM Web) is believed to be the world’s first commercial web testing and surveying product. QM Web version 1 was released in October 1995. Version 2 of QM Web was released in September 1996.

To quote contemporary descriptive language: “Using QM Web, you create your questions and load them onto a Web or Intranet server. People anywhere in the world can then answer the questions using just a Web or Intranet browser (e.g. Netscape Navigator or Internet Explorer). Their answers can be sent back to you for analysis, or else marked on the server with immediate results. If you need to gather information from people and analyze the results, using the Web or an Intranet, then QM Web is the answer for you. Typical uses of QM Web include:

  • Distance education. If you are presenting courses at a distance, you can add assessment – finding out what people know, checking what they have learnt, or offering a formal end of course exam to validate performance.
  • Self-assessment teaching material. Add interactivity to your training and educational material. Create quizzes that people can take to find out how much they know, or as practice tests prior to important exams.
  • Checking employee competence. Ask questions to prove your sales staff understand the products they are selling or that your employees know safety regulations.
  • Recruitment skill tests. Get a prospective employee to take a test to give you objective information on their capabilities, or gather information on the Web prior to selecting people for interview.
  • Surveys and questionnaires. Ask employee attitude surveys, get feedback from trainees on the quality of courses or ask your customers or people accessing your site what they think of your service.

QM Web gives you instant assessment, instant learning and instant results, anywhere in the world.”

QM Web took questions and assessments created in Question Mark Designer for Windows (a Windows assessment management system) and converted them into HTML for use on the web. A server-side program then allowed scoring, feedback and results reporting.
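As an illustration of that pattern – and only an illustration, since this is not QM Web’s actual generated HTML or server code, and the field names and URL are invented – a question becomes an HTML form whose submission is marked by a server-side program that returns a score and feedback:

    # Hypothetical sketch of a web-delivered question and its server-side marking.
    QUESTION_HTML = """
    <form method="post" action="/cgi-bin/mark">
      <p>1. Which of these is a web browser?</p>
      <input type="radio" name="q1" value="a"> Netscape Navigator<br>
      <input type="radio" name="q1" value="b"> Question Mark Designer<br>
      <input type="submit" value="Submit answers">
    </form>
    """

    ANSWER_KEY = {"q1": "a"}
    FEEDBACK = {"q1": {True: "Correct.", False: "No - Netscape Navigator is the browser."}}

    def mark_submission(form_data):
        """Server-side marking: return (score, feedback messages) for one submission."""
        score, messages = 0, []
        for name, correct_value in ANSWER_KEY.items():
            is_correct = form_data.get(name) == correct_value
            score += int(is_correct)
            messages.append(FEEDBACK[name][is_correct])
        return score, messages

    print(mark_submission({"q1": "a"}))   # (1, ['Correct.'])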

Evidence of its release and availability:

Evidence of its functionality:

QM Web functionality grew from its original release in 1995 to fuller capability in 1996. Manuals on this site have been uploaded with the permission of the copyright holder.

Feel free to ask me any questions about QM Web.

John Kleeman, May 29 2012