
Introduction of Test Parts

This is a sub-entry of the feature request Competence-driven Question Groups and Test Parts

This entry is deprecated; the concept was abandoned in the course of a virtual workshop on 17 March 2017.

The new conceptual approach to this is the Introduction of Learning Sequences (new container object) / an Object Sequence Player Object

1 Initial Problem

It is currently not possible to split a test up into several test parts that behave differently. Some instructors would like to split a test up into three parts: an initial part with a fixed set of questions, a middle part with some randomly selected ones, and a final part with again a fixed set. Since these parts are quite different, scoring, for example, needs to be set up individually for each part.

2 Conceptual Summary

Tests should be able to be split up into separate test parts which each behave differently. Each part should yield its own results, have its own competence assignment / measurement, and hold its own playback settings (mixing of questions, ...). A minimal data-model sketch follows the list below.

  • a test part can be a combination of
    • individual questions & question groups
    • and / or
    • self-evaluation blocks
  • a single test can comprise several test parts
    • test parts should be children of the test to which they belong
    • in general, they are stubs in terms of which individual settings and presentation views they offer
  • realization as a new test setting:
    • "Use Test Parts"
      • monolithic test
      • test with test parts (in order to keep complexity down for "ordinary users")
      • results in activation of new tab "Test Parts" (otherwise hidden)
  • test-parts can be randomized within the global test
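
As an illustration only, the proposed structure could be sketched roughly as follows. All names (Test, TestPart, ContentBlock, use_test_parts, ...) are hypothetical assumptions made for this sketch and are not part of the request or of the existing ILIAS test object:

  from dataclasses import dataclass, field
  from enum import Enum
  from typing import List

  class BlockKind(Enum):
      QUESTION = "question"                  # individual question
      QUESTION_GROUP = "question_group"      # question group
      SELF_EVALUATION = "self_evaluation"    # self-evaluation block

  @dataclass
  class ContentBlock:
      kind: BlockKind
      ref_id: int                            # id of the referenced question / group / block

  @dataclass
  class TestPart:
      title: str
      blocks: List[ContentBlock] = field(default_factory=list)

  @dataclass
  class Test:
      title: str
      use_test_parts: bool = False           # new setting: monolithic test vs. test with test parts
      randomize_parts: bool = False          # test parts can be randomized within the global test
      parts: List[TestPart] = field(default_factory=list)

      @property
      def show_test_parts_tab(self) -> bool:
          # the new "Test Parts" tab would only be shown when the setting is activated
          return self.use_test_parts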

each test part has its own... (see the settings sketch after these two lists)

  • question source(s) and selection mode
  • results measurement and presentation mode (i.e. an admin can decide whether the test part shows its results at all and, if so, to which degree of detail)
  • competence assignment / measurement
  • Settings Subtabs
    • General: stub of the "General" settings subtab with playback settings, ...
    • Mark Schema
    • Scoring and Results

a test part does not have its own...

  • participant management - all participants take part in all test parts
  • statistics view - this will be solved by filters in the statistics view
  • manual scoring management - solved by a selector for which test part is supposed to be marked
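
Continuing the hypothetical sketch above, the per-part items from the first list could live in a settings container of their own, while the items in the second list remain concerns of the parent test; all names are assumptions for illustration only:

  from dataclasses import dataclass
  from enum import Enum
  from typing import Optional

  class SelectionMode(Enum):
      FIXED = "fixed"        # fixed set of questions
      RANDOM = "random"      # randomly selected questions

  class ResultsDetail(Enum):
      HIDDEN = "hidden"      # part does not show its results
      SUMMARY = "summary"    # reduced level of detail
      FULL = "full"          # full level of detail

  @dataclass
  class TestPartSettings:
      # question source(s) and selection mode
      selection_mode: SelectionMode = SelectionMode.FIXED
      # results measurement and presentation mode
      results_detail: ResultsDetail = ResultsDetail.SUMMARY
      # competence assignment / measurement
      competence_profile_id: Optional[int] = None
      # stubs of the "Mark Schema" and "Scoring and Results" subtabs
      mark_schema_id: Optional[int] = None
      scoring_profile_id: Optional[int] = None

  # Deliberately not modelled per part (kept on the parent test):
  # participant management, statistics view (filters), manual scoring (part selector).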

  • two basic modes of a test part (see the scoring sketch after this list)
    • static mode (default): relevant for the global result of the test
      • score will be included in the global test
      • not possible to skip the test part through a precondition
    • dynamic mode (optional): see Routing Rules in Test Parts for Adaptive Testing Scenarios for further details
      • not relevant for the global result of the test
      • score is not included in the global mark at all
      • test part can hold preconditions and can be skipped
      • the test part as such has a result which is shown in the global result presentation but which is marked as "optional part - not counted"
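
How the two modes could affect the global result is sketched below; PartResult, PartMode and the exact "not counted" wording are assumptions derived from the bullet points above, not a prescribed implementation:

  from dataclasses import dataclass
  from enum import Enum
  from typing import List, Tuple

  class PartMode(Enum):
      STATIC = "static"      # default: counts toward the global test result
      DYNAMIC = "dynamic"    # optional: skippable, not counted in the global mark

  @dataclass
  class PartResult:
      title: str
      mode: PartMode
      points: float

  def global_score(results: List[PartResult]) -> Tuple[float, List[str]]:
      """Sum only static parts; dynamic parts are listed but flagged as not counted."""
      total = sum(r.points for r in results if r.mode is PartMode.STATIC)
      lines = [
          f"{r.title}: {r.points}"
          + (" (optional part - not counted)" if r.mode is PartMode.DYNAMIC else "")
          for r in results
      ]
      return total, lines

For example, a static part worth 10 points and a dynamic part worth 4 points would yield a global score of 10, with the dynamic part still listed in the result presentation as "optional part - not counted".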

  • test parts inherit their settings initially from the parent test
  • two possible playback modes (see the playback sketch after this list):
    • linear playback order: it is not possible to jump between test parts
      • once a test part has been started, it has to be completed before the user can move on
    • non-linear playback order: the questions / question groups are randomized throughout the global test
      • due to examination regulations it is necessary to mark MC questions differently from free-text / cloze questions
      • different lecturers build different test parts but also have to correct them
      • the test parts are only relevant "under the bonnet", i.e. in building a test and evaluating its results
  • presentation of results
    • optional, depends on the settings of this test part
    • presentation of results after completing the test part: can be switched on or off
    • after the entire test is completed / finished: test parts are shown as separate blocks in the results presentation of the parent test
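
One possible reading of the two playback orders, sketched with hypothetical names; how the questions would actually be sequenced and persisted in ILIAS is not specified by this entry:

  import random
  from typing import Dict, List

  def playback_order(parts: Dict[str, List[str]], linear: bool = True) -> List[str]:
      """Return the question order for one test run.

      parts maps a test part title to its question ids
      (insertion order of the dict = order of the test parts).
      """
      if linear:
          # linear playback: part boundaries are kept, there is no jumping between
          # test parts, and a started part is completed before the next one begins
          return [qid for questions in parts.values() for qid in questions]
      # non-linear playback: the questions of all parts are shuffled throughout the
      # global test; the parts then only matter for building and evaluating the test
      pool = [qid for questions in parts.values() for qid in questions]
      random.shuffle(pool)
      return pool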

3 User Interface Modifications

3.1 List of Affected Views

  • Test -> Settings
    • new Setting "Test with Test Parts"
  • new Tab "Test Parts"
    • only shown when above setting is activated
  • ? Participants --> Show Details

3.2 User Interface Details

{For each of these views please list all user interface elements that should be modified, added or removed. Please provide the textual appearance of the UI elements and their interactive behaviour.}

3.3 New User Interface Concepts

{If the proposal introduces any completely new user interface elements, please provide a link to separate feature wiki entries for each of them according to the kitchen sink template.}

4 Technical Information

{The maintainer has to provide necessary technical information, e.g. dependencies on other ILIAS components, necessary modifications in general services/architecture, potential security or performance issues.}

5 Contact

  • Author of the Request: {Please add your name.}
  • Maintainer: {Please add your name before applying for an initial workshop or a Jour Fixe meeting.}
  • Implementation of the feature is done by: {The maintainer must add the name of the implementing developer.}

6 Funding

If you are interested in funding this feature, please add your name and institution to this list.

7 Discussion

Kergomard, Stephan [skergomard], 2022 Mar 9: We close this as part of the clean-up effort undertaken in the Splitting-Up Test & Assessment Working Group.

8 Implementation

{The maintainer has to give a description of the final implementation and add screenshots if possible.}

Test Cases

Test cases completed at {date} by {user}

  • {Test case number linked to Testrail} : {test case title}

Approval

Approved at {date} by {user}.
