Edit Questions of a Test which already contains Datasets

1 Description

It must be possible to adapt the questions of a test even if the test already contains datasets. This is important for correcting mistakes in tests.

- Questions of a test can be changed at any time.
- Delete a question: The question is no longer valid for users who have already participated. The learning progress is recalculated.
- Update a question: The learning progress is recalculated.
 
The test creator is responsible for using this function conscientiously. If this poses a risk, e.g. for semester exams, we could establish a new setting "adaptable test after the test is productive".
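To make the intended behaviour concrete, here is a minimal sketch of the recalculation flow, assuming invented class and method names (this is not actual ILIAS code):

  <?php
  // Hypothetical flow after a question in a test with existing datasets is
  // updated or deleted: every recorded pass is re-scored, then the derived
  // test result and the learning progress are recalculated.
  class TestResultRecalculator
  {
      /** @param int[] $activeIds ids of all recorded test passes */
      public function onQuestionChanged(int $questionId, array $activeIds): void
      {
          foreach ($activeIds as $activeId) {
              $this->rescoreAnswer($activeId, $questionId);  // re-score the stored answer
              $this->recalculateTestResult($activeId);       // aggregate the pass result
              $this->recalculateLearningProgress($activeId); // update derived progress
          }
      }

      private function rescoreAnswer(int $activeId, int $questionId): void {}
      private function recalculateTestResult(int $activeId): void {}
      private function recalculateLearningProgress(int $activeId): void {}
  }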

2 Status

  • Scheduled for: Not scheduled yet  (more likely a 4.4 feature, see JF comments of 2 May 2012)
  • Funding: Required
  • Development: Feature is to be developed by Databay AG (JF 16 Apr 2012: since this is a high-risk implementation, it should be done by the component maintainers only)

3 Additional Information

  • If you want to know more about this feature, its implementation or funding, please contact: Martin Studer, sr.solutions

4 Discussion

JF 16 Apr 2012: This would be a major change with several implications for tests. Please get in contact with the T&A maintainers, Databay AG. If they agree on its technical feasibility and are willing to implement it, we could include this in 4.3.

BH & MB 25 Apr 2012:
 
We have discussed this feature in depth and report the following:

  • We second that this is a very risky implementation. We point out the following issues to be considered:
    • Due to possibly 'hidden' consequences, like opening/closing access to elements that have such a test as a precondition, we fear the ability of a test editor to 'act conscientiously' is always limited.
    • Since a change would mean re-evaluating all test passes ever taken, such an update could raise considerable performance issues (see the batching sketch after this list).
    • Since not all questions are scored automatically, we cannot guarantee a consistently working re-evaluation. An example is the manually scored essay question: a change here would mean calling in all tutors to review all essays. We doubt that this is a feasible process step for edited tests with many existing datasets.
  • From a strictly technical point of view, an implementation is of course possible, but it would mean a lot of work, since all question types and their respective answer classes need to be overhauled to allow a non-pass-related re-evaluation of an answer. We cannot just inject a saved answer back into the question type, because active test/pass relations would be violated and the outcome would be unpredictable.
  • We strongly advise the JF to postpone this feature to release 4.4. If the JF decides to take the above-mentioned risks, we would feel a little better about the whole thing if we could take advantage of a prolonged testing phase.
  • Regarding the design, we have the following notes:
    • It should be decided whether the editing of questions with datasets present should be limited to administrators, as they can be expected to have better insight into the impact such an edit has on their systems.
    • Regarding the suggested setting to limit the use of this feature to certain tests, we fear that it may not have the desired effect. We expect test editors to leave this option enabled, because they would 'only use it in case of emergency anyway', while the need to edit a question in a test that prohibits editing may still arise.
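On the performance point above: one common mitigation would be to re-evaluate passes in bounded batches from a queue, e.g. driven by a cron job. A minimal sketch, assuming a hypothetical recalc_queue table and invented function names (not ILIAS API):

  <?php
  // Batched re-evaluation: each run processes a fixed number of queued
  // passes, so a single question save never re-scores every pass at once.
  function processRecalculationQueue(PDO $db, int $batchSize = 200): void
  {
      $stmt = $db->prepare(
          'SELECT active_id FROM recalc_queue ORDER BY active_id LIMIT ' . $batchSize
      );
      $stmt->execute();

      foreach ($stmt->fetchAll(PDO::FETCH_COLUMN) as $activeId) {
          rescorePass($db, (int) $activeId);            // apply the scoring rules again
          $db->prepare('DELETE FROM recalc_queue WHERE active_id = ?')
             ->execute([$activeId]);                    // this pass is done
      }
  }

  function rescorePass(PDO $db, int $activeId): void {} // scoring itself omitted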

BH & MB 26 Apr 2012:
We discussed this feature with Martin Studer, and he came up with a feasible approach to the issue of manually scored answers: when saving, the editor should be offered a choice of whether the scoring status of this question is set back to unscored or not.
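A minimal sketch of that selection, assuming a simplified representation of pass data (invented names, not actual ILIAS code):

  <?php
  // If the editor ticks "reset scoring status", existing manual scores are
  // discarded and the answers reappear in the tutors' manual scoring list;
  // otherwise the previously awarded points stay untouched.
  function applyManualScoringChoice(bool $resetToUnscored, array $passes): array
  {
      foreach ($passes as &$pass) {
          if ($resetToUnscored) {
              $pass['scoring_status'] = 'unscored';
              $pass['points'] = null;
          }
      }
      unset($pass);
      return $passes;
  }

  $passes = [
      ['active_id' => 1, 'scoring_status' => 'scored', 'points' => 3],
      ['active_id' => 2, 'scoring_status' => 'scored', 'points' => 5],
  ];
  var_export(applyManualScoringChoice(true, $passes)); // both become unscored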

BH & MB 26 Apr 2012:
With MJ involved, we came across yet more difficulties. While the general process of re-evaluation is still explainable in simple terms, the many flavours of question types pose an enormous challenge. Each of them needs to be dealt with separately for this feature: all types need a new editing mode that limits the capabilities of the form to prevent unwanted/unexpected behaviour.
 
Before touching code, we need to develop a concept on a "per question type and per setting change" basis, describing the behaviour of every single question type in case of a modification with test data present. This paper must enable the JF to decide on the details of each consequence.
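One way such a per-question-type concept could be expressed, sketched under the assumption that each type declares its own restricted capabilities (all names invented):

  <?php
  // Each question type declares which form fields may still be edited once
  // test data exists, so the restricted editing mode can be enforced
  // generically for every type.
  interface RestrictedEditing
  {
      /** @return string[] fields that stay editable with datasets present */
      public function editableFieldsWithResults(): array;
  }

  class MultipleChoiceQuestion implements RestrictedEditing
  {
      public function editableFieldsWithResults(): array
      {
          // Wording fixes are comparatively safe; adding or removing answer
          // options is not, so those fields are excluded here.
          return ['title', 'question_text', 'answer_texts'];
      }
  }

  class EssayQuestion implements RestrictedEditing
  {
      public function editableFieldsWithResults(): array
      {
          return ['title', 'question_text', 'max_points'];
      }
  }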

MST 02 May 2012:
 
We discussed the functionality with Databay and arrived at a very comprehensive solution for validating the changes made by a test administrator. However, the effort of implementing this solution is out of proportion to the benefit it would bring.
 
Could you please discuss the following pragmatic approach:
1. To change the questions of a test that already contains results, the test must be taken offline. This prevents side effects for tests that participants are filling in at that moment.
2. When a question is saved, ILIAS checks whether it is already included in a test with user results (in offline status) that has the corresponding flag set allowing the questions to be changed even after the test has started. This check takes place in the following method: assQuestionGUI -> Save(). The question change is saved with a corresponding warning but without any further plausibility check (see the sketch after this list).
3. The test in which the question is included is marked accordingly in ILIAS. The mark indicates that the test results have to be recalculated (whether this happens directly in the background or is handled by a cron job is up for discussion). As long as the test results have not been reprocessed, the "wrong" test results are still displayed.
4. The process that is then triggered must re-evaluate the questions for all users of the test. I would suggest writing a separate class that is responsible for the subsequent scoring of a test. The methods for evaluating a question already exist in ILIAS. Questions that cannot be scored automatically are not scored automatically; this is left to the tutor.
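A hedged sketch of steps 2 and 3 (assQuestionGUI -> Save() is the real entry point named above, but the wrapper function, table, and column names here are invented for illustration):

  <?php
  // Step 2: refuse the save unless every affected test with results is
  // offline and carries the "editable after test start" flag.
  // Step 3: mark the affected tests so their results are recalculated later.
  function checkAndMarkOnQuestionSave(PDO $db, int $questionId): void
  {
      $stmt = $db->prepare(
          'SELECT test_id FROM test_question
            WHERE question_id = ? AND has_results = 1
              AND (is_online = 1 OR allow_edit_after_start = 0)'
      );
      $stmt->execute([$questionId]);
      if ($stmt->fetchColumn() !== false) {
          throw new RuntimeException(
              'Affected tests must be offline and flagged as editable after start.'
          );
      }

      $db->prepare(
          'UPDATE test_question SET needs_recalculation = 1 WHERE question_id = ?'
      )->execute([$questionId]);
  }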

We leave all side effects to the test administrator. He is responsible for adapting questions included in tests that have already been started in a sensible way. If, for example, a test administrator removes the fourth option of a multiple choice question with four answer options, and that option counted as correct before the revision, a user who had selected the fourth answer now receives 0 points, since this answer option no longer exists.
 
In my opinion, we can and should do without elaborate validation of changes. We can hand this responsibility over to the respective test administrator.

JF 2 May 2012: BH and MB will discuss this approach internally and present their findings and suggestions at the next JF. Currently we have the feeling that this feature will not become part of 4.3 due to its complexity (see also Revision of Test Evaluation).

MB / BH 16 May 2012:
On the individual points of MST's approach:

  1. We definitely support that tests must be taken offline for any editing.
  2. We still have to find out, on a per-question basis, whether such changes are possible _at all_. The background is that the current implementation quite often does not update records but deletes them and creates new ones. This breaks the referential integrity of the involved tables with respect to auto_increments/sequences, and we fear that a quick approach, with marks on questions being eligible for editing, would dramatically limit the use of the feature.
  3. The JF has to decide whether showing wrong data for such a period of time is acceptable. The maintainers have no technical objections to that in general.
  4. As maintainers, we warn about the consequences for manually scored questions. Regardless of other modifications, the scores achieved with manually scored questions may alter the overall test result for many users until the new score is applied. We point out the implications this has for the learning progress and LP-based access to features/subsequent tests. We would instead try to keep manual scores and would dig deeper into the matter if this idea finds general approval.
Regarding the example with the multiple choice question, we allow ourselves to answer with an example as well:
 
If saving the answers for this question type means that the answers get new IDs due to the present delete/insert behaviour instead of update behaviour, the removal of one answer leads to no previously given answer being mappable to the new answers of the question. All points would be lost: no answer given can then correspond to any answer available. Total loss. We have confirmed this for the most popular question types. There is no way around making this modification a big, time-consuming one, with every question type being analyzed and overhauled.
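A small self-contained illustration of this effect (plain PHP, no ILIAS code involved):

  <?php
  // Answer ids before the edit; the stored user answer references id 11
  // ('B'), an option that remains correct after the edit.
  $oldAnswers = [10 => 'A', 11 => 'B', 12 => 'C', 13 => 'D'];
  $storedUserAnswer = 11;

  // Delete/insert behaviour: removing 'D' re-inserts the remaining options
  // with fresh ids from the sequence instead of updating them in place.
  $newAnswers = [14 => 'A', 15 => 'B', 16 => 'C'];

  // Re-evaluation: stored id 11 matches no current answer id, so the user
  // scores 0 points even though 'B' itself was never touched. Total loss.
  $points = array_key_exists($storedUserAnswer, $newAnswers) ? 1 : 0;
  echo $points; // prints 0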
 
We are of course aware that everyone who thinks about the mechanics involved in "saving a question" when changing/editing/modifying would expect the questions/answers to be updated. We do not get our cars squashed and rebuilt when we hand them to a mechanic to change the tires... But we are afraid the current state of affairs is that this, in fact, is what happens. It's simply not that simple. Historically, tests were designed to be uneditable once test results are present. So the implementation itself is stable as such - it just does not come in handy for this feature.

5 Follow-Up
