Competence-driven Question Groups and Test Parts
1 Initial Problem
Rallying Cry: We want to be able to measure student performance and competence levels more accurately and combine this automated assessment with self-evaluation data.
At the moment, ILIAS only offers competence measurement in tests based on the evaluation of single questions in order to derive conclusions as to which competences have been acquired by the learner. In many cases, it is virtually impossible to measure a competence (level) on the basis of the results of a single, isolated question. However, there is currently no way of grouping test questions in ILIAS.
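To sketch what such a grouping could mean technically (this is an illustration only, not an existing ILIAS API; all names, fields and thresholds are hypothetical assumptions): a question group aggregates the points of its member questions and maps the reached percentage to a competence level.

```python
from dataclasses import dataclass, field

@dataclass
class QuestionResult:
    question_id: int
    points_reached: float
    points_max: float

@dataclass
class QuestionGroup:
    """Hypothetical question group: several questions measuring one competence."""
    competence_id: int
    results: list = field(default_factory=list)

    def percentage(self) -> float:
        # Aggregate over the whole group instead of judging single questions.
        max_points = sum(r.points_max for r in self.results)
        return sum(r.points_reached for r in self.results) / max_points if max_points else 0.0

    def competence_level(self, thresholds=(0.25, 0.5, 0.75)) -> int:
        # Level 0..3, depending on how many thresholds the group percentage passes.
        return sum(self.percentage() >= t for t in thresholds)

# Two questions on multiplying complex numbers, 1 of 5 points reached in total:
group = QuestionGroup(competence_id=7,
                      results=[QuestionResult(1, 0, 3), QuestionResult(2, 1, 2)])
print(group.percentage(), group.competence_level())  # 0.2 0
```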
In addition, it is currently not possible to split a test into several test parts that behave differently. This includes per-part "playback settings" (like 'mix questions for this part' and 'don't mix questions for that part') as well as whether a test part should be shown at all (adaptive testing), depending on how well the learner performed in a previous part.
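Purely to illustrate the idea (no such structure exists in ILIAS today; all names are assumptions), per-part behaviour could be modelled as a small settings record:

```python
from dataclasses import dataclass

@dataclass
class TestPartSettings:
    """Hypothetical per-part 'playback settings' for a test split into parts."""
    part_id: int
    shuffle_questions: bool   # 'mix questions for this part' vs. keep a fixed order
    shown_by_default: bool    # False: visibility is decided by an adaptive rule

parts = [
    TestPartSettings(part_id=1, shuffle_questions=True, shown_by_default=True),
    TestPartSettings(part_id=2, shuffle_questions=False, shown_by_default=False),
]
```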
Furthermore, in order to assess competences properly, self-evaluations (in contrast to answering test questions) are a crucial tool that is missing entirely from the testing procedures ILIAS offers right now. It is not possible to alternate blocks of self-evaluation (e.g. a short competence-based survey) with questions (or groups of questions, or test parts) that try to assess student performance in this area of competence / this field.
If ILIAS were able to combine the feedback on how well a student did in a test part with data on how they evaluated themselves (with regard to these competences or learning outcomes), it would be possible to have a much more effective impact on the student's learning process by showing them truly relevant learning suggestions and "learning progress" feedback.
In a way, this goal requires the ILIAS test module to have a gap analysis tool similar to the one the survey module already offers, but with an internal interface that makes this data usable for other ILIAS modules and services. With this, it would be possible to use this information to point out where the largest differences or gaps are located in terms of "what students think they know and can do" (self-evaluation), "what they actually know and do with this knowledge" (student performance) and where they should actually be according to the competence profile ('what they should know') for their course of studies (a predefined set of competences and competence levels that is the target for a 'typical' student of a subject).
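As a rough illustration of such a gap analysis (an assumed data model, not an existing ILIAS interface): for each competence, the three values are compared on a common level scale and the gaps are reported.

```python
def gap_analysis(self_eval: dict, measured: dict, target: dict) -> dict:
    """Compare self-evaluated, measured and target competence levels.

    All three dicts map competence_id -> level on the same scale; this data
    model is an assumption for the sake of the example.
    """
    report = {}
    for cid in target:
        report[cid] = {
            "self_vs_measured": self_eval.get(cid, 0) - measured.get(cid, 0),
            "measured_vs_target": target[cid] - measured.get(cid, 0),
        }
    return report

# Example: competence 7 is overestimated by one level and two levels below target.
print(gap_analysis({7: 3}, {7: 2}, {7: 4}))
# {7: {'self_vs_measured': 1, 'measured_vs_target': 2}}
```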
The goal is to make the ILIAS test module fit for these challenges.
2 Conceptual Summary
We assume that we are looking at a computer science student in their first couple of semesters. The students are required / invited to use a diagnostic "tool" / programme based on ILIAS tools that tries to determine where their strengths and weak spots are for a certain course or lecture (competence-based).
After starting the test, the student is required to fill out a short survey that asks them how well they think they perform in the field of analysis and calculus. After completing this first part ('survey'), ILIAS directs the student immediately to the next part, which presents test questions. In this test part the student fails a basic question (or a question group) regarding complex numbers by getting only 0 out of 3 points, because they were not able to multiply complex numbers properly.
This triggers the test to show an additional test part that contains two groups of questions, each comprising 4 questions regarding several aspects of complex numbers. These question groups address and test (measure the performance for) a set of competences to yield data for the ILIAS competence management service. Another student, who got 2 out of 3 points on the trigger question (or the trigger question group), is not confronted with the "in-depth" competence assessment block for "complex numbers" but moves on directly.
After that, the test "resumes" its normal course and presents its other survey and test parts, which again contain question groups that address other mathematical areas by addressing and measuring competence sets for calculating with basic trigonometric functions, some vector calculations, etc.
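To make the routing behaviour of this scenario concrete, here is a minimal sketch (all names are hypothetical; the threshold follows the 0/3 vs. 2/3 example above): a rule attached to a trigger question decides whether the "in-depth" part is shown at all.

```python
from dataclasses import dataclass

@dataclass
class RoutingRule:
    """Hypothetical adaptive-testing rule: show a follow-up test part only
    if the score on a trigger question (group) stays below a threshold."""
    trigger_id: int
    threshold: float      # minimum points needed to skip the extra part
    extra_part_id: int

    def parts_to_show(self, points_reached: float) -> list:
        return [self.extra_part_id] if points_reached < self.threshold else []

# Student A scored 0/3 on the complex-numbers trigger -> in-depth part is shown.
rule = RoutingRule(trigger_id=42, threshold=2, extra_part_id=99)
print(rule.parts_to_show(0))  # [99]
# Student B scored 2/3 -> moves on directly.
print(rule.parts_to_show(2))  # []
```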
Since this development involves large changes in the behaviour of the test object, it might be a good idea to break it down into several smaller parts:
- Introduction of Question Groups
- Competence Management for Question Groups
- Introduction of Learning Sequences (new container object) / an Object Sequence Player object (deprecated: Introduction of Test Parts)
- Competence Management for Test Parts
- Routing Rules in Test Parts for Adaptive Testing Scenarios
- Surveys as Test Parts
- Gap Analysis of Competence Results from Tests
3 Contact
- Author of the Request: Glaubitz, Marko [mglaubitz]
- Maintainer: {Please add your name before applying for an initial workshop or a Jour Fixe meeting.}
- Implementation of the feature is done by: {The maintainer must add the name of the implementing developer.}
4 Funding
If you are interested in funding this feature, please add your name and institution to this list.
- We might need some help here, but the project "kosmic - KompetenzOrientierte Selbstlernangebote für Mathematik, Interkulturalität und Chemie" at Universität Freiburg is interested in funding parts of this.
- ...
5 Discussion
AT: 2017-03-07: To ensure that your rallying cry can be met with swift discussion and implementation, do separate this article into several independent articles. Your maximum slot at the Jour Fixe is 30 minutes ;-)
- Editing Question Blocks (much preferred term over "Groups"; that label is taken, and "Question Blocks" is an existing survey concept) in Question Pools
- I am not sure whether this touches on "more than one question per screen"
- Question Blocks as competence triggers
- Test parts
- Play-back-something / Adaptive testing (much preferred term would be routing rules or routing questions, since this is an existing survey concept)
- Tightly packing Competence Self-Evaluation and Competence Measurement
- Contrasting Results of Competence Self-Evaluation and Competence Measurement
- Visualization of Gap analysis
2017-03-08, Glaubitz, Marko [mglaubitz]: Alexandra, your wish is my command. Done. Though some parts still need some finishing and polish... ;)
6 Implementation
{The maintainer has to give a description of the final implementation and add screenshots if possible.}
Test Cases
Test cases completed at {date} by {user}
- {Test case number linked to Testrail} : {test case title}
Approval
Approved at {date} by {user}.