Dr. Markus Dahinden

Contact
Name: Markus Dahinden
Phone (work): +41 44 632 53 52

Design principles and evaluation of a reliable CBA system for the collection of valid performance data

Measuring competences in higher education is challenging: on the one hand, it can be difficult for lecturers to formulate competence-oriented exam questions; on the other hand, evaluating exams is time-consuming and can absorb lecturers for days. With a computer-based system, the development of exam questions can be optimised through process management, and the time spent on evaluating exams can be reduced significantly. The aim of this dissertation is to create the basis for a sustainable integration of computer-based assessment (CBA) in higher education. This rests on two prerequisites: first, the system has to be secure, delivering reliable and provable exam results even in heterogeneous system environments; second, the designed exam questions need to validly measure the cognitive performance of the students.

To examine the security of existing systems, their operating modes were analysed and the potential hazards were subjected to a risk analysis. The results show that these systems are error-prone due to data-network failures or overload of the infrastructure. Additionally, the existing systems offer only rudimentary technical means to establish the legally relevant facts of an exam. This is critical because eliminating the physical exam document from the exam process constitutes a paradigm shift in that process. Based on the findings of the risk analysis, five design principles for secure CBA systems were developed. These include the use of digital signatures and sequence numbers to verify the integrity of the results, as well as the partial autonomy of individual components of the exam system. Based on these design principles, the Sioux exam suite was implemented as a proof of concept. It has been evaluated in practice with varying numbers of students, both in formative evaluations of training success and in graded summative exams.
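The dissertation does not spell out the signing scheme here, but the combination of signatures and sequence numbers can be illustrated with a minimal sketch. The example below uses Python's standard-library HMAC as a stand-in for a real digital signature (a production system would use asymmetric keys); the key, record fields, and function names are hypothetical.

```python
import hmac, hashlib, json

SECRET = b"exam-server-key"  # stand-in for a private signing key (hypothetical)

def sign_answer(seq, student_id, answer):
    """Attach a sequence number and a MAC to one submitted answer."""
    record = {"seq": seq, "student": student_id, "answer": answer}
    payload = json.dumps(record, sort_keys=True).encode()  # canonical serialization
    record["mac"] = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return record

def verify_log(records):
    """Check that every MAC is valid and the sequence numbers are gap-free."""
    for expected_seq, rec in enumerate(records):
        body = {k: v for k, v in rec.items() if k != "mac"}
        payload = json.dumps(body, sort_keys=True).encode()
        good = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(rec["mac"], good):
            return False  # a record was altered after signing
        if rec["seq"] != expected_seq:
            return False  # an answer is missing or reordered
    return True

log = [sign_answer(i, "s01", a) for i, a in enumerate(["B", "A", "D"])]
assert verify_log(log)
log[1]["answer"] = "C"   # tampering with a stored answer...
assert not verify_log(log)  # ...is detected on verification
```

Because each record carries both its position and its signature, an auditor can detect deleted, reordered, or altered answers without a physical exam document.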
The results show that a CBA system built on these design principles complies with the requirements of a secure and reliable system. Case-based exam questions were developed to validly measure the cognitive achievement of the students. They are based on three different question types, which can be corrected automatically and are therefore suitable for CBA systems. To empirically demonstrate the quality of those questions, a development model was created, which is presented in this thesis. Both the internal consistency and the validity of the exam results were evaluated using external criteria. This analysis shows that competence-oriented and valid measurement is possible with case-based exam questions.

Through this research, the basis for reliable, automatic measurement of valid performance data has been laid. These data are primarily used for grading students, but they might also serve to provide students with individualized feedback that informs future learning activities. The study concludes with an analysis of whether performance data can also support the development of instruction. To this end, the performance data, e.g. survey results and assessments of student performance through expert and self-evaluation, were integrated with data on the teaching process. The aggregated datasets were then clustered based on the performance data. The results show that educational data, combined with retrospective and prospective analyses, can indeed support individual learning development. These analyses not only make it possible to identify the causes and effects of teaching effort, but can also demonstrate, for example, causal links between feedback, self-evaluation, and motivation.
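The abstract does not name the clustering method used on the aggregated datasets. As a minimal sketch, assuming a plain k-means over per-student performance vectors (the feature layout, data values, and function name below are all hypothetical):

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means on lists of numeric feature vectors."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)  # initial centers drawn from the data
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each point to its nearest center (squared Euclidean distance)
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        # recompute each center as the mean of its cluster (keep old center if empty)
        centers = [[sum(col) / len(cl) for col in zip(*cl)] if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers, clusters

# Hypothetical per-student records: [summative score, self-evaluation, survey rating]
students = [[0.90, 0.80, 0.70], [0.85, 0.90, 0.80],
            [0.30, 0.40, 0.20], [0.25, 0.30, 0.35]]
centers, clusters = kmeans(students, k=2)
assert sum(len(c) for c in clusters) == len(students)
```

Grouping students this way is one plausible route to the individualized feedback the text describes: each cluster can be addressed with its own follow-up learning activities.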