Data-driven Feedback Generation for Modelling Exercises with an Application to Mixed Integer Programming
While predefined static tests, such as the familiar unit tests for programming exercises, can provide feedback for modelling and programming exercises at almost arbitrary scale, they often fail to match the quality of manual feedback from domain experts.
We assume that for a given modelling exercise, students hand in only a small number of solutions that are fundamentally different from each other.
Based on this assumption, we seek to generate high-quality feedback by clustering student solutions and manually annotating one representative from each cluster.
Ideally, this approach enables teachers to give high-quality feedback to a large number of students while grading only a handful of solutions.
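The cluster-and-annotate workflow described above can be sketched as follows. This is a minimal illustration, not the paper's actual method: the feature vectors, the greedy single-pass clustering, the L1 distance, and all names are hypothetical stand-ins for whatever representation and clustering procedure the full approach uses.

```python
def cluster_solutions(solutions, distance, threshold):
    """Greedy clustering: assign each solution to the first cluster whose
    representative lies within `threshold`; otherwise open a new cluster."""
    clusters = []  # each cluster: {"rep": solution id, "members": [ids]}
    for sid, feats in solutions.items():
        for c in clusters:
            if distance(solutions[c["rep"]], feats) <= threshold:
                c["members"].append(sid)
                break
        else:
            clusters.append({"rep": sid, "members": [sid]})
    return clusters

def propagate_feedback(clusters, manual_feedback):
    """Copy the teacher's feedback on each representative to all
    members of that representative's cluster."""
    return {m: manual_feedback[c["rep"]]
            for c in clusters for m in c["members"]}

# Toy data: each solution reduced to a feature vector (hypothetical
# features, e.g. counts of constraint types in a MIP model).
def l1(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

solutions = {
    "alice": (3, 1, 0),
    "bob":   (3, 1, 1),   # near-duplicate of alice's model
    "carol": (0, 5, 2),   # fundamentally different model
}
clusters = cluster_solutions(solutions, l1, threshold=1)
feedback = propagate_feedback(
    clusters,
    {"alice": "Good model.", "carol": "Missing a linking constraint."},
)
# The teacher grades two representatives; all three students get feedback.
```

Under the assumption of few fundamentally different solutions, the number of clusters, and hence the manual grading effort, stays small even as the number of submissions grows.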