Validity and validation of innovative educational assessments
Abstract
Innovations in learning science, technology, and psychometrics provide opportunities to develop innovative assessments in which principles from learning theory and assessment become more closely intertwined. Some of these assessments aim to evaluate the learning of complex competencies through digital authentic tasks or include game-based elements to create an immersive assessment experience. Data collected within these assessments tend to be multi-modal, in the sense that both process data and outcome data are used to make inferences about student learning. These inferences differ from those drawn in more ‘traditional’ summative educational assessment contexts; it is therefore important to establish the validity of these innovative assessments. This presentation explores the possibilities of using the argument-based approach to validation (Kane, 2013; Kane & Wools, 2019) for innovative digital assessments with multi-modal data collection. To this end, an interpretive argument is discussed that specifies the inferences drawn within these assessments, and the combination of data sources needed to establish a validity argument for these assessments is elaborated. Special attention is paid to the potential benefits, as well as the new threats to validity, that arise when process data are used to make claims about student performance.