Matthias von Davier & Stephen Sireci; moderated by Saskia Wools


Research on log data from educational assessment programs sometimes involves inferring students’ cognitive processes from the data captured in log files as students navigate a test; other research treats process data, such as response latency, purely as predictors or covariates of achievement. Much of this research derives constructs thought to represent students’ cognitive behaviors from these data. Such hypothesized constructs require validity evidence confirming that the derived variables actually represent them. In other cases, assessment systems are designed from the outset to capture specific response behaviors thought to represent cognitive constructs. These directly gathered variables likewise need validity evidence supporting their empirical relation to the construct, and they need to be vetted by an interdisciplinary team that includes technical as well as content experts, given the ever-changing nature of interactive and increasingly intelligent technology-based platforms. The similarities and differences between these two approaches will be discussed with respect to construct validity (are these variables measures of the intended constructs?) and construct-irrelevant variance (what information can these variables provide to improve test validity?).