Master's projects and master's theses in the spring and fall semesters 2025

In the spring and fall semesters of 2025, students are currently working on the following topics as part of their master's projects and theses:

  • Pipelines for interpretable machine learning in psychological research
  • Evaluation of significance tests for Shapley values

Fall semester 2024 / Luana Brunelli: Reliability measures in intensive longitudinal data -- Understanding generalizability and multilevel theory with interactive tools

The increasing use of intensive longitudinal data (ILD) in psychological research makes the reliable measurement of dynamic processes over time crucial. Particularly in contexts with many repeated measurements, high reliability is a necessary prerequisite for the validity and interpretability of ILD.

This project explores the applicability and performance of advanced reliability measures within the frameworks of Generalizability Theory and Multilevel Theory. These methodological frameworks aim to provide a more detailed understanding of measurement consistency and precision by accounting for the nested structures and the variability observed across individuals and time points.

In a simulation study, we evaluated the effects of various design parameters on reliability outcomes in ILD settings. Our simulations shed light on how factors such as sample size, the number of time points, and measurement error influence the robustness and interpretability of reliability estimates. We also illustrate how certain patterns in ILD data, such as interactions between specific persons and items, affect different reliability measures in different ways. Currently, we are developing an interactive Shiny application to present these findings in a user-friendly format. This tool allows researchers to dynamically explore the impact of methodological decisions on reliability estimates, bridging the gap between theoretical advancements and practical application.
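To make the idea concrete, here is a minimal, illustrative sketch (not the project's actual simulation code) of how such a design-parameter study can work: we simulate two-level data (persons by time points) under assumed variance components, estimate the between-person variance with one-way ANOVA estimators, and observe how the reliability of person-level means depends on the number of time points. All parameter names and values are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_reliability(n_persons, n_timepoints,
                         var_person=1.0, var_error=1.0, n_reps=200):
    """Average estimated reliability of person means from simulated
    two-level data: y_it = person_i + e_it, with
    person_i ~ N(0, var_person) and e_it ~ N(0, var_error)."""
    estimates = []
    for _ in range(n_reps):
        persons = rng.normal(0.0, np.sqrt(var_person), size=(n_persons, 1))
        errors = rng.normal(0.0, np.sqrt(var_error), size=(n_persons, n_timepoints))
        y = persons + errors
        # One-way ANOVA mean squares: between persons and within persons
        ms_between = n_timepoints * y.mean(axis=1).var(ddof=1)
        ms_within = ((y - y.mean(axis=1, keepdims=True)) ** 2).sum() \
            / (n_persons * (n_timepoints - 1))
        # Variance-component estimate and reliability of the person mean
        var_p_hat = (ms_between - ms_within) / n_timepoints
        estimates.append(var_p_hat / (var_p_hat + ms_within / n_timepoints))
    return float(np.mean(estimates))

# More time points per person -> more reliable person-level means
print(simulate_reliability(100, 5))
print(simulate_reliability(100, 20))
```

With equal person and error variance, the theoretical reliability is var_person / (var_person + var_error / n_timepoints), so averaging over more time points pushes the estimate upward, mirroring the role the number of measurement occasions plays in the simulation study described above.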

By integrating simulation-based results with interactive visualization, our project emphasizes the importance of reliability in ILD research and provides researchers with innovative tools for making informed decisions in their study designs.

Spring semester 2024 / Jan Radek: Partial Credit trees meet the partial gamma coefficient


In psychology, tests such as personality or language assessments undergo evaluation before being released to the public. A common step in this evaluation process is to determine whether all items are fair across different groups of individuals. If this is not the case for an item in a test, it is referred to as differential item functioning (DIF) or differential step functioning (DSF). 

In this project, we focus on Partial Credit Trees (PCtree), a method that combines the partial credit measurement model from item response theory with decision trees from machine learning. We extended PCtrees by incorporating an effect size measure for DIF/DSF in polytomous items, namely the partial gamma coefficient from psychometrics. Researchers can now see which items are affected by DIF/DSF and assess the meaningfulness of the DIF/DSF effect size.

During his internship, Jan showed in a series of simulation studies that the partial gamma coefficient aids researchers in evaluating whether splits in the tree are meaningful, identifying DIF and DSF items, and preventing unnecessary tree growth when effect sizes are negligible. Additionally, we addressed the issue of item-wise testing corrections, especially in longer assessments, and illustrated the new method using data from the LISS panel.

The simulation studies that Jan conducted are part of a published article that can be accessed here: https://link.springer.com/article/10.1007/s41237-024-00252-3