Methods Center

Recent developments in our research

Mediation with Latent Variables and Confounders

We just submitted a new paper on latent variable modeling with causal mediation models when unobserved confounding is present in the data. We provide evidence that the new g-estimation-based estimator is robust to confounding, and we examine its finite-sample properties. A preprint is available on arXiv.org.

 

Morelli, S., Faleh, H., & Brandt, H. (submitted). RAPSEM: Identifying Latent Mediators Without Sequential Ignorability via a Rank-Preserving Structural Equation Model. Article GitHub

Tutorial on DLC-SEM

We submitted this tutorial on DLC-SEM for modeling intensive longitudinal data with heterogeneous developmental trajectories. Starting with Bayesian CFA and time series implementations, we build up to complex dynamic structural equation models (DSEM) and their extension with Hidden Markov Models to capture sudden changes and factorial switches. This hands-on tutorial is aimed at applied users as well as students who want to learn about this framework.

 

Faleh, R., Sofia, M., Andriamiarana, V., Roman, Z. J., Flückiger, C., & Brandt, H. (submitted). Dynamic Latent Class Structural Equation Modeling: A Hands-On Tutorial for Modeling Intensive Longitudinal Data. Article GitHub repository

Two new papers on Dynamic Latent Class Structural Equation Modeling for intensive longitudinal data

In the first paper, we explore sample size requirements for complex latent variable models that capture heterogeneous growth patterns via Hidden Markov models. We provide recommendations for the minimal number of subjects and time points needed to obtain reliable parameter estimates.

In the second paper, we extend this research by investigating the use of Bayesian shrinkage priors, particularly for the Hidden Markov model with many covariates. We conclude that shrinkage priors might not always be the best choice.


Andriamiarana, V., Kilian, P., Brandt, H., & Kelava, A. (2025). Are Bayesian Regularization Methods a Must for Multilevel Dynamic Latent Variable Models? Behavior Research Methods. doi: 10.3758/s13428-024-02589-9

Andriamiarana, V., Kilian, P., Kelava, A., & Brandt, H. (2023). On the requirements of nonlinear dynamic latent class SEM: A simulation study with varying number of subjects and time points. Structural Equation Modeling, 30, 789–806. doi: 10.1080/10705511.2023.2169698
 

Comment paper on causal inference with single case experimental designs

In a recent comment paper, I highlight some problems with causal inference in the social sciences that stem from the rather careless use of the underlying "sequential ignorability" assumption. While the paper does not offer a solution that overcomes these limitations, it should provide a basis for discussing the plausibility of causal models that rely on the sequential ignorability assumption.


Brandt, H. (2024). Causal definitions vs. casual estimation: A reply to Valente, Rijnhart, and Miocevic (2022). Psychological Methods, 29, 589–602. doi: 10.1037/met0000544
 

New publication on the detection of inattentive behavior

We provide a new model to detect inattentive (careless or insufficient-effort) behavior in questionnaire responses. We use a Hidden Markov Model that combines (a) a theory-derived measurement model for attentive respondents and (b) a person-specific model for inattentive behavior. Respondents can switch between the attentive and inattentive states at any time point (i.e., at any question). This novel approach takes the dynamic nature of inattention into account and overcomes subjective cut-offs and similar procedures. We illustrate the approach with an empirical data set in which we manipulated the attention of the participants. In the supplement, we provide links to the model and data on our GitHub pages.
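The switching mechanism can be illustrated with a minimal two-state Hidden Markov filter. Everything below (transition probabilities, response distributions, the point of switching) is hypothetical and chosen only to show how the filtered probability of the inattentive state reacts once response behavior changes; it is not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# state 0 = attentive (peaked, theory-derived response distribution),
# state 1 = inattentive (uniform random responses) -- toy numbers
T = np.array([[0.95, 0.05],    # P(stay attentive), P(switch to inattentive)
              [0.10, 0.90]])   # P(switch back),    P(stay inattentive)
pi = np.array([0.9, 0.1])      # initial state probabilities

n_items, n_cats = 20, 5
attentive = np.full((n_items, n_cats), 0.05)
attentive[:, 3] = 0.80                                # answers cluster on one category
inattentive = np.full((n_items, n_cats), 1 / n_cats)  # careless: uniform

# simulate one respondent who switches to inattention halfway through
states = np.array([0] * 10 + [1] * 10)
resp = np.array([rng.choice(n_cats, p=(attentive[i] if s == 0 else inattentive[i]))
                 for i, s in enumerate(states)])

# forward algorithm: filtered P(state | responses observed so far)
alpha = pi * np.array([attentive[0, resp[0]], inattentive[0, resp[0]]])
alpha /= alpha.sum()
filtered = [alpha]
for i in range(1, n_items):
    lik = np.array([attentive[i, resp[i]], inattentive[i, resp[i]]])
    alpha = (alpha @ T) * lik
    alpha /= alpha.sum()
    filtered.append(alpha)
filtered = np.array(filtered)
print(filtered[:, 1].round(2))  # P(inattentive) per item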


Roman, Z. J., Schmidt, P. W., Miller, J., & Brandt, H. (2024). Identifying dynamic shifts to careless and insufficient effort behavior in questionnaire responses: A novel approach and experimental validation. Structural Equation Modeling, 31, 775–793. doi: 10.1080/10705511.2024.2304816
 

 

Case-to-Condition Ratios in Qualitative Comparative Analysis: Adding Cases Instead of Removing Conditions by Judith Glaesser

In qualitative comparative analysis, as with all methods, there is a question of how many cases are needed to make an analysis robust. In deciding on the number of cases, a key consideration is the number of conditions to be analyzed. I suggest that adding cases is preferable to dropping conditions if there are too many conditions relative to the number of cases. I first consider the relationship between low n and limited diversity, followed by an exploration of two scenarios: (1) the cases in the study are the universe; (2) more cases could exist. I suggest that a simple rule or benchmark on how many cases to include in relation to the number of conditions is unlikely to be helpful, since this depends at least in part on the goals and circumstances of the research. Finally, this issue is not confined to QCA but affects all types of research.
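The case-to-condition tension can be made concrete with a back-of-the-envelope calculation: k binary conditions span 2**k logically possible configurations, so a fixed case set can cover at most a rapidly shrinking fraction of the property space as conditions are added. The function name and the choice of 20 cases below are illustrative only.

```python
def coverage(n_cases: int, n_conditions: int) -> float:
    """Upper bound on the share of configurations observable with n cases."""
    total = 2 ** n_conditions          # logically possible configurations
    return min(n_cases, total) / total

for k in (3, 4, 5, 6):
    print(f"{k} conditions: {2 ** k:2d} configurations, "
          f"coverage with 20 cases <= {coverage(20, k):.3f}")
```

With 3 conditions, 20 cases could in principle cover all 8 configurations; with 6 conditions, they can cover at most 20 of 64, which is the limited-diversity problem in miniature.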

 

Glaesser, J. (2024). Case-to-Condition Ratios in Qualitative Comparative Analysis: Adding Cases Instead of Removing Conditions. Field Methods. doi: 10.1177/1525822X241231479

Are Personality Tests Applicable to AI Large Language Models? A Critical Perspective

The use of established personality inventories to evaluate large language models (LLMs) such as GPT-4 or LLaMA-2 has gained traction in recent research. However, this study presents compelling evidence that such psychometric instruments are not directly transferable to artificial systems.

Specifically, the authors demonstrate that LLMs frequently agree with semantically opposing items (e.g., endorsing both introversion and extraversion) and fail to reproduce the factor structures typically observed in human personality data, such as the Big Five.

Drawing on key concepts from psychometrics – especially measurement invariance and construct validity – the authors argue that personality tests designed for humans cannot be meaningfully applied to LLMs without extensive adaptation. The paper calls for a cautious and theoretically grounded approach when assessing latent traits in artificial agents.

Literature

Sühr, T., Dorner, F. E., Samadi, S., & Kelava, A. (2023). Challenging the validity of personality tests for large language models. arXiv preprint, arXiv:2311.18351. https://doi.org/10.48550/arXiv.2311.18351

Two papers on DIF detection using Bayesian shrinkage methods

In two recent papers in collaboration with Dan Bauer from the University of North Carolina, we investigate how Bayesian shrinkage priors can be used to detect differential item functioning (i.e., items that function differently across levels of covariates). We explore this topic for binary and continuous items and use the general moderated nonlinear factor analysis framework, which makes it possible to include the many covariates that may be relevant in large-scale settings. All code for the different shrinkage priors is available here
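As a rough frequentist analogue of the shrinkage idea, lasso-type soft-thresholding sets small candidate DIF effects exactly to zero while leaving large ones (nearly) intact. The sketch below is an assumed toy example, not the papers' code; the effect sizes, noise level, and threshold lam are invented.

```python
import numpy as np

# Noisy estimates of group differences in item intercepts; only items
# 3 and 5 carry true DIF in this made-up example.
rng = np.random.default_rng(0)
true_dif = np.array([0.0, 0.0, 0.8, 0.0, -0.6])
est = true_dif + rng.normal(0, 0.1, size=5)

def soft_threshold(x, lam):
    """Lasso-style shrinkage: small effects are set exactly to zero."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

shrunk = soft_threshold(est, lam=0.3)
flagged = np.flatnonzero(shrunk) + 1   # 1-based indices of flagged items
print(flagged)                          # items whose DIF survives shrinkage
```

Spike-and-slab priors, as studied in the second paper, achieve a similar selection effect within a fully Bayesian model rather than via a fixed threshold.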


Brandt, H., Chen, S. M., & Bauer, D. J. (2023). Bayesian penalty methods for evaluating measurement invariance in moderated nonlinear factor analysis. Psychological Methods. doi: 10.1037/met0000552

Chen, S. M., Bauer, D. J., Belzak, W. M., & Brandt, H. (2022). Advantages of Spike and Slab Priors for Detecting Differential Item Functioning Relative to Other Bayesian Regularizing Priors and Frequentist Lasso. Structural Equation Modeling, 29, 122–139. doi: 10.1080/10705511.2021.1948335

A Latent Auto-regressive Approach for Bayesian Structural Equation Modeling of Spatially or Socially Dependent Data

Spatial analytic approaches (or social network auto-regressive models) are classic models in the econometrics literature but relatively new in the social and behavioral sciences. These models have two major benefits. First, dependence in the data, whether social or spatial, must be accounted for to obtain unbiased results. Second, analysis of the dependence provides rich additional information, such as spillover effects. In this article, we provide a cohesive nonlinear spatial structural equation modeling framework, the Bayesian Spatial Auto-Regressive Structural Equation Model (BARDSEM), which can simultaneously estimate latent interaction/polynomial effects and account for spatial effects with both exogenous and endogenous latent variables.
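The spatial auto-regressive (SAR) core that such models build on can be sketched via its reduced form, y = (I − ρW)⁻¹(xβ + ε). The snippet below simulates this data-generating process for a simple ring network; it illustrates the SAR mechanism and its spillover effects only, not the article's full latent-variable model (the network, ρ, and β are arbitrary).

```python
import numpy as np

# SAR process: y = rho * W @ y + x * beta + e,
# reduced form:  y = (I - rho*W)^{-1} (x * beta + e)
rng = np.random.default_rng(42)
n = 50
W = np.zeros((n, n))               # row-normalized ring network
for i in range(n):
    W[i, (i - 1) % n] = 0.5
    W[i, (i + 1) % n] = 0.5

rho, beta = 0.4, 1.5
x = rng.normal(size=n)
e = rng.normal(scale=0.5, size=n)
y = np.linalg.solve(np.eye(n) - rho * W, beta * x + e)

# the multiplier matrix carries the spillovers: a unit change in x at
# one unit also shifts y at connected units
M = np.linalg.inv(np.eye(n) - rho * W)
print(round(beta * M[0, 1], 3))    # spillover of unit 1's x onto unit 0's y
```

Ignoring the off-diagonal part of the multiplier matrix is precisely what biases naive analyses of dependent data.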


Roman, Z. J., & Brandt, H. (2023). A Latent Auto-regressive Approach for Bayesian Structural Equation Modeling of Spatially or Socially Dependent Data. Multivariate Behavioral Research.
 

The Evolution of Patients’ Concept of the Alliance and Its Relation to Outcome: A Dynamic Latent-Class Structural Equation Modeling Approach

The working alliance (WA) has been widely identified as a key concept for psychotherapy and allied health care services. The WA, measured at different phases of diverse kinds of therapies, has been shown to robustly predict posttreatment outcomes. However, the way clients' conceptualization of the alliance evolves over time, and the relation between this kind of conceptual change and subsequent symptom improvement, has not been investigated.
In this article, we investigated the psychometric properties of the WA Inventory with regard to the evolution – and fusion – of the underlying dimensions of the initial three-dimensional WA construct (task, goal, bond) into a single dimension over the course of treatment. Using Dynamic Latent Class Structural Equation Models (DLC-SEM), we analyzed data from two randomized clinical trials of cognitive-behavioral therapy for generalized anxiety disorder to evaluate structural changes in patients' self-reports of the quality of the alliance and subsequent treatment outcomes. Results indicated a dimensional fusion for 63% and 66% of the clients. This study demonstrates the potential to empirically examine prior theoretical propositions about the evolution (or stability) of the alliance as it unfolds over time.


Flückiger, C., Horvath, A. O., & Brandt, H. (2022). The Evolution of Patients' Concept of the Alliance and Its Relation to Outcome: A Dynamic Latent-Class Structural Equation Modeling Approach. Journal of Counseling Psychology, 69(1), 51–62.
 

Automated Bot Detection using Bayesian Latent Class Models in Online Surveys

Researchers have recently started to collect data on online platforms such as Amazon's Mechanical Turk. The advantages of these platforms, such as easy access to more representative samples, come at the cost of a recently discovered problem: statistical bots. These bots are deployed in large numbers by persons seeking financial gain to complete surveys automatically, and they contaminate any data obtained from online platforms. In this article, we provide a Bayesian latent class model that can be routinely applied to identify statistical bots in online questionnaires and tests. The model is very flexible and is based on plausible assumptions that are met in most empirical settings. It provides a confirmatory framework in that the meanings of the latent classes are predetermined (bots vs. non-bots). In a simulation study, we show that the model (a) identifies bots and (b) simultaneously provides unbiased estimates for the remaining participants. We illustrate the model and its capabilities with data from an empirical political ideation survey with known bots.
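The confirmatory two-class idea can be sketched as follows: if bots answer uniformly at random while humans follow peaked item response distributions, Bayes' rule yields a posterior bot probability for each respondent. This is an assumed toy version with known class-conditional distributions, not the article's model; all sample sizes and probabilities are invented.

```python
import numpy as np

# class "human": peaked response distributions; class "bot": uniform
rng = np.random.default_rng(7)
n_h, n_b, n_items, n_cats = 200, 50, 10, 5

human_p = np.full((n_items, n_cats), 0.05)
human_p[:, 1] = 0.80                      # humans favor one category per item
humans = np.array([[rng.choice(n_cats, p=human_p[j]) for j in range(n_items)]
                   for _ in range(n_h)])
bots = rng.integers(0, n_cats, size=(n_b, n_items))
data = np.vstack([humans, bots])          # first 200 humans, then 50 bots

# posterior P(bot | responses) via Bayes' rule
log_h = np.log(human_p[np.arange(n_items), data]).sum(axis=1)
log_b = n_items * np.log(1 / n_cats)
prior_b = n_b / (n_h + n_b)
post_b = 1 / (1 + (1 - prior_b) / prior_b * np.exp(log_h - log_b))
print(int((post_b > 0.5).sum()), "respondents flagged as likely bots")
```

In the full model the class-conditional distributions are of course estimated rather than known, which is what makes the latent class machinery necessary.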


Roman, Z. J., Brandt, H., & Miller, J. M. (2022). Automated Bot Detection Using Bayesian Latent Class Models in Online Surveys. Frontiers in Psychology.