
Accurate Models Vs. Accurate Estimates: A Simulation Study of Bayesian Single-case Experimental Designs

Overview
Publisher Springer
Specialty Social Sciences
Date 2021 Feb 12
PMID 33575987
Citations 4
Abstract

Although statistical practices to evaluate intervention effects in single-case experimental designs (SCEDs) have gained prominence in recent times, models have yet to incorporate and investigate all of their analytic complexities. Most of these statistical models incorporate slopes and autocorrelations, both of which contribute to trend in the data. The question that arises is whether, in SCED data that show trend, there is indeterminacy between estimating slope and autocorrelation, because both contribute to trend and the data have a limited number of observations. Using Monte Carlo simulation, we compared the performance of four Bayesian change-point models: (a) intercepts only (IO), (b) slopes but no autocorrelations (SI), (c) autocorrelations but no slopes (NS), and (d) both autocorrelations and slopes (SA). Weakly informative priors were used to remain agnostic about the parameters. Coverage rates showed that for the SA model, either the slope effect size or the autocorrelation credible interval almost always erroneously contained 0, and the Type II error rates were prohibitively large. Considering the 0-coverage and coverage rates of the slope effect size, the intercept effect size, mean relative bias, and second-phase intercept relative bias, the SI model outperformed all other models. Therefore, we recommend that researchers favor the SI model over the other three. Research studies that develop slope effect sizes for SCEDs should evaluate the performance of the statistic by taking into account coverage and 0-coverage rates; these metrics helped uncover patterns that were not apparent in other simulation studies. We underline the need for investigating the use of informative priors in SCEDs.
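The confound the abstract describes — that both a within-phase slope and lag-1 autocorrelated errors can produce visible trend in a short AB series — can be illustrated with a minimal simulation sketch. This is not the authors' simulation code; the function name, phase lengths, and parameter values below are hypothetical, chosen only to show the two data-generating mechanisms:

```python
import numpy as np

def simulate_sced(n_a=10, n_b=10, intercept_shift=2.0, slope=0.0,
                  phi=0.0, sigma=1.0, seed=0):
    """Simulate a two-phase (AB) single-case series.

    Hypothetical data-generating process: a level shift at the
    change point, an optional linear slope, and AR(1) errors with
    lag-1 autocorrelation phi.
    """
    rng = np.random.default_rng(seed)
    n = n_a + n_b
    t = np.arange(n)
    phase = (t >= n_a).astype(float)        # 0 in baseline, 1 in treatment
    mean = intercept_shift * phase + slope * t
    # AR(1) errors: e[i] = phi * e[i-1] + white noise
    e = np.zeros(n)
    e[0] = rng.normal(0.0, sigma)
    for i in range(1, n):
        e[i] = phi * e[i - 1] + rng.normal(0.0, sigma)
    return mean + e

# With only ~20 observations, a series with a true slope and no
# autocorrelation can resemble one with autocorrelation and no slope:
y_slope = simulate_sced(slope=0.3, phi=0.0, seed=1)
y_ar = simulate_sced(slope=0.0, phi=0.8, seed=1)
```

Fitting the four candidate models (IO, SI, NS, SA) to such series and checking whether the credible intervals for slope and autocorrelation cover their true values is the kind of coverage analysis the study reports.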

Citing Articles

Does the choice of a linear trend-assessment technique matter in the context of single-case data?

Manolov R. Behav Res Methods. 2023; 55(8):4200-4221.

PMID: 36622560 DOI: 10.3758/s13428-022-02013-0.


Power analysis for single-case designs: Computations for (AB) designs.

Hedges L, Shadish W, Natesan Batley P. Behav Res Methods. 2022; 55(7):3494-3503.

PMID: 36223007 DOI: 10.3758/s13428-022-01971-9.


Defining and assessing immediacy in single-case experimental designs.

Manolov R, Onghena P. J Exp Anal Behav. 2022; 118(3):462-492.

PMID: 36106573 PMC: 9825864. DOI: 10.1002/jeab.799.


Comparing the Bayesian Unknown Change-Point Model and Simulation Modeling Analysis to Analyze Single Case Experimental Designs.

Natesan Batley P, Nandakumar R, Palka J, Shrestha P. Front Psychol. 2021; 11:617047.

PMID: 33519641 PMC: 7843386. DOI: 10.3389/fpsyg.2020.617047.
