Controlling Type I Errors in Moderated Multiple Regression: An Application of Item Response Theory for Applied Psychological Research

Morse, Brendan J.

Abstract Details

2009, Doctor of Philosophy (PhD), Ohio University, Industrial/Organizational Psychology (Arts and Sciences).
Applied psychologists have long recognized the importance of measurement as a key component of research quality, but the use of psychometrically sound measurement practices has not kept pace. Recent evidence suggests that weak measurement practices can have serious implications for the accuracy of parametric statistics. Two simulation studies (Embretson, 1996; Kang & Waller, 2005) identified response score scaling and assessment appropriateness as strong influences on the Type I error rate for interaction effects in moderated statistical models when simple raw scores are used to operationalize a latent construct. However, using item response theory (IRT) models to rescale the raw data into estimated theta scores was found to mitigate these effects. The purpose of this dissertation was to use a Monte Carlo simulation to generalize these results to the polytomous data commonly found in applied psychological research. Consistent with the previous studies, inflated Type I error rates for the interaction effect in a moderated multiple regression model were observed when raw scores were used to operationalize a latent construct; in the most extreme cases, this inflation approached 85%. Also consistent with previous studies, psychometric factors had a greater impact on raw scores than on estimated theta scores, and assessment appropriateness was the most influential factor on the empirical Type I error rate. Inconsistent with previous studies, an inflated Type I error rate was also observed under some conditions for the estimated theta scores, suggesting that the graded response model (GRM) may not have provided a sufficiently equal-interval metric. Additionally, the expected interaction between assessment appropriateness and assessment fidelity was not significant. Overall, these results suggest that the IRT-derived scores were more robust to spurious interactions than simple raw scores but may still yield inflated Type I error rates under some conditions. The implications of these results are discussed from two perspectives: the performance of the GRM under the simulated conditions is emphasized for measurement researchers, and the usefulness of model-based measurement practices for improving research quality is emphasized for applied psychologists.
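For readers who want to see the logic of such a study concretely, the following Python sketch estimates the empirical Type I error rate of the interaction term when two latent traits are operationalized as raw sum scores of GRM-generated polytomous items. This is a minimal illustration only, not the dissertation's code: the sample size, item parameters, replication count, and the use of numpy/statsmodels are all assumptions made for demonstration.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2009)

    def grm_sum_score(theta, a, b):
        # Raw sum score for a scale of polytomous items under Samejima's graded
        # response model: P(X_ij >= k | theta_i) = logistic(a_j * (theta_i - b_jk)).
        n = theta.shape[0]
        score = np.zeros(n)
        for j in range(a.shape[0]):
            p_star = 1.0 / (1.0 + np.exp(-a[j] * (theta[:, None] - b[j][None, :])))
            cum = np.hstack([np.ones((n, 1)), p_star, np.zeros((n, 1))])
            probs = cum[:, :-1] - cum[:, 1:]          # P(X = k), k = 0..K-1
            u = rng.random(n)
            score += (u[:, None] > np.cumsum(probs, axis=1)).sum(axis=1)
        return score

    def one_replication(n=250, items=10, cats=5):
        theta1 = rng.standard_normal(n)
        theta2 = rng.standard_normal(n)
        # The true model contains NO interaction, so a significant product term
        # in the fitted regression is a Type I error.
        y = 0.5 * theta1 + 0.5 * theta2 + rng.standard_normal(n)
        a = rng.uniform(1.0, 2.0, items)
        # Thresholds centered above the trait mean to mimic an off-target (too hard)
        # assessment, i.e., poor assessment appropriateness; values are assumptions.
        b = np.sort(rng.normal(1.0, 0.5, (items, cats - 1)), axis=1)
        x1 = grm_sum_score(theta1, a, b)
        x2 = grm_sum_score(theta2, a, b)
        X = sm.add_constant(np.column_stack([x1, x2, x1 * x2]))
        return sm.OLS(y, X).fit().pvalues[3] < 0.05   # test of the product term

    reps = 500
    hits = sum(one_replication() for _ in range(reps))
    print(f"Empirical Type I error rate for the interaction: {hits / reps:.3f} (nominal .05)")

Replacing the raw sum scores x1 and x2 with estimated theta scores from a fitted GRM (e.g., EAP scoring in an IRT package) would correspond to the study's IRT-based condition.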
Rodger W. Griffeth, PhD (Committee Co-Chair)
George Johanson, EdD (Committee Co-Chair)
Jeffrey Vancouver, PhD (Committee Member)
Paula Popovich, PhD (Committee Member)
Victor Heh, PhD (Committee Member)
246 p.

Recommended Citations


  • APA Style (7th edition): Morse, B. J. (2009). Controlling Type I Errors in Moderated Multiple Regression: An Application of Item Response Theory for Applied Psychological Research [Doctoral dissertation, Ohio University]. OhioLINK Electronic Theses and Dissertations Center. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1247063796

  • MLA Style (8th edition): Morse, Brendan. Controlling Type I Errors in Moderated Multiple Regression: An Application of Item Response Theory for Applied Psychological Research. 2009. Ohio University, Doctoral dissertation. OhioLINK Electronic Theses and Dissertations Center, http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1247063796.

  • Chicago Manual of Style (17th edition): Morse, Brendan. "Controlling Type I Errors in Moderated Multiple Regression: An Application of Item Response Theory for Applied Psychological Research." Doctoral dissertation, Ohio University, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1247063796