Generalizability of statistical prediction from psychological assessment data: an investigation with the MMPI-2-RF


2019, Ph.D., Kent State University, College of Arts and Sciences / Department of Psychological Sciences.
In the present study, the author employed tools and principles from the domain of machine learning to investigate four questions related to the generalizability of statistical prediction in psychological assessment. First, to what extent do predictive methods common to psychology research and machine learning tend to produce generalizable predictions; that is, how well do calibrated prediction models actually predict new data points in new settings? Second, how well do the methods considered compare with one another with respect to prediction generalizability? Third, to what extent does a model benefit from incorporation of more or fewer predictors; in other words, how should we value parsimony in applied prediction? Fourth, what is the most effective way to select predictors for model inclusion when attempting to maximize generalizable predictive power in psychological assessment? To address these questions, the author developed numerous predictive models, using multiple prediction criteria, in a calibration sample drawn from an inpatient psychiatric population at a county hospital, then externally validated those models by applying them to one or two clinical samples drawn from other settings. Model generalizability was then evaluated based on prediction accuracy in the external validation samples. 
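The calibrate-then-externally-validate design described above can be sketched as follows. This is a minimal illustration using synthetic data and scikit-learn; none of the variables, sample sizes, or effect sizes come from the study itself, and the external sample is simulated as a distribution shift plus extra noise to mimic a new clinical setting.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

# Calibration sample (stands in for the inpatient psychiatric sample)
X_cal = rng.normal(size=(200, 10))
y_cal = X_cal[:, 0] * 0.8 + rng.normal(scale=1.0, size=200)

# External validation sample from a different "setting":
# shifted predictor distribution and noisier criterion
X_ext = rng.normal(loc=0.5, size=(150, 10))
y_ext = X_ext[:, 0] * 0.8 + rng.normal(scale=1.5, size=150)

# Calibrate once, then apply the frozen model to the external sample
model = LinearRegression().fit(X_cal, y_cal)
r2_calibration = r2_score(y_cal, model.predict(X_cal))
r2_external = r2_score(y_ext, model.predict(X_ext))

# Performance shrinkage across settings
shrinkage = r2_calibration - r2_external
```

Evaluating the frozen model on data it never saw, from a setting it was not calibrated in, is what distinguishes external validation from ordinary cross-validation within a single sample.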
Noteworthy findings from the present study include 1) statistical models generally demonstrate observable performance shrinkage across settings regardless of modeling approach, though they may nevertheless retain non-negligible predictive power in new settings; 2) of the modeling approaches considered, regularized (penalized) regression methods appear to produce the most consistently robust predictions across settings; 3) models appear to produce more accurate predictions when allowed to incorporate more, rather than fewer, potentially important predictors, indicating parsimony may be over-valued in psychology research; and 4) multivariate models whose predictors were selected automatically (e.g., through stepwise-type procedures) tended to perform relatively well, often producing significantly more generalizable predictions than models whose predictors were selected based on theory.
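Finding 2 can be illustrated with a hedged sketch: when a small calibration sample carries many predictors, an unpenalized least-squares model overfits and its out-of-sample accuracy collapses, while a ridge penalty shrinks coefficients and preserves more external predictive power. The data are synthetic and the penalty strength is chosen arbitrarily for illustration; this is not the study's actual model.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
n_cal, n_ext, p = 50, 200, 40       # small calibration sample, many predictors

X_cal = rng.normal(size=(n_cal, p))
X_ext = rng.normal(size=(n_ext, p))
beta = np.zeros(p)
beta[:5] = 0.5                      # only 5 predictors carry real signal
y_cal = X_cal @ beta + rng.normal(size=n_cal)
y_ext = X_ext @ beta + rng.normal(size=n_ext)

ols = LinearRegression().fit(X_cal, y_cal)
ridge = Ridge(alpha=10.0).fit(X_cal, y_cal)

# External (new-setting) accuracy of each model
r2_ols = r2_score(y_ext, ols.predict(X_ext))
r2_ridge = r2_score(y_ext, ridge.predict(X_ext))
```

The penalty trades a small amount of calibration-sample fit for coefficient stability, which is precisely the property that pays off when the model is carried into a new setting.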
Yossef Ben-Porath, Ph.D. (Committee Chair)
Mary Beth Spitznagel, Ph.D. (Committee Member)
Manfred van Dulmen, Ph.D. (Committee Member)
Richard Serpe, Ph.D. (Committee Member)
Michael Ensley, Ph.D. (Committee Member)
184 p.

Recommended Citations

    APA Style (7th edition)

  • Menton, W. (2019). Generalizability of statistical prediction from psychological assessment data: an investigation with the MMPI-2-RF [Doctoral dissertation, Kent State University]. OhioLINK Electronic Theses and Dissertations Center. http://rave.ohiolink.edu/etdc/view?acc_num=kent1563189576504633

    MLA Style (8th edition)

  • Menton, William. Generalizability of statistical prediction from psychological assessment data: an investigation with the MMPI-2-RF. 2019. Kent State University, Doctoral dissertation. OhioLINK Electronic Theses and Dissertations Center, http://rave.ohiolink.edu/etdc/view?acc_num=kent1563189576504633.

    Chicago Manual of Style (17th edition)

  • Menton, William. "Generalizability of statistical prediction from psychological assessment data: an investigation with the MMPI-2-RF." Doctoral dissertation, Kent State University, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=kent1563189576504633