The CRMDA hosts a number of research workgroups focusing on methodology questions. These groups typically consist of 5 to 15 undergraduate, graduate, postdoctoral, and faculty researchers. The goal of each group is to produce leading research on methodological questions relevant to the social and behavioral sciences. Groups meet weekly to discuss current research, brainstorm research ideas and methods, and interpret results.
|CRMDA Work Groups: Fall 2012|
Intraindividual Psychological Dynamics (iPsychD)
(Pascal Deboeck) Meets every other Monday from 9:00 a.m. to 11:00 a.m., 451 Fraser; the first meeting will be Monday, January 28th
This workgroup focuses on novel methodology for the analysis of repeated, intraindividual measurements, particularly in cases where change is poorly described by monotonic trends. While the nature of the workgroup is expected to change from semester to semester, the overarching goals are (1) expansion of our methodological skill set and (2) application of methodology to data. This semester we will focus primarily on the first goal through a reading of "Nonlinear Dynamical Systems Analysis for the Behavioral Sciences Using Real Data."
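As a minimal illustration of the kind of change the group studies, the sketch below (hypothetical Python with arbitrary parameter values, not the group's own code) simulates a damped linear oscillator, a standard example of intraindividual dynamics that no monotonic trend describes well:

```python
def damped_oscillator(x0=2.0, v0=0.0, eta=-0.3, zeta=-0.5, dt=0.1, steps=300):
    """Simulate the damped linear oscillator d2x/dt2 = eta*x + zeta*dx/dt
    with simple Euler steps. With eta < 0 and zeta < 0 the trajectory
    oscillates while its amplitude decays toward equilibrium."""
    x, v = x0, v0
    path = []
    for _ in range(steps):
        a = eta * x + zeta * v        # acceleration implied by the model
        x, v = x + v * dt, v + a * dt  # Euler update of position and velocity
        path.append(x)
    return path

# The simulated series repeatedly changes direction as it settles,
# so neither a linear nor any other monotonic trend captures it.
path = damped_oscillator()
```

The point of the sketch is only the shape of the data: repeated direction changes with decaying amplitude, which motivates dynamical-systems methods over trend models.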
|Aaron Boulton||David Chu||Pascal Deboeck||Scott Drotar|
|Caleb Gardner||Fei Gu||William Kennedy||Sunthud Pornprasertmanit|
This group is new and more information will be added as it becomes available.
Machine Learning and Bayesian Belief Networks
(David K. Johnson) Meeting Times: May 7, 2013 at 2:30 p.m. in Watson 455 and May 21, 2013 at 2:30 p.m. in Watson 475
Bayesian Belief Networks (BBNs) are novel analytic methods for integrating and testing hypotheses using (1) expert knowledge, (2) rational hypothesis testing based on clinical expertise, (3) supervised machine learning to discover complex interrelationships that are not intuitively comprehended, and (4) structured reasoning and inference in datasets where there is expert domain knowledge about a complicated process but also a high degree of uncertainty about how outcome variables may be related. Bayesian inference provides a principled way of combining empirical evidence recovered from the data with prior beliefs. BBNs explicitly specify a set of interrelationships among clinical outcomes, and through a well-understood iterative data sampling process they ‘learn’ from the data whether those prior beliefs (hypotheses) are supported or rejected. Relevance-based reasoning (a logical decision process developed in artificial intelligence research) informs the expert whether his or her beliefs are supported (i.e., succeed or fail to reach a minimum threshold for inclusion in the network). BBNs are grounded in a mathematical formalism for defining complex probability models and causal inference that has revolutionized the fields of artificial intelligence, engineering, and computational bioscience.
- Develop a longitudinal Bayesian Belief Network in a large sample of autopsy-confirmed patients with Alzheimer’s disease to obtain an idealized baseline of AD-related cognitive decline, establish a prototypical network of cognitive outcomes, and investigate how that network changes over time.
- Cross-validate an existing BBN in 3 independent datasets (an autopsy-confirmed dataset and 2 large clinical datasets), then extend the BBN developed in neurocognitive data to include MRI, PET, and CSF biomarkers from the Knight ADRC (KADRC) database.
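The core Bayesian idea of combining prior beliefs with data can be sketched in a minimal, purely illustrative form. The conjugate Beta-Binomial update below is hypothetical Python, not the group's BBN software; every prior and count here is invented for illustration:

```python
def beta_binomial_update(alpha, beta, successes, failures):
    """Conjugate Bayesian update: a Beta(alpha, beta) prior combined
    with binomial data yields a Beta posterior with updated counts."""
    return alpha + successes, beta + failures

# Hypothetical prior belief: an expert thinks an outcome occurs ~70%
# of the time; Beta(7, 3) encodes that belief with modest confidence.
prior_a, prior_b = 7, 3
prior_mean = prior_a / (prior_a + prior_b)   # 0.70

# Hypothetical data contradicting the prior: 4 occurrences in 20 cases.
post_a, post_b = beta_binomial_update(prior_a, prior_b, successes=4, failures=16)
post_mean = post_a / (post_a + post_b)       # pulled toward the observed rate

print(prior_mean, post_mean)
```

The posterior mean falls between the expert's 0.70 and the observed rate of 0.20, illustrating how the data can fail to support a prior belief: the same logic, scaled to a network of interrelated outcomes, is what a BBN automates.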
|David Chu||Mauricio Garnier-Villarreal||Jared Harpole||David K. Johnson|
Meta-Analysis and Research Synthesis
(Adam Hafdahl) Meets on Wednesday from 10:00 a.m. to 12:30 p.m., 475 Watson
First, we aim to understand fundamental and advanced methodological concepts, procedures, and issues in both meta-analysis and the broader endeavor of research synthesis. Our second aim is to advance methodological knowledge in meta-analysis and research synthesis and promote responsible use of these methods by developing novel techniques, evaluating extant approaches and applications thereof, and disseminating resources for practicing meta-analysts (e.g., reviews, didactic papers, software).
|Eugene Botanov||Noel Card||Ian Carroll||Adam Hafdahl|
|Neal Kingston||Alex Schoemann||Steve Short|
Members of this new group are progressing on the following projects:
1. review meta-analytic procedures for multivariate and other dependent effect sizes, with a didactic emphasis that covers software implementation
2. investigate how psychologists view and use meta-analysis and how this relates to prior experience with relevant methodology
3. propose and assess meta-analytic techniques for mediation and other indirect effects
4. develop methods to express random-effects meta-analytic results in other metrics (e.g., path models from correlation matrices)
5. develop meta-analytic techniques for degraded estimates of effect size, such as from reports of significance tests
6. refine techniques for multivariate random-effects meta-analysis, especially with incomplete estimates or study-level covariates
Missing Data Workgroup
(Wei Wu & Todd Little) Meets on Friday from 1:00 p.m. to 2:00 p.m., 455 Watson
The missing data workgroup aims to foster interest in missing data analysis and in designs that accommodate missing data, to teach advanced missing data techniques and the software packages that implement them, to build critical skills in conducting simulation studies of missing data analysis, and to develop new methods that improve the current practice of missing data analysis.
The missing data workgroup is currently working on a variety of projects, including:
- Planned missing data designs in longitudinal and clustered data
- Model fit evaluation with missing data
- Quantifying the fraction of missing information
- Multiple imputation with large-scale data
- Multiple imputation with clustered data
- Comparison of different multiple imputation algorithms
- Exploratory factor analysis with missing data
- Full information maximum likelihood versus multiple imputation: when each works and which is better
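As a purely illustrative sketch of the kind of simulation study the group runs (hypothetical Python, not the group's R or SAS templates), the toy Monte Carlo below shows the basic tradeoff behind planned missing designs: under data missing completely at random (MCAR), complete-case estimates of a mean remain unbiased but lose precision:

```python
import random
import statistics

random.seed(1)

def simulate_once(n=200, miss_rate=0.3):
    """Draw one sample, impose missingness completely at random (MCAR),
    and return the full-sample and complete-case means."""
    full = [random.gauss(0, 1) for _ in range(n)]
    observed = [x for x in full if random.random() > miss_rate]
    return statistics.mean(full), statistics.mean(observed)

reps = 2000
full_means, cc_means = zip(*(simulate_once() for _ in range(reps)))

# Across replications the complete-case means center on the true value
# (no bias) but spread more widely (lost efficiency).
print(statistics.mean(cc_means))
print(statistics.stdev(full_means), statistics.stdev(cc_means))
```

Planned missing designs exploit exactly this: missingness imposed by design is MCAR, so the cost is efficiency rather than bias, and that cost can be budgeted in advance.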
|Kelly Crowe||Chantelle Dowsett||Mauricio Garnier-Villarreal||Beth Grandfield|
|Catherine Gunsalus||Fan Jia||Terry Jorgensen||Richard Kinai|
|Kyle Lang||Todd Little||Luke McCune||Brent McPherson|
|Whitney Moore||Youngha Oh||Pavel Panko||Jay Patel|
|Sunthud Pornprasertmanit||Mijke Rhemtulla||Graham Rifenbark||Leslie Shaw|
|Alex Schoemann||Wei Wu|
- We recently received $400,000 in funding from the National Science Foundation (NSF) for a four-year project (2011-2015) on planned missing data designs for longitudinal data.*
- We organized a missing data analysis symposium at APA 2011, in which we gave two presentations.
- We give regular seminars on missing data analysis.
- We have created R and SAS code templates for simulation studies in missing data analysis.
- Rhemtulla, M., & Little, T. D. (2012). Planned missing data designs for research in cognitive development. Journal of Cognition and Development, 13, 425-438.
- Savalei, V., & Rhemtulla, M. (2012). Teacher’s Corner: On obtaining estimates of the fraction of missing information from FIML. Structural Equation Modeling, 19, 477-494.
- Rhemtulla, M., & Little, T. D. (in press). Planned missing data designs for longitudinal organizational research. In M. Raukko & E. Paavilainen-Mäntymäki (Eds.), Handbook of Longitudinal Research Methods in Studies of Organizations.
- Rhemtulla, M., & Savalei, V. (2012, May). Power in planned missingness designs. Paper presented at Modern Modeling Methods (M3) Conference, Storrs, CT.
- Rhemtulla, M., Little, T. D., Moore, W. G., Gibson, K., & Wu, W. (2012, March). Planned missing designs for longitudinal research. Poster presented at the Biennial Meeting of the Society for Research on Adolescence, Vancouver, Canada.
- Rhemtulla, M. (2011, August). Missing data mechanisms and techniques. Talk given at the 2011 meeting of the American Psychological Association, Washington, DC.
- In preparation
- Garnier-Villarreal, M., Rhemtulla, M., & Little, T. D. (2012). Two-method planned missing designs for longitudinal research. [Manuscript in preparation].
- Howard, W. J., Little, T. D., & Rhemtulla, M. (2012). Using principal component analysis (PCA) to obtain auxiliary variables for missing data estimation in large data sets. [Manuscript in preparation].
- Jia, F., Moore, E. W. G., Kinai, R., Crowe, K. S., Schoemann, A. M., & Little, T. D. (2012). Planned missing data design on small sample size: How small is too small? [Manuscript in preparation].
- Jorgensen, T. D., Schoemann, A. M., McPherson, B., Rhemtulla, M., Wu, W., & Little, T. D. (2012). Assignment methods in three-form planned missing designs. [Manuscript in preparation].
- Rhemtulla, M., Jia, F., Wu, W., & Little, T. D. Planned missing designs to optimize the efficiency of latent growth parameter estimates. To be submitted to the International Journal of Behavioral Development.
- Schoemann, A. M., Miller, P. R., & Pornprasertmanit, S. (2012). Using Monte Carlo simulations to determine power and sample size for planned missing designs. [Manuscript in preparation].
*This material is based upon work supported by the National Science Foundation under Grant No. 1053160. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
Model Evaluation Workgroup
(Wei Wu & Todd Little) Meets on Friday from 12:00 p.m. to 1:00 p.m., 455 Watson
The model evaluation workgroup aims to develop new and evaluate existing methods of model fit estimation for latent variable models. We also aim to develop an R package that helps applied researchers conduct Monte Carlo simulation studies for model evaluation or other purposes. Current projects include:
- Design the simsem package for R. The goal of this package is to help researchers run a simulation study in structural equation modeling for different purposes, such as model fit evaluation and power analysis.
- Assess/develop model fit criteria that account for trivial model misspecification in absolute model fit and model selection, with an emphasis on testing for measurement invariance.
- Develop new methods for model fit evaluation that account for the drawbacks of existing methods based on different estimation approaches (e.g., maximum likelihood, bootstrap, weighted least squares).
- Evaluate newly proposed methods for model fit evaluation, such as the posterior predictive model checking approach, for various model types, including (a) confirmatory factor analysis, (b) structural equation modeling, (c) growth curve modeling, and (d) mixture modeling.
- Evaluate the robustness of current model evaluation approaches in various contexts, such as nonnormality, missing data, and correlated errors.
- Develop guidelines to define trivial model misspecification for structural equation models.
- Develop new methods to quantify model parsimony.
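The Monte Carlo logic behind several of these projects can be sketched in miniature. The Python below is a hypothetical toy, not the group's procedure: the "fit statistic" is just a mean discrepancy and the model a simple normal, but the pattern is the same one used for SEM fit evaluation: simulate from the fitted model, build a reference distribution for the statistic, and compare the observed value against it:

```python
import random
import statistics

random.seed(2)

def parametric_bootstrap_p(data, model_mu, model_sd, reps=1000):
    """Monte Carlo reference distribution for a fit statistic:
    simulate datasets from the fitted model and count how often the
    simulated statistic is at least as extreme as the observed one."""
    n = len(data)
    observed_stat = abs(statistics.mean(data) - model_mu)
    exceed = 0
    for _ in range(reps):
        sim = [random.gauss(model_mu, model_sd) for _ in range(n)]
        if abs(statistics.mean(sim) - model_mu) >= observed_stat:
            exceed += 1
    return exceed / reps

# Hypothetical data consistent with the hypothesized model N(0, 1)...
good = [random.gauss(0.0, 1.0) for _ in range(100)]
# ...and data whose mean clearly departs from that model.
bad = [random.gauss(0.8, 1.0) for _ in range(100)]

p_good = parametric_bootstrap_p(good, 0.0, 1.0)   # large: retain the model
p_bad = parametric_bootstrap_p(bad, 0.0, 1.0)     # small: reject the model
print(p_good, p_bad)
```

The appeal of this approach is that the reference distribution reflects the actual model, sample size, and data conditions rather than a fixed cutoff, which is the motivation for Monte Carlo alternatives to conventional fit indices.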
|Aaron Boulton||Mauricio Garnier-Villarreal||Beth Grandfield||Jared Harpole|
|Robert Hughes||Terrence Jorgensen||Jason Lee||Todd Little|
|Pavel Panko||Sunthud Pornprasertmanit||Corbin Quick||Alex Schoemann|
|Leslie Shaw||Wei Wu|
- Pornprasertmanit, S., Wu, W., & Little, T. D. (in press). Taking into account sampling variability of model selection indices: A parametric bootstrap approach. Multivariate Behavioral Research, XX, XX-XX (Abstract).
- Pornprasertmanit, S., Wu, W., & Little, T. D. (in press). Using a Monte Carlo approach for nested model comparisons in structural equation modeling. Springer Proceedings in Mathematics & Statistics, XX, XX-XX.
- Jorgensen, T. D. (2012, March). Simulation study of posterior predictive p values in a Bayesian confirmatory factor analysis. Poster session presented at the Graduate Research Competition, University of Kansas, Lawrence, KS.
- Boulton, A., Pornprasertmanit, S., Jorgensen, T. D., & Hughes, R. (2012, May). Recent advances in model evaluation. Presented at the University of Kansas Quantitative Psychology Weekly Colloquium series, Lawrence, KS.
- Boulton, A., & Preacher, K. J. (2012, May). Level-specific fit index sensitivity in multilevel structural equation modeling. Paper presented at the Modern Modeling Methods Conference, Storrs, CT.
- Jorgensen, T. D. (2012, May). Simulation study of posterior predictive p values in a Bayesian confirmatory factor analysis. Paper presented at the Modern Modeling Methods Conference, Storrs, CT.
- Schoemann, A., Little, T. D., Rhemtulla, M., & Pornprasertmanit, S. (2012, May). From discrete conditions to continuous factors: Rethinking methodological simulations. Paper presented at the Modern Modeling Methods Conference, Storrs, CT.
- Pornprasertmanit, S., Wu, W., & Little, T. D. (2012, May). On a Monte Carlo Approach for Model Fit Evaluation in SEM. Poster presented at the APS Annual Convention, Chicago, IL.
- Pornprasertmanit, S., Wu, W., & Little, T. D. (2012, July). Using a Monte Carlo approach for nested model comparisons in structural equation modeling. Paper presented at the 77th International Meeting of the Psychometric Society, Lincoln, NE.
- Schoemann, A., Pornprasertmanit, S., & Miller, P. J. (2012, July). simsem: Simulated structural equation modeling in R. Paper presented at the 77th International Meeting of the Psychometric Society, Lincoln, NE.
- Pornprasertmanit, S., Wu, W., & Little, T. D. (2012, October). Taking into account sampling variability of model selection indices: A parametric bootstrap approach. Poster presented at the Society of Multivariate Experimental Psychology annual pre-conference, Vancouver, British Columbia, Canada.
- Little, T. D., Schoemann, A. M., Pornprasertmanit, S., & Miller, P. (2012, October). simsem: SIMulated Structural Equation Modeling in R. Paper presented at the annual meeting of the Society of Multivariate Experimental Psychology, Vancouver, Canada.
- Jorgensen, T. D., Garnier-Villarreal, M., Lee, J., Pornprasertmanit, S., Hughes, R., & Little, T. D. A Monte Carlo simulation study of posterior predictive model checking in structural equation models. [Manuscript in preparation].
- Jorgensen, T. D., Garnier-Villarreal, M., Pornprasertmanit, S., & Little, T. D. Revisiting Hu and Bentler (1999): Taking into account a model's error of approximation. [Manuscript in preparation].
Neuroimaging Data Analysis (NiDA)
(David K. Johnson) Meeting times to be announced soon.
|Skyler Adams||Ruth Ann Atchley||Clinton Bell||Brad Fasbinder|
|Omri Gillath||David Johnson||Brent McPherson||Youngha Oh|
Psychometrics and Categorical Research Group
(Carol Woods & Billy Skorupski) Meets on Tuesday from 1:00 p.m. to 2:30 p.m., 455 Watson
This group focuses on methods in the domains of item response theory, nonparametric density estimation, differential item functioning (DIF), and ordinal data analysis. Specific current projects pertain to kernel density estimation, multiple indicator multiple cause (MIMIC) interaction models for testing nonuniform DIF, model fit in IRT, and model fit for the proportional odds version of the cumulative logit regression model with continuous predictors.
|Skyler Adams||Ian Carroll||Hsiang-Feng (Melody) Chen||Cameron Clyne|
|Jared Harpole||Suk Keun Im||Terry Jorgensen||Linette McJunkin|
|Melinda Montgomery||Corbin Quick||Graham Rifenbark||Billy Skorupski|
|Carol Woods||Mian Wang|
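The kernel density estimation at the center of several of the group's projects rests on a simple idea that can be sketched minimally. The Python below is illustrative only, with an invented sample and an arbitrary bandwidth; it is not the group's research code:

```python
import math

def gaussian_kde(data, x, bandwidth):
    """Evaluate a Gaussian kernel density estimate at x: the average
    of normal densities centered on each data point."""
    n = len(data)
    norm = 1.0 / (n * bandwidth * math.sqrt(2.0 * math.pi))
    return norm * sum(
        math.exp(-0.5 * ((x - xi) / bandwidth) ** 2) for xi in data
    )

# Hypothetical sample; how the bandwidth should be chosen is exactly
# what bandwidth selection research evaluates (too small: jagged;
# too large: oversmoothed).
sample = [-1.2, -0.4, 0.0, 0.3, 1.1]
near = gaussian_kde(sample, 0.0, bandwidth=0.5)  # in the bulk of the data
far = gaussian_kde(sample, 5.0, bandwidth=0.5)   # far out in the tail
print(near, far)
```

The estimate integrates to 1 by construction, so the only real modeling decision is the bandwidth, which is why selection algorithms matter so much in practice.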
- Woods, C. M., Cai, L., & Wang, M. (in press). The Langer-improved Wald test for DIF testing with multiple groups: Evaluation and comparison to two-group IRT. Educational and Psychological Measurement.
- Harpole, J. K., & Woods, C. M. (submitted). How bandwidth selection algorithms impact exploratory data analysis using kernel density estimation. Manuscript submitted for publication.
- Woods, C. M. (submitted). Comparing groups with logistic regression: Difficulties and solutions. Manuscript submitted for publication.
- Montgomery, M., & Skorupski, W. P. (2012, April). Investigation of IRT parameter recovery and classification accuracy in mixed format assessments. Paper presented at the Annual Meeting of the National Council on Measurement in Education, Vancouver, BC, Canada.
- Carroll, I., & Woods, C. M. (2012, July). MIMIC DIF testing when the latent variable variance differs between groups. Presented at the International Meeting of the Psychometric Society, Lincoln, NE.
- Harpole, J. K., & Woods, C. M. (2012, July). Evaluation of bandwidth selection methods using kernel density estimation. Presented at the International Meeting of the Psychometric Society, Lincoln, NE.
- Md Desa, Z. D., Skorupski, W. P., & Johnson, P. E. (2012, July). Bifactor Compensatory and Partially Compensatory Multidimensional Item Response Theory for Subscores Estimation, Reliability, and Validity. Paper presented at the International Meeting of the Psychometric Society, Lincoln, NE.
- Rifenbark, G., & Woods, C. M. (2012, July). Evaluation of alternative measures for assessing goodness of fit within the cumulative logit proportional odds model. Poster presented at the International Meeting of the Psychometric Society, Lincoln, NE.
- Woods, C. M., Cai, L., & Wang, M. (2012, July). Evaluation of the Langer-improved Wald test for DIF testing. Presented at the International Meeting of the Psychometric Society, Lincoln, NE.
- Skorupski, W. P. & Clyne, C. (2012, November). A Bayesian method for determining cutscores in Angoff standard setting. Paper presented at Teach Your Children Well: A Conference to Honor Ronald K. Hambleton, Amherst, MA.
(Paul Johnson) Meets on Friday from 11:00 a.m. to 12:00 p.m., 455 Watson
This group explores R programming and the development of packages of reusable code for the worldwide R network.
|Aaron Boulton||David Chu||Kelly Crowe||Patrick Edmonds|
|Caleb Gardner||Paul Johnson||Terrence Jorgensen||Brent McPherson|
|Pavel Panko||Sunthud Pornprasertmanit||Corbin Quick||Alex Schoemann|
The SVN archive is being migrated to GitHub at http://github.com/R-crmda, which houses R documentation, WorkingExamples, and some of our R packages. The packages rockchalk, simsem, and portableParallelSeeds have been released on CRAN (http://r-project.org). An affiliated R project, SemTools, is also currently hosted on GitHub.