My primary area of interest is computerized adaptive testing (CAT). CAT is the redesign of tests of ability, achievement, interests, personality, attitudes, preferences (or any other kind of psychological variable) for delivery by computers. It applies artificial intelligence and machine learning to the measurement of individual differences in psychological variables. In a CAT, test questions (or items) are selected dynamically by psychometric algorithms programmed into the computer, which identify the most efficient and effective set of items for measuring each individual. The result of applying CAT is the capability of measuring each individual to a predetermined level of precision, or classifying individuals with predetermined error rates, with a minimum number of items. Most contemporary CAT procedures are built on psychometric models drawn from item response theory (IRT). My interests therefore extend to methodological issues in IRT as they relate to CAT.
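
As a rough sketch of how such an algorithm operates, the following Python code illustrates one cycle of a CAT under a two-parameter logistic IRT model: the trait estimate is updated after each response (here with EAP scoring) and the next item administered is the unused one with maximum Fisher information at that estimate. The model, scoring method, function names, and item parameters are illustrative assumptions, not a description of any particular operational CAT.

    import numpy as np

    def p_correct(theta, a, b):
        # 2PL item response function
        return 1.0 / (1.0 + np.exp(-a * (theta - b)))

    def item_information(theta, a, b):
        # Fisher information of each item at trait level theta
        p = p_correct(theta, a, b)
        return a**2 * p * (1.0 - p)

    def eap_estimate(responses, a, b, grid=np.linspace(-4, 4, 81)):
        # EAP trait estimate and posterior SD (used as the error of measurement),
        # given the responses and parameters of the items administered so far
        prior = np.exp(-0.5 * grid**2)
        like = np.ones_like(grid)
        for u, ai, bi in zip(responses, a, b):
            p = p_correct(grid, ai, bi)
            like *= p**u * (1.0 - p)**(1 - u)
        post = prior * like
        post /= post.sum()
        theta = (grid * post).sum()
        se = np.sqrt(((grid - theta)**2 * post).sum())
        return theta, se

    def next_item(theta, a, b, administered):
        # choose the unadministered item with maximum information at theta
        info = item_information(theta, a, b)
        info[list(administered)] = -np.inf
        return int(np.argmax(info))

    # Hypothetical 4-item bank; the first two items were already answered (1 = correct).
    a = np.array([1.2, 0.8, 1.5, 1.0]); b = np.array([-0.5, 0.0, 0.3, 1.0])
    theta, se = eap_estimate([1, 0], a[:2], b[:2])
    print(theta, se, next_item(theta, a, b, administered={0, 1}))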

My major current area of research is the measurement of individual change. Although there has been considerable research on analyzing change at the group level, or on measuring change for an individual relative to a group, there has been virtually no research on measuring change for a single individual assessed at two or more occasions on two or more variables. My research focuses on developing and evaluating statistical tests, applicable to both CATs and conventional tests, for detecting psychometrically significant intraindividual change, and on applying them to real test data from single individuals measured multiple times on one or more variables.
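
A familiar building block for such tests is a z statistic that compares two IRT trait estimates against their standard errors of measurement. The sketch below shows that textbook form with hypothetical numbers; it is not the specific set of statistics developed in this research.

    from math import sqrt
    from scipy.stats import norm

    def change_z(theta1, se1, theta2, se2):
        # z statistic for the difference between two IRT trait estimates
        return (theta2 - theta1) / sqrt(se1**2 + se2**2)

    def significant_change(theta1, se1, theta2, se2, alpha=0.05):
        # two-tailed test of intraindividual change between two occasions
        z = change_z(theta1, se1, theta2, se2)
        p = 2.0 * (1.0 - norm.cdf(abs(z)))
        return z, p, p < alpha

    # Hypothetical example: the trait estimate moves from -0.20 (SE 0.30) to 0.55 (SE 0.28)
    print(significant_change(-0.20, 0.30, 0.55, 0.28))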

A second area of research is designed to solve a problem in CAT that has not had an adequate solution in over fifty years of CAT research. Although CAT has the capability of measuring each individual to a pre-specified degree of precision (operationalized as a small error of measurement), real CAT item banks frequently prevent some examinees from reaching that precision target because of deficiencies in the item bank. For those examinees, no number of items in the bank will allow the error-of-measurement termination criterion to be met. My students and I have applied a procedure called "stochastic termination," borrowed from medical research where it is used in clinical trials, to predict, as data accumulate during an examinee's test, the probability that the examinee will reach the designated termination criterion for the CAT. If the prediction indicates a high probability that the examinee cannot be measured to the desired level of precision, the test can be terminated, sparing the examinee from answering many test items that would add little to the measurement.
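
The flavor of the idea can be conveyed with a simplified, deterministic projection: if the examinee's standard error could not reach the target even if the most informative remaining items were all administered, the test is stopped. The actual procedure is probabilistic, so this Python sketch is only an analogy; the 2PL model, function names, and arguments are assumptions for illustration.

    import numpy as np

    def projected_min_se(theta_hat, a, b, administered, max_remaining):
        # Fisher information of every bank item at the current trait estimate (2PL)
        p = 1.0 / (1.0 + np.exp(-a * (theta_hat - b)))
        info = a**2 * p * (1.0 - p)
        current = info[list(administered)].sum()
        pool = np.delete(info, list(administered))
        best = np.sort(pool)[::-1][:max_remaining]   # most informative items left
        return 1.0 / np.sqrt(current + best.sum())

    def should_terminate_early(theta_hat, a, b, administered, max_remaining, se_target):
        # stop early if even the best-case projected SE cannot reach the target
        return projected_min_se(theta_hat, a, b, administered, max_remaining) > se_target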

A third current area of research is on improving predictions from IRT-scored tests by using the observed error-of-measurement values that result from IRT scoring. This research examines improvements in predictive validity obtained by using IRT errors of measurement, in conjunction with examinees' IRT trait scores, to predict external criteria. The errors of measurement are used as both suppressor and moderator variables in prediction equations. Results from one dataset with multiple predictors and multiple criterion variables show improvements in predictive validity attributable to both suppressor and moderator effects in multiple regression. This research is being extended to additional real datasets to determine the generality of the findings.
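
The sketch below illustrates, with simulated data, how an error of measurement can enter a prediction equation both as an added (suppressor) term and as an interaction (moderator) term. The variable names and generating values are hypothetical; the R-squared comparisons it prints are not results from the research described above.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 1000
    theta_true = rng.normal(size=n)                    # latent trait levels
    se = rng.uniform(0.2, 0.8, size=n)                 # IRT errors of measurement
    theta_hat = theta_true + se * rng.normal(size=n)   # error-laden IRT trait scores
    criterion = 0.7 * theta_true + rng.normal(scale=0.5, size=n)   # external criterion

    def r_squared(predictors, y):
        # OLS R-squared for the given predictor columns plus an intercept
        X = np.column_stack([np.ones(len(y)), predictors])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return 1.0 - (y - X @ beta).var() / y.var()

    r2_trait = r_squared(theta_hat, criterion)                                        # trait score only
    r2_suppr = r_squared(np.column_stack([theta_hat, se]), criterion)                 # SE entered as a suppressor term
    r2_moder = r_squared(np.column_stack([theta_hat, se, theta_hat * se]), criterion) # SE as moderator (interaction)
    print(r2_trait, r2_suppr, r2_moder)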

Educational Background & Specialties

Educational Background

  • B.A.: Psychology, University of Pennsylvania, 1959
  • Ph.D.: Psychology, University of Minnesota, 1963

Specialties

  • computerized adaptive testing
  • psychometric methods
  • item response theory