Predictive Validity

Predictive validity is a subtype of criterion-related validity that refers to the degree to which scores from a psychological instrument can predict a criterion measured in the future.

Predictive validity is evaluated by examining the relationship between scores on the test (the predictor) and scores on a criterion measure collected at a later time.

The strength of predictive validity is often measured by the correlation between test scores and criterion scores, with higher correlations indicating stronger predictive validity.

For example, a correlation coefficient of 0.60 would indicate a stronger predictive relationship than a correlation of 0.30.

Predictive validity studies are often conducted in situations where test scores are used for decision-making, such as predicting success on the job or in educational settings.

For example, scores on a college admissions test (the predictor) are used to forecast later academic success in college (the criterion).

Why is predictive validity important?

Predictive validity is important because it can provide evidence that a test is useful for its intended purpose.

For example, if a college admissions test has strong predictive validity, it can be used to help identify applicants who are most likely to succeed in college.

This can help colleges make more informed admissions decisions and can potentially save students time and money by helping them avoid enrolling in programs for which they are not well-suited.

Predictive validity is particularly important in areas such as employment selection, clinical diagnosis, and educational placement, where test scores are used to make decisions that have significant consequences for individuals.

Predictive vs concurrent validity

Predictive validity and concurrent validity are both subtypes of criterion-related validity, which refers to the ability of a psychological instrument to predict an external variable (called a criterion) that, in theory, the instrument should be able to predict.

The key difference between the two subtypes lies in the temporal relationship between the test administration and the measurement of the criterion.

  • Predictive validity: Examines the extent to which a test can predict a criterion that is measured in the future. In essence, it’s about forecasting future outcomes. For instance, a college admissions test (the predictor) is administered to predict how well a student will perform academically in their freshman year (the criterion).
  • Concurrent validity: Examines the relationship between a test and a criterion measured at the same time. This type of validity helps understand how well a new test might correspond to an existing one or to a different type of assessment.

How to measure predictive validity

Measuring predictive validity involves carefully selecting a relevant criterion, collecting data on both the predictor and the criterion, calculating the correlation coefficient, and interpreting the results in the context of the study and its purpose.

It is important to recognize that predictive validity is just one aspect of test validity and should be considered alongside other types of validity evidence, especially construct validity.

1. Identify a Relevant Criterion:

The first step is to identify a criterion that is meaningful and relevant to the purpose of the test.

For instance, if a test is designed to predict job success, then the criterion might be supervisor ratings of job performance or objective measures of productivity.

It is important to select a criterion that is reliable and can be measured accurately. In some cases, the ideal criterion may be too far in the future or too difficult to measure, so researchers may need to use a proxy measure.

2. Administer the Predictor Test:

Once a suitable criterion has been identified, the next step is to administer the predictor test to a sample of individuals.

It’s important to ensure that the test is administered under standardized conditions to minimize the influence of extraneous variables.

3. Collect Criterion Data:

After a suitable time interval, collect data on the chosen criterion for the same sample of individuals.

The length of the time interval will depend on the nature of the criterion and the purpose of the study.

For example, if the criterion is job performance, data might be collected after six months or a year on the job.

4. Calculate the Correlation Coefficient:

The next step is to calculate the correlation coefficient between the predictor test scores and the criterion scores.

This statistic provides a quantitative measure of the strength and direction of the relationship between the two variables.

Larger correlations (in absolute value) indicate stronger predictive validity.
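To make step 4 concrete, here is a minimal sketch using entirely made-up data: ten hypothetical students' admissions test scores (the predictor) and their later freshman GPAs (the criterion). The Pearson correlation between the two columns is the validity coefficient.

```python
import numpy as np

# Hypothetical data: admissions test scores (predictor) and
# first-year GPA (criterion) for the same ten students.
test_scores = np.array([480, 520, 550, 600, 610, 640, 660, 700, 720, 750])
freshman_gpa = np.array([2.1, 2.4, 2.3, 2.9, 3.0, 2.8, 3.2, 3.4, 3.3, 3.8])

# Pearson r: covariance divided by the product of standard deviations.
# np.corrcoef returns the 2x2 correlation matrix; [0, 1] is r.
r = np.corrcoef(test_scores, freshman_gpa)[0, 1]
print(f"validity coefficient r = {r:.2f}")
```

In a real validation study the sample would of course be far larger, and the scores would come from actual test administrations rather than invented numbers.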

5. Interpret the Correlation in Context:

It is crucial to interpret the correlation coefficient in the context of the specific study and its purpose.

Researchers should be cautious about assuming that a test that predicts a criterion in one context will necessarily do so in another context.

Consider factors that may have influenced the correlation, such as sample characteristics, the reliability of the measures, and any restrictions of range in either the predictor or criterion variable.

For example, factors such as differences in job requirements, changes in those requirements over time, and variations in the applicant pool can all affect the predictive validity of a test in different situations.

Additionally, think about the practical implications of the findings.

For instance, even a statistically significant correlation may not be practically significant if the magnitude of the effect is small or if the cost of using the test outweighs the potential benefits.
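The gap between statistical and practical significance can be illustrated with a simulation (all numbers here are invented for demonstration): with a large enough sample, even a predictor that explains about 1% of the criterion's variance yields a very small p-value.

```python
import numpy as np
from scipy.stats import pearsonr

# Simulated example: a weak true relationship (r around 0.10)
# measured in a large sample.
rng = np.random.default_rng(0)
n = 5000
predictor = rng.normal(size=n)
# Criterion shares only ~1% of its variance with the predictor.
criterion = 0.10 * predictor + rng.normal(size=n)

r, p = pearsonr(predictor, criterion)
print(f"r = {r:.3f}, p = {p:.4g}")
print(f"variance explained: {r**2:.1%}")
```

The correlation is statistically significant, yet a test with this little predictive power may not justify its cost in a selection setting.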

Challenges in establishing predictive validity

  • Selecting an appropriate criterion: Identifying and measuring the ideal criterion can be difficult. For example, the ideal criterion may be too far in the future or too complex to measure accurately. Researchers may need to rely on proxy measures that may not perfectly capture the construct of interest.
  • Range restriction: When the sample of individuals used in a predictive validity study does not represent the full range of scores on the predictor or criterion variables, the correlation coefficient can be artificially reduced, underestimating the true predictive validity.
  • Sample size: Establishing predictive validity requires an adequate sample size to ensure the reliability of the findings. Smaller sample sizes can result in unstable correlation coefficients and reduce the statistical power of the study.
  • Time and cost: Conducting longitudinal predictive validity studies, where the criterion is measured at a later point in time, can be time-consuming and expensive. Researchers may face challenges in tracking participants over time and collecting complete data on the criterion measure.
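The range-restriction problem listed above is easy to demonstrate with simulated data: if we correlate test and criterion in a full applicant pool and then only among the top scorers who were actually selected, the restricted correlation shrinks even though the underlying relationship is unchanged.

```python
import numpy as np

# Simulated applicant pool with a built-in test-criterion relationship.
rng = np.random.default_rng(42)
n = 10_000
test = rng.normal(size=n)
criterion = 0.5 * test + rng.normal(size=n)

# Correlation across the full range of applicants.
r_full = np.corrcoef(test, criterion)[0, 1]

# Keep only applicants scoring above the 80th percentile on the test,
# mimicking a selective hiring or admissions process.
selected = test > np.quantile(test, 0.80)
r_restricted = np.corrcoef(test[selected], criterion[selected])[0, 1]

print(f"full range:       r = {r_full:.2f}")
print(f"restricted range: r = {r_restricted:.2f}")
```

Because validation data are usually available only for the selected group, the observed coefficient understates the test's true predictive validity in the applicant population.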

Ethical considerations

The use of tests for prediction, particularly in high-stakes decision-making contexts like employment or education, raises ethical considerations.

It’s crucial to ensure that tests are used fairly and don’t disadvantage particular groups.

Considerations include potential biases in test content or administration that might unfairly impact different groups.

Additionally, the consequences of testing, such as potential discrimination or labeling based on test scores, should be carefully evaluated.

Responsible test use involves minimizing negative consequences and ensuring that test scores are interpreted and used ethically and appropriately.

Reading List

Barrett, G. V., Phillips, J. S., & Alexander, R. A. (1981). Concurrent and predictive validity designs: A critical reanalysis. Journal of Applied Psychology, 66(1), 1.

Eastwick, P. W., Eagly, A. H., Finkel, E. J., & Johnson, S. E. (2011). Implicit and explicit preferences for physical attractiveness in a romantic partner: A double dissociation in predictive validity. Journal of Personality and Social Psychology, 101, 993–1011.

Eastwick, P. W., Luchies, L. B., Finkel, E. J., & Hunt, L. L. (2014). The predictive validity of ideal partner preferences: A review and meta-analysis. Psychological Bulletin, 140(3), 623.

Olivia Guy-Evans, MSc

BSc (Hons) Psychology, MSc Psychology of Education

Associate Editor for Simply Psychology

Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.


Saul McLeod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul McLeod, PhD., is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.
