Convergent validity is a subtype of construct validity that evaluates the extent to which responses on a test or instrument exhibit a strong relationship with responses on conceptually similar tests or instruments. Not only should a measure of a construct correlate with related variables, but it should also show little correlation with dissimilar, unrelated ones.
Key Takeaways
- Convergent validity is the degree to which two measures of constructs that theoretically should be related are, in fact, related.
- These measures can be different methods (e.g., self-report questionnaires and behavioral observations) or different instruments designed to measure the same construct.
- High positive correlations (generally above 0.5) between measures of the same construct indicate convergent validity.
- Convergent validity is often assessed alongside discriminant validity, which checks whether measures of constructs that shouldn’t be related are indeed not related.

Examples of Convergent Validity
Depression Questionnaires
If a psychologist is attempting to measure depression among a population using two different tests, they can examine how closely related the responses from those tests are to one another.
If they find that the results from both tests correlate strongly with one another, then convergent validity has been established; however, if there is no significant correlation between test results, then further investigation into why this lack of correspondence exists may be necessary (Krefetz et al., 2002).
IQ Tests
Researchers establish the convergent validity of IQ tests by comparing IQ test scores with scores on other measures that assess related mental abilities.
For example, if a person is given an IQ test and then subsequently completes a test of verbal skills, researchers can compare the results of both tests to determine whether there is a correlation between the two sets of results.
In Firmin et al. (2008), researchers correlated scores from web-administered IQ tests with an established measure – the Composite Intelligence Index of the Reynolds Intellectual Assessment Scales.
This type of comparison helps to show that the IQ test measures what it purports to measure – intelligence – and thus helps to establish construct validity.
Measuring Extroversion
Imagine a study designed to assess extroversion. The researchers use three different methods to collect data:
- A self-report questionnaire where participants rate their agreement with statements about sociability.
- Other-report ratings, in which participants’ romantic partners rate how much the participants enjoy social events.
- Behavioral observation, with researchers observing how participants interact with strangers in a waiting room.
If these three methods yield scores that are highly correlated, it would be evidence for the convergent validity of the extroversion measures.
How to measure convergent validity
Convergent validity is a matter of degree, not an all-or-none phenomenon. It is also not a one-time determination; rather, it is an ongoing process that should be continually re-evaluated as new information becomes available.
Convergent validity can be measured using several statistical methods. The most common approaches are:
Correlation coefficients
The most common method for assessing convergent validity is calculating the correlation coefficient between scores from different measures hypothesized to assess the same construct.
To establish convergent validity, researchers typically set a threshold for the correlation coefficients or factor loadings.
The exact threshold may vary depending on the field and the nature of the constructs being measured, but values above 0.5 are generally considered acceptable.
Pearson’s correlation coefficient (r) is used when both measures are continuous and normally distributed.
Spearman’s rank correlation coefficient (ρ) is used when the measures are ordinal or when the assumptions of Pearson’s correlation are not met.
While a high correlation is a positive indicator, it’s crucial to remember that it doesn’t guarantee the measures are accurately assessing the intended construct.
It’s possible they could be measuring a different, shared construct.
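As a minimal illustration, the sketch below uses SciPy with invented scores from two hypothetical depression questionnaires (the variable names and data are made up for the example) to compute both coefficients:

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

# Hypothetical total scores for 10 participants on two depression
# questionnaires; the values are invented for illustration only.
scale_a = np.array([12, 25, 31, 8, 19, 27, 14, 22, 35, 10])
scale_b = np.array([15, 28, 30, 10, 21, 25, 17, 24, 38, 12])

r, r_p = pearsonr(scale_a, scale_b)        # continuous, roughly normal data
rho, rho_p = spearmanr(scale_a, scale_b)   # ordinal or non-normal data

print(f"Pearson r    = {r:.2f} (p = {r_p:.3f})")
print(f"Spearman rho = {rho:.2f} (p = {rho_p:.3f})")

# Correlations well above the ~0.5 threshold would be taken as evidence
# of convergent validity between the two scales.
```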
Factor analysis
Exploratory Factor Analysis (EFA) can be used to identify the underlying factor structure of a set of measures.
Confirmatory Factor Analysis (CFA) can be used to test whether the measures load onto the expected factors based on theory.
High factor loadings (generally above 0.5) of the measures on the same factor suggest convergent validity.
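A rough sketch of this workflow is shown below, assuming the third-party factor_analyzer package and simulated data in which three hypothetical indicators all reflect one latent construct:

```python
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer  # third-party package; interface assumed

rng = np.random.default_rng(0)
n = 200

# Simulate three indicators that share one hypothetical latent construct.
latent = rng.normal(size=n)
df = pd.DataFrame({
    "self_report":    latent + rng.normal(scale=0.6, size=n),
    "partner_report": latent + rng.normal(scale=0.7, size=n),
    "observation":    latent + rng.normal(scale=0.8, size=n),
})

# Exploratory factor analysis with a single common factor.
fa = FactorAnalyzer(n_factors=1, rotation=None)
fa.fit(df)

# Loadings well above 0.5 on the shared factor suggest convergent validity.
print(pd.DataFrame(fa.loadings_, index=df.columns, columns=["Factor 1"]))
```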
Structural Equation Modeling (SEM)
SEM is a more advanced technique that combines factor analysis and regression analysis.
It allows for the simultaneous assessment of convergent validity, discriminant validity, and other types of validity.
High factor loadings and low cross-loadings in SEM support convergent validity.
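The sketch below illustrates the idea with the third-party semopy package (its lavaan-style model syntax and interface are assumed here) on simulated data; it fits a one-factor measurement model and prints the estimated loadings:

```python
import numpy as np
import pandas as pd
from semopy import Model  # third-party SEM package; lavaan-style syntax assumed

rng = np.random.default_rng(1)
n = 300

# Simulated data: three hypothetical indicators of one latent construct.
latent = rng.normal(size=n)
data = pd.DataFrame({
    "self_report":    latent + rng.normal(scale=0.6, size=n),
    "partner_report": latent + rng.normal(scale=0.7, size=n),
    "observation":    latent + rng.normal(scale=0.8, size=n),
})

# One latent factor (Extroversion) measured by three indicators.
desc = "Extroversion =~ self_report + partner_report + observation"

model = Model(desc)
model.fit(data)

# The parameter estimates include the factor loadings; uniformly high
# loadings on the shared latent factor are the SEM-based evidence of
# convergent validity.
print(model.inspect())
```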
Multitrait-Multimethod Matrix (MTMM)
MTMM is a method that assesses both convergent and discriminant validity by examining the correlations between different traits (constructs) measured by different methods.
Convergent validity is assessed by examining the monotrait-heteromethod correlations – correlations between measures of the same trait obtained by different methods – and is supported when these correlations are high.
Discriminant validity is evaluated by looking at correlations between different traits measured by the same method (heterotrait-monomethod correlations) and by different methods (heterotrait-heteromethod correlations).
These correlations are expected to be weaker than the monotrait-heteromethod correlations.
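As an illustration, the following sketch builds a small MTMM-style correlation matrix from simulated data for two hypothetical traits, each measured by two methods (all names are invented), and pulls out the monotrait-heteromethod and heterotrait correlations:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
n = 250

# Two hypothetical traits, each measured by two methods.
extroversion = rng.normal(size=n)
anxiety = rng.normal(size=n)

df = pd.DataFrame({
    "extro_self":     extroversion + rng.normal(scale=0.5, size=n),
    "extro_observer": extroversion + rng.normal(scale=0.6, size=n),
    "anx_self":       anxiety + rng.normal(scale=0.5, size=n),
    "anx_observer":   anxiety + rng.normal(scale=0.6, size=n),
})

corr = df.corr()  # the multitrait-multimethod correlation matrix
print(corr.round(2))

# Monotrait-heteromethod correlations (same trait, different methods):
# these should be high if convergent validity holds.
print("extroversion, self vs observer:", round(corr.loc["extro_self", "extro_observer"], 2))
print("anxiety, self vs observer:     ", round(corr.loc["anx_self", "anx_observer"], 2))

# Heterotrait correlations (different traits) should be noticeably weaker,
# which is the discriminant side of the matrix.
print("heterotrait example:           ", round(corr.loc["extro_self", "anx_observer"], 2))
```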
It’s important to note that assessing convergent validity is just one part of the validation process. Researchers should also consider other types of validity, such as content validity, criterion validity, and discriminant validity, to gain a comprehensive understanding of a measure’s psychometric properties.
FAQs
Is convergent validity internal or external?
Convergent validity is generally described as an external form of validity because it evaluates a measure against other, separate measures of the same construct rather than against the instrument’s own internal structure.
Evidence of convergence therefore speaks to generalizability – whether the measure behaves as expected across instruments, practical situations, and other contexts – while remaining, at root, a facet of construct validity.
What is the difference between convergent and discriminant validity?
Discriminant validity indicates that the results obtained by an instrument do not correlate too strongly with measurements of a similar but distinct trait. For example, suppose a company sends potential software engineers a test that measures how proficient they are at coding.
A high score on the coding test should not correlate strongly with scores on an IQ test; if it did, the coding test would effectively be just another IQ test.
Convergent validity, on the other hand, indicates that a test correlates with a well-established test’s measures of the same construct. Both discriminant and convergent validity are important for measuring construct validity (Hubley & Zumbo, 2013).
What is the difference between convergent and divergent validity?
Divergent validity is simply another name for discriminant validity, and it has been used by some well-known writers in the measurement field (e.g., Nunnally & Bernstein, 1994), although it is not the commonly accepted term (Hubley & Zumbo, 2013).
The distinction between convergent and divergent validity is therefore the same as the distinction between convergent and discriminant validity.
References
American Psychological Association. (2010). Standards for educational and psychological testing. Retrieved from http://www.apa.org/education/k12/testing-standards.pdf
Cronbach, L. J. (1951). Coefficient alpha and the internal structure of tests. Psychometrika, 16(3), 297-334.
Firmin, M. W., et al. (2008). Evaluating the concurrent validity of three web-based IQ tests and the Reynolds Intellectual Assessment Scales (RIAS). Eastern Education Journal, 37(1), 20.
Hubley, A. M., & Zumbo, B. D. (2013). Psychometric characteristics of assessment procedures: An overview.
Krabbe, E. C. W. (2017). Validity in quantitative research: A practical guide to interpreting validity coefficients in scientific studies. New York, NY: Routledge. https://doi.org/10.4324/9781315677620
Krefetz, D. G., Steer, R. A., Gulab, N. A., & Beck, A. T. (2002). Convergent validity of the Beck Depression Inventory-II with the Reynolds Adolescent Depression Scale in psychiatric inpatients. Journal of Personality Assessment, 78(3), 451-460.
MacDonald III, A. W., Goghari, V. M., Hicks, B. M., Flory, J. D., Carter, C. S., & Manuck, S. B. (2005). A convergent-divergent approach to context processing, general intellectual functioning, and the genetic liability to schizophrenia. Neuropsychology, 19(6), 814.
Nunnally, J. C., & Bernstein, I. H. (1994). Psychometric theory (3rd ed.). New York: McGraw-Hill.
Poole, K. T., & Rosenthal, H. (1991). Patterns of congressional voting. American Journal of Political Science, 228-278.