
Experimental Design: Types, Examples & Methods

By Dr. Saul McLeod

Experimental design refers to how participants are allocated to the different groups in an experiment. Types of design include repeated measures, independent groups, and matched pairs designs.

The most common way to design an experiment in psychology is to divide the participants into two groups, the experimental group and the control group, and then introduce a change to the experimental group but not to the control group.

The researcher must decide how they will allocate their sample to the different experimental groups. For example, if there are 10 participants, will all 10 take part in both conditions (e.g., repeated measures), or will the participants be split in half, with each half taking part in only one condition?

Three types of experimental designs are commonly used:

1. Independent Measures:

Independent measures design, also known as between-groups, is an experimental design where different participants are used in each condition of the independent variable.  This means that each condition of the experiment includes a different group of participants. 

This should be done by random allocation, which ensures that each participant has an equal chance of being assigned to one group or the other.

Independent measures involve using two separate groups of participants, one in each condition. For example:

[Figure: Independent Measures Experimental Design]

  • Con: More people are needed than with the repeated measures design (i.e., more time consuming).
  • Pro: Avoids order effects (such as practice or fatigue), as people participate in one condition only. If a person were involved in several conditions, they might become bored, tired, and fed up by the time they came to the second condition, or become wise to the requirements of the experiment.
  • Con: Differences between participants in the groups may affect results, for example; variations in age, gender or social background.  These differences are known as participant variables (i.e., a type of extraneous variable).
  • Control: After the participants have been recruited, they should be randomly assigned to their groups. This should ensure the groups are similar, on average (reducing participant variables).
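
The random-allocation control described above can be sketched in a few lines of Python. This is a hypothetical illustration (the function name and the 10-participant example are assumptions, not code from the article): shuffle the recruited sample, then split it into two equal groups so that every participant has an equal chance of landing in either condition.

```python
import random

def randomly_allocate(participants, seed=None):
    """Shuffle the sample and split it into two equal-sized groups."""
    rng = random.Random(seed)       # seed only for reproducible demos
    pool = list(participants)
    rng.shuffle(pool)               # every ordering is equally likely
    half = len(pool) // 2
    return pool[:half], pool[half:]  # (experimental group, control group)

# e.g., 10 participants numbered 1-10
experimental, control = randomly_allocate(range(1, 11), seed=42)
```

Because assignment depends only on the shuffle, participant variables such as age or ability should, on average, be spread evenly across the two groups.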

2. Repeated Measures:

Repeated Measures design is an experimental design where the same participants take part in each condition of the independent variable.  This means that each condition of the experiment includes the same group of participants.

Repeated Measures design is also known as within groups, or within-subjects design.

  • Pro: As the same participants are used in each condition, participant variables (i.e., individual differences) are reduced.
  • Con: There may be order effects. Order effects refer to the order of the conditions having an effect on the participants’ behavior.  Performance in the second condition may be better because the participants know what to do (i.e. practice effect).  Or their performance might be worse in the second condition because they are tired (i.e., fatigue effect). This limitation can be controlled using counterbalancing.
  • Pro: Fewer people are needed as they take part in all conditions (i.e. saves time).
  • Control: To combat order effects, the researcher counterbalances the order of the conditions for the participants, alternating the order in which participants complete the different conditions of the experiment.


Suppose we used a repeated measures design in which all of the participants first learned words in 'loud noise' and then learned them in 'no noise.' We would expect the participants to show better learning in 'no noise' simply because of order effects, such as practice. However, a researcher can control for order effects using counterbalancing.

The sample would be split into two groups: group 1 completes condition A then condition B, while group 2 completes B then A. Although order effects still occur for each participant, they occur equally in both groups, so they balance each other out in the results.
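
Counterbalancing can be sketched in Python as follows (a minimal illustration; the function name and two-condition setup are assumptions): randomly split the sample in half, give one half the order A→B and the other half B→A.

```python
import random

def counterbalance(participants, conditions=("A", "B"), seed=None):
    """Give half the participants one condition order and half the reverse."""
    rng = random.Random(seed)
    pool = list(participants)
    rng.shuffle(pool)                 # random split into the two order groups
    half = len(pool) // 2
    orders = {p: list(conditions) for p in pool[:half]}            # A then B
    orders.update({p: list(reversed(conditions)) for p in pool[half:]})  # B then A
    return orders

# e.g., 10 participants numbered 1-10
orders = counterbalance(range(1, 11), seed=1)
```

Any practice or fatigue effect now helps condition B for one group and condition A for the other, cancelling out across the whole sample.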


3. Matched Pairs:

A matched pairs design is an experimental design where pairs of participants are matched in terms of key variables, such as age or socioeconomic status. One member of each pair is then placed into the experimental group and the other member into the control group.

One member of each matched pair must be randomly assigned to the experimental group and the other to the control group.

[Figure: Matched Pairs Experimental Design]

  • Con: If one participant drops out, you lose two participants' data.
  • Pro: Reduces participant variables because the researcher has tried to pair up the participants so that each condition has people with similar abilities and characteristics.
  • Con: Very time-consuming trying to find closely matched pairs.
  • Pro: Avoids order effects, and so counterbalancing is not necessary.
  • Con: It is impossible to match people exactly, unless they are identical twins!
  • Control: Members of each pair should be randomly assigned to conditions. However, this does not solve all these problems.

Experimental Design Summary

Experimental design refers to how participants are allocated to the different conditions (or IV levels) in an experiment. There are three types:

1. Independent measures / between-groups: Different participants are used in each condition of the independent variable.

2. Repeated measures / within-groups: The same participants take part in each condition of the independent variable.

3. Matched pairs: Each condition uses different participants, but they are matched in terms of important characteristics, e.g., gender, age, intelligence, etc.

Animation created by Tom H

Learning Check

Read about each of the experiments below. For each experiment, identify (1) which experimental design was used; and (2) why the researcher might have used that design.

1. In order to compare the effectiveness of two different types of therapy for depression, depressed patients were assigned to receive either cognitive therapy or behavior therapy for a 12-week period. The researchers attempted to ensure that the patients in the two groups had a similar severity of depressed symptoms by administering a standardized test of depression to each participant, then pairing them according to the severity of their symptoms.

2. To assess the difference in reading comprehension between 7 and 9-year-olds, a researcher recruited a group of each from a local primary school. They were given the same passage of text to read, and then asked a series of questions to assess their understanding.

3. To assess the effectiveness of two different ways of teaching reading, a group of 5-year-olds were recruited from a primary school. Their level of reading ability was assessed, and then they were taught using scheme one for 20 weeks. At the end of this period, their reading was reassessed, and a reading improvement score was calculated. They were then taught using scheme two for a further 20 weeks and another reading improvement score for this period was calculated. The reading improvement scores for each child were then compared.

4. In order to assess the effect of organization on recall, a researcher randomly assigned student volunteers to two conditions. Condition one attempted to recall a list of words that were organized into meaningful categories; condition two attempted to recall the same words, randomly grouped on the page.

Experiment Terminology

Ecological validity

The degree to which an investigation represents real-life experiences.

Experimenter effects

These are the ways that the experimenter can accidentally influence the participant through their appearance or behavior.

Demand characteristics

The clues in an experiment that lead the participants to think they know what the researcher is looking for (e.g. experimenter’s body language).

Independent variable (IV)

Variable the experimenter manipulates (i.e. changes) – assumed to have a direct effect on the dependent variable.

Dependent variable (DV)

Variable the experimenter measures. This is the outcome (i.e. result) of a study.

Extraneous variables (EV)

All variables other than the independent variable that could affect the results (DV) of the experiment. Extraneous variables should be controlled where possible.

Confounding variables

Variable(s) that have affected the results (DV), apart from the IV. A confounding variable could be an extraneous variable that has not been controlled.

Random Allocation

Randomly allocating participants to independent variable conditions means that all participants should have an equal chance of taking part in each condition.

The principle of random allocation is to avoid bias in the way the experiment is carried out and to limit the effects of participant variables.

Order effects

Changes in participants’ performance due to their repeating the same or similar test more than once. Examples of order effects include:

(i) practice effect: an improvement in performance on a task due to repetition, for example, because of familiarity with the task;

(ii) fatigue effect: a decrease in performance of a task due to repetition, for example, because of boredom or tiredness.

How to reference this article:

McLeod, S. A. (2017, January 14). Experimental design. Simply Psychology.
