By Saul McLeod, published 2011
Why do we need to recognise faces?
For evolutionary reasons, it could be suggested that we need to distinguish between familiar faces in our social groups for survival. There are also sociocultural reasons: we might be able to predict a person's behavior from facial features.
There are two explanations for how people recognize faces:
1. Holistic Analysis:
- Face recognition is taken as a whole unit, comprising the features, the distance and configuration (i.e. relationship) between the features, and cognitive and emotional (i.e. semantic) knowledge about when and where you normally see this person and how you feel about them.
- This is an example of a top-down theory, suggesting that recognizing a face requires stored semantic and emotional information.
- Evidence comes from Bruce and Young's (1986) model.
2. Feature Analysis:
- This is an example of a bottom-up theory in which it is suggested that analysing individual features (e.g. eyes or nose) is the most important factor in face recognition.
- Evidence comes from Shepherd, Davies and Ellis (1981).
Bruce and Young's Holistic Model
Bruce and Young (1986) proposed a top-down approach to face recognition, suggesting that recognizing a face requires stored semantic and emotional information, and is more complex than simply adding together a set of features.
Face perception is driven by an individual’s prior knowledge and past experience.
For example, when we see someone in the street, we would need to refer back to previously stored information about where we know the person from in order to say that we have recognized them fully.
Bruce and Young's (1986) model of face perception is also a holistic theory.
According to the holistic approach, a face is recognized as a whole, analyzing the relationship between features (i.e. the configuration), feelings aroused by the face and semantic information about the person.
Ellis (1975) suggests we have a stored template or pattern for the face of each person we know, and when presented with a face, we try to match this stimulus to our mental pattern.
Evidence for the Holistic Theory
Young & Hay (1986) demonstrated the importance of layout or configuration in the processing of faces.
- Constructed composite faces from photos by combining the top and bottom halves of famous people's faces (e.g. the top half of Tom Cruise and the bottom half of Robbie Williams).
- When the halves were closely aligned, participants had problems naming the two people.
- The composite (i.e. new face) seemed to produce a new holistic face in which it was difficult to perceive the separate halves.
- Performance was much better when the two halves were not closely aligned and therefore didn't create a new configuration (i.e. there was a gap between Tom and Robbie).
- If we recognized faces by features alone, a new configuration would not matter; however, in this study it did.
- Close alignment produced a new configuration, interfering with face recognition.
- Put simply, the relationship between the facial features (e.g. eyes and mouth) changes when you put separate halves of two faces together. If we recognize faces using a holistic approach, then this should cause problems with face recognition.
Clinical evidence (e.g. prosopagnosia and Capgras syndrome) suggests face perception is extremely complicated, involving both cognitive and emotional processes. For example, prosopagnosia sufferers cannot cognitively recognize a face, but can report an emotional feeling. Capgras sufferers experience cognitive recognition, but have no sense of emotional recognition.
Both cases show that face recognition cannot rely on features alone, as prosopagnosia and Capgras patients can name and describe the individual features of a familiar face. This points to a more holistic model of face recognition.
Feature Analysis Model
Feature-analysis theory is an example of a bottom-up theory in which it is suggested that analysing individual features is the most important factor in face recognition.
According to bottom-up theory, the visual information from the face we are currently viewing is the most important information for recognition, and so we would need to focus on the detail of the face, analysing the separate features closely.
Visual information would include the way the light and shade appear on the face and the texture of the hair and skin, and also skin colour. All these visual cues combine to enable us to perceive the broader features of the face like the shape of the nose and mouth.
Evidence for Feature-Analysis Theory
Shepherd, Davies & Ellis (1981) investigated how features are used in free-recall descriptions by showing participants some faces of people they had never seen before for a brief period of time.
Participants were then asked to describe from memory the faces they had been shown. In describing these unfamiliar faces, the features most often referred to were: hair, eyes, nose, mouth, eyebrows, chin and forehead (in that order). This research suggests that faces of unfamiliar people tend to be recalled using the main features of the face.
Ellis et al. (1979) discovered that descriptions of unfamiliar faces focus more on external facial features such as hair, face shape, etc., whereas we tend to use internal features when recalling faces of familiar people.
Obviously external features are more noticeable, particularly from a distance. However, they are also more likely to change, as when people dye or cut their hair, so internal features are probably more reliable for long-term recognition.
Critics of the holistic theory say that features, too, are important, and many studies show this. The most important features for people we know are internal ones, like the eyes and mouth. However, many studies also show that it is not just the features that are important, but their whole arrangement.
Because cognitive psychology sees itself as a pure science, it mainly uses experiments to study cognitive processes. The big problem with this is that experiments are not like real life and therefore lack ecological validity.
Typically, experiments use materials/faces that have no personal meaning to participants (i.e. not like a real-life situation).
Much of face recognition research is lab-based, and this means it is not like real life:
- Experiments routinely use pictures of faces, which are two-dimensional (unlike 3D real life).
- Also, the faces are static (motionless), whereas in real life they are moving.
- In addition, a range of social and emotional factors are linked to any real-life encounter.
In short, most studies lack ecological validity: in real life, faces are moving, and encounters with people involve social, emotional and motivational factors.
Bruce, V., & Young, A. (1986). Understanding face recognition. British Journal of Psychology, 77, 305–327.
Ellis, H. D. (1975). Recognising faces. British Journal of Psychology, 66, 409–426.
Ellis, H. D., Shepherd, J. W., & Davies, G. M. (1979). Identification of familiar and unfamiliar faces from internal and external features: Some implications for theories of face recognition. Perception, 8(4), 431–439.
Shepherd, J., Ellis, H., & Davies, G. (Eds.). (1981). Perceiving and remembering faces. Academic Press.
Young, A. W., McWeeny, K. H., Hay, D. C., & Ellis, A. W. (1986). Matching familiar and unfamiliar faces on identity and expression. Psychological Research, 48(2), 63–68.