Levels of Processing Theory (Craik & Lockhart, 1972)

The levels of processing model focuses on the depth of processing involved in memory, and predicts that the deeper information is processed, the longer a memory trace will last.

Memory retention depends primarily on the depth of mental engagement during the initial encoding process.

Levels of processing theory suggests that how we think about information dictates how well we remember it.

Unlike previous models focusing on memory stores, this framework emphasizes the quality of cognitive effort.

Processing occurs along a continuous scale from shallow, sensory analysis to deep, semantic enrichment.


Craik and Lockhart (1972) suggested a continuum of processing depth, ranging from shallow sensory analysis to deep semantic analysis. This continuum is generally divided into three distinct levels:

Shallow Processing

Shallow processing involves superficial or surface-level encoding of information. It typically focuses on sensory features or basic characteristics without engaging in meaningful analysis or elaboration.

Shallow processing involves only maintenance rehearsal (repetition that helps us hold something in short-term memory) and leads to fairly short-term retention of information.

As a result, shallow processing leads to poorer memory encoding and weaker retention than deep processing, which involves more thorough and meaningful engagement with the information.

Structural processing (appearance): encoding only the physical qualities of something, e.g. the typeface of a word or how the letters look. For example, when looking at a word, a person might only pay attention to the shapes of the letters, count the number of vowels, or determine whether the word is written in uppercase or lowercase letters.

Intermediate Phonemic Processing

Intermediate phonemic processing (also referred to as phonological processing) represents the middle tier of cognitive depth in the Levels of Processing (LOP) framework proposed by Craik and Lockhart.

At this level, the cognitive analysis of a stimulus moves beyond its mere physical or visual characteristics (shallow structural processing) and begins to translate those physical shapes into meaningful auditory units, attaching phonetic sounds to them.

Phonemic processing: processing at this level focuses on how a word sounds, such as identifying whether a target word rhymes with another word.

Intermediate phonemic processing is central to short-term memory, since verbal information is typically held there in a sound-based code.

In Baddeley and Hitch’s model of working memory, this is handled by the phonological loop, which consists of a passive phonological store and an active articulatory rehearsal process (the “inner voice” that repeats information).

Deep Processing

Deep processing refers to the meaningful and thorough encoding of information.

It involves engaging with the content thoughtfully and elaborately, making connections to existing knowledge and personal experiences.

Deep processing promotes better memory retention and recall than shallow, surface-level processing.

Craik (1973, p. 48) defined depth not in terms of the number of operations performed on a stimulus, but as “the meaningfulness extracted from the stimulus” — a distinction that shifts the focus from quantity of processing to quality of engagement.

The primary mechanism is semantic processing: relating incoming information to previously stored meanings or personal experiences.

When a learner considers the implications of a word rather than merely its sound or appearance, they produce a more durable memory trace — one that resists decay because it is embedded within a dense network of existing associations rather than stored in isolation.

Elaborative Rehearsal

Deep processing depends on elaborative rehearsal: the active transformation of material to make it more meaningful.

This goes beyond simple repetition.

The learner expands on the information by constructing mental images, identifying logical connections, or linking new content to prior knowledge.

Each layer of meaning added strengthens the memory trace and integrates the new material into a coherent knowledge structure, making subsequent retrieval more reliable.

Deep semantic encoding naturally leads to richer and more elaborate memory codes, largely because elaboration is inherently easier to achieve at the semantic level than at shallower structural or phonological levels.

Key Study: Craik and Tulving (1975)

Aim

To investigate whether the depth of processing affects the long-term recognition of words.

Method

Participants were presented with 60 words and asked questions requiring structural, phonological, or semantic analysis. A surprise recognition test followed these tasks.

Some questions required the participants to process the word in a deep way (e.g. semantic) and others in a shallow way (e.g. structural and phonemic). For example:

  • Shallow graphemic (structural) processing: Participants were asked to focus on the physical and visual characteristics of the words, such as deciding whether a given word was printed in uppercase or lowercase letters.

  • Intermediate phonemic processing: Participants were asked to focus on the auditory qualities of the words, such as determining whether a presented word rhymed with a specific target word.

  • Deep semantic processing: Participants were asked to focus entirely on the meaning of the words by deciding whether each word logically fitted into a blank space within a provided sentence.

Participants were then given a long list of 180 words into which the original words had been mixed.

They were asked to pick out the original words.

Elaboration Experiment

Craik and Tulving (1975) also sought to explore the impact of elaboration of processing, which refers to the sheer amount or richness of cognitive processing occurring at a specific level.

To test this, they manipulated the complexity of the sentences used in the deep semantic task.

Participants were presented with a word and asked whether it fitted into either a very simple sentence frame (“She cooked the ____”) or a highly complex, descriptive sentence frame (“The great bird swooped down and carried off the struggling ____”).

Results

Recognition rates for semantically processed words were significantly higher than those for structurally or phonologically processed words.

The results from the elaboration experiment revealed that cued recall was twice as high for words that had accompanied the complex sentences compared to those in the simple sentences.

This demonstrated that while deeply processing a word’s meaning is highly effective on its own, actively embedding that meaning within a richer, more elaborate cognitive context creates an even stronger, more easily retrievable memory trace.

The researchers also noted an interesting secondary finding:

Memory performance was generally much better for words that yielded a positive “Yes” response during the initial processing task than for those that yielded a negative “No” response.

Conclusion

Semantically processed words involve elaborative rehearsal and deep processing, which results in more accurate recall.

Phonemic and visually processed words involve shallow processing and less accurate recall.

Real-Life Applications

This explanation of memory is useful in everyday life because it highlights the way in which elaboration, which requires deeper processing of information, can aid memory.

Three examples of this are:

  • Reworking – putting information in your own words or talking about it with someone else.
  • Method of loci – when trying to remember a list of items, linking each with a familiar place or route.
  • Imagery – by creating an image of something you want to remember, you elaborate on it and encode it visually (e.g. a mind map).

The above examples could all be used to revise psychology using semantic processing (e.g. explaining memory models to your mum, using mind maps etc.) and should result in deeper processing through elaborative rehearsal.

Consequently, more information will be remembered (and recalled) and better exam results should be achieved.

Strengths

A central strength of the LOP approach is its assumption that perception, attention, and memory are closely interconnected.

Rather than treating memory as an isolated system of rigid storage boxes, the theory correctly posits that learning and remembering are natural by-products of perception, attention, and comprehension.

This holistic view accurately reflects the fluid and integrated nature of human cognition.

The theory successfully identified elaboration and distinctiveness as crucial determinants of memory.

By establishing that long-term retention depends heavily on the depth of analysis, the LOP framework highlighted that enriching information (elaboration) and making it unique (distinctiveness) are critical for strong memory formation.

Functional neuroimaging has provided neurobiological backing for the theory’s claims about depth, offering biological evidence that deep processing engages specific regions of the brain.

Researchers utilize Functional Magnetic Resonance Imaging (fMRI): a technology that measures brain activity by detecting changes associated with blood flow.

These studies reveal that semantic tasks trigger higher metabolic activity in the frontal and temporal lobes compared to shallow tasks.

Wagner et al. (1998)

  • Aim: To identify the neural correlates associated with the depth of processing during encoding.

  • Procedure: Participants were scanned using fMRI while they performed either semantic or perceptual tasks on various words.

  • Findings: Increased activation was observed in the left inferior frontal lobe and the medial temporal lobes during semantic processing.

  • Conclusions: Semantic encoding recruits specialized neural circuits that facilitate the formation of robust and accessible memory traces.

Weaknesses

Lack of explanatory depth

Eysenck (1990) argued that the theory describes rather than explains.

Craik and Lockhart demonstrated that deep processing produces better long-term memory than shallow processing, but offered no detailed account of why this is the case.

Subsequent research has partially addressed this:

Deeper coding appears to benefit memory because it is more elaborate — activating a wider range of semantic associations and integrating new material into pre-existing knowledge networks.

But the mechanism remains incompletely specified.

The vagueness of “depth”

The theory’s central construct cannot be independently observed or objectively measured.

Without a principled way to determine processing depth separately from its assumed effects on memory, the framework risks circularity: deep processing is inferred from better recall, which is then explained by deep processing.

Effort as a confound

Deeper processing typically demands more cognitive effort than shallow processing.

It is therefore unclear whether superior retention reflects the meaningfulness of the encoding or simply the effort invested — a distinction the theory does not adequately address.

The role of distinctiveness

Research by Bransford et al. (1979) revealed that depth and elaboration are not the only determinants of retention.

The sentence “A mosquito is like a doctor because both draw blood” is better recalled than the more elaborated “A mosquito is like a racoon because they both have heads, legs and jaws.”

The first sentence is less elaborate but more distinctive — its incongruity makes it memorable.

This finding suggests that the theory underestimates the independent contribution of distinctiveness to memory formation.

Neglect of retrieval

The assumption that deep processing is always superior to intermediate phonemic processing was later challenged.

Morris et al. (1977) demonstrated the principle of transfer-appropriate processing, which argues that long-term memory is best when the type of processing used during learning closely matches the type of processing required during the memory test.

If a subsequent memory test specifically demands phonological recall (for example, asking “what was the word that rhymed with cable?”), a person will actually perform better if they originally encoded the word using intermediate phonemic processing rather than deep semantic processing.

Implicit memory and amnesia

Levels-of-processing effects are considerably stronger in explicit memory than in implicit memory.

More problematically, the theory struggles to account for amnesic patients who retain intact semantic processing abilities yet show severely impaired long-term memory — a dissociation that the depth-of-processing account has no ready explanation for.

Theoretical Revision

In response to these criticisms, Lockhart and Craik (1990) revised their original framework.

They acknowledged that the encoding-retrieval relationship had been inadequately theorised, conceded that deeper processing is not universally superior across all tasks and test conditions, and accepted that the original model was overly simplistic.

The core principle nonetheless retains its standing as a foundational heuristic in cognitive psychology:

Meaningful engagement with material — actively constructing understanding rather than passively rehearsing — remains one of the most reliable routes to durable memory.

How does LOP theory differ from multi-store models?

While multi-store models attempt to map out the rigid architecture of where memories live, the LOP theory focuses entirely on the cognitive activities that create memories in the first place.

Structure vs. Process

The most significant difference between the two approaches is their primary focus.

Multi-store models, such as the highly influential model proposed by Atkinson and Shiffrin (1968), emphasise the structural architecture of memory.

They propose that memory consists of fixed, permanent structural components—specifically, sensory stores, a short-term store (STS), and a long-term store (LTS).

In this view, memory is heavily compartmentalised, and the primary goal of the theory is to distinguish between these different storage systems.

Conversely, the Levels of Processing theory, proposed by Craik and Lockhart (1972), focuses on the mental processes that occur during learning, heavily criticising multi-store models for emphasising structure at the expense of processing.

LOP theorists argue that memory is simply a by-product of perception, attention, and comprehension.

According to Craik and Lockhart, the multi-store model has the relationship between structure and process “essentially the wrong way round”.

Instead of focusing on where a memory is stored, LOP theory asserts that the structural components of memory are merely the resulting consequences of perceptual analysis and information processing.

While LOP theorists do not explicitly deny the existence of different memory stores, they largely ignore them to focus on how information is encoded early on.

Mechanisms of Transfer: Rehearsal vs. Depth

The two theories also diverge sharply on the mechanisms required to retain information long-term.

The Multi-Store Model’s Reliance on Maintenance Rehearsal:

In the multi-store model, the short-term store acts as a necessary gateway or relay station. Information flows in a unidirectional, linear order from sensory memory to the STS, and then to the LTS.

The key control process that dictates whether information makes it from the short-term to the long-term store is rote rehearsal.

The multi-store model assumes a direct relationship between the amount of rehearsal an item receives in the STS and the strength of the resulting memory trace in the LTS.

The LOP Theory’s Focus on Depth of Processing:

Craik and Lockhart strongly disputed the assumption that simply repeating information (maintenance rehearsal) always improves long-term memory.

They argued that maintenance rehearsal does not effectively enhance long-term retention; instead, it is the kind of rehearsal or processing that matters.

The central claim of LOP theory is that the depth to which information is mentally analysed during initial exposure is what determines its memorability.

Processing occurs on a continuum, ranging from shallow sensory/structural analysis (e.g., looking at the physical shape of a word) to intermediate phonological analysis (e.g., how a word sounds), down to deep semantic analysis (e.g., understanding the word’s meaning).

The deeper and more elaborative the processing during encoding, the stronger and longer-lasting the memory trace will be.

Rigidity vs. Flexibility

Finally, multi-store models are often criticised for being overly simplistic and rigid.

They assume that information flows through the mind in a strict sequence and that short-term and long-term stores operate in a uniform way.

For example, Atkinson and Shiffrin’s model implies that only consciously processed information in the short-term store can reach long-term memory, which struggles to explain phenomena like implicit learning.

The LOP framework is more fluid, focusing on a continuum of processing rather than strict boundaries. It acknowledges that human memory is deeply intertwined with how we interact with the world.

It suggests that long-term memory is driven by how actively we understand, elaborate on, and find distinctiveness in new material, rather than just how long we hold it in a temporary mental waiting room.

References

Bransford, J. D., Franks, J. J., Morris, C. D., & Stein, B. S. (1979). Some general constraints on learning and memory research. In L. S. Cermak & F. I. M. Craik (Eds.), Levels of processing in human memory (pp. 331–354). Hillsdale, NJ: Lawrence Erlbaum Associates.

Craik, F. I. M., & Lockhart, R. S. (1972). Levels of processing: A framework for memory research. Journal of Verbal Learning and Verbal Behavior, 11, 671–684.

Craik, F. I. M., & Tulving, E. (1975). Depth of processing and the retention of words in episodic memory. Journal of Experimental Psychology: General, 104, 268–294.

Eysenck, M. W., & Keane, M. T. (1990). Cognitive psychology: A student’s handbook. Hove, UK: Lawrence Erlbaum Associates.

Wagner, A. D., Schacter, D. L., Rotte, M., Koutstaal, W., Maril, A., Dale, A. M., Rosen, B. R., & Buckner, R. L. (1998). Building memories: Remembering and forgetting of verbal experiences as predicted by brain activity. Science, 281(5380), 1188–1191. https://doi.org/10.1126/science.281.5380.1188

What is the main idea of levels of processing theory?

The main idea of the levels of processing theory is that the depth at which information is processed during encoding affects its subsequent recall. According to this theory, information processed at a deeper level, such as through semantic or meaningful processing, is more likely to be remembered than information processed at a shallow level, such as through superficial or sensory-based processing.

Olivia Guy-Evans, MSc

BSc (Hons) Psychology, MSc Psychology of Education

Associate Editor for Simply Psychology

Olivia Guy-Evans is a writer and associate editor for Simply Psychology, where she contributes accessible content on psychological topics. She is also an autistic PhD student at the University of Birmingham, researching autistic camouflaging in higher education.


Saul McLeod, PhD

Chartered Psychologist (CPsychol)

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul McLeod, PhD, is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.