In a series of experiments, we investigated the dependence of contextual cueing on working memory resources. A visual search task with 50% repeated displays was run to elicit implicit learning of contextual cues. The search task was combined with a concurrent visual working memory task during either an initial learning phase or a later test phase. The visual working memory load was either spatial or nonspatial, and articulatory suppression was used to prevent verbalization. We found that nonspatial working memory load had no effect, regardless of whether it was imposed in the learning or the test phase. In contrast, visuospatial load diminished search facilitation in the test phase, but not during learning. We concluded that visuospatial working memory resources are needed for the expression of previously learned spatial contexts, whereas the learning of contextual cues does not depend on visuospatial working memory.