Dopamine transients do not act as model-free prediction errors during associative learning

Nat Commun. 2020 Jan 8;11(1):106. doi: 10.1038/s41467-019-13953-1.

Abstract

Dopamine neurons are proposed to signal the reward prediction error in model-free reinforcement learning algorithms. This term represents the unpredicted or 'excess' value of the rewarding event, value that is then added to the intrinsic value of any antecedent cues, contexts or events. To support this proposal, proponents cite evidence that artificially induced dopamine transients cause lasting changes in behavior. Yet these studies do not generally assess learning under conditions where an endogenous prediction error would occur. Here, to address this, we conducted three experiments where we optogenetically activated dopamine neurons while rats were learning associative relationships, both with and without reward. In each experiment, the antecedent cues failed to acquire value and instead entered into associations with the later events, whether valueless cues or valued rewards. These results show that in learning situations appropriate for the appearance of a prediction error, dopamine transients support associative, rather than model-free, learning.
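
For readers unfamiliar with the model-free account the abstract refers to, the sketch below illustrates the standard temporal-difference update in which a prediction error, delta = reward + gamma * V(next state) - V(cue), is added (scaled by a learning rate) to the cached value of the antecedent cue. This is a generic illustration of the hypothesis being tested, not the study's code; the parameter names, values, and simple cue-then-reward trial structure are assumptions chosen for clarity.

```python
# Hypothetical sketch of the model-free temporal-difference update described in
# the abstract: the prediction error is "added to" the value of the antecedent cue.
# Parameters (alpha, gamma, n_trials, reward) are illustrative, not from the study.

def td_pavlovian(n_trials=50, alpha=0.1, gamma=1.0, reward=1.0):
    """Simulate repeated cue -> reward trials with a TD(0)-style update."""
    v_cue = 0.0                                # cached value of the antecedent cue
    for _ in range(n_trials):
        # At reward delivery the trial ends, so the value of the successor
        # state is 0; the prediction error is the reward minus the cue's value.
        delta = reward + gamma * 0.0 - v_cue   # model-free prediction error
        v_cue += alpha * delta                 # error added to the cue's value
    return v_cue

if __name__ == "__main__":
    # Under this account the cue's cached value converges toward the reward
    # magnitude; the paper tests for (and does not find) this kind of value
    # acquisition when dopamine transients are induced artificially.
    print(round(td_pavlovian(), 3))
```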

Publication types

  • Research Support, N.I.H., Intramural

MeSH terms

  • Animals
  • Behavior, Animal
  • Conditioning, Classical
  • Cues
  • Dopamine / metabolism*
  • Dopaminergic Neurons / physiology*
  • Female
  • Learning*
  • Male
  • Models, Neurological
  • Rats
  • Reward

Substances

  • Dopamine