
A Dopamine Mechanism for Reward Maximization

Overview
Specialty: Science
Date: 2024 May 8
PMID: 38717856
Abstract

Individual survival and evolutionary selection require biological organisms to maximize reward. Economic choice theories define the necessary and sufficient conditions for such maximization, and neuronal signals of decision variables provide mechanistic explanations. Reinforcement learning (RL) formalisms use predictions, actions, and policies to maximize reward. Midbrain dopamine neurons code reward prediction errors (RPE) of subjective reward value suitable for RL. Electrical and optogenetic self-stimulation experiments demonstrate that monkeys and rodents repeat behaviors that result in dopamine excitation. Dopamine excitations reflect positive RPEs that increase reward predictions via RL; against these increased predictions, obtaining similar dopamine RPE signals again requires better rewards than before. The positive RPEs drive predictions higher again and thus advance a recursive reward-RPE-prediction iteration toward better and better rewards. Agents also avoid dopamine inhibitions that lower reward predictions via RL, which allows smaller rewards than before to elicit positive dopamine RPE signals and resume the iteration toward better rewards. In this way, dopamine RPE signals serve as a causal mechanism that attracts agents via RL to the best rewards. The mechanism improves daily life and benefits evolutionary selection but may also induce restlessness and greed.
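The reward-RPE-prediction iteration described above can be illustrated with a minimal Rescorla-Wagner / TD(0)-style sketch. This is an assumed toy formulation for illustration only, not the authors' model: the value update V <- V + alpha * (reward - V), the learning rate, and the reward sequence are all hypothetical choices.

```python
# Minimal sketch (assumed, not the authors' model) of the reward-RPE-prediction
# iteration: a positive RPE stands in for a dopamine excitation, and each update
# raises the prediction, so matching the same RPE later requires a better reward.

def rpe(reward: float, prediction: float) -> float:
    """Reward prediction error: positive values mimic dopamine excitation."""
    return reward - prediction

def update_prediction(prediction: float, error: float, alpha: float = 0.3) -> float:
    """Rescorla-Wagner / TD(0)-style value update with learning rate alpha."""
    return prediction + alpha * error

prediction = 0.0
for reward in [1.0, 1.0, 1.5, 1.5, 2.0]:  # hypothetical reward sequence
    error = rpe(reward, prediction)
    prediction = update_prediction(prediction, error)
    print(f"reward={reward:.1f}  RPE={error:+.2f}  new prediction={prediction:.2f}")

# As the prediction rises, repeating the same reward yields a shrinking RPE,
# so only a better reward restores a large positive dopamine-like signal;
# a negative RPE would lower the prediction and let smaller rewards resume
# the iteration, as the abstract describes.
```

Running the loop shows the RPE shrinking toward zero whenever the reward stays constant and jumping back up when the reward improves, which is the ratchet-like attraction toward better rewards that the abstract attributes to dopamine RPE signals.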

Citing Articles

Local Regulation of Striatal Dopamine Release Shifts from Predominantly Cholinergic in Mice to GABAergic in Macaques.

Shin J, Goldbach H, Burke D, Authement M, Swanson E, Bocarsly M. J Neurosci. 2025; 45(11).

PMID: 39837662 PMC: 11905349. DOI: 10.1523/JNEUROSCI.1692-24.2025.
