
Learning from Sensory and Reward Prediction Errors During Motor Adaptation

Overview
Specialty: Biology
Date: 2011 Mar 23
PMID: 21423711
Citations: 214
Abstract

Voluntary motor commands produce two kinds of consequences. Initially, a sensory consequence is observed in terms of activity in our primary sensory organs (e.g., vision, proprioception). Subsequently, the brain evaluates the sensory feedback and produces a subjective measure of utility or usefulness of the motor commands (e.g., reward). As a result, comparisons between predicted and observed consequences of motor commands produce two forms of prediction error. How do these errors contribute to changes in motor commands? Here, we considered a reach adaptation protocol and found that when high-quality sensory feedback was available, adaptation of motor commands was driven almost exclusively by sensory prediction errors. This form of learning had a distinct signature: as motor commands adapted, the subjects altered their predictions regarding sensory consequences of motor commands, and generalized this learning broadly to neighboring motor commands. In contrast, as the quality of the sensory feedback degraded, adaptation of motor commands became more dependent on reward prediction errors. Reward prediction errors produced comparable changes in the motor commands, but produced no change in the predicted sensory consequences of motor commands, and generalized only locally. Because we found that there was a within-subject correlation between generalization patterns and sensory remapping, it is plausible that during adaptation an individual's relative reliance on sensory vs. reward prediction errors could be inferred. We suggest that while motor commands change because of sensory and reward prediction errors, only sensory prediction errors produce a change in the neural system that predicts sensory consequences of motor commands.
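
To make the contrast in the abstract concrete, the Python sketch below simulates a toy visuomotor rotation with two hypothetical update rules. It is an illustrative assumption, not the authors' model or task: the rotation size, learning rates, exploration noise, and REINFORCE-style reward rule are all invented for the example. The point it shows is the signature described above: learning from sensory prediction errors updates an internal estimate of the perturbation, so the motor command adapts and the predicted sensory consequence remaps, whereas learning from reward prediction errors adjusts the command through rewarded exploration and leaves the sensory prediction unchanged.

import numpy as np

# Toy visuomotor rotation: the cursor is rotated by a constant angle, so the
# reach direction (motor command) must change to bring the cursor to the target.
ROTATION = -30.0   # hypothetical perturbation, in degrees
N_TRIALS = 100

def adapt_with_sensory_errors(lr=0.2):
    """Error-based learning: sensory prediction errors update an internal
    estimate of the rotation, so the command adapts AND the predicted
    sensory consequence of that command is remapped."""
    est_rotation = 0.0                         # forward-model estimate of the rotation
    hand = 0.0
    for _ in range(N_TRIALS):
        hand = -est_rotation                   # aim so the estimated rotation is cancelled
        cursor = hand + ROTATION               # observed sensory feedback
        predicted = hand + est_rotation        # predicted sensory consequence
        spe = cursor - predicted               # sensory prediction error
        est_rotation += lr * spe               # update the forward model
    return hand, est_rotation                  # hand -> ~30 deg, estimate -> ~-30 deg

def adapt_with_reward_errors(lr=0.3, noise=2.0, seed=0):
    """Reward-based learning: only a scalar success signal is available, so the
    command changes through rewarded exploration while the predicted sensory
    consequence (the forward model) is never updated."""
    rng = np.random.default_rng(seed)
    hand = 0.0
    est_rotation = 0.0                         # never updated: no sensory remapping
    baseline = -abs(ROTATION)                  # expected reward before learning
    for _ in range(N_TRIALS):
        probe = hand + rng.normal(0.0, noise)  # motor exploration around the command
        cursor = probe + ROTATION
        reward = -abs(cursor)                  # closer to the target (0 deg) = better
        rpe = reward - baseline                # reward prediction error
        hand += lr * rpe * (probe - hand)      # REINFORCE-style command update
        baseline += 0.5 * rpe                  # track expected reward
    return hand, est_rotation                  # hand -> ~30 deg, estimate stays 0.0

print(adapt_with_sensory_errors())             # command adapts and prediction remaps
print(adapt_with_reward_errors())              # command adapts, prediction unchanged

In both cases the motor command ends up compensating for the rotation, but only the error-based learner changes its prediction of the sensory consequences, mirroring the dissociation reported in the abstract.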

Citing Articles

A neural implementation model of feedback-based motor learning.

Feulner B, Perich M, Miller L, Clopath C, Gallego J. Nat Commun. 2025; 16(1):1805.

PMID: 39979257 PMC: 11842561. DOI: 10.1038/s41467-024-54738-5.


Sensorimotor environment but not task rule reconfigures population dynamics in rhesus monkey posterior parietal cortex.

Guo H, Kuang S, Gail A. Nat Commun. 2025; 16(1):1116.

PMID: 39900579 PMC: 11791165. DOI: 10.1038/s41467-025-56360-5.


Distributed representations of temporally accumulated reward prediction errors in the mouse cortex.

Makino H, Suhaimi A. Sci Adv. 2025; 11(4):eadi4782.

PMID: 39841828 PMC: 11753378. DOI: 10.1126/sciadv.adi4782.


The microgravity environment affects sensorimotor adaptation and its neural correlates.

Tays G, Hupfeld K, McGregor H, Banker L, De Dios Y, Bloomberg J. Cereb Cortex. 2025; 35(2).

PMID: 39756418 PMC: 11795311. DOI: 10.1093/cercor/bhae502.


The Use of Extrinsic Performance Feedback and Reward to Enhance Upper Limb Motor Behavior and Recovery Post-Stroke: A Scoping Review.

Palidis D, Gardiner Z, Stephenson A, Zhang K, Boruff J, Fellows L. Neurorehabil Neural Repair. 2024; 39(2):157-173.

PMID: 39659261 PMC: 11849245. DOI: 10.1177/15459683241298262.

