Consistent with the predictions from both models, all the animals tested in our study were more likely to choose the same target again after winning than after losing or tying in the previous trial (paired t test, p < 10⁻¹³, for all sessions in each animal; Figure 2A). Moreover, as predicted by the BL model but not by the simple RL model, when the animals lost or tied in a given trial, they were more likely to choose in the next trial what would have been the winning target than the other unchosen target (p < 10⁻⁷, for all
sessions in each animal; Figure 2B), indicating that the animals' choices were also influenced by the hypothetical outcomes of unchosen actions. To quantify the cumulative effects of hypothetical outcomes on the animals' choices, we estimated learning rates for the actual (αA) and hypothetical (αH) outcomes from chosen and unchosen actions separately, using a hybrid learning model that combines the features of both the RL and BL models (see Experimental Procedures). For all three animals, the learning rates for hypothetical outcomes
were significantly greater than zero (two-tailed t test, p < 10⁻²⁷, for all sessions in each animal), although they were significantly smaller than the learning rates for actual outcomes (paired t test, p < 10⁻⁴⁸; see Table S1). According to the Bayesian information criterion (BIC), the hybrid learning model and the BL model performed better than the RL model in more than 95% of the sessions for each animal. Therefore, the animals' behavior was influenced by hypothetical outcomes, albeit less strongly than by actual outcomes. It should be noted that, due to the competitive interaction with the computer opponent, the animals did not increase their reward rate by relying on such learning algorithms. In fact, for two monkeys (Q and S), average payoff decreased significantly as they were more strongly influenced by the actual outcomes of their previous choices (see Figure S2B and Supplemental Experimental Procedures). Average payoff was not significantly related to
the learning rates for hypothetical outcomes (Figure S2C). To test whether and how neurons in different regions of the prefrontal cortex modulate their activity according to the hypothetical outcomes of unchosen actions, we recorded the activity of 308 and 201 neurons in the DLPFC and OFC, respectively, during a computer-simulated rock-paper-scissors game. For each neuron, activity during the 0.5 s feedback period was analyzed by applying a series of nested regression models that included the animal's choice, the actual payoff from the chosen target, and the hypothetical payoff from the unchosen winning target in loss or tie trials as independent variables (see Experimental Procedures).
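The hybrid learning rule described above can be sketched in code. The following is a minimal illustration, not the authors' implementation: the function names, payoff values, and the softmax choice rule are assumptions, and the two learning rates (alpha_a for actual outcomes of the chosen action, alpha_h for hypothetical outcomes of unchosen actions) correspond to the αA and αH estimated in the text.

```python
import numpy as np

def softmax(values, beta=1.0):
    """Convert action values into choice probabilities (assumed choice rule)."""
    z = beta * np.asarray(values, dtype=float)
    z -= z.max()  # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()

def update_values(values, chosen, actual_payoff, hypothetical_payoffs,
                  alpha_a=0.5, alpha_h=0.2):
    """One trial of a hybrid RL/BL update (illustrative sketch).

    The chosen action's value moves toward its actual payoff at rate
    alpha_a; each unchosen action's value moves toward its hypothetical
    payoff at the (typically smaller) rate alpha_h.
    """
    values = np.asarray(values, dtype=float).copy()
    for a in range(len(values)):
        if a == chosen:
            values[a] += alpha_a * (actual_payoff - values[a])
        else:
            values[a] += alpha_h * (hypothetical_payoffs[a] - values[a])
    return values

# Example: three targets (rock, paper, scissors), all values start at 0.
# The animal chooses target 0 and loses (payoff 0); target 1 would have won
# (hypothetical payoff 1); target 2 would have tied (hypothetical payoff 0).
v = update_values(np.zeros(3), chosen=0, actual_payoff=0.0,
                  hypothetical_payoffs=np.array([0.0, 1.0, 0.0]))
p = softmax(v, beta=3.0)
```

After this update, only the hypothetical winning target's value has increased, so the model is more likely to choose it on the next trial, matching the behavioral pattern in Figure 2B. With alpha_h = 0 the rule reduces to a simple RL model that ignores unchosen actions.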