As a prediction method, TD learning takes into account the fact that subsequent predictions are often correlated in some sense. In standard supervised predictive learning, one learns only from actually observed values: a prediction is made, and when the observation becomes available, the prediction is adjusted to better match it. The core idea of TD learning, as elucidated in [1], is that we adjust predictions to match other, more accurate, predictions about the future. This procedure is a form of bootstrapping, as illustrated by the following example (taken from [1]): Suppose you wish to predict the weather for Saturday, and you have some model that predicts Saturday's weather given the weather of each day of the week. In the standard case, you would wait until Saturday and then adjust all your models. However, when it is, say, Friday, you should already have a pretty good idea of what the weather will be on Saturday, and can thus update Saturday's model before Saturday arrives.
Mathematically speaking, in both the standard and the TD approach we try to optimise some cost function related to the error in our predictions of the expectation of some random variable, E[z]. However, while in the standard approach we in some sense assume E[z] = z (the actual observed value), in the TD approach we use a model. In the particular case of reinforcement learning, which is the major application of TD methods, z is the total return and E[z] is given by the Bellman equation for the return.
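To make this concrete, below is a minimal sketch of tabular TD(0) value prediction on a five-state random walk. The environment is an illustrative construction (it does not appear in the text above); the essential line is the update, which moves the prediction V[s] toward another prediction, r + γV(s'), instead of waiting for the actual return z as a supervised or Monte Carlo method would.

```python
import random

# Tabular TD(0) on a five-state random walk: non-terminal states 0..4,
# terminals just off each end, and a reward of 1 only on the right exit.
# V[s] estimates E[z], the expected return from state s.

N_STATES = 5
GAMMA = 1.0   # undiscounted episodic task
ALPHA = 0.1   # learning rate

V = [0.0] * N_STATES

def value(s):
    # Terminal states have value zero by definition.
    return 0.0 if s < 0 or s >= N_STATES else V[s]

for episode in range(1000):
    s = N_STATES // 2                          # start in the middle state
    while 0 <= s < N_STATES:
        s_next = s + random.choice((-1, 1))    # unbiased random walk
        r = 1.0 if s_next == N_STATES else 0.0
        # TD(0): adjust the current prediction toward another, more
        # accurate prediction (the bootstrapped target r + gamma * V[s']).
        V[s] += ALPHA * (r + GAMMA * value(s_next) - V[s])
        s = s_next

print([round(v, 2) for v in V])   # approaches the true values 1/6 .. 5/6
```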
Dopamine cells appear to behave in a similar manner. In one experiment, measurements of dopamine cells were made while training a monkey to associate a stimulus with a juice reward [3]. Initially the dopamine cells increased their firing rates when the juice was delivered, indicating a difference between expected and actual rewards. Over time, this increase in firing propagated back to the earliest reliable stimulus for the reward, and once the monkey was fully trained, the dopamine cells stopped responding to the now-predicted reward. This closely mimics how the TD error function is used in reinforcement learning.
The relationship between the model and potential neurological function has produced research attempting to use TD to explain many aspects of behavioral research [5]. It has also been used to study conditions such as schizophrenia and the consequences of pharmacological manipulations of dopamine on learning [6].
Thus, the reinforcement signal is the TD error, δ = r + γV(s') − V(s): the difference between the ideal prediction and the current prediction.
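The following toy simulation, written in the spirit of the Schultz, Dayan and Montague account, illustrates this correspondence. States are time steps within a trial, a conditioned stimulus (CS) appears partway through, and a reward arrives later; the trial layout and all parameters are illustrative assumptions, not the original experimental design. Over repeated trials the TD error, the quantity identified with phasic dopamine activity, migrates from the reward back to the stimulus.

```python
# TD model of the dopamine experiment: the error delta starts out peaking
# at the reward and, after training, peaks at the transition into the CS
# state, while the (now predicted) reward elicits no error.

T = 10                      # time steps per trial
T_CS, T_REWARD = 2, 8       # CS onset and reward delivery times (assumed)
ALPHA, GAMMA = 0.2, 1.0

V = [0.0] * (T + 1)         # predicted future reward at each time step

def run_trial():
    deltas = [0.0] * T
    for t in range(T):
        r = 1.0 if t == T_REWARD else 0.0
        delta = r + GAMMA * V[t + 1] - V[t]
        # Time steps before the CS carry no information about the trial,
        # so their values are never updated; the error at the appearance
        # of the CS therefore persists even after learning.
        if t >= T_CS:
            V[t] += ALPHA * delta
        deltas[t] = delta
    return deltas

first = run_trial()
for _ in range(500):
    last = run_trial()

print("early trial delta:", [round(d, 2) for d in first])  # peak at reward
print("late  trial delta:", [round(d, 2) for d in last])   # peak at the CS
```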
TD-Lambda, or TD(λ), is a learning algorithm invented by Richard Sutton, based on earlier work on temporal difference learning by Arthur Samuel [2]. This algorithm was famously applied by Gerald Tesauro to create TD-Gammon, a program that learned to play the game of backgammon nearly as well as expert human players [7]. The lambda (λ) parameter refers to the trace-decay parameter, with 0 ≤ λ ≤ 1. Higher settings lead to longer-lasting traces; that is, a larger proportion of the credit from a reward can be given to more distal states and actions when λ is higher, with λ = 1 producing learning that parallels Monte Carlo RL algorithms.
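Below is a sketch of tabular TD(λ) with accumulating eligibility traces on the same kind of random-walk task as above; again, the task is an illustrative assumption, not code from any of the cited papers. The trace vector e is what λ decays: each TD error is applied to all recently visited states in proportion to their traces.

```python
import random

# Tabular TD(lambda) with accumulating eligibility traces on the
# five-state random walk used earlier.

N_STATES, GAMMA, ALPHA, LAM = 5, 1.0, 0.1, 0.8

V = [0.0] * N_STATES

def value(s):
    return 0.0 if s < 0 or s >= N_STATES else V[s]

for episode in range(1000):
    e = [0.0] * N_STATES                       # traces reset each episode
    s = N_STATES // 2
    while 0 <= s < N_STATES:
        s_next = s + random.choice((-1, 1))
        r = 1.0 if s_next == N_STATES else 0.0
        delta = r + GAMMA * value(s_next) - value(s)
        e[s] += 1.0                            # mark current state eligible
        for i in range(N_STATES):
            V[i] += ALPHA * delta * e[i]       # distal states share credit
            e[i] *= GAMMA * LAM                # traces decay by gamma*lambda
        s = s_next

print([round(v, 2) for v in V])
```

Setting LAM = 0 makes each trace vanish after a single step, recovering TD(0); LAM = 1 keeps full credit flowing back along the trajectory, matching the Monte Carlo extreme mentioned above.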
[1] Sutton, R. S. (1988). Learning to predict by the methods of temporal differences. Machine Learning 3:9-44. (A revised version is available on Richard Sutton's publication page.)
[2] Sutton, R. S. and Barto, A. G. (1998). Reinforcement Learning: An Introduction. MIT Press. (Available online.)
[3] Schultz, W., Dayan, P. and Montague, P. R. (1997). A neural substrate of prediction and reward. Science 275:1593-1599.
[4] Schultz, W. (1998). Predictive reward signal of dopamine neurons. Journal of Neurophysiology 80:1-27.
[5] Dayan, P. (2002). Motivated reinforcement learning. In: Advances in Neural Information Processing Systems 14. Cambridge, MA: MIT Press.
[6] Smith, A., Li, M., Becker, S. and Kapur, S. (2006). Dopamine, prediction error, and associative learning: a model-based account. Network: Computation in Neural Systems 17(1):61-84.
[7] Tesauro, G. (1995). Temporal difference learning and TD-Gammon. Communications of the ACM 38(3).
[8] Ghory, I. Reinforcement Learning in Board Games. http://www.cs.bris.ac.uk/Publications/Papers/2000100.pdf