Newton studied at Cambridge and was professor there from 1669 to 1701, succeeding his teacher Isaac Barrow as Lucasian professor of mathematics. His most important discoveries were made during the two-year period from 1664 to 1666, when the university was closed and he retired to his hometown of Woolsthorpe. At that time he discovered the law of universal gravitation, began to develop the calculus, and discovered that white light is composed of all the colors of the spectrum. These findings enabled him to make fundamental contributions to mathematics, astronomy, and theoretical and experimental physics.
Newton summarized his discoveries in terrestrial and celestial mechanics in his Philosophiae naturalis principia mathematica [mathematical principles of natural philosophy] (1687), one of the greatest milestones in the history of science. In it he showed how his principle of universal gravitation provided an explanation both of falling bodies on the earth and of the motions of planets, comets, and other bodies in the heavens. The first part of the Principia is devoted to dynamics and includes Newton's three famous laws of motion; the second part to fluid motion and other topics; and the third part to the system of the world, i.e., the unification of terrestrial and celestial mechanics under the principle of gravitation and the explanation of Kepler's laws of planetary motion. Although Newton used the calculus to discover his results, he explained them in the Principia by use of older geometric methods.
Newton's discoveries in optics were presented in his Opticks (1704), in which he elaborated his theory that light is composed of corpuscles, or particles. His corpuscular theory dominated optics until the early 19th cent., when it was replaced by the wave theory of light. The two theories were combined in the modern quantum theory. Among his other accomplishments were his construction (1668) of a reflecting telescope and his anticipation of the calculus of variations, founded by Gottfried Leibniz and the Bernoullis. In later years Newton considered mathematics and physics a recreation and turned much of his energy toward alchemy, theology, and history, particularly problems of chronology.
Newton was his university's representative in Parliament (1689-90, 1701-2) and was president of the Royal Society from 1703 until his death. He was made warden of the mint in 1696 and master in 1699, being knighted in 1705 in recognition of his services at the mint as much as for his scientific accomplishments. Although Newton was known as an open and generous person, at various times in his life he became involved in quarrels and controversies. The most notable was his dispute with Leibniz over which of them had first invented calculus; today they are jointly ascribed the honor.
An eight-volume edition of Newton's mathematical papers (ed. by D. H. Whiteside et al., 1967-81) has been published. See biographies by R. S. Westfall (1980), G. E. Christianson (1984), and J. Gleick (2003); J. Herivel, The Background to Newton's Principia (1965); A. Koyré, Newtonian Studies (1965); I. B. Cohen, Introduction to Newton's Principia (1971) and The Newtonian Revolution (1983); M. S. Stayer, ed., Newton's Dream (1988).
Newton's method can also be used to find a minimum or maximum of a differentiable function, by finding a zero of the function's first derivative; see Newton's method as an optimization algorithm.
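As a sketch of that idea: to find a local minimum of a smooth function g, apply the Newton iteration to g' rather than to g itself. The function g(x) = x^4 - 3x^2 + x below is purely illustrative and does not appear in the text:

```python
# Finding a local minimum of g(x) = x**4 - 3*x**2 + x by applying Newton's
# method to its derivative g'. The function g is illustrative, not from the text.
gp = lambda x: 4 * x**3 - 6 * x + 1    # g'(x)
gpp = lambda x: 12 * x**2 - 6          # g''(x)

x = 1.0  # starting guess
for _ in range(30):
    x = x - gp(x) / gpp(x)  # Newton step applied to g'
# x is now a stationary point of g (g'(x) = 0); since g''(x) > 0 there,
# it is a local minimum.
```

Applying Newton's method to g' finds stationary points in general; checking the sign of g'' distinguishes minima from maxima.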
The idea of the method is as follows: one starts with an initial guess which is reasonably close to the true root, then the function is approximated by its tangent line (which can be computed using the tools of calculus), and one computes the x-intercept of this tangent line (which is easily done with elementary algebra). This x-intercept will typically be a better approximation to the function's root than the original guess, and the method can be iterated.
Suppose f : [a, b] → R is a differentiable function defined on the interval [a, b] with values in the real numbers R. The formula for converging on the root can be easily derived. Suppose we have some current approximation x_n. Then we can derive the formula for a better approximation, x_{n+1}, as follows. We know from the definition of the derivative at a given point that it is the slope of the tangent at that point; since the tangent line at (x_n, f(x_n)) crosses the x-axis at x_{n+1}, its slope satisfies

    f'(x_n) = (f(x_n) - 0) / (x_n - x_{n+1})

Here, f' denotes the derivative of the function f. Then by simple algebra we can derive

    x_{n+1} = x_n - f(x_n) / f'(x_n)
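The update rule above translates directly into code. The following is a minimal Python sketch; the function names, tolerance, and stopping rule are illustrative choices, not from the text:

```python
import math

def newton(f, f_prime, x0, tol=1e-12, max_iter=50):
    """Find a root of f starting from x0, using the iteration
    x_{n+1} = x_n - f(x_n) / f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        fpx = f_prime(x)
        if fpx == 0:
            raise ZeroDivisionError("zero derivative: Newton step undefined")
        x_next = x - f(x) / fpx      # the Newton update
        if abs(x_next - x) < tol:    # successive iterates agree: done
            return x_next
        x = x_next
    raise RuntimeError("no convergence within max_iter iterations")

# Usage: solve cos(x) = x, i.e. find a root of f(x) = cos(x) - x.
root = newton(lambda x: math.cos(x) - x, lambda x: -math.sin(x) - 1.0, 1.0)
```

The stopping rule (successive iterates closer than tol) is one common choice; testing |f(x)| against a tolerance is another.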
For example, if one wishes to find the square root of 612, this is equivalent to finding the positive solution of the equation

    x² = 612

The function to use in Newton's method is then f(x) = x² - 612, with derivative f'(x) = 2x.
With an initial guess of x_0 = 10, the sequence given by Newton's method is

    x_1 = 35.6
    x_2 ≈ 26.39551
    x_3 ≈ 24.79064
    x_4 ≈ 24.73869
    x_5 ≈ 24.73863
The sequence approaches the true value √612 = 24.7386337537…; x_5 is already correct to about nine decimal places. The number of correct digits roughly doubles with each step, increasing from 2 (for x_3) to 5 and then about 10, illustrating the quadratic convergence.
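The square-root example can be reproduced in a few lines. The starting guess of 10 comes from the text; the step count and names are illustrative:

```python
# Reproducing the sqrt(612) example: f(x) = x**2 - 612, f'(x) = 2*x,
# starting from the text's initial guess of 10.
def sqrt612_iterates(x0=10.0, steps=5):
    xs = [x0]
    for _ in range(steps):
        x = xs[-1]
        xs.append(x - (x * x - 612.0) / (2.0 * x))  # Newton step
    return xs

iterates = sqrt612_iterates()
# iterates[1] is 35.6; the later entries rapidly approach
# sqrt(612) = 24.7386337537...
```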
Newton's method was described by Isaac Newton in De analysi per aequationes numero terminorum infinitas (written in 1669, published in 1711 by William Jones) and in De metodis fluxionum et serierum infinitarum (written in 1671, translated and published as Method of Fluxions in 1736 by John Colson). However, his description differs substantially from the modern description given above: Newton applies the method only to polynomials. He does not compute the successive approximations x_n, but computes a sequence of polynomials, and only at the end does he arrive at an approximation for the root x. Finally, Newton views the method as purely algebraic and fails to notice the connection with calculus. Isaac Newton probably derived his method from a similar but less precise method by François Viète. The essence of Viète's method can be found in the work of the Persian mathematician Sharaf al-Din al-Tusi, while his successor Jamshīd al-Kāshī used a form of Newton's method to solve x^P - N = 0 to find roots of N (Ypma 1995). A special case of Newton's method for calculating square roots was known much earlier and is often called the Babylonian method.
Newton's method was first published in 1685 in A Treatise of Algebra both Historical and Practical by John Wallis. In 1690, Joseph Raphson published a simplified description in Analysis aequationum universalis. Raphson again viewed Newton's method purely as an algebraic method and restricted its use to polynomials, but he describes the method in terms of the successive approximations xn instead of the more complicated sequence of polynomials used by Newton. Finally, in 1740, Thomas Simpson described Newton's method as an iterative method for solving general nonlinear equations using fluxional calculus, essentially giving the description above. In the same publication, Simpson also gives the generalization to systems of two equations and notes that Newton's method can be used for solving optimization problems by setting the gradient to zero.
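Simpson's generalization to a system of two equations replaces the derivative with the 2×2 Jacobian matrix. The sketch below illustrates the idea; the example system (a circle intersected with a line) is hypothetical and not taken from the text:

```python
# Newton's method for a system of two equations F(x, y) = (0, 0), using the
# 2x2 Jacobian, in the spirit of Simpson's generalization to two equations.
def newton_2d(F, J, x, y, steps=20):
    for _ in range(steps):
        f1, f2 = F(x, y)
        (a, b), (c, d) = J(x, y)   # rows of the Jacobian matrix
        det = a * d - b * c        # solve the 2x2 linear system by hand
        x -= (d * f1 - b * f2) / det
        y -= (a * f2 - c * f1) / det
    return x, y

# Hypothetical example: solve x**2 + y**2 = 4 together with y = x
# (the solutions are x = y = +/- sqrt(2)).
F = lambda x, y: (x * x + y * y - 4.0, y - x)
J = lambda x, y: ((2 * x, 2 * y), (-1.0, 1.0))
sol_x, sol_y = newton_2d(F, J, 1.0, 2.0)
```

For larger systems the hand-inverted 2×2 solve would be replaced by a general linear solver, but the structure of the iteration is the same.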
In 1879, Arthur Cayley, in The Newton-Fourier imaginary problem, was the first to notice the difficulties in generalizing Newton's method to complex roots of polynomials with degree greater than 2 and to complex initial values. This opened the way to the study of the theory of iterations of rational functions.
In general the convergence is quadratic: the error is essentially squared at each step (that is, the number of accurate digits doubles in each step). There are some caveats, however. First, Newton's method requires that the derivative be calculated directly. (If the derivative is approximated by the slope of a line through two points on the function, the secant method results; this can be more efficient depending on how one measures computational effort.) Second, if the initial value is too far from the true zero, Newton's method can fail to converge. Because of this, most practical implementations of Newton's method put an upper limit on the number of iterations and perhaps on the size of the iterates. Third, if the root being sought has multiplicity greater than one, the convergence rate is merely linear (errors reduced by a constant factor at each step) unless special steps are taken.
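The slowdown at a multiple root is easy to demonstrate. For f(x) = (x - 1)², which has a double root at x = 1, each Newton step cuts the error exactly in half; this is a standard illustration, not an example from the text:

```python
# f has a double root at x = 1. The Newton step x - f(x)/f'(x) simplifies
# algebraically to x - (x - 1)/2, so the error is halved each iteration:
# linear convergence (constant factor 1/2), not quadratic.
f = lambda x: (x - 1.0) ** 2
fp = lambda x: 2.0 * (x - 1.0)

x = 2.0
errors = []
for _ in range(10):
    x = x - f(x) / fp(x)
    errors.append(abs(x - 1.0))
# errors == [0.5, 0.25, 0.125, ...]: one bit of accuracy per step, rather
# than a doubling of correct digits.
```

A common remedy for a root of known multiplicity m is the modified step x - m*f(x)/f'(x), which restores quadratic convergence.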
Since the most serious of the problems above is the possibility of a failure of convergence, Press et al. (1992) present a version of Newton's method that starts at the midpoint of an interval in which the root is known to lie and stops the iteration if an iterate is generated that lies outside the interval.
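The safeguarded scheme can be sketched as follows. This is a rough Python rendering in the spirit of the Press et al. approach, not their actual routine:

```python
def safe_newton(f, f_prime, a, b, tol=1e-12, max_iter=100):
    """Newton's method safeguarded by bisection: start at the midpoint of
    [a, b], where f is assumed to change sign, and fall back to a bisection
    step whenever a Newton step would land outside the current bracket."""
    fa = f(a)
    if fa * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    x = (a + b) / 2.0
    for _ in range(max_iter):
        fx = f(x)
        if fx == 0:
            return x
        if fa * fx < 0:          # shrink the bracket, keeping the sign change
            b = x
        else:
            a, fa = x, fx
        fpx = f_prime(x)
        x_new = x - fx / fpx if fpx != 0 else a
        if not (a < x_new < b):
            x_new = (a + b) / 2.0    # bisection fallback
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Usage: root of f(x) = x**2 - 2 bracketed by [0, 2].
root = safe_newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 0.0, 2.0)
```

Because every iterate stays inside a bracket where f changes sign, the method cannot diverge; at worst it degrades to bisection's linear convergence.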
Developers of large-scale computer systems involving root finding tend to prefer the secant method over Newton's method, because using a difference quotient in place of the derivative means that no additional code to compute the derivative has to be maintained. In practice, the advantages of maintaining a smaller code base usually outweigh the superior convergence characteristics of Newton's method.
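A secant-method sketch makes the point concrete: only f itself appears, so there is no derivative code to write or maintain (names and tolerances here are illustrative):

```python
def secant(f, x0, x1, tol=1e-12, max_iter=100):
    """Secant method: Newton's method with the derivative replaced by the
    difference quotient through the last two iterates."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if f1 == f0:
            break                # flat difference quotient: cannot continue
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)  # secant step
        if abs(x2 - x1) < tol:
            return x2
        x0, f0 = x1, f1
        x1, f1 = x2, f(x2)
    return x1

# Usage: the same sqrt(612) problem, without ever writing f'(x) = 2*x.
root = secant(lambda x: x * x - 612.0, 10.0, 20.0)
```

The secant method needs two starting values instead of one, and its convergence order is about 1.618 rather than 2, but each iteration costs only one new function evaluation.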
For some functions, some starting points may enter an infinite cycle, preventing convergence. Let

    f(x) = x³ - 2x + 2

and take x_0 = 0. The first iteration produces x_1 = 1 and the second returns to x_2 = 0, so the sequence alternates between 0 and 1 forever without ever approaching the root, which lies near x ≈ -1.7693.
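The standard textbook instance of such a cycle is f(x) = x³ - 2x + 2 (an assumption here, as the text breaks off before naming its function): starting from x_0 = 0, the iterates alternate between 0 and 1. A short demonstration:

```python
# f(x) = x**3 - 2*x + 2 is a standard example of a Newton 2-cycle.
# At x = 0: f = 2, f' = -2, so the step gives 0 - (2 / -2) = 1.
# At x = 1: f = 1, f' = 1,  so the step gives 1 - (1 / 1)  = 0.
f = lambda x: x**3 - 2.0 * x + 2.0
fp = lambda x: 3.0 * x**2 - 2.0

x = 0.0
orbit = [x]
for _ in range(6):
    x = x - f(x) / fp(x)   # ordinary Newton step
    orbit.append(x)
# orbit == [0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0]; the iterates never approach
# the real root near x = -1.7693.
```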
This consideration holds true for every