An analog computer (spelt analogue in British English) is a form of computer that uses continuous physical phenomena such as electrical, mechanical, or hydraulic quantities to model the problem being solved.
The similarity between linear mechanical components, such as springs and dashpots, and electrical components, such as capacitors, inductors, and resistors, is mathematically striking: both can be modeled by equations of essentially the same form.
However, the difference between these systems is what makes analog computing useful. Consider a simple mass-spring system: constructing the physical system would require buying the springs and masses, attaching them to each other and to an appropriate anchor, collecting test equipment with the appropriate input range, and finally taking (somewhat difficult) measurements.
The electrical equivalent can be constructed with a few operational amplifiers (op-amps) and some passive linear components; all measurements can be taken directly with an oscilloscope. In the circuit, the simulated 'mass' of the spring system can be changed by adjusting a potentiometer. The electrical system is an analogy to the physical system, hence the name, but it is less expensive to construct, safer, and easier to modify. Also, an electronic circuit can typically operate at higher frequencies than the system being simulated, allowing the simulation to run faster than real time for quicker results.
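As a sketch of this correspondence (all parameter values below are illustrative, not taken from any particular system), the damped mass-spring equation m·x'' + c·x' + k·x = 0 and the series RLC equation L·q'' + R·q' + q/C = 0 can be handed to the same numerical integrator:

```python
# The damped mass-spring equation  m*x'' + c*x' + k*x = 0  and the series
# RLC equation  L*q'' + R*q' + q/C = 0  share one mathematical form, so one
# integrator serves both. All parameter values are illustrative.

def simulate(inertia, damping, stiffness, x0=1.0, dt=1e-4, steps=50000):
    """Integrate inertia*x'' + damping*x' + stiffness*x = 0 (semi-implicit Euler)."""
    x, v = x0, 0.0
    trace = []
    for _ in range(steps):
        v += -(damping * v + stiffness * x) / inertia * dt
        x += v * dt
        trace.append(x)
    return trace

mech = simulate(2.0, 0.5, 8.0)   # m = 2 kg, c = 0.5 N*s/m, k = 8 N/m
elec = simulate(2.0, 0.5, 8.0)   # L = 2 H, R = 0.5 ohm, 1/C = 8 F^-1
```

Because the coefficients map one-to-one, the displacement trace and the charge trace are identical; tuning a potentiometer in the circuit is the analog of swapping a mass on the bench.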
A digital system uses discrete electrical voltage levels as codes for symbols, and the manipulation of these symbols is the digital computer's method of operation. An electronic analog computer instead manipulates physical quantities directly, as waveforms of voltage or current. The precision of an analog readout is limited chiefly by the precision of the readout equipment, generally three or four significant figures. A digital computer's precision must also be finite at any moment, but the precision of its result is limited only by time: it can calculate many digits in parallel, or obtain the same number of digits by carrying out computations in time sequence.
There is an intermediate device, a hybrid computer, in which a digital computer is combined with an analog computer. Hybrid computers are used to obtain a very accurate but not exact 'seed' value, using an analog computer front-end, which is then fed into a digital computer iterative process to achieve the final desired degree of precision. With a three or four digit, highly accurate numerical seed, the total digital computation time necessary to reach the desired precision is dramatically reduced, since many fewer iterations are required. Or, for example, the analog computer might be used to solve a non-analytic differential equation problem for use at some stage of an overall computation (where precision is not very important). In any case, the hybrid computer is usually substantially faster than a digital computer, but can supply a far more precise computation than an analog computer. It is useful for real-time applications requiring such a combination (e.g., a high frequency phased-array radar or a weather system computation).
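The seed-then-refine idea can be sketched in software. Here a hypothetical four-digit "analog" estimate seeds Newton's method, which stands in for the digital iterative stage; the choice of square-root extraction is purely illustrative:

```python
# Sketch of the hybrid idea: a three-to-four digit seed (as an analog
# front-end might supply) sharply cuts the digital iteration count.
# Newton's method for sqrt(2) stands in for the digital refinement stage.

def newton_sqrt(a, seed, tol=1e-12):
    x, iterations = seed, 0
    while abs(x * x - a) > tol:
        x = 0.5 * (x + a / x)   # one digital refinement step
        iterations += 1
    return x, iterations

root, slow = newton_sqrt(2.0, seed=1.0)      # naive seed
root, fast = newton_sqrt(2.0, seed=1.4142)   # four-digit "analog" seed
```

With the four-digit seed, quadratic convergence needs only a couple of refinement steps instead of several, which is the source of the hybrid speedup.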
In analog computers, computations are often performed using properties of electrical resistance, voltage, and so on. For example, a simple two-variable adder can be created from two current sources in parallel. The first value is set by adjusting the first current source (to, say, x milliamperes) and the second by adjusting the second current source (say, y milliamperes). By Kirchhoff's current law, the current flowing from their junction through a resistance to signal ground is the sum of the two, i.e., x + y milliamperes. Other calculations are performed similarly, using operational amplifiers and specially designed circuits for other tasks.
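A minimal numerical sketch of this adder (the resistor value is illustrative and cancels out of the readout):

```python
# Two parallel current sources into one node: Kirchhoff's current law makes
# the current through the grounding resistor the sum of the sources.
# The resistor value is illustrative; it cancels out of the readout.

def current_adder(x_ma, y_ma, r_ohms=1000.0):
    node_current_ma = x_ma + y_ma                    # KCL at the junction
    node_voltage = node_current_ma * 1e-3 * r_ohms   # V = I * R (I in amperes)
    return node_voltage / r_ohms * 1e3               # read the sum back in mA
```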
The use of electrical properties in analog computers means that calculations are normally performed in real time (or faster), at a significant fraction of the speed of light, without the relatively large calculation delays of digital computers. This property allows certain useful calculations that are comparatively "difficult" for digital computers to perform, for example numerical integration. Analog computers can integrate a voltage waveform, usually by means of a capacitor, which accumulates charge over time.
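The integrating action of a capacitor can be sketched numerically. The classic op-amp integrator obeys V_out = -(1/RC) ∫ V_in dt; the R, C, and input values below are illustrative:

```python
# Numerical sketch of an op-amp integrator: the feedback capacitor
# accumulates charge, so the output tracks -(1/RC) * integral(v_in dt).
# Component and input values are illustrative.

R, C, dt = 1.0e5, 1.0e-6, 1.0e-5   # RC = 0.1 s time constant
v_out = 0.0
for _ in range(10000):             # apply a constant 1 V input for 0.1 s
    v_in = 1.0
    v_out += -(v_in / (R * C)) * dt
```

After exactly one time constant of a 1 V input, the ideal output is -1 V, which the accumulated sum reproduces.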
Nonlinear functions and calculations can be constructed to a limited precision (three or four digits) by designing function generators: special circuits combining capacitance, inductance, and resistance with diodes (e.g., Zener diodes) to provide the nonlinearity. Generally, a nonlinear function is simulated by a waveform whose shape varies with voltage (or current); for example, as the voltage increases, the total impedance changes as the diodes successively begin to conduct.
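The diode behavior can be sketched as a piecewise-linear approximation: each hypothetical branch starts conducting above its breakpoint voltage and contributes additional slope. The breakpoints and slopes below are illustrative, tuned to approximate y = x² on [0, 1]:

```python
# Sketch of a diode function generator: each branch conducts only above its
# breakpoint, adding incremental slope, so the summed output is a
# piecewise-linear fit to a nonlinear curve (here y = x**2 on [0, 1]).
# Breakpoints and slopes are illustrative.

BRANCHES = [(0.00, 0.25),   # (breakpoint voltage, incremental slope)
            (0.25, 0.50),
            (0.50, 0.50),
            (0.75, 0.50)]

def function_generator(v):
    out = 0.0
    for breakpoint, slope in BRANCHES:
        if v > breakpoint:              # this "diode" has turned on
            out += slope * (v - breakpoint)
    return out
```

At the breakpoints the fit is exact (0.5 maps to 0.25), and between them the error stays small, illustrating the limited-precision character of such circuits.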
Any physical process which models some computation can be interpreted as an analog computer. Some examples, invented for the purpose of illustrating the concept of analog computation, include using a bundle of spaghetti as a model of sorting numbers; a board, a set of nails, and a rubber band as a model of finding the convex hull of a set of points; and strings tied together as a model of finding the shortest path in a network. These are all described in A.K. Dewdney (see citation below).
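The spaghetti example translates directly into code. This is a sketch of the idea, not Dewdney's original presentation: each "grab the tallest rod" becomes one linear scan of the remaining rods.

```python
# Sketch of spaghetti sort: cut one rod per number, stand the bundle upright,
# and repeatedly remove the tallest rod the flat of your hand touches first.
# Each grab corresponds to one linear scan of the remaining rods.

def spaghetti_sort(values):
    rods = list(values)          # the bundle of cut rods
    ordered = []
    while rods:
        tallest = max(rods)      # the rod the hand touches first
        rods.remove(tallest)
        ordered.append(tallest)
    return ordered               # tallest first, as pulled from the bundle
```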
Analog computers often have a complicated framework, but they have, at their core, a set of key components which perform the calculations, which the operator manipulates through the computer's framework.
Key hydraulic components might include pipes, valves, or towers; mechanical components might include gears and levers; key electrical components might include precision resistors and capacitors, operational amplifiers, multipliers, potentiometers, and fixed-function generators.
The core mathematical operations used in an electrical analog computer are summation, inversion, multiplication, division, exponentiation, logarithm, and integration with respect to time.
Differentiation with respect to time is not frequently used. It corresponds in the frequency domain to a high-pass filter, which means that high-frequency noise is amplified.
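This noise amplification is easy to demonstrate: a ripple of amplitude ε at angular frequency ω differentiates to amplitude εω, so high-frequency noise is amplified by its own frequency. The values below are illustrative:

```python
# Why analog differentiation is avoided: a tiny ripple eps*sin(omega*t)
# riding on sin(t) differentiates to eps*omega*cos(omega*t), i.e. the noise
# is amplified by its own frequency. All values are illustrative.

import math

eps, omega, dt = 1e-3, 1000.0, 1e-6

def signal(t):
    return math.sin(t) + eps * math.sin(omega * t)   # signal plus HF ripple

def derivative(t):
    return (signal(t + dt) - signal(t - dt)) / (2 * dt)  # central difference

# Input ripple amplitude is 0.001 of the signal; in the derivative the
# ripple term is roughly eps*omega = 1.0, as large as the wanted cos(t) term.
ripple_gain = (derivative(0.0) - math.cos(0.0)) / eps
```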
While digital computation is extremely popular, research in analog computation is being done by a handful of people worldwide. In the United States, Jonathan Mills from Indiana University, Bloomington, Indiana has been working on research using Extended Analog Computers. At the Harvard Robotics Laboratory, analog computation is a research topic.
Analog computers have been constructed and practically used in many forms. Analog synthesizers, for example, can be viewed as a form of analog computer, and their technology was originally based on electronic analog computer technology.
Computer theorists often refer to idealized analog computers as real computers, because they operate on the set of real numbers. Digital computers, by contrast, must first quantize the signal into a finite number of values, and so can work only with rational numbers (and approximations of irrational numbers).
These idealized analog computers may in theory solve problems that are intractable on digital computers; however, as mentioned above, real analog computers are far from attaining this ideal, largely because of the difficulty of minimizing noise. Moreover, given unlimited time and memory, the (ideal) digital computer may also solve real number problems.