electricity

[ih-lek-tris-i-tee, ee-lek-]
electricity, class of phenomena arising from the existence of charge. The basic unit of charge is that on the proton or electron—the proton's charge is designated as positive while the electron's is negative. There are three basic systems of units used to measure electrical quantities, the most common being the one in which the ampere is the unit of current, the coulomb is the unit of charge, the volt is the unit of electromotive force, and the ohm is the unit of resistance, reactance, or impedance (see electric and magnetic units).

Properties of Electric Charges

According to modern theory, most elementary particles of matter possess charge, either positive or negative. Two particles with like charges, both positive or both negative, repel each other, while two particles with unlike charges are attracted (see Coulomb's law). The electric force between two charged particles is much greater than the gravitational force between the particles. The negatively charged electrons in an atom are held near the nucleus because of their attraction for the positively charged protons in the nucleus.

If the numbers of electrons and protons are equal, the atom is electrically neutral; if there is an excess of electrons, it is a negative ion; and if there is a deficiency of electrons, it is a positive ion. Under various circumstances, the number of electrons associated with a given atom may change; chemical bonding results from such changes, with electrons being shared by more than one atom in covalent bonds or being transferred from one atom to another in ionic bonds (see chemical bond). Thus many of the bulk properties of matter ultimately are due to the electric forces among the particles of which the substance is composed. Materials differ in their ability to allow charge to flow through them (see conduction; insulation); materials that allow charge to pass easily are called conductors, while those that do not are called insulators, or dielectrics. A third class of materials, called semiconductors, conducts charge under some conditions but not under others.

Properties of Charges at Rest

Electrostatics is the study of charges, or charged bodies, at rest. When positive or negative charge builds up in fixed positions on objects, certain phenomena can be observed that are collectively referred to as static electricity. The charge can be built up by rubbing certain objects together, such as silk and glass or rubber and fur; the friction between the objects causes electrons to be transferred from one to the other—from a glass rod to a silk cloth or from fur to a rubber rod—with the result that the object that has lost the electrons has a positive charge and the object that has gained them has an equal negative charge. An electrically neutral object can be charged by bringing it in contact with a charged object: if the charged object is positive, the neutral object gains a positive charge when some of its electrons are attracted onto the positive object; if the charged object is negative, the neutral object gains a negative charge when some electrons are attracted onto it from the negative object.

A neutral conductor may be charged by induction using the following procedure. A charged object is placed near but not in contact with the conductor. If the object is positively charged, electrons in the conductor are drawn to the side of the conductor near the object. If the object is negatively charged, electrons are drawn to the side of the conductor away from the object. If the conductor is then connected to a reservoir of electrons, such as the ground, electrons will flow onto or off of the conductor with the result that it acquires a charge opposite to that of the charged object brought near it.

See also pole, in electricity and magnetism.

Properties of Charges in Motion

Electrodynamics is the study of charges in motion. A flow of electric charge constitutes an electric current. Historically, the direction of current was described in terms of the motion of imaginary positive charges; this convention is still used by many scientists, although it is directly opposite to the direction of electron flow, which is now known to be the basis of electric current in solids. Current considered to be composed of imaginary positive charges is often called conventional current. In order for a current to exist in a conductor, there must be an electromotive force (emf), or potential difference, between the conductor's ends. An electric cell, a battery of cells, and a generator are all sources of electromotive force; any such source with an external conductor connected from one of the source's two terminals to the other constitutes an electric circuit. If the source is a battery, the current is in one direction only and is called direct current (DC). If the source is a generator without a commutator, the current direction reverses twice during each rotation of the armature, passing first in one direction and then in the other; such current is called alternating current (AC). The number of times alternating current makes a double reversal of direction each second is called the frequency of the current; the frequency of ordinary household current in the U.S. is 60 cycles per sec (60 Hz), and electric devices must be designed to operate at this frequency.

In a solid the current consists not of a few electrons moving rapidly but of many electrons moving slowly; although this drift of electrons is slow, the impulse that causes it when the circuit is completed moves through the circuit at nearly the speed of light. The movement of electrons in a current is not steady; each electron moves in a series of stops and starts. In a direct current, the electrons are spread evenly through the conductor; in an alternating current, the electrons tend to congregate along the surface of the conductor. In liquids and gases, the current carriers are not only electrons but also positive and negative ions.

History of Electricity

From the writings of Thales of Miletus it appears that Westerners knew as long ago as 600 B.C. that amber becomes charged by rubbing. There was little real progress until the English scientist William Gilbert in 1600 described the electrification of many substances and coined the term electricity from the Greek word for amber. As a result, Gilbert is called the father of modern electricity. In 1660 Otto von Guericke invented a crude machine for producing static electricity. It was a ball of sulfur, rotated by a crank with one hand and rubbed with the other. Successors, such as Francis Hauksbee, made improvements that provided experimenters with a ready source of static electricity. Today's highly developed descendant of these early machines is the Van de Graaff generator, which is sometimes used as a particle accelerator. Robert Boyle realized that attraction and repulsion were mutual and that electric force was transmitted through a vacuum (c.1675). Stephen Gray distinguished between conductors and nonconductors (1729). C. F. Du Fay recognized two kinds of electricity, which Benjamin Franklin and Ebenezer Kinnersley of Philadelphia later named positive and negative.

The Leyden Jar and the Quantitative Era

Progress quickened after the Leyden jar was invented in 1745 by Pieter van Musschenbroek. The Leyden jar stored static electricity, which could be discharged all at once. In 1747 William Watson discharged a Leyden jar through a circuit, and comprehension of the current and circuit started a new field of experimentation. Henry Cavendish, by measuring the conductivity of materials (he compared the simultaneous shocks he received by discharging Leyden jars through the materials), and Charles A. Coulomb, by expressing mathematically the attraction of electrified bodies, began the quantitative study of electricity.

A new interest in current began with the invention of the battery. Luigi Galvani had noticed (1786) that a discharge of static electricity made a frog's leg jerk. Subsequent experimentation produced what was a simple electric cell using the fluids of the leg as an electrolyte and the muscle as a circuit and indicator. Galvani thought the leg supplied electricity, but Alessandro Volta thought otherwise, and he built the voltaic pile, an early type of battery, as proof. Continuous current from batteries smoothed the way for the discovery of G. S. Ohm's law (pub. 1827), relating current, voltage (electromotive force), and resistance (see Ohm's law), and of J. P. Joule's law of electrical heating (pub. 1841). Ohm's law and the rules discovered later by G. R. Kirchhoff regarding the sum of the currents and the sum of the voltages in a circuit (see Kirchhoff's laws) are the basic means of making circuit calculations.
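Ohm's law and Kirchhoff's rules lend themselves to a quick numerical illustration. The following Python sketch, with hypothetical component values, computes the current in a simple series circuit and checks Kirchhoff's voltage law:

```python
def current(voltage, resistance):
    """Ohm's law: I = V / R."""
    return voltage / resistance

# A 12 V source driving two resistors in series (illustrative values).
V = 12.0            # volts
R1, R2 = 4.0, 2.0   # ohms

I = current(V, R1 + R2)    # series resistances add: 12 / 6 = 2 A
V1, V2 = I * R1, I * R2    # voltage drop across each resistor

# Kirchhoff's voltage law: the drops around the loop sum to the source emf.
assert abs((V1 + V2) - V) < 1e-9
```

The same two rules, applied systematically to every loop and junction, suffice for calculations on far more elaborate circuits.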

Era of Electromagnetism

In 1819 Hans Christian Oersted discovered that a magnetic field surrounds a current-carrying wire. Within two years André Marie Ampère had put several electromagnetic laws into mathematical form, D. F. Arago had invented the electromagnet, and Michael Faraday had devised a crude form of electric motor. Practical application of a motor had to wait 10 years, however, until Faraday (and earlier, independently, Joseph Henry) invented the electric generator with which to power the motor. A year after Faraday's laboratory approximation of the generator, Hippolyte Pixii constructed a hand-driven model. From then on engineers took over from the scientists, and a slow development followed; the first power stations were built 50 years later (see power, electric).

In 1873 James Clerk Maxwell had started a different path of development with equations that described the electromagnetic field, and he predicted the existence of electromagnetic waves traveling with the speed of light. Heinrich R. Hertz confirmed this prediction experimentally, and Marconi first made use of these waves in developing radio (1895). John Ambrose Fleming invented (1904) the diode rectifier vacuum tube as a detector for the Marconi radio. Three years later Lee De Forest made the diode into an amplifier by adding a third electrode, creating the triode, and electronics had begun. Theoretical understanding became more complete in 1897 with the discovery of the electron by J. J. Thomson. In 1910-11 Ernest R. Rutherford and his assistants learned the distribution of charge within the atom. Robert Millikan measured the charge on a single electron by 1913.


Phenomenon associated with stationary or moving electric charges. The word comes from the Greek elektron (“amber”); the Greeks discovered that amber rubbed with fur attracted light objects such as feathers. Such effects due to stationary charges, or static electricity, were the first electrical phenomena to be studied. Not until the early 19th century were static electricity and electric current shown to be aspects of the same phenomenon. The discovery of the electron, which carries a charge designated as negative, showed that the various manifestations of electricity are the result of the accumulation or motion of numbers of electrons. The invention of the incandescent lightbulb (1879) and the construction of the first central power station (1881) by Thomas Alva Edison led to the rapid introduction of electric power into factories and homes. See also James Clerk Maxwell.


Electricity (from the Greek word ήλεκτρον, (elektron), meaning amber, and finally from New Latin ēlectricus, "amber-like") is a general term that encompasses a variety of phenomena resulting from the presence and flow of electric charge. These include many easily recognizable phenomena such as lightning and static electricity, but in addition, less familiar concepts such as the electromagnetic field and electromagnetic induction.

In general usage, the word 'electricity' is adequate to refer to a number of physical effects. In scientific usage, however, the term is vague, and these related but distinct concepts are better identified by more precise terms such as electric charge, electric current, electric field, electric potential, and electromagnetism, each treated in turn below.

Electricity has been studied since antiquity, though scientific advances were not forthcoming until the seventeenth and eighteenth centuries. It would not be until the late nineteenth century, however, that engineers were able to put electricity to industrial and residential use. This period witnessed a rapid expansion in the development of electrical technology. Electricity's extraordinary versatility as a source of energy means it can be put to an almost limitless set of applications which include transport, heating, lighting, communications, and computation. The backbone of modern industrial society is, and for the foreseeable future can be expected to remain, the use of electrical power.

History

Long before any knowledge of electricity existed people were aware of shocks from electric fishes. Ancient Egyptian texts dating from 2750 BC referred to these fish as the "Thunderer of the Nile", and described them as the "protectors" of all other fish. They were again reported millennia later by ancient Greek, Roman and Arabic naturalists and physicians. Several ancient writers, such as Pliny the Elder and Scribonius Largus, attested to the numbing effect of electric shocks delivered by catfish and torpedo rays, and knew that such shocks could travel along conducting objects. Patients suffering from ailments such as gout or headache were directed to touch electric fish in the hope that the powerful jolt might cure them. Possibly the earliest approach to identifying lightning with electricity from any other source is attributable to the Arabs, who before the 15th century applied the Arabic word for lightning (raad) to the electric ray.

That certain objects such as rods of amber could be rubbed with cat's fur and attract light objects like feathers was known to ancient cultures around the Mediterranean. Thales of Miletus made a series of observations on static electricity around 600 BC, from which he believed that friction rendered amber magnetic, in contrast to minerals such as magnetite, which needed no rubbing. Thales was incorrect in believing the attraction was due to a magnetic effect, but later science would prove a link between magnetism and electricity. According to a controversial theory, the Parthians may have had knowledge of electroplating, based on the 1936 discovery of the Baghdad Battery, which resembles a galvanic cell, though it is uncertain whether the artefact was electrical in nature.

Electricity would remain little more than an intellectual curiosity for millennia until 1600, when the English physician William Gilbert made a careful study of electricity and magnetism, distinguishing the lodestone effect from static electricity produced by rubbing amber. He coined the New Latin word electricus ("of amber" or "like amber", from ήλεκτρον [elektron], the Greek word for "amber") to refer to the property of attracting small objects after being rubbed. This association gave rise to the English words "electric" and "electricity", which made their first appearance in print in Thomas Browne's Pseudodoxia Epidemica of 1646.

Further work was conducted by Otto von Guericke, Robert Boyle, Stephen Gray and C. F. du Fay. In the 18th century, Benjamin Franklin conducted extensive research in electricity, selling his possessions to fund his work. In June 1752 he is reputed to have attached a metal key to the bottom of a dampened kite string and flown the kite in a storm-threatened sky. He observed a succession of sparks jumping from the key to the back of his hand, showing that lightning was indeed electrical in nature.

In 1791 Luigi Galvani published his discovery of bioelectricity, demonstrating that electricity was the medium by which nerve cells passed signals to the muscles. Alessandro Volta's battery, or voltaic pile, of 1800, made from alternating layers of zinc and copper, provided scientists with a more reliable source of electrical energy than the electrostatic machines previously used. The recognition of electromagnetism, the unity of electric and magnetic phenomena, is due to Hans Christian Ørsted and André-Marie Ampère in 1819-1820; Michael Faraday invented the electric motor in 1821, and Georg Ohm mathematically analysed the electrical circuit in 1827.

While the early 19th century had seen rapid progress in electrical science, the late 19th century would see the greatest progress in electrical engineering. Through such people as Nikola Tesla, Thomas Edison, George Westinghouse, Ernst Werner von Siemens, Alexander Graham Bell and Lord Kelvin, electricity was turned from a scientific curiosity into an essential tool for modern life, becoming a driving force for the Second Industrial Revolution.

Concepts

Electric charge

Electric charge is a property of certain subatomic particles which gives rise to, and interacts with, the electromagnetic force, one of the four fundamental forces of nature. Charge originates in the atom, in which its most familiar carriers are the electron and proton. It is a conserved quantity, that is, the net charge within an isolated system will always remain constant regardless of any changes taking place within that system. Within the system, charge may be transferred between bodies, either by direct contact, or by passing along a conducting material, such as a wire. The informal term static electricity refers to the net presence (or 'imbalance') of charge on a body, usually caused when dissimilar materials are rubbed together, transferring charge from one to the other.

The presence of charge gives rise to the electromagnetic force: charges exert a force on each other, an effect that was known, though not understood, in antiquity. A lightweight ball suspended from a string can be charged by touching it with a glass rod that has itself been charged by rubbing with a cloth. If a similar ball is charged by the same glass rod, it is found to repel the first: the charge acts to force the two balls apart. Two balls that are charged with a rubbed amber rod also repel each other. However, if one ball is charged by the glass rod, and the other by an amber rod, the two balls are found to attract each other. These phenomena were investigated in the late eighteenth century by Charles-Augustin de Coulomb, who deduced that charge manifests itself in two opposing forms, leading to the well-known axiom: like-charged objects repel and opposite-charged objects attract.

The force acts on the charged particles themselves, hence charge has a tendency to spread itself as evenly as possible over a conducting surface. The magnitude of the electromagnetic force, whether attractive or repulsive, is given by Coulomb's law, which relates the force to the product of the charges and has an inverse-square relation to the distance between them. The electromagnetic force is very strong, second only in strength to the strong interaction, but unlike that force it operates over all distances. In comparison with the much weaker gravitational force, the electromagnetic force pushing two electrons apart is 10⁴² times that of the gravitational attraction pulling them together.
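The figure of roughly 10⁴² can be checked directly from Coulomb's law and Newton's law of gravitation, since the inverse-square distance factors cancel from the ratio. A sketch using standard values of the physical constants:

```python
# Standard values of the physical constants (SI units).
k  = 8.9875e9     # Coulomb constant, N*m^2/C^2
G  = 6.674e-11    # gravitational constant, N*m^2/kg^2
e  = 1.6022e-19   # elementary charge, C
me = 9.109e-31    # electron mass, kg

# F_coulomb / F_gravity = k e^2 / (G me^2); the 1/r^2 factors cancel.
ratio = (k * e**2) / (G * me**2)
print(f"{ratio:.2e}")   # on the order of 10^42
```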

The charge on electrons and protons is opposite in sign, hence an amount of charge may be expressed as being either negative or positive. By convention, the charge carried by electrons is deemed negative, and that by protons positive, a custom that originated with the work of Benjamin Franklin. The amount of charge is usually given the symbol Q and expressed in coulombs; each electron carries the same charge of approximately −1.6022×10⁻¹⁹ coulomb. The proton has a charge that is equal and opposite, and thus +1.6022×10⁻¹⁹ coulomb. Charge is possessed not just by matter, but also by antimatter, each antiparticle bearing an equal and opposite charge to its corresponding particle.

Charge can be measured by a number of means, an early instrument being the gold-leaf electroscope, which, although still in use for classroom demonstrations, has been superseded by the electronic electrometer.

Electric current

The movement of electric charge is known as an electric current, the intensity of which is usually measured in amperes. Current can consist of any moving charged particles; most commonly these are electrons, but any charge in motion constitutes a current.

By historical convention, a positive current is defined as having the same direction of flow as any positive charge it contains, or to flow from the most positive part of a circuit to the most negative part. Current defined in this manner is called conventional current. The motion of negatively-charged electrons around an electric circuit, one of the most familiar forms of current, is thus deemed positive in the opposite direction to that of the electrons. However, depending on the conditions, an electric current can consist of a flow of charged particles in either direction, or even in both directions at once. The positive-to-negative convention is widely used to simplify this situation. If another definition is used—for example, "electron current"—it needs to be explicitly stated.

The process by which electric current passes through a material is termed electrical conduction, and its nature varies with that of the charged particles and the material through which they are travelling. Examples of electric currents include metallic conduction, where electrons flow through a conductor such as metal, and electrolysis, where ions (charged atoms) flow through liquids. While the particles themselves can move quite slowly, sometimes with an average drift velocity only fractions of a millimetre per second, the electric field that drives them itself propagates at close to the speed of light, enabling electrical signals to pass rapidly along wires.
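The contrast between the slow drift of the particles and the fast propagation of the signal can be made concrete with the standard relation v = I / (nAq). A sketch using typical textbook values for a copper wire (illustrative, not taken from the text above):

```python
# Drift velocity from v = I / (n * A * q); illustrative textbook values.
I = 1.0          # current, amperes
A = 1.0e-6       # wire cross-section, m^2 (a 1 mm^2 wire)
n = 8.5e28       # free electrons per m^3 in copper
q = 1.6022e-19   # electron charge magnitude, C

v_drift = I / (n * A * q)             # metres per second
print(f"{v_drift * 1000:.3f} mm/s")   # a small fraction of a millimetre per second
```

Even for a full ampere, the electrons themselves creep along at well under a tenth of a millimetre per second, while the driving field travels near light speed.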

Current causes several observable effects, which historically were the means of recognising its presence. That water could be decomposed by the current from a voltaic pile was discovered by Nicholson and Carlisle in 1800, a process now known as electrolysis. Their work was greatly expanded upon by Michael Faraday in 1833. Current through a resistance causes localised heating, an effect James Prescott Joule studied mathematically in 1840. One of the most important discoveries relating to current was made accidentally by Hans Christian Ørsted in 1820, when, while preparing a lecture, he witnessed the current in a wire disturbing the needle of a magnetic compass. He had discovered electromagnetism, a fundamental interaction between electricity and magnetics.

In engineering or household applications, current is often described as being either direct current (DC) or alternating current (AC). These terms refer to how the current varies in time. Direct current, as produced for example by a battery and required by most electronic devices, is a unidirectional flow from the positive part of a circuit to the negative. If, as is most common, this flow is carried by electrons, they will be travelling in the opposite direction. Alternating current is any current that reverses direction repeatedly; almost always this takes the form of a sinusoidal wave. Alternating current thus pulses back and forth within a conductor without the charge moving any net distance over time. The time-averaged value of an alternating current is zero, but it delivers energy first in one direction, and then in the reverse. Alternating current is affected by electrical properties that are not observed under steady-state direct current, such as inductance and capacitance; these properties can, however, also become important in direct-current circuitry subjected to transients, such as when it is first energised.
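The claim that an alternating current averages to zero over time can be verified numerically. The sketch below samples one full cycle of a 60 Hz sinusoid (the 10 A peak is an arbitrary illustrative value) and also computes its root-mean-square value, the figure by which AC magnitudes are usually quoted:

```python
import math

peak = 10.0    # amperes, illustrative
freq = 60.0    # hertz
N = 10000      # samples over one full period

period = 1.0 / freq
dt = period / N
# i(t) = peak * sin(2*pi*f*t), sampled N times across one period.
samples = [peak * math.sin(2 * math.pi * freq * k * dt) for k in range(N)]

mean = sum(samples) / N                            # time average
rms = math.sqrt(sum(s * s for s in samples) / N)   # root mean square

print(f"mean ~ {mean:.2e} A, rms ~ {rms:.3f} A")   # mean ~ 0, rms ~ peak/sqrt(2)
```

The mean comes out at zero (to floating-point noise), while the RMS value is peak/√2, about 7.07 A here.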

Electric field

The concept of the electric field was introduced by Michael Faraday. An electric field is created by a charged body in the space that surrounds it, and results in a force exerted on any other charges placed within the field. The electric field acts between two charges in a similar manner to the way that the gravitational field acts between two masses, and like it, extends towards infinity and shows an inverse square relationship with distance. However, there is an important difference. Gravity always acts in attraction, drawing two masses together, while the electric field can result in either attraction or repulsion. Since large bodies such as planets generally carry no net charge, the electric field at a distance is usually zero. Thus gravity is the dominant force at distance in the universe, despite being much weaker.

An electric field generally varies in space, and its strength at any one point is defined as the force (per unit charge) that would be felt by a stationary, negligible charge if placed at that point. The conceptual charge, termed a 'test charge', must be vanishingly small to prevent its own electric field disturbing the main field, and must also be stationary to prevent the effect of magnetic fields. As the electric field is defined in terms of force, and force is a vector, it follows that an electric field is also a vector, having both magnitude and direction. Specifically, it is a vector field.
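Because the field is a vector field and contributions from separate charges add by superposition, it can be computed numerically. A minimal sketch, with hypothetical charges and positions, that evaluates the field of two point charges by summing their Coulomb contributions:

```python
import math

k = 8.9875e9   # Coulomb constant, N*m^2/C^2

def field_at(point, charges):
    """Vector sum of E = k*q/r^2 contributions from (q, (x, y)) charges."""
    ex = ey = 0.0
    for q, (cx, cy) in charges:
        dx, dy = point[0] - cx, point[1] - cy
        r = math.hypot(dx, dy)
        mag = k * q / r**2      # signed magnitude along the unit vector
        ex += mag * dx / r
        ey += mag * dy / r
    return ex, ey

# A dipole: +1 nC at (-1, 0) and -1 nC at (+1, 0). At the midpoint both
# contributions point the same way, from the positive toward the negative charge.
charges = [(1e-9, (-1.0, 0.0)), (-1e-9, (1.0, 0.0))]
Ex, Ey = field_at((0.0, 0.0), charges)
```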

The study of electric fields created by stationary charges is called electrostatics. The field may be visualised by a set of imaginary lines whose direction at any point is the same as that of the field. This concept was introduced by Faraday, whose term 'lines of force' still sometimes sees use. The field lines are the paths that a positive point charge would follow as it was forced to move within the field; they are, however, an imaginary concept with no physical existence, and the field permeates all the intervening space between the lines. Field lines emanating from stationary charges have several key properties: first, they originate at positive charges and terminate at negative charges; second, they must enter any good conductor at right angles; and third, they may never cross nor close in on themselves.

The principles of electrostatics are important when designing items of high-voltage equipment. There is a finite limit to the electric field strength that may be withstood by any medium. Beyond this point, electrical breakdown occurs and an electric arc causes flashover between the charged parts. Air, for example, tends to arc at electric field strengths which exceed 30 kV per centimetre across small gaps. Over larger gaps, its breakdown strength is weaker, perhaps 1 kV per centimetre. The most visible natural occurrence of this is lightning, caused when charge becomes separated in the clouds by rising columns of air, raising the electric field in the air to more than it can withstand. The voltage of a large lightning cloud may be as high as 100 MV and have discharge energies as great as 250 kWh.

The field strength is greatly affected by nearby conducting objects, and it is particularly intense when it is forced to curve around sharply pointed objects. This principle is exploited in the lightning conductor, the sharp spike of which acts to encourage the lightning stroke to develop there, rather than to the building it serves to protect.

An electric field is zero inside a conductor. This is because the net charge on a conductor resides only on its surface. External electrostatic fields are always perpendicular to the conductor's surface; otherwise the field would produce a force on the charge carriers inside the conductor, and so the field would not be static as assumed.

Electric potential

The concept of electric potential is closely linked to that of the electric field. A small charge placed within an electric field experiences a force, and to have brought that charge to that point against the force requires work. The electric potential at any point is defined as the energy required to bring a unit test charge from an infinite distance slowly to that point. It is usually measured in volts, and one volt is the potential for which one joule of work must be expended to bring a charge of one coulomb from infinity. This definition of potential, while formal, has little practical application; a more useful concept is that of electric potential difference, the energy required to move a unit charge between two specified points. An electric field has the special property that it is conservative, which means that the path taken by the test charge is irrelevant: all paths between two specified points expend the same energy, and thus a unique value for potential difference may be stated. The volt is so strongly identified as the unit of choice for measurement and description of electric potential difference that the term voltage sees greater everyday usage.
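The definition of the volt as one joule per coulomb makes potential-difference calculations elementary: the work needed to move a charge is simply the charge multiplied by the potential difference it crosses. A minimal sketch:

```python
def work(charge_coulombs, potential_volts):
    """W = q * V: one coulomb moved through one volt takes one joule."""
    return charge_coulombs * potential_volts

assert work(1.0, 1.0) == 1.0    # the definition of the volt
print(work(1.6022e-19, 1.0))    # one electron through 1 V: ~1.6e-19 J (one electronvolt)
```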

For practical purposes, it is useful to define a common reference point to which potentials may be expressed and compared. While this could be at infinity, a much more useful reference is the Earth itself, which is assumed to be at the same potential everywhere. This reference point naturally takes the name earth or ground. Earth is assumed to be an infinite source of equal amounts of positive and negative charge, and is therefore electrically uncharged – and unchargeable.

Electric potential is a scalar quantity, that is, it has only magnitude and not direction. It may be viewed as analogous to temperature: as there is a certain temperature at every point in space, and the temperature gradient indicates the direction and magnitude of the driving force behind heat flow, similarly, there is an electric potential at every point in space, and its gradient, or field strength, indicates the direction and magnitude of the driving force behind charge movement. Equally, electric potential may be seen as analogous to height: just as a released object will fall through a difference in heights caused by a gravitational field, so a charge will 'fall' across the voltage caused by an electric field.

The electric field was formally defined as the force exerted per unit charge, but the concept of potential allows for a more useful and equivalent definition: the electric field is the local gradient of the electric potential. Usually expressed in volts per metre, the vector direction of the field is the line of greatest gradient of potential.
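This equivalence between the two definitions can be checked numerically: for a point charge the potential falls off as 1/r, and its negative gradient reproduces the inverse-square Coulomb field. A sketch with an illustrative 1 nC charge:

```python
k_q = 8.9875e9 * 1e-9   # Coulomb constant times an illustrative 1 nC charge

def V(r):
    """Potential of a point charge: V(r) = k*q / r."""
    return k_q / r

r, h = 0.5, 1e-6
E_gradient = -(V(r + h) - V(r - h)) / (2 * h)   # central finite difference
E_analytic = k_q / r**2                          # Coulomb's inverse-square field

print(E_gradient, E_analytic)   # the two agree closely
```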

Electromagnetism

Ørsted's discovery in 1820 that a magnetic field existed around all sides of a wire carrying an electric current indicated that there was a direct relationship between electricity and magnetism. Moreover, the interaction seemed different from gravitational and electrostatic forces, the two forces of nature then known. The force on the compass needle did not direct it to or away from the current-carrying wire, but acted at right angles to it. Ørsted's slightly obscure words were that "the electric conflict acts in a revolving manner." The force also depended on the direction of the current, for if the flow was reversed, then the force did too.

Ørsted did not fully understand his discovery, but he observed the effect was reciprocal: a current exerts a force on a magnet, and a magnetic field exerts a force on a current. The phenomenon was further investigated by Ampère, who discovered that two parallel current-carrying wires exerted a force upon each other: two wires conducting currents in the same direction are attracted to each other, while wires containing currents in opposite directions are forced apart. The interaction is mediated by the magnetic field each current produces and forms the basis for the international definition of the ampere.
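The classical definition of the ampere rests on the formula for the force per unit length between two long parallel wires, F/L = μ₀I₁I₂/(2πd). A sketch confirming the defining figure of 2×10⁻⁷ newtons per metre for two 1 A currents one metre apart:

```python
import math

mu0 = 4 * math.pi * 1e-7   # permeability of free space, N/A^2

def force_per_metre(i1, i2, d):
    """F/L = mu0 * I1 * I2 / (2 * pi * d) between long parallel wires."""
    return mu0 * i1 * i2 / (2 * math.pi * d)

f = force_per_metre(1.0, 1.0, 1.0)
print(f)   # ~2e-7 newtons per metre of wire
```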

This relationship between magnetic fields and currents is extremely important, for it led to Michael Faraday's invention of the electric motor in 1821. Faraday's homopolar motor consisted of a permanent magnet sitting in a pool of mercury. A current was allowed through a wire suspended from a pivot above the magnet and dipped into the mercury. The magnet exerted a tangential force on the wire, making it circle around the magnet for as long as the current was maintained.

Experimentation by Faraday in 1831 revealed that a wire moving perpendicular to a magnetic field developed a potential difference between its ends. Further analysis of this process, known as electromagnetic induction, enabled him to state the principle, now known as Faraday's law of induction, that the potential difference induced in a closed circuit is proportional to the rate of change of magnetic flux through the loop. Exploitation of this discovery enabled him to invent the first electrical generator in 1831, in which he converted the mechanical energy of a rotating copper disc to electrical energy. Faraday's disc was inefficient and of no use as a practical generator, but it showed the possibility of generating electric power using magnetism, a possibility that would be taken up by those that followed on from his work.
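Faraday's law can be stated compactly as emf = −N dΦ/dt for a coil of N turns. The sketch below, a minimal illustration with arbitrary example values (not figures from the text), computes the average EMF induced by a change in flux over a time interval.

```python
def induced_emf(flux_before, flux_after, dt, turns=1):
    """Average EMF induced in a coil by a change in magnetic flux.

    Faraday's law: emf = -N * dPhi/dt
    (flux in webers, time in seconds, result in volts).
    """
    return -turns * (flux_after - flux_before) / dt


# Illustrative numbers: the flux through a 100-turn coil rises from
# 0 to 0.01 Wb over 0.1 s, inducing an average EMF of -10 volts.
emf = induced_emf(0.0, 0.01, 0.1, turns=100)
```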

Faraday's and Ampère's work showed that a time-varying magnetic field acted as a source of an electric field, and a time-varying electric field was a source of a magnetic field. Thus, when either field is changing in time, then a field of the other is necessarily induced. Such a phenomenon has the properties of a wave, and is naturally referred to as an electromagnetic wave. Electromagnetic waves were analysed theoretically by James Clerk Maxwell in 1864. Maxwell discovered a set of equations that could unambiguously describe the interrelationship between electric field, magnetic field, electric charge, and electric current. He could moreover prove that such a wave would necessarily travel at the speed of light, and thus light itself was a form of electromagnetic radiation. Maxwell's laws, which unify light, fields, and charge, are one of the great milestones of theoretical physics.
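In their modern differential (SI) form, a formulation due to Heaviside rather than Maxwell's original notation, the four equations and the resulting wave speed can be written:

```latex
\begin{aligned}
\nabla \cdot \mathbf{E} &= \frac{\rho}{\varepsilon_0} & &\text{(Gauss's law)}\\
\nabla \cdot \mathbf{B} &= 0 & &\text{(no magnetic monopoles)}\\
\nabla \times \mathbf{E} &= -\frac{\partial \mathbf{B}}{\partial t} & &\text{(Faraday's law)}\\
\nabla \times \mathbf{B} &= \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t} & &\text{(Ampère--Maxwell law)}
\end{aligned}
\qquad
c = \frac{1}{\sqrt{\mu_0 \varepsilon_0}}
```

In vacuum, the two curl equations combine into wave equations whose propagation speed is c, which is how Maxwell identified light as electromagnetic radiation.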

Electric circuits

An electric circuit is an interconnection of electric components, usually to perform some useful task, with a return path to enable the charge to return to its source.

The components in an electric circuit can take many forms, which can include elements such as resistors, capacitors, switches, transformers and electronics. Electronic circuits contain active components, usually semiconductors, and typically exhibit non-linear behavior, requiring complex analysis. The simplest electric components are those that are termed passive and linear: while they may temporarily store energy, they contain no sources of it, and exhibit linear responses to stimuli.

The resistor is perhaps the simplest of passive circuit elements: as its name suggests, it resists the current through it, dissipating its energy as heat. Ohm's law is a basic law of circuit theory, stating that the current passing through a resistance is directly proportional to the potential difference across it. The ohm, the unit of resistance, was named in honour of Georg Ohm, and is symbolised by the Greek letter Ω. One ohm is the resistance that will produce a potential difference of one volt in response to a current of one amp.
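Ohm's law and the accompanying Joule heating are one-line formulas; the sketch below (illustrative helper names, not from the original text) encodes them directly.

```python
def ohms_law_voltage(current_amps, resistance_ohms):
    """Ohm's law: the potential difference across a resistor, V = I * R."""
    return current_amps * resistance_ohms


def power_dissipated(current_amps, resistance_ohms):
    """Joule heating in the resistor, P = I**2 * R (watts)."""
    return current_amps**2 * resistance_ohms


# One amp through one ohm produces one volt, as the definition states.
v = ohms_law_voltage(1.0, 1.0)

# 2 A through a 10-ohm resistor dissipates 40 W as heat.
p = power_dissipated(2.0, 10.0)
```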

The capacitor is a device capable of storing charge, and thereby storing electrical energy in the resulting field. Conceptually, it consists of two conducting plates separated by a thin insulating layer; in practice, thin metal foils are coiled together, increasing the surface area per unit volume and therefore the capacitance. The unit of capacitance is the farad, named after Michael Faraday, and given the symbol F: one farad is the capacitance that develops a potential difference of one volt when it stores a charge of one coulomb. A capacitor connected to a voltage supply initially causes a current as it accumulates charge; this current will however decay in time as the capacitor fills, eventually falling to zero. A capacitor will therefore not permit a steady state current, but instead blocks it.
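The decay of the charging current follows an exponential set by the time constant RC. The sketch below is a minimal illustration of that behaviour (the component values are arbitrary examples): the current starts at V/R and falls towards zero as the capacitor fills, which is why a capacitor blocks a steady-state current.

```python
import math


def charging_current(v_supply, r, c, t):
    """Current into a capacitor charging through a resistor.

    i(t) = (V / R) * exp(-t / (R * C)):
    large at first, decaying towards zero as the capacitor fills.
    """
    return (v_supply / r) * math.exp(-t / (r * c))


# Illustrative values: 5 V supply, 1 kOhm resistor, 100 uF capacitor.
tau = 1e3 * 100e-6  # time constant R*C = 0.1 s

i_initial = charging_current(5.0, 1e3, 100e-6, 0.0)      # 5 mA at switch-on
i_late = charging_current(5.0, 1e3, 100e-6, 5 * tau)     # nearly zero after 5 tau
```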

The inductor is a conductor, usually a coil of wire, that stores energy in a magnetic field in response to the current through it. When the current changes, the magnetic field does too, inducing a voltage between the ends of the conductor. The induced voltage is proportional to the time rate of change of the current. The constant of proportionality is termed the inductance. The unit of inductance is the henry, named after Joseph Henry, a contemporary of Faraday. One henry is the inductance that will induce a potential difference of one volt if the current through it changes at a rate of one ampere per second. The inductor's behaviour is in some regards converse to that of the capacitor: it will freely allow an unchanging current, but opposes a rapidly changing one.
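The defining relation v = L dI/dt, and the definition of the henry it contains, can be written out directly; the sketch below is an added illustration with example values, not figures from the text.

```python
def induced_voltage(inductance_h, di_dt):
    """Voltage across an inductor, v = L * dI/dt.

    A one-henry inductor whose current changes at one ampere per
    second develops one volt, matching the definition of the henry.
    """
    return inductance_h * di_dt


v = induced_voltage(1.0, 1.0)       # 1 H at 1 A/s -> 1 V
v_fast = induced_voltage(0.01, 500.0)  # 10 mH with a fast 500 A/s ramp -> 5 V
v_dc = induced_voltage(1.0, 0.0)    # an unchanging current induces no voltage
```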

Production and uses

Generation

Thales' experiments with amber rods were the first studies into the production of electrical energy. While this method, now known as the triboelectric effect, is capable of lifting light objects and even generating sparks, it is extremely inefficient. It was not until the invention of the voltaic pile at the end of the eighteenth century that a viable source of electricity became available. The voltaic pile, and its modern descendant, the electrical battery, store energy chemically and make it available on demand in the form of electrical energy. The battery is a versatile and very common power source which is ideally suited to many applications, but its energy storage is finite, and once discharged it must be disposed of or recharged. For large electrical demands electrical energy must be generated and transmitted in bulk.

Electrical energy is usually generated by electro-mechanical generators driven by steam produced from fossil fuel combustion, or the heat released from nuclear reactions; or from other sources such as kinetic energy extracted from wind or flowing water. Such generators bear no resemblance to Faraday's homopolar disc generator of 1831, but they still rely on his electromagnetic principle that a conductor linking a changing magnetic field induces a potential difference across its ends. The invention in the late nineteenth century of the transformer meant that electricity could be generated at centralised power stations, benefiting from economies of scale, and be transmitted across countries with increasing efficiency. Since electrical energy cannot easily be stored in quantities large enough to meet demands on a national scale, at all times exactly as much must be produced as is required. This requires electricity utilities to make careful predictions of their electrical loads, and maintain constant co-ordination with their power stations. A certain amount of generation must always be held in reserve to cushion an electrical grid against inevitable disturbances and losses.

Demand for electricity grows with great rapidity as a nation modernises and its economy develops. The United States showed a 12% increase in demand during each year of the first three decades of the twentieth century, a rate of growth that is now being experienced by emerging economies such as those of India or China. Historically, the growth rate for electricity demand has outstripped that for other forms of energy, such as coal.

Environmental concerns with electricity generation have led to an increased focus on generation from renewable sources, in particular from wind and hydropower. While debate can be expected to continue over the environmental impact of different means of electricity production, electricity in its final form is relatively clean at the point of use.

Uses

Electricity is an extremely flexible form of energy, and has been adapted to a huge, and growing, number of uses. The invention of a practical incandescent light bulb in the 1870s led to lighting becoming one of the first publicly available applications of electrical power. Although electrification brought with it its own dangers, replacing the naked flames of gas lighting greatly reduced fire hazards within homes and factories. Public utilities were set up in many cities targeting the burgeoning market for electrical lighting.

The Joule heating effect employed in the light bulb also sees more direct use in electric heating. While this is versatile and controllable, it can be seen as wasteful, since most electrical generation has already required the production of heat at a power station. A number of countries, such as Denmark, have issued legislation restricting or banning the use of electric heating in new buildings. Electricity is however a highly practical energy source for refrigeration, with air conditioning representing a growing sector for electricity demand, the effects of which electricity utilities are increasingly obliged to accommodate.

Electricity is used within telecommunications, and indeed the electrical telegraph, demonstrated commercially in 1837 by Cooke and Wheatstone, was one of its earliest applications. With the construction of first transcontinental, and then transatlantic, telegraph systems in the 1860s, electricity had enabled communications in minutes across the globe. Optical fibre and satellite communication technology have taken a share of the market for communications systems, but electricity can be expected to remain an essential part of the process.

The effects of electromagnetism are most visibly employed in the electric motor, which provides a clean and efficient means of motive power. A stationary motor such as a winch is easily provided with a supply of power, but a motor that moves with its application, such as an electric vehicle, is obliged either to carry along a power source such as a battery, or to collect current from a sliding contact such as a pantograph, placing restrictions on its range or performance.

Electronic devices make use of the transistor, perhaps one of the most important inventions of the twentieth century, and a fundamental building block of all modern circuitry. A modern integrated circuit may contain several billion miniaturised transistors in a region only a few centimetres square.

Electricity and the natural world

Physiological effects

A voltage applied to a human body causes an electric current through the tissues, and although the relationship is non-linear, the greater the voltage, the greater the current. The threshold for perception varies with the supply frequency and with the path of the current, but is about 1 mA for mains-frequency electricity. If the current is sufficiently high, it will cause muscle contraction, fibrillation of the heart, and tissue burns. The lack of any visible sign that a conductor is electrified makes electricity a particular hazard. The pain caused by an electric shock can be intense, leading electricity at times to be employed as a method of torture. Death caused by an electric shock is referred to as electrocution. Electrocution is still the means of judicial execution in some jurisdictions, though its use has become rarer in recent times.

Electrical phenomena in nature

Electricity is by no means a purely human invention, and may be observed in several forms in nature, a prominent manifestation of which is lightning. The Earth's magnetic field is thought to arise from a natural dynamo of circulating currents in the planet's core. Certain crystals, such as quartz, or even sugarcane, generate a potential difference across their faces when subjected to external pressure. This phenomenon is known as piezoelectricity, from the Greek piezein (πιέζειν), meaning to press, and was discovered in 1880 by Pierre and Jacques Curie. The effect is reciprocal, and when a piezoelectric material is subjected to an electric field, a small change in physical dimensions takes place.

Some organisms, such as sharks, are able to detect and respond to changes in electric fields, an ability known as electroreception, while others, termed electrogenic, are able to generate voltages themselves to serve as a predatory or defensive weapon. Members of the order Gymnotiformes, of which the best-known example is the electric eel, detect or stun their prey via high voltages generated from modified muscle cells called electrocytes. All animals transmit information along their cell membranes with voltage pulses called action potentials, whose functions include communication by the nervous system between neurons and muscles. (Because of this principle, an electric shock can induce temporary or permanent paralysis by "overloading" the nervous system.) They are also responsible for coordinating activities in certain plants.

See also

References

Bibliography

External links
