The largest current experiment is the Joint European Torus (JET). In 1997, JET produced a peak of 16.1 MW of fusion power (65% of input power), with fusion power of over 10 MW sustained for over 0.5 s. In June 2005, the construction of the experimental reactor ITER, designed to produce several times more fusion power than the power put into the plasma over many minutes, was announced. Site preparation is currently under way (as of September 2008). The production of net electrical power from fusion is planned for DEMO, the next-generation experiment to follow ITER.
The basic concept behind any fusion reaction is to bring two or more atoms close enough together so that the strong nuclear force in their nuclei will pull them together into one larger atom. If two light nuclei fuse, they will generally form a single nucleus with a slightly smaller mass than the sum of their original masses. The difference in mass is released as energy according to Einstein's mass-energy equivalence formula E = mc². If the input atoms are sufficiently massive, the resulting fusion product will be heavier than the reactants, in which case the reaction requires an external source of energy. The dividing line between "light" and "heavy" is iron-56. Above this atomic mass, energy will generally be released by nuclear fission reactions; below it, by fusion.
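The mass-defect arithmetic is easy to check numerically. A minimal sketch for the D-T reaction, the case most often cited, using standard nuclear masses (values rounded; treat them as illustrative):

```python
# Energy released in D-T fusion from the mass defect, via E = mc^2.
# Nuclear masses in atomic mass units (u), rounded standard values.
U_TO_KG = 1.66053906660e-27  # kg per atomic mass unit
C = 2.99792458e8             # speed of light, m/s
J_TO_MEV = 1.0 / 1.602176634e-13

m_deuteron = 2.013553   # u
m_triton   = 3.015501   # u
m_alpha    = 4.001506   # u (helium-4 nucleus)
m_neutron  = 1.008665   # u

# Mass lost when D + T -> 4He + n, converted to energy.
defect_u = (m_deuteron + m_triton) - (m_alpha + m_neutron)
energy_mev = defect_u * U_TO_KG * C**2 * J_TO_MEV
print(f"mass defect: {defect_u:.6f} u -> {energy_mev:.1f} MeV")
```

The result, about 17.6 MeV per reaction, matches the well-known yield of D-T fusion.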
Fusion between the atoms is opposed by the electrostatic repulsion between their positively charged nuclei. In order to overcome this electrostatic force, or "Coulomb barrier", some external source of energy must be supplied. The easiest way to do this is to heat the atoms, which has the side effect of stripping the electrons from the atoms and leaving them as bare nuclei. In most experiments the nuclei and electrons are left in a fluid known as a plasma. The temperatures required to provide the nuclei with enough energy to overcome their repulsion are a function of the total charge, so hydrogen, which has the smallest nuclear charge, reacts at the lowest temperature. Helium has an extremely low mass per nucleon and therefore is energetically favoured as a fusion product. As a consequence, most fusion reactions combine isotopes of hydrogen ("protium", deuterium, or tritium) to form isotopes of helium (3He or 4He).
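The scale of the Coulomb barrier can be estimated from first principles. A sketch for two protium nuclei (Z = 1 each), assuming the strong force takes over at a separation of about 3 fm (that distance is an assumption for illustration):

```python
import math

# Coulomb barrier U = Z1*Z2*e^2 / (4*pi*eps0*r) for two protons at an
# assumed separation of 3 fm where the strong force takes over.
E_CHARGE = 1.602176634e-19   # C
EPS0 = 8.8541878128e-12      # F/m
K_B = 1.380649e-23           # J/K

r = 3e-15  # m, assumed nuclear-force range
barrier_j = E_CHARGE**2 / (4 * math.pi * EPS0 * r)
barrier_kev = barrier_j / 1.602176634e-16
print(f"barrier ~ {barrier_kev:.0f} keV")

# Naive temperature at which the mean thermal energy equals the barrier.
# Real reactors need far less (tens of keV): quantum tunnelling and the
# high-energy tail of the thermal distribution do most of the work.
print(f"naive T ~ {barrier_j / K_B:.1e} K")
```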
Perhaps the three most widely considered fuel cycles are based on the D-T, D-D, and p-11B reactions. Other fuel cycles (D-3He and 3He-3He) would require a supply of 3He, either from other nuclear reactions or from extraterrestrial sources, such as the surface of the Moon or the atmospheres of the gas giant planets.
The easiest (according to the Lawson criterion) and most immediately promising nuclear reaction for use in fusion power is:

D + T → 4He (3.5 MeV) + n (14.1 MeV)
Deuterium is a naturally occurring isotope of hydrogen and as such is universally available. The large mass ratio of the hydrogen isotopes makes their separation easy compared to the difficult uranium enrichment process. Tritium is also an isotope of hydrogen, but it occurs naturally in only negligible amounts because of its radioactive half-life of 12.32 years. Consequently, the deuterium-tritium fuel cycle requires the breeding of tritium from lithium using one of the following reactions:

6Li + n → T + 4He + 4.8 MeV
7Li + n → T + 4He + n − 2.5 MeV
The reactant neutron is supplied by the D-T fusion reaction shown above, the one which also produces the useful energy. The reaction with 6Li is exothermic, providing a small energy gain for the reactor. The reaction with 7Li is endothermic but does not consume the neutron. At least some 7Li reactions are required to replace the neutrons lost by reactions with other elements. Most reactor designs use the naturally occurring mix of lithium isotopes. The supply of lithium is more limited than that of deuterium, but still large enough to supply the world's energy demand for thousands of years.
Several drawbacks are commonly attributed to D-T fusion power:
The neutron flux expected in a commercial D-T fusion reactor is about 100 times that of current fission power reactors, posing problems for material design. Design of suitable materials is under way, but their actual use in a reactor is not proposed until the generation after ITER. After a single series of D-T tests at JET, the largest fusion reactor yet to use this fuel, the vacuum vessel was sufficiently radioactive that remote handling had to be used for the year following the tests.
On the other hand, the volumetric deposition of neutron power can also be seen as an advantage. If all the power of a fusion reactor had to be transported by conduction through the surface enclosing the plasma, it would be very difficult to find materials and a construction that would survive, and it would probably entail a relatively poor efficiency.
D + D → T + p
D + D → 3He + n
Civilian applications, in which explosive energy production must be replaced by a controlled production, are still being developed. Although it took less than ten years to go from military applications to civilian fission energy production, it has been very different in the fusion energy field; more than fifty years have already passed without any commercial fusion energy production plant coming into operation.
The U.S. fusion program began in 1951 when Lyman Spitzer began work on a stellarator under the code name Project Matterhorn. His work led to the creation of the Princeton Plasma Physics Laboratory, where magnetically confined plasmas are still studied. The stellarator concept fell out of favor for several decades afterwards, plagued by poor confinement, but recent advances in computer technology have led to a significant resurgence of interest in these devices. A wide variety of other magnetic geometries was also experimented with, notably the magnetic mirror. These systems suffered from similar problems when higher-performance versions were constructed.
A new approach was outlined in theoretical work carried out in 1950–1951 by I.E. Tamm and A.D. Sakharov in the Soviet Union, which laid the foundations of the tokamak. Experimental research on these systems started in 1956 at the Kurchatov Institute, Moscow, by a group of Soviet scientists led by Lev Artsimovich. The group constructed the first tokamaks, the most successful of them being the T-3 and its larger version, the T-4. The T-4 was tested in 1968 in Novosibirsk, producing the first quasistationary thermonuclear fusion reaction ever. The tokamak was dramatically more efficient than the other approaches of the same era, and most research after the 1970s concentrated on variations of this theme.
The same is true today, where very large tokamaks like ITER hope to demonstrate several milestones on the way to commercial power production, including a burning plasma with long burn times, high power output and online fueling. There are no guarantees that the project will be successful, as previous generations of machines have run into unforeseen problems on many occasions. But the entire field of high-temperature plasmas is much better understood now because of the earlier research, and there is considerable optimism that ITER will meet its goals. If successful, ITER would be followed by a "commercial demonstrator" system, similar to the very earliest power-producing fission reactors built in the era before wide-scale commercial deployment of larger machines started in the 1960s and 1970s. Even with these goals met, a number of major engineering problems remain, notably finding suitable "low activity" materials for reactor construction, demonstrating secondary systems such as practical tritium extraction, and devising reactor designs that allow the reactor core to be removed when it becomes embrittled by the neutron flux. Practical generators based on the tokamak concept remain far in the future. The public at large has been somewhat disappointed, as the initial outlook for practical fusion power plants was much rosier than has been realized; a pamphlet from the 1970s printed by General Atomic stated that "Several commercial fusion reactors are expected to be online by the year 2000."
The Z-pinch phenomenon has been known since the end of the 18th century. Its use in the fusion field grew out of research on toroidal devices, initially at Los Alamos National Laboratory from 1952 (Perhapsatron) and in the United Kingdom from 1954 (ZETA), but its physical principles long remained poorly understood and controlled. Pinch devices were studied as potential development paths to practical fusion devices through the 1950s, but studies of the data they generated suggested that instabilities in the collapse mechanism would doom any pinch-type device to power levels far too low to make continuing along these lines practical. Most work on pinch-type devices ended by the 1960s. Recent work on the basic concept started as a result of the appearance of the "wire array" concept in the 1980s, which allowed a more efficient use of this technique. Sandia National Laboratories runs a continuing wire-array research program with its Z machine. In addition, the University of Washington's ZaP Lab has shown quiescent periods of stability hundreds of times longer than expected for plasma in a Z-pinch configuration, giving promise to the confinement technique.
More recent work has demonstrated that significant savings in the required laser energy are possible using a technique known as "fast ignition". The savings are so dramatic that the concept appears to be a useful technique for energy production again, so much so that it is a serious contender for pre-commercial development. There are proposals to build an experimental facility dedicated to the fast ignition approach, known as HiPER. At the same time, advances in solid-state lasers appear to improve the "driver" systems' efficiency by about ten times (to 10–20%), savings that make even the large "traditional" machines almost practical, and might make the fast ignition concept outpace the magnetic approaches in further development. The laser-based concept has other advantages as well. The reactor core is mostly exposed, as opposed to being wrapped in a huge magnet as in the tokamak. This makes the problem of removing energy from the system somewhat simpler, and should mean that a laser-based device would be much easier to maintain, for example when replacing the core. Additionally, the lack of strong magnetic fields allows a wider variety of low-activation materials, including carbon fibre, which would both reduce the frequency of such replacements and reduce the radioactivity of the discarded core. In other ways the program has many of the same problems as the tokamak: practical methods of energy removal and tritium recycling need to be demonstrated, and there is always the possibility that a new, previously unseen collapse problem will arise.
Philo T. Farnsworth, inventor of the cathode-ray-tube television, patented his first Fusor design in 1968, a device which uses inertial electrostatic confinement. Towards the end of the 1960s, Robert Hirsch designed a variant of the Farnsworth Fusor known as the Hirsch-Meeks fusor. This variant is a considerable improvement over the Farnsworth design, and is able to generate a neutron flux on the order of one billion neutrons per second. Although the efficiency was very low at first, there were hopes the device could be scaled up, but continued development demonstrated that this approach would be impractical for large machines. Nevertheless, fusion could be achieved with a "lab bench top" setup for the first time, at minimal cost. This type of fusor found its first application as a portable neutron generator in the late 1990s. An automated, sealed-reaction-chamber version of this device, commercially named Fusionstar, was developed by EADS but abandoned in 2001. Its successor is the NSD-Fusion neutron generator.
Robert W. Bussard's Polywell concept is roughly similar to the Fusor design, but replaces the problematic grid with a magnetically contained electron cloud which holds the ions in position and gives an accelerating potential. Bussard claimed that a scaled up version would be capable of generating net power.
In April 2005, a team from UCLA announced it had devised a novel way of producing fusion using a machine that "fits on a lab bench", using lithium tantalate to generate enough voltage to smash deuterium atoms together. However, the process does not generate net power. See Pyroelectric fusion. Such a device would be useful in the same sort of roles as the fusor.
There is also no risk of a runaway reaction in a fusion reactor, since the plasma is normally burnt at optimal conditions, and any significant change will render it unable to produce excess heat. In fusion reactors the reaction process is so delicate that this level of safety is inherent; no elaborate failsafe mechanism is required. Although the plasma in a fusion power plant will have a volume of 1000 cubic meters or more, the density of the plasma is extremely low, and the total amount of fusion fuel in the vessel is very small, typically a few grams. If the fuel supply is closed, the reaction stops within seconds. In comparison, a fission reactor is typically loaded with enough fuel for one or several years, and no additional fuel is necessary to keep the reaction going.
In the magnetic approach, strong fields are developed in coils that are held in place mechanically by the reactor structure. Failure of this structure could release this tension and allow the magnet to "explode" outward. The severity of this event would be similar to any other industrial accident, and could be effectively stopped with a containment building similar to those used in existing (fission) nuclear generators. The laser-driven inertial approach is generally lower-stress. Although failure of the reaction chamber is possible, simply stopping fuel delivery would prevent any sort of catastrophic failure.
Most reactor designs rely on the use of liquid lithium as both a coolant and a method for converting stray neutrons from the reaction into tritium, which is fed back into the reactor as fuel. Lithium is highly flammable, and in the case of a fire it is possible that the lithium stored on-site could be burned up and escape. In this case the tritium contents of the lithium would be released into the atmosphere, posing a radiation risk. However, calculations suggest that the total amount of tritium and other radioactive gases in a typical power plant would be so small, about 1 kg, that they would have diluted to legally acceptable limits by the time they blew as far as the plant's perimeter fence.
The half-lives of the radioisotopes produced by fusion tend to be shorter than those from fission, so the inventory decreases more rapidly. Furthermore, there are fewer unique species, and they tend to be non-volatile and biologically less active. Unlike fission reactors, whose waste remains radioactive for thousands of years, most of the radioactive material in a fusion reactor would be the reactor core itself, which would be dangerous for about 50 years, with low-level waste remaining for another 100. Although this waste will be considerably more radioactive during those 50 years than fission waste, the very short half-lives make waste management fairly straightforward and the process very attractive. By 300 years the material would have the same radioactivity as coal ash.
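This "burning off" is simple exponential decay. A sketch using tritium's 12.3-year half-life purely as an illustrative number (real activation products span a range of half-lives):

```python
# Fraction of an activation product remaining after t years, for an
# assumed half-life: 0.5 ** (t / half_life). The 12.3 y figure is
# tritium's half-life, used here only as an illustrative value.
def fraction_remaining(t_years, half_life_years):
    return 0.5 ** (t_years / half_life_years)

for t in (50, 100, 300):
    print(f"after {t:3d} y: {fraction_remaining(t, 12.3):.2e} remaining")
```

For a 12.3-year half-life, less than a ten-millionth of the original activity survives 300 years; longer-lived activation products decay more slowly, but still far faster than fission's actinide waste.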
Additionally, the materials used in a fusion reactor are more "flexible" than in a fission design, where many materials are required for their specific neutron cross-sections. This allows a fusion reactor to be designed using materials that are selected specifically to be "low activation", materials that do not easily become radioactive. Vanadium, for example, would become much less radioactive than stainless steel. Carbon fibre materials are also low-activation, as well as being strong and light, and are a promising area of study for laser-inertial reactors where a magnetic field is not required.
In general terms, fusion reactors would create far less radioactive material than fission reactors, the material created would be less damaging biologically, and the radioactivity would "burn off" within a time period well within existing engineering capabilities.
Fusion power designs commonly propose the use of deuterium, an isotope of hydrogen, as fuel, and many current designs also use lithium. Assuming a fusion energy output equal to the 1995 global output of about 100 EJ/yr (= 1 × 10²⁰ J/yr), and that this does not increase in the future, the known current lithium reserves would last 3,000 years, lithium from sea water would last 60 million years, and a more complicated fusion process using only deuterium from sea water would have fuel for 150 billion years.
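The 150-billion-year figure can be reproduced to order of magnitude. A sketch assuming rounded values for the ocean's water mass, the natural deuterium abundance, and roughly 7 MeV released per deuteron in a complete D-D burn chain (all three inputs are assumptions for illustration):

```python
# Order-of-magnitude check of the deuterium-from-seawater supply figure.
OCEAN_WATER_KG = 1.4e21    # assumed total ocean water mass
D_PER_H = 1.56e-4          # deuterium fraction of hydrogen atoms (assumed)
MEV_PER_DEUTERON = 7.0     # assumed yield of a complete D-D burn chain
AVOGADRO = 6.02214076e23
MEV_TO_J = 1.602176634e-13
WORLD_USE_J_PER_YR = 1e20  # ~100 EJ/yr, as in the text

moles_water = OCEAN_WATER_KG / 0.018   # molar mass of water: 18 g/mol
h_atoms = 2 * moles_water * AVOGADRO   # two hydrogens per molecule
d_atoms = D_PER_H * h_atoms
total_j = d_atoms * MEV_PER_DEUTERON * MEV_TO_J
years = total_j / WORLD_USE_J_PER_YR
print(f"~{years:.1e} years of supply")
```

The result lands near 1.6 × 10¹¹ years, consistent with the 150-billion-year claim.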
Confinement refers to all the conditions necessary to keep a plasma dense and hot long enough to undergo fusion.
The first human-made, large-scale production of fusion reactions was the test of the hydrogen bomb, Ivy Mike, in 1952. It was once proposed to use hydrogen bombs as a source of power by detonating them in underground caverns and then generating electricity from the heat produced, but such a power plant is unlikely ever to be constructed, for a variety of reasons. (See the PACER project for more details.) Controlled thermonuclear fusion (CTF) refers to the alternative of continuous power production, or at least the use of explosions that are so small that they do not destroy a significant portion of the machine that produces them.
To produce self-sustaining fusion, the energy released by the reaction (or at least a fraction of it) must be used to heat new reactant nuclei and keep them hot long enough that they also undergo fusion reactions. Retaining the heat is called energy confinement and may be accomplished in a number of ways.
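Whether confinement is adequate is commonly summarized by the Lawson criterion, often quoted as a "triple product" of density, temperature, and energy confinement time. A sketch with an assumed D-T ignition threshold of about 3 × 10²¹ keV·s/m³ (the exact figure depends on plasma profiles and impurity content):

```python
# Lawson "triple product" check for D-T ignition. The threshold value is
# an assumed, commonly quoted round number, not a precise design figure.
IGNITION_TRIPLE_PRODUCT = 3e21  # keV * s / m^3

def ignites(density_m3, temp_kev, tau_e_s):
    """Return the triple product and whether it clears the threshold."""
    triple = density_m3 * temp_kev * tau_e_s
    return triple, triple >= IGNITION_TRIPLE_PRODUCT

# Illustrative tokamak-like numbers: n = 1e20 m^-3, T = 15 keV, tau = 3 s.
triple, ok = ignites(1e20, 15.0, 3.0)
print(f"n*T*tau = {triple:.1e} keV·s/m^3, ignition: {ok}")
```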
The hydrogen bomb really has no confinement at all. The fuel is simply allowed to fly apart, but it takes a certain length of time to do this, and during this time fusion can occur. This approach is called inertial confinement. If more than milligram quantities of fuel are used (and efficiently fused), the explosion would destroy the machine, so theoretically, controlled thermonuclear fusion using inertial confinement would be done using tiny pellets of fuel which explode several times a second. To induce the explosion, the pellet must be compressed to about 30 times solid density with energetic beams. If the beams are focused directly on the pellet, it is called direct drive, which can in principle be very efficient, but in practice it is difficult to obtain the needed uniformity. An alternative approach is indirect drive, in which the beams heat a shell, and the shell radiates x-rays, which then implode the pellet. The beams are commonly laser beams, but heavy and light ion beams and electron beams have all been investigated.
Inertial confinement produces plasmas with impressively high densities and temperatures, and appears to be best suited to weapons research, X-ray generation, very small reactors and, perhaps in the distant future, spaceflight. These schemes rely on fuel pellets of close to "perfect" shape in order to generate a symmetrical inward shock wave that produces the high-density plasma, and in practice such pellets have proven difficult to produce. A recent development in the field of laser-induced ICF is the use of ultrashort-pulse multi-petawatt lasers to heat the plasma of an imploding pellet at exactly the moment of greatest density, after it is imploded conventionally using terawatt-scale lasers. This research will be carried out on the OMEGA EP petawatt (currently being built) and OMEGA lasers at the University of Rochester and on the GEKKO XII laser at the Institute for Laser Engineering in Osaka, Japan; if fruitful, it may have the effect of greatly reducing the cost of a laser-fusion-based power source.
At the temperatures required for fusion, the fuel is in the form of a plasma with very good electrical conductivity. This opens the possibility of confining the fuel and the energy with magnetic fields, an idea known as magnetic confinement. The Lorentz force acts only perpendicular to the magnetic field, so the first problem is how to prevent the plasma from leaking out the ends of the field lines. There are basically two solutions.
The first is to use the magnetic mirror effect. If particles following a field line encounter a region of higher field strength, then some of the particles will be stopped and reflected. Advantages of a magnetic mirror power plant would be simplified construction and maintenance due to a linear topology and the potential to apply direct conversion in a natural way, but the confinement achieved in the experiments was so poor that this approach has been essentially abandoned.
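The mirror effect can be quantified: a particle at the field minimum is reflected only if its pitch angle lies outside a "loss cone" whose half-angle depends on the mirror ratio Bmax/Bmin. A sketch:

```python
import math

# Magnetic mirror loss cone: a particle at the field minimum escapes if
# its pitch angle theta satisfies sin^2(theta) < Bmin/Bmax, i.e. if its
# velocity is too closely aligned with the field line.
def loss_cone_deg(mirror_ratio):
    """Half-angle of the loss cone for mirror ratio R = Bmax/Bmin."""
    return math.degrees(math.asin(math.sqrt(1.0 / mirror_ratio)))

for ratio in (2, 5, 10):
    print(f"R = {ratio:2d}: loss cone ~ {loss_cone_deg(ratio):.1f} deg")
```

Even at a mirror ratio of 10 the loss cone is about 18 degrees wide, and collisions continually scatter particles into it, which is one reason the achieved confinement was poor.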
The second possibility to prevent end losses is to bend the field lines back on themselves, either in circles or more commonly in nested toroidal surfaces. The most highly developed system of this type is the tokamak, with the stellarator being next most advanced, followed by the Reversed field pinch. Compact toroids, especially the Field-Reversed Configuration and the spheromak, attempt to combine the advantages of toroidal magnetic surfaces with those of a simply connected (non-toroidal) machine, resulting in a mechanically simpler and smaller confinement area. Compact toroids still have some enthusiastic supporters but are not backed as readily by the majority of the fusion community.
Finally, there are also electrostatic confinement fusion systems, in which ions in the reaction chamber are confined and held at the center of the device by electrostatic forces, as in the Farnsworth-Hirsch Fusor, which is not believed to be capable of being developed into a power plant. The Polywell, an advanced variant of the fusor, has attracted a degree of research interest of late; however, the technology is relatively immature, and major scientific and engineering questions remain, which researchers under the auspices of the U.S. Office of Naval Research hope to investigate further.
Some researchers have reported excess heat, neutrons, tritium, helium and other nuclear effects in so-called cold fusion systems. In 2004, a peer-review panel was commissioned by the US Department of Energy to study these claims: two-thirds of its members found the evidence for nuclear reactions unconvincing, five found the evidence "somewhat convincing", and one was entirely convinced. In 2006, Mosier-Boss and Szpak, researchers at the U.S. Navy's Space and Naval Warfare Systems Center San Diego, reported evidence of nuclear reactions, which has been independently replicated.
There have been many design studies for fusion power plants. Despite many differences, there are several systems that are common to most. To begin with, a fusion power plant, like a fission power plant, is customarily divided into the nuclear island and the balance of plant. The balance of plant is the conventional part that converts high-temperature heat into electricity via steam turbines. It is much the same in a fusion power plant as in a fission or coal power plant. In a fusion power plant, the nuclear island has a plasma chamber with an associated vacuum system, surrounded by plasma-facing components (first wall and divertor) maintaining the vacuum boundary and absorbing the thermal radiation coming from the plasma, surrounded in turn by a blanket where the neutrons are absorbed to breed tritium and heat a working fluid that transfers the power to the balance of plant. If magnetic confinement is used, a magnet system, using primarily cryogenic superconducting magnets, is needed, and usually systems for heating and refueling the plasma and for driving current. In inertial confinement, a driver (laser or accelerator) and a focusing system are needed, as well as a means for forming and positioning the pellets.
Although the standard solution for electricity production in fusion power plant designs is conventional steam turbines using the heat deposited by neutrons, there are also designs for direct conversion of the energy of the charged particles into electricity. These are of little value with a D-T fuel cycle, where 80% of the power is in the neutrons, but are indispensable with aneutronic fusion, where less than 1% is. Direct conversion has been most commonly proposed for open-ended magnetic configurations like magnetic mirrors or Field-Reversed Configurations, where charged particles are lost along the magnetic field lines, which are then expanded to convert a large fraction of the random energy of the fusion products into directed motion. The particles are then collected on electrodes at various large electrical potentials. Typically the claimed conversion efficiency is in the range of 80%, but the converter may approach the reactor itself in size and expense.
The problem is exacerbated because realistic material tests must expose samples to neutron fluxes of a similar level, for a similar length of time, as those expected in a fusion power plant. Such a neutron source is nearly as complicated and expensive as a fusion reactor itself would be. Proper materials testing will not be possible in ITER, and a proposed materials testing facility, IFMIF, was still in the design stage as of 2005.
The material of the plasma facing components (PFC) is a special problem. The PFC do not have to withstand large mechanical loads, so neutron damage is much less of an issue. They do have to withstand extremely large thermal loads, up to 10 MW/m², which is a difficult but solvable problem. Regardless of the material chosen, the heat flux can only be accommodated without melting if the distance from the front surface to the coolant is not more than a centimeter or two. The primary issue is the interaction with the plasma. One can choose either a low-Z material, typified by graphite although for some purposes beryllium might be chosen, or a high-Z material, usually tungsten with molybdenum as a second choice. Use of liquid metals (lithium, gallium, tin) has also been proposed, e.g., by injection of 1-5 mm thick streams flowing at 10 m/s on solid substrates.
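The "centimeter or two" limit follows from steady-state heat conduction. A back-of-envelope sketch for a tungsten armour layer, assuming a constant thermal conductivity and an allowable temperature drop of 1500 K (both values are assumptions for illustration):

```python
# Steady conduction through an armour layer of thickness d carrying heat
# flux q needs a temperature drop dT = q * d / k. Solving for d gives the
# maximum front-surface-to-coolant distance at an assumed allowable drop.
Q_FLUX = 10e6        # W/m^2, peak heat flux quoted in the text
K_TUNGSTEN = 170.0   # W/(m*K), room-temperature value, assumed constant
DT_ALLOWED = 1500.0  # K, assumed drop before the surface overheats

d_max_m = K_TUNGSTEN * DT_ALLOWED / Q_FLUX
print(f"max armour thickness ~ {d_max_m * 100:.1f} cm")
```

The answer comes out near 2.5 cm, consistent with the centimeter-scale limit stated above; real designs must also budget for the film drop into the coolant and for conductivity falling at high temperature.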
If graphite is used, the gross erosion rates due to physical and chemical sputtering would be many meters per year, so one must rely on redeposition of the sputtered material. The location of the redeposition will not exactly coincide with the location of the sputtering, so one is still left with erosion rates that may be prohibitive. An even larger problem is the tritium co-deposited with the redeposited graphite. The tritium inventory in graphite layers and dust in a reactor could quickly build up to many kilograms, representing a waste of resources and a serious radiological hazard in case of an accident. The consensus of the fusion community seems to be that graphite, although a very attractive material for fusion experiments, cannot be the primary PFC material in a commercial reactor.
The sputtering rate of tungsten can be orders of magnitude smaller than that of carbon, and tritium is not so easily incorporated into redeposited tungsten, making this a more attractive choice. On the other hand, tungsten impurities in a plasma are much more damaging than carbon impurities, and self-sputtering of tungsten can be high, so it will be necessary to ensure that the plasma in contact with the tungsten is not too hot (a few tens of eV rather than hundreds of eV). Tungsten also has disadvantages in terms of eddy currents and melting in off-normal events, as well as some radiological issues.
While fusion power is still in the early stages of development, vast sums have been and continue to be invested in research. In the EU almost €10 billion was spent on fusion research up to the end of the 1990s, and the new ITER reactor alone is budgeted at €10 billion. It is estimated that up to the point of possible implementation of electricity generation by nuclear fusion, R&D will need further funding totalling around €60–80 billion over a period of 50 years or so (of which €20–30 billion within the EU). Nuclear fusion research receives €750 million (excluding ITER funding), compared with €810 million for all non-nuclear energy research combined, putting research into fusion power well ahead of that of any single rival technology.
An important aspect of fusion energy in contrast to many other energy sources is that the cost of production is inelastic. The cost of wind energy, for example, goes up as the optimal locations are developed first, while further generators must be sited in less ideal conditions. With fusion energy, the production cost will not increase much, even if large numbers of plants are built. It has been suggested that even 100 times the current energy consumption of the world is possible.
Some problems which are expected to be an issue in the next century such as fresh water shortages can actually be regarded merely as problems of energy supply. For example, in desalination plants, seawater can be purified through distillation or reverse osmosis. However, these processes are energy intensive. Even if the first fusion plants are not competitive with alternative sources, fusion could still become competitive if large scale desalination requires more power than the alternatives are able to provide.
Despite being technically non-renewable, fusion power has many of the benefits of long-term renewable energy sources (such as being a sustainable energy supply compared with presently utilized sources, and emitting no greenhouse gases), as well as some of the benefits of much more limited energy sources such as hydrocarbons and nuclear fission (without reprocessing). Like these currently dominant energy sources, fusion could provide very high power-generation density and uninterrupted power delivery (because, unlike wind and solar power, it is not dependent on the weather).
Several fusion reactors have been built, but none has yet produced more thermal energy than the electrical energy consumed. Despite research having started in the 1950s, no commercial fusion reactor is expected before 2050. The ITER project is currently leading the effort to commercialize fusion power.