Any of a class of extremely powerful digital computers. The term is commonly applied to the fastest high-performance systems available at a given time; current personal computers are more powerful than the supercomputers of just a few years ago. Supercomputers are used primarily for scientific and engineering work. Unlike conventional computers, they usually have more than one CPU, often functioning in parallel (simultaneously); even higher-performance supercomputers are now being developed through use of massively parallel processing, incorporating thousands of individual processors. Supercomputers have huge storage capacity and very fast input/output capability, and can operate in parallel on corresponding elements of arrays of numbers rather than on one pair of elements at a time.
Supercomputers introduced in the 1960s were designed primarily by Seymour Cray at Control Data Corporation (CDC), which led the market into the 1970s, when Cray left to form his own company, Cray Research. With his new designs he then took over the supercomputer market, holding the top spot for five years (1985–1990). Cray himself never used the word "supercomputer"; a little-remembered fact is that he only recognized the word "computer". In the 1980s a large number of smaller competitors entered the market, in a parallel to the creation of the minicomputer market a decade earlier, but many of these disappeared in the mid-1990s "supercomputer market crash". Today, supercomputers are typically one-of-a-kind custom designs produced by "traditional" companies such as Cray, IBM and HP, which purchased many of the 1980s companies to gain their experience.
The term supercomputer itself is rather fluid: today's supercomputer tends to become tomorrow's ordinary computer. CDC's early machines were simply very fast scalar processors, some ten times the speed of the fastest machines offered by other companies. In the 1970s most supercomputers were built around a vector processor, and many of the newer players developed their own such processors at a lower price to enter the market. The early and mid-1980s saw machines with a modest number of vector processors working in parallel become the standard, with typical processor counts in the range of four to sixteen. In the later 1980s and 1990s, attention turned from vector processors to massively parallel processing systems with thousands of "ordinary" CPUs, some of them off-the-shelf units and others custom designs. Today, parallel designs are based on "off the shelf" server-class microprocessors, such as the PowerPC, Opteron, or Xeon, and most modern supercomputers are highly tuned computer clusters using commodity processors combined with custom interconnects.
A particular class of problems, known as Grand Challenge problems, comprises problems whose full solution requires computing resources far beyond those practically available at any given time.
Relevant here is the distinction between capability computing and capacity computing, as defined by Graham et al. Capability computing is typically thought of as using the maximum computing power to solve a large problem in the shortest amount of time. Often a capability system is able to solve a problem of a size or complexity that no other computer can. Capacity computing, in contrast, is typically thought of as using efficient, cost-effective computing power to solve somewhat large problems or many small problems, or to prepare for a run on a capability system.
Supercomputers using custom CPUs traditionally gained their speed over conventional computers through the use of innovative designs that allow them to perform many tasks in parallel, as well as complex detail engineering. They tend to be specialized for certain types of computation, usually numerical calculations, and perform poorly at more general computing tasks. Their memory hierarchy is very carefully designed to ensure the processor is kept fed with data and instructions at all times — in fact, much of the performance difference between slower computers and supercomputers is due to the memory hierarchy. Their I/O systems tend to be designed to support high bandwidth, with latency less of an issue, because supercomputers are not used for transaction processing.
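The memory-hierarchy effect is easy to demonstrate even on a commodity machine. The sketch below (illustrative only, assuming NumPy is installed; absolute timings vary by hardware) sums the same number of elements twice, once from contiguous memory and once scattered one element per cache line; the strided version is typically several times slower:

```python
# Minimal locality experiment: both calls sum exactly 1,000,000 float64
# values, but the strided version reads one 8-byte element out of every
# 128 bytes, pulling in a fresh cache line per element.
import timeit
import numpy as np

x = np.random.rand(16_000_000)   # ~128 MB of float64, far larger than CPU cache

contiguous = timeit.timeit(lambda: x[:1_000_000].sum(), number=100)
strided = timeit.timeit(lambda: x[::16].sum(), number=100)

print(f"contiguous: {contiguous:.3f}s   strided: {strided:.3f}s")
```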
As with all highly parallel systems, Amdahl's law applies, and supercomputer designers devote great effort to eliminating software serialization and to using hardware to address the remaining bottlenecks.
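Amdahl's law itself is a one-line formula: if a fraction p of a program's runtime can be parallelized across n processors, the overall speedup is at most 1 / ((1 - p) + p/n), so the serial remainder (1 - p) caps the gain no matter how many processors are added. A small worked calculation (illustrative Python, not tied to any particular machine):

```python
# Amdahl's law: upper bound on speedup when a fraction p of the runtime
# parallelizes perfectly across n processors.
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

# Even with 95% of the work parallelized, thousands of processors
# cannot beat the 1 / (1 - p) = 20x asymptote:
for n in (16, 256, 4096):
    print(f"{n:5d} processors -> {amdahl_speedup(0.95, n):5.2f}x speedup")
# 16 -> ~9.1x, 256 -> ~18.6x, 4096 -> ~19.9x
```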
A number of technologies developed for supercomputers, such as vector processing and SIMD (single instruction, multiple data) execution, have since found their way into mainstream hardware.
Modern video game consoles in particular use SIMD extensively, and this is the basis for some manufacturers' claim that their game machines are themselves supercomputers. Indeed, some graphics cards have the computing power of several teraFLOPS. The applications to which this power could be applied were limited by the special-purpose nature of early video processing. As video processing has become more sophisticated, graphics processing units (GPUs) have evolved to become more useful as general-purpose vector processors, and an entire computer science sub-discipline has arisen to exploit this capability: General-Purpose Computing on Graphics Processing Units (GPGPU).
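The underlying SIMD idea is that a single operation is applied to many data elements at once. A minimal sketch of the programming model (NumPy stands in here for the vector hardware; its whole-array expressions execute as vectorized native loops):

```python
# The same multiply-add written element-at-a-time (scalar style) and
# whole-array-at-a-time (SIMD/vector style).
import numpy as np

a = np.arange(1_000_000, dtype=np.float32)
b = np.arange(1_000_000, dtype=np.float32)

# Scalar style: one pair of elements per step (slow in pure Python).
c_scalar = [a[i] * b[i] + 1.0 for i in range(len(a))]

# Vector style: one expression over the entire arrays.
c_vector = a * b + 1.0

assert np.allclose(c_scalar, c_vector)
```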
Supercomputer operating systems, today most often variants of Linux or UNIX, are every bit as complex as those for smaller machines, if not more so. Their user interfaces tend to be less developed, however, as the OS developers have limited programming resources to spend on non-essential parts of the OS (i.e., parts not directly contributing to the optimal utilization of the machine's hardware). This stems from the fact that these computers, often priced at millions of dollars, are sold to a very small market, so their R&D budgets are often limited. (The advent of Unix and Linux allows reuse of conventional desktop software and user interfaces.)
Interestingly, this has been a continuing trend throughout the supercomputer industry, with former technology leaders such as Silicon Graphics taking a back seat to companies such as AMD and NVIDIA, which have been able to produce cheap, feature-rich, high-performance, and innovative products thanks to the vast number of consumers driving their R&D.
Historically, until the early-to-mid-1980s, supercomputers usually sacrificed instruction-set compatibility and code portability for performance (processing and memory-access speed). For the most part, supercomputers up to this time (unlike high-end mainframes) had vastly different operating systems. The Cray-1 alone had at least six different proprietary OSs largely unknown to the general computing community. Similarly, different and incompatible vectorizing and parallelizing Fortran compilers existed. This trend would have continued with the ETA-10 were it not for the initial instruction-set compatibility between the Cray-1 and the Cray X-MP, and the adoption of UNIX operating system variants (such as Cray's Unicos and today's Linux).
For this reason, future highest-performance systems are likely to use a variant of UNIX or a UNIX-like operating system, albeit with incompatible, system-unique features (especially for the highest-end systems at secure facilities).
As of November 2006, the top ten supercomputers on the Top500 list (and indeed the bulk of the remainder of the list) have the same top-level architecture. Each of them is a cluster of MIMD multiprocessors, each processor of which is SIMD. The supercomputers vary radically with respect to the number of multiprocessors per cluster, the number of processors per multiprocessor, and the number of simultaneous instructions per SIMD processor. Within this hierarchy we have:
As of July 2008 the fastest machine is IBM Roadrunner. This machine is a cluster of 3240 computers, each with 40 processing cores. By contrast, Columbia is a cluster of 20 machines, each with 512 processors, each of which processes two data streams concurrently.
Moore's Law and economies of scale are the dominant factors in supercomputer design: a single modern desktop PC is now more powerful than a ten-year-old supercomputer, and the design concepts that allowed past supercomputers to outperform contemporaneous desktop machines have since been incorporated into commodity PCs. Furthermore, the costs of chip development and production make it uneconomical to design custom chips for a small run, and favor mass-produced chips that have enough demand to recoup the cost of production. A current quad-core Xeon workstation running at 2.66 GHz will outperform a multimillion-dollar Cray C90 supercomputer used in the early 1990s; most workloads that required such a supercomputer in the 1990s can now be done on workstations costing less than 4,000 US dollars.
Additionally, many problems carried out by supercomputers are particularly suitable for parallelization (in essence, splitting up into smaller parts to be worked on simultaneously) and, particularly, fairly coarse-grained parallelization that limits the amount of information that needs to be transferred between independent processing units. For this reason, traditional supercomputers can be replaced, for many applications, by "clusters" of computers of standard design which can be programmed to act as one large computer.
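A minimal sketch of this coarse-grained pattern, with local worker processes standing in for cluster nodes (a real cluster would distribute the same structure over a network, for example with MPI):

```python
# Split an embarrassingly parallel job into independent chunks, farm the
# chunks out to workers, and combine the small per-chunk results; only
# these results cross the process boundary.
from multiprocessing import Pool

def partial_sum(bounds):
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))  # independent chunk of work

if __name__ == "__main__":
    n, workers = 10_000_000, 8
    step = n // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    with Pool(workers) as pool:
        total = sum(pool.map(partial_sum, chunks))
    print(total)  # same answer as sum(i*i for i in range(n)), computed in parallel
```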
Examples of special-purpose supercomputers include Deep Blue and Hydra, built for playing chess; GRAPE, for astrophysics and molecular dynamics; MDGRAPE-3, for protein-structure computation; and Deep Crack, for breaking the DES cipher.
On June 8, 2008, the Cell/AMD Opteron-based IBM Roadrunner at the Los Alamos National Laboratory (LANL) was announced as the fastest operational supercomputer, with a sustained processing rate of 1.026 PFLOPS. However, Roadrunner was then taken out of service to be shipped to its new home.
Distributed computing takes this clustered approach to an extreme. One such example is the BOINC platform, a host for a number of distributed computing projects. On August 26, 2008, BOINC recorded a processing power of over 1,080 TFLOPS through over 550,000 active computers on the network. The largest project, SETI@home, reported processing power of over 450 TFLOPS through almost 330,000 active computers.
Another distributed computing project, Folding@home, reported over 3.3 PFLOPS of processing power in August 2008. A little over 1 PFLOPS of this was contributed by clients running on PlayStation 3 systems, and another 1.8 PFLOPS by the project's newly released GPU2 client.
Google's search engine system may be faster still, with an estimated total processing power of between 126 and 316 TFLOPS. The New York Times estimates that the Googleplex and its server farms contain 450,000 servers.
Another project in development by IBM is the Cyclops64 architecture, intended to create a "supercomputer on a chip".
Other petaFLOPS projects include one by Dr. Narendra Karmarkar in India, a CDAC effort targeted for 2010, and the Blue Waters Petascale Computing System funded by the NSF ($200 million) that is being built by the NCSA at the University of Illinois at Urbana-Champaign (slated for completion by 2011).
The table below lists the fastest recorded machine of each era, its peak speed, and its location:

| Year | Supercomputer | Peak speed | Location |
|------|---------------|------------|----------|
| 1942 | Atanasoff–Berry Computer (ABC) | 30 OPS | Iowa State University, Ames, Iowa, USA |
| 1943 | TRE Heath Robinson | 200 OPS | Bletchley Park, Bletchley, UK |
| 1944 | Flowers Colossus | 5 kOPS | Post Office Research Station, Dollis Hill, UK |
| 1946 | ENIAC (before 1948+ modifications) | 100 kOPS | Department of War, Aberdeen Proving Ground, Maryland, USA |
| 1954 | IBM NORC | 67 kOPS | Department of Defense, U.S. Naval Proving Ground, Dahlgren, Virginia, USA |
| 1956 | MIT TX-0 | 83 kOPS | Massachusetts Institute of Technology, Lexington, Massachusetts, USA |
| 1958 | IBM AN/FSQ-7 | 400 kOPS | 25 U.S. Air Force sites across the continental USA and 1 site in Canada (52 computers) |
| 1960 | UNIVAC LARC | 250 kFLOPS | Atomic Energy Commission (AEC), Lawrence Livermore National Laboratory, California, USA |
| 1961 | IBM 7030 "Stretch" | 1.2 MFLOPS | AEC, Los Alamos National Laboratory, New Mexico, USA |
| 1964 | CDC 6600 | 3 MFLOPS | AEC, Lawrence Livermore National Laboratory, California, USA |
| 1969 | CDC 7600 | 36 MFLOPS | AEC, Lawrence Livermore National Laboratory, California, USA |
| 1974 | CDC STAR-100 | 100 MFLOPS | AEC, Lawrence Livermore National Laboratory, California, USA |
| 1975 | Burroughs ILLIAC IV | 150 MFLOPS | NASA Ames Research Center, California, USA |
| 1976 | Cray-1 | 250 MFLOPS | Energy Research and Development Administration (ERDA), Los Alamos National Laboratory, New Mexico, USA (80+ sold worldwide) |
| 1981 | CDC Cyber 205 | 400 MFLOPS | (numerous sites worldwide) |
| 1983 | Cray X-MP/4 | 941 MFLOPS | U.S. Department of Energy (DoE), Los Alamos National Laboratory; Lawrence Livermore National Laboratory; Battelle; Boeing |
| 1984 | M-13 | 2.4 GFLOPS | Scientific Research Institute of Computer Complexes, Moscow, USSR |
| 1985 | Cray-2/8 | 3.9 GFLOPS | DoE, Lawrence Livermore National Laboratory, California, USA |
| 1989 | ETA10-G/8 | 10.3 GFLOPS | Florida State University, Florida, USA |
| 1990 | NEC SX-3/44R | 23.2 GFLOPS | NEC Fuchu Plant, Fuchu, Japan |
| 1993 | Thinking Machines CM-5/1024 | 65.5 GFLOPS | DoE, Los Alamos National Laboratory; National Security Agency |
| 1993 | Fujitsu Numerical Wind Tunnel | 124.50 GFLOPS | National Aerospace Laboratory, Tokyo, Japan |
| 1993 | Intel Paragon XP/S 140 | 143.40 GFLOPS | DoE, Sandia National Laboratories, New Mexico, USA |
| 1994 | Fujitsu Numerical Wind Tunnel | 170.40 GFLOPS | National Aerospace Laboratory, Tokyo, Japan |
| 1996 | Hitachi SR2201/1024 | 220.4 GFLOPS | University of Tokyo, Japan |
| 1996 | Hitachi/Tsukuba CP-PACS/2048 | 368.2 GFLOPS | Center for Computational Physics, University of Tsukuba, Tsukuba, Japan |
| 1997 | Intel ASCI Red/9152 | 1.338 TFLOPS | DoE, Sandia National Laboratories, New Mexico, USA |
| 1999 | Intel ASCI Red/9632 | 2.3796 TFLOPS | DoE, Sandia National Laboratories, New Mexico, USA |
| 2000 | IBM ASCI White | 7.226 TFLOPS | DoE, Lawrence Livermore National Laboratory, California, USA |
| 2002 | NEC Earth Simulator | 35.86 TFLOPS | Earth Simulator Center, Yokohama, Japan |
| 2004 | IBM Blue Gene/L | 70.72 TFLOPS | DoE/IBM Rochester, Minnesota, USA |
| 2005 | IBM Blue Gene/L | 136.8 TFLOPS | DoE/U.S. National Nuclear Security Administration, Lawrence Livermore National Laboratory, California, USA |
| 2008 | IBM Roadrunner | 1.026 PFLOPS | DoE, Los Alamos National Laboratory, New Mexico, USA |