
Amdahl's law, also known as Amdahl's argument, is named after computer architect Gene Amdahl, and is used to find the maximum expected improvement to an overall system when only part of the system is improved. It is often used in parallel computing to predict the theoretical maximum speedup using multiple processors.

The speedup of a program using multiple processors in parallel computing is limited by the time needed for the sequential fraction of the program. For example, if a program needs 20 hours on a single processor core, and a particular 1-hour portion cannot be parallelized while the remaining 19 hours (95%) of the work can be, then no matter how many processors are devoted to the parallelized execution of the program, the minimum execution time cannot be less than that critical 1 hour. Hence the speedup is limited to at most 20×.
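
The limit is easy to check numerically. Below is a minimal sketch (illustrative Python, not part of the original article; the helper name execution_time is our own) that evaluates the 20-hour example for increasing processor counts:

```python
# The 20-hour example: 1 hour is inherently serial, 19 hours can be parallelized.
def execution_time(serial_hours, parallel_hours, n_processors):
    """Total run time when only the parallelizable portion benefits from more processors."""
    return serial_hours + parallel_hours / n_processors

serial, parallel = 1.0, 19.0
single_core = serial + parallel  # 20 hours on one processor

for n in (1, 10, 100, 1000, 10000):
    t = execution_time(serial, parallel, n)
    print(f"{n:>5} processors: {t:7.3f} h, speedup {single_core / t:5.2f}x")
# The speedup approaches, but never reaches, 20x.
```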

More technically, the law is concerned with the speedup achievable from an improvement to a computation that affects a proportion P of that computation where the improvement has a speedup of S. (For example, if an improvement can speed up 30% of the computation, P will be 0.3; if the improvement makes the portion affected twice as fast, S will be 2.) Amdahl's law states that the overall speedup of applying the improvement will be

- $\frac{1}{(1 - P) + \frac{P}{S}}$.

To see how this formula was derived, assume that the running time of the old computation was 1, for some unit of time. The running time of the new computation will be the length of time the unimproved fraction takes (which is 1 − P) plus the length of time the improved fraction takes. The length of time for the improved part of the computation is the improved part's former running time divided by the speedup, making the length of time of the improved part P/S. The final speedup is computed by dividing the old running time by the new running time, which is what the above formula does.
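
As a quick check of the formula, here is a minimal sketch (illustrative Python; the function name amdahl_speedup is our own) applied to the earlier example of speeding up 30% of a computation by a factor of 2:

```python
# Overall speedup from Amdahl's law: 1 / ((1 - P) + P / S).
def amdahl_speedup(p, s):
    """p: fraction of the computation improved; s: speedup of that fraction."""
    return 1.0 / ((1.0 - p) + p / s)

print(amdahl_speedup(0.3, 2.0))  # ~1.176: about an 18% overall improvement
```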

Here is another example. We are given a task that is split up into four parts: P1 = 11%, P2 = 18%, P3 = 23%, and P4 = 48%, which add up to 100%. P1 is not sped up, so S1 = 1 (100%); P2 is sped up 5×, so S2 = 500%; P3 is sped up 20×, so S3 = 2000%; and P4 is sped up 1.6×, so S4 = 160%. Using the formula P1/S1 + P2/S2 + P3/S3 + P4/S4, we find that the new running time, as a fraction of the original, is

- 0.11/1 + 0.18/5 + 0.23/20 + 0.48/1.6 = 0.4575

or a little less than half the original running time, so the overall speedup is 1/0.4575 ≈ 2.19.
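
A short sketch (illustrative Python) confirming the arithmetic of this four-part example:

```python
# Fractions of the original task and the speedup applied to each part.
parts    = [0.11, 0.18, 0.23, 0.48]
speedups = [1.0, 5.0, 20.0, 1.6]

# New running time, as a fraction of the original, is the sum of P_i / S_i.
new_time = sum(p / s for p, s in zip(parts, speedups))
print(new_time)        # 0.4575
print(1.0 / new_time)  # ~2.19x overall speedup
```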

In the case of parallelization, if P is the fraction of a program that can be made parallel (i.e., can benefit from additional processors) and (1 − P) is the fraction that cannot, then the maximum speedup that can be achieved by using N processors is

- $\frac{1}{(1 - P) + \frac{P}{N}}$

In the limit, as N tends to infinity, the maximum speedup tends to 1 / (1-P). In practice, performance/price falls rapidly as N is increased once there is even a small component of (1 − P).

As an example, if P is 90%, then (1 − P) is 10%, and the problem can be sped up by a maximum of a factor of 10, no matter how large the value of N used. For this reason, parallel computing is only useful for either small numbers of processors, or problems with very high values of P: so-called embarrassingly parallel problems. A great part of the craft of parallel programming consists of attempting to reduce (1-P) to the smallest possible value.
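
The ceiling can be seen numerically. This sketch (illustrative Python) evaluates the parallel form of the law for P = 0.9 and shows the speedup approaching, but never exceeding, 1/(1 − P) = 10:

```python
# Parallel form of Amdahl's law: speedup with N processors for parallel fraction P.
def parallel_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

P = 0.9
for n in (2, 4, 16, 64, 256, 1024, 1000000):
    print(f"N = {n:>7}: speedup = {parallel_speedup(P, n):.3f}")
# Output climbs toward, but never reaches, 1 / (1 - P) = 10.
```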

Amdahl's law represents the law of diminishing returns when considering what sort of return one gets by adding more processors to a machine that is running a fixed-size computation using all available processors to their capacity. Each new processor added to the system contributes less usable power than the previous one. Each time the number of processors is doubled, the speedup ratio diminishes, as the total throughput heads toward the limit of

- $\frac{1}{1 - P}$.
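
A small sketch (illustrative Python) makes the diminishing returns concrete for a program whose parallel fraction is P = 0.95:

```python
# Marginal gain from each doubling of the processor count, for P = 0.95.
def parallel_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

P = 0.95
prev = parallel_speedup(P, 1)
for k in range(1, 11):  # N = 2, 4, ..., 1024
    n = 2 ** k
    s = parallel_speedup(P, n)
    print(f"N = {n:>4}: speedup {s:6.2f}  (gain over previous doubling: {s / prev:.3f}x)")
    prev = s
# Each doubling buys less than the one before; speedup creeps toward 1 / (1 - P) = 20.
```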

This analysis neglects other potential bottlenecks, such as memory bandwidth and I/O bandwidth, if they do not scale with the number of processors; however, taking such bottlenecks into account would tend to further demonstrate the diminishing returns of only adding processors.

The maximum speedup of an improved sequential program, where some part was sped up by a factor of $p$, is

- Max. Speedup $\le \frac{p}{1 + f \cdot (p - 1)}$

where $f$ ($0 < f < 1$) is the fraction of time (before the improvement) spent in the part that was not improved. For example,

- If part B is made five times faster, then p = 5.0, $t_n = 3$ seconds (part A, which is not improved) and $t_i = 1$ second (part B), so
  - $f = t_n / (t_n + t_i) = 0.75$
  - Max. Speedup $\le \frac{5}{1 + 0.75 \cdot (5 - 1)} = 1.25$
- If instead part A is made to run twice as fast, then p = 2.0, $t_n = 1$ second (part B, which is not improved) and $t_i = 3$ seconds (part A), so
  - $f = t_n / (t_n + t_i) = 0.25$
  - Max. Speedup $\le \frac{2}{1 + 0.25 \cdot (2 - 1)} = 1.60$ (better!)

Therefore, making part A twice as fast is better than making part B five times faster.

- Improving part A by a factor of two will result in a +60% increase in overall program speed.
- However, improving part B by a factor of 5 (which presumably requires more effort) will only achieve an overall speedup of +25%.
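
The comparison can be reproduced in a few lines (illustrative Python; the helper name max_sequential_speedup is our own):

```python
# Maximum speedup when one part of a sequential program is sped up by a factor p,
# where f is the fraction of the original time spent in the part that is NOT improved.
def max_sequential_speedup(p, f):
    return p / (1.0 + f * (p - 1.0))

# Part A takes 3 s and part B takes 1 s, as in the example above.
print(max_sequential_speedup(5.0, 0.75))  # speed up part B by 5x -> 1.25
print(max_sequential_speedup(2.0, 0.25))  # speed up part A by 2x -> 1.60
```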

Amdahl's law also does not take into account that problem sizes may be scaled up as the number of processors increases, which typically reduces the relative amount of non-parallelizable work.

Amdahl's Rule of Thumb states that 1 byte of memory and 1 byte per second of I/O capacity are required for each instruction per second supported by a computer. It is also known as Amdahl's Other Law.
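
As a rough worked illustration (our own numbers; the helper name balanced_configuration is hypothetical), the rule suggests that a machine sustaining 500 million instructions per second would be balanced with roughly 500 MB of memory and 500 MB/s of I/O bandwidth:

```python
# Amdahl's Rule of Thumb: ~1 byte of memory and ~1 byte/s of I/O
# for every instruction per second the machine sustains.
def balanced_configuration(instructions_per_second):
    """Return (memory_bytes, io_bytes_per_second) suggested by the rule of thumb."""
    return instructions_per_second, instructions_per_second

mem, io = balanced_configuration(500_000_000)  # a hypothetical 500 MIPS machine
print(f"~{mem / 1e6:.0f} MB of memory, ~{io / 1e6:.0f} MB/s of I/O bandwidth")
```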

- Speedup
- Amdahl Corporation
- Ninety-ninety rule
- Gustafson's Law
- Karp-Flatt Metric
- Brooks's law
- Moore's Law

- Gene Amdahl, " Validity of the Single Processor Approach to Achieving Large-Scale Computing Capabilities", AFIPS Conference Proceedings, (30), pp. 483-485, 1967. Note: Gene Amdahl has approved the use of his complete text in the Usenet comp.sys.super news group FAQ which goes out on the 20th of each month.
- Rodgers, David P. (1985) Improvements in multiprocessor system design ACM SIGARCH Computer Architecture News archive Volume 13 , Issue 3 (June 1985) table of contents Special Issue: Proceedings of the 12th annual International Symposium on Computer Architecture (ISCA '85) Pages: 225 - 231 Year of Publication: 1985 ISSN:0163-5964. Also published in International Symposium on Computer Architecture, Proceedings of the 12th annual international symposium on Computer architecture, 1985 , Boston, Massachusetts, United States

- Gene Amdahl. Oral history interview. Charles Babbage Institute, University of Minnesota, Minneapolis.
- Reevaluating Amdahl's Law
- Reevaluating Amdahl's Law and Gustafson's Law
- A simple interactive Amdahl's Law calculator
- "Amdahl's Law" by Joel F. Klein, The Wolfram Demonstrations Project, 2007.
- Amdahl's Law in the Multicore Era
- Blog Post: "What the $#@! is Parallelism, Anyhow?"
