
In engineering, mathematics, physics and similar disciplines, the term negligible refers to quantities so small that they can be safely ignored (neglected) when studying a larger effect. Although related to the more rigorous mathematical concept of the infinitesimal, the idea of negligibility is particularly useful in practical disciplines such as physics, chemistry, mechanical and electronic engineering, computer programming and everyday decision-making. A quantity can be said to be negligible when it is safe to ignore it in the case at hand, within whatever margins of error have been agreed to be acceptable.

An example would be a car moving at 10 km/h along a straight horizontal road. Five main forces act on this car: its weight, the reaction force of the road opposing the weight, the friction of the wheels on the road, the force of the engine, and air resistance. The forces that have the most effect on this car are the weight, the reaction opposing the weight and the friction. To describe the motion of the car mathematically in the simplest possible way, only four of these forces (weight, reaction, friction and engine force) need be considered. Air resistance can be disregarded because the car is moving at such a low speed: although it has an effect, that effect is so small that for most purposes it is safe to treat it as absent, avoiding unnecessarily complicated calculations. In this case the force is "negligible".

In none of these cases can the perfect circuit element actually exist in practice. Consider, for example, the perfect voltage source. If one existed, it would have no internal impedance and would maintain its rated voltage, say 5 V DC, across any load, no matter what current that required. As the load impedance approached zero ohms (a perfect short circuit, which also cannot truly exist), the current flow and power delivery would approach infinity. This is simultaneously impossible, impractical and undesirable. So, rather than abandon the idea of the 'voltage source', we simply remember that the concept has limitations and work within them. To continue the example, we need to derive a specification for a practical voltage source. Perhaps we can say that the current draw will never exceed 2 A, and that supply voltages between 4.999 V and 5.001 V will produce errors that are themselves negligible for the practical purposes of the remaining circuitry. If the voltage source may drop at most 0.002 V (5.001 - 4.999) at a current of 2 A, its output impedance must be no more than 0.002 V / 2 A = 0.001 Ω, or one milliohm.
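The arithmetic above can be sketched in a few lines; the numbers are the ones used in the example (a hypothetical 5 V source with a 2 A worst-case load), not values from any particular part.

```python
# Worst-case output impedance for the "practical" 5 V source described
# above: the output must stay between 4.999 V and 5.001 V while the
# load draws up to 2 A.
V_MIN = 4.999   # lowest acceptable output, volts
V_MAX = 5.001   # highest acceptable output, volts
I_MAX = 2.0     # worst-case load current, amps

allowed_drop = V_MAX - V_MIN              # the 0.002 V window
max_output_impedance = allowed_drop / I_MAX

print(f"Maximum output impedance: {max_output_impedance * 1000:.1f} mOhm")
```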

Now we have a practical case - our voltage source with its negligible 1 mΩ output impedance will produce voltages that only deviate from 5.0 V by negligible amounts, provided the current requirements remain within spec.

In another case these discrepancies might be far too large: perhaps any voltage below 4.999999 V or above 5.000001 V would be unacceptable. This is no problem; we simply tighten the specification. There is always a practical limit.

In yet another case we may have a good 12.0 V supply available and a requirement that any voltage between 4 V and 6 V is acceptable, with no more than 2 mA ever drawn. In this case a pair of 2 kΩ resistors in a simple voltage divider may suffice as a voltage source. This is hardly ideal, but it meets the requirements.
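A quick Thevenin-equivalent check shows why the divider qualifies: unloaded it sits at 6 V, and even at the full 2 mA draw its 1 kΩ source impedance only pulls it down to 4 V, the bottom of the acceptable band.

```python
# Thevenin check of the divider above: two 2 kOhm resistors across a
# 12 V rail, with loads of up to 2 mA.
V_SUPPLY = 12.0     # volts
R_TOP = 2000.0      # ohms
R_BOTTOM = 2000.0   # ohms
I_LOAD_MAX = 0.002  # amps

v_open = V_SUPPLY * R_BOTTOM / (R_TOP + R_BOTTOM)      # unloaded output
r_thevenin = (R_TOP * R_BOTTOM) / (R_TOP + R_BOTTOM)   # source impedance
v_loaded = v_open - I_LOAD_MAX * r_thevenin            # output at full load

print(f"Unloaded: {v_open} V, at 2 mA: {v_loaded} V")  # both within 4-6 V
```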

The important point in the latter example is that, once drawn or soldered in place, the electronic engineer will continue to look upon the voltage divider, to a first approximation, as an ideal voltage source because as far as this requirement is concerned, that is what it is. Its practical discrepancies are negligible compared to the specification at this point. It is an important part of the engineer's skill, however, always to remember the assumptions and simplifications inherent in this thinking and to be able quickly to identify when cost savings can be made by reducing a specification requirement as well as when new requirements invalidate previously acceptable assumptions.

Similar examples could be created for any of the 'ideal' circuit elements listed above, and many more, from RF frequency mixers to the simplest switch.

Design calculations are often simplified by omitting anything that will affect the answer by less than 1%, and when dealing with Pythagorean equations of the form ...

- $z = \sqrt{r^2 + x^2},$

... it is especially useful to note that if r is less than x/10 (or x is less than r/10), then neglecting r (or x) changes the answer for z by less than 1%.
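The rule of thumb is easy to verify numerically at the borderline case r = x/10, where the error peaks at roughly half a percent:

```python
import math

# Check the rule of thumb above: when r <= x/10, dropping the r term
# changes z = sqrt(r^2 + x^2) by less than 1%.
x = 100.0
r = x / 10.0                 # the borderline case

z_exact = math.hypot(r, x)   # sqrt(r**2 + x**2), computed stably
z_approx = x                 # r neglected entirely
error_pct = (z_exact - z_approx) / z_exact * 100

print(f"exact {z_exact:.4f}, approx {z_approx:.4f}, error {error_pct:.3f}%")
```

For any r smaller than x/10 the error only shrinks, since z_exact moves closer to x.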

Similarly in technical design there is, in each case, some probability that an electronic product may be used in the vicinity of a powerful radio transmitter, that mains-borne power surges may occur, that its batteries may go flat while in use, and so on. The designer has to consider each of these, writing some off as outside the specified requirements while accepting that others clearly cannot be. There is a very large number of uncontrollable possibilities that any designed product (not just an electronic one) may have to contend with; knowing where to draw each line can become very difficult.

Decisions made in this area can easily spell success or failure in a competitive marketplace. Your competitors may have been able to make significant cost savings by not handling cases that almost never occur, or that do not trouble the general public when they do. On the other hand, failures in cases that users see as perfectly normal may quickly give your product an unacceptably bad reputation.

A computer system's internal representation of floating-point numbers normally approximates the real numbers closely enough that errors are negligible under most circumstances. But if two very nearly equal values are subtracted, the result may be very far from what is expected, because the tiny representation errors suddenly dominate the difference.
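A minimal illustration of this hazard, using standard double-precision arithmetic:

```python
# Each value below is within a negligible distance of the real number it
# represents, but subtraction destroys that negligibility.
a = 1e16 + 1.0   # 1e16 + 1 is not representable; it rounds back to 1e16
b = 1e16
print(a - b)     # 0.0, although the true difference is 1

# The same effect in milder form: individually tiny rounding errors
# dominate once nearly equal values are subtracted.
print(0.1 + 0.2 - 0.3)   # a small nonzero residue, not 0.0
```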

There are many other ways that assumptions about the "negligible" errors in these digital representations may cause problems at run time or later, including analog-to-digital conversion, where resolution and bit rates are necessarily limited, and financial calculations, where floating-point or other imprecise number systems do not 'take care of the pennies'. All of these have something in common with the electronic engineering issues discussed above, although they differ in nature.
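One common response to the financial case is to refuse the "negligible" binary rounding error altogether and use a decimal type; Python's decimal module is used here purely as an illustration of that choice.

```python
from decimal import Decimal

# Summing three dimes: binary floating point drifts, decimal is exact.
binary_total = sum([0.10] * 3)              # binary floating point
decimal_total = sum([Decimal("0.10")] * 3)  # exact decimal arithmetic

print(binary_total)    # 0.30000000000000004 -- the pennies drift
print(decimal_total)   # 0.30 -- exact
```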

Modern computer programming languages provide the mechanism of throwing and catching exceptions, so that the developer can handle these and many other possibilities without making the structure and logic of the code impenetrable to readers and future developers. Some languages, for example Java, are designed to remind the developer, at each point, of the exceptions that may be thrown and that should therefore be caught, handled or declared of negligible interest. Others, like C#, provide the mechanism but do not enforce the practice in this way.
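A sketch of the pattern (in Python, whose exceptions are all unchecked; the file path and fallback behaviour are hypothetical): each anticipated failure is either handled or deliberately written off, without obscuring the main logic.

```python
def read_config(path):
    """Read a configuration file, treating a missing file as negligible."""
    try:
        with open(path) as f:
            return f.read()
    except FileNotFoundError:
        # Judged negligible for this application: fall back to defaults.
        return ""
    # Other OSErrors (permissions, disk failure) deliberately propagate:
    # the caller must decide whether *they* are negligible.

print(repr(read_config("/no/such/file")))  # prints '' via the fallback
```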

These examples relate to the probabilities of failure introduced by a computer's I/O systems.

Wikipedia, the free encyclopedia © 2001-2006 Wikipedia contributors (Disclaimer)

This article is licensed under the GNU Free Documentation License.

Last updated on Thursday September 04, 2008 at 12:15:14 PDT (GMT -0700)

View this article at Wikipedia.org - Edit this article at Wikipedia.org - Donate to the Wikimedia Foundation

