Finding the index of a specific value in a sorted list is useful because, given the index, other data structures will contain associated information. Suppose a data structure containing the classic collection of name, address, telephone number and so forth has been accumulated, and an array is prepared containing the names, numbered from one to N. A query might be: what is the telephone number for a given name X? To answer this, the array would be searched and the index (if any) corresponding to that name determined, whereupon the associated telephone-number array would have X's telephone number at that index, and likewise the address array and so forth. Appropriate provision must be made for the name not being in the list (typically by returning an index value of zero); indeed, the question of interest might be only whether X is in the list or not.
If the list of names is in sorted order, a binary search will find a given name with far fewer probes than the simple procedure of probing each name in the list, one after the other in a linear search, and the procedure is much simpler than organizing a hash table. However, once created, searching with a hash table may well be faster, typically averaging just over one probe per lookup. With a non-uniform distribution of values, if it is known that some few items are much more likely to be sought for than the majority, then a linear search with the list ordered so that the most popular items are first may do better than binary search. The choice of the best method may not be immediately obvious. If, between searches, items in the list are modified or items are added or removed, maintaining the required organisation may consume more time than the searches.
Even if the number we're guessing can be arbitrarily large, in which case there is no upper bound N, we can still find the number in at most 2⌈log₂ k⌉ steps (where k is the (unknown) selected number) by first finding an upper bound by repeated doubling. For example, if the number were 11, we could use the following sequence of guesses to find it: 1, 2, 4, 8, 16, 12, 10, 11.
One could also extend the technique to include negative numbers; for example the following guesses could be used to find −13: 0, −1, −2, −4, −8, −16, −12, −14, −13
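This doubling-then-narrowing strategy (often called exponential or galloping search) can be sketched for positive targets as follows; a Python sketch with illustrative names, returning the whole guess sequence so as to mirror the examples above:

```python
def unbounded_search(secret):
    """Find a positive integer with no known upper bound:
    double a guess until it overshoots, then binary-search the
    interval between the last two guesses."""
    guesses = []
    hi = 1
    while True:
        guesses.append(hi)
        if hi == secret:
            return guesses
        if hi > secret:
            break                  # overshot: secret lies in (hi//2, hi)
        hi *= 2
    lo = hi // 2                   # the last guess that was too low
    # Standard binary search on the open interval (lo, hi).
    while True:
        mid = (lo + hi) // 2
        guesses.append(mid)
        if mid == secret:
            return guesses
        if mid < secret:
            lo = mid
        else:
            hi = mid
```

For the number 11 this produces exactly the guess sequence 1, 2, 4, 8, 16, 12, 10, 11 given in the text; extending it to negative targets would only require starting from 0 and doubling downwards.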
For example, suppose we could answer "Does this n × n matrix have determinant larger than k?" in O(n²) time. Then, by using binary search, we could find the (ceiling of the) determinant itself in O(n² log d) time, where d is the determinant; notice that d is not the size of the input, but the size of the output.
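The same trick works for any yes/no question that is monotone in k (false up to some point, true from then on). A generic Python sketch, with illustrative names, that recovers the threshold from such a test:

```python
def smallest_k(predicate, lo, hi):
    """Return the smallest integer k in [lo, hi] for which
    predicate(k) is True, assuming predicate is monotone
    (False ... False True ... True) and predicate(hi) is True."""
    while lo < hi:
        mid = (lo + hi) // 2
        if predicate(mid):
            hi = mid            # mid might be the answer; keep it in range
        else:
            lo = mid + 1        # the answer is strictly above mid
    return lo
```

Each call halves the candidate range, so only O(log(hi − lo)) evaluations of the (possibly expensive) test are needed.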
In order to discuss the method in detail, a more formal description is necessary. The basic idea is that there is a data structure represented by array A in which individual elements are identified as A(1), A(2),…,A(N) and may be accessed in any order. The data structure contains a sub-element or data field called here Key, and the array is ordered so that the successive values A(1).Key ≤ A(2).Key and so on. The requirement is that given some value x, find an index p (not necessarily the one and only) such that A(p).Key = x.
To begin with, the span to be searched is the full supplied list of elements, as marked by variables L and R, and their values are changed with each iteration of the search process, as depicted by the flowchart. Note that the division by two is integer division, with any remainder lost, so that 3/2 comes out as 1, not 1½. The search finishes either because the value has been found or because the specified value is not in the list.
The initialisation of L and R to 0 and N + 1 makes this merely a restatement of the supplied problem, that elements 1 to N are to be searched, so the notion is established to begin with. The first step of each iteration is to check that there is something to search, which is to say whether there are any elements in the search span (L + 1) to (R − 1). The number of such elements is (R − L − 1), so computing (R − L) gives (number of elements + 1); halving that number (with integer division) means that if there was one element (or more) then p = 1 (or more), but if none, p = 0, and in that case the method terminates with the report "Not found". Otherwise, for p > 0, the search continues with p:=L + p, which by construction is within the bounds (L + 1) to (R − 1). That this position is at or adjacent to the middle of the span is not important here, merely that it is a valid choice.
Now compare x to A(p).Key. If x = A(p).Key then the method terminates in success. Otherwise, suppose x < A(p).Key. If so, then because the array is in sorted order, x will also be less than all later elements of the array, all the way to element (R − 1) as well. Accordingly, the value of the right-hand bound index R can be changed to be the value p, since, by the test just made, x < A(p).Key and so, if x is to be found, it will be amongst elements earlier than p, that is (p − 1) and earlier. And contrariwise, for the case x > A(p).Key, the value of L would be changed. Thus, whichever bound is changed the ruling notion is upheld, and further, the span remaining to be searched is reduced. If L is changed, it is changed to a higher number (at least L + 1), whereas if R is changed, it is to a lower number (at most R − 1) because those are the limits for p.
Should there have been just one value remaining in the search span (so that L + 1 = p = R − 1), and x did not match, then depending on the sign of the comparison either L or R will receive the value of p and at the start of the next iteration the span will be found to be empty.
Accordingly, with each iteration, if the search span is empty the result is "Not found", otherwise either x is found at the probe point p or the search span is reduced for the next iteration. Thus the method works, and so can be called an Algorithm.
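The method just described can be rendered directly in code. A minimal Python sketch (the list is 0-based internally, but positions are reported 1-based, with 0 meaning "Not found", as in the text; the function name is illustrative):

```python
def binary_search_exclusive(keys, x):
    """Search sorted list `keys` using exclusive bounds L and R:
    the live span is positions (L+1)..(R-1).  Returns a 1-based
    position of x, or 0 for 'Not found'."""
    n = len(keys)
    left, right = 0, n + 1            # just outside the whole list
    while True:
        p = (right - left) // 2        # integer division, remainder lost
        if p == 0:
            return 0                   # empty span: not found
        p = left + p                   # a valid probe in (L+1)..(R-1)
        if x == keys[p - 1]:           # keys is 0-based internally
            return p
        if x < keys[p - 1]:
            right = p                  # x, if present, lies before p
        else:
            left = p                   # x, if present, lies after p
```

Note that the loop shrinks the span on every iteration in which no match is found, which is exactly the termination argument made above.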
Suppose that the number of items in the list is odd. Then there is a definite middle element at p = (N + 1)/2 – this is proper division without discarding remainders. If that element's key does not match x, then the search proceeds either with the first half, elements 1 to p − 1 or the second, elements p + 1 to N. These two spans are equal in extent, having (N − 1)/2 elements. Conversely, suppose that the number of elements is even. Then the probe element will be p = N/2 and again, if there is no match one or the other of the subintervals will be chosen. They are not equal in size; the first has N/2 − 1 elements and the second (elements p + 1 to N as before) has N/2 elements.
Now supposing that it is just as likely that N will be even as odd in general, on average an interval with N elements in it will become an interval with (N − 1)/2 elements.
Working in the other direction, what might be the maximum number of elements that could be searched in p probes? Clearly, one probe can check a list with one element only (to report a match, or, "not found") and two probes can check a list of three elements. This is not very impressive because a linear search would only require three probes at most for that. But now the difference increases exponentially. Seven elements can be checked with three probes, fifteen with four, and so forth. In short, to search N elements requires at most p probes where p is the smallest integer such that 2^p ≥ N + 1. Taking the binary logarithm of both sides, p ≥ log₂(N + 1).
Or, p = ⌈log₂(N + 1)⌉ (with any fractional part rounded up to the next integer) is the maximum number of probes required to search N elements.
For searches that will succeed because the value is in the list, the search may finish early because a probed value happens to match. Loosely speaking, half the time the search will finish one iteration short of the maximum and a quarter of the time, two early. Consider then a test in which a list of N elements is searched once for each of the N values in the list, and determine the number of probes n for all N searches.
  N   =   1    2    3    4     5     6     7     8     9    10    11    12    13
  n/N =   1  3/2  5/3  8/4  11/5  14/6  17/7  21/8  25/9 29/10 33/11 37/12 41/13
      =   1  1.5 1.66    2   2.2  2.33  2.43  2.63  2.78   2.9     3  3.08  3.15
In short, log₂(N) − 1 is about the expected number of probes in an average successful search, and the worst case is log₂(N), just one more probe. If the list is empty, no probes at all are made.
Suppose the list to be searched contains N even numbers (say, 2, 4, 6, 8 for N = 4) and a search is done for values 1, 2, 3, 4, 5, 6, 7, 8, and 9. The even numbers will be found, and the average number of iterations can be calculated as described. In the case of the odd numbers, they will not be found, and the collection of test values probes every possible position (with regard to the numbers that are in the list) that they might be not found in, and an average is calculated. The maximum value is, for each N, the greatest number of iterations that were required amongst the various trial searches of those N elements. The first plot shows the iteration counts for N = 1 to 63 (with N = 1, all results are 1), and the second plot is for N = 1 to 32767.
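The totals n in the table above can be reproduced by counting probes over one search for each element, assuming the midpoint rule p := L + (R − L) div 2 described earlier (a Python sketch; the function name is illustrative):

```python
def probes_for_all(n):
    """Search a sorted list of n elements once for each element,
    returning the total number of probes over all n searches."""
    keys = list(range(1, n + 1))
    total = 0
    for x in keys:
        left, right = 0, n + 1         # exclusive bounds, as in the text
        while True:
            p = (right - left) // 2
            if p == 0:
                break                  # empty span (cannot happen here)
            p = left + p
            total += 1                 # one probe of keys[p-1]
            if x == keys[p - 1]:
                break
            if x < keys[p - 1]:
                right = p
            else:
                left = p
    return total
```

For N = 4 this yields 8 probes in total, and for N = 13 it yields 41, matching the table.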
Thus binary search is a logarithmic algorithm and executes in O(log N) time. In most cases it is considerably faster than a linear search. It can be implemented using iteration (as shown above), or recursion. In some languages it is more elegantly expressed recursively; however, in some C-based languages tail recursion is not eliminated and the recursive version requires more stack space.
Binary search can interact poorly with the memory hierarchy (i.e. caching), because of its random-access nature. For in-memory searching, if the interval to be searched is small, a linear search may have superior performance simply because it exhibits better locality of reference. For external searching, care must be taken or each of the first several probes will lead to a disk seek. A common technique is to abandon binary searching for linear searching as soon as the size of the remaining interval falls below a small value such as 8 or 16, or even more on recent computers. The exact value depends entirely on the machine running the algorithm.
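Such a hybrid can be sketched as follows (a Python sketch with illustrative names; the cutoff of 8 is merely one of the values suggested above):

```python
def hybrid_search(keys, x, cutoff=8):
    """Binary search that falls back to a linear scan once the
    remaining interval holds at most `cutoff` elements, in the
    hope of better locality of reference.  Returns an index
    into `keys`, or -1 if x is absent."""
    lo, hi = 0, len(keys) - 1            # inclusive bounds
    while hi - lo + 1 > cutoff:
        mid = lo + (hi - lo) // 2
        if keys[mid] < x:
            lo = mid + 1
        elif keys[mid] > x:
            hi = mid - 1
        else:
            return mid
    for i in range(lo, hi + 1):          # short final linear scan
        if keys[i] == x:
            return i
    return -1
```

Whether this actually wins depends on the cache behaviour of the machine at hand, so the cutoff would normally be tuned by measurement.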
Notice that for multiple searches with a fixed value for N, then (with the appropriate regard for integer division), the first iteration always selects the middle element at N/2, and the second always selects either N/4 or 3N/4, and so on. Thus if the array's key values are in some sort of slow storage (on a disc file, in virtual memory, not in the cpu's on-chip memory), keeping those three keys in a local array for a special preliminary search will avoid accessing widely separated memory. Escalating to seven or fifteen such values will allow further levels at not much cost in storage. On the other hand, if the searches are frequent and not separated by much other activity, the computer's various storage control features will more or less automatically promote frequently-accessed elements into faster storage.
When multiple binary searches are to be performed for the same key in related lists, fractional cascading can be used to speed up successive searches after the first one.
In more complex contexts, it might be that the data structure has many sub fields, such as a telephone number along with the name. An indexing array such as xref could be introduced so that elements A(xref(1)).Telephone ≤ A(xref(2)).Telephone … ≤ A(xref(N)).Telephone so that, "viewed through" array xref the array can be regarded as being sorted on the telephone number, and a search would be to find a given telephone number. In this case, A(i).Key would be replaced by A(xref(i)).Telephone and all would be as before. Thus, with auxiliary xref arrays, an array can be treated as if it is sorted in different ways without it having to be resorted for each different usage.
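A sketch of searching "through" such an xref array (in Python, with illustrative field names standing in for the data structure's sub-fields):

```python
def search_via_xref(records, xref, key, value):
    """Binary-search `records` as seen through index array `xref`,
    which lists record indices in ascending order of records[i][key].
    Returns the matching record, or None."""
    lo, hi = 0, len(xref) - 1
    while lo <= hi:
        mid = lo + (hi - lo) // 2
        probe = records[xref[mid]][key]   # indirect access via xref
        if probe < value:
            lo = mid + 1
        elif probe > value:
            hi = mid - 1
        else:
            return records[xref[mid]]
    return None
```

The records themselves never move; only the small xref array must be kept in the desired order, and several such arrays can coexist for different sort keys.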
When a search returns the result "Not found", it may be helpful to have some indication as to where the missing value would be located so that the list can be augmented. A possible approach would be to return the value −L (rather than just −1 or 0, say), the negative indicating failure. This however can conflict with the array indexing protocol, if it includes zero as a valid index (since of course −0 = 0, and 0 would be a findable result) so caution is needed.
The elements of the list are not necessarily all unique. If one searches for a value that occurs multiple times in the list, the index returned will be of the first-encountered equal element, and this will not necessarily be that of the first, last, or middle element of the run of equal-key elements but will depend on the positions of the values. Modifying the list even in seemingly unrelated ways such as adding elements elsewhere in the list may change the result. To find all equal elements an upward and downward linear search can be carried out from the initial result, stopping each search when the element is no longer equal. Thus, e.g. in a table of cities sorted by country, we can find all cities in a given country.
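The outward scans can be sketched as follows (a Python sketch; any standard binary search supplies the starting index, here recomputed inline):

```python
def equal_range(keys, x):
    """Locate one occurrence of x by binary search, then scan
    outwards linearly to delimit the whole run of equal keys.
    Returns (first, last) inclusive indices, or None if absent."""
    lo, hi = 0, len(keys) - 1
    hit = -1
    while lo <= hi:                      # ordinary binary search
        mid = lo + (hi - lo) // 2
        if keys[mid] < x:
            lo = mid + 1
        elif keys[mid] > x:
            hi = mid - 1
        else:
            hit = mid                    # some equal element, not
            break                        # necessarily first or last
    if hit < 0:
        return None
    first = hit
    while first > 0 and keys[first - 1] == x:
        first -= 1                       # scan down to the run's start
    last = hit
    while last < len(keys) - 1 and keys[last + 1] == x:
        last += 1                        # scan up to the run's end
    return (first, last)
```

If runs of equal keys can be very long, the linear scans could themselves be replaced by two further binary searches for the run's boundaries.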
A list of pairs (p,q) can be sorted based on just p. Then the comparisons in the algorithm need only consider the values of p, not those of q. For example, in a table of cities sorted on a column "country" we can find cities in Germany by comparing country names with "Germany", instead of comparing whole rows. Such partial content is called a sort key.
Several algorithms closely related to or extending binary search exist. For instance, noisy binary search solves the same class of problems as regular binary search, with the added complexity that any given test can return a false value at random. (Usually, the number of such erroneous results is bounded in some way, either in the form of an average error rate, or in the total number of errors allowed per element in the search space.) Optimal algorithms for several classes of noisy binary search problems have been known since the late seventies, and more recently, optimal algorithms for noisy binary search in quantum computers (where several elements can be tested at the same time) have been discovered.
Thus, the gain of a simpler initialisation, done once, is lost to a more complex calculation that is done for every iteration. If that is not enough, the test for an empty span is more complex also, as compared to the simplicity of checking that the value of p is zero. Nevertheless, this is the form found in many publications, such as Donald Knuth's The Art of Computer Programming, Volume 3: Sorting and Searching (Third Edition).
   IntDiv   2          Do not store this intermediate result into p yet.
   JumpZN   NotFound   If the result is zero or negative, jump to NotFound.
   Store    p

That is, the (human) compiler has recognised that in the three statements p:=(R − L)/2; if p <= 0 return(−L); p:=p + L; the value of p is already in the working register and need not be stored and retrieved until the end, where it is stored once. The two-statement version, p:=(L + R)/2; if p <= L return(−L); would become

   Store    p          Thus p:=(L + R)/2;
   Subtract L          Compare p to L, for if p <= L
   JumpZN   NotFound

This version has the slight disadvantages of an unnecessary store to p if NotFound is reached, and that the value of p is no longer in the working register ready to be used to index the array in A(p).Key of the next statement; but ordinarily it will involve the same number of actions, though computer compilers may not produce such code. Thus a tie: there is no advantage in locating the middle in one step. However, there is a very good reason not to use the two-statement form: the risk of overflow described below.
When Jon Bentley assigned it as a problem in a course for professional programmers, he found that an astounding ninety percent failed to code a binary search correctly after several hours of working on it, and another study found that accurate code for it appears in only five out of twenty textbooks (Kruse, 1999). Furthermore, Bentley's own implementation of binary search, published in his 1986 book Programming Pearls, contains an error that remained undetected for over twenty years.
Careful thought is required. The first issue is minor to begin with – how to signify "Not found". If the array is indexed 1 to N, then a returned index of zero is an obvious choice. However, some computer languages (notably C and its relatives) insist that arrays have a lower bound of zero. In such a case, the array might be indexed 0 to N − 1, and so a negative result would be chosen for "Not found". This, however, can interfere with the desire to use unsigned integers for indexing: if the plan is to return −L for "not found", then unsigned integers cannot be used at all.
It is of course unlikely that sixteen-bit integers would be used when the collections being searched number around thirty thousand, but a second problem arises much sooner. A common variation computes the midpoint of the interval in one step, as p:=(L + R)/2; this means that the sum must not exceed the sixteen-bit limit for all to be well, and this detail is easily forgotten. The problem may be concealed on some computers, which use wider registers to perform sixteen-bit arithmetic so that there is no overflow of intermediate results. But on a different computer, perhaps not. Thus, when tested and working code is transferred from a 16-bit environment (in which there were never more than about fifteen thousand elements to be searched) to a 32-bit environment, and the problem sizes then steadily inflate, the forgotten limit can suddenly become relevant again. This was the mistake that went unnoticed for decades, and it is found in many textbooks, which concentrate on the description of the method in a context where integer limits seem far away.
To put this in simple terms, if the computer variables can hold a value of 0 to max then the binary search method will only work for N up to max − 1, not for all possible values of N. Reducing a limit from max to max − 1 is not an onerous constraint, however, if the variant form p:=(L + R)/2 is used, then should a search wander into indices beyond max/2 it will fail. Losing half the range for N is worth avoiding.
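The effect can be illustrated by simulating narrow registers. Python's own integers never overflow, so a 16-bit mask stands in for sixteen-bit machine arithmetic here (the numbers and names are illustrative):

```python
MASK = 0xFFFF                     # pretend we have 16-bit unsigned registers

def mid_one_step(lo, hi):
    """p := (L + R)/2 -- the sum may exceed the register width,
    silently wrapping around before the halving."""
    return ((lo + hi) & MASK) // 2

def mid_two_step(lo, hi):
    """p := L + (R - L)/2 -- every intermediate value fits
    whenever L and R themselves fit."""
    return (lo + ((hi - lo) & MASK) // 2) & MASK
```

With lo = 40000 and hi = 60000 (both comfortably within sixteen bits), the one-step form computes its midpoint from a wrapped-around sum and lands far outside the interval, while the two-step form yields the correct 50000.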
if a < b then action1
else if a > b then action2
else action3;

About half the time, the first test will be true so that there will be only one comparison of a and b, but the other half of the time it will be false, and a second comparison forced. This is so grievous that some versions are recast so as not to make a second test at all, not determining equality until the span has been reduced to zero, and thereby forgoing the possibility of early termination – remember that about half the time the search will happen on a matching value one iteration short of the limit. The problem is exacerbated for floating-point variables that offer the special value NaN, which violates even the notion of equality: x = x is false if x has the value NaN!
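One such recast, deferring the equality test until the span has shrunk to a single candidate, can be sketched as follows (a Python sketch with illustrative names; note that with duplicate keys it lands on the last element of a run):

```python
def deferred_search(keys, x):
    """Binary search making only one key comparison per iteration;
    equality is tested just once, after the span shrinks to a
    single candidate.  Returns an index or -1."""
    lo, hi = 0, len(keys)            # half-open interval [lo, hi)
    while hi - lo > 1:
        mid = lo + (hi - lo) // 2
        if keys[mid] > x:            # the only comparison in the loop
            hi = mid
        else:
            lo = mid                 # keys[mid] <= x: keep mid in play
    if lo < len(keys) and keys[lo] == x:
        return lo                    # the single deferred equality test
    return -1
```

The loop always runs the full ⌈log₂ N⌉ iterations, trading away early termination for one comparison per iteration.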
Since Fortran does offer a three-way test, here is a version for searching an array of floating-point numbers. For labels Fortran uses numbers at the start of statements, thus the 1, 2, 3, and 4. The if statement performs a go to to one of the three nominated labels according to the sign of its arithmetic expression. It can be seen that the flow chart of this routine corresponds to the flow chart of a proven working method, and so, the code should work.
Well, yes and no… Leaving aside the problem of integer bounds, it remains possible that the routine might be presented with perverse parameters. For instance, N < 0 would cause trouble, and for this reason the test is if (P <= 0) rather than if (P = 0), as it can be performed with no extra effort. Similarly, the values in array A might not in fact be in sorted order, or the actual array size might be smaller than N. To check that the array is sorted requires inspecting every value, and this vitiates the whole reason for searching with a fast method. The proof of correctness relies on the presumption that the array is sorted, and not meeting such requirements is not the fault of the method. Deciding how much checking to perform, and what to do on failure, is a troublesome issue.
We can eliminate the tail recursion and convert the procedure to an iterative implementation.
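A sketch of such an iterative form (in Python, using 0-based inclusive bounds; the original listing is not reproduced here, so names are illustrative):

```python
def binary_search(keys, x, lo=0, hi=None):
    """Iterative binary search over keys[lo..hi] (inclusive, 0-based).
    Returns an index of x, or -1 if x is absent."""
    if hi is None:
        hi = len(keys) - 1
    while lo <= hi:
        mid = lo + (hi - lo) // 2    # overflow-safe in fixed-width ints
        if keys[mid] < x:
            lo = mid + 1
        elif keys[mid] > x:
            hi = mid - 1
        else:
            return mid
    return -1
```

The loop carries the same invariant as the recursive form (x, if present, lies within keys[lo..hi]) but needs only constant stack space.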
In practice, one frequently uses a three-way comparison instead of two comparisons per loop. Also, real implementations using fixed-width integers with modular arithmetic need to account for the possibility of overflow. One frequently-used technique for this is to compute the midpoint as mid := low + (high − low)/2, so that two smaller numbers are ultimately added.
C++'s Standard Template Library provides the functions binary_search(), lower_bound() and upper_bound(). Java offers a set of overloaded binarySearch() static methods in the classes Arrays and Collections for performing binary searches on Java arrays and Lists, respectively. They must be arrays of primitives, or the arrays or Lists must be of a type that implements the Comparable interface, or you must specify a custom Comparator object. Microsoft's .NET Framework 2.0 offers static generic versions of the binary search algorithm in its collection base classes; an example would be System.Array's method BinarySearch<T>(T[] array, T value). Python provides the bisect module. COBOL can perform binary search on internal tables using the SEARCH ALL statement.