A binary search algorithm (or binary chop) is a technique for locating a particular value in a sorted list of values. To cast this in the frame of the guessing game (see the example below), realize that we seek to guess the index, or numbered place, of the value in the list. The method makes progressively better guesses, and closes in on the location of the sought value by selecting the middle element in the span (which, because the list is in sorted order, is the median value), comparing its value to the target value, and determining whether the selected value is greater than, less than, or equal to the target value. A guessed index whose value turns out to be too high becomes the new upper bound of the span, and if its value is too low that index becomes the new lower bound. Only the sign of the difference is inspected: there is no attempt at an interpolation search based on the size of the difference. Pursuing this strategy iteratively, the method reduces the search span by a factor of two each time, and soon finds the target value or else determines that it is not in the list at all. A binary search is an example of a dichotomic divide-and-conquer search algorithm.

## Examples

### Number guessing game

This rather simple game begins something like "I'm thinking of an integer between forty and sixty inclusive, and to your guesses I'll respond 'High', 'Low', or 'Yes!' as might be the case."
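The number of halving questions needed for N equally likely values can be computed with integer arithmetic. This is an illustrative sketch; the class and method names are mine, not from the article:

```java
// Illustrative helper: how many halving questions are needed to pin
// down one of n possible values. Computes the smallest q with 2^q >= n,
// i.e. ceil(log2 n), using integers to avoid floating-point rounding.
public class GuessCount {
    public static int questionsNeeded(int n) {
        int q = 0;
        long span = 1;              // values distinguishable with q questions
        while (span < n) {
            span *= 2;
            q++;
        }
        return q;
    }
}
```

For the game above, `questionsNeeded(21)` gives 5, matching the bound stated next.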
Supposing that N is the number of possible values (here, twenty-one, since "inclusive" was stated), at most $\lceil \log_2 N \rceil$ questions are required to determine the number, since each question halves the search space. Note that one less question (iteration) is required than for the general algorithm, since the number is already constrained to be within a particular range.

### Word lists

People typically use a mixture of the binary search and interpolation search algorithms when searching a telephone book: after the initial guess, we exploit the fact that the entries are sorted and can rapidly find the required entry. For example, when searching for Smith, if Rogers and Thomas have already been found, one can flip to a page about halfway between the previous guesses. If this shows Samson, we know that Smith is somewhere between the Samson and Thomas pages, so we can bisect that span.
### Applications to complexity theory

Even if we do not know a fixed range the number k falls in, we can still determine its value by asking $2\lceil \log_2 k \rceil$ simple yes/no questions of the form "Is k greater than x?" for some number x. As a simple consequence of this, if you can answer the question "Is this integer property k greater than a given value?" in some amount of time, then you can find the value of that property in the same amount of time with an added factor of $\log_2 k$. This is called a reduction, and it is because of this kind of reduction that most complexity theorists concentrate on decision problems: algorithms that produce a simple yes/no answer.

## The method

### That it works

The method relies on and upholds the notion: if x is to be found, it will be amongst elements (L + 1) to (R − 1) of the array.

### That it is fast

The interval being searched is successively halved (actually slightly better than halved) in width with each iteration, so the number of iterations required is at most the base-two logarithm of N (zero for empty lists). More precisely, each probe both removes one element from further consideration and selects one or the other half of the interval for further searching.

#### Average performance

There are two cases. For searches that will fail because the value is not in the list, the search interval must be successively halved until no more elements remain, and this process will require at most the p probes just defined, or one less. This latter occurs because the search interval is not in fact exactly halved, and, depending on the value of N and which elements of the list the absent value x lies between, the interval may be closed early.

### Extensions

There is no particular requirement that the array being searched has the bounds 1 to N. It is possible to search a specified range, elements first to last, instead of 1 to N. All that is necessary is that the initialisation be L:=first − 1 and R:=last + 1; then all proceeds as before.

### Variations

There are many, and they are easily confused.
#### Exclusive or inclusive bounds

The most significant differences are between the "exclusive" and "inclusive" forms of the bounds. This description uses the "exclusive" bound form, that is, the span to be searched is (L + 1) to (R − 1), and this may seem clumsy when the span to be searched could be described in the "inclusive" form, as L to R. This form may be attained by replacing all appearances of "L" by "(L − 1)" and "R" by "(R + 1)" and then rearranging. Thus, the initialisation of L:=0 becomes (L − 1):=0 or L:=1, and R:=N + 1 becomes (R + 1):=N + 1 or R:=N. So far so good, but note now that the changes to L and R are no longer simply transferring the value of p to L or R as appropriate, but must now be (R + 1):=p or R:=p − 1, and (L − 1):=p or L:=p + 1.

#### Locate the middle in one step

The other main variation is to combine the two-step calculation of the probe position, p:=(R − L)/2; p:=p + L; into one step, p:=(L + R)/2; which is indeed less work. But the saving is lost because, rather than if p <= 0 (which tests the just-computed value of p directly), the test becomes if p <= L, and this second form requires the subtraction of the value of L. Pseudo machine code should read somewhat as follows:
#### Deferred detection of equality

Because of the syntax difficulties discussed below, whereby distinguishing the three states <, =, and > would have to be done with two comparisons, it is possible to use just one comparison per iteration and, at the end when the span is reduced to zero, test for equality. The example distinguishes only < from >=.
#### Midpoint and width

An entirely different variation involves abandoning the L and R pointers in favour of a current position p and a width w where at each iteration, p is adjusted by + or − w and w is halved. Professor Knuth remarks "It is possible to do this, but only if extreme care is paid to the details" – Section 6.2.1, page 414 of The Art of Computer Programming, Volume 3: Sorting and Searching, Third Edition, outlines an algorithm, with the further remark "Simpler approaches are doomed to failure!"
## Computer usage

"Although the basic idea of binary search is comparatively straightforward, the details can be surprisingly tricky…" — Professor Donald Knuth

### Numerical difficulties

More serious are the limitations of computer arithmetic. Variables have limited size; for instance, the (very common) sixteen-bit two's complement signed integer can hold only values from −32768 to +32767. (Exactly the same problems arise with unsigned or other sizes of integers, except that the digit strings are longer.) If the array is to be indexed with such variables, then the values first − 1 and last + 1 must be representable; that is, last ≤ 32766, not 32767. Using the "inclusive" form of the method won't help, because although R might safely hold the value 32767, should the sought value x follow the last element in the array, then eventually the search will compare x to A(p).Key (with p safely holding 32767) and then attempt to store p + 1 into L, and fail. Similarly, the lower bound first may be zero (for arrays whose indexing starts at zero), in which case the value −1 must be representable, which precludes the use of unsigned integers. General-purpose testing is unlikely to present a test with these boundaries exercised, and so the detail can be overlooked. Formal proofs often do not attend to differences between computer arithmetic and mathematics.

### Syntax difficulties

Another difficulty is presented by the absence in most computer languages of a three-way result from a comparison, which forces a comparison to be performed twice. The form is somewhat as follows:
## Implementations

### Recursive

The most straightforward implementation is recursive, which recursively searches the subrange dictated by the comparison:
It is invoked with initial `low` and `high` values of `0` and `N-1`.

### Iterative

### Single comparison per iteration

Some implementations may not include the early-termination branch, preferring to check at the end whether the value was found, as shown below. Checking whether the value was found during the search (as opposed to at the end of the search) may seem a good idea, but there are extra computations involved in each iteration of the search. Also, with an array of length N using the low and high indices, the probability of actually finding the value on the first iteration is 1 / N, and the probability of finding it later on (before the end) is about 1 / (high − low). The following checks for the value at the end of the search:
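The code sample announced here does not survive in this copy. What follows is a sketch of such a loop (the names are mine): it uses a single `<` comparison per iteration on the half-open range [low, high) and tests for equality only once, after the loop.

```java
public class DeferredEquality {
    // Narrow the half-open range [low, high) with one comparison per
    // iteration; test for equality only once, after the loop.
    // Returns the index of value, or -1 if absent. On a miss, low is
    // the position where value could be inserted to keep the order.
    public static int search(int[] a, int value) {
        int low = 0, high = a.length;
        while (low < high) {
            int mid = low + (high - low) / 2;   // overflow-safe midpoint
            if (a[mid] < value)
                low = mid + 1;                  // value lies to the right
            else
                high = mid;                     // value lies at mid or left
        }
        return (low < a.length && a[low] == value) ? low : -1;
    }
}
```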
This algorithm has two other advantages. At the end of the loop, low points to the first entry greater than or equal to value, so a new entry can be inserted if no match is found. Moreover, it requires only one comparison per iteration, which could be significant for complex keys in languages which do not allow the result of a comparison to be saved.

## Language support

Many standard libraries provide a way to do binary search. C provides `bsearch()` in its standard library. C++'s STL provides suitable algorithm functions.

## See also

## References

## External links

Finding the index of a specific value in a sorted list is useful because, given the index, other data structures will contain associated information. Suppose a data structure containing the classic collection of name, address, telephone number and so forth has been accumulated, and an array is prepared containing the names, numbered from one to N. A query might be: what is the telephone number for a given name X. To answer this the array would be searched and the index (if any) corresponding to that name determined, whereupon the associated telephone number array would have X's telephone number at that index, and likewise the address array and so forth. Appropriate provision must be made for the name not being in the list (typically by returning an index value of zero), indeed the question of interest might be only whether X is in the list or not.

If the list of names is in sorted order, a binary search will find a given name with far fewer probes than the simple procedure of probing each name in the list, one after the other in a linear search, and the procedure is much simpler than organizing a hash table. However, once created, searching with a hash table may well be faster, typically averaging just over one probe per lookup. With a non-uniform distribution of values, if it is known that some few items are much more likely to be sought for than the majority, then a linear search with the list ordered so that the most popular items are first may do better than binary search. The choice of the best method may not be immediately obvious. If, between searches, items in the list are modified or items are added or removed, maintaining the required organisation may consume more time than the searches.

Even if the number we're guessing can be arbitrarily large, in which case there is no upper bound N, we can still find the number in at most $2\lceil \log_2 k \rceil$ steps (where k is the unknown selected number) by first finding an upper bound by repeated doubling. For example, if the number were 11, we could use the following sequence of guesses to find it: 1, 2, 4, 8, 16, 12, 10, 11.

One could also extend the technique to include negative numbers; for example the following guesses could be used to find −13: 0, −1, −2, −4, −8, −16, −12, −14, −13
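The two phases described above, repeated doubling followed by an ordinary binary search inside the last doubled interval, can be sketched as follows for positive targets (the names are mine; the negative-number extension is omitted, and the comparisons against k stand in for the oracle's answers):

```java
public class UnboundedGuess {
    // Find an unknown positive integer k with no known upper bound:
    // double a trial bound until it reaches k, then binary search the
    // interval (bound/2, bound]. Roughly 2*ceil(log2 k) probes in all.
    public static int find(int k) {
        int bound = 1;
        while (bound < k)                    // doubling phase: 1, 2, 4, 8, ...
            bound *= 2;
        int low = bound / 2 + 1, high = bound;
        while (low < high) {                 // binary search phase
            int mid = low + (high - low) / 2;
            if (mid < k) low = mid + 1;      // "is k greater than mid?" yes
            else high = mid;                 // no: k is at mid or below
        }
        return low;
    }
}
```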

For example, suppose we could answer "Does this n × n matrix have determinant larger than k?" in $O(n^2)$ time. Then, by using binary search, we could find the (ceiling of the) determinant itself in $O(n^2 \log d)$ time, where d is the determinant; notice that d is not the size of the input, but the size of the output.
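The reduction can be phrased generically: any monotone yes/no question of the form "is the hidden quantity greater than x?" pins down the quantity with one question per halving. A sketch, with all names mine:

```java
import java.util.function.LongPredicate;

public class PredicateSearch {
    // Recover a hidden value k in [lo, hi] using only the monotone
    // question "is k greater than x?" (true exactly when x < k).
    // Each probe halves the remaining interval: about log2(hi - lo) probes.
    public static long recover(long lo, long hi, LongPredicate greaterThan) {
        while (lo < hi) {
            long mid = lo + (hi - lo) / 2;
            if (greaterThan.test(mid)) lo = mid + 1;  // k > mid
            else hi = mid;                            // k <= mid
        }
        return lo;
    }
}
```

For instance, `recover(0, 100, x -> x < 42)` asks the question repeatedly and returns 42.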

In order to discuss the method in detail, a more formal description is necessary. The basic idea is that there is a data structure represented by array A in which individual elements are identified as A(1), A(2),…,A(N) and may be accessed in any order. The data structure contains a sub-element or data field called here Key, and the array is ordered so that the successive values A(1).Key ≤ A(2).Key and so on. The requirement is that given some value x, find an index p (not necessarily the one and only) such that A(p).Key = x.

To begin with, the span to be searched is the full supplied list of elements, as marked by variables L and R, and their values are changed with each iteration of the search process, as depicted by the flowchart. Note that the division by two is integer division, with any remainder lost, so that 3/2 comes out as 1, not 1½. The search finishes either because the value has been found, or else, the specified value is not in the list.

The initialisation of L and R to 0 and N + 1 make this merely a restatement of the supplied problem, that elements 1 to N are to be searched, so the notion is established to begin with. The first step of each iteration is to check that there is something to search, which is to say whether there are any elements in the search span (L + 1) to (R − 1). The number of such elements is (R − L − 1) so computing (R − L) gives (number of elements + 1); halving that number (with integer division) means that if there was one element (or more) then p = 1 (or more), but if none p = 0, and in that case the method terminates with the report "Not found". Otherwise, for p > 0, the search continues with p:=L + p, which by construction is within the bounds (L + 1) to (R − 1). That this position is at or adjacent to the middle of the span is not important here, merely that it is a valid choice.

Now compare x to A(p).Key. If x = A(p).Key then the method terminates in success. Otherwise, suppose x < A(p).Key. If so, then because the array is in sorted order, x will also be less than all later elements of the array, all the way to element (R − 1) as well. Accordingly, the value of the right-hand bound index R can be changed to be the value p, since, by the test just made, x < A(p).Key and so, if x is to be found, it will be amongst elements earlier than p, that is (p − 1) and earlier. And contrariwise, for the case x > A(p).Key, the value of L would be changed. Thus, whichever bound is changed the ruling notion is upheld, and further, the span remaining to be searched is reduced. If L is changed, it is changed to a higher number (at least L + 1), whereas if R is changed, it is to a lower number (at most R − 1) because those are the limits for p.

Should there have been just one value remaining in the search span (so that L + 1 = p = R − 1), and x did not match, then depending on the sign of the comparison either L or R will receive the value of p and at the start of the next iteration the span will be found to be empty.

Accordingly, with each iteration, if the search span is empty the result is "Not found", otherwise either x is found at the probe point p or the search span is reduced for the next iteration. Thus the method works, and so can be called an Algorithm.
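The exclusive-bound method just argued can be transcribed almost line for line. This sketch (names mine) uses a 0-based Java array to model the 1-based array A(1)..A(N), returning the 1-based index or 0 for "not found", as in the text:

```java
public class ExclusiveBounds {
    // Exclusive bounds: the span still to be searched is elements
    // (L+1)..(R-1) of the conceptual 1-based array, so L starts at 0
    // and R at N+1. Returns the 1-based index of x, or 0 if absent.
    public static int search(int[] a, int x) {
        int L = 0, R = a.length + 1;
        while (true) {
            int p = (R - L) / 2;           // integer division, remainder lost
            if (p <= 0) return 0;          // empty span: not found
            p = L + p;                     // probe index in (L+1)..(R-1)
            if (x < a[p - 1]) R = p;       // x, if present, is before p
            else if (x > a[p - 1]) L = p;  // x, if present, is after p
            else return p;                 // found at 1-based position p
        }
    }
}
```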

Suppose that the number of items in the list is odd. Then there is a definite middle element at p = (N + 1)/2 – this is proper division without discarding remainders. If that element's key does not match x, then the search proceeds either with the first half, elements 1 to p − 1 or the second, elements p + 1 to N. These two spans are equal in extent, having (N − 1)/2 elements. Conversely, suppose that the number of elements is even. Then the probe element will be p = N/2 and again, if there is no match one or the other of the subintervals will be chosen. They are not equal in size; the first has N/2 − 1 elements and the second (elements p + 1 to N as before) has N/2 elements.

Now supposing that it is just as likely that N will be even as odd in general, on average an interval with N elements in it will become an interval with (N − 1)/2 elements.

Working in the other direction, what might be the maximum number of elements that could be searched in p probes? Clearly, one probe can check a list with one element only (to report a match, or "not found") and two probes can check a list of three elements. This is not very impressive, because a linear search would require only three probes at most for that, but now the difference increases exponentially. Seven elements can be checked with three probes, fifteen with four, and so forth. In short, to search N elements requires at most p probes, where p is the smallest integer such that $2^p > N$. Taking the base-two logarithm of both sides, $p > \log_2 N$.

Or: $\log_2 N$ (with any fractional part rounded up to the next integer) is the maximum number of probes required to search N elements.

For searches that will succeed because the value is in the list, the search may finish early because a probed value happens to match. Loosely speaking, half the time the search will finish one iteration short of the maximum and a quarter of the time, two early. Consider then a test in which a list of N elements is searched once for each of the N values in the list, and determine the number of probes n for all N searches.

| N | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 |
|---|---|---|---|---|---|---|---|---|---|----|----|----|----|
| n | 1 | 3 | 5 | 8 | 11 | 14 | 17 | 21 | 25 | 29 | 33 | 37 | 41 |
| n/N | 1 | 1.5 | 1.66 | 2 | 2.2 | 2.33 | 2.43 | 2.63 | 2.78 | 2.9 | 3 | 3.08 | 3.15 |

In short, $\log_2 N - 1$ is about the expected number of probes in an average successful search, and the worst case is $\log_2 N$, just one more probe. If the list is empty, no probes at all are made.

Suppose the list to be searched contains N even numbers (say, 2, 4, 6, 8 for N = 4) and a search is done for the values 1, 2, 3, 4, 5, 6, 7, 8, and 9. The even numbers will be found, and the average number of iterations can be calculated as described. The odd numbers will not be found, and this collection of test values probes every possible position (with regard to the numbers that are in the list) at which they might be found absent, and an average is calculated. The maximum value is, for each N, the greatest number of iterations that were required amongst the various trial searches of those N elements. The first plot shows the iteration counts for N = 1 to 63 (with N = 1, all results are 1), and the second plot is for N = 1 to 32767.

Thus binary search is a logarithmic algorithm and executes in $O(\log N)$ time. In most cases it is considerably faster than a linear search. It can be implemented using iteration (as shown above), or recursion. In some languages it is more elegantly expressed recursively; however, in some C-based languages tail recursion is not eliminated and the recursive version requires more stack space.

Binary search can interact poorly with the memory hierarchy (i.e. caching), because of its random-access nature. For in-memory searching, if the interval to be searched is small, a linear search may have superior performance simply because it exhibits better locality of reference. For external searching, care must be taken or each of the first several probes will lead to a disk seek. A common technique is to abandon binary searching for linear searching as soon as the size of the remaining interval falls below a small value such as 8 or 16 or even more in recent computers. The exact value depends entirely on the machine running the algorithm.

Notice that for multiple searches with a fixed value for N, then (with the appropriate regard for integer division), the first iteration always selects the middle element at N/2, and the second always selects either N/4 or 3N/4, and so on. Thus if the array's key values are in some sort of slow storage (on a disc file, in virtual memory, not in the cpu's on-chip memory), keeping those three keys in a local array for a special preliminary search will avoid accessing widely separated memory. Escalating to seven or fifteen such values will allow further levels at not much cost in storage. On the other hand, if the searches are frequent and not separated by much other activity, the computer's various storage control features will more or less automatically promote frequently-accessed elements into faster storage.

When multiple binary searches are to be performed for the same key in related lists, fractional cascading can be used to speed up successive searches after the first one.

In more complex contexts, it might be that the data structure has many sub fields, such as a telephone number along with the name. An indexing array such as xref could be introduced so that elements A(xref(1)).Telephone ≤ A(xref(2)).Telephone … ≤ A(xref(N)).Telephone so that, "viewed through" array xref the array can be regarded as being sorted on the telephone number, and a search would be to find a given telephone number. In this case, A(i).Key would be replaced by A(xref(i)).Telephone and all would be as before. Thus, with auxiliary xref arrays, an array can be treated as if it is sorted in different ways without it having to be resorted for each different usage.
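A sketch of searching "through" such an index array follows. All names are hypothetical, and a `long` key stands in for the telephone number:

```java
public class XrefSearch {
    // keys[i] is the key of record i, in arbitrary order; xref[j] lists
    // record positions so that keys[xref[0]] <= keys[xref[1]] <= ...
    // Returns the record index whose key equals wanted, or -1.
    public static int search(long[] keys, int[] xref, long wanted) {
        int low = 0, high = xref.length - 1;
        while (low <= high) {
            int mid = low + (high - low) / 2;
            long k = keys[xref[mid]];          // the only change: indirection
            if (k < wanted) low = mid + 1;
            else if (k > wanted) high = mid - 1;
            else return xref[mid];
        }
        return -1;
    }
}
```

With `keys = {30, 10, 20}` the sorted view is given by `xref = {1, 2, 0}`, and searching for 20 returns record 2.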

When a search returns the result "Not found", it may be helpful to have some indication as to where the missing value would be located so that the list can be augmented. A possible approach would be to return the value −L (rather than just −1 or 0, say), the negative indicating failure. This however can conflict with the array indexing protocol, if it includes zero as a valid index (since of course −0 = 0, and 0 would be a findable result) so caution is needed.
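One encoding that sidesteps the −0 problem is to return −(insertion point) − 1 on failure, which is negative even when the insertion point is 0; this is, for instance, the convention of Java's `Arrays.binarySearch`. A sketch:

```java
public class InsertionPoint {
    // On success, returns the index of x. On failure, returns
    // -(insertionPoint) - 1, always negative, where insertionPoint is
    // the index at which x could be inserted to keep the array sorted.
    public static int search(int[] a, int x) {
        int low = 0, high = a.length - 1;
        while (low <= high) {
            int mid = low + (high - low) / 2;
            if (a[mid] < x) low = mid + 1;
            else if (a[mid] > x) high = mid - 1;
            else return mid;
        }
        return -(low + 1);   // low ends up at the insertion point
    }
}
```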

The elements of the list are not necessarily all unique. If one searches for a value that occurs multiple times in the list, the index returned will be of the first-encountered equal element, and this will not necessarily be that of the first, last, or middle element of the run of equal-key elements but will depend on the positions of the values. Modifying the list even in seemingly unrelated ways such as adding elements elsewhere in the list may change the result. To find all equal elements an upward and downward linear search can be carried out from the initial result, stopping each search when the element is no longer equal. Thus, e.g. in a table of cities sorted by country, we can find all cities in a given country.
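The outward linear scans can be sketched as follows (names mine), given any index `hit` at which a prior binary search found an element equal to x:

```java
import java.util.ArrayList;
import java.util.List;

public class EqualRun {
    // Given one index hit with a[hit] == x, walk left to the start of
    // the run of equal elements, then collect indices rightwards.
    public static List<Integer> allIndicesOf(int[] a, int x, int hit) {
        int i = hit;
        while (i > 0 && a[i - 1] == x) i--;       // back up to the run start
        List<Integer> out = new ArrayList<>();
        while (i < a.length && a[i] == x)         // sweep forward over the run
            out.add(i++);
        return out;
    }
}
```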

A list of pairs (p,q) can be sorted based on just p. Then the comparisons in the algorithm need only consider the values of p, not those of q. For example, in a table of cities sorted on a column "country" we can find cities in Germany by comparing country names with "Germany", instead of comparing whole rows. Such partial content is called a sort key.

Several algorithms closely related to or extending binary search exist. For instance, noisy binary search solves the same class of problems as regular binary search, with the added complexity that any given test can return a false value at random. (Usually, the number of such erroneous results is bounded in some way, either in the form of an average error rate or in the total number of errors allowed per element in the search space.) Optimal algorithms for several classes of noisy binary search problems have been known since the late seventies, and more recently, optimal algorithms for noisy binary search on quantum computers (where several elements can be tested at the same time) have been discovered.

Thus, the gain of a simpler initialisation, done once, is lost to a more complex calculation done for every iteration. If that is not enough, the test for an empty span is also more complex, compared to the simplicity of checking that the value of p is zero. Nevertheless, this is the form found in many publications, such as Donald Knuth's The Art of Computer Programming, Volume 3: Sorting and Searching, Third Edition.

```
Load R
Subtract L
IntDiv 2          ; do not store this intermediate result into p yet
JumpZN NotFound   ; if the result is zero or negative, jump to NotFound
Add L
Store p
```

That is, the (human) compiler has recognised that in the three statements p:=(R − L)/2; if p <= 0 return(−L); p:=p + L; the value of p is already in the working register and need not be stored and retrieved until the end, where it is stored once. The two-statement version, p:=(L + R)/2; if p <= L return(−L); would become

```
Load L
Add R
IntDiv 2
Store p           ; thus p:=(L + R)/2
Subtract L        ; compare p to L, for if p <= L
JumpZN NotFound
```

This version has the slight disadvantages of an unnecessary store to p if NotFound, and that the value of p is no longer in the working register ready to be used to index the array in A(p).Key of the next statement; but ordinarily it will involve the same number of actions, though computer compilers may not produce such code. Thus a tie: there is no advantage in locating the middle in one step. However, there is a very good reason not to use the two-statement form: the risk of overflow, described below.

When Jon Bentley assigned it as a problem in a course for professional programmers, he found that an astounding ninety percent failed to code a binary search correctly after several hours of working on it, and another study shows that accurate code for it is only found in five out of twenty textbooks (Kruse, 1999). Furthermore, Bentley's own implementation of binary search, published in his 1986 book Programming Pearls, contains an error that remained undetected for over twenty years.

Careful thought is required. The first issue is minor to begin with – how to signify "Not found". If the array is indexed 1 to N, then a returned index of zero is an obvious choice. However, some computer languages (notably C et al) insist that arrays have a lower bound of zero. In such a case, the array might be indexed 0 to N − 1 and so a negative result would be chosen for "Not found". Except that this can interfere with the desire to use unsigned integers for indexing. If the plan is to return −L for "not found", then unsigned integers cannot be used at all.

It is of course unlikely that sixteen-bit integers will be used when the collections being searched number around thirty thousand, but a second problem arises much sooner. A common variation computes the midpoint of the interval in one step, as p:=(L + R)/2; this means that the sum must not exceed the sixteen-bit limit for all to be well, and this detail is easily forgotten. The problem may be concealed on some computers, which use wider registers to perform sixteen-bit arithmetic so that there is no overflow of intermediate results. But on a different computer, perhaps not. Thus, when tested and working code was transferred from a 16-bit environment (in which there were never more than about fifteen thousand elements being searched) to a 32-bit environment, and the problem sizes then steadily inflated, the forgotten limit suddenly became relevant again. This was the mistake that went unnoticed for decades, and it is found in many textbooks, which concentrate on the description of the method in a context where integer limits seem far away.

To put this in simple terms, if the computer variables can hold a value of 0 to max then the binary search method will only work for N up to max − 1, not for all possible values of N. Reducing a limit from max to max − 1 is not an onerous constraint, however, if the variant form p:=(L + R)/2 is used, then should a search wander into indices beyond max/2 it will fail. Losing half the range for N is worth avoiding.
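The hazard is easy to demonstrate with Java's 32-bit `int`; the values below are chosen to force the wrap-around, and the method names are mine:

```java
public class MidpointOverflow {
    // (low + high) overflows once the sum passes 2^31 - 1, wrapping to a
    // negative number, so the computed "midpoint" becomes negative too.
    public static int naiveMid(int low, int high) {
        return (low + high) / 2;
    }
    // Adding half the width to low keeps every intermediate value in range.
    public static int safeMid(int low, int high) {
        return low + (high - low) / 2;
    }
}
```

For example, `naiveMid(2_000_000_000, 2_100_000_000)` comes out negative, while `safeMid` gives the true midpoint 2,050,000,000.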

```
if a < b then action1
else if a > b then action2
else action3;
```

About half the time the first test will be true, so there will be only one comparison of a and b, but the other half of the time it will be false and a second comparison forced. This is so grievous that some versions are recast so as not to make a second test at all, thus not determining equality until the span has been reduced to zero, thereby foregoing the possibility of early termination; remember that about half the time the search will happen on a matching value one iteration short of the limit. The problem is exacerbated for floating-point variables offering the special value NaN, which violates even the notion of equality: x = x is false if x has the value NaN!

Since Fortran does offer a three-way test, here is a version for searching an array of floating-point numbers. For labels, Fortran uses numbers at the start of statements, thus the 1, 2, 3, and 4. The arithmetic if statement branches to one of three nominated labels according to the sign of its expression. It can be seen that the flow chart of this routine corresponds to the flow chart of a proven working method, and so the code should work.

Well, yes and no… Leaving aside the problem of integer bounds, it remains possible that the routine might be presented with perverse parameters. For instance, N < 0 would cause trouble, and for this reason the test is if (P <= 0) rather than if (P = 0) as it can be performed with no extra effort. Similarly, the values in array A might not in fact be in sorted order, or the actual array size might be smaller than N. To check that the array is sorted requires inspecting every value, and this vitiates the whole reason for searching with a fast method. The proof of correctness relies on its presumption that the array is sorted, etc., and not meeting these requirements is not the fault of the method. Deciding how much checking and what to do is a troublesome issue.

It is invoked with initial `low` and `high` values of `0` and `N-1`. We can eliminate the tail recursion above and convert this to an iterative implementation. Java sample:

```java
public static int binarySearch(int[] a, int low, int high, int searchValue) {
    // recursive version
    if (high < low)
        return -1;                 // empty range: not found
    int mid = (low + high) / 2;    // see the overflow caveat below
    if (a[mid] > searchValue)
        return binarySearch(a, low, mid - 1, searchValue);
    else if (a[mid] < searchValue)
        return binarySearch(a, mid + 1, high, searchValue);
    else                           // a[mid] is the search value
        return mid;
}
```

In practice, one frequently uses a three-way comparison instead of two comparisons per loop. Also, real implementations using fixed-width integers with modular arithmetic need to account for the possibility of overflow. One frequently used technique is to compute the midpoint as mid := low + (high − low) / 2, so that two smaller numbers are ultimately added.

C++'s `binary_search`, `lower_bound` and `upper_bound` operate on sorted ranges. Java offers a set of overloaded `binarySearch()` static methods in the classes `Arrays` and `Collections` for performing binary searches on Java arrays and Lists, respectively. They must be arrays of primitives, or the arrays or Lists must be of a type that implements the `Comparable` interface, or you must specify a custom `Comparator` object. Microsoft's .NET Framework 2.0 offers static generic versions of the binary search algorithm in its collection base classes; an example is `System.Array`'s method `BinarySearch<T>(T[] array, T value)`. Python provides the `bisect` module. COBOL can perform binary search on internal tables using the `SEARCH ALL` statement.
- Index (information technology) Very fast 'lookup' using an index to directly select an entry
- Branch tables Alternative indexed 'lookup' technique for decision making
- Self-balancing binary search tree
- Run-time analysis, illustrating binary search technique on machines of differing speeds

- Donald Knuth. The Art of Computer Programming, Volume 3: Sorting and Searching, Third Edition. Addison-Wesley, 1997. ISBN 0-201-89685-0. Section 6.2.1: Searching an Ordered Table, pp.409–426.
- Kruse, Robert L.: "Data Structures and Program Design in C++", Prentice-Hall, 1999, ISBN 0-13-768995-0, page 280.
- Netty van Gasteren, Wim Feijen. The Binary Search Revisited, AvG127/WF214, 1995. (investigates the foundations of the Binary Search, debunking the myth that it applies only to sorted arrays)

- NIST Dictionary of Algorithms and Data Structures: binary search
- Sparknotes: Binary search Simplified overview of binary search.
- Binary Search Implementation in Visual Basic .NET (partially in English)
- msdn2.microsoft.com/en-us/library/2cy9f6wb.aspx .NET Framework Class Library Array.BinarySearch Generic Method (T[], T)
- Google Research: Nearly All Binary Searches and Mergesorts are Broken
- Implementations of binary search on LiteratePrograms
- Explained and commented Binary search algorithm in C++
- Binary Search using C++

Wikipedia, the free encyclopedia © 2001-2006 Wikipedia contributors (Disclaimer)

This article is licensed under the GNU Free Documentation License.

Last updated on Thursday October 09, 2008 at 22:33:10 PDT (GMT -0700)

View this article at Wikipedia.org - Edit this article at Wikipedia.org - Donate to the Wikimedia Foundation

