{{Infobox Algorithm
|class=[[Search algorithm]]
|image=
|data=[[Array]]
|time=O(log ''n'')
|space=O(1)
|optimal=Yes
}}

A '''binary search algorithm''' (or '''binary chop''') is a technique for locating a particular value in a [[Collation|sorted list]] of values. To cast this in the frame of the guessing game (see Examples below), realize that we seek to guess the ''index'', or numbered place, of the value in the list. The method makes progressively better guesses, and closes in on the location of the sought value by selecting the middle element in the span (which, because the list is in sorted order, is the [[median]] value), comparing its value to the target value, and determining whether the selected value is greater than, less than, or equal to the target value. A guessed index whose value turns out to be too high becomes the new upper bound of the span, and if its value is too low that index becomes the new lower bound. Only the sign of the difference is inspected: there is no attempt at an [[interpolation search]] based on the size of the difference. Pursuing this strategy iteratively, the method reduces the search span by a factor of two each time, and soon finds the target value or else determines that it is not in the list at all. A binary search is an example of a [[dichotomic search|dichotomic]] [[divide and conquer algorithm|divide and conquer]] [[search algorithm]].

Finding the index of a specific value in a [[Sorting algorithm|sorted list]] is useful because, given the index, other data structures will contain associated information. Suppose a data structure containing the classic collection of name, address, telephone number and so forth has been accumulated, and an array is prepared containing the names, numbered from one to ''N''. A query might be: what is the telephone number for a given name ''X''? To answer this the array would be searched and the index (if any) corresponding to that name determined, whereupon the associated telephone number array would have ''X'''s telephone number at that index, and likewise the address array and so forth. Appropriate provision must be made for the name not being in the list (typically by returning an ''index'' value of zero); indeed the question of interest might be only whether ''X'' is in the list or not.

If the list of names is in sorted order, a binary search will find a given name with far fewer probes than the simple procedure of probing each name in the list, one after the other in a [[linear search]], and the procedure is much simpler than organizing a [[hash table]]. However, once created, searching with a hash table may well be faster, typically averaging just over one probe per lookup. With a non-uniform distribution of values, if it is known that some few items are ''much'' more likely to be sought for than the majority, then a linear search with the list ordered so that the most popular items are first may do better than binary search. The choice of the best method may not be immediately obvious. If, between searches, items in the list are modified or items are added or removed, maintaining the required organisation may consume more time than the searches.

== Examples ==
An example of binary search in action is a simple guessing game in which a player has to guess a positive integer, between 1 and ''N'', selected by another player, using only questions answered with yes or no.
Supposing ''N'' is 16 and the number 11 is selected, the game might proceed as follows.
* Is the number greater than 8? (Yes)
* Is the number greater than 12? (No)
* Is the number greater than 10? (Yes)
* Is the number greater than 11? (No)
Therefore, the number must be 11. At each step, we choose a number right in the middle of the range of possible values for the number. For example, once we know the number is greater than 8, but less than or equal to 12, we know to choose a number in the middle of the range [9, 12] (in this case 10 is optimal). At most <math>\lceil\log_2 N\rceil</math> questions are required to determine the number, since each question halves the search space. Note that one less question (iteration) is required than for the general algorithm, since the number is constrained to a particular range.

Even if the number we're guessing can be arbitrarily large, in which case there is no upper bound ''N'', we can still find the number in at most <math>2\lceil \log_2 k \rceil</math> steps (where ''k'' is the (unknown) selected number) by first finding an upper bound by repeated doubling. For example, if the number were 11, we could use the following sequence of guesses to find it:
* Is the number greater than 1? (Yes)
* Is the number greater than 2? (Yes)
* Is the number greater than 4? (Yes)
* Is the number greater than 8? (Yes)
* Is the number greater than 16? (No; take ''N'' = 16 and proceed as above. We already know the number is greater than 8.)
* Is the number greater than 12? (No)
* Is the number greater than 10? (Yes)
* Is the number greater than 11? (No)

As one simple example, in [[revision control]] systems, it is possible to use a binary search to see in which revision a piece of content was added to a file. We simply do a binary search through the entire version history; if the content is not present in a particular version, it appeared later, while if it is present it appeared at that version or sooner. This is far quicker than checking every difference. There are many occasions unrelated to computers when a binary search is the quickest way to isolate a solution we seek. In troubleshooting a single problem with many possible causes, we can change half the suspects, see if the problem remains and deduce in which half the culprit is; change half the remaining suspects, and so on. (For finding non-deterministic bugs, where the test used will not always reveal the bug even if it is present in the revision, see ''Extensions'' below.) See also: [[Shotgun debugging]].

People typically use a mixture of the binary search and interpolative search algorithms when searching a [[telephone book]]: after the initial guess, we exploit the fact that the entries are sorted and can rapidly find the required entry. For example, when searching for Smith, if Rogers and Thomas have been found, one can flip to a page about halfway between the previous guesses. If this shows Samson, we know that Smith is somewhere between the Samson and Thomas pages, so we can bisect these.

==The method==
[[Image:BinarySearch.Flowchart.png|right]]
In order to discuss the method in detail, a more formal description is necessary. The basic idea is that there is a data structure represented by array ''A'' in which individual elements are identified as ''A(1), A(2), ..., A(N)'' and may be accessed in any order. The data structure contains a sub-element or data field called here ''Key'', and the array is ordered so that the successive values ''A(1).Key ≤ A(2).Key'' and so on. Such a key might be a name, or it might be a number.
The requirement is that given some value ''x'', find an index ''p'' (not necessarily the one and only) such that ''A(p).Key = x''.

To begin with, the span to be searched is the full supplied list of elements, as marked by variables ''L'' and ''R'', and their values are changed with each iteration of the search process, as depicted by the [[flowchart]]. Note that the division by two is integer division, with any remainder lost, so that 3/2 comes out as 1, not 1½. The search finishes either because the value has been found or because the specified value is not in the list.

===That it works===
The method relies on and upholds the notion ''If x is to be found, it will be amongst elements (L + 1) to (R - 1)'' of the array. The initialisation of ''L'' and ''R'' to 0 and ''N + 1'' makes this merely a restatement of the supplied problem, that elements 1 to ''N'' are to be searched, so the notion is established to begin with.

The first step of each iteration is to check that there is something to search, which is to say whether there are any elements in the search span ''(L + 1)'' to ''(R - 1)''. The number of such elements is ''(R - L - 1)'', so computing ''(R - L)'' gives (number of elements + 1); halving that number (with integer division) means that if there was one element then ''p = 1'', but if none then ''p = 0'', and in that case the method terminates with the report "Not found". Otherwise, for ''p > 0'', the search continues with ''p:=L + p'', which by construction is within the bounds ''(L + 1)'' to ''(R - 1)''. That this position is at or adjacent to the middle of the span is not important here, merely that it is a valid choice.

Now compare ''x'' to ''A(p).Key''. If ''x = A(p).Key'' then the method terminates in success. Otherwise, suppose ''x < A(p).Key''. If so, then ''because the array is in sorted order'', ''x'' will also be less than all later elements of the array, all the way to element ''(R - 1)'' as well. Accordingly, the value of the right-hand bound index ''R'' can be changed to be the value ''p'', since, by the test just made, ''x < A(p).Key'' and so, if ''x'' is to be found, it will be amongst elements earlier than ''p'', that is ''(p - 1)'' and earlier. Contrariwise, for the case ''x > A(p).Key'', the value of ''L'' would be changed.

Further, whichever bound is changed, the span remaining to be searched is reduced. If ''L'' is changed, it is changed to a higher number (at least ''L + 1''), whereas if ''R'' is changed, it is to a lower number (at most ''R - 1'') because those are the limits for ''p''. Should there have been just one value remaining in the search span (so that ''L + 1 = p = R - 1''), and ''x'' did not match, then depending on the sign of the comparison either ''L'' or ''R'' will receive the value of ''p'', and at the start of the next iteration the span will be found to be empty.

Accordingly, with each iteration, if the search span is empty the result is "Not found"; otherwise either ''x'' is found at the probe point ''p'' or the search span is reduced for the next iteration. Thus the method works, and so can be called an [[algorithm]].

===That it is fast===
The interval being searched is successively halved (actually slightly better than halved) in width with each iteration, so the number of iterations required is at most the base-two logarithm of ''N'' (zero for empty lists). More precisely, each probe both removes one element from further consideration and selects one or the other half of the interval for further searching.
Suppose that the number of items in the list is odd. Then there is a definite middle element at ''p = (N + 1)/2'' (this being exact division, with no remainder to discard). If that element's key does not match ''x'', then the search proceeds either with the first half, elements 1 to ''p - 1'', or the second, elements ''p + 1'' to ''N''. These two spans are equal in extent, having ''(N - 1)/2'' elements each. Conversely, suppose that the number of elements is even. Then the probe element will be ''p = N/2'' and again, if there is no match, one or the other of the subintervals will be chosen. They are not equal in size; the first has ''N/2 - 1'' elements and the second (elements ''p + 1'' to ''N'' as before) has ''N/2'' elements. Supposing that ''N'' is just as likely to be even as odd in general, on average an interval with ''N'' elements in it will become an interval with ''(N - 1)/2'' elements.

Working in the other direction, what might be the maximum number of elements that could be searched in ''p'' probes? Clearly, one probe can check a list with one element only (to report a match, or "not found"), and two probes can check a list of three elements. This is not very impressive, because a linear search would only require three probes for that, but now the difference increases exponentially. With three probes, seven elements can be checked; with four, fifteen; and so forth. In short, to search ''N'' elements requires at most ''p'' probes, where ''p'' is the smallest integer such that <math>2^p > N</math>. Taking the binary logarithm of both sides gives <math>p > \log_2 N</math>; that is, <math>\log_2 N</math> (with any fractional part rounded up to the next integer) is the maximum number of probes required to search ''N'' elements.

====Average Performance====
There are two cases. For searches that will fail because the value is not in the list, the search interval must be successively halved until no more elements remain, and this process will require at most the ''p'' probes just defined, or one less. The latter occurs because the search interval is not in fact exactly halved, and, depending on the value of ''N'' and which elements of the list the absent value ''x'' falls between, the interval may be closed early.

For searches that will succeed because the value is in the list, the search may finish early because a probed value happens to match. Loosely speaking, half the time the search will finish one iteration short of the maximum, and a quarter of the time two iterations short. Consider then a test in which a list of ''N'' elements is searched once for each of the ''N'' values in the list, and determine the total number of probes ''n'' for all ''N'' searches:

  N  =   1    2     3    4     5     6     7     8     9    10    11    12    13
  n  =   1    3     5    8    11    14    17    21    25    29    33    37    41
 n/N =   1   1.5  1.66   2    2.2   2.33  2.43  2.63  2.78  2.9   3     3.08  3.15

In short, <math>\log_2 N - 1</math> is about the expected number of probes in an average successful search, and the worst case is <math>\log_2 N</math>, just one more probe. If the list is empty, no probes at all are made.

Suppose the list to be searched contains ''N'' even numbers (say, 2, 4, 6, 8 for ''N'' = 4) and a search is done for the values 1, 2, 3, 4, 5, 6, 7, 8, and 9. The even numbers will be found, and the average number of iterations can be calculated as described. The odd numbers will not be found, and the collection of test values probes every possible position (with regard to the numbers that ''are'' in the list) in which they might be not found, and an average is calculated.
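The following is a rough sketch in C++ (not part of the original exposition) of the experiment just described: the list holds the ''N'' even numbers 2, 4, ..., 2''N'', every value from 1 to 2''N'' + 1 is searched for, and the iteration counts are averaged separately for the values that are present and those that are not. The search routine used here is the plain inclusive-bound variant discussed later in the article; for the successful searches its averages agree with the ''n/N'' figures tabulated above.

<source lang="cpp">
// Sketch of the probe-counting experiment described in the text.
#include <iostream>
#include <vector>

// Count the iterations an inclusive-bound binary search uses to look for x.
static int iterations(const std::vector<int>& A, int x) {
    int low = 0, high = static_cast<int>(A.size()) - 1, count = 0;
    while (low <= high) {
        int mid = low + (high - low) / 2;   // overflow-safe midpoint
        ++count;                            // one probe per inspected element
        if (A[mid] < x)      low = mid + 1;
        else if (A[mid] > x) high = mid - 1;
        else break;                         // found
    }
    return count;
}

int main() {
    for (int N = 1; N <= 13; ++N) {
        std::vector<int> A(N);
        for (int i = 0; i < N; ++i) A[i] = 2 * (i + 1);   // 2, 4, ..., 2N
        int hit = 0, miss = 0;
        for (int x = 1; x <= 2 * N + 1; ++x) {
            if (x % 2 == 0) hit += iterations(A, x);      // value is present
            else            miss += iterations(A, x);     // value is absent
        }
        std::cout << "N=" << N
                  << "  found avg="     << static_cast<double>(hit) / N
                  << "  not-found avg=" << static_cast<double>(miss) / (N + 1)
                  << "\n";
    }
}
</source>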
The maximum value is, for each ''N'', the greatest number of iterations that were required amongst the various trial searches of those ''N'' elements. The first plot shows the iteration counts for ''N'' = 1 to 63 (with ''N'' = 1, all results are 1), and the second plot is for ''N'' = 1 to 32767.

[[Image:BinarySearchStats63.png]]
[[Image:BinarySearchStats32767.png]]

Thus binary search is a [[logarithmic algorithm]] and executes in [[big O notation|O(<math>\log N</math>)]] time. In most cases it is considerably faster than a [[linear search]]. It can be implemented using [[iteration]] (as shown above) or [[recursion]]. In some languages it is more elegantly expressed recursively; however, in some C-based languages tail recursion is not eliminated and the recursive version requires more stack space.

Binary search can interact poorly with the memory hierarchy (i.e. [[cache|caching]]), because of its random-access nature. For in-memory searching, if the interval to be searched is small, a linear search may have superior performance simply because it exhibits better locality of reference. For external searching, care must be taken or each of the first several probes will lead to a disk seek. A common technique is to abandon binary searching in favour of linear searching as soon as the size of the remaining interval falls below a small value such as 8 or 16, or even more on recent computers. The exact value depends entirely on the machine running the algorithm.

Notice that for multiple searches ''with a fixed value for N'', the first iteration always selects the middle element at ''N''/2, and the second always selects either ''N''/4 or 3''N''/4 (with the appropriate regard for integer division), and so on. Thus, if the array's key values are in some sort of slow storage (in a disc file, in virtual memory, not in the CPU's on-chip memory), keeping those three keys in a local array for a special preliminary search will avoid accessing widely separated memory. Escalating to seven or fifteen such values will allow further levels at not much cost in storage. On the other hand, if the searches are frequent and not separated by much other activity, the computer's various storage control features will more or less automatically promote frequently-accessed elements into faster storage.

When multiple binary searches are to be performed for the same key in related lists, [[fractional cascading]] can be used to speed up successive searches after the first one.

===Extensions===
There is no particular requirement that the array being searched has the bounds 1 to ''N''. It is possible to search a specified range, elements ''first'' to ''last'' instead of 1 to ''N''. All that is necessary is that the initialisation be ''L:=first - 1'' and ''R:=last + 1''; then all proceeds as before.

In more complex contexts, it might be that the data structure has many sub-fields, such as a telephone number along with the name. An indexing array such as ''xref'' could be introduced so that elements ''A(xref(1)).Telephone ≤ A(xref(2)).Telephone ... ≤ A(xref(N)).Telephone'', so that, "viewed through" array ''xref'', the array can be regarded as being sorted on the telephone number, and a search would be to find a given telephone number. In this case, ''A(i).Key'' would be replaced by ''A(xref(i)).Telephone'' and all would be as before. Thus, with auxiliary ''xref'' arrays, an array can be treated as if it is sorted in different ways without it having to be re-sorted for each different usage.
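As an illustration of searching "through" such an index array, here is a brief sketch in C++ (not from the original text; the record layout and field names are invented for the example). The records in ''A'' stay in their original order, while ''xref'' lists their indices arranged so that the telephone numbers ascend; the search compares against ''A[xref[mid]].telephone'' exactly as described above.

<source lang="cpp">
// Sketch: binary search through an index array (hypothetical record layout).
#include <string>
#include <vector>

struct Record {
    std::string name;
    long telephone;
};

// Returns the position within A (not within xref) of the record whose
// telephone number equals x, or -1 if there is no such record.
int SearchByTelephone(const std::vector<Record>& A,
                      const std::vector<int>& xref, long x) {
    int low = 0, high = static_cast<int>(xref.size()) - 1;   // inclusive bounds
    while (low <= high) {
        int mid = low + (high - low) / 2;     // overflow-safe midpoint
        long key = A[xref[mid]].telephone;    // compare via the index array
        if (key < x)
            low = mid + 1;
        else if (key > x)
            high = mid - 1;
        else
            return xref[mid];                 // found: index into A
    }
    return -1;                                // not found
}
</source>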
When a search returns the result "Not found", it may be helpful to have some indication as to where the missing value would be located so that the list can be augmented. A possible approach would be to return the value ''-L'' (rather than just -1 or 0, say), the negative sign indicating failure. This however can conflict with the array indexing protocol if it includes zero as a valid index (since of course -0 = 0, and 0 would be a findable result), so caution is needed.

Several algorithms closely related to or extending binary search exist. For instance, '''noisy binary search''' solves the same class of problems as regular binary search, with the added complexity that any given test can return a false value at random. (Usually, the number of such erroneous results is bounded in some way, either in the form of an average error rate, or in the total number of errors allowed per element in the search space.) Optimal algorithms for several classes of noisy binary search problems have been known since the late seventies, and more recently, optimal algorithms for noisy binary search in quantum computers (where several elements can be tested at the same time) have been discovered.

===Variations===
There are many, and they are easily confused.

====Exclusive or Inclusive bounds====
The most significant differences are between the "exclusive" and "inclusive" forms of the bounds. This description uses the "exclusive" bound form, that is, the span to be searched is ''(L + 1)'' to ''(R - 1)'', and this may seem clumsy when the span to be searched could be described in the "inclusive" form, as ''L'' to ''R''. That form may be attained by replacing all appearances of "L" by "(L - 1)" and "R" by "(R + 1)", then rearranging. Thus, the initialisation of ''L:=0'' becomes ''(L - 1):=0'' or ''L:=1'', and ''R:=N + 1'' becomes ''(R + 1):=N + 1'' or ''R:=N''. So far so good, but note now that the changes to ''L'' and ''R'' are no longer simply transferring the value of ''p'' to ''L'' or ''R'' as appropriate but now must be ''(R + 1):=p'' or ''R:=p - 1'', and ''(L - 1):=p'' or ''L:=p + 1''. Thus, the gain of a simpler initialisation, done once, is lost to a more complex calculation that is done for every iteration. If that is not enough, the test for an empty span is more complex also, as compared to the simplicity of checking that the value of ''p'' is zero. Nevertheless, this is the form found in many publications, such as [[Donald Knuth]]'s ''The Art of Computer Programming'', Volume 3: ''Sorting and Searching'', Third Edition.

====Locate the middle in one step====
The other main variation is to combine the two-step calculation of the probe position, ''p:=(R - L)/2; p:=p + L;'', into one step, ''p:=(L + R)/2;'', which is indeed less work; but the saving is lost because, rather than ''if p <= 0'' (which tests the just-computed value of ''p'' directly), the test becomes ''if p <= L'', and this second form requires the subtraction of the value of ''L''. Pseudo machine code should read somewhat as follows:

 Load R
 Subtract L
 IntDiv 2           (do not store this intermediate result into ''p'' yet)
 JumpZN NotFound    (if the result is zero or negative, jump to NotFound)
 Add L
 Store p

That is, the (human) compiler has recognised that in the three statements ''p:=(R - L)/2; if p <= 0 return(-L); p:=p + L;'' the value of ''p'' is already in the working register and need not be stored and retrieved until the end, where it is stored once.
The two-statement version, ''p:=(L + R)/2; if p <= L return(-L);'', would become

 Load L
 Add R
 IntDiv 2
 Store p            (thus ''p:=(L + R)/2;'')
 Subtract L         (compare ''p'' to ''L'' for ''if p <= L'')
 JumpZN NotFound

This is the same number of actions (though with an unnecessary store to ''p'' in the NotFound case), but with the disadvantage that the value of ''p'' is no longer in the working register, ready to be used to index the array in ''A(p).Key'' of the next statement. Thus the two forms are roughly a tie, though actual compilers may not produce such code. However, there is a very good reason not to use the two-statement form, due to the risk of overflow described [[Binary_search#Numerical Difficulties|below]].

====Deferred Detection of Equality====
Because of the syntax difficulties discussed [[Binary_search#Syntax Difficulties|below]], distinguishing the three states <, =, and > would have to be done with two comparisons. It is instead possible to use just one comparison per iteration and, at the end when the span has been reduced to zero, test for equality. The [[Binary_search#Single Comparison per Iteration|example]] distinguishes only < from >=.

====Midpoint and Width====
An entirely different variation involves abandoning the ''L'' and ''R'' pointers in favour of a current position ''p'' and a width ''w'', where at each iteration ''p'' is adjusted by + or - ''w'' and ''w'' is halved. Professor Knuth remarks "It is possible to do this, but only if extreme care is paid to the details" - Section 6.2.1, page 414 of ''The Art of Computer Programming'', Volume 3: ''Sorting and Searching'', Third Edition, outlines an algorithm, with the further remark "Simpler approaches are doomed to failure!"

==Computer usage==
"Although the basic idea of binary search is comparatively straightforward, the details can be surprisingly tricky..." - Professor Knuth. When Jon Bentley assigned it as a problem in a course for professional programmers, he found that an astounding ninety percent failed to code a binary search correctly after several hours of working on it<ref>{{cite book | last = Bentley | first = Jon | authorlink = Jon Bentley | title = Programming Pearls | edition = 2nd edition | pages = p34 | publisher = [[Addison-Wesley]] | year = 2000 | origyear = 1986 | isbn = 0201657880 }}</ref>, and another study shows that accurate code for it is found in only five out of twenty textbooks (Kruse, 1999). Furthermore, Bentley's own implementation of binary search, published in his 1986 book ''Programming Pearls'', contains an error that remained undetected for over twenty years.<ref>[http://googleresearch.blogspot.com/2006/06/extra-extra-read-all-about-it-nearly.html Extra, Extra - Read All About It: Nearly All Binary Searches and Mergesorts are Broken], Google Research Blog</ref> Careful thought is required.

The first issue is minor to begin with: how to signify "Not found". If the array is indexed 1 to ''N'', then a returned index of zero is an obvious choice. However, some computer languages (notably C ''et al'') insist that arrays have a lower bound of zero. In such a case, the array might be indexed 0 to ''N - 1'' and so a negative result would be chosen for "Not found", except that this can interfere with the desire to use unsigned integers for indexing. If the plan is to return ''-L'' for "not found", then unsigned integers cannot be used at all.

===Numerical Difficulties===
More serious are the limitations of computer arithmetic.
Variables have limited size; for instance, the (very common) sixteen-bit [[Two's_complement|two's complement]] signed integer can only hold values from -32768 to +32767. (Exactly the same problems arise with unsigned or other sizes of integer, except that the digit strings are longer.) If the array is to be indexed with such variables, then the values ''first - 1'' and ''last + 1'' must be representable, that is, ''last'' ≤ 3276'''6''', ''not'' 3276'''7'''. Using the "inclusive" form of the method won't help, because although ''R'' might safely hold the value 32767, should the sought value ''x'' follow the last element in the array then eventually the search will compare ''x'' to ''A(p).Key'' (with ''p'' safely holding 32767), then attempt to store ''p + 1'' into ''L'' and fail. Similarly, the lower bound ''first'' may be zero (for arrays whose indexing starts at zero), in which case the value -1 must be representable, which precludes the use of unsigned integers. General-purpose testing is unlikely to present a test with these boundaries exercised, and so the detail can be overlooked. Formal proofs often do not attend to differences between computer arithmetic and mathematics.

It is of course unlikely that sixteen-bit integers would be used to index a collection of around thirty thousand items, but a second problem arises much sooner. A common variation computes the midpoint of the interval in one step, as ''p:=(L + R)/2''; this means that the ''sum'' must not exceed the sixteen-bit limit for all to be well, and this detail is easily forgotten. The problem may be concealed on some computers, which use wider registers to perform sixteen-bit arithmetic so that there will be no overflow of intermediate results; but on a different computer, perhaps not. Thus, when tested and working code is transferred from a 16-bit environment (in which there were never more than about fifteen thousand elements to search) to a 32-bit environment, and the problem sizes steadily inflate, the forgotten limit can suddenly become relevant again. This was the mistake that went unnoticed for decades, and it is found in many textbooks: they concentrate on the description of the method, in a context where integer limits are far away.

To put this in simple terms, if the computer variables can hold values from 0 to ''max'', then the binary search method will only work for ''N'' up to ''max - 1'', not for all possible values of ''N''. Reducing the limit from ''max'' to ''max - 1'' is not an onerous constraint; however, if the variant form ''p:=(L + R)/2'' is used, then should a search wander into indices beyond ''max''/2 it will fail. Losing half the range for ''N'' is worth avoiding.

===Syntax Difficulties===
Another difficulty is presented by the absence in most computer languages of a three-way result from a comparison, which forces a comparison to be performed twice. The form is somewhat as follows:

 if a < b then action1
  else if a > b then action2
   else action3;

About half the time the first test will be true, so that there will be only one comparison of ''a'' and ''b'', but the other half of the time it will be false and a second comparison forced. This is so grievous that some versions are recast so as [[Binary_search#Single Comparison per Iteration|not to make a second test at all]], thus not determining equality until the span has been reduced to zero, and thereby foregoing the possibility of early termination - remember that about half the time the search will happen on a matching value one iteration short of the limit.
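Where the keys are complex (strings, say), the cost of the duplicated comparison can be reduced by computing a three-way comparison once per iteration and inspecting only its sign. The following C++ sketch (not part of the original text) does this with <code>std::strcmp</code>, which already yields a negative, zero or positive result.

<source lang="cpp">
// Sketch: one (possibly costly) key comparison per iteration, sign reused.
#include <cstring>

// Search the sorted array of C strings A[0..N-1] for x; return index or -1.
int BinarySearchStrings(const char* const A[], int N, const char* x) {
    int low = 0, high = N - 1;
    while (low <= high) {
        int mid = low + (high - low) / 2;
        int cmp = std::strcmp(x, A[mid]);   // compare the keys once
        if (cmp < 0)       high = mid - 1;  // x sorts before A[mid]
        else if (cmp > 0)  low = mid + 1;   // x sorts after A[mid]
        else               return mid;      // match
    }
    return -1;                              // not found
}
</source>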
The problem is exacerbated for floating point variables that offer the special value [[NaN]], which violates even the notion of equality: x = x is ''false'' if x has the value [[NaN]]!

Since [[Fortran]] does offer a three-way test, here is a version for searching an array of integers. Fortran uses numbers at the start of statements as labels, thus the labels 1, 2, 3, and 4; the arithmetic ''if'' statement performs a ''go to'' to one of the three nominated labels according to the sign of the arithmetic expression.

 Integer '''Function''' BinarySearch(A,X,N)
  Integer A(*),X,N
  Integer L,R,P
   L = 0                              !Outer bounds,
   R = N + 1                          !To search elements 1 to N.
 1 P = (R - L)/2                      !Probe; integer division.
   '''if''' (P <= 0) '''Return'''(-L)             !Search exhausted.
   P = L + P
   '''if''' (X - A(P)) 3,4,2                 !Test: negative,zero,positive.
 2 L = P                              !A(P) < X. Shift the left bound up.
   '''go to''' 1
 3 R = P                              !X < A(P). Shift the right bound down.
   '''go to''' 1
 4 '''Return'''(P)                          !X = A(P). Found at index P.
 '''End Function''' BinarySearch

It can be seen that the flow chart of this routine corresponds to the flow chart of a proven working method, and so the code should work. Well, yes and no... Leaving aside the problem of integer bounds, it remains possible that the routine might be presented with perverse parameters. For instance, ''N'' < 0 would cause trouble, and for this reason the test is ''if (P <= 0)'' rather than ''if (P = 0)'', as it can be performed with no extra effort. Similarly, the values in array ''A'' might not in fact be in sorted order, or the actual array size might be smaller than ''N''. To check that the array is sorted requires inspecting every value, and this vitiates the whole reason for searching with a fast method. The proof of correctness relies on the presumption that the array is sorted, etc., and not meeting these requirements is not the fault of the method. Deciding how much checking to do, and what to do on failure, is a troublesome issue.

==Implementations==
===Recursive===
The most straightforward implementation is recursive, which recursively searches the subrange dictated by the comparison:

 BinarySearch(A[0..N-1], value, low, high) {
     if (high < low)
         return -1 // not found
     mid = (low + high) / 2
     if (A[mid] > value)
         return BinarySearch(A, value, low, mid-1)
     else if (A[mid] < value)
         return BinarySearch(A, value, mid+1, high)
     else
         return mid // found
 }

It is invoked with initial <code>low</code> and <code>high</code> values of <code>0</code> and <code>N-1</code>. We can eliminate the [[tail recursion]] above and convert this to an iterative implementation:

===Iterative===
 BinarySearch(A[0..N-1], value) {
     low = 0
     high = N - 1
     while (low <= high) {
         mid = (low + high) / 2
         if (A[mid] > value)
             high = mid - 1
         else if (A[mid] < value)
             low = mid + 1
         else
             return mid // found
     }
     return -1 // not found
 }

===Single Comparison per Iteration===
Some implementations may not include the early termination branch, preferring to check at the end whether the value was found, as shown below. Checking to see if the value was found ''during'' the search (as opposed to at the ''end'' of the search) may seem a good idea, but there are extra computations involved in each iteration of the search. Also, with an array of length ''N'' using the ''low'' and ''high'' indices, the probability of actually finding the value on the first iteration is 1/''N'', and the probability of finding it later on (before the end) is about 1/(''high'' - ''low'').
The following checks for the value at the end of the search:

 low = 0
 high = N
 while (low < high) {
     mid = (low + high)/2;
     if (A[mid] < value)
         low = mid + 1;
     else
         // can't be high = mid-1: here A[mid] >= value,
         // so high can't be < mid if A[mid] == value
         high = mid;
 }
 // high == low; using high or low depends on taste
 if ((low < N) && (A[low] == value))
     return low // found
 else
     return -1 // not found

This algorithm has two other advantages. At the end of the loop, ''low'' points to the first entry greater than or equal to ''value'', so a new entry can be inserted if no match is found. Moreover, it requires only one comparison per iteration, which could be significant for complex keys in languages which do not allow the result of a comparison to be saved. In practice, one frequently uses a [[three-way comparison]] instead of two comparisons per loop.

Also, real implementations using fixed-width integers with modular arithmetic need to account for the possibility of overflow. One frequently-used technique is to compute ''mid'' so that two smaller numbers are ultimately added:

 mid = low + ((high - low) / 2)

==Equal elements==
The elements of the list are not necessarily all unique. If one searches for a value that occurs multiple times in the list, the index returned will be of the first-encountered equal element, and this will not necessarily be that of the first, last, or middle element of the run of equal-key elements but will depend on the positions of the values. Modifying the list, even in seemingly unrelated ways such as adding elements elsewhere in it, may change the result. To find all equal elements, an upward and a downward linear search can be carried out from the initial result, stopping each search when the element is no longer equal. Thus, e.g. in a table of cities sorted by country, we can find all cities in a given country.

==Sort key==
A list of pairs (p,q) can be sorted based on just p. Then the comparisons in the algorithm need only consider the values of p, not those of q. For example, in a table of cities sorted on a column "country" we can find cities in Germany by comparing country names with "Germany", instead of comparing whole rows. Such partial content is called a sort key.

== Testing ==
It is difficult to analyse binary search code visually without making a mistake, so it is important to verify an implementation by testing it thoroughly on a computer. To that end, the following code will test a binary search at every index, for arrays of many different lengths:

<source lang="cpp">
bool passed = true;
for (int offset = 1; offset < 5; offset++) {        // test with value spacings of 1 to 4
    for (int length = 1; length < 2049; length++) {  // make the array longer on each iteration
        int A[length];
        for (int i = 0; i < length; i++)
            A[i] = i * offset;                        // ascending values 0, offset, 2*offset, ...

        // Check negative hits ------------------------------------------------
        // A search value below the smallest element must not be found:
        if (BinarySearch(A, length, A[0] - 1) >= 0)
            passed = false;
        // A search value above the largest element must not be found:
        if (BinarySearch(A, length, A[length - 1] + 1) >= 0)
            passed = false;

        // Check positive hits ------------------------------------------------
        for (int i = 0; i < length; i++)              // search for every array value
            if (BinarySearch(A, length, A[i]) != i)   // must return the correct index
                passed = false;
    }
}
</source>

In the above C++ test code, if ''passed'' is ever false then the binary search function has a bug. Note that this code assumes the search returns the index of the sought value within the array. In addition, it does not properly test duplicate values within the array, nor errors that could be caused by more randomly distributed values; as such, it should not be considered a complete proof of correctness, merely an aid for testing.

== Language support ==
Many standard libraries provide a way to do binary search. [[C (programming language)|C]] provides {{man|3|bsearch|||inline}} in its standard library. [[C++]]'s [[Standard Template Library|STL]] provides the [[algorithm function]]s <code>binary_search</code>, <code>lower_bound</code> and <code>upper_bound</code>. [[Java (sun)|Java]] offers a set of overloaded <code>binarySearch()</code> static methods in the classes {{Javadoc:SE|java/util|Arrays}} and {{Javadoc:SE|java/util|Collections}} for performing binary searches on Java arrays and Lists, respectively. They must be arrays of primitives, or the arrays or Lists must be of a type that implements the <code>Comparable</code> interface, or a custom <code>Comparator</code> object must be supplied. [[Microsoft]]'s [[Microsoft .NET Framework|.NET Framework]] 2.0 offers static [[Generic programming|generic]] versions of the binary search algorithm in its collection base classes; an example is <code>[[System.Array]]</code>'s method <code>BinarySearch<T>(T[] array, T value)</code>. [[Python (programming language)|Python]] provides the <code>bisect</code> module. [[COBOL]] can perform binary search on internal tables using the <code>SEARCH ALL</code> statement.

==Applications to [[computational complexity theory|complexity theory]]==
Even if we do not know a fixed range the number ''k'' falls in, we can still determine its value by asking <math>2\lceil\log_2 k\rceil</math> simple yes/no questions of the form "Is ''k'' greater than ''x''?" for some number ''x''. As a simple consequence of this, if you can answer the question "Is this integer property ''k'' greater than a given value?" in some amount of time, then you can find the value of that property in the same amount of time with an added factor of <math>\log_2 k</math>. This is called a ''[[reduction (complexity)|reduction]]'', and it is because of this kind of reduction that most complexity theorists concentrate on [[decision problem]]s, algorithms that produce a simple yes/no answer. For example, suppose we could answer "Does this ''n'' x ''n'' matrix have [[determinant]] larger than ''k''?" in O(''n''<sup>2</sup>) time.
Then, by using binary search, we could find the (ceiling of the) determinant itself in O(''n''<sup>2</sup> log ''d'') time, where ''d'' is the determinant; notice that ''d'' is not the size of the input, but the size of the output.

==See also==
* [[Index (information technology)]] - very fast 'lookup' using an index to directly select an entry
* [[Branch table]]s - an alternative indexed 'lookup' technique for decision making
* [[Self-balancing binary search tree]]

==References==
<references/>
* [[Donald Knuth]]. ''The Art of Computer Programming'', Volume 3: ''Sorting and Searching'', Third Edition. Addison-Wesley, 1997. ISBN 0-201-89685-0. Section 6.2.1: Searching an Ordered Table, pp.409&ndash;426.
* Kruse, Robert L. ''Data Structures and Program Design in C++''. Prentice-Hall, 1999. ISBN 0-13-768995-0. Page 280.
* Netty van Gasteren, Wim Feijen. ''[http://www.mathmeth.com/wf/files/wf2xx/wf214.pdf The Binary Search Revisited]'', AvG127/WF214, 1995. (Investigates the foundations of binary search, debunking the myth that it applies only to sorted arrays.)

==External links==
* [http://www.nist.gov/dads/HTML/binarySearch.html NIST Dictionary of Algorithms and Data Structures: binary search]
* [http://www.sparknotes.com/cs/searching/binarysearch/ Sparknotes: Binary search]. Simplified overview of binary search.
* [http://blogs.netindonesia.net/adrian/articles/6288.aspx Binary Search Implementation in Visual Basic .NET (partially in English)]
* [http://msdn2.microsoft.com/en-us/library/2cy9f6wb.aspx .NET Framework Class Library: Array.BinarySearch generic method (T[], T)]
* [http://googleresearch.blogspot.com/2006/06/extra-extra-read-all-about-it-nearly.html Google Research: Nearly All Binary Searches and Mergesorts are Broken].
* [http://en.literateprograms.org/Category:Binary_search Implementations of binary search on LiteratePrograms].
* [http://www.datastructures.info/what-is-a-binary-seach-algorithm-and-how-does-it-work/ Explained and commented binary search algorithm in C++]
* [http://www.paked.net/subject_pages/computer_science/prog1.htm Binary search using C++]

[[Category:Search algorithms]]

[[cs:Binární vyhledávání]]
[[de:Binäre Suche]]
[[es:Búsqueda dicotómica]]
[[fr:Dichotomie]]
[[ko:이진 검색 알고리즘]]
[[id:Pencarian biner]]
[[it:Ricerca dicotomica]]
[[he:חיפוש בינארי]]
[[ja:二分探索]]
[[nl:Bisectie]]
[[no:Binærsøk]]
[[pl:Wyszukiwanie binarne]]
[[pt:Pesquisa binária]]
[[ro:Căutare binară]]
[[ru:Двоичный поиск]]
[[sk:Binárne vyhľadávanie]]
[[sl:Binarno iskanje]]
[[fi:Puolitushaku]]
[[tr:İkili arama algoritması]]
[[uk:Двійковий пошук]]