## Binary search: an easy but tricky algorithm

Posted by Diego Assencio on 2014.06.29 under Computer science (Algorithms)

The binary search algorithm is a very famous and conceptually easy method for determining whether a given value is stored in a sorted array; if the value is found, its position is returned; otherwise, a "not found" notification is returned. The running time of the algorithm is $O(\log_2 n)$, with $n$ being the length of the array.

There are many ways to implement the binary search algorithm (the focus here will be on the implementation details; the reader is assumed to know the algorithm already). The "tricky" in the post title is due to the fact that it is very easy to write an implementation of the algorithm which looks perfectly right but is buggy. As an example, try to figure out what the problem is with the implementation below (written in Python):

```python
# a wrong and conceptually poor implementation of binary search
def binary_search(u, x):

    if len(u) == 0:
        return None

    left  = 0
    right = len(u) - 1

    while left != right:

        middle = left + (right - left) // 2

        # left subarray:  u[i] for i in [left,middle]
        # right subarray: u[i] for i in [middle,right]

        if u[middle] <= x:
            left = middle
        else:
            right = middle

    # at this point left == right; check if x was found
    if u[left] == x:
        return left
    else:
        return None
```


Consider what would happen if u = [0,1] and x = 0. Initially, we have left = 0 and right = 1. Inside the while loop, middle is set to 0 since (right-left)//2 = 0 (the // operator computes integer division and is necessary since, in Python 3, the regular division operator / always produces a floating-point value). After that point, the condition u[middle] <= x is satisfied and left = middle sets left to 0. Since neither left nor right have changed, the loop will restart and be executed forever (or at least until you get tired of waiting and stop the process). To make a long story short: integer arithmetic has pulled the rug out from under our feet.
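The lack of progress can be seen by tracing a single iteration of the buggy loop by hand; the short sketch below (illustrative only) reproduces the stuck state:

```python
# one iteration of the buggy loop with u = [0, 1] and x = 0
u = [0, 1]
x = 0

left, right = 0, len(u) - 1          # left = 0, right = 1
middle = left + (right - left) // 2  # integer division: middle = 0

# u[middle] <= x holds (0 <= 0), so the first branch runs...
if u[middle] <= x:
    left = middle                    # ...and left is set to 0 again

# neither variable has changed, so the loop can never terminate
assert (left, right) == (0, 1)
```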

Before working on a fix for the implementation above, notice that it has a suboptimal design decision which, as will be shown below, is closely related to the infinite loop just described. Indeed, the implementation above "divides" the currently considered portion of the array u into two overlapping subarrays: a "left subarray" containing the elements with indices in the closed interval [left,middle] and a "right subarray" containing the elements with indices in [middle,right]. In other words, u[middle] is part of both intervals. Inside the while loop, the program decides on whether to continue iterating over the left or over the right subarray.

There is no need to consider two overlapping subarrays. Let's first consider what would happen if u[middle] were part of the right subarray but not part of the left one. One could try to improve the implementation above by using this assumption explicitly:

```python
# a conceptually better but still wrong implementation of binary search
def binary_search(u, x):

    if len(u) == 0:
        return None

    left  = 0
    right = len(u) - 1

    while left != right:

        middle = left + (right - left) // 2

        # left subarray:  u[i] for i in [left,middle)
        # right subarray: u[i] for i in [middle,right]

        if u[middle] <= x:
            left = middle
        else:
            right = middle - 1

    # at this point left == right; check if x was found
    if u[left] == x:
        return left
    else:
        return None
```


The implementation above is conceptually better in the sense that each subarray is strictly smaller than the original one, making it obvious that the method must converge if no other issues exist. Unfortunately, the applied changes do not fix the original problem: given u = [0,1] and x = 0, the program will still run the while loop forever. However, letting u[middle] be part of the left but not part of the right subarray solves our problem:

```python
# a correct and conceptually clean implementation of binary search
def binary_search(u, x):

    if len(u) == 0:
        return None

    left  = 0
    right = len(u) - 1

    while left != right:

        middle = left + (right - left) // 2

        # left subarray:  u[i] for i in [left,middle]
        # right subarray: u[i] for i in [middle+1,right]

        if u[middle] < x:
            left = middle + 1
        else:
            right = middle

    # at this point left == right; check if x was found
    if u[left] == x:
        return left
    else:
        return None
```


The implementation above is correct and will always find the desired value x provided it is in the array u. Indeed, the value of either left or right must change in each iteration: the first branch (left = middle + 1) increases left by at least 1 since left <= middle, while the second branch (right = middle) decreases right by at least 1 since middle < right whenever left < right. Since, by construction, left never becomes larger than right, at some point both variables must hold the same value and the while loop will therefore terminate.
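As a quick sanity check, the corrected function can be exercised on a few representative inputs (the implementation is repeated below so the snippet is self-contained; the sample arrays are arbitrary):

```python
def binary_search(u, x):
    """Return an index of x in the sorted list u, or None if x is absent."""
    if len(u) == 0:
        return None

    left = 0
    right = len(u) - 1

    while left != right:
        middle = left + (right - left) // 2
        if u[middle] < x:
            left = middle + 1
        else:
            right = middle

    return left if u[left] == x else None

# the case that sent the earlier versions into an infinite loop
assert binary_search([0, 1], 0) == 0

# ordinary hits and misses
assert binary_search([1, 3, 5, 7, 9], 7) == 3
assert binary_search([1, 3, 5, 7, 9], 4) is None
assert binary_search([], 42) is None
```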

### Computing the middle index

The middle index can be computed as shown below:

```python
middle = (left + right) // 2
```


However, doing integer arithmetic in this manner is prone to bugs which can be very hard to find. This is not an issue in Python, whose integers are automatically promoted to arbitrary precision, but in other languages such as C and C++, the sum of two integers can overflow and yield a result which is completely different from what you would normally expect. This overflow can be safely avoided by computing middle as we did above:

```python
middle = left + (right - left) // 2
```
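Since Python's integers never overflow, the failure mode has to be simulated; the sketch below uses a helper written purely for illustration, to_int32, to mimic 32-bit two's-complement wraparound (the index values are arbitrary):

```python
def to_int32(n):
    """Mimic 32-bit two's-complement wraparound (helper for illustration)."""
    n &= 0xFFFFFFFF
    return n - 0x100000000 if n >= 0x80000000 else n

# two large indices that each fit comfortably in a 32-bit signed int
left, right = 1_500_000_000, 2_000_000_000

# naive formula: left + right exceeds 2**31 - 1 and wraps around on
# typical two's-complement hardware, yielding a negative "middle index"
naive = to_int32(left + right) // 2

# safe formula: no intermediate value ever exceeds right
safe = left + (right - left) // 2

assert naive < 0
assert safe == 1_750_000_000
```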


### A final note on arrays with possibly duplicated values

A curious reader might be wondering what would happen if we attempted to find a value x which occurs more than once in u. In this case, the value of left, if ever changed in the while loop, will always be such that u[left-1] < x. This property is preserved until the condition left == right is satisfied, meaning left must end up either at the position of the first occurrence of x in u or at the position of some value different from x, in which case x is not in u. If left is never changed, it remains equal to 0 during the entire process, meaning the searched value x is either at the first position of u or not in u at all (in the latter case, x must be smaller than u[0]).
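This first-occurrence behavior is easy to check directly; the snippet below repeats the correct implementation from above so it runs on its own (the sample arrays are arbitrary):

```python
# repeated from above so this snippet is self-contained
def binary_search(u, x):
    if len(u) == 0:
        return None
    left, right = 0, len(u) - 1
    while left != right:
        middle = left + (right - left) // 2
        if u[middle] < x:
            left = middle + 1
        else:
            right = middle
    return left if u[left] == x else None

# with duplicates, the index of the *first* occurrence is returned
assert binary_search([1, 2, 2, 2, 3], 2) == 1
assert binary_search([5, 5, 5, 5], 5) == 0

# x smaller than every element: left stays 0 and x is reported absent
assert binary_search([10, 20, 30], 7) is None
```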