# 15110 Fall 2011 [Cortina/von Ronne]

## Written Homework 4 - due Friday, October 7 in class

Read sections 4.3-4.5 in chapter 4 and sections 5.1-5.6 in chapter 5 of the textbook Explorations in Computing.

### Instructions

• Type or neatly write the answers to the following problems.
• On the first page of your homework, include your name, Andrew ID, lab section (A-N), and the assignment number.
• You must hand in your homework at the start of class on the given due date.

### Exercises

1. (2 pts) Here is another way to sort data in an array. This algorithm is called "Bubble Sort" and it works by "bubbling up" the maximum to the last position in the array, then "bubbling up" the next largest value to the second-to-last position, etc.

```ruby
def swap(list, x, y)
  # swaps the elements in list at index x and index y
  temp = list[x]
  list[x] = list[y]
  list[y] = temp
end

def bubsort(list)
  n = list.length
  for i in (1..n-1) do
    for j in (0..n-2) do
      # swap adjacent elements if they're not in correct order
      if list[j] > list[j+1] then
        swap(list, j, j+1)
      end
    end
  end
end
```

1. Trace the bubsort function for the list [4,7,3,6,1] (n = 5) showing the contents of the list after each iteration of the j loop. Note that the j loop is inside the i loop, so you'll need to trace through the j loop completely more than once.

Your trace should look like the table below. We've filled in the first two rows for you.

```
i   j   list
1   0   [4,7,3,6,1]
1   1   [4,3,7,6,1]
1   2
1   3
2   0
2   1
...etc...
```

2. Do you see any wasted computation in the algorithm above, specifically with the range for the j loop? Rewrite the function so that it does not compare adjacent array elements that don't need to be compared anymore.

3. In the worst case, what is the order of complexity (in big O notation) of the original bubble sort algorithm in terms of n? Briefly explain your answer.

4. In the worst case, what is the order of complexity (in big O notation) of your modified bubble sort algorithm in terms of n? Briefly explain your answer.

2. (1 pt) Suppose you try to use binary search on an unsorted array.

1. Give an example of an array of 15 unsorted elements where the key is in the array but binary search would fail to find it. List the elements in the array that are examined by binary search to explain your answer.

2. Give an example of an array of 15 unsorted elements where the key is in the array and binary search does find it by sheer luck. List the elements in the array that are examined by binary search to explain your answer.
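For reference, here is a minimal sketch of iterative binary search in Ruby that also records which elements it examines. The midpoint convention `mid = (low + high) / 2` is an assumption; if your course notes use a different convention, follow that one when building your answers.

```ruby
# Binary search for key in list, returning [index, examined] where
# examined is the list of elements compared against key, in order.
# Returns [nil, examined] if the search fails.
# (Assumes the midpoint convention mid = (low + high) / 2.)
def bsearch(list, key)
  examined = []
  low = 0
  high = list.length - 1
  while low <= high do
    mid = (low + high) / 2
    examined << list[mid]
    if list[mid] == key then
      return mid, examined
    elsif list[mid] < key then
      low = mid + 1    # search the upper half
    else
      high = mid - 1   # search the lower half
    end
  end
  return nil, examined
end
```

On an unsorted array the comparisons can steer the search toward the wrong half, which is exactly what your two examples should illustrate.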

3. (2 pts) Consider the array [64, 23, 50, 81, 75, 18, 39, 44] which we want to sort using merge sort:

Merge Sort Algorithm:
1. Sort the first half of the array using merge sort.
2. Sort the second half of the array using merge sort.
3. Merge the two sorted halves to get a sorted array.
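The three steps above can be sketched in Ruby as follows. This is a minimal sketch, not the textbook's implementation; the helper names `msort` and `merge` are ours.

```ruby
# Merge two already-sorted lists into one sorted list by repeatedly
# taking the smaller front element.
def merge(a, b)
  result = []
  until a.empty? || b.empty? do
    result << (a.first <= b.first ? a.shift : b.shift)
  end
  result + a + b   # one of a, b is empty; append the leftovers
end

# Steps 1-3: sort each half recursively, then merge the halves.
def msort(list)
  return list if list.length <= 1   # base case: 0 or 1 elements
  mid = list.length / 2
  merge(msort(list[0...mid]), msort(list[mid..-1]))
end
```
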

1. Show the list after step 1 of the algorithm.

2. Show the list after step 2 of the algorithm.

3. Show the merge process of step 3 by tracing this step as shown in the course slides/notes.

4. Explain why this algorithm is recursive. What is its base case?

4. (1 pt) You are given the following recursive function in Ruby:

```ruby
def f(n)
  if n == 0 || n == 1 then
    return n
  else
    return f(n-2) + f(n/2)
  end
end
```

Trace the computation of f(7) by drawing a recursion tree (like the one we drew for Fibonacci numbers), showing all of the recursive computations that need to be computed to find the value for f(7).

5. (2 pts) Here is a recursive algorithm to count the number of odd integers in a list of integers:

1. If the list is empty, then the number of odd integers in the list is 0.
2. Otherwise, if the first element is odd, then the number of odd integers in the list is 1 + the number of odd integers in the rest of the list.
3. Otherwise the first element must be even, so the number of odd integers in the list is the number of odd integers in the rest of the list.

Complete the Ruby function below to perform this computation recursively:

```ruby
def countodd(list)
  if ________________________ then
    return 0
  end
  if ____________________________________ then
    return 1 + ___________________________________
  else
    return ____________________________________
  end
end
```

6. (2 pts) Quick sort is another algorithm for sorting data. In this algorithm, we compare all of the elements in the list to the first element, which we will call the pivot. All of those that are less than the pivot go into one list and all of those that are greater than or equal to the pivot go into a second list. Then we sort these two sublists (recursively). The final sorted list is the first sorted sublist followed by the pivot followed by the second sorted sublist.

For example, if we want to sort the list

```ruby
list = [58, 82, 27, 18, 90, 84, 41, 56, 33]
```
we create two lists, separating the elements based on the pivot 58:

```ruby
list1 = [27, 18, 41, 56, 33]
list2 = [82, 90, 84]
```

Each sublist is sorted (recursively):

```ruby
list1 = [18, 27, 33, 41, 56]
list2 = [82, 84, 90]
```

and the final result is

```
[18, 27, 33, 41, 56] + [58] + [82, 84, 90]
==> [18, 27, 33, 41, 56, 58, 82, 84, 90]
```
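The description above can be sketched in Ruby as follows. This is one possible sketch: partitioning with `select` is our choice, not necessarily how the algorithm would be written in the course notes.

```ruby
# Quick sort as described above: the first element is the pivot,
# elements less than the pivot go to list1, the rest to list2.
def qsort(list)
  return list if list.length <= 1   # nothing left to partition
  pivot = list.first
  rest = list[1..-1]
  list1 = rest.select { |x| x < pivot }
  list2 = rest.select { |x| x >= pivot }
  qsort(list1) + [pivot] + qsort(list2)
end
```
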

Consider the quick sort algorithm sorting a list of n integers.

1. Is quick sort a divide and conquer algorithm? Why or why not?

2. This algorithm is recursive. What is the base case for this recursion? (In other words, when does quick sort stop partitioning a list into two sublists?) BE CAREFUL: This is a little trickier than it seems.

3. Explain when this algorithm would have an order of complexity of O(n log n). Give a simple example to justify your answer.

4. Explain when this algorithm would have an order of complexity of O(n²). Give a simple example to justify your answer.