Write invariants at the indicated spaces in the following functions.
bool
sequentialSearch(ListNode *head, int key) {
    for(ListNode *n = head; n; n = n->next) {
        // INVARIANT: ???
        if(n->data == key) return true;
    }
    return false;
}
void
selectionSort(int *arr, int n) {
    for(int i = 0; i < n; i++) {
        // INVARIANT: ???
        int min_pos = i;
        for(int j = i + 1; j < n; j++) {
            if(arr[j] < arr[min_pos]) min_pos = j;
        }
        int temp = arr[min_pos];
        arr[min_pos] = arr[i];
        arr[i] = temp;
    }
}
void
insertionSort(int *arr, int n) {
    for(int i = 0; i < n; i++) {
        // INVARIANT: ???
        int temp = arr[i];
        int j = i - 1;
        while(j >= 0 && arr[j] > temp) {
            arr[j + 1] = arr[j];
            j--;
        }
        arr[j + 1] = temp;
    }
}
void
pivot(int *arr, int n, int pivot_loc) {
    // move the pivot element to the front; its value is the pivot
    swap(arr[pivot_loc], arr[0]);
    int pivot = arr[0];
    int l = 1;
    int r = n - 1;
    while(l < r) {
        // INVARIANT: ???
        while(l < r && arr[l] <= pivot) l++;
        while(r > l && arr[r] > pivot) r--;
        if(l < r) {
            swap(arr[r], arr[l]);
            l++;
            r--;
        }
    }
    // put the pivot element between the two partitions
    if(arr[l] <= pivot) swap(arr[0], arr[l]);
    else swap(arr[0], arr[l - 1]);
}
In Quiz 3, we computed a parent array for a breadth-first search tree. Let's say we wrote the following code to compute the tree's depth.
int
nodeDepth(int which, int *parent) {
    if(parent[which] == -1) return 0;
    else return 1 + nodeDepth(parent[which], parent);
}
int
treeDepth(int *parent, int num_nodes) {
    int max_depth = -1;
    for(int i = 0; i < num_nodes; i++) {
        int i_depth = nodeDepth(i, parent);
        if(i_depth > max_depth) max_depth = i_depth;
    }
    return max_depth;
}
If num_nodes is n, what is the worst-case running time of treeDepth()?
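To make the parent-array representation concrete, here is a small driver (the example tree is made up for illustration; it assumes the nodeDepth and treeDepth definitions above are in the same file):

#include <cstdio>

int
main() {
    // parent array for the tree rooted at vertex 0 with
    // edges 0-1, 0-2, and 2-3; the root's parent is -1
    int parent[] = { -1, 0, 0, 2 };
    // nodeDepth(3, parent) follows 3 -> 2 -> 0 and returns 2,
    // so this prints 2
    printf("%d\n", treeDepth(parent, 4));
    return 0;
}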
Solve the following recurrence. (This one occurs in a well-known matrix multiplication algorithm that we're not studying.)
T(n) = 7 T(n / 2) + n^2,    T(1) = 1
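As a refresher on the expansion technique, here it is applied to a different, simpler recurrence (mergesort's T(n) = 2 T(n/2) + n with T(1) = 1, for n a power of 2); this is not the answer to the problem above:

\begin{align*}
T(n) &= 2\,T(n/2) + n \\
     &= 4\,T(n/4) + 2n \\
     &\;\;\vdots \\
     &= 2^{k}\,T(n/2^{k}) + k n \\
     &= n\,T(1) + n \lg n \qquad \text{(taking } k = \lg n\text{)} \\
     &= O(n \log n).
\end{align*}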
Challenge problem: Solve the following recurrence:
T(n) = T( n^(1/2) ) + lg lg n,    T(1) = 0
Multiplication: Notice that regular grade-school multiplication of two k-digit numbers takes O(k^2) time. Consider the following alternative technique.
MULTIPLY(a, b)
    k <- max(number of digits in a, number of digits in b)
    // base case
    if k = 1 then return a * b fi
    a_U <- upper k/2 digits of a
    a_L <- lower k/2 digits of a
    b_U <- upper k/2 digits of b
    b_L <- lower k/2 digits of b
    x_1 <- MULTIPLY(a_L, b_L)
    x_2 <- MULTIPLY(a_U, b_U)
    x_3 <- MULTIPLY(a_U + a_L, b_U + b_L) - x_1 - x_2
    return x_2 * 10^k + x_3 * 10^(k/2) + x_1
Addition, subtraction, and multiplying by the O(k)th power
of 10 take O(k) time. Knowing this, how much time will
MULTIPLY take on two k-digit numbers?
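A minimal C++ sketch of MULTIPLY, assuming non-negative long long inputs small enough not to overflow; the helper names (numDigits, powerOf10, multiply) are made up for illustration. The split takes half = k/2 (rounded down) lower digits, so the recombination uses 10^(2*half) and 10^half, which matches the pseudocode when k is even:

#include <algorithm>

// number of decimal digits in x (at least 1)
long long numDigits(long long x) {
    long long d = 1;
    while(x >= 10) { x /= 10; d++; }
    return d;
}

// 10 raised to the power e
long long powerOf10(long long e) {
    long long p = 1;
    while(e-- > 0) p *= 10;
    return p;
}

long long multiply(long long a, long long b) {
    long long k = std::max(numDigits(a), numDigits(b));
    if(k == 1) return a * b;                  // base case: single digits
    long long half = k / 2;
    long long split = powerOf10(half);        // split off the lower 'half' digits
    long long a_U = a / split, a_L = a % split;
    long long b_U = b / split, b_L = b % split;
    long long x_1 = multiply(a_L, b_L);
    long long x_2 = multiply(a_U, b_U);
    long long x_3 = multiply(a_U + a_L, b_U + b_L) - x_1 - x_2;
    return x_2 * split * split + x_3 * split + x_1;
}

Note that each call makes only three recursive multiplications; the additions, subtractions, and shifts by powers of 10 are the O(k)-time work the problem mentions.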
Parallel sort: We can take advantage of a large number of processors by writing procedures that work in parallel. The amount of time taken to do things in parallel is the maximum amount of time taken for any one thing. Say we have an algorithm to merge two arrays of length m and n in O(log(m+n)) time.
// ParallelMerge takes an array arr with the first m elements in
// sorted order and the next n elements in sorted order. It puts all
// of arr into sorted order and takes O(log(m+n)) time.
void ParallelMerge(int *arr, int m, int n);

We can apply this to sorting using a parallel merge sort:
ParallelMergeSort(int *arr, int n) {
    if(n == 1) return;
    in parallel do {
        ParallelMergeSort(arr, n / 2);
        ParallelMergeSort(arr + n / 2, n - n / 2);
    }
    ParallelMerge(arr, n / 2, n - n / 2);
}
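The "in parallel do" construct above is not real C++. One way it might be realized is sketched below using std::thread; the std::inplace_merge call is only a sequential stand-in so the sketch compiles, so it does not achieve the O(log(m+n)) merge the problem assumes:

#include <algorithm>
#include <thread>

// Sequential stand-in for the assumed ParallelMerge (O(m+n), not O(log(m+n))).
void ParallelMerge(int *arr, int m, int n) {
    std::inplace_merge(arr, arr + m, arr + m + n);
}

void ParallelMergeSort(int *arr, int n) {
    if(n == 1) return;
    // run the two recursive sorts at the same time
    std::thread left(ParallelMergeSort, arr, n / 2);
    ParallelMergeSort(arr + n / 2, n - n / 2);
    left.join();
    ParallelMerge(arr, n / 2, n - n / 2);
}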
Write a recurrence for how long it takes to sort an array of length
n, and solve the recurrence using big-O notation.
(Problem taken from Design and Analysis of Algorithms, by Dexter Kozen.)
Subset sum: Say we have n items (where n is a power of 2), each with an integer weight between -m and m. We want to know if there is a subset whose sum is exactly k. The following divide-and-conquer code should do this:
bool*
subsetSumHelper(int *weight, int m, int n) {
    if(n == 1) {
        // base case: for a set with one element, only 0 and the
        // element's weight are possible sums
        bool *achieve = new bool[2 * m + 1];
        achieve += m;
        for(int i = -m; i <= m; i++) achieve[i] = false;
        achieve[0] = true;
        achieve[weight[0]] = true;
        return achieve;
    }
    // compute achievable sums for first and second halves
    bool *achieve_1 = subsetSumHelper(weight, m, n / 2);
    bool *achieve_2 = subsetSumHelper(weight + n / 2, m, n / 2);
    // I can only achieve the sums between -m * n and m * n, inclusive.
    // make achieve be an array so that achieve[-m * n] and
    // achieve[m * n] are both defined.
    bool *achieve = new bool[2 * m * n + 1];
    achieve += m * n;
    for(int i = -m * n; i <= m * n; i++) achieve[i] = false;
    // compute based on first and second halves; if there is a subset
    // of the first half with sum i and there is a subset of the second
    // half with sum j, then there is a subset of the entire array
    // with sum i+j
    for(int i = -m * n / 2; i <= m * n / 2; i++) {
        if(achieve_1[i]) {
            for(int j = -m * n / 2; j <= m * n / 2; j++) {
                if(achieve_2[j]) achieve[i + j] = true;
            }
        }
    }
    // free the arrays allocated by the recursive calls
    delete[] (achieve_1 - m * n / 2);
    delete[] (achieve_2 - m * n / 2);
    // return the achievable sums for my array
    return achieve;
}
bool subsetSum(int *weight, int m, int n, int k) {
    bool *achievable = subsetSumHelper(weight, m, n);
    bool ret = achievable[k];
    delete[] (achievable - m * n);
    return ret;
}
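A small usage sketch, assuming the subsetSum and subsetSumHelper definitions above are in the same file; the weights and target below are made up for illustration:

#include <cstdio>

int
main() {
    // n = 4 items (a power of 2) with weights in [-3, 3], so m = 3
    int weight[] = { 2, -1, 3, -2 };
    // is some subset's sum exactly 5?  yes: {2, 3}, so this prints yes
    printf("%s\n", subsetSum(weight, 3, 4, 5) ? "yes" : "no");
    return 0;
}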
Write a recurrence, in terms of n and m, for how long subsetSum() takes,
and solve the recurrence using big-O notation.
(Problem taken from Design and Analysis of Algorithms, by Dexter Kozen.)
Transitive closure: In class we saw how to compute the transitive closure of a graph in O(n^3) time. (Given the adjacency matrix, we created an array so that entry (i,j) is non-zero if there is a path from vertex i to vertex j). Here we'll see a better way. Assume we have the following routines:
void DisassembleMatrix(int E[][], int *(A[][]), int *(B[][]),
                       int *(C[][]), int *(D[][]), int n);     // O(n^2) time
int[][] ReassembleMatrix(int A[][], int B[][], int C[][],
                         int D[][], int n);                    // O(n^2) time
int[][] MatrixCopy(int A[][], int n);                          // O(n^2) time
void MatrixMultiply(int dest[][], int src[][], int n);         // O(n^2.81) time
void MatrixAdd(int dest[][], int src[][], int n);              // O(n^2) time
Then the following will compute the transitive closure.
int[][]
TransitiveClosure(int E[][], int n) {
    // divide matrix into four n/2 x n/2 submatrices
    //     [ A  B ]
    // E = [      ]
    //     [ C  D ]
    DisassembleMatrix(E, &A, &B, &C, &D, n);
    // compute Dt
    Dt = TransitiveClosure(D, n / 2);
    // compute F = A + B * Dt * C
    F = MatrixCopy(B, n / 2);
    MatrixMultiply(F, Dt, n / 2);
    MatrixMultiply(F, C, n / 2);
    MatrixAdd(F, A, n / 2);
    // compute Ft
    Ft = TransitiveClosure(F, n / 2);
    //        [ Ft            Ft * B * Dt                 ]
    // return [                                           ]
    //        [ Dt * C * Ft   Dt + Dt * C * Ft * B * Dt   ]
    // compute Dt * C * Ft
    lowerleft = MatrixCopy(Dt, n / 2);
    MatrixMultiply(lowerleft, C, n / 2);
    MatrixMultiply(lowerleft, Ft, n / 2);
    // compute Dt + Dt * C * Ft * B * Dt
    lowerright = MatrixCopy(lowerleft, n / 2);
    MatrixMultiply(lowerright, B, n / 2);
    MatrixMultiply(lowerright, Dt, n / 2);
    MatrixAdd(lowerright, Dt, n / 2);
    // compute Ft * B * Dt
    upperright = MatrixCopy(Ft, n / 2);
    MatrixMultiply(upperright, B, n / 2);
    MatrixMultiply(upperright, Dt, n / 2);
    return ReassembleMatrix(Ft, upperright, lowerleft, lowerright, n);
}
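For comparison, the O(n^3) method mentioned above might look like the following sketch (a Warshall-style closure; the flat int array representation is an assumption, not something the handout specifies):

// reach[i * n + j] starts out non-zero exactly when there is an edge i -> j,
// and ends up non-zero exactly when there is a path from i to j.
void
transitiveClosureCubic(int *reach, int n) {
    for(int k = 0; k < n; k++)
        for(int i = 0; i < n; i++)
            for(int j = 0; j < n; j++)
                if(reach[i * n + k] && reach[k * n + j])
                    reach[i * n + j] = 1;
}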
Write a recurrence for how long it takes to compute the transitive
closure for a graph of n vertices, and solve the recurrence
using big-O notation.
(Problem taken from Design and Analysis of Algorithms, by Dexter Kozen.)