# 15-210: Parallel and Sequential Data Structures and Algorithms

## Overview

This course aims to teach methods for designing, analyzing, and programming sequential and parallel algorithms and data structures. The emphasis is on teaching fundamental concepts applicable across a wide variety of problem domains, and transferable across a reasonably broad set of programming languages and computer architectures. The course, however, includes a significant programming component in which students will program concrete examples from domains such as engineering, scientific computing, graphics, data mining, and information retrieval (web search). Unlike a traditional introduction to algorithms and data structures, this course puts an emphasis on parallel thinking—i.e., thinking about how algorithms can do multiple things at once instead of one at a time. The course follows up on material learned in 15-122 and 15-150 but goes into significantly more depth on algorithmic issues.

The concepts covered by this course include:

• Problem vs. Algorithm: A problem defines an interface (e.g., sorting) while an algorithm defines a particular method to solve the problem (e.g., quicksort). Being able to cleanly define a problem is key in the process of putting together large programs and in reusing their components. We cover a variety of fundamental problems (sorting, minimum spanning tree, nearest neighbors, ...) and a variety of algorithms for solving them (quicksort, Dijkstra's algorithm, ...).
• Asymptotic Analysis: Although the so-called big-O analysis is not going to accurately predict wall-clock time, it is critical for getting a rough estimate of the performance of an algorithm and helping guide algorithm design and selection. In the class, we analyze performance in terms of work, span and space. We cover a variety of techniques for analyzing asymptotic performance including solving recurrences, randomized analysis, and various counting arguments.
• Techniques in Algorithm Design: There are just a few techniques that play a key role in the design of most parallel/sequential algorithms and data structures. In this class, we cover these techniques, including using collections (sets, sequences, priority queues, ...), divide-and-conquer, contraction, the greedy method, balanced trees, hashing, sparse matrices, and dynamic programming.
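As a small illustration of two of the themes above—divide-and-conquer and work/span analysis—here is a hedged mergesort sketch (not course-provided code; the function names are illustrative). If the two recursive calls are evaluated in parallel, the work satisfies W(n) = 2W(n/2) + O(n) = O(n log n), while the span satisfies S(n) = S(n/2) + O(n) = O(n), since the merge here is sequential.

```sml
(* Sequential merge of two sorted integer lists. *)
fun merge ([], ys) = ys
  | merge (xs, []) = xs
  | merge (x::xs, y::ys) =
      if x <= y then x :: merge (xs, y::ys)
      else y :: merge (x::xs, ys)

(* Divide-and-conquer mergesort.  The two recursive calls are
   independent, so they could in principle run in parallel:
   work O(n log n), span O(n) with this sequential merge. *)
fun msort [] = []
  | msort [x] = [x]
  | msort xs =
      let
        val half = length xs div 2
        val left  = List.take (xs, half)
        val right = List.drop (xs, half)
      in
        merge (msort left, msort right)
      end
```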
The assignments use Standard ML (SML). We use SML for several reasons, including: (1) it is safe for parallelism, (2) it properly supports higher-order functions, and (3) its module system does a good job of supporting abstract interfaces without confounding interfaces with state.
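To make point (3) concrete, here is a minimal sketch (illustrative names, not course material) of how an SML signature cleanly separates a problem's interface from one particular implementation of it:

```sml
(* The interface: what a stack must provide, with no mention of
   how it is represented. *)
signature STACK =
sig
  type 'a stack
  val empty : 'a stack
  val push  : 'a * 'a stack -> 'a stack
  val pop   : 'a stack -> ('a * 'a stack) option
end

(* One implementation among many: a stack backed by a list. *)
structure ListStack : STACK =
struct
  type 'a stack = 'a list
  val empty = []
  fun push (x, s) = x :: s
  fun pop []      = NONE
    | pop (x::s)  = SOME (x, s)
end
```

Client code written against `STACK` never sees the underlying list, so the representation can be swapped out without touching the clients—the same interface/implementation split the course applies to problems and algorithms.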

## Course Information

### Lectures:

Tue and Thu 10:30-11:50, BH 136A (Adamson Wing)

### Textbook:

There is no course textbook. Lecture notes will be provided.

### Recitations:

• Section A - Wed 10:30-11:50, GHC 4301, Chris Martens
• Section B - Wed 11:30-12:20, SH 222, Ian Voysey
• Section C - Wed 12:30-1:20, SH 222, Aapo Kyrola
• Section D - Wed 1:30-2:20, DH 1211, Kanat Tangwongsan and Margaret Reid Miller