Fall 2000 and Spring 2001: The seminar is in Wean 7220 on Fridays from 3:30pm to 5:00pm.

TITLE: **Improved Approximations for Tour and Tree Covers.**

SPEAKER: Ojas Parekh.

WHEN: Friday, Sep. 22, 3:30pm-5:00pm.

WHERE: Wean 7220.

PAPERS:

- Jochen Könemann, Goran Konjevod, Ojas Parekh and Amitabh Sinha: Improved Approximations for Tour and Tree Covers. (Postscript).

A tree (tour) cover of an edge-weighted graph is a set of edges which forms a tree (closed walk) and covers every edge of the graph. Arkin, Halldórsson and Hassin [1] give approximation algorithms with performance ratios of 3.55 and 5.5 for the tree and tour cover problems, respectively. We present algorithms which employ polyhedral rounding techniques to obtain solutions within a factor of 3 of optimal for both problems.

This is joint work with Jochen Könemann, Goran Konjevod and Amitabh Sinha.

REFERENCES:

- Arkin, Halldórsson, and Hassin. Approximating the tree and tour covers.

TITLE: **A Brief Introduction to Wavelets.**

SPEAKER: Paul Komarek.

WHEN: Friday, Oct. 13, 3:30pm-5:00pm.

WHERE: Wean 7220.

PAPERS:

ABSTRACT:

Joseph Fourier's method of harmonic analysis has enjoyed great success and popularity during its two centuries of existence. In 1965, Cooley and Tukey formalized techniques for reducing the discrete Fourier transform's O(n^2) complexity to O(n log n). This "Fast Fourier Transform" (FFT) allowed cheap harmonic analysis for all of science and industry.
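To make the complexity gap concrete, here is a minimal sketch (my own illustration, not part of the talk): a naive O(n^2) DFT next to a radix-2 Cooley-Tukey FFT, checked against each other on a small signal.

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform: O(n^2) complex multiplications."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def fft(x):
    """Radix-2 Cooley-Tukey FFT: O(n log n); n must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even, odd = fft(x[0::2]), fft(x[1::2])
    out = [0] * n
    for k in range(n // 2):
        w = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + w              # butterfly: top half
        out[k + n // 2] = even[k] - w     # butterfly: bottom half
    return out

signal = [1.0, 2.0, 0.0, -1.0, 1.5, 0.5, -2.0, 3.0]
assert all(abs(a - b) < 1e-9 for a, b in zip(dft(signal), fft(signal)))
```

The recursion splits the transform into even- and odd-indexed halves, which is exactly where the n log n cost comes from.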

In the early 1990s, a competing method of harmonic analysis quickly gained popularity. This new method employs a basis of "wavelets": functions consisting of one "wave" surrounded by vanishing tails. The non-periodicity of wavelets allows them to represent non-periodic functions more efficiently than Fourier's sinusoidal basis, and provides an (arguably) better time-versus-frequency trade-off than the FFT.

I intend to briefly review properties of Fourier methods and introduce wavelet bases. The discrete wavelet transform will be contrasted with the FFT and, technology willing, their strengths and weaknesses demonstrated. I will finish with a discussion of some applications in which wavelets excel. I hope to make this talk suitable for anyone with a basic proficiency in calculus.

TITLE: Approximation for the maximum quadratic assignment problem.

SPEAKER: Amitabh Sinha.

WHEN: Friday, Oct. 20, 3:30pm-5:00pm.

WHERE: Wean 7220.

PAPERS: E. Arkin, R. Hassin, and M. Sviridenko: Approximation for the maximum quadratic assignment problem, SODA 2000.

ABSTRACT:

Given three n x n nonnegative symmetric matrices A, B, C, the max quadratic assignment problem asks for a permutation \pi of {1,2,...,n} which maximizes \sum_{i \ne j} a_{\pi(i),\pi(j)} b_{ij} + \sum_i c_{i,\pi(i)}. The authors provide a fast and elegant combinatorial algorithm which is a 1/4-approximation to the problem when the matrix B satisfies the triangle inequality. This framework can be used to model various graph-theoretic problems, such as MAX-TSP, MAX-BISECTION, MAX-PERFECT-MATCHING, etc. While the general problem and each of the special cases have been studied extensively (and the special cases can usually be approximated much better), this paper gives the best known polynomial-time approximation for the general problem.
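For orientation, the objective above can be evaluated directly; the sketch below (my own toy instance, not from the paper) computes it for a permutation and brute-forces the maximum over all permutations, which is only feasible for tiny n.

```python
from itertools import permutations

def qap_value(A, B, C, pi):
    """Objective: sum over i != j of A[pi[i]][pi[j]] * B[i][j],
    plus the linear term sum over i of C[i][pi[i]]."""
    n = len(pi)
    quad = sum(A[pi[i]][pi[j]] * B[i][j]
               for i in range(n) for j in range(n) if i != j)
    lin = sum(C[i][pi[i]] for i in range(n))
    return quad + lin

# Tiny symmetric instance; B satisfies the triangle inequality.
A = [[0, 3, 1], [3, 0, 2], [1, 2, 0]]
B = [[0, 1, 2], [1, 0, 1], [2, 1, 0]]
C = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
best = max(permutations(range(3)), key=lambda p: qap_value(A, B, C, p))
```

The approximation algorithm in the paper avoids this exhaustive search, of course; the brute force is only there to pin down what is being maximized.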

TITLE: Investigation of the Irregularity Strength of Trees

SPEAKER: David Kravitz.

WHEN: Friday, Nov. 17, 3:30pm-5:00pm.

WHERE: Wean 7220.

PAPERS: Here's a copy of David's bachelor thesis: dkravitz.ps

ABSTRACT:

The irregularity strength of a graph has recently become of interest to both number theorists and graph theorists. Here we deal with a specific conjecture relating to the irregularity strength of trees, which claims that the strength is essentially either the number of vertices of degree 1 or the average of the number of vertices of degree 1 and degree 2. After making the reader familiar with the necessary vocabulary in graph theory and defining irregularity strength, I present a brief survey of known results. Then, I describe methods used to try to prove the conjecture, and how this led to an infinite series of counterexamples to refute it. The idea behind the counterexamples arose from extensive computer experimentation. This is joint work with Felix Lazebnik and Gary Ebert.
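The irregularity strength of a graph G is the smallest s such that the edges of G can be labeled with integers from {1,...,s} so that every vertex gets a distinct weighted degree (the sum of the labels on its incident edges). A brute-force checker (my own sketch, far slower than anything used in the thesis) makes the definition concrete; note that on a star the strength equals the number of leaves, matching the degree-1 count in the conjecture.

```python
from itertools import product

def irregularity_strength(vertices, edges):
    """Smallest s such that some labeling of the edges with {1..s}
    makes all weighted degrees distinct. Exponential brute force."""
    s = 1
    while True:
        for labels in product(range(1, s + 1), repeat=len(edges)):
            deg = {v: 0 for v in vertices}
            for (u, v), w in zip(edges, labels):
                deg[u] += w
                deg[v] += w
            if len(set(deg.values())) == len(vertices):
                return s
        s += 1

# Path on 3 vertices, and the star K_{1,3} (three degree-1 vertices).
p3 = irregularity_strength([0, 1, 2], [(0, 1), (1, 2)])
star = irregularity_strength([0, 1, 2, 3], [(0, 1), (0, 2), (0, 3)])
```

On the star, the three leaves need pairwise-distinct labels, which forces at least three label values; labels 1, 2, 3 then already work.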

Title: Average Case Analysis of Random Hamilton Cycle and Travelling Salesman Problems: a survey of results and problems.

Speaker: Alan Frieze.

Where: Wean 7220

When: Friday, Dec. 08, 3:30pm -5:00 pm

Title: On the Rank of Mixed 0,1 Polyhedra

Speaker: Yanjun Li.

Where: Wean 7220

When: Friday, Dec. 15, 3:30pm -5:00 pm

ABSTRACT:

Eisenbrand and Schulz showed recently (IPCO 99) that the maximum Chv\'atal rank of a polytope in the $[0,1]^n$ cube is bounded above by $O(n^2 \log n)$ and bounded below by $(1+\epsilon)n$ for some $\epsilon > 0$. It is well known that Chv\'atal's cuts are equivalent to Gomory's fractional cuts, which are themselves dominated by Gomory's mixed integer cuts. What do these upper and lower bounds become when the rank is defined relative to Gomory's mixed integer cuts? An upper bound of $n$ follows from existing results in the literature. In this note, we show that the lower bound is also equal to $n$. We relate this result to bounds on the disjunctive rank and on the Lov\'asz-Schrijver rank of polytopes in the $[0,1]^n$ cube. The result still holds for mixed 0,1 polyhedra with $n$ binary variables. This is joint work with G\'erard Cornu\'ejols.

Title: The Plank Problem from the Viewpoint of Hypergraph Matchings and Covers

Speaker: Ron Holzman

Where: Wean 7220

When: Friday, 2/9/2001, 3:30-5:00pm

ABSTRACT:

The "plank problem", open since 1950, is the following conjecture. Suppose that the convex set K is contained in the unit cube of R^n, and meets all of the cube's facets. Suppose further that a collection of planks is given (where "plank" means a strip determined by two parallel hyperplanes orthogonal to one of the axes), and the union of these planks covers K. Then the sum of the planks' widths must be at least 1. We propose an analogy between this problem and familiar concepts from the theory of hypergraphs. This leads to a proof of the conjecture in certain special cases, and to a formulation of a fractional counterpart of the conjecture (corresponding to a linear programming relaxation of the problem), which we are able to verify to within a factor of 2. This is joint work with Ron Aharoni, Michael Krivelevich and Roy Meshulam.

Title: A shorter proof of Guenin's characterization of weakly bipartite graphs

Speaker: Giacomo Zambelli.

Where: Wean 7220

When: Friday, 2/16/2001, 3:30-5:00pm

ABSTRACT:

We give a proof of Guenin's theorem characterizing weakly bipartite graphs as those with no odd-$K_5$ minor. The proof curtails the technical and case-checking parts of Guenin's original proof.

Title: Analysis of SRPT scheduling: Investigating Unfairness

Speaker: Nikhil Bansal.

Where: Wean 7220

When: Friday, 2/23/2001, 3:30-5:00pm

ABSTRACT:

The Shortest-Remaining-Processing-Time (SRPT) scheduling policy has long been known to be optimal for minimizing the mean response time (the mean time from when a job arrives until it finishes execution). Despite this fact, SRPT scheduling is rarely used in practice. It is believed that the performance improvements of SRPT over other scheduling policies with respect to mean response time stem from the fact that SRPT unfairly penalizes the large jobs in order to help the small jobs. This belief has led people to instead adopt "fair" scheduling policies such as Processor-Sharing (PS).

In this talk, we will investigate the problem of unfairness in SRPT. We will show that, surprisingly, SRPT is not unfair towards large jobs, particularly when the service requirements of jobs resemble those observed in the real world (i.e. the distribution of service requirements is heavy-tailed).
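To see the policy in action, here is a minimal single-server SRPT simulator (my own sketch, not code from the talk): the server always runs the job with least remaining work, preempting on arrivals.

```python
import heapq

def srpt_mean_response(jobs):
    """jobs: list of (arrival_time, size) on one unit-speed server.
    Runs preemptive SRPT and returns the mean response time."""
    jobs = sorted(jobs)
    heap = []                 # (remaining_work, arrival): shortest first
    t, i, total = 0.0, 0, 0.0
    while i < len(jobs) or heap:
        if not heap:          # server idle: jump to the next arrival
            t = max(t, jobs[i][0])
        while i < len(jobs) and jobs[i][0] <= t:
            heapq.heappush(heap, (jobs[i][1], jobs[i][0]))
            i += 1
        rem, arr = heapq.heappop(heap)
        # Run the job with least remaining work until it finishes,
        # or until the next arrival gets a chance to preempt it.
        horizon = jobs[i][0] if i < len(jobs) else float("inf")
        run = min(rem, horizon - t)
        t += run
        if run < rem:
            heapq.heappush(heap, (rem - run, arr))
        else:
            total += t - arr  # response time = completion - arrival
    return total / len(jobs)

# A size-1 job arriving at t=1 preempts a size-10 job started at t=0,
# so the small job finishes almost immediately.
```

Running it on the two-job example shows the mechanism behind the "unfairness" debate: the large job absorbs the whole delay that the small job avoids.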

Title: A \Sigma_2-complete optimization problem and related hardness of approximation results.

Speaker: Abie Flaxman.

Where: Wean 7220

When: Friday, 3/16/2001, 3:30-5:00pm

ABSTRACT:

The recent thesis of Chris Umans proves that several interesting problems related to logic minimization are "harder than NP-hard". One such problem is MIN-DNF: given a DNF formula \phi and an integer k, decide whether there exists a DNF formula \psi equivalent to \phi with at most k literals. We will present Umans' proof that MIN-DNF is \Sigma_2-complete, and outline a technique for proving hardness-of-approximation results that does not require the PCP theorem.
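A small sketch (mine, not Umans') shows why MIN-DNF has a \Sigma_2 flavor: even after guessing a candidate \psi, verifying that it is equivalent to \phi is itself a coNP-style check over all assignments.

```python
from itertools import product

def eval_dnf(terms, assignment):
    """DNF as a list of terms; a term is a list of signed literals
    (+i means x_i, -i means not x_i). True if any term is satisfied."""
    return any(all(assignment[abs(l) - 1] == (l > 0) for l in term)
               for term in terms)

def equivalent(phi, psi, n):
    """Brute-force equivalence over all 2^n assignments: the guessed
    formula psi still needs this exponential verification."""
    return all(eval_dnf(phi, a) == eval_dnf(psi, a)
               for a in product([False, True], repeat=n))

# x1 x2  OR  x1 (not x2)  simplifies to just  x1  (4 literals down to 1).
phi = [[1, 2], [1, -2]]
psi = [[1]]
```

The "exists \psi, for all assignments" quantifier pattern is exactly the shape of a \Sigma_2 predicate.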

Speaker: Gregory Sorkin, IBM T.J. Watson

Topic: Optimal myopic algorithms for random 3-SAT

When: Thursday, 3/22/01, 2:30-4:00pm

Where: Wean 4625

Abstract:

Let $F_3(n,m)$ be a random 3-SAT formula formed by selecting uniformly, independently, and with replacement, $m$ clauses among all $8 \binom{n}{3}$ possible 3-clauses over $n$ variables. It has been conjectured that there exists a constant $r_3$ such that for any $\epsilon > 0$, $F_3(n,(r_3-\epsilon)n)$ is almost surely satisfiable, but $F_3(n,(r_3+\epsilon)n)$ is almost surely unsatisfiable. The best lower bounds for the potential value of $r_3$ have come from analyzing rather simple extensions of unit-clause propagation. Recently, it was shown that all these extensions can be cast in a common framework and analyzed in a uniform manner by employing differential equations. Here, we determine optimal algorithms expressible in that framework, establishing $r_3 > 3.26$. We extend the analysis via differential equations, and make extensive use of a new optimization problem we call "max-density multiple-choice knapsack". The structure of optimal knapsack solutions elegantly characterizes the choices made by an optimal algorithm.
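As a toy version of the setup (my own sketch; the algorithms analyzed in the talk are more refined than this), the following generates $F_3(n,m)$ and runs the basic unit-clause heuristic: satisfy forced unit clauses first, otherwise set a random unset literal, with no backtracking.

```python
import random

def random_3sat(n, m, rng):
    """m clauses, each on 3 distinct variables with random signs,
    so there are 8 * C(n,3) possible clauses (literal +i is x_i)."""
    return [[v * rng.choice([-1, 1]) for v in rng.sample(range(1, n + 1), 3)]
            for _ in range(m)]

def unit_clause(formula, n, rng):
    """Unit-clause heuristic: returns a satisfying assignment, or None
    on failure (an empty or contradictory clause; no backtracking)."""
    clauses = [list(c) for c in formula]
    assign = {}
    while len(assign) < n:
        units = [c[0] for c in clauses if len(c) == 1]
        if units:
            lit = units[0]
            if -lit in units:            # x and not-x both forced
                return None
        else:                            # free step: random literal
            var = rng.choice([v for v in range(1, n + 1) if v not in assign])
            lit = var * rng.choice([-1, 1])
        assign[abs(lit)] = lit > 0
        new = []
        for c in clauses:
            if lit in c:
                continue                 # clause satisfied, drop it
            c = [l for l in c if l != -lit]
            if not c:
                return None              # empty clause: dead end
            new.append(c)
        clauses = new
    return assign

rng = random.Random(0)
formula = random_3sat(50, 100, rng)      # density 2.0, below the threshold
assignment = unit_clause(formula, 50, rng)
```

At low clause density this heuristic usually succeeds; the extensions discussed in the talk differ in how the free step chooses its literal.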

Speaker: Luis Zuluaga.

Topic: Optimal semi-parametric bounds for the payoff of European options via semidefinite programming.

When: Friday, 4/13/01, 3:30-5:00pm

Where: Wean 7220

Abstract:

Bounds for the payoff of European options that depend only on the moments of the underlying asset(s) and not on its entire distribution are called semi-parametric bounds. These types of bounds have been used to estimate option prices under the risk-neutral measure pricing theory. They also yield a relationship between option prices and the true distribution of the underlying asset(s). Motivated by previous work by Bertsimas and Popescu, we show how to obtain optimal semi-parametric bounds for European options whose payoff depends on two (possibly correlated) assets by solving a semidefinite program. Also, we characterize the feasibility set for these problems, a question that falls into the theory of generalized moment problems. This connection between semidefinite programming and moment problems is closely related to Hilbert's classical 17th problem regarding definite forms. Finally, we show some numerical results for European exchange options. These results are obtained by solving the corresponding semidefinite programming formulation.

Speaker: Jozef Skokan.

Topic: Small Cliques in 3-uniform Hypergraphs

When: Friday, 4/27/01, 3:30-5:00pm

Where: Wean 7220

Abstract:

Many applications of Szemer\'edi's Regularity Lemma are based on the following technical fact: If $G$ is a $k$-partite graph with $V(G) = \bigcup_{i=1}^{k} V_i$, $|V_i|=n$ for all $i\in [k]$, and all pairs $\{V_i, V_j\}$, $1\le i < j\le k$, are $\epsilon$-regular of density $d$, then $G$ contains $d^{{k \choose 2}}n^k(1+f(\epsilon))$ cliques $K_{k}^{(2)}$, where $f(\epsilon)$ tends to~$0$ as $\epsilon$ tends to~$0$.

B.~Nagle and V.~R\"odl established the analogous statement for $3$-uniform hypergraphs. In this talk, we present a proof of the same result by a simpler argument.

This is joint work with Y. Peng and V. R\"odl.

Speaker: Bjarni Halldorsson.

Topic: On the Approximability of the Minimum Test Collection Problem

When: Friday, 5/4/01, 3:30-5:00pm

Where: Wean 7220

Paper: TestCollection.ps

Abstract:

The minimum test collection problem is defined as follows. Given a ground set $\mathcal{S}$ and a collection $\mathcal{C}$ of tests (subsets of $\mathcal{S}$), find the minimum subcollection $\mathcal{C'}$ of $\mathcal{C}$ such that for every pair of elements $(x,y)$ in $\mathcal{S}$ there exists a test in $\mathcal{C'}$ that contains exactly one of $x$ and $y$. In this way, the incidence of each element of $\mathcal{S}$ to the tests in $\mathcal{C'}$ provides a unique signature for it. It is well known that the greedy algorithm gives a $1+2\ln n$ approximation for the test collection problem where $n = |\mathcal{S}|$, the size of the ground set. In this paper, we show that this algorithm is close to the best possible, namely that there is no $o(\log n)$-approximation algorithm for the test collection problem unless $P = NP$.

We also give new approximation algorithms for this problem in the case when all the tests have a small cardinality, significantly improving the performance guarantee achievable by the greedy algorithm. In particular, for instances with test sizes at most $k$ we derive an $O(\log k)$ approximation. For the special case when $k=2$ we give a $\frac{7}{6} + \epsilon$ approximation.

This is joint work with M.M. Halld\'orsson and R. Ravi.
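The greedy algorithm mentioned in the abstract is easy to state concretely. Below is a sketch of it on a made-up instance (my own illustration, not the paper's code): repeatedly pick the test that separates the most not-yet-separated pairs.

```python
from itertools import combinations

def separated(t, pairs):
    """Pairs (x, y) for which test t contains exactly one of x and y."""
    return {(x, y) for (x, y) in pairs if (x in t) != (y in t)}

def greedy_test_collection(ground, tests):
    """Greedy cover of all element pairs, the classical 1 + 2 ln n
    approximation for minimum test collection."""
    pairs = set(combinations(sorted(ground), 2))
    chosen = []
    while pairs:
        best = max(tests, key=lambda t: len(separated(t, pairs)))
        gain = separated(best, pairs)
        if not gain:
            return None          # some pair no test can separate
        chosen.append(best)
        pairs -= gain
    return chosen

S = {1, 2, 3, 4}
C = [frozenset({1, 2}), frozenset({1, 3}), frozenset({1}), frozenset({2, 3, 4})]
sol = greedy_test_collection(S, C)
```

On this instance two tests suffice, and the incidence pattern of each element with the chosen tests is its unique signature.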

Documents linked from this page can be read using PostScript Ghostview or GSview, Adobe Acrobat Reader, Sun StarOffice Impress or Microsoft PowerPoint Viewer.

jochen@cmu.edu