\documentclass[11pt,twoside]{scrartcl}

%opening
\newcommand{\lecid}{15-414}
\newcommand{\leccourse}{Bug Catching: Automated Program Verification}
\newcommand{\lecdate}{} %e.g. {October 21, 2013}
\newcommand{\lecnum}{13}
\newcommand{\lectitle}{Satisfiability \& DPLL}
\newcommand{\lecturer}{Matt Fredrikson}

\usepackage{listings}

\usepackage{lecnotes}

\usepackage{tikz}

\usepackage[irlabel]{bugcatch}


\usetikzlibrary{automata,shapes,positioning,matrix,shapes.callouts,decorations.text}

\tikzset{onslide/.code args={<#1>#2}{%
  \only<#1>{\pgfkeysalso{#2}} % \pgfkeysalso doesn't change the path
}}

\tikzset{
    invisible/.style={opacity=0,text opacity=0},
    visible on/.style={alt={#1{}{invisible}}},
    alt/.code args={<#1>#2#3}{%
      \alt<#1>{\pgfkeysalso{#2}}{\pgfkeysalso{#3}} % \pgfkeysalso doesn't change the path
    },
  }

\definecolor{mygray}{rgb}{0.5,0.5,0.5}
\definecolor{backgray}{gray}{0.95}
\lstdefinestyle{whyml}{
  belowcaptionskip=1\baselineskip,
  breaklines=true,
  language=[Objective]Caml,
  showstringspaces=false,
  numbers=left,
  xleftmargin=2em,
  framexleftmargin=1.5em,
  numbersep=5pt,
  numberstyle=\tiny\color{mygray},
  basicstyle=\footnotesize\ttfamily,
  keywordstyle=\color{blue},
  commentstyle=\itshape\color{purple!40!black},
  tabsize=2,
  backgroundcolor=\color{backgray},
  escapechar=\%,
  morekeywords={predicate,invariant}
}

\begin{document}
\lstset{style=whyml}

\maketitle
\thispagestyle{empty}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\section{Introduction}

In this lecture we will switch gears a bit from proving logical theorems ``by hand'' to algorithmic techniques for proving them automatically. Such algorithms are called \textbf{decision procedures}: given a formula in some logic, they decide its validity after a finite amount of computation.

Until now, we have gradually built up from proving properties about formulas in propositional logic, to doing so for first-order dynamic logic. As we begin discussing decision procedures, we will return to propositional logic so that the techniques applied by these algorithms can be more clearly understood. Decision procedures for propositional logic are often referred to as SAT solvers, as they work by exploiting the relationship between validity and satisfiability, and directly solve the latter problem. Later on, we will see that these same techniques underpin decision procedures for richer logics, and are able to automatically prove properties about programs.

\section{Review: Propositional Logic}

We'll focus on automating the decision problem for Boolean satisfiability. Let's start by refreshing ourselves on the fundamentals of propositional logic.   The formulas $F,G$ of propositional logic are defined by the following grammar (where $p$ is an atomic proposition, or \emph{atom}):
\[
F \bebecomes \top \alternative \bot \alternative p \alternative \lnot F \alternative F\land G \alternative F\lor G \alternative F\limply G \alternative F \lbisubjunct G
\]
When it comes to the semantics, recall that the meaning of formulas is given by an interpretation $I$ that gives the truth value for each atom. Given an interpretation, we can assign values to formulas constructed using the logical operators.
\begin{definition}[Semantics of propositional logic] \label{def:propositional-semantics}
The propositional formula $F$ is true in interpretation $\iget[const]{\I}$, written \(I \models F\), as inductively defined by distinguishing the shape of formula $F$:
\begin{enumerate}
\item \(I \models \top\) for all interpretations $I$.
\item \(I \not\models \bot\) for all interpretations $I$.
\item \(I \models p\) iff \(I(p)=\mtrue\) for atoms $p$.
\item \(I \models F\land G\) iff \(I \models F\) and \(I \models G\).
\item \(I \models F \lor G\) iff \(I \models F\) or \(I \models G\).
\item \(I \models \lnot F\) iff \(I \not\models F\).
\item \(I \models F \limply G\) iff \(I \not\models F\) or \(I \models G\).
\end{enumerate}
\end{definition}
Our notation for interpretations is essentially a list of all atoms that are $\mtrue$. So, the interpretation:
\[
I = \{p, q\}
\]
assigns the value $\mtrue$ to $p$ and $q$, and $\mfalse$ to all others. For example, the formula in Equation~\ref{eq:ex1} below would evaluate to $\mtrue$ under $I$: because $I(p) = \mtrue, I(q) = \mtrue, I(r) = \mfalse$, we have $I \models p \land q$ but $I \not\models p \land q \limply r$, so the antecedent of the outermost implication is false and the implication itself is true.
\begin{equation}
\label{eq:ex1}
(p\land q \limply r) \land (p\limply q) \limply (p \limply r)
\end{equation}
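To make the semantics concrete, Definition~\ref{def:propositional-semantics} translates directly into a recursive evaluator. Here is a Python sketch (the tuple encoding of formulas is our own choice for illustration):

```python
# Formulas are represented as nested tuples: an atom is a string, the
# constants are the Python booleans, and compound formulas are
# ('not', F), ('and', F, G), ('or', F, G), or ('imp', F, G).
# An interpretation I is the set of atoms assigned true.

def holds(I, F):
    """Return True iff interpretation I satisfies formula F."""
    if F is True or F is False:
        return F
    if isinstance(F, str):                  # atom: true iff it appears in I
        return F in I
    op = F[0]
    if op == 'not':
        return not holds(I, F[1])
    if op == 'and':
        return holds(I, F[1]) and holds(I, F[2])
    if op == 'or':
        return holds(I, F[1]) or holds(I, F[2])
    if op == 'imp':
        return (not holds(I, F[1])) or holds(I, F[2])
    raise ValueError(f"unknown operator: {op}")

# The formula of Equation (eq:ex1): (p /\ q -> r) /\ (p -> q) -> (p -> r)
F = ('imp',
     ('and', ('imp', ('and', 'p', 'q'), 'r'), ('imp', 'p', 'q')),
     ('imp', 'p', 'r'))
```

Each case of the \texttt{if} mirrors one clause of the inductive definition above.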
We use some additional terminology to refer to formulas that are true under some or all possible interpretations.
\begin{definition}[Validity and Satisfiability]
  A formula $F$ is called \dfn{valid} iff it is true in all interpretations, i.e. \(I \models F\) for all interpretations $I$.
  Because any interpretation makes valid formulas true, we also write \(\entails F\) iff formula $F$ is valid.
  A formula $F$ is called \dfn{satisfiable} iff there is an interpretation $\iget[const]{\I}$ in which it is true, i.e. \(I \models F\).
  Otherwise it is called \dfn{unsatisfiable}.
\end{definition}
Satisfiability and validity are duals of each other. That is, a formula $F$ is valid if and only if $\lnot F$ is unsatisfiable. 
\begin{equation}
F~\text{is valid} \lbisubjunct \lnot F~\text{is unsatisfiable}
\end{equation}
Importantly, this means that we can decide whether a formula is valid by reasoning about the satisfiability of its negation. A proof of the validity of $F$ from the unsatisfiability of $\lnot F$ is called a \emph{refutation}. Most efficient decision procedures take this approach, and therefore directly solve the satisfiability problem; such tools are called SAT solvers, after the propositional SAT problem. If a SAT solver finds no satisfying interpretation for $F$, then we can conclude that $\lnot F$ is valid.

\section{A Simple Procedure}

Conceptually, SAT is not a difficult problem to solve. Each atom in the formula corresponds to a binary choice, and there are a finite number of them to deal with. Recall from the second lecture how we used truth tables to determine the validity of a formula:
\begin{enumerate}
\item Enumerate all possible interpretations of the atoms in $F$.
\item Continue evaluating all subformulas until the formula is a Boolean constant.
\item $F$ is valid iff it is $\mtrue$ under all interpretations.
\end{enumerate}
We can modify this procedure to decide satisfiability in the natural way.
\begin{enumerate}
\item Enumerate all possible assignments of the atoms in $F$.
\item Continue evaluating all subformulas until the formula is a Boolean constant.
\item $F$ is satisfiable iff it is $\mtrue$ under at least one interpretation.
\end{enumerate}
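These steps translate into a brute-force Python sketch (for illustration only; here a formula is encoded as a predicate over an interpretation, and all $2^n$ assignments are enumerated with \texttt{itertools.product}):

```python
from itertools import product

def satisfiable(atoms, F):
    """Enumerate all 2^n interpretations of the given atoms and check
    whether F is true under at least one. F is a predicate mapping an
    interpretation (a dict from atom to bool) to a bool."""
    return any(F(dict(zip(atoms, vals)))
               for vals in product([True, False], repeat=len(atoms)))

def valid(atoms, F):
    """F is valid iff it is true under all interpretations."""
    return all(F(dict(zip(atoms, vals)))
               for vals in product([True, False], repeat=len(atoms)))

# The formula of Equation (eq:ex1), written as a predicate:
F = lambda I: (not ((not (I['p'] and I['q']) or I['r'])
                    and (not I['p'] or I['q']))
               or (not I['p'] or I['r']))
```

Note that \texttt{valid} and \texttt{satisfiable} differ only in replacing ``all'' by ``at least one'', reflecting the duality discussed above.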
Implementing this procedure is fairly straightforward. The only part that might be tricky is enumerating the valuations, making sure that \textit{i)} we don't miss any, and \textit{ii)} we don't enumerate any of them more than once, which could lead to nontermination.

One natural way to do this is to use recursion, letting the stack implicitly keep track of which valuations have already been tried. We will rely on two helper functions to do this, which are outlined informally below.
\begin{itemize}
\item \texttt{choose\_atom: formula -> atom}. This function takes a formula argument and returns an arbitrary atom appearing in it.
\item \texttt{subst: formula -> atom -> bool -> formula}. Takes a formula, an atom appearing in the formula, and a Boolean value, and returns a new formula with all occurrences of the atom replaced by the Boolean value. It also simplifies the result as much as possible, attempting to reduce the formula to a constant.
\end{itemize}
The function \texttt{sat} is given below. At each recursive step, the function begins by comparing the formula to the constants \texttt{true} and \texttt{false}, as a final decision can be made immediately in either case. Otherwise, it proceeds by selecting an arbitrary atom \texttt{p} from \texttt{F}, and creating two new formulas \texttt{Ft} and \texttt{Ff} by substituting \texttt{true} and \texttt{false}, respectively, and simplifying them as much as possible so that if there are no unassigned atoms in the formula then they are reduced to the appropriate constant. \texttt{sat} then makes two recursive calls on \texttt{Ft} and \texttt{Ff}, and if either returns \texttt{true} then \texttt{sat} does as well.

\begin{minipage}{\linewidth}
\begin{lstlisting}[escapeinside={<*}{*>}]
let rec sat (F:formula) : bool =
	if F = true then true
	else if F = false then false
	else begin
		let p = choose_atom(F) in
		let Ft = (subst F p true) in
		let Ff = (subst F p false) in
		sat Ft || sat Ff
	end
\end{lstlisting}
\end{minipage}

\noindent
Intuitively, we can think of this approach as exhaustive case splitting. The procedure chooses an atom $p$, splits it into cases $p$ and $\lnot p$, and recursively applies itself to the cases. If either is satisfiable, then the original is as well. We know this will terminate because each split eliminates an atom, and there are only a finite number of atoms in a formula.

We now have a basic SAT solver. We know that SAT is a hard problem, and more precisely that it is NP-complete, and a bit of thought about this code should convince us that this solver will exhibit its worst-case runtime of $2^n$ much of the time. There is a chance that we get lucky and conclude early that the formula is satisfiable, but for unsatisfiable formulas \texttt{sat} won't terminate until it has exhausted all of the possible variable assignments. Can we be more clever than this?

\section{Normal Forms and Simplification}

Before continuing, we should address a major uncertainty about our naive \texttt{sat} procedure. We assumed that the function \texttt{subst} ``simplifies'' the formula after making a substitution, but what exactly does this mean and how can we do it efficiently? In practice, most SAT procedures require that formulas be provided in a \emph{normal form}, which makes this step easier.

\paragraph{Basic Identities.}

When a formula contains the constants $\top$ and $\bot$, a number of simplifications follow directly from the semantics of propositional logic. We will ignore the implication operator, as it can be rewritten in terms of negation and disjunction. For negation, conjunction, and disjunction, we can use the following equivalences to rewrite formulas containing constants in simpler terms:
\begin{align}
& \lnot \top \lbisubjunct \bot \lbisubjunct F \land \bot \\
& \lnot \bot \lbisubjunct \top \lbisubjunct F \lor \top \\
& F \lor \bot \lbisubjunct F \lbisubjunct F \land \top
\end{align}
Repeatedly applying these simplifications to a formula containing \emph{only} constants will eventually lead to either $\top$ or $\bot$. However, practical SAT solvers use additional strategies to further reduce the space of interpretations that need to be considered by the decision procedure.
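These identities translate into a simple bottom-up simplifier. Here is a Python sketch over a tuple encoding of formulas (our own choice for illustration; implication is omitted, as discussed above):

```python
def simplify(F):
    """Bottom-up application of the constant identities:
    not T ~ F,  not F ~ T,  G and T ~ G,  G and F ~ F,
    G or F ~ G,  G or T ~ T.  Atoms are strings; constants are bools."""
    if F is True or F is False or isinstance(F, str):
        return F
    op, args = F[0], [simplify(G) for G in F[1:]]
    if op == 'not':
        return (not args[0]) if isinstance(args[0], bool) else ('not', args[0])
    a, b = args
    if op == 'and':
        if a is False or b is False:
            return False
        if a is True:
            return b
        if b is True:
            return a
        return ('and', a, b)
    if op == 'or':
        if a is True or b is True:
            return True
        if a is False:
            return b
        if b is False:
            return a
        return ('or', a, b)
    raise ValueError(f"unknown operator: {op}")
```

A formula containing only constants reduces all the way to \texttt{True} or \texttt{False}, exactly as the text claims.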

\paragraph{Negation Normal Form and Pure Literal Elimination.}
The term ``normal form'' in this context refers to a class of formulas that have a particular syntactic property. When writing programs to efficiently process formulas, it is common to assume that formulas will be given to the program in a normal form that reduces the number of cases to be considered and admits uniform reasoning techniques. One such example is called negation normal form (NNF).

\begin{definition}[Negation Normal Form (NNF)]
A formula is in negation normal form if negation occurs only over atoms and the only Boolean operators are $\land, \lor$, and $\lnot$.
\end{definition}
For example, $(\lnot p \lor \lnot q) \land r$ is in NNF, but $\lnot(p \land q) \land r$ is not, because a negation occurs over a conjunction. Any propositional formula can be transformed into an equivalent NNF formula in linear time by rewriting implications, applying De Morgan's laws, and eliminating double negations.
\begin{align}
\ausfml \limply \busfml & \lbisubjunct \lnot \ausfml \lor \busfml \\
\lnot (\ausfml \lor \busfml) & \lbisubjunct \lnot \ausfml \land \lnot \busfml \\
\lnot (\ausfml \land \busfml) & \lbisubjunct \lnot \ausfml \lor \lnot \busfml \\
\lnot \lnot \ausfml & \lbisubjunct \ausfml
\end{align}
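The NNF transformation can be sketched as a recursive rewrite. Here is an illustrative Python version over a tuple encoding of formulas (atoms are strings; note that double negations, $\lnot\lnot F \lbisubjunct F$, must also be eliminated):

```python
def nnf(F):
    """Push negations down to atoms; the result uses only and, or, not."""
    if isinstance(F, str):
        return F
    op = F[0]
    if op == 'imp':                      # A -> B  ~  not A or B
        return ('or', nnf(('not', F[1])), nnf(F[2]))
    if op in ('and', 'or'):
        return (op, nnf(F[1]), nnf(F[2]))
    # op == 'not': inspect the negated subformula
    G = F[1]
    if isinstance(G, str):
        return ('not', G)                # negation over an atom: already NNF
    if G[0] == 'not':                    # not not A  ~  A
        return nnf(G[1])
    if G[0] == 'and':                    # De Morgan
        return ('or', nnf(('not', G[1])), nnf(('not', G[2])))
    if G[0] == 'or':                     # De Morgan
        return ('and', nnf(('not', G[1])), nnf(('not', G[2])))
    if G[0] == 'imp':                    # not (A -> B)  ~  A and not B
        return ('and', nnf(G[1]), nnf(('not', G[2])))
    raise ValueError(f"unknown operator: {G[0]}")
```

Each recursive call either strips an operator or pushes a negation one level down, so the transformation runs in linear time.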
When a formula is in NNF, we know that negations will only occur over atoms. We call an atom or its negation a \emph{literal}, and say that a literal is \emph{positive} if it does not have a negation, and that a literal is \emph{negative} if it does have a negation. 

This already allows us to simplify formulas in some cases. For example, consider the formula:
\begin{equation}
(p \lor q \lor \lnot r) \land (\lnot p \lor s) \land (\lnot r \lor (\lnot q \land \lnot s))
\end{equation}
In this formula, the atom $r$ occurs only in negative literals. A bit of thought should convince us that there is no point in assigning the value $\mtrue$ to $r$ in an interpretation -- this would cause both literals containing $r$ to evaluate to $\mfalse$, which can only make the formula harder to satisfy. Assigning $\mfalse$ to $r$, on the other hand, immediately satisfies every literal in which $r$ occurs.

An atom that appears only in positive literals or only in negative literals is called \emph{pure}, and such atoms can always be assigned in the way that makes all of their literals $\mtrue$. Thus, they do not constrain the problem in a meaningful way, and can be assigned without making a real choice. This is called \emph{pure literal elimination}, and is one type of simplification that can be applied to NNF formulas. In the example above, we would have:
\begin{align*}
(p \lor q \lor \lnot r) \land (\lnot p \lor s) \land (\lnot r \lor (\lnot q \land \lnot s)) & \lbisubjunct \\
(p \lor q \lor \lnot \bot) \land (\lnot p \lor s) \land (\lnot \bot \lor (\lnot q \land \lnot s)) & \lbisubjunct \\
(p \lor q \lor \top) \land (\lnot p \lor s) \land (\top \lor (\lnot q \land \lnot s)) & \lbisubjunct \\
\top \land (\lnot p \lor s) \land \top & \lbisubjunct \\
(\lnot p \lor s)
\end{align*}
In practice, pure literal elimination can significantly reduce the complexity of propositional formulas, and so it is sometimes used as a pre-processing simplification before handing the formula to a solver. However, oftentimes there are no pure literals in a formula. Introducing more structure into the normal form can lead to more opportunities.
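Pure literal elimination is easy to sketch over a clause representation. Below is an illustrative Python version (our own encoding: literals are signed integers, and the example writes the formula above with $p,q,r,s$ as $1,2,3,4$, after distributing the last conjunct into the clauses $\lnot r \lor \lnot q$ and $\lnot r \lor \lnot s$):

```python
def pure_literals(clauses):
    """Literals whose atom occurs with only one polarity. A clause is a
    list of signed integers: atom p_i is i, literal not p_i is -i."""
    lits = {l for c in clauses for l in c}
    return {l for l in lits if -l not in lits}

def eliminate_pure(clauses):
    """Assign every pure literal its satisfying value, which satisfies
    (and therefore removes) every clause containing it."""
    pure = pure_literals(clauses)
    return [c for c in clauses if not any(l in pure for l in c)]

# (p or q or not r) and (not p or s) and (not r or not q) and (not r or not s)
F = [[1, 2, -3], [-1, 4], [-3, -2], [-3, -4]]
```

Here $r$ (atom 3) is pure, and eliminating it leaves just the clause $\lnot p \lor s$, as in the derivation above.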

\paragraph{Conjunctive Normal Form and Unit Resolution.}
A further restriction on NNF formulas is called conjunctive normal form (CNF).
\begin{definition}[Conjunctive Normal Form (CNF)]
A formula $F$ is in conjunctive normal form if it is a conjunction of disjunctions of literals, i.e., it has the form:
\[
\bigwedge_i \left(\bigvee_j l_{ij} \right)
\]
where $l_{ij}$ is the $j$th literal in the $i$th \emph{clause} of $F$.
\end{definition}
Every formula can be converted into an equivalent CNF formula by first converting to NNF and then distributing disjunctions over conjunctions, but this may cause the size of the formula to increase exponentially. However, it is possible to transform any propositional formula into an \emph{equisatisfiable} CNF formula in linear time. Two formulas $F$ and $G$ are equisatisfiable when $F$ is satisfiable if and only if $G$ is as well. We will not cover the details of such transformations, but more information is available in Chapter 1 of \cite{Bradley2007}.
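One standard transformation of this kind is Tseitin's encoding, which introduces a fresh atom naming each subformula and adds clauses defining it. The following Python sketch is only an illustration of the idea (our own encoding: atoms are positive integers, clauses are lists of signed integers, and the fresh counter must start above the largest atom index):

```python
from itertools import count

def tseitin(F, clauses, fresh):
    """Return a literal equivalent to formula F, appending its defining
    clauses to `clauses`. Compound formulas are tuples ('not', F),
    ('and', F, G), ('or', F, G); atoms are positive integers."""
    if isinstance(F, int):
        return F
    if F[0] == 'not':
        return -tseitin(F[1], clauses, fresh)
    a = tseitin(F[1], clauses, fresh)
    b = tseitin(F[2], clauses, fresh)
    v = next(fresh)                       # fresh variable naming this node
    if F[0] == 'and':                     # v <-> (a and b)
        clauses += [[-v, a], [-v, b], [v, -a, -b]]
    else:                                 # v <-> (a or b)
        clauses += [[-v, a, b], [v, -a], [v, -b]]
    return v

# (p or q) and not (p and q), with p = 1 and q = 2:
clauses = []
root = tseitin(('and', ('or', 1, 2), ('not', ('and', 1, 2))), clauses, count(3))
clauses.append([root])                    # assert the formula itself
```

Each subformula contributes a constant number of clauses, so the output is linear in the size of the input, at the cost of extra atoms.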

Consider the following CNF formula:
\begin{equation}
\underbrace{(p_1 \lor \lnot p_3 \lor \lnot p_5)}_{C_1} \land
\underbrace{(\lnot p_1 \lor p_2)}_{C_2} \land
\underbrace{(\lnot p_1 \lor \lnot p_3 \lor p_4)}_{C_3} \land
\underbrace{(\lnot p_1 \lor \lnot p_2 \lor p_3)}_{C_4} \land
\underbrace{(\lnot p_4 \lor \lnot p_2)}_{C_5}
\end{equation}
Suppose that \texttt{sat} begins by choosing to assign $p_1$ to $\mtrue$. This leaves us with:
\begin{align*}
&
(p_1 \lor \lnot p_3 \lor \lnot p_5) \land
(\lnot p_1 \lor p_2) \land
(\lnot p_1 \lor \lnot p_3 \lor p_4) \land
(\lnot p_1 \lor \lnot p_2 \lor p_3) \land
(\lnot p_4 \lor \lnot p_2) \\
\lbisubjunct\ &
(\top \lor \lnot p_3 \lor \lnot p_5) \land
(\bot \lor p_2) \land
(\bot \lor \lnot p_3 \lor p_4) \land
(\bot \lor \lnot p_2 \lor p_3) \land
(\lnot p_4 \lor \lnot p_2) \\
\lbisubjunct\ &
\top \land
p_2 \land
(\lnot p_3 \lor p_4) \land
(\lnot p_2 \lor p_3) \land
(\lnot p_4 \lor \lnot p_2) \\
\lbisubjunct\ &
p_2 \land
(\lnot p_3 \lor p_4) \land
(\lnot p_2 \lor p_3) \land
(\lnot p_4 \lor \lnot p_2)
\end{align*}
Notice that the clause $C_2$, which was originally $\lnot p_1 \lor p_2$, is now simply $p_2$. Clearly any satisfying interpretation must assign $\mtrue$ to $p_2$, so there is really no choice to make for this formula. We say that $p_2$ is a \emph{unit literal}, which simply means that it occurs in a clause with no other literals.

We can immediately set $p_2$ to the value that satisfies its literal, and apply equivalences to remove constants from the formula.
\begin{align*}
&
\top \land
(\lnot p_3 \lor p_4) \land
(\lnot \top \lor p_3) \land
(\lnot p_4 \lor \lnot \top) \\
\lbisubjunct\ &
(\lnot p_3 \lor p_4) \land
(\bot \lor p_3) \land
(\lnot p_4 \lor \bot) \\
\lbisubjunct\ &
(\lnot p_3 \lor p_4) \land
p_3 \land
\lnot p_4
\end{align*}
After simplifying, we again have two unit literals $p_3$ and $\lnot p_4$. We can continue by picking $p_3$, assigning it a satisfying value, and simplifying.
\begin{align*}
& 
(\lnot \top \lor p_4) \land
\top \land
\lnot p_4 \\
\lbisubjunct\ & 
(\bot \lor p_4) \land
\lnot p_4 \\
\lbisubjunct\ & 
p_4 \land
\lnot p_4
\end{align*}
Now all clauses are unit, and it is clear that if we assign $p_1$ to $\mtrue$ then the resulting formula is not satisfiable. Notice that once we assigned $p_1$ to $\mtrue$, we were able to determine that the resulting formula was unsatisfiable without making any further decisions: all of the subsequent simplifications were logical consequences of this original choice. The process of carrying this reasoning to its conclusion is called \emph{Boolean constraint propagation} (BCP), or simply \emph{unit propagation}.
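The propagation we just performed by hand can be sketched in Python over a clause representation (our own encoding: clauses are lists of signed integers, with $p_i$ written as $i$):

```python
def assign(clauses, lit):
    """Make lit true: drop clauses satisfied by lit, and delete the
    now-false literal -lit from the remaining clauses."""
    return [[l for l in c if l != -lit] for c in clauses if lit not in c]

def bcp(clauses):
    """Boolean constraint propagation: repeatedly satisfy unit clauses.
    Returns the simplified clauses and the list of implied literals; an
    empty clause in the result signals a conflict."""
    implied = []
    while True:
        units = [c[0] for c in clauses if len(c) == 1]
        if not units or [] in clauses:
            return clauses, implied
        implied.append(units[0])
        clauses = assign(clauses, units[0])

# The example CNF; deciding p1 makes C2 the unit clause p2, and
# propagation then cascades down to the conflict p4 and not p4.
F = [[1, -3, -5], [-1, 2], [-1, -3, 4], [-1, -2, 3], [-4, -2]]
```

Running \texttt{bcp} on the formula after deciding $p_1$ implies $p_2$, $p_3$, and $p_4$ in turn, and leaves an empty clause behind: the conflict we derived above.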

\section{DPLL}
BCP allowed us to conclude that the remaining formula, which originally had five variables, was unsatisfiable with just one recursive call instead of the $2^5$ that would have been necessary in our original naive implementation. This is a big improvement! Let's add it to our decision procedure and have a look at the consequences. 

The natural place to insert this optimization is at the beginning of the procedure, before \texttt{F} is further inspected and any choices are made. This ensures that if we are given a formula that is already reducible to a constant through BCP, then we won't do any unnecessary work deciding values that don't matter. The resulting procedure is called the Davis-Putnam-Logemann-Loveland or DPLL procedure, as it was introduced by Martin Davis, Hilary Putnam, George Logemann, and Donald Loveland in the 1960s~\cite{Davis1960,Davis1962}.

\begin{minipage}{\linewidth}
\begin{lstlisting}[escapeinside={<*}{*>}]
let rec dpll (F:formula) : bool =
	let Fp = BCP F in
	if Fp = true then true
	else if Fp = false then false
	else begin
		let p = choose_atom(Fp) in
		let Ft = (subst Fp p true) in
		let Ff = (subst Fp p false) in
		dpll Ft || dpll Ff
	end
\end{lstlisting}
\end{minipage}
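For concreteness, here is a runnable Python sketch of the same procedure over clause lists (our own encoding: literals are signed integers; \texttt{BCP} becomes a loop, and \texttt{choose\_atom} simply takes the first atom it finds):

```python
def assign(clauses, lit):
    """Make lit true: drop satisfied clauses, delete the falsified -lit."""
    return [[l for l in c if l != -lit] for c in clauses if lit not in c]

def dpll(clauses):
    """DPLL over clauses given as lists of signed integers."""
    # BCP: satisfy unit clauses until none remain or a conflict appears
    while any(len(c) == 1 for c in clauses) and [] not in clauses:
        unit = next(c[0] for c in clauses if len(c) == 1)
        clauses = assign(clauses, unit)
    if not clauses:
        return True                  # every clause satisfied
    if [] in clauses:
        return False                 # conflict: some clause falsified
    p = abs(clauses[0][0])           # choose_atom: first atom we find
    return dpll(assign(clauses, p)) or dpll(assign(clauses, -p))
```

On the five-variable example from the previous section, the branch that decides $p_1$ fails purely by propagation, and the solver immediately tries $\lnot p_1$ instead.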

\noindent
Remarkably, although DPLL was introduced over 50 years ago, it still forms the basis of most modern SAT solvers. Much has changed since the 1960s, however, and the scale of SAT problems encountered in practice has increased dramatically. It is not uncommon to encounter instances with millions of atomic propositions and hundreds of thousands of clauses, and in practice it is often feasible to solve such instances.

Using an implementation that resembles the one above for such problems would not yield good results in practice. One immediate problem is that the formula is copied and rewritten with each recursive call. While this makes it easy to keep track of which variables have already been assigned or implied via propagation, even across backtracking, it is extremely slow and cumbersome.

Modern solvers address this by using imperative loops rather than recursive calls, and mutating an interpretation rather than the formula itself. The interpretation remains \emph{partial} throughout most of the execution, which means that parts of the formula cannot be evaluated fully to a constant, but are instead \emph{unresolved}.

\begin{definition}[Status of a clause under partial interpretation]
Given a partial interpretation $I$, a clause is:
\begin{itemize}
\item Satisfied, if one or more of its literals is satisfied
\item Conflicting, if all of its literals are assigned but none is satisfied
\item Unit, if it is not satisfied and all but one of its literals are assigned
\item Unresolved, otherwise
\end{itemize}
\end{definition}

\noindent
For example, given the partial interpretation $I = \{p_1, \lnot p_2, p_4\}$:
\begin{description}
\item[\ \ \ \ \ \ $(p_1 \lor p_3 \lor \lnot p_4)$] is satisfied
\item[\ \ \ \ \ \ $(\lnot p_1 \lor p_2)$] is conflicting
\item[\ \ \ \ \ \ $(p_2 \lor \lnot p_4 \lor p_3)$] is unit
\item[\ \ \ \ \ \ $(\lnot p_1 \lor p_3 \lor p_5)$] is unresolved
\end{description}
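These four cases can be checked mechanically. A Python sketch (our own encoding: literals are signed integers, and the partial interpretation is the set of assigned literals):

```python
def status(clause, I):
    """Classify a clause under a partial interpretation I, given as the
    set of assigned literals (signed integers). A literal l is satisfied
    iff l is in I, and falsified iff -l is in I."""
    if any(l in I for l in clause):
        return 'satisfied'
    unassigned = [l for l in clause if -l not in I]
    if not unassigned:
        return 'conflicting'
    return 'unit' if len(unassigned) == 1 else 'unresolved'

I = {1, -2, 4}     # the partial interpretation p1, not p2, p4
```

Applied to the four clauses above (with $p_i$ as $i$), this reproduces exactly the classifications listed.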
As we discussed earlier, when a clause $C$ is unit under partial interpretation $I$, $I$ must be extended so that $C$'s unassigned literal $\ell$ is satisfied. There is no need to backtrack on $\ell$ until the assignments in $I$ that made $C$ unit have changed, because $\ell$'s value was implied by those assignments. Rather, backtracking can safely proceed to the \emph{most recent decision}, erasing any assignments that arose from unit propagation in the meantime. Implementing this backtracking optimization correctly is essential to an efficient SAT solver, as it is what allows DPLL to avoid explicitly enumerating large portions of the search space in practice.

\paragraph{Learning conflict clauses.}

Consider the following CNF:
\[
\underbrace{(\lnot p_1 \lor p_2)}_{C_1} \land
\underbrace{(\lnot p_3 \lor p_4)}_{C_2} \land
\underbrace{(\lnot p_6 \lor \lnot p_5 \lor \lnot p_2)}_{C_3} \land
\underbrace{(\lnot p_5 \lor p_6)}_{C_4} \land
\underbrace{(p_5 \lor p_7)}_{C_5} \land
\underbrace{(\lnot p_1 \lor p_5 \lor \lnot p_7)}_{C_6}
\]
And suppose we make the following decisions and propagations.
\begin{enumerate}
\item Decide $p_1$:
\[
\underbrace{(\bot \lor p_2)}_{C_1} \land
\underbrace{(\lnot p_3 \lor p_4)}_{C_2} \land
\underbrace{(\lnot p_6 \lor \lnot p_5 \lor \lnot p_2)}_{C_3} \land
\underbrace{(\lnot p_5 \lor p_6)}_{C_4} \land
\underbrace{(p_5 \lor p_7)}_{C_5} \land
\underbrace{(\bot \lor p_5 \lor \lnot p_7)}_{C_6}
\]
\item Propagate $p_2$ from clause $C_1$
\[
\underbrace{(\bot \lor \top)}_{C_1} \land
\underbrace{(\lnot p_3 \lor p_4)}_{C_2} \land
\underbrace{(\lnot p_6 \lor \lnot p_5 \lor \bot)}_{C_3} \land
\underbrace{(\lnot p_5 \lor p_6)}_{C_4} \land
\underbrace{(p_5 \lor p_7)}_{C_5} \land
\underbrace{(\bot \lor p_5 \lor \lnot p_7)}_{C_6}
\]

\item Decide $p_3$
\[
\underbrace{(\bot \lor \top)}_{C_1} \land
\underbrace{(\bot \lor p_4)}_{C_2} \land
\underbrace{(\lnot p_6 \lor \lnot p_5 \lor \bot)}_{C_3} \land
\underbrace{(\lnot p_5 \lor p_6)}_{C_4} \land
\underbrace{(p_5 \lor p_7)}_{C_5} \land
\underbrace{(\bot \lor p_5 \lor \lnot p_7)}_{C_6}
\]
\item Propagate $p_4$ from clause $C_2$
\[
\underbrace{(\bot \lor \top)}_{C_1} \land
\underbrace{(\bot \lor \top)}_{C_2} \land
\underbrace{(\lnot p_6 \lor \lnot p_5 \lor \bot)}_{C_3} \land
\underbrace{(\lnot p_5 \lor p_6)}_{C_4} \land
\underbrace{(p_5 \lor p_7)}_{C_5} \land
\underbrace{(\bot \lor p_5 \lor \lnot p_7)}_{C_6}
\]
\item Decide $p_5$
\[
\underbrace{(\bot \lor \top)}_{C_1} \land
\underbrace{(\bot \lor \top)}_{C_2} \land
\underbrace{(\lnot p_6 \lor \bot \lor \bot)}_{C_3} \land
\underbrace{(\bot \lor p_6)}_{C_4} \land
\underbrace{(\top \lor p_7)}_{C_5} \land
\underbrace{(\bot \lor \top \lor \lnot p_7)}_{C_6}
\]

\item Propagate $p_6$ from clause $C_4$
\[
\underbrace{(\bot \lor \top)}_{C_1} \land
\underbrace{(\bot \lor \top)}_{C_2} \land
\underbrace{(\bot \lor \bot \lor \bot)}_{C_3} \land
\underbrace{(\bot \lor \top)}_{C_4} \land
\underbrace{(\top \lor p_7)}_{C_5} \land
\underbrace{(\bot \lor \top \lor \lnot p_7)}_{C_6}
\]

\item Conflicted clause $C_3$
\end{enumerate}
At this point $C_3$ is conflicted. We should take a moment to reflect on how we got here. We know that some subset of the decisions contributed to a partial assignment that led to conflict, but which ones? 

Tracing backwards, the implied literal $p_6$ was chronologically the most direct culprit, as it was the last assignment to fall before the conflict in $C_3$. It was a consequence of our decision to set $p_5$ and the presence of $C_4$, so we could blame this decision, backtrack to that point, and change it. However, $C_3$ would not have been conflicting, even with $p_5$ and $p_6$, if not for $p_2$. Looking back at the trace, $p_2$ was a consequence of our decision to set $p_1$ and the fact that $C_1$ is in our formula.

Thus, we learn from this outcome that $\lnot p_1 \lor \lnot p_5$ is logically entailed by our original CNF. The process that we used to arrive at this clause is called \emph{resolution}, and corresponds to repeated application of the binary resolution rule.
\[
\cinferenceRule[res|res]{res}
{\linferenceRule[sequent]
  {\lsequent[L]{}{\asfml}
  	& \lsequent[L]{}{\lnot\asfml}}
  {\lsequent[L]{}{}}
}{}%
\]
Indeed, we can use this to quickly derive a proof that the clauses in our formula imply $\lnot p_1 \lor \lnot p_5$. We get this by applying \irref{res} to the clauses involved in the conflict: first $C_3$ and $C_4$, and then $C_1$.

In the following, let $F$ be our original formula.
\begin{sequentdeduction}[array]
\linfer[res] {
	\linfer[res] {
		\linfer[orl] {
			\vdots
		} {
			\lsequent{F}{\lnot p_6, \lnot p_5, \lnot p_2}
		}
		!\linfer[orl] {
			\vdots
		} {
			\lsequent{F}{\lnot p_5, p_6}
		}
	} {
		\lsequent{F}{\lnot p_5, \lnot p_2}
	}
	!\linfer[orl] {
		\vdots
	} {
		\lsequent{F}{\lnot p_1, p_2}
	}
} {
	\lsequent{F}{\lnot p_1, \lnot p_5}
}
\end{sequentdeduction}
In the above proof, we did not close out any branches, but they can be seen to follow immediately from (going left to right) the presence of $C_3$, $C_4$, and $C_1$ in $F$.
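The resolution steps themselves are mechanical. A Python sketch of binary resolution on clauses of signed-integer literals (our own encoding), replaying the derivation above:

```python
def resolve(c1, c2, lit):
    """Binary resolution: c1 must contain lit and c2 its negation; the
    resolvent keeps all remaining literals of both clauses."""
    assert lit in c1 and -lit in c2
    return sorted({l for l in c1 if l != lit} | {l for l in c2 if l != -lit})

# The running example, with p_i as i: resolving C4 with the conflicting
# clause C3 on p6, then C1 with the result on p2, derives not p1 or not p5.
C1, C3, C4 = [-1, 2], [-6, -5, -2], [-5, 6]
step1 = resolve(C4, C3, 6)                 # not p5 or not p2
conflict_clause = resolve(C1, step1, 2)    # not p1 or not p5
```

The intermediate resolvent $\lnot p_5 \lor \lnot p_2$ and the final clause $\lnot p_1 \lor \lnot p_5$ match the two \irref{res} steps in the proof.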

Clauses derived in this way are called \emph{conflict clauses}, and they are useful in pruning the search space. In the current example, suppose that we added the conflict clause $\lnot p_1 \lor \lnot p_5$ to our set. Then any partial interpretation with $p_1$ makes this clause unit, implying the assignment $\lnot p_5$. 
\begin{enumerate}
\setcounter{enumi}{4}
\item Backtrack to $p_5$
\item Learn clause $C_7 \lbisubjunct \lnot p_1 \lor \lnot p_5$
\item Propagate $\lnot p_5$ from clause $C_7$
\item ...
\end{enumerate}
Without this clause, if we eventually backtrack past $p_5$ to change the assignment of $p_3$, then when the procedure revisits $p_5$ it would attempt both assignments $p_5$ and $\lnot p_5$. Because $p_1$ has not changed, and we have now proved that $F \limply (\lnot p_1 \lor \lnot p_5)$, this would lead to the very same conflict again.

To summarize, the procedure for finding a conflict clause under partial assignment $I$ is as follows.
\begin{enumerate}
\item Let $C$ be a conflicting clause under $I$
\item While $C$ contains implied literals, do:
\item \ \ \ \ Let $\ell$ be the most recent implied literal in $C$
\item \ \ \ \ Let $C'$ be the clause that implied $\ell$ by unit propagation
\item \ \ \ \ Update $C$ by applying resolution to $C$ and $C'$ on $\ell$
\end{enumerate}
This procedure terminates when all of the literals in $C$ correspond to decisions made by \texttt{dpll}. However, the conflict clause produced in this way is by no means the only sound or useful such clause that can be derived. The most efficient way to find others is to construct an \emph{implication graph}.

\begin{definition}[Implication graph]
An implication graph for partial assignment $I$ is a directed acyclic graph with vertices $V$ and edges $E$, where:
\begin{itemize}
\item Each literal $\ell_i$ in $I$ corresponds to a vertex $v_i \in V$.
\item Each edge $(v_i, v_j) \in E$ corresponds to an implication brought about by unit propagation. That is, if $\ell_j$ appears in $I$ because of a unit propagation, and $\ell_i$ appears in the corresponding unit clause that brought about this propagation, then $(v_i, v_j) \in E$.
\item $V$ contains a special \emph{conflict vertex} $\Lambda$, which has only incoming edges $\{(v_i, \Lambda) \mid \lnot\ell_i \in C\}$, one from each vertex whose literal falsifies a literal of the conflicting clause $C$.
\end{itemize}
\end{definition}
The implication graph is a data structure maintained by many efficient implementations of DPLL. As assignments are added to a partial interpretation, the graph is updated with new nodes and edges to keep track of the relationship between decisions and their implied consequences. Likewise, nodes and edges are removed to account for backtracking.

The implication graph for our running example is shown below.

\begin{center}
\tikzstyle{vertex}=[circle,draw,fill=none,minimum size=3em,inner sep=0pt]
\tikzstyle{edge} = [draw,thick,->]
\tikzstyle{weight} = [font=\small]

\begin{tikzpicture}%[scale=1.8, auto,swap]
    \node[vertex] (P1) at (0,0) {$p_1@1$};
    \node[vertex] (P2) at (3,0) {$p_2@1$};
    \node[vertex] (P3) at (7,1) {$p_3@2$};
    \node[vertex] (P4) at (10,1) {$p_4@2$};
    \node[vertex] (P5) at (0,1.5) {$p_5@3$};
    \node[vertex] (P6) at (3,2) {$p_6@3$};
    \node[vertex] (C) at (5,1) {$\Lambda$};

    \path[edge] (P1) -- node[below] {$C_1$} (P2);
    \path[edge] (P3) -- node[below] {$C_2$} (P4);
    \path[edge] (P5) -- node[above] {$C_4$} (P6);
    \path[edge] (P5) -- node[below] {$C_3$} (C);
    \path[edge] (P6) -- node[above] {$C_3$} (C);
    \path[edge] (P2) -- node[below] {$C_3$} (C);
\end{tikzpicture}
\end{center}

The three decisions we made correspond to roots of the graph, and implications are internal nodes. We also keep track of at which \emph{decision level} each vertex appeared, with the $@$ notation. Recall that we began (decision level 1) by deciding $p_1$, which implied $p_2$ by unit propagation. The responsible clause, in this case $C_1$, labels the edge that reflects this implication.

Visually, the implication graph makes the relevant facts quite obvious. First, notice the subgraph containing vertices $p_3@2$ and $p_4@2$. The decision to assign $p_3$ ended up being irrelevant to the eventual conflict in $C_3$, and this is reflected in the fact that the subgraph is disconnected from the conflict node. When analyzing a conflict, we can simply ignore subgraphs disconnected from the conflict node.

Focusing only on the subgraph connected to the conflict node, the correspondence between the roots and the conflict clause we obtained via resolution, $\lnot p_1 \lor \lnot p_5$, is immediate. This is not an accident, and in fact is the entire reason for building an implication graph in the first place. We can use this data structure to generalize on the resolution-based procedure outlined above by identifying \emph{separating cuts} in the implication graph.

\begin{definition}[Separating cut]
A separating cut in an implication graph is a minimal set of edges whose removal breaks all paths from the roots to the conflict node.
\end{definition}

The separating cut partitions the implication graph into two sides, which we can think of as the ``reason'' side and the ``conflict'' side. Importantly, any set of vertices on the ``reason'' side with at least one edge to a vertex on the ``conflict'' side corresponds to a sufficient condition for the conflict. We obtain a conflict clause by negating the literals that correspond to these vertices. In the example from earlier, the cut given by the edges highlighted in red below corresponds to our conflict clause.

\begin{center}
\tikzstyle{vertex}=[circle,draw,fill=none,minimum size=3em,inner sep=0pt]
\tikzstyle{edge} = [draw,thick,->]
\tikzstyle{weight} = [font=\small]

\begin{tikzpicture}%[scale=1.8, auto,swap]
    \node[vertex] (P1) at (0,0) {$p_1@1$};
    \node[vertex] (P2) at (3,0) {$p_2@1$};
    \node[vertex] (P5) at (0,1.5) {$p_5@3$};
    \node[vertex] (P6) at (3,2) {$p_6@3$};
    \node[vertex] (C) at (5,1) {$\Lambda$};

    \path[edge,red] (P1) -- node[below] {$C_1$} (P2);
    \path[edge,red] (P5) -- node[above] {$C_4$} (P6);
    \path[edge,red] (P5) -- node[below] {$C_3$} (C);
    \path[edge] (P6) -- node[above] {$C_3$} (C);
    \path[edge] (P2) -- node[below] {$C_3$} (C);
\end{tikzpicture}
\end{center}
However, we could have just as well chosen the following, which would have led to the clause $\lnot p_5 \lor \lnot p_2$.
\begin{center}
\tikzstyle{vertex}=[circle,draw,fill=none,minimum size=3em,inner sep=0pt]
\tikzstyle{edge} = [draw,thick,->]
\tikzstyle{weight} = [font=\small]

\begin{tikzpicture}%[scale=1.8, auto,swap]
    \node[vertex] (P1) at (0,0) {$p_1@1$};
    \node[vertex] (P2) at (3,0) {$p_2@1$};
    \node[vertex] (P5) at (0,1.5) {$p_5@3$};
    \node[vertex] (P6) at (3,2) {$p_6@3$};
    \node[vertex] (C) at (5,1) {$\Lambda$};

    \path[edge] (P1) -- node[below] {$C_1$} (P2);
    \path[edge,red] (P5) -- node[above] {$C_4$} (P6);
    \path[edge,red] (P5) -- node[below] {$C_3$} (C);
    \path[edge] (P6) -- node[above] {$C_3$} (C);
    \path[edge,red] (P2) -- node[below] {$C_3$} (C);
\end{tikzpicture}
\end{center}
Any conflict clause corresponding to such a cut is derivable using the resolution rule, and is therefore safe to add to the clause set. Different procedures have various ways of selecting cuts. Some compute several cuts, aggressively adding multiple conflict clauses to further constrain the search. Most modern solvers aim to find a single effective cut that corresponds to an \emph{asserting clause}, which forces an implication immediately after backtracking. Because SAT is a hard problem, these are heuristic choices that may or may not improve performance on different classes of instances; since any cut yields a sound clause, the choice is best validated empirically against the problems that arise in practice.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\section{Summary}

\begin{itemize}
\item A \textbf{decision procedure} is an algorithm that, given a decision problem, terminates with a correct yes/no answer. In this lecture we presented decision procedures for propositional logic, known as SAT solvers.

\item Duality between \textbf{satisfiability} and \textbf{validity}: $F$ is valid iff $\lnot F$ is unsatisfiable.

\item To perform efficient formula simplification, we must have our formula in a \textbf{normal form} such as \textbf{negation normal form (NNF)} or \textbf{conjunctive normal form (CNF)}.

\item \textbf{Boolean constraint propagation (BCP)} (also known as unit propagation) is the process of applying \textbf{unit resolution} until a fixed point is reached.

\item \textbf{DPLL} is the basis of most modern SAT solvers and performs BCP at each branching step.

\item \textbf{Status of a clause}: satisfied, conflicting, unit, or unresolved.

\item Learned clauses can be derived by using the \textbf{resolution} rule.

\item In practice, learned clauses are derived from the \textbf{implication graph} of the current interpretation. 
\end{itemize}

\bibliography{platzer,bibliography}
\end{document}