In order to handle preferences we need to be able to express preference information explicitly. Since we want to do this *in* the logical language, we have to extend the language. We do this in two respects:

- we use a set of rule names **N** together with a naming function **name** to be able to refer to particular rules,
- we use a special (infix) symbol that can take rule names as arguments to represent preferences among rules.

A prioritized logic program is a pair where **R** is a set of
rules and **name** is a naming function. To make sure that the preference symbol has its intended
meaning, i.e., represents a transitive and anti-symmetric relation, we assume
that **R** contains all ground instances of the
schemata

and

where the variables are parameters for names. Note that in our examples we won't mention these rules explicitly.
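Since the preference symbol and the name parameters are not reproduced here, the following is only a plausible rendering: writing the preference symbol as $\prec$ and the name parameters as $N_1, N_2, N_3$, the transitivity and anti-symmetry schemata would read:

```latex
% transitivity schema (assumed notation: \prec for the preference symbol)
N_1 \prec N_3 \leftarrow N_1 \prec N_2,\ N_2 \prec N_3
% anti-symmetry schema
\neg (N_2 \prec N_1) \leftarrow N_1 \prec N_2
```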

The function **name** is a partial injective naming function that assigns a name to some of
the rules in **R**. Note that not every rule necessarily has
a name. The reason is that names will only play a role in conflict resolution
among defeasible rules, i.e., rules with weakly negated preconditions. For this reason names for strict rules, i.e., rules in which weak negation does not appear, won't be needed.
A technical advantage of leaving some rules unnamed is that the use of rule
schemata with parameters for rule names does not necessarily make programs
infinite. If we required names for all rules, we would have to use a parameterized name for each schema and would thus end up with an infinite set **N** of names.
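As an illustration of such a partial injective naming function, here is a minimal Python sketch; the representation of rules and names is our own assumption, not taken from the paper:

```python
class Naming:
    """A partial injective naming function: some rules get a name,
    and no name is shared by two different rules."""

    def __init__(self):
        self._name_of = {}  # rule -> name (partial: unnamed rules are absent)
        self._rule_of = {}  # name -> rule (kept to enforce injectivity)

    def assign(self, rule, name):
        # injectivity: refuse to reuse a name for a different rule
        if self._rule_of.get(name, rule) != rule:
            raise ValueError(f"name {name!r} is already assigned to another rule")
        self._name_of[rule] = name
        self._rule_of[name] = rule

    def name(self, rule):
        return self._name_of.get(rule)  # None for unnamed (e.g. strict) rules
```

Strict rules simply never receive an entry, mirroring the partiality of **name**.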

In our examples we assume that **N** is given implicitly. We also define the function **name** implicitly. We write:

to express that .

For convenience we will simply speak of programs instead of prioritized logic programs whenever this does not lead to misunderstandings.

Before introducing new definitions we would like to point out how we want the new explicit preference information to be used. Our approach follows two principles:

1. we want to extend well-founded semantics, i.e., every -conclusion should remain a conclusion in the prioritized approach,
2. we want to use preferences to solve conflicts whenever this is possible without violating principle 1.

There is a type-I conflict between and . Although the explicit preference information gives precedence to , we want to apply here to comply with the first of our two principles. Technically, this means that we can apply a preferred rule

The following two rules exhibit a different type of conflict:

The heads of these rules are not complementary. However, the application of one rule defeats the other, and vice versa. We call this a direct type-II conflict. Of course, in the general case the defeat of the conflicting rule may be indirect, i.e., based on the existence of additional rules. We say and are type-II conflicting wrt. a set of rules iff:

- neither defeats nor ,
- defeats , and
- defeats
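The three conditions above can be sketched in Python. We assume an encoding of our own (not the paper's): a rule is a triple `(head, body, nbody)` of a head literal, positive body literals, and weakly negated body literals; `closure` applies the monotonic counterparts (weak negation dropped), and a set of rules defeats a rule if it derives a literal that is weakly negated in that rule's body:

```python
def closure(rules):
    """Forward chaining over the monotonic counterparts of the rules
    (weakly negated preconditions are simply ignored)."""
    derived, changed = set(), True
    while changed:
        changed = False
        for head, body, _nbody in rules:
            if head not in derived and set(body) <= derived:
                derived.add(head)
                changed = True
    return derived

def defeats(rules, rule):
    """A rule is defeated if some weakly negated precondition is derivable."""
    _head, _body, nbody = rule
    return any(lit in closure(rules) for lit in nbody)

def type2_conflict(r1, r2, rules):
    """r1 and r2 are type-II conflicting wrt. rules: the rules alone
    defeat neither, but adding either one of the two defeats the other."""
    return (not defeats(rules, r1) and not defeats(rules, r2)
            and defeats(rules + [r1], r2)
            and defeats(rules + [r2], r1))
```

For the direct conflict above, take `r1 = ("a", (), ("b",))` and `r2 = ("b", (), ("a",))`: each rule, once applied, derives the literal that defeats the other, so `type2_conflict(r1, r2, [])` holds with an empty set of additional rules.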

Now we have a type-II conflict between and (more precisely, a direct type-II and a type-I conflict) that is not solvable by the implicit mechanisms of well-founded semantics alone. It is this kind of conflict that we try to solve by the explicit preference information. In our example will be used to derive . Note that now the application of defeats and there is no danger that a literal defeating might become derivable later. Generally, a type-II conflict between and (wrt. some undefeated rules of the program) will be solved in favour of the preferred rule, say , only if applying excludes any further possibility of deriving an -defeating literal.

Note that every type-I conflict can be turned into a direct type-II conflict by a (non-equivalent!) re-representation of the rules: if each conflicting rule **r** is replaced by its seminormal form, then all conflicts become type-II conflicts and are thus amenable to conflict resolution through preference information.
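The seminormal-form transformation can be sketched as follows (encoding assumed as before: a rule is `(head, body, nbody)`, and classical negation is written with a `~` prefix, our own convention). It adds the complement of the head to the weakly negated preconditions, so that deriving the complementary literal defeats the transformed rule:

```python
def complement(lit):
    """~a for a, and a for ~a (classical negation, '~' prefix assumed)."""
    return lit[1:] if lit.startswith("~") else "~" + lit

def seminormal(rule):
    """Seminormal form: the rule additionally requires that the
    complement of its head is not derivable."""
    head, body, nbody = rule
    extra = complement(head)
    return (head, body, nbody if extra in nbody else nbody + (extra,))
```

Two rules with complementary heads, say `("a", (), ())` and `("~a", (), ())`, become `("a", (), ("~a",))` and `("~a", (), ("a",))`: applying either now defeats the other, i.e., a direct type-II conflict.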

After this motivating discussion let us present the new definitions. Our treatment of priorities is based on a weakening of the notion of **X**-safeness. In Sect. 2 we considered a rule **r** as **X**-safe whenever there is no proof
for a literal defeating **r** from the monotonic counterparts of **X**-undefeated rules. Now in the context of a prioritized logic program we will consider a rule **r** as **X**-safe if there is no such proof
from the monotonic counterparts *of a certain subset* of the **X**-undefeated rules. The subset to be used depends on the rule **r** and consists of those rules that are not ``dominated'' by **r**. Intuitively, is dominated by **r** iff it is (1) known to be less preferred than **r** and (2) defeated when **r** is applied together with the rules that have already been established to be **X**-safe. Condition (2) is necessary to make sure that explicit preference information is used in the right way, according to our discussion of .

It is obvious that whenever there is no proof
for a defeating literal from all **X**-undefeated rules, there can be no such proof
from a subset of these rules. Rules that were **X**-safe according to our earlier definition thus remain **X**-safe. Here are the precise definitions:

Note that is monotonic in both **X** and **Y**. We can now define the **X**-safe rules inductively:

Note that **X**-safeness is obviously monotonic in **X**. Based on this notion we introduce a new monotonic operator :

As before we define the (prioritized) well-founded conclusions of **P**, denoted , as the least fixpoint of . If a program does not contain preference information at all, i.e., if the symbol does not appear in **R**, the new semantics coincides with since in that case no rule can dominate another rule. In the general case,
since the new definition of **X**-safeness is weaker than the one used earlier in Sect. 2, we may have more **X**-safe rules and for this reason obtain more conclusions than via . The following result is thus obvious:

From this and the monotonicity of both operators it follows immediately that .
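Since both operators are monotonic, their least fixpoints exist and can be computed by iteration from the empty set. A generic Python sketch of this computation (the toy operator below is only a stand-in for the paper's operator, which is not reproduced here):

```python
def lfp(op, bottom=frozenset()):
    """Least fixpoint of a monotone operator, computed by iteration
    from bottom; terminates whenever the underlying domain is finite."""
    x = bottom
    while True:
        y = op(x)
        if y == x:
            return x
        x = y

# toy monotone operator: always derives 0, and n+1 for every derived n < 3
toy = lambda x: set(x) | {0} | {n + 1 for n in x if n < 3}
```

For a finite ground program the iteration reaches the fixpoint after finitely many steps, since each step can only add conclusions.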

Well-founded semantics has sometimes been criticized for being too weak and missing intended conclusions. The proposition shows that we can strengthen the obtained results by adding adequate preference information.

As a first simple example let us consider the following program :

We first apply to the empty set. Besides the instances of the transitivity and anti-symmetry schemata that we implicitly assume, only is in . We thus obtain

We next apply to . Since we have . since does not defeat and we obtain

Further iteration of yields no new literals, i.e. is the least fixpoint. Note that **c** is not a conclusion under the original well-founded semantics.

We next show that the programs and discussed earlier are handled as intended. Here is :

Since does not defeat , this rule is safe from the beginning, i.e., . yields


which is also the least fixpoint. The explicit preference does not interfere with the implicit one, as intended.

The situation changes in where the first rule in is replaced by

The new rule is not in since it is defeated by the consequence of and is not dominated by . yields

Now since dominates wrt. and the empty set of rules. We thus conclude as intended. The least fixpoint is

In [4] we used an example to illustrate the possible non-existence of extensions in our earlier approach. This example involved two normal defaults each of which had the conclusion that the other one is to be preferred. The prioritized logic programming representation of this example is the following:

It is straightforward to verify that the set of well-founded conclusions for this example is empty.

Thu Feb 8 10:26:15 MET 1996