--- An outline of the forward focusing method for the BI logic ---

Notation:

  G         bunch
  0m        multiplicative zero
  G,G       multiplicative combination
  0a        additive zero
  G;G       additive combination
  A,B,C     propositions
  A -> B    intuitionistic implication
  A /\ B    intuitionistic conjunction
  A \/ B    intuitionistic disjunction
  A -* B    multiplicative implication
  A * B     multiplicative conjunction
  True      unit for /\
  False     unit for \/
  One       unit for *

We represent a bunch as follows. This is not necessary to formulate BI, but it is an integral part of our forward focusing method.

  G := A
     | [G; ...; G]    # of elements <> 1 (if so, remove [])
     | (G, ..., G)    # of elements <> 1 (if so, remove ())

Then 0a and 0m are represented by [] and (), respectively.

Suppose that we want to prove a proposition A. It suffices to prove:

  0m |- A

We first apply all possible asynchronous rules until the bunch on the left side and the formula on the right side contain no asynchronous connectives.

  Left asynchronous:  A /\ B, A * B, A \/ B, True, False, One(?)
  Right asynchronous: A /\ B, A -> B, A -* B, True, One(?)

We end up with a collection of sequents G |- A, each of which must be proven independently. Let's write G => A for a 'stable sequent', i.e., one in which no asynchronous connectives occur.

G has a tree structure, and we think of a sub-bunch in it as a 'node'. The rules in the sequent calculus for BI (and also in its natural deduction form) manipulate the entire bunch on the left side. Therefore, if we are to implement these rules directly, we must be able to maintain/compare/create/... bunches as trees, which seems to be difficult, not to mention that it is likely to be extremely inefficient.

Our observation is that we can work on each individual node in a given bunch. Specifically, by allowing sequents obtained in a node to be promoted to its parent node, we can generate rules analogous to those in intuitionistic logic.

As an example, consider a bunch which contains:

        |
        |
  [ A; A -> B ]

Let's mark each node with a label:

  l':
        |
        |
   l:[ A; A -> B ]

When we work on the node l, we focus only on A and A -> B, and do not think about their interaction with the rest of the tree, which would be necessary in the standard formulation of BI.

In the above example, we obtain the following two rules by focusing on A and A -> B:

  1) A => A

  2)       G => A                                   B => B    G => A
      ----------------, which is the same as      --------------------
       G; A -> B => B                                G; A -> B => B

  (Side note: in this particular case (where B is an atomic formula), starting
  with G'{G; A -> B} leads to the constraint that G' is an empty bunch. In
  general, however, this is not a safe assumption, since B may not be atomic.)

Now we can apply these rules to generate sequents:

  A => A
  A; A -> B => B

Note that while we are generating these sequents, we concentrate on the node l and nothing else. This is the main difference between our proposed method and one that follows the sequent calculus for BI directly.

Now we can 'promote' these sequents to the parent node, namely l'. Since l is an additive bunch, whatever is proved in it can be promoted to its parent l'. In other words, if we can prove G => C in the node l, we can conclude l => C in the parent node l'. It is critical to replace G by the label l when we promote the sequent. An easy way to understand this is to think as follows: the parent node l' doesn't care which bunch is actually used to prove C in its child node l; therefore, if G => C is proven in node l, it is as good as proving l => C from the viewpoint of node l'.
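As a concrete illustration of the bunch representation above, here is a minimal sketch in OCaml. The type and constructor names (prop, bunch, Leaf, Add, Mul, ...) are our own choices for this note, not part of any existing implementation.

  type prop =
    | Atom  of string
    | Imp   of prop * prop    (* A -> B : intuitionistic implication *)
    | Conj  of prop * prop    (* A /\ B : intuitionistic conjunction *)
    | Disj  of prop * prop    (* A \/ B : intuitionistic disjunction *)
    | Wand  of prop * prop    (* A -* B : multiplicative implication *)
    | Star  of prop * prop    (* A * B  : multiplicative conjunction *)
    | True_                   (* unit for /\ *)
    | False_                  (* unit for \/ *)
    | One                     (* unit for *  *)

  (* A bunch is a single proposition, an additive bunch [G1; ...; Gn],
     or a multiplicative bunch (G1, ..., Gn).  As in the grammar above,
     the list constructors are only used when the number of children is
     <> 1; Add [] represents 0a and Mul [] represents 0m. *)
  type bunch =
    | Leaf of prop
    | Add  of bunch list
    | Mul  of bunch list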
Here is another example:

  l':
        |
        |
   l:( A, A -* B )

By focusing on A and A -* B, we generate the following rules:

  1) A => A

  2)       G => A
      ------------------
       G, A -* B => B

Now we can apply these rules to generate sequents:

  A => A
  A, A -* B => B

The difference from the previous example is that since the node l is in a multiplicative context, not every sequent proven in it is promoted to its parent node. For example, A => A cannot be promoted to its parent node as l => A, since the entire bunch has not been consumed. Only A, A -* B => B can be promoted to the parent node l' as l => B, because it consumes the entire bunch.

Generalizing this observation, we can handle the interaction between nodes as follows (a code sketch of this promotion step is given at the end of this outline):

  1) We represent every node with a label.
  2) In an additive context l : [l1; ...; ln], anything, say C, proven here is promoted to its parent node as l => C. That is, if G => C in node l (for any G), then we have l => C in its parent node.
  3) In a multiplicative context l : (l1, ..., ln), a formula C proven with all labels consumed is promoted to its parent node as l => C. That is, if l1, l2, ..., ln => C in node l, then we have l => C in its parent node.

A simple strategy to incorporate all these ideas, which our team usually refers to as "bubble up", is:

  1) Visit every leaf node; apply the rules associated with that leaf node until its (local) database is saturated.
  2) Promote all sequents to the parent node if the node is additive; promote only those sequents consuming the entire bunch if the node is multiplicative.
  3) Let the "bubble" rise until the goal formula is proven at the root node, at which point the "bubble" explodes and the search is complete.

A few minor points:

  - A bunch consisting of a single proposition can be regarded as either additive or multiplicative; in other words, it doesn't matter.
  - Therefore every bunch can be labeled, and the "bubble up" strategy can be implemented in a uniform way.
  - Rule generation/representation is now no harder than in the case of propositional intuitionistic logic.
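To make the promotion step concrete, here is a hedged sketch in OCaml, building on the prop type from the sketch earlier in this outline. The label type, the sequent record, and the function promote are assumptions made for this note; in particular, we approximate the left-hand side of a local stable sequent by the multiset of child labels it uses.

  type label = int

  type node_kind = Additive | Multiplicative

  (* A stable sequent local to a node: the left-hand side is the multiset
     of child labels consumed in the proof (represented as a list), and
     the right-hand side is the proven formula. *)
  type sequent = { lhs : label list; rhs : prop }

  (* Promote the sequents proven in child node l to its parent:
     - additive node: every proven C is promoted as  l => C;
     - multiplicative node: only sequents whose left-hand side consumes
       exactly the labels l1, ..., ln of all children are promoted as
       l => C. *)
  let promote (kind : node_kind) (l : label) (children : label list)
      (db : sequent list) : sequent list =
    let consumes_all s =
      List.sort compare s.lhs = List.sort compare children
    in
    db
    |> List.filter (fun s ->
           match kind with
           | Additive -> true
           | Multiplicative -> consumes_all s)
    |> List.map (fun s -> { s with lhs = [ l ] })

A "bubble up" loop would call promote at each node, bottom-up, after saturating that node's local database; how saturation and rule generation are organized is left open here.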