Newsgroups: sci.math,sci.stat.math,comp.ai.fuzzy
Path: cantaloupe.srv.cs.cmu.edu!bb3.andrew.cmu.edu!newsfeed.pitt.edu!gatech!newsfeed.internetmci.com!in2.uu.net!decan!sthomas
From: sthomas@decan.com (S. F. Thomas)
Subject: Re: A Spy Problem
Followup-To: sci.math,sci.stat.math,comp.ai.fuzzy
X-Newsreader: TIN [version 1.2 PL2]
Organization: Decision Analytics, Inc.
Message-ID: <DLrBt4.Ivv@decan.com>
References: <4e279k$gj0@chaos.kulnet.kuleuven.ac.be> <4e2j26$j9i@zinc.compulink.co.uk> <4e2tmu$q4n@chaos.kulnet.kuleuven.ac.be>
Date: Thu, 25 Jan 1996 22:00:38 GMT
Lines: 122
Xref: glinda.oz.cs.cmu.edu sci.math:133878 sci.stat.math:8938 comp.ai.fuzzy:6532

Steven Demuynck (Steven.Demuynck@fys.kuleuven.ac.be) wrote:
: hv@compulink.co.uk (Hugo van der Sanden) wrote:
: >
<snip of van der Sanden's solution>

:    You hit the question I always end up with when trying to solve this 
: problem one way or the other. If you calculate the probability that S3 
: will cross, given that S1 and S2 have crossed already [P(S3=1|S1=1,S2=1)], 
: you find p+c1+c2. However, the only constraints on c1 and c2 were that 
: p+c1 is between 0 and 1, and likewise p+c2. That a conditional probability 
: like the one above is not larger than 1 should follow naturally once 
: these constraints are applied. There are two possibilities:

:   * You did not fully take into account all the (hidden) constraints
:     in the problem. Remember that the spies S1 and S2 don't know about
:     each other. That is expressed by p+c1 <= 1 and p+c2 <= 1. From
:     your solution one can see that you implicitly suppose that they
:     do have contact, because they have to agree between themselves 
:     that their c1 and c2 are such that p+c1+c2 <= 1. Somewhere it 
:     should be expressed that S1 and S2 are independent.

:   * Another possibility is that the problem as I state it has no
:     solution, which would seem very strange to me.

:    Another test is that the solution should also hold when you add more 
: spies who all send messages to the last spy (and not to each other). For 
: ever more ci's, all probabilities should remain meaningful without the 
: need to change the ci's themselves.
:    I hope you or someone else has the knowledge and/or inspiration to 
: bring this quest to an end. The solution would mean a lot for some people 
: here in Leuven.

I'll take a shot at it.  But first I'll restate the problem.
I see it as a problem in the combination of evidence regarding
a parameter whose value is uncertain.  This parameter is the
probability p of a successful crossing.  The problem is an 
analogue of one that I considered many years ago when I was
first introduced to the fuzzy-set theory: Consider the tossing
of a thumb-tack.  It can land either top-down, or on its side,
and we have no precise idea beforehand what the probability, 
p, is of it landing top-down on a single toss.  However, we can 
look at the tack, get a feel for the physics involved, and proceed
to guess that it's more likely to land top down than on its
side, unlike a fair coin, fairly tossed, that presumably has 
equal chances of landing heads or tails.  Suppose the best 
we can do in articulating our prior estimate of the probability 
of the thumb tack landing top down is to say that the probability
is "high".  Suppose further we accept the insights of
the fuzzy set theory which allow us to characterize the
uncertainty consistent with such a description by a
membership function (for "high") over a numeric base variable, u
say, ranging on [0,1].  Now, following Bayesian notions, 
we proceed to experiment.  We toss the thumb tack once, and 
sure enough, it lands top down.  How do we update our 
characterization of the uncertainty regarding the probability p?
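For concreteness, such a fuzzy prior can be put in numerical form. The sketch below tabulates a membership function for "high" on a discrete grid of candidate values for p; the ramp shape is my own illustrative assumption, not anything fixed by the theory:

```python
# A membership function for the fuzzy set "high" over the base
# variable u ranging on [0,1].  The ramp shape is an assumption:
# zero membership below 0.5, rising linearly to full membership at 1.
def high(u):
    return max(0.0, (u - 0.5) / 0.5)

# Discretize [0,1] and tabulate the prior for later manipulation.
grid = [i / 100 for i in range(101)]
prior = [high(u) for u in grid]
```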

Now, the border crossings are like tossing a thumb tack
except for the following "wrinkles":  First, suppose that 
we don't actually get to see and feel the thumb tack
before making the prior assertion about p.  But when we
are given the thumb tack to experiment with, we discover
it's really more like a pin, with a much smaller and lighter
head than we imagined at the outset.  At this point,
we have not only the result of the toss, but an additional
observation that changes our perception altogether of the 
probability of "success" on a single trial.  How now do
we update our characterization of the uncertainty regarding
the probability p?  Second, what if the problem is a 
collaborative exercise, and, contrary to the usual Bayesian
formulation, we want to elicit and combine the prior opinions
of a group of decision makers, rather than proceeding with
that of just one?  How do we do that a priori, and also a 
posteriori, given the result of experiment?

It is possible to show (see Thomas, 1995, "Fuzziness 
and Probability") that "fuzzy" priors (e.g. p is "high") 
are qualitatively the same as a likelihood function, and 
assuming independence, the appropriate rule of combination 
is pointwise multiplicative.  Schematically, as in the 
Bayesian theory, but with prior and posterior now both 
being considered qualitatively identical to likelihood,

(1)	Posterior(u) = Likelihood(u) * Prior(u)		

And for group assertions

(2)	Group(u) = Prior_1(u) * Prior_2(u) * ... * Prior_n(u)

where n independent priors are asserted.  This is a form
of group intersection.  (There is also a form of group union,
which provides an appropriate rule of combination when the 
priors are not independent, but we leave that aside.)
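On a discrete grid, both formulas reduce to a pointwise product. The sketch below assumes the ramp-shaped "high" prior from the thumb-tack discussion and the likelihood L(u) = u for a single observed success; the renormalization to a unit supremum is my own convention here, in keeping with treating prior and posterior alike as likelihood functions:

```python
# Pointwise multiplicative combination, as in formulas (1) and (2).

def normalize(f):
    # Rescale so the supremum is 1 (left alone if f is identically zero).
    m = max(f)
    return [x / m for x in f] if m > 0 else list(f)

def combine(*fs):
    # Pointwise product of equal-length tabulated functions.
    out = [1.0] * len(fs[0])
    for f in fs:
        out = [a * b for a, b in zip(out, f)]
    return normalize(out)

grid = [i / 100 for i in range(101)]
prior = [max(0.0, (u - 0.5) / 0.5) for u in grid]  # fuzzy prior "high"
likelihood = list(grid)                # one success observed: L(u) = u
posterior = combine(likelihood, prior)             # formula (1)
```

Formula (2) is the same `combine` applied to n tabulated priors instead of a likelihood-prior pair.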

With this general understanding, it is easy to see how
the Spy Problem may be reformulated in a way not likely 
to lead to such basic anomalies as having p + c1 + c2
possibly exceeding 1.  Or having to decide ahead of time, and 
without benefit of observation, what c1, n1, c2 and n2 
should be.  What the question reduces to is what should 
be the rule for combining the posterior estimates made by
S1 and S2, given group prior estimates made by S1, S2 and S3.
Since Thomas (1995), this is a solved problem, with
formula (2) being appropriate first for combining
the starting priors of S1, S2 and S3, and being again
appropriate for combining the posteriors of S1 and
S2 after they have each attempted a crossing.  The
posteriors of S1 and S2 may be obtained using formula 
(1), unless what is observed is so inconsistent with
the starting prior (Cf. a "real" thumb-tack vs. round-headed
pin analogy), that the posteriors of S1 and S2 amount
to completely discarding their starting priors and
asserting essentially new estimates.
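To make the reformulation concrete, here is a hypothetical end-to-end run. The three triangular priors are invented for the example; nothing in the problem statement dictates them. Every quantity stays within [0,1] throughout, so no analogue of the p + c1 + c2 > 1 anomaly can arise:

```python
# Sketch: combine the starting priors of S1, S2, S3 (formula (2)),
# update S1 and S2 after each observes one successful crossing
# (formula (1) with likelihood L(u) = u), then combine the posteriors.

def tri(center, width):
    # Triangular membership function peaked at `center` (an assumed shape).
    return lambda u: max(0.0, 1.0 - abs(u - center) / width)

def normalize(f):
    m = max(f)
    return [x / m for x in f] if m > 0 else list(f)

def product(*fs):
    out = [1.0] * len(fs[0])
    for f in fs:
        out = [a * b for a, b in zip(out, f)]
    return normalize(out)

grid = [i / 100 for i in range(101)]

# Independently asserted starting priors (shapes invented for illustration).
p1 = [tri(0.7, 0.3)(u) for u in grid]
p2 = [tri(0.6, 0.3)(u) for u in grid]
p3 = [tri(0.8, 0.3)(u) for u in grid]
group_prior = product(p1, p2, p3)        # formula (2)

# S1 and S2 each observe one successful crossing: likelihood L(u) = u.
post1 = product(grid, p1)                # formula (1)
post2 = product(grid, p2)
combined = product(post1, post2)         # formula (2) on the posteriors
```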

: Thanks and regards,
: Steven

Hope this is helpful, even if it is not an answer to the
question precisely as posed.

Regards,
S. F. Thomas

