Newsgroups: rec.arts.int-fiction
Path: gmd.de!xlink.net!zib-berlin.de!netmbx.de!Germany.EU.net!EU.net!uunet!yeshua.marcam.com!news.kei.com!ub!acsu.buffalo.edu!goetz
From: goetz@cs.buffalo.edu (Phil Goetz)
Subject: Re: Reasoning agents
Message-ID: <CM450s.CDK@acsu.buffalo.edu>
Sender: nntp@acsu.buffalo.edu
Nntp-Posting-Host: kitalpha.cs.buffalo.edu
Organization: State University of New York at Buffalo/Comp Sci
References: <whittenCL2xwI.57D@netcom.com> <2kp1h7$5i9@u.cc.utah.edu> <CLyH9r.3HA@acsu.buffalo.edu> <1994Mar2.154214.20825@oxvax>
Date: Thu, 3 Mar 1994 23:53:16 GMT
Lines: 71

In article <1994Mar2.154214.20825@oxvax>,
Jocelyn Paine <popx@vax.oxford.ac.uk> wrote:
>When the agent perceives
>something new, it has to work out whether any of its perceptions refer
>to objects already described by this model. I.e. suppose the model
>contains an assertion of the form
>    in( assassin(1), room(27) ) ,
>and the agent moves into room 28 and sees an assassin, it has to work
>out whether this is assassin(1) or not.  If so,
>it can update the assertion to
>    in( assassin(1), room(28) ).

In IF we cheat and let the agent know the token identity
of the objects it sees.  :)
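A minimal sketch of that cheat (all names here are illustrative, not from any particular IF engine): the game engine hands the agent each object's unique token directly, so "is this the same assassin?" never has to be inferred — it's a dictionary lookup.

```python
# Sketch of the IF "cheat": every game object carries a unique token,
# so the agent never faces the identity-resolution problem at all.
class GameObject:
    _next_id = 0

    def __init__(self, kind):
        self.kind = kind
        self.token = GameObject._next_id   # unique token identity
        GameObject._next_id += 1

class Agent:
    def __init__(self):
        self.model = {}   # token -> (kind, room)

    def perceive(self, obj, room):
        # The engine exposes obj.token directly, so seeing the same
        # assassin again updates one entry instead of adding a second.
        self.model[obj.token] = (obj.kind, room)

a1 = GameObject("assassin")
agent = Agent()
agent.perceive(a1, 27)   # in(assassin(1), room(27))
agent.perceive(a1, 28)   # same token: the entry moves, no duplicate
```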

>If not, it must create a new token for the new assassin, and add a new
>assassin:
>    in( assassin(2), room(28) ).
>
>A big problem with this approach is that you don't always have enough
>information to work out whether the newly-seen object really is new or
>not. So perhaps you make an assumption about its identity (that it's
>assassin 1 in room 28). To be consistent, you then have to remove
>assassin 1 from room 27; if it actually was assassin 2 in room 28,
>you've created an incorrect model. Or you try to find some way to
>represent the fact ``it's either assassin 1 or assassin 2 in room 28 but
>I don't know which''. This leads to all sorts of difficulties with
>representing and inferring from disjunctions.

Side note, not really related to agents for IF but to the long-term
goals of AI: yes, horrible difficulties, but they are similar to the
difficulties you find in reading natural-language text when you come
across an ambiguity that is only resolved later.  So any complete
cognitive architecture must be able to deal with these problems anyway.

All this stuff about figuring out whether the assassin in room 28 is
still assassin 1 is above and beyond anything deictic processing can do.
With a good logic you can hack your reasoner to conclude "maybe assassin 1
is in room 27, and maybe he's in room 28," and leave it there.
Whatever you do beyond that is icing on the cake compared to what you
get from deictic reasoning.
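One hedged way to sketch that "maybe room 27, maybe room 28" conclusion, without committing to a full disjunctive logic, is to keep a set of candidate worlds instead of a single assertion (the representation below is my own illustration, not anyone's published system):

```python
# Represent uncertainty as a set of possible worlds, each mapping
# tokens to rooms.  Seeing an assassin in room 28 splits the belief.
worlds = [{"assassin1": 27}]          # initial model: in(assassin(1), room(27))

def see_assassin(worlds, room):
    new_worlds = []
    for w in worlds:
        # Hypothesis A: it is assassin 1, who has moved here.
        moved = dict(w)
        moved["assassin1"] = room
        new_worlds.append(moved)
        # Hypothesis B: it is a new assassin; assassin 1 stays put.
        fresh = dict(w)
        fresh["assassin2"] = room
        new_worlds.append(fresh)
    return new_worlds

worlds = see_assassin(worlds, 28)
# worlds now encodes "assassin 1 is in room 27 or room 28, but I
# don't know which" -- and the set doubles with every such ambiguity,
# which is exactly the difficulty with disjunctions mentioned above.
```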

>Now to the deictic approach. I'll quote from the end of {\it Situated
>...
>However, the functional role of
>these objects is always the same, so it doesn't matter.

I think it is evident that insects use deictic planning (except
maybe honeybees when they communicate the direction to some flowers).
If food appears in front of them, they eat it.  They don't
mate for life; whenever a female insect F is near a male insect M,
F = mate(M).  They can't reason their way out of a paper bag
(as evidenced by the Japanese beetles in Maryland, which we
trap in hanging plastic bags: hundreds of them fly in and stay
until they starve), because they have no concept of
places other than their present location.

It is evident that people don't always reason deictically: I may want
to give a Christmas present to my brother, not to just anybody;
I may want to go to Duff's, not just any white adobe building.
If you want to build one reasoning system, not two, and you want
it to exhibit behavior other than "out of sight, out of mind",
you'll have to use a more powerful representation than a deictic one.
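The contrast might be sketched like this (purely illustrative names): a deictic rule binds to whatever currently fills a functional role, while a token-based goal names one specific individual, so the agent can keep acting toward it when it's out of view.

```python
# Deictic representation: roles, not individuals.  "the-food-in-front-
# of-me" is whatever currently fills that slot; nothing persists
# once the slot is empty.
def deictic_act(percepts):
    if percepts.get("food-in-front-of-me"):
        return "eat"
    return "wander"   # out of sight, out of mind

# Token-based representation: the goal names a particular individual,
# so absence from view triggers planning instead of forgetting.
def token_act(goal_token, visible_tokens, known_locations):
    if goal_token in visible_tokens:
        return "give-present"
    return ("go-to", known_locations[goal_token])   # plan toward the token

insect = deictic_act({})                                   # no food in view
person = token_act("brother", set(), {"brother": "Buffalo"})
```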

I'm not saying that deictic reasoning doesn't have a place in IF.
But remember that most of the work done by "animats" researchers is geared
towards developing insects, and might not apply to designing characters for IF.

>Jocelyn Paine

Phil goetz@cs.buffalo.edu
