Newsgroups: rec.arts.int-fiction
Path: gmd.de!xlink.net!sol.ctr.columbia.edu!news.kei.com!ub!acsu.buffalo.edu!goetz
From: goetz@cs.buffalo.edu (Phil Goetz)
Subject: Re: Reasoning agents
Message-ID: <CKvJxx.3to@acsu.buffalo.edu>
Keywords: logic, reasoning
Sender: nntp@acsu.buffalo.edu
Nntp-Posting-Host: hydra.cs.buffalo.edu
Organization: State University of New York at Buffalo/Comp Sci
References: <CKto2p.BzF@acsu.buffalo.edu> <whittenCKu9nG.Hq7@netcom.com> <MATM.94Feb7044633@mountaindew.eng.umd.edu>
Date: Mon, 7 Feb 1994 22:03:33 GMT
Lines: 78

In article <MATM.94Feb7044633@mountaindew.eng.umd.edu> matm@glue.umd.edu (Matthew J. MacKenzie) writes:
>Reasoning about collections of objects is difficult (since they do not
>appear as first-order objects?); you can talk about this person or
>that, but not about all the people in a boat.

You can say things about all the people in a boat
(e.g. all of them are wet), but you can't count them.

>Mr. Goetz ["Phil" is fine] mentioned reasoning by plausibilty, which I
>take as a reference to Baysean reasoning, though of course he may not. :-) 
>This makes sense to me since I think people use something like
>probabilities when they reason, and the general IF problem is within
>spitting distance of the general AI problem.

I like Bayesian reasoning.  But some people who know more about it than
I do say that it's hard to use, because you have to know prior probabilities
for everything (how likely it is that the value of a random variable X =
some specific instance x, when you don't have any data that give you hints
about X's value).
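To make the prior-probability requirement concrete, here is a minimal
Python sketch of a single Bayesian update.  The hypothesis and all the
numbers are invented for illustration; the point is that the rule
cannot be applied at all without a prior P(H):

```python
# Minimal Bayes' rule update: P(H|E) = P(E|H)P(H) / P(E).
# All numbers are made-up illustration values.

def posterior(prior, likelihood, likelihood_if_not):
    """Posterior probability of hypothesis H given evidence E."""
    evidence = likelihood * prior + likelihood_if_not * (1 - prior)
    return likelihood * prior / evidence

# Suppose H = "the player is carrying the lantern".
p = posterior(prior=0.5,               # we must invent this number
              likelihood=0.9,          # P(E | H)
              likelihood_if_not=0.2)   # P(E | not H)
print(round(p, 3))  # -> 0.818
```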

Reasoning systems that can be used without prior
probabilities include Dempster-Shafer theory (see AI Expert Aug. 1993
p. 26-33), which uses belief intervals, and fuzzy logic.  Generally
the term "certainty factor" is used to refer to a number which is used
like the probability of a proposition, but isn't really a
probability because all the certainty factors don't sum to 1.
Also, CF(P^Q) may be calculated as Min(CF(P), CF(Q)) instead of
CF(P) x CF(Q).  Expert systems often use certainty factors.
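The two conjunction rules can be compared directly.  This sketch
assumes CFs on a 0-to-1 scale with a hypothetical pair of values;
min is the MYCIN-style combination, product the probability-style one:

```python
# Two common ways expert systems combine certainty factors for P ^ Q.
# Neither result is a probability; CFs need not sum to 1.

def cf_and_min(cf_p, cf_q):
    return min(cf_p, cf_q)    # MYCIN-style conjunction

def cf_and_product(cf_p, cf_q):
    return cf_p * cf_q        # probability-like (assumes independence)

print(cf_and_min(0.8, 0.6))             # -> 0.6
print(round(cf_and_product(0.8, 0.6), 2))  # -> 0.48
```

Note the product is strictly smaller than the min whenever both CFs
are positive and less than 1, so the choice matters.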
Cyc uses a multi-valued nonmonotonic logic.  It used to use numeric real
values to represent certainty, but its builders switched from real numbers
to "100 (monotonically T), T (default T), ? (unknown), F (default F),
0 (monotonically F)" for reasons I find unconvincing.
(Doug Lenat said that it was disturbing that a statement with certainty
factor .901 would be called more probable than one with CF .9.
I don't see the problem.)  Then they define the result of logical
operations on propositions with these values:  100 ^ F = 100, T ^ F = ?,
F v T = F, etc.  100 ^ 0 = contradiction, which must be resolved.
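For concreteness, the handful of cases quoted above can be written as
a partial lookup table.  I've assumed the operations are commutative,
and left every combination the list doesn't mention undefined rather
than guessing:

```python
# Partial truth tables for Cyc's five values, containing ONLY the
# cases quoted above; all other combinations are left undefined.
# Values: "100" monotonically T, "T" default T, "?" unknown,
#         "F" default F, "0" monotonically F.

AND = {
    ("100", "F"): "100",            # as stated above
    ("T", "F"): "?",                # conflicting defaults -> unknown
    ("100", "0"): "contradiction",  # must be resolved by the system
}

OR = {
    ("F", "T"): "F",                # as stated above
}

def conj(a, b):
    return AND.get((a, b), AND.get((b, a)))  # assumed commutative

def disj(a, b):
    return OR.get((a, b), OR.get((b, a)))    # assumed commutative

print(conj("100", "0"))  # -> contradiction
```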

Incidentally, there are certain situations in which people usually
make predictable inferences that are illogical.  For instance, rank
these in terms of likeliness:

	1. Diane is a housekeeper.
	2. Diane is president of a multinational corporation.
	3. Diane is president of a multinational corporation and a feminist.

People usually say 3 is more likely than 2, even though 3 implies 2,
so 3 can be at most as likely as 2.
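The arithmetic behind this (the "conjunction fallacy") is simple.
The numbers below are invented, but the inequality holds for any
choice of probabilities:

```python
# A conjunction is at most as probable as either conjunct:
# P(president AND feminist) = P(president) * P(feminist | president).

p_president = 0.001               # P(2): invented illustration value
p_feminist_given_president = 0.6  # invented conditional probability
p_both = p_president * p_feminist_given_president  # P(3)

assert p_both <= p_president      # holds for ANY probabilities
print(p_both, p_president)
```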

I'm not saying that we should cripple our systems to avoid being smarter
than people.  There's no danger of that yet.

>I'm curious if anyone has tried probabilistic agent planning, in IF or
>some other micro world.  I'd be surprised if there were mature IF
>systems out there based on this approach since you'd expect that much
>effort to be announced right here.

Probabilistic planning and probabilistic maps of the world
are popular in robots.
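A common form of probabilistic map is the occupancy grid, where each
cell's probability of being occupied is updated in log-odds form as
sensor readings arrive.  This is a generic illustration (not from any
particular robot system), with an invented sensor model:

```python
import math

# Minimal occupancy-grid sketch: each cell holds P(occupied),
# stored and updated as log-odds.

def logodds(p):
    return math.log(p / (1 - p))

def prob(l):
    return 1 / (1 + math.exp(-l))

grid = [logodds(0.5)] * 5    # five cells, prior P(occupied) = 0.5

def update(cell, p_occ_given_reading):
    # Bayes update in log-odds: add the sensor's log-odds,
    # subtract the prior's (zero here, since the prior is 0.5).
    grid[cell] += logodds(p_occ_given_reading) - logodds(0.5)

update(2, 0.9)   # sensor says cell 2 is probably occupied
update(2, 0.9)   # a second, agreeing reading
print(round(prob(grid[2]), 3))  # -> 0.988
```

Two independent 0.9 readings multiply the odds (9 x 9 = 81 to 1),
which is why the cell ends up far more certain than either reading
alone.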

>It gets extremely weird if you represent the world model in terms of
>probabilities, since the system never knows if you're carrying the
>lantern, or if there's a grue around the corner, it's only "pretty
>sure" (approaching certainty).

You can use a probability of 1.
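That is, a probabilistic world model can still represent hard facts.
A sketch, with invented proposition names:

```python
# Probabilistic world model where established facts get probability 1
# and genuinely uncertain ones get something less.

world = {
    "carrying(lantern)": 1.0,    # the system KNOWS this
    "grue(around_corner)": 0.3,  # genuinely uncertain
}

def certain(prop):
    # Unlisted propositions are treated as probability 0 here.
    return world.get(prop, 0.0) == 1.0

print(certain("carrying(lantern)"))    # -> True
print(certain("grue(around_corner)"))  # -> False
```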

There is a large literature on nonmonotonic reasoning.
For a critique of the monotonic version of default logic
("turn(ignition) ^ not(in(potato, tailpipe)) => starts(engine)") see
John McCarthy (1980) "Circumscription - a form of non-monotonic
reasoning," in  _Artificial Intelligence_ 13:27-39.

I strongly recommend _Building Large Knowledge-Based Systems:
Representation and inference in the Cyc project_,
by Doug Lenat and R. Guha, Addison-Wesley 1990.  It describes many
of the problems they had in representing and reasoning about the world.

Phil goetz@cs.buffalo.edu
