Newsgroups: rec.arts.int-fiction
Path: gmd.de!ira.uka.de!yale.edu!spool.mu.edu!uwm.edu!linac!att!mcdchg!chinet!jorn
From: jorn@chinet.chi.il.us (Jorn Barger)
Subject: "Was:  barger@ils"  Chapter 3 (LISP)
Message-ID: <C10zMx.89s@chinet.chi.il.us>
Organization: Chinet - Public Access UNIX
Date: Mon, 18 Jan 1993 01:18:33 GMT
Lines: 114

====================================================================
                        "Was: Barger@ILS"
                   (memoirs of an a.i. hacker)
                          by Jorn Barger

                        Chapter 3: LISP
====================================================================


In our first weeks at ILS, the new hires all got a short'n'sweet overview
of Lisp programming from Chris Riesbeck, and a nice shiny Macintosh each,
with Macintosh Allegro Common Lisp (MACL) to start experimenting on.

Anyone who gripes about the profligate way Lisp uses parentheses is
completely missing the point.  Parentheses are just the simplest possible
way of depicting tree-structures.  For instance, the tree:

A
  1
  2
    a
    b
B

is topologically equivalent to:  (  ( () (()()) )  ()  ).
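
You can type that tree straight into the Listener.  Here's a quick sketch
(the TOPOLOGY helper and the node encoding are my own invention for
illustration, writing each node as a list whose head is its label):

```lisp
;; The tree above, each node written as (label child1 child2 ...);
;; leaves are one-element lists.
(defparameter *tree* '((A (1) (2 (a) (b))) (B)))

;; Strip the labels away and only the parentheses remain.
(defun topology (forest)
  (mapcar (lambda (node) (topology (rest node))) forest))

(topology *tree*)
;; => ((NIL (NIL NIL)) NIL)
;; and since NIL *is* the empty list (), that's the same shape as
;; ( ( () (()()) ) () ).
```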

(G. Spencer-Brown's "Laws of Form" (1969, out-of-print-- an
underappreciated masterpiece) formalizes this level of abstraction very
elegantly, in the domain of Boolean algebra.)

Using this uniform notation system for both data and program, as Lisp
does, allows one to decompose more-complex structures into simpler ones,
and to take advantage of structural similarities between units of
different overall complexity.
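
A quick sketch of what "same notation for data and program" buys you at
the Listener (just standard Common Lisp, nothing MACL-specific):

```lisp
;; A program is itself just a tree of parentheses, so Lisp can treat
;; it as data, and data as a program:
(defparameter *form* '(+ 1 (* 2 3)))   ; a list of symbols and numbers

(first *form*)                 ; => +    -- inspect it like any list
(eval *form*)                  ; => 7    -- or run it as a program

;; Building new code out of old is just list surgery:
(eval (cons '- (rest *form*))) ; => -5,  i.e. (- 1 (* 2 3))
```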

MACL is a big comfy chair of a programming environment, very quick for
prototyping, very easy to maintain and modify if used wisely, but pretty
much a bytes-and-hertz hog.  (SIMMs and speeds being what they are these
days, though, it's quite an excellent choice for many sorts of
exploratory development.)

The latest version of MACL uses the CLOS (CEE-loss) object system.  I
think it might not be too radical to divide the history-to-date of AI into
two periods, the first characterized by exploration of the concept of
abstraction hierarchy (c500 BC to c1980), the second by "object-oriented
programming," where the program's vocabulary of 'verbs' is distributed
across an abstraction hierarchy of types (and their instances), with more-
specialized nodes 'inheriting' verbs from the generalizations above them. 
Perhaps the next era will arrive via a neat solution to the CLOS meta-object
protocol problem, analyzing the most-elegant internal mechanics for
connecting types, instances, and code-methods.
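
For anyone who hasn't met CLOS yet, a minimal sketch of verbs inherited
down such a hierarchy (all the class and method names here are made up
for illustration):

```lisp
(defclass thing () ())          ; the generalization at the top
(defclass animal (thing) ())
(defclass dog (animal) ())      ; a more-specialized node

(defgeneric describe-it (x))    ; a 'verb'

(defmethod describe-it ((x thing))
  "some thing")

(defmethod describe-it ((x animal))
  ;; specialize, but still reach the inherited verb above:
  (concatenate 'string "an animal, " (call-next-method)))

;; DOG defines no method of its own, so it inherits ANIMAL's:
;; (describe-it (make-instance 'dog)) => "an animal, some thing"
```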

I'm chasing after the idea that we might view code-segments as *the
histories (or stories) of their parameter-argument-variables*, so that the
"+" story is one of the usual useful stories you'd want to tell about two
integers, and "show-view" is ditto for a data-object and a user, and "edit-
hierarchy" ditto for a programmer and a knowledgebase.  In MACL's CLOS,
all the interface objects like windows and menus end up in the same
overarching hierarchy with all the data objects one chooses to represent,
so using the story-history metaphor for both looks like potentially a neat
'win'.
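
A hypothetical sketch of how that might look, with SHOW-VIEW as the
'story' and every class name invented for the occasion:

```lisp
(defclass user () ())
(defclass window () ())          ; an interface object
(defclass record-object () ())   ; a data object, same hierarchy

(defgeneric show-view (object viewer)
  (:documentation "The story of an object being shown to a viewer."))

(defmethod show-view ((obj record-object) (v user))
  (format nil "drawing a record for a user"))

(defmethod show-view ((obj window) (v user))
  ;; interface objects live in the same overarching hierarchy,
  ;; so the same 'story' is told about them too
  (format nil "bringing a window to the front for a user"))
```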


Here's Doug Lenat on Lisp vs Prolog (this is from a book commissioned by
Texas Instruments and distributed by Radio Shack, a combo about as tasty
as a 9-volt battery shorted across the tongue):

Q: How does Prolog differ from Lisp?
Lenat:  There has been a constant dream in AI, by a large fraction of
people in the field for 25 years, of the form: there really ought to be
some way of formalizing and axiomatizing human thought and reason.  But
time and time again, all attempts at axiomatizing things have led people
to trivialize them, to the point where they no longer apply to what they
were originally modelled after in the real world.
   There are a lot of people in the field who want to be *sure*, who want
to believe that they can get absolutely precise, logically guaranteeable
models that are close to what is going on in the real world.  If you
believe that, then the kinds of operations you want as primitives are
*logical* operations, those involved simply in *theorem proving*.  Those
are the sort of operations that are present in Prolog.

Q: Why is Prolog used more in Europe than in the U.S.?
Lenat: In most European countries, you have very rigid hierarchies of
'ancient' professors, and then younger professors, and then research
associates-- and then it filters down about seven levels to the people
actually writing the programs.  It is the people at the top who decide
what research is going to get done, not the people at the bottom who have
experience with what is actually happening.
   The people at the top-- who want to believe in a nice, simple,
mathematical, axiomatizable universe-- basically determine the kind of
research that is going to get done.  The experiences that would lead them
to change their minds are simply not occurring to them; they are occurring
to the people at the bottom, who have no say.

Q: Is Prolog used in the Japanese 5th Generation Project for the same
reason?
Lenat:  In Japan, they use Prolog mainly because it is not an American
language; it adds to their national spirit and pride to be removed from
what is going on in America.  But if you look real hard, what the Japanese
have done is to build Lisp-like functions on top of Prolog so that by now
it is hard to tell what language they are using.  They would have probably
been about two years ahead if they had used Lisp to start with instead of
Prolog.

from "Understanding Artificial Intelligence" by H.C. Mishkoff, 1986.


[Next: Case-based Reasoning]

Jorn Barger     jorn@chinet.chi.il.us     (Was: barger@ils.nwu.edu)
-------------------------------------------------------------------
Bonus riddle for British-style-crossword fans (answer a proper name):
1 Down (5 letters): Contrary pride at heart of abrupt 'railroad'.
-------------------------------------------------------------------
