Message-ID: <3D330FCF.60006@csi.com>
Date: Mon, 15 Jul 2002 14:09:19 -0400
From: John Colagioia <JColagioia@csi.com>
User-Agent: Mozilla/5.0 (Windows; U; Win98; en-US; rv:1.0rc2) Gecko/20020618 Netscape/7.0b1
X-Accept-Language: en-us, en
MIME-Version: 1.0
Newsgroups: rec.arts.int-fiction
Subject: Re: what's wrong with some existing IF languages
References: <Xns924696F5A38E7edmewsicSPAMGUARDcom@199.45.49.11> <agfei1$l7p10$1@ID-60390.news.dfncis.de> <Xns924755A2D30A1edmewsicSPAMGUARDcom@199.45.49.11> <iain-137BC7.21103410072002@socrates.zen.co.uk> <nJ1X8.28523$5f3.16894@nwrddc01.gnilink.net> <Xns92479B14FAF54OKB@12.252.202.62> <656X8.29751$5f3.22064@nwrddc01.gnilink.net> <agitkb$qnp@dispatch.concentric.net> <Hy7X8.18$7W6.3@nwrddc02.gnilink.net> <S%7X8.311714$R61.268018@rwcrnsc52.ops.asp.att.net> <He9X8.241$7W6.122@nwrddc02.gnilink.net> <eheX8.177$uw.207@rwcrnsc51.ops.asp.att.net> <3D2EC4E8.80902@csi.com> <ago3kv$qo9@dispatch.concentric.net> <3d302ede@excalibur.gbmtech.net> <Nf7Y8.350287$R61.330207@rwcrnsc52.ops.asp.att.net> <3d317469@excalibur.gbmtech.net> <7LvY8.541163$cQ3.49111@sccrnsc01>
Content-Type: text/plain; charset=us-ascii; format=flowed
Content-Transfer-Encoding: 7bit
NNTP-Posting-Host: ool-182f30fa.dyn.optonline.net
X-Original-NNTP-Posting-Host: ool-182f30fa.dyn.optonline.net
X-Trace: excalibur.gbmtech.net 1026756139 ool-182f30fa.dyn.optonline.net (15 Jul 2002 14:02:19 -0400)
Organization: ProNet USA Inc.
Lines: 295
X-Authenticated-User: jnc
Path: news.duke.edu!newsgate.duke.edu!nntp-out.monmouth.com!newspeer.monmouth.com!newsswitch.lcs.mit.edu!news-spur1.maxwell.syr.edu!news.maxwell.syr.edu!nntp.abs.net!uunet!dca.uu.net!ash.uu.net!excalibur.gbmtech.net
Xref: news.duke.edu rec.arts.int-fiction:106171

Tzvetan Mikov wrote:
> "John Colagioia" <JColagioia@csi.com> wrote in message
> news:3d317469@excalibur.gbmtech.net...
>>You obviously haven't done much assembly-level programming.
> First rule of Usenet etiquette: never express opinions about a poster, only
> about his ideas. But since you already apologized in the end ... :-)
> (The first assembler program I wrote was in the 80-s on an Apple II - it was
> a version of Snakebyte. I've also learned a few things since then)

Technically, that was a(n apparently) baseless conclusion, rather
than an opinion.  Those are almost certainly fair game, right...?
Right...?

>>No.  Tradition on certain architectures has developed
>>among programmers, so that code can more easily be used.
>>The BP/SP on Intel chips is a convenience for finding
>>the return address, but even it makes no claims about
>>the memory above and below either pointer.
> Hm. This has really nothing to do with the subject, but please, demonstrate
> how to push function parameters *after* the return address on x86 :-)

I don't see how this is a problem.  Push the return address, push
parameters, set SP and BP in an intelligent fashion.  You probably
can't use CALL and RET anymore (ick), but that's a side issue.

More in line with modern practices, simply load the parameters into
[SP-4], [SP-6], and so on (leaving [SP-2] free for the return address
that CALL will push), then use CALL and RET as usual.

The point, though, is that the CPU knows (and cares) nothing about
parameters and local variables, and the difference between them.  It
takes a programmer (human or compiler) to make those distinctions.

As a programmer, I'd be willing to bet that the first parameter in
any given routine is at [BP+4] (given the usual PUSH BP / MOV BP,SP
prologue), because most of us were trained in the same methodology,
but that doesn't mean I'm always going to be right.
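
To make the point concrete, here's a toy model in Python (nothing
here is real x86; it just treats the stack as a flat array of words,
which is all the CPU ever sees):

```python
# Toy model of a call stack as a flat array of words, illustrating
# that the hardware carries no notion of "parameter" vs. "local":
# both are just slots relative to whatever base-pointer convention
# the programmer (or compiler) happens to pick.

def call_with_convention(stack, sp, ret_addr, params, n_locals):
    """Push a frame under one (arbitrary) convention:
    parameters first, then the return address, then locals."""
    for p in params:              # caller pushes parameters
        sp -= 1
        stack[sp] = p
    sp -= 1
    stack[sp] = ret_addr          # "CALL" pushes the return address
    bp = sp                       # callee notes a base pointer
    sp -= n_locals                # callee reserves space for locals
    # Under *this* convention the nearest parameter is at bp + 1 and
    # the first local at bp - 1 -- but nothing stops a different
    # convention from putting parameters below the return address.
    return stack, sp, bp

stack = [0] * 16
stack, sp, bp = call_with_convention(stack, 16, ret_addr=0x1234,
                                     params=[10, 20], n_locals=2)
assert stack[bp] == 0x1234        # return address at the base pointer
assert stack[bp + 1] == 20        # a "parameter" by convention only
```

The asserts only hold for this particular convention, which is
exactly the point: swap the push order and the offsets change, while
the CPU remains none the wiser.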

>>If you know of a chip which does make an actual
>>distinction, I'd genuinely like to hear about it; the
>>propagation of RISC chips throughout the industry
>>makes that unfortunately pretty unlikely.
> Sparc ? EPIC ?

I'm not familiar with EPIC, but I'm pretty sure that Sparc follows
the Motorola (and other RISC-a-likes) approach of letting the
programmer choose which to push first.

> Anyway, you have entirely missed my point. On real CPUs the subroutine has
> to take explicit steps for allocating space for locals.

This is true.

> A compiler (or an
> assembly programmer) must posses the knowledge of the locations of input
> parameters and locals.
[...]
> I am not talking about physical distinction but about conventions which
> enforce a logical one.

But, then, you shouldn't be talking about "real CPUs."  Apparently,
it's a point of confusion...

> For example, a ZMachine convention could enforce such
> a distinction if locals were allocated from the bottom (starting from number
> 15). Each routine would also have to check at runtime for too many
> parameters (more than 15 - number_of_locals), so that they don't overwrite
> the locals. Optionally such extra parameters could be ignored.
> (Note that I am not actually suggesting such a change in Inform's code
> generation - this is just an example.)

And note that this is almost what already happens.  N locals are
allocated; the first M are set from the parameters, up to the number
passed in (no more than N, though most implementations don't check,
as far as I know).  Extra parameters are simply ignored.
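
That behavior is easy to model.  Here's a Python sketch of the
calling convention as I understand it from the Z-Machine spec (the
function name is mine, and real locals live in the routine's stack
frame, not a Python list):

```python
# Sketch of the Z-machine calling behavior described above: a routine
# declares N locals (N <= 15); a call copies its arguments into the
# first locals, and any arguments beyond N are simply dropped.

def z_call(num_locals, defaults, args):
    assert num_locals <= 15 and len(defaults) == num_locals
    locals_ = list(defaults)           # locals start at their defaults
    for i, a in enumerate(args[:num_locals]):
        locals_[i] = a                 # first M locals get the arguments
    return locals_                     # extras (beyond N) are ignored

# Routine with 3 locals, defaults (0, 0, 99), called with 2 arguments:
assert z_call(3, [0, 0, 99], [7, 8]) == [7, 8, 99]
# Called with too many arguments -- the extras vanish:
assert z_call(3, [0, 0, 99], [1, 2, 3, 4, 5]) == [1, 2, 3]
```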

>>>>>Nor does it support
>>>>>more then 15 locals (it would be very hard to do in the ZMachine,
>>>>> anyway)
>>>>See above.  It wouldn't be all that hard.  Add some code
>>>>to pack the data into arrays or an Object, and pass the
>>>>relevant pointer or handle.
>>>It may be possible (if one jumps through hoops),
>>Jump through hoops!?  It'd be, at most, a hundred-line or
>>so change to the compiler:  On a Routine call, allocate
>>some Z-memory, load the parameters into it, and put the
>>pointer onto the stack; in the Routine, change the access
>>to use that pointer instead of the stack pointer.
> It wouldn't be quite as easy as you make it sound. I don't have the ZMachine
> reference handy, but IIRC, indirect addressing with an offset would require
> at least one more instruction per access.

I can only assume that there's an "emit local access Z-code" routine,
which localizes such issues.

> Naturally,anything is possible,
> the ZMachine is Turing-complete after all, but it is clear that it wasn't
> designed for that.

I'm not so sure.  I rarely pass values.  Far more frequently, I pass
(pointers to) objects and routines into my routines.  It's not that
much of a conceptual jump to have the function call do this for me.
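
The "pack it into a table" idea would look something like this
Python sketch (every name here is invented; real Z-memory allocation
and indirect addressing are more involved):

```python
# Sketch of the workaround discussed above: when a routine wants more
# than 15 values, pack them into one table (array) and pass a single
# pointer-like handle instead.  Nothing here is actual Inform or
# Z-machine API; it just models the shape of the idea.

HEAP = {}          # stands in for a region of Z-machine memory
_next = [0]

def alloc_table(values):
    """'Allocate' a block and load the parameters into it."""
    addr = _next[0]
    HEAP[addr] = list(values)
    _next[0] += 1
    return addr                      # the one value actually passed

def big_routine(table_addr):
    """A routine with 20 'locals', reached through one pointer."""
    params = HEAP[table_addr]
    return sum(params)               # every access goes via the table

addr = alloc_table(range(20))        # 20 parameters, one argument
assert big_routine(addr) == sum(range(20))
```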

[...]
> Why was 15 chosen as the magical number ? Why not 10 or 20 or, even better,
> a compiler parameter ? Are you saying that the choice of 15 locals in Inform
> had nothing to do with the ZMachine ?

I assume, as I said before, that it's because the Z-Machine makes it
*easiest* to have 15 locals.  That's far different from being limited
to that number.

> I doubt that, so I don't understand
> what you are trying to prove.

I wasn't proving anything; I was explaining that "not like I usually
program" isn't necessarily a "limitation" or "bad thing."

> I agree that too many locals in a single
> function are often an indication of a programming problem in any language,
> but that's another topic.

I don't know that they're unrelated.  The usual research says that
humans can only keep track of about 3-7 things at a time.  I think
it's significant that the earlier Z-Machine could only carry 7 locals
into a call.

Add in a realization (a few years later) that many functions are
pushing the limit, and it's logical to double the value, figuring
that seven locals, plus seven parameters (plus one, just because the
encoding scheme allows it) should still be viable.

Contrast this with C or (shudder) Perl, where "if you can imagine it,
you can program it," no matter how bad an idea it is.

I seem to recall that, back in the mists of time, we were discussing
readability.  Which method improves readability?

>>>The limitation in the programming language is a
>>>direct result of the limitation in the ZMachine.
>>Where, exactly, is the limitation?  I see some
>>inconveniences for people programming fringe cases, but
>>nothing like a limitation.  Unless you definition of
>>"limitation" is "it's easier in C."  I doubt that, though.
> OK, I give up. I never thought that classifying the fact that Inform
> supports only 15 locals as a "limitation" would cause such a violent
> protective reaction.

Well, you called it a limitation (which you admit it shouldn't be,
for any sane programmer), and then argued that it harms readability
in some vague way.  I think that's what's causing the reaction.

> Man, I wish we could be more objective... Anyway,
> forget that I said it. Having 15 locals is in fact a great advantage! :-)
> Let's drop that point. We have drifted away from the subject, whatever it
> was ...

No, no.  I'm not defending my turf or anything.  I want to know what
the actual *problem* is that your solution is intended to treat, and
why you think it should be part of the standard Inform language (two
separate questions).

>>>is there any other definition of readability ? :-)
>>It's more syntax to slog through, none of which provides
>>any semantic information.  That is, it's less readable in
>>the same way that COBOL is less readable.
> Of course it provides semantic information.  It enforces the number of
> parameters, so that when you pass wrong number of parameters, you would get
> a compile-time or runtime error, instead of strange undefined behavior.

> I get the idea that you don't like any additional characters in the source
> :-),

I did spend some time hacking APL programs, but...

No, what I dislike is additional verbiage that either doesn't solve a
problem (e.g., LISP's insistence that even basic math subexpressions
be parenthesized, given that a benefit of prefix notation is that you
don't need to explicitly group things) or solves a problem in a
somewhat backwards way (e.g., prototypes in C--the *right* solution
is to build a compiler that's smart enough to look ahead; Hungarian
notation also falls into this category).

The "right" solution to the problem I believe you're pointing out
(compile-time checking of parameter lists) is to semantically analyze
the existing code, rather than adding unnecessary syntax to carry the
same information.
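
As a toy illustration of that approach, in Python (the "grammar"
below is a drastic simplification of real Inform, and all the helper
names are mine):

```python
import re

# Toy sketch of the "semantic analysis" approach: recover each
# routine's parameter count from its existing definition -- no
# prototypes added -- then flag call sites that pass too many
# arguments.  The regexes model a tiny fragment of Inform-ish syntax.

def routine_arities(source):
    """Map routine name -> declared parameter count, read straight
    from definitions of the form:  [ Name p1 p2 ... ; ... ];"""
    arities = {}
    for m in re.finditer(r'\[\s*(\w+)([^;]*);', source):
        name, params = m.group(1), m.group(2).split()
        arities[name] = len(params)
    return arities

def check_calls(source, arities):
    """Return (name, given, declared) for each call of the form
    Name(a, b, ...) that passes more arguments than declared."""
    errors = []
    for m in re.finditer(r'(\w+)\(([^)]*)\)', source):
        name, args = m.group(1), m.group(2)
        given = len([a for a in args.split(',') if a.strip()])
        if name in arities and given > arities[name]:
            errors.append((name, given, arities[name]))
    return errors

src = "[ Greet who style; print \"hi\"; ];  Greet(player, 1, 2);"
arities = routine_arities(src)
assert arities['Greet'] == 2
assert check_calls(src, arities) == [('Greet', 3, 2)]
```

Passing *fewer* arguments than declared isn't flagged, matching the
default-parameter behavior discussed earlier; only a surplus is an
error.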

> but even you have to agree that automatic error checking is a good
> thing, especially if it doesn't interfere with weak typing.

I fail to see the connection between this and adding syntax.  In
fact, I'm tempted to say that I fail to see the (necessary)
connection between compile-time error checking and altering the
language at all.

>>>To tell you the truth, we have a too big difference of opinion for this
>>>discussion to come to a constructive end.
>>If you want to take your ball and go home, that's
>>certainly your prerogative, but I think you're
>>overlooking critical issues, thinking that C (and
>>its derivatives) are some pinnacle of evolution.
> Please, don't put words in my mouth.  I haven't in a single place in my posts
> in this thread expressed an opinion about "C", let alone compare it with
> Inform (I used C++ once as an example of default parameters, because that is
> the most popular one). It is you, in fact, who is constantly doing that.

Your ideas are fairly obviously inspired by C or are mutually
inspired by the same sources.  Not that this is a bad thing, per se,
because C has pretty much infiltrated the collective consciousness of
the software industry (and, by "C," I mean pretty much everything
from BCPL to Java and C#, since they all share the same basic
semantic structure).  But there's no reason to try to shove them into
another language or claim their readability based on precedent.

> (Not that it matters, but my opinion of C is not quite as flattering as you
> might think, especially since I am lookng at it from the viewpoint of a
> compiler writer)

Heh.  What do you mean?  C is a compiler-writer's *dream*!  Static
scoping, lots of "implementation-defined" calls in the standard, and
no intrinsic I/O.  Hell, if you're targeting a PDP machine, you're
just about done, right there...

> I think that you are reading negative things between my lines that I just
> haven't said,

Possibly.  I also acknowledge that I may have confused your points
with those of others in the same thread.

> and this prevents you from looking at Inform with an open
> mind. Inform is not the pinnacle of evolution either. As anything else, it
> can be improved.

I haven't said otherwise.  What I *have* said is that I don't believe
your particular suggestion was worth integrating into the mainline
language, since it (a) can't have a clean syntax, and (b) adds syntax
carrying no semantic information that isn't already recoverable from
the existing syntax; and that, if you really thought it was a
necessity, it'd be fairly easy to place in a preprocessor (as would
my syntaxless variation).

>>The former is only as true as it is for the difference
>>between, say, Java and (to use a recent example) Prolog;
>>the application domains are usually dissimilar, and,
>>while you can use the same approaches in both ("You can
>>write your crappy FORTRAN code in any language," as one
>>of my old professors used to say), each has more
>>"native" ways of solving most problems.
> Aren't you exagerating a little? :-) I mean, Inform is different, but it
> isn't that different. By no means it is as different as ML and Prolog are
> from Algol, for example. It definitely doesn't require a drastically new way
> of thinking or programming.

The syntax and low-level semantics are similar.  It doesn't take too
much to, for example, rewrite a C (or Algol, or Java) program in
Inform (NB: Only in theory; I haven't bothered to try this).  The
higher-level semantics, and the nifty little "fringe" niceties that
make it an IF language, however, aren't as easy to "backport," for
lack of a better term.

I might argue (from what little I know of the language) that it
would actually be easier to move sufficiently Inform-ish Inform code
to ML than to C, though, again, that's only a gut feeling.

To give you an example, just this morning, I reinvented the two-way
door (none of the existing versions did what I needed for the game).
It checks found_in pairs, and automates/hides the remainder of the
directional code by doing "weird" things like iterating through the
contents of "Compass" and using compound, dynamically-chosen
properties.

I'd be hard-pressed to move this to Java.
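
Not Inform code, but the idea paraphrases into Python roughly like
this (all names invented; the real version leans on found_in and
compound-property tricks that the model below only gestures at):

```python
# A Python paraphrase of the two-way door described above: the door
# is "found_in" a pair of rooms, and the directional glue is derived
# at runtime by iterating the compass rather than hard-coded per
# room.  Every name here is illustrative.

COMPASS = ["n", "s", "e", "w", "ne", "nw", "se", "sw", "up", "down"]

class TwoWayDoor:
    def __init__(self, room_pair):
        self.found_in = room_pair        # the two rooms sharing the door

    def other_side(self, here):
        a, b = self.found_in
        return b if here == a else a     # pick the far side dynamically

def door_direction(here, door, exits):
    """Mimic looping over the contents of Compass: find which
    direction of this room leads through the door."""
    for d in COMPASS:
        if exits.get((here, d)) is door:
            return d

door = TwoWayDoor(("kitchen", "hallway"))
exits = {("kitchen", "e"): door, ("hallway", "w"): door}
assert door.other_side("kitchen") == "hallway"
assert door_direction("hallway", door, exits) == "w"
```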

[...]
> The biggest proof of that is the effort that was required on my
> part in order to start reading and understanding Inform sources - I just had
> to read the manual. It was much harder with Prolog :-)

Really?  I found Prolog a breeze.  A stupid breeze that I didn't
like, I'll grant you (as Joao pointed out, they're just a bunch of
"floating if-thens"), but fairly easy to start.  And, once you get
cuts and stuff, you're done with Prolog.  Data structures, algorithms
(oh, sorry, they're not "algorithms," they're...uhm...behaviors, or
something, right...?), and whatever else you might want, are based on
the same five or six constructs (in particular, once you understand a
"family tree" example, and can follow the operation of a stack,
you're really just about done).

Inform, I was able to "code in" after skimming the manual, but I
couldn't "write a program" (idea to design to code) in Inform for
quite a few years.  Only time will tell if I can do that *now*, to
be honest...

