Reply-To: "Beth" <BethStone21@hotmail.com>
From: "Beth" <BethStone21@hotmail.com>
Newsgroups: alt.os.development,comp.os.minix
References: <nlig5tkfkjn3freuarvkglrf3fds1gl7rt@4ax.com> <MPG.14c3b89d9332d8d798984c@news.direcpc.com> <slrn95ked9.3at.pino+comp_os_minix@mud.stack.nl> <MPG.14c3f049f50eae198984f@news.direcpc.com> <slrn95lq75.i3s.pino+comp_os_minix@mud.stack.nl> <93fssi$la1$1@news.ilstu.edu> <slrn95ooht.qhl.pino+comp_os_minix@mud.stack.nl> <Pine.LNX.3.96.1010110234351.5152D-100000@winnie.obuda.kando.hu> <93ku3p$6vn$1@news.ilstu.edu> <Pine.LNX.3.96.1010112014034.20983K-100000@winnie.obuda.kando.hu> <93m9kb$iv1$1@news.ilstu.edu> <AHA96.6641$3N1.143602@news2-win.server.ntlworld.com> <947ibd$1mbf$1@gavrilo.mtu.ru> <XMwa6.7729$vH6.117213@news6-win.server.ntlworld.com> <94kma3$cbu$1@nnrp1.deja.com>
Subject: Re: dumb question: do you fork()?
Lines: 370
X-Priority: 3
X-MSMail-Priority: Normal
X-Newsreader: Microsoft Outlook Express 5.50.4133.2400
X-MimeOLE: Produced By Microsoft MimeOLE V5.50.4133.2400
Message-ID: <5cyb6.708$Z%5.6132@news2-win.server.ntlworld.com>
Date: Wed, 24 Jan 2001 10:17:28 -0000
NNTP-Posting-Host: 213.104.140.67
X-Complaints-To: abuse@ntlworld.com
X-Trace: news2-win.server.ntlworld.com 980332417 213.104.140.67 (Wed, 24 Jan 2001 10:33:37 GMT)
NNTP-Posting-Date: Wed, 24 Jan 2001 10:33:37 GMT
Organization: ntlworld News Service
Path: news.adfa.edu.au!clarion.carno.net.au!news0.optus.net.au!news1.optus.net.au!optus!news.mel.connect.com.au!news-spur1.maxwell.syr.edu!news.maxwell.syr.edu!isdnet!newsfeed.online.be!news-raspail.gip.net!news.gsl.net!gip.net!news5-gui.server.ntli.net!ntli.net!news2-win.server.ntlworld.com.POSTED!not-for-mail
Xref: news.adfa.edu.au alt.os.development:1044 comp.os.minix:36637

eliason wrote:
> Beth:
>
> part 0
> ------
> i'm amazed that no one has simply come out and called
> you a freak!  credit to the community!  (oops - i just
> kinda did it, didn't i!!)

Actually, they have...not in so many words but as good as...lol :)

> part 1
> ------
> i think your basic philosophy is admirable.

Sheesh...again, it's the same thing..."you're a freak"/"you're wrong"/"you're
insane and twisted"/"you have no idea what you're talking about" and then,
all of a sudden, it's "actually, though, I agree with you in principle"...

Ummm...so is that admitting you're all freaks too? lmfao...just kidding...:)

Oh, I just don't get this...maybe I am just a dumb freak...because I
certainly don't understand this good cop/bad cop routine, which you all seem
to have absolutely no problem with...I'm confused...I openly admit it...

> especially from a simple mathematical point of view it is quite appealing!
> (see if i understand it rightly):
>
> program development should have two phases -
> (a)debugging/development,
> (b)deployment.
>
> (a)phase employs OS with strongly paranoid protection to allow
> developers to find violations (accidental!! no penis issues
> here...) of good behaviour rules.
>
> (b)phase employs lean, mean OS.  since application is now free
> of erroneous violations, it can run on this non-protected OS.

The general idea is something like that, yes...

> here's my question, and i'm addressing only the desktop
> computer market here: how do you implement a barrier between
> (a)phase and (b)phase in the real world?

Well, the "barrier" is somewhat natural...you have a machine (or machines)
with the non-lean-and-mean OS on it and all your development tools, etc...the
users have the lean/mean OS...so the "barrier" I suppose would be the final
agreed released product being published and distributed...much the same as
always...

> is it a legal thing?

I have no idea what you're referring to here...where does the law come into
this? Now I'm confused _and_ intrigued ;)

> what about development groups that do their best,
> but something slips through?  isn't there a real limit on
> how much time a group can spend generating test cases &
> debugging before release and revenue?

Aaaah...this is part of the problem...traditional "test cases" and
"debugging" are completely inadequate for what I'm talking about...they look
at one instance at a time...they cannot prove a correct program to be
correct, nor can they always prove an incorrect program to be
incorrect...yet this is what we currently entirely depend on to catch all
our bugs (excepting syntax errors and the basic stuff the compiler/assembler
might catch but, as was debated rigorously in CLAX, there are real limits
and potential headaches to tool-based bug catching of that sort)...

An entirely different style of "bug-catching" would need to be
employed...(Note: "bug-catching" not "debugging"...we don't let the bugs get
into the code in the first place..."prevention rather than cure"...as the
NHS (and other health authorities round the world) will testify to, this
philosophy is easier, cheaper, healthier, etc., etc.)

First off, debugging is an integral part of the development effort...it is
not something that's done afterwards...this is both to cut costs and to save
time, plus if you don't put bugs in then there _should_ be nothing to
"de-bug" (literal sense)...actually, I'm developing a tool (thanks to the
guys at CLAX, btw, for giving me the facts and opinions necessary to
fine-tune this tool's basic ideas to meet most people's preferences ;) that
does this in a very practical way...I would go into details but I'd like to
protect my interests, just in case (though, as everyone thinks I'm mad, then
it's unlikely that anyone will steal my insane ideas off me but you never
know)...

Although, I have discussed the mechanics of this tool with a few people in
CLAX and they seemed to at least like the theory of it all, even if they're
not going to personally use it...

> what about evil people who might implement errors on purpose?

Viruses? Well, no methodology, mine or the "untrusting" attitude, has stopped
or can stop these..._BUT_ it is unacceptable to punish innocent applications
continuously because of one or two viruses..."innocent until proven
guilty"...

And, in OS terms, I have a good solution to that...unlike modern OSes, that
just punish everything in sight and, as your necessary copy of McAfee and
all those constant virus updates attest to, do not actually catch the real
culprits in the slightest...I have something I think is personally better
than what you're getting now...but, again, I have to protect my interests
here...suffice to say, the OS design I had in mind does deal with this
issue...it's an integral part rather than an add-on utility to line
pockets...

> and for that matter: isn't it mathematically possible
> that for some programs there may be an INFINITE number
> of test cases?  how big is the number before it's
> infinite for all practical purposes?

Depends on your approach; For instance, if you want to go through each and
every test case by actually physically running the code for each and every
test case then, even for quite small programs, it might take a lifetime to
do this...

I could, for instance, run the following code with all 2^32 combinations of
bits in EAX:

cmp eax, 3
je EAXisThree

Or, we could look at not just the data for validation but at the code
itself...as should be obvious, if you know x86 ASM, we'll only jump to
"EAXisThree" when EAX=3; otherwise, we'll fall through...thus, by taking
a step back, thinking a level higher, we've reduced 2^32 individual physical
test runs into one _logical_ test run...
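A rough sketch of that "logical" reduction, in Python for illustration (the
function and its name are mine, just modelling the two instructions above),
using standard equivalence-class partitioning: one representative per branch
outcome, plus the values either side of 3, stands in for all 2^32 physical
runs:

```python
def eax_is_three(eax):
    """Python model of: cmp eax, 3 / je EAXisThree."""
    return eax == 3

# Brute force would mean 2**32 physical runs.  Reasoning about the
# branch condition instead partitions the input into two classes --
# "taken" ({3}) and "fall through" (everything else) -- so a handful
# of representatives, plus the boundaries 2 and 4, covers the logic:
cases = [(3, True), (2, False), (4, False), (0, False), (2**32 - 1, False)]
for eax, branch_taken in cases:
    assert eax_is_three(eax) is branch_taken
```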

This is a highly simplistic example, yes, but the principles hold and are
the perfect thing for a tool to generate for you to browse through...in
CLAX, a number of real world problems got thrown at me and I gave rough
ideas of how the tool would deal with them...although I don't think I
entirely convinced everyone, the basic philosophy is sound...it's making a
practical tool to implement this that's needed more than anything...

Btw, as I've said in another post, this is NOT a new idea...I've never
claimed this either...it's called "formal specification" (though the
traditional "formal specification" is far too impractical currently...it's a
bunch of very sound theory but with no tool support at all, because the
mathematicians who think it up can't program very well and the programmers
are all too hooked into _the_ development method to even take the time to
listen...as my posts here and in CLAX are nicely proving :)...

_BUT_ there is a massive divide between the OS hacker mindset and the "HLL"
mathematician mindset, which has not been bridged so far...and it is very
much to do with "trivial" things like a lack of tools, a resistance to
anything based on "nasty" mathematics, usual programmer arrogance (not a
snipe, guys...I'm as bad as the rest of you ;), a deep, but unfounded,
distrust of the notion of _logical_ execution above physical execution (i.e.
test runs as the only feasible method of catching bugs...even though such a
method is extremely hard to justify and, as your own experience of the vast
majority of commercial products should testify to (patches, bugs,
"features", etc., etc.), is far from miraculous at catching those bugs, is
it?), etc., etc.

In fact, the biggest remark made against these sorts of ideas is that "you
can only test a program by running it"...which is a bit short-sighted,
really...it's like an architect building hundreds of skyscrapers to see which
designs will fall down (sound ludicrous? this is exactly the way most
programs are designed/implemented _and_ then everyone goes on about the
costs...well, for goodness' sake, no wonder!! lol :)...

A nice little analogy I used with the guys in CLAX was of an architect
building a small bridge over a stream and then jumping up and down on it,
smashing it with a sledgehammer, completely obliterating it and then
building yet another bridge that is slightly better and repeating this until
he, theoretically, builds a bridge that doesn't break...again, the cost and
effort _is_ ludicrous...and, again, this is worryingly close to how things
_do_ get done right now...

> the legal idea is not a joke; note that the barrier
> in the automobile industry is implemented that way.  but
> that example is not the answer to the question because
> cars are all quite similar in their goals and requirements
> while software is RADICALLY more complex, varied, and
> profuse.

I still don't know what you mean by the legal idea...but it sounds very
intriguing...please enlighten me about this :)

And, yes, software can be much more complex, varied and profuse but the
basic principles here still apply...I mean, do we abandon our laws when the
population gets slightly bigger? do we use different physics with cottages
and with skyscrapers?

Of course not...and, anyway, modern cars aren't exactly a walk in the park,
you know...I worked for an automobile R&D company in Germany (and, man, do
they adore cars there...lol ;) and the project I was engaged in was a highly
complex embedded system...it basically ran _ALL_ the electronic systems
(from controlling injection in the engine to the cigarette lighter ;) _AND_,
this was the part I was engaged in, it featured, as a "trivial" part of the
entire system, a neural network (with eyes..lol :) controlling airbag
deployment (this should all be coming on the market soon, btw, if you have
the cash for it (and if they are still on the schedule they had when I was
working there :)...

But, be warned, I was on the airbag side of the project...so, if you
don't trust my philosophies and still think I'm insane, then you might not
want to purchase any cars with "intelligent airbags"(tm)...hehehehehe lol
;)...oh, just in case they lie on the ads (and don't be naive in thinking
companies don't do that), the success rate was 99.7% for the
deployment...which translates into 3 severe injuries or deaths per 1000
crashes...fills you with confidence, eh? (although, I say that jokingly,
because you learnt to detach or a bad test run would get you really
upset...every "wrong" on the test output logically translates to a life
destroyed, if not lost...it was not a nice thought)...

This was a commercial and a life-critical application...we used formal
specification methods and delivered it on time and well within budget on the
same timescale that would have been employed if it wasn't formally
specified...how did we manage this? Simple(-ish)...we adopted the philosophy
whole-heartedly and developed tools to complement the philosophy rather than
trying to fit awkward unhelpful tools to suit our methodology...

In fact, it was such a joy to work under this philosophy, that is _exactly_
why I'm re-developing (actually, extending too) the sorts of tools we used
in that project (unfortunately, the tools are all their property so I
couldn't just walk out of the plant with a disk in my hand but the
underlying principles of the idea are not patented (you can't patent boolean
logic or mathematics just yet ;) so I _can_ develop my own little version of
them :)...as I'm doing this alone, and in my spare time, it might take a
little while but I'll be sure to notify anyone who's interested in looking
into this when it's finished :)

> here's a second question: is the (b)phase OS really
> more efficient than the (a)phase OS?  i agree that the
> Intel 386 architecture protection often slows down
> context switching, but perhaps that's more of a hardware
> design/philosophy issue than OS issue?

It can be far more efficient all round; but it depends a lot on how you
implement it...if we're talking about taking NT and just removing the paging
from it or something daft like that, as everyone else seems to assume I mean
(but I don't...for the record: I DON'T WANT TO GO BACK TO DOS ;), then it's
not going to "win" you much and would be awkward as hell to implement...

You really have to start from square one with the new philosophy firmly in
mind...then you can see very real performance advantages...I mean, for some
inexplicable reason, this entire branch of this thread has been perverted
into talking about "x86 protection mechanisms", which was not what I was
actually talking about at all (I suppose, like I found in CLAX, everyone's
so used to technical discussion on a bit/byte level that the idea of talking
about something as abstract as "OS philosophy" was confusing them...lol
:)...

If you just mean avoiding those sorts of mechanisms, then you will, again,
gain very little but I was being a little more generic than that...a
properly trusted application would (*cringes knowing the torrent of abuse
I'm about to unleash*) be able to directly access OS structures...goodbye
most, if not all, API!!! The possibilities here are large and intriguing :)

The applications and OS are no longer battling for supremacy; they actually
work together to get the job done as quickly and efficiently as
possible...and, if you actually think that an OS and applications are
co-operating now, then I really advise you take a harder look at things...

As for it being a hardware issue, I would mostly agree, _if_ it was just the
protection mechanisms we were talking about...which is CPU-specific...ok,
lots of CPUs have this type of mechanism but they don't always work in an
identical fashion, do they? In fact, current design is far more hardware
dependent than the abstract ideas I'm presenting...

> ok, here's a thought for further discussion:  could
> a protection oriented shut-down mechanism in the (b)
> phase OS be one of the most practical ways to implement
> a barrier between (a)phase of software development and
> (b)phase?  because that's what we have right now...

Well, if you're satisfied with that sort of minimal support then I wouldn't
essentially disagree, as such, but there is a whole lot more that _could_ be
done...but, it seems, that the current philosophy is "line of _perceived_
least resistance" (bloatware/"planned obsolescence" mindset) so bugs are
tolerated (almost encouraged because the development lifecycle provides no
real inherent recognition even of the notion of bugs...the programmer is
entirely responsible for more or less everything _and_ debuggers and such
are really add-ons to the process, to plug the gap that the basic
development paradigm has seemingly forgotten to address) because the effort
sounds like too much (it's not, if done correctly, but, admittedly, it
_sounds_ like an immense effort...i.e. like your assumption above of
actually running a test case for every possible permutation..._that_ most
definitely would be ludicrous, no argument)...

> part 2
> ------
> i *HATE* code bloat more than anyone on this sub-
> thread so far!!!  why big drives!!!  why big RAM!!!!
> (unless you want to edit video, which makes big drives
> GREAT!)  why waste money!!!  but (fork()), memory protection,
> these are different issues from bloat.  bloat is corporations
> solving their business problems with low priority on space,
> which is where your issues about family economics and sexual
> dominance should really focus, i think.

Partial agreement; Bloat is caused by basing your attitudes in the wrong
place...if an OS was implemented as I've suggested, it would be exceedingly
leaner than current OSes by orders of magnitude (and, no, it would not
essentially lose features for this)...this is part of the deal...another
part is what you state...another part is too few competent
programmers/designers for the need the world has, so they draft in anyone
who can cobble together a few lines of code, whether that code is any good
or not (coupled into this is that, even today, computing is still an
immature field, so we haven't got our house in order yet)...another part is
the "have"s going headlong into computing and, indirectly, forcing the
"have-not"s to play catch up (i.e. the rich play the GHz game and the
hardware guys play the big money game and listen to them, rather than the
community at large...so, the hardware zooms along in leaps and bounds,
leaving anyone who can't upgrade at the drop of a hat in an unacceptable,
but very real, situation)...another part is...

Actually, I've noticed a general attitude in society, as we've moved over to
the "blame culture": people seem to now automatically assume one cause
for one effect, which is a bit of a ludicrous notion...anything of even
moderate complexity is interdependent with a hundred/thousand/million
different things...rooting out one cause is normally counter-productive
(after all, apparently, all evil in this world is down to Marilyn Manson and
Itchy and Scratchy...well, that's what some people seem to say...I think
"simplistic" is the term that pops to mind...not wrong but a bit
short-sighted)...

As for who hates code bloat most...then you have no idea how obsessed and,
yes, intolerant I am of lazy/arrogant design/programming...again, ask the
CLAX guys...I work on the notion that good quality is an immutable
attribute, the _real_ prize in the little games we play...most people think
"quality? what's that?" ;)

> (fork()) may have some really strong points and if you want it
> then the memory for it is well justified.

Well, as I said originally, I apologise for butting into this thread
unannounced and dragging it off on a tangent but I thought it was something
worth saying...I'm sure that fork() has many weird and wonderful
implications to it...I'll be glued to the thread to see what exactly you
mean...:)

> that's the topic of this NG thread - is (fork()) interesting?
> you have to prepare the child process's environment somehow.
> (fork()) lets you clone yourself, and (in the new clone) then
> prepare the new environment by changing the existing environment
> before you (exec()).  without (fork()) you'd have to list out
> in a data structure all the child's environment (or list the
> changes, or something), or give up the flexibility of having
> the environment preconfigured for the child process.  i think
> that's REALLY interesting!

Oh yeah...you've _got_ to love the theory behind fork()...I just don't agree
with the basic philosophies underpinning that lovely bit of theory...but I
am definitely interested and impressed with the idea of fork()...

If you understand me, fork() is a very good idea _if_ I believed the basic
philosophy underpinning the whole design but, unfortunately, I
don't...otherwise, I'd probably be blowing fork()'s trumpet here instead ;)
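To be fair to the theory, the clone-then-prepare pattern you describe can be
sketched in a few lines (Python for brevity; `spawn_with_env` and the shell
command are my own illustrative choices, and this is Unix-only, of course):

```python
import os

def spawn_with_env(prog, args, extra_env):
    """fork() a clone, adjust its environment in place, then exec()."""
    pid = os.fork()
    if pid == 0:
        # Child: we inherited the parent's entire environment "for
        # free"; change only the bits that should differ...
        os.environ.update(extra_env)
        os.execvp(prog, [prog] + args)  # does not return on success
    # Parent: wait for the child and report its exit status
    _, status = os.waitpid(pid, 0)
    return os.WEXITSTATUS(status)

# The child sees GREETING; the parent's environment is untouched:
rc = spawn_with_env("sh", ["-c", 'test "$GREETING" = hello'],
                    {"GREETING": "hello"})
```

...which is exactly your point: no data structure listing the child's
environment, just edits to a copy of the parent's...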

> and as i raised elsewhere in this thread, it might be interesting
> in the solution of some problems just to (fork()).  but that's
> where the threads versus processes issue begins!

Kuhl,
Beth :)

P.S. Thank you very, very much for taking the time to listen and formulate a
_real_ reply to my post...it's really, really appreciated after having to
trawl through another post, which was basically just a torrent of insults
and abuse...your kind and rational post has restored my faith and brought
back my little inner smile...I want to give you a big hug now...lol hehehehe
;)
