Reply-To: "Beth" <BethStone21@hotmail.com>
From: "Beth" <BethStone21@hotmail.com>
Newsgroups: alt.os.development,comp.os.minix
References: <nlig5tkfkjn3freuarvkglrf3fds1gl7rt@4ax.com>   <MPG.14c3b89d9332d8d798984c@news.direcpc.com> <slrn95ked9.3at.pino+comp_os_minix@mud.stack.nl> <MPG.14c3f049f50eae198984f@news.direcpc.com> <slrn95lq75.i3s.pino+comp_os_minix@mud.stack.nl> <93fssi$la1$1@news.ilstu.edu> <slrn95ooht.qhl.pino+comp_os_minix@mud.stack.nl> <Pine.LNX.3.96.1010110234351.5152D-100000@winnie.obuda.kando.hu> <93ku3p$6vn$1@news.ilstu.edu> <Pine.LNX.3.96.1010112014034.20983K-100000@winnie.obuda.kando.hu> <93m9kb$iv1$1@news.ilstu.edu>
Subject: Re: dumb question: do you fork()?
Lines: 197
X-Priority: 3
X-MSMail-Priority: Normal
X-Newsreader: Microsoft Outlook Express 5.50.4133.2400
X-MimeOLE: Produced By Microsoft MimeOLE V5.50.4133.2400
Message-ID: <AHA96.6641$3N1.143602@news2-win.server.ntlworld.com>
Date: Thu, 18 Jan 2001 11:33:12 -0000
NNTP-Posting-Host: 213.104.142.160
X-Complaints-To: abuse@ntlworld.com
X-Trace: news2-win.server.ntlworld.com 979818336 213.104.142.160 (Thu, 18 Jan 2001 11:45:36 GMT)
NNTP-Posting-Date: Thu, 18 Jan 2001 11:45:36 GMT
Organization: ntlworld News Service
Path: news.adfa.edu.au!clarion.carno.net.au!news0.optus.net.au!news1.optus.net.au!optus!news.mel.connect.com.au!news-spur1.maxwell.syr.edu!news.maxwell.syr.edu!btnet-peer!btnet-peer0!btnet!news5-gui.server.ntli.net!ntli.net!news2-win.server.ntlworld.com.POSTED!not-for-mail
Xref: news.adfa.edu.au alt.os.development:1019 comp.os.minix:36580

You'll have to pardon my jumping in mid-way through this debate, and probably
spouting some irrelevance with regard to the current direction of this
thread, but, to my mind, there is an important higher-level consideration
that the fork() call demonstrates...

Namely, is an operating system trusting or untrusting of its application
programs? This partially contributes to, but is independent of, whether
the OS is co-operative or pre-emptive...

Established convention is that an OS is entirely untrusting of its
application programs...all modern OSes demonstrate this paranoia to the
extreme...applications are effectively completely strangled off from
anything but the OS's (hopefully) carefully designed API...

The apex of this inefficiency is clearly demonstrated by GDI (even Micro$oft
aren't very proud of it, are they?)...on the same hardware that can
(ignoring hardware acceleration for a moment, to avoid distraction) generate
high-quality complex 3d scenes at very smooth and high frame rates, GDI can
visibly be seen to draw masks around icons and can take, even on high-end
machines, a second or so to refresh the desktop...and for what is this price
being paid? To stop an application accidentally drawing a pixel or two
outside its designated area...which, btw, these interfaces do stop from
appearing...but the application is still wasting its time _trying_ to write
outside its designated area...

A properly designed, coded and debugged application will never try to do
such a thing...these protections only serve to tolerate and propagate bad
applications...but _all_ applications pay the price continuously without
fail...

This attitude has great efficiency and performance disadvantages to say the
least... everything must be checked and double-checked from the application,
all the way down to the device drivers...data can potentially be needlessly
cloned from one address space to another...data may even be transformed from
a suitable format to a "standardised" format and transformed back
again...the possibilities are endless and none of them remotely good...

All this is to cater for the (hopefully rare) possibility of a rogue
program...either a malicious virus-like program or an unintentional bug in a
legitimate program...these should be the rare exceptions and not be treated
as the rule...

This returns to the fundamental "shoot yourself in the foot" argument, with
regard to unintentional bugs (and if you're feeling harsh, then it can also
be applied to malicious viruses too :)...is it really acceptable to punish
perfectly legitimate programs, that do not have AWOL pointers, with constant
"Big Brother" checks? Maybe, now that programmers have reached a level of
maturity to be considered worthy of taking up the responsibility, a faulty
program should be allowed to bring a system down...such programs will not be
tolerated for long...and in order to stay afloat in the industry, and even
indirectly in society, programmers will need to smarten up their act...

[ Note that device-independence can still easily be achieved through
different means...you can still base your OS on device drivers...you can
still use standards...do not get confused by the usual convention of
bundling device-independence with untrusting protection...they are perfectly
independent ]

This "pampering" does nothing for system efficiency in the long run...it
unintentionally condones programs with AWOL pointers, as the OS will trap
these problems...the program and the programmer can freely ignore these
bugs...as long as it generally works most times, then that's fine...

Also, if the OS ever lets slip any one of these protections then a program
that worked admirably before, can potentially bring the whole system down in
no time...once you institute these laws, they cannot be easily revoked...

Why not use the hardware correctly? Exceptions are exactly that - rare
exceptions - and not the rule...they should generally never occur and should
not form the basis of OS design...let's not quibble over this, bugs are
purely and simply ERRORS...they are WRONG...they should not be brushed under
the carpet...let's put our house in order, and vacuum up these errors before
we get the visitors round...the hardware traps can spot these ERRORs for
us...an exception handler has a very legitimate place in a debugger...in an
OS, it should be wholly unnecessary by this time of development...it is the
last resort, not the first thing to completely depend upon...

Excepting intentional exceptions (i.e. page fault for virtual memory
management, say), then an application that is capable, in at least one
situation, of generating an exception is _WRONG_...it has _ERRORs_...you can
fool yourself and everyone else all you like and get the CPU to pretend all
is well...but, deep down, you know your program has mistakes...

How this relates to fork() is that, under this "modern" OS convention, it
must entirely clone the existing process, even though it is very common that
the very next call will simply load a different new process in its
place...fork() is merely being used as a means to start a new process...the
resources that can potentially be wasted are phenomenal...and these
resources are wasted purely for misplaced philosophical ideals...
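[ For concreteness, here is a minimal sketch (my own, using standard POSIX
calls; `spawn_and_wait` is just a name I've picked) of the idiom in
question: the whole address space is logically duplicated by fork(), only
for the copy to be thrown away an instruction later by exec(). Modern
kernels soften the cost with copy-on-write pages, and interfaces like
vfork() or posix_spawn() avoid the duplication altogether ]

```c
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Start `path` the classic Unix way: fork() the whole process, then
 * immediately replace the child's image with exec().
 * Returns the child's exit status, or -1 on error. */
int spawn_and_wait(const char *path)
{
    pid_t pid = fork();                  /* duplicate the caller... */
    if (pid < 0)
        return -1;
    if (pid == 0) {
        execl(path, path, (char *)NULL); /* ...and throw the copy away */
        _exit(127);                      /* reached only if exec fails */
    }
    int status;
    if (waitpid(pid, &status, 0) < 0)
        return -1;
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```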

And, once, long ago, maybe you could have brushed over your mistakes by
demanding double the memory, twice the MHz...but computers are no longer the
playthings of the rich, the tinkerings of a hobbyist...they are the
fundamental cornerstones of our modern society, for good or for ill, whether
we like it or not...

Children are pretty much completely required to use computers as an integral
part of their education...when you brush over your mistakes by asking for
more memory or a CD-ROM drive...you are indirectly demanding their parents
to work overtime...asking them to potentially lose time with their maturing
children...you are risking alienating and throwing a child's future away, if
the price cannot be met...goodness only knows the "what ifs" you're playing
with there...

In business, your decision to need an extra few megabytes of RAM may mean
that hundreds or thousands of their machines may need to be upgraded...such
an investment could easily make or break small companies, that have to
comply with your upgrades or face becoming obsolete...but don't be confused,
this will declare a business and all its dependents obsolete too...a wrong
decision as to which standard to follow may be irreversible...and, let's not
get carried away with city-thinking, a small business can be the cornerstone
of a rural community...

Yet another OS or yet another standard could require entire libraries to be
"converted"...not that they weren't perfectly well encoded as they were...but
if you want the continued hardware and software support, you've got to play
the stupid power games...that hollow and, quite frankly, pathetic "penis
extension" machismo...please, just save that for a fight in the local pub or
something, if you've got to do it at all...it was never that attractive and
we stopped the hunter/gatherer thing long back, so it's also quite, quite
useless...

Sometimes the expense will simply just be too great...and the wisdom of
these libraries will be lost forever...lives devoted to accumulating
knowledge for all our benefits, just wasted...Hitler would be so
proud...books "burned", thoughts destroyed, history erased...all for the
sake of a misplaced philosophy...namely, "let someone else do it"...sorry,
there simply isn't anyone else with our qualification _to_ do it...

I hate to be the bearer of bad tidings...but don't shoot the messenger for
the message...well, you can shoot me down, if you really insist...but it
won't do any good in the long run, save to massage a male pride or two (if
bullying a woman gets you your kicks)...you won't escape these new
responsibilities...as Uncle Ben was fond of saying, "with great power comes
great responsibility"...this is not a word of a lie...;)

The environment we're playing in has _drastically_ changed, folks...sorry,
but it's time to grow up and face our new responsibilities...playtime is
_most definitely_ over...

Moreover, modern OSes usurp more and more of the responsibilities...and they
become horrible monolithic legacy-filled beasts, that your application
program simply instructs...again, efficiency is completely ignored as you
must fit your application responsibilities around some distant "sounds good"
standard, rather than around the real problem at hand...

Simply, pampering your children will just result in spoilt brats...and if we
never learn to stand on our own two feet, then we never can...the bird has
got to make that leap and pray that s/he can fly...it's cruel...but that's
the nature of things...

There is a marked and crucial difference between protecting a child from
harm (debugging to the nth degree, exception handling, defensive
programming, etc.) and wrapping it in so much cotton-wool that it's liable
to suffocate...removing all possibility of doing the system any harm (by
removing the possibility of doing almost anything) but at the expense of
losing the ability to do pretty much anything useful at any level of
decent efficiency...i.e. always getting a "grown-up" to use those nasty
scissors or to even handle the paper (well, paper does have a sharp edge,
no? :)...

Guess which path we seem to be heading down...and guess why we're dominated
by bloatware and monopolies we can't break...left, right and centre...the
OSes and PLs are pampering and pandering to each and every whim...our
"hobbyist" attitudes have led to this, we still haven't grown up...

Let's abolish this unnecessary slavery...for, in the long run, everyone's
sake...

More than what I seem ;),
Beth :)

"Tim Hockin" <thockin@isunix.it.ilstu.edu> wrote in message
news:93m9kb$iv1$1@news.ilstu.edu...
> In alt.os.development Kovacs Viktor Peter <vps@winnie.obuda.kando.hu>
wrote:
> :> wrong.  User mode threads are just lightweight processes - scheduled by
the
> :> OS.
>
> :  Those are kernel threads...
>
> let's agree to disagree both on terms and on philosophy.  I feel
> comfortable saying there should be no such thing as a user-mode scheduler.
>
> :  Each process gets a fixed percent relative to other processes.
>
> each process gets as much as it needs, to the point at which there is no
> free CPU time, at which point the scheduler does what it can to balance.
>
>
> --
> As long as you're a man, you're what the world will make of you.
> Whereas if you're a woman,
> You're only what it seems.
> --Stephen Sondheim - "Passion"


