Newsgroups: alt.os.multics,alt.sys.pdp10,alt.folklore.computers,comp.lang.lisp
Path: cantaloupe.srv.cs.cmu.edu!rochester!udel!news.mathworks.com!uunet!gatech!howland.reston.ans.net!ix.netcom.com!netcom.com!vsocci
From: vsocci@netcom.com (Vance Socci)
Subject: Re: Retro-Computing!
Message-ID: <vsocciD6H5wI.KLt@netcom.com>
Sender: vsocci@netcom21.netcom.com
Organization: Xenos
X-Newsreader: News Xpress Version 1.0 Beta #2.1
References: <3ledga$rcr@news2.delphi.com> <3005720917.911848@naggum.no> <vsocciD6DErH.ADz@netcom.com> <3lmivt$ao@usenet.rpi.edu>
Date: Mon, 3 Apr 1995 16:56:23 GMT
Lines: 147

First off, sorry for the line-wrap.  Excuse: 800x600 screen,
proportional fonts, bad message editor in news client.

wilsonj@alum01.its.rpi.edu (John Wilson) wrote:
>In article <vsocciD6DErH.ADz@netcom.com>,
>Vance Socci <vsocci@netcom.com> wrote:
>>But I will say something about the problems with command line interfaces.
>>
>>First, realize that a lot of our way of interfacing to computers has been
>>limited by our
>>previous methodologies: linear writing with stone and chisel and the
>>successors to that
>>technology, pens and paper. Then, one day, someone stuck some typeface on
>>piano hammers
>>and invented the typewriter.
>[and so on]
>

I don't think procedural programming is bad; I think organization is the
essence of intelligence.

However, there are lots of things that don't lend themselves well to the
linear approach. Especially tasks that are inherently multi-threaded
and/or non-linear.

We'll never be able to make real progress in artificial intelligence as
long as we limit it to "artificial linear rationality", for example. The
intuitive side of human cognition is not necessarily linear. The fact
that our physical structures constrain our interpersonal communication
to a linear channel may tend to limit our thinking, but this is not
necessarily a good thing in general.

Speech is a good example - in-person speech actually carries several
elements besides the words. The frequency and inflection placed upon
the words is one more dimension, the body language accompanying the
communication is another, and the context in which the speech is heard
is yet another. The human mind seemingly assembles all these elements
into an almost instantaneous understanding of the communication in what
has to be a *non-linear* process.  If you study the little we know
about the brain, you'll find that it operates mostly in a non-linear
fashion - lots of substructures operating simultaneously on the same
data: edge detectors, level detectors, and so on.

Here's a practical example of another approach relevant to
alt.sys.pdp10: the command parser for DDT-10, written by Tom Eggers.
Rather than collect the characters into a buffer and do some kind of
linear search after receiving a delimiter, the parser in a sense runs
multiple threads. The first thread to reach a terminus ends the
parsing and activates the desired function. This allows the work that
would normally be done all at once at the end to be distributed
throughout the user's type-in. On a slow machine, this could
significantly speed up the response at the end of the command.

This is not a typical linear procedural approach to the problem - more
reminiscent of a real AI approach. The code literally simulates a set of
entities simultaneously looking at the typein as it progresses.
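Here's a minimal sketch of that racing-candidates idea in Python (my own
illustration, not Eggers's actual DDT-10 code; the command table is
hypothetical). Every command starts as a live candidate, each keystroke
advances or kills candidates, and the first one to reach its terminus
fires immediately - no buffer scan at the end:

```python
# Hypothetical command table: name -> action to run on a complete match.
COMMANDS = {
    "deposit": lambda: "deposit!",
    "examine": lambda: "examine!",
    "exit":    lambda: "exit!",
}

def parse_incrementally(keystrokes):
    """Advance all candidate 'threads' one character at a time.
    The first candidate to reach its end wins and its action runs."""
    live = {name: 0 for name in COMMANDS}   # name -> chars matched so far
    for ch in keystrokes:
        for name in list(live):
            pos = live[name]
            if pos < len(name) and name[pos] == ch:
                live[name] = pos + 1
                if live[name] == len(name):   # terminus reached: done
                    return COMMANDS[name]()
            else:
                del live[name]                # this "thread" dies
    return None                               # incomplete or no match

print(parse_incrementally("exit"))   # exit!
```

Note that after typing "ex" both "examine" and "exit" are still alive;
the parse resolves itself character by character instead of in one lump
at the end.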

By the way, this is not easy to express in linear languages like von
Neumann machine languages. In a graphical language, it would be trivial
to express.

Another example would be the type-ahead case someone brought up
earlier. Rather than type ahead and assume that the earlier commands
properly yielded their results, a graphical command language could
create a separate window/icon for every operation, and one could wire
the results into the next "command" window/icon without having to
wonder whether the whole process was going to collapse due to an
intervening error. If an error did occur in the chain, you'd still have
the structure of your task sitting on your screen; you'd just correct
what went wrong and start the whole thing up again. (I'm imagining a
sort of dataflow paradigm here.) If you like, you can select the whole
structure and represent it as one graphical object, to be used again
and again in the future.
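A toy version of that dataflow idea (again my own illustration, not any
real shell): each node holds an operation, wires pull results from
upstream nodes, and if one node is wrong you repair it in place and
re-run the same structure without retyping the chain:

```python
class Node:
    """One 'command window' in the dataflow graph."""
    def __init__(self, name, func, *inputs):
        self.name, self.func, self.inputs = name, func, inputs
        self.result = None

    def run(self):
        args = [n.run() for n in self.inputs]   # pull results upstream
        self.result = self.func(*args)
        return self.result

# Wire up a chain: list -> sort -> report
src = Node("list",   lambda: [3, 1, 2])
srt = Node("sort",   lambda xs: sorted(xs), src)
rpt = Node("report", lambda xs: ",".join(map(str, xs)), srt)

print(rpt.run())     # 1,2,3

# Suppose "sort" was wrong: fix just that node and re-run the
# surviving structure - the wiring never went away.
srt.func = lambda xs: sorted(xs, reverse=True)
print(rpt.run())     # 3,2,1
```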

Complex mathematical calculations are another application that at first
seems naturally linear, but upon closer inspection turns out only to
look that way because we've been trained to think that way. If you write
a program to compute (x*y)+(x^2*y^2)+(x^3*y^3)+..., most compilers would
probably not be smart enough to accumulate the powers of x and y
incrementally for the multiplications, and, if multiple processors were
available, would not be smart enough to ship the partial results off to
be summed somewhere else.

It gets worse when we have successive statements with formulae that do
not depend on each other's results until much later in the program.
Linear programming languages deprive the solution engine of any
non-linear aspects of the hardware that may be present, and make the
compiler's job arbitrarily difficult. If you think about it, a lot of
compiler optimization is a sort of de-linearize/re-linearize operation
to look at the given expression another way.  In the example above, if
the author wanted to multiply successive powers of x and y and sum
them, they could express this in a more fundamental way: create two
power generators, one for x and one for y, wire them into a multiply
engine, and then wire the successive outputs of the multiply engine
into a sum engine.

This may not be an ideal example, but should serve to illustrate my point.
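The wiring can be sketched with generators (my own illustration of the
structure, not a claim about how any compiler works). Each power
generator accumulates its own powers incrementally, the multiply engine
zips the two streams, and the sum engine consumes as many terms as you
like; nothing in the structure forces one fixed sequential order:

```python
from itertools import islice

def powers(base):
    """Power generator: base, base^2, base^3, ...
    Each power is built from the last, never recomputed from scratch."""
    p = base
    while True:
        yield p
        p *= base

def multiply_engine(xs, ys):
    """Pairwise products of two streams: x^n * y^n."""
    for a, b in zip(xs, ys):
        yield a * b

def sum_engine(terms, n):
    """Accumulate the first n terms of a stream."""
    return sum(islice(terms, n))

# First three terms with x=2, y=3: 6 + 36 + 216
print(sum_engine(multiply_engine(powers(2), powers(3)), 3))  # 258
```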

Another question: if typewriter languages are so superior, why did
mankind invent illustrations and diagrams to communicate concepts more
clearly? Why do hardware designers draw schematics rather than using
TECO to generate wirelists directly?

Granted, graphical interfacing is in its infancy and suffers from the
kind of stagnation that mass commercialization brings to any field.
But the possibilities for expanding the scope of computing are great,
and we shouldn't miss out on them by being overly committed to linear
thinking or programming.

For those tasks that really are best described linearly, multi-threaded
environments are perfectly capable of expressing linearity.

>Somehow it seems like one mechanic saying to another, "so she's pinging is
>she?  Well just hook up your timing light, loosen the distributor clamp and,
>no wait -- let me do this interpretive dance to explain it to you!"

Ever been through this linear procedure?
	1) Car won't start
	2) Take to mechanic; replaces the starter solenoid.
	3) Still won't start; mechanic replaces the battery.
	4) Still won't start; mechanic replaces the coil, distributor
	   cap, ignition wires, igniter module
	5) Still won't start; mechanic replaces the starter (throws old
	   solenoid away)
	6) Still won't start; mechanic finally replaces the starter
	   switch on the steering column
	7) Car now starts
	8) Owner gets bill for ALL the aforementioned parts totalling
	   $500. Switch cost $40.
	9) Disgruntled owner writes check out for $500; makes linear
	   entry into checkbook for same.
	10) next customer with car that won't start; goto step 1.

I guess linear procedures do work well for *some* people!


- Vance

/=======================================\
|    Vance Socci   vsocci@netcom.com	|
| "The worst secrets are those we keep	|
|   from ourselves . . ."		|
| "I am not a number; I am a free man!"	|
\=======================================/
