From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!bonnie.concordia.ca!uunet!decwrl!access.usask.ca!ccu.umanitoba.ca!zirdum Tue Apr  7 23:23:02 EDT 1992
Article 4799 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!bonnie.concordia.ca!uunet!decwrl!access.usask.ca!ccu.umanitoba.ca!zirdum
>From: zirdum@ccu.umanitoba.ca (Antun Zirdum)
Newsgroups: comp.ai.philosophy
Subject: RE: Systems Reply I (repost perhaps)
Summary: Another refutation of Searle's life works.
Keywords: AI Searle Dickhead Barf
Message-ID: <1992Mar29.083336.6608@ccu.umanitoba.ca>
Date: 29 Mar 92 08:33:36 GMT
Organization: University of Manitoba, Winnipeg, Canada
Lines: 243


>
>In article <1992Mar18.072634.9259@ccu.umanitoba.ca> zirdum@ccu.umanitoba.ca (Antun Zirdum) writes:
>>In article <6423@skye.ed.ac.uk> jeff@aiai.ed.ac.uk (Jeff Dalton) writes:
>>>In article <1992Mar12.001918.2564@ccu.umanitoba.ca> zirdum@ccu.umanitoba.ca (Antun Zirdum) writes:
>>>>In relation to the above, what would the ANTI AI crowd require
>>>>so that a symbol has a reference (semantics). If you do not
>>>>know what it means to have semantics, then how is it possible
>>>>for you to argue that something does not have it?
>>>
>>>You might start by showing how "cats" lines up with cats and
>>>not with cherries.  For which see the Putnam discussion now
>>>lost in the noise and perhaps abandoned.
>
>Decided to skip this part, eh?

All right, I will play by your rules! I do not see any problem
with computers lining up "cats" with cats, just as they can
line up a subdirectory with the name of that subdirectory!
As you know, the name of a symbol is not the symbol itself in the
computer's memory. How does a computer line up a variable's
name with the actual variable? I believe, unless you can
show otherwise, that the problem of computers lining up
symbols with things is the same as that of people lining up
symbols! Indeed, I am not aware of any other "methods" of
*lining up*. If you know of different methods of *lining up*,
please divulge them.
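To make the point concrete, here is a toy sketch in Python (my own
invention; the names and data mean nothing outside this example) of
the ordinary mechanism by which a computer lines up a name with the
thing it denotes, a symbol table:

```python
# A symbol table maps a symbol's *name* to the thing the name
# denotes. This is how interpreters "line up" a variable's name
# with the actual variable in memory.
symbol_table = {}

def bind(name, value):
    """Associate a name with the value it denotes."""
    symbol_table[name] = value

def lookup(name):
    """Follow the name back to the thing it names."""
    return symbol_table[name]

bind("cats", ["Felix", "Tom"])   # the name "cats" now denotes this list
print(lookup("cats"))            # -> ['Felix', 'Tom']
```

Of course this only pushes the question back a level, but it shows
that "lining up" a name with a referent is a perfectly mechanical
operation.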

>
>>what is that *thing* that is required for understanding?????
>
>That is, unless I can tell you how human understanding works
>in detail, machines must have it too?  I'm sorry, but it just
>doesn't follow.
>
>(BTW, it should be obvious that more than behavior is involved.
>Do you just behave, or do you have thoughts too?)

My thoughts are an integral part and parcel of my
behaviour; just because they are not externally visible
does not mean that they are not behaviour! I think that
you misunderstood my point: just because we cannot
explain how it is done in humans, does that mean that
it cannot be duplicated by a physical system?
	It is your argument that does not follow!
If you came upon a man floating through the air, would
you assume that he is a witch, or would you first think
that the phenomenon can be explained?
Now I am not saying that understanding has already
been duplicated by machines; I am arguing that there
is no reason that it cannot be!

>
>>>We know humans can do it.  And there are arguments that machines
>>>can't (just by running the right program).  The correctness of
>>>those arguments does not depend on knowing how humans do it.
>>>
>>All of those arguments boil down to this "machines can't because
>>they are not people!"
>
>I'm sorry that you can't see anything more in them than that.
>
>If you boil them down, what you ought to come up with is that
>since people can understand and machines can't, people are not
>machines.  (Insert usual qualifications about understand merely
>by running the right program.)
>
>Quite simply, the arguments are _not_: machines are not people,
>therefore they do not understand.
>
I beg to differ, as evidenced by all those recent arguments
on the net about intentionality and agency, which very
conveniently disqualify certain things from having these
properties by mere hand-waving!

>Here's a similar case.  Suppose there's an argument that a particular
>kind of airship is too heavy (or wrongly shaped) to fly.  This is
>_not_ an argument that it is not a bird and hence it can't fly.

In this case you have made specific observations as to
why flying is impossible; your observations may be
right or wrong, but they are testable! No such luck
in your argument about intelligence and understanding!
>
>>What if I showed you that people are machines?
>
>Depends on just what you showed.  But look at Searle's remarks
>on "meat machines" first.  Indeed, I recommend (again) the first
>Reith Lecture, reprinted as chapter 1 of his _Minds, Machines,
>and Programs_ (or some similar title).
>
Nothing that Searle says proves that people are anything
other than machines! Granted, machines with extraordinary
capabilities, but still machines!
>>>>Take the word 'BLUE' (color) what does it mean to say that you know
>>>>what 'blue' means? Does this mean that you have something in your
>>>>brain that is blue,
>>>
>>>Are you serious?
>>>
>>Dead serious!
>
>Really?  You think the "something blue in your head" is a reasonable
>suggestion?
>
Exactly: it is a ludicrous suggestion! How, then, are we
able to attach meaning to the word BLUE? You are right about one
thing: we do not need to know exactly how it is done to be able to
duplicate it. (I suspect that the neural pathways in different
people are not even similar. But in all those people, if we cut
the pathways, they would lose the meaning of blue.)

Now a little step further! What are all those pathways for?
Well, let us look at the origins of color. (Keep in mind that
I am not a neurosurgeon.) A light ray enters your eye; it is of
a certain energy level and it excites your "blue" cones. From
there a nerve is triggered, and it fires others along its path.
The signal comes into the brain, and a mess of firings results in
your understanding that you are looking at a blue object!
	Understandably the details are a little sparse, but I will
argue they are not necessary for what I am about to say.
Nowhere along the path is there any place where we can say
that we are dealing with meaning!!! No matter how much
detail we go into, we always come back to the fact that meaning
is not in the details; you have to look at the big picture to
understand what the light does to the system.
	So, whoever said "God is in the details" was wrong;
God is actually in the Whole!
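The chain above can be sketched as a crude pipeline (my own toy
model; the numbers and thresholds are invented for illustration and
have no physiological standing). Notice that no single stage
"contains" the meaning of BLUE; the label emerges only from the
whole chain:

```python
# Toy model of the "blue" pathway: wavelength -> cone response
# -> nerve firing -> reported label. No stage holds the meaning.
def cone_response(wavelength_nm):
    # Pretend the "blue" cones respond most strongly near 440 nm.
    return max(0.0, 1.0 - abs(wavelength_nm - 440) / 100.0)

def nerve_firing(response):
    # The nerve merely fires when the response is strong enough.
    return response > 0.5

def label(firing):
    # Only the system as a whole ends up saying "blue".
    return "blue" if firing else "not blue"

print(label(nerve_firing(cone_response(470))))   # bluish light -> "blue"
print(label(nerve_firing(cone_response(650))))   # reddish light -> "not blue"
```

Each function in isolation is just arithmetic; the "meaning" is in
what the whole pipeline does.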

>So I will leave it to you to present the definitions.  If you
>think this is unsatisfactory, I suppose that's understandable,
>but I don't think there's any hope of us agreeing in any case.
>
Sorry, but any definitions that I present will be too restrictive
for you. They would not allow you to get away with imprecise
things like intentionality!
>-- jd


Newsgroups: comp.ai.philosophy
Subject: Re: Language as Technology: A Phenomenological Study
References: <1992Mar26.003003.20515@a.cs.okstate.edu> <1992Mar26.134711.10708@mp.cs.niu.edu> <1992Mar26.223702.28641@a.cs.okstate.edu>
Organization: University of Manitoba, Winnipeg, Manitoba, Canada

In article <1992Mar26.223702.28641@a.cs.okstate.edu> onstott@a.cs.okstate.edu (ONSTOTT CHARLES OR) writes:
>In article <1992Mar26.134711.10708@mp.cs.niu.edu> rickert@mp.cs.niu.edu (Neil Rickert) writes:
>>
>>  Of course I can make such a statement.  As I already said, language is
>>far from the whole story, so there is no contradiction here.  I did not
>>claim that a bird's intelligence problem is purely a lack of adequate
>>language; only that the inadequate analog language does not enhance
>>intelligence the same way that digital language enhances human
>>intelligence.
>  Ah, but from my story, language does not enhance intelligence.  Language
>is only a product of intelligence.  If anything, language increases the
>number of interactions and the amount of knowledge.  But language itself
>has no impact on intelligence.  It is intelligence that generates language.
>
Gentlemen, I think that we all start to flounder when we talk
of intelligence as a single monolithic entity, when in reality
there are many different kinds of intelligence.
	I would go so far as to say that there is a different
kind of intelligence for each type of sense/IO we have. There
is intelligence required in the use of your eyes/nose/ears/speech.
If you lost any one of these abilities tomorrow, I doubt that
the loss would have much of an impact on your other
'intelligences'. This is not only my view of the matter; it is
documented for accident and stroke victims. Although they remain
as capable as ever in many of their abilities, they become quite
'unintelligent' in the one area that is affected!

So, both of you are 100% right. Language does and does not
have something to do with intelligence; it really matters
which abilities you are talking about.
>>
>>
>> Sure, thought is possible without language.  But such non-linguistic thought
>>is quite limited in comparison to thought using language.  My assertion
>>"language has digitized humans" was an overstatement, which I acknowledged
>>at the time by prefixing it with "in effect".  But the effects of the digital
>>technology of language extend much further than to merely a digital output.
> By talking about digital effects, are you refering to the tendency for
>language to think in contrasts?  Like good/evil, rich/poor, etc?
>
When you think 'PAIN', what do you desire? 'Lack of pain', right?
When you see dark, what do you want? Light!
I do not think that language has digitized us. We use a digital
language because that is the way we think. We have naturally
chosen the best-fitting means of communication, not because it
was available (it wasn't; we invented it) but because
it was needed!
>>
>>  No.  Language really is digital.  Take your analog volt meter.  No matter
>>how hard you stare at it, you will have trouble getting more than two to
>>three digits of precision from it.  But, unlike analog technology, digital
>>technlogy allows precision to be arbitrarily extended.  And language has
>>this same property.  If we think it too limiting to talk about a forest,
>>we can coin words for trees.  If we need still more precision, we coin
>>words for tree trunks and branches.  For more we add words for leaves.
>>Next we add words for leaflets; next veins and stomata within the leaves.
>>In principle we can extend precision arbitrarily.
>>
> Ah-ha!  I now see what you mean.  Yes, I can agree that language is digital.
>(Sometimes, you just have to pound this stuff in my head.)  I am, however,
>still in disagreement that the digital property of language impacts
>human thinking is such a way that humans only think digitally.  As I have
>stated, intelligence and thinking are prior to language.  Intelligence
>generates thinking and the technology to communicate it.  The technology
>is in language.
>
While I agree with you completely, as far as you go, I cannot
help but think that the situation is much more complex than that.
I am sure that you are familiar with the studies done on
feral children. The studies indicate that if language is
not learned at an early age, intelligence is greatly limited
forever. This seems to indicate that intelligence and language
work in a bootstrap fashion (each pulling the other up by its
bootstraps). Thus you do need a certain amount of intelligence
(not related to language) to get language up and
running; once that is done, language can then extend intelligence
to a much greater degree.
(If you think about how it works in children it will make
much more sense.)
-- 
*****************************************************************
*   AZ    -- zirdum@ccu.umanitoba.ca                            *
*     " The first hundred years are the hardest! " - W. Mizner  *
*****************************************************************


