October 21, 1999
Hi Everyone!
As I already said, today was a special day.....there were several nice
things that happened. One of the special events was the "back and forth"
messages between Joe Martin, linguist, and Lourdes Tollette, who is Deaf
with native signing children. Michelle Lovinas and I were also reading
these messages, and they started becoming so interesting that I asked
permission to post them, and I did receive that permission :-)
It all started when Joe posted a message called "linguistish shtuff" on
the SignWriting List awhile back. The message was very interesting but was
written in linguistic terminology. Then Lourdes wanted to know what it
meant, and so did Michelle!
So Joe was kind enough to take several hours to try to explain it in
simpler language. He did an excellent job! Here is what he sent to us first:
~~~~~~~~~~~~
Date: Wed, 20 Oct 1999 21:02:10 -0700 (PDT)
From: Joe Martin
To: SignWriting List
cc: valerie sutton
Subject: translation
----- I'm sorry; I thought everyone (except other linguists) would just
skip over this stuff. I underestimated people. I will try to explain....
(not so easy though-- Whew!)
I said in the original message:
>> Spent hours last night reading your posts. (^_^) Curious now.......
>> evidence seems to indicate phonological recoding used as a reading
>> strategy by congenitally deaf persons, even though they have no way to
>> access the aural phonetic content;
MEANS: Hearing people talk to themselves when they read. Not out loud.
But when they see the letter, in their mind they "say" it. So they have a
little
inside voice saying all the words. The letters are a code--each letter
stands for a sound. The sounds are a code too; they stand for other
things. Phonology is the way the parts of the code go together. So when
hearing people read, they change the letter code into the sound code.
We call it phonological recoding (because we want to sound like know-it-
alls ;-) Deaf people would not know the sound code, so they couldn't do
this. But still, experiments seem to show that somehow they do this
anyway. (?!)
>> there seems no way to investigate empirically--until now, using
>> Signwriting.
MEANS: There are three kinds of phonology--the sound code, the alphabet
code, and the Sign code--the way the parts of sign language go together.
Maybe deaf people have a little inner person signing to them, instead of
speaking to them--but how could you ever tell? There's no way; since
written letters stand for sounds, any experiment that uses writing depends
on those sounds. Except--with signwriting, the person could see the
word and make mental pictures of the Signs. They could do the "recoding"
with mental images of the signs, and we'd know that sound had nothing to
do with it.
> That's what I mean, and that is why I am so excited about SignWriting.
>> With the phonetic information
>> encoded in pictogens (?) or graphemes being in the visual modality, it
>> seems logical to map visual Phonetic Form onto meaning, without any
>> necessity to involve auditory processing at all.
MEANS: not much--just me showing off big words. Graphemes are the smallest
parts of a writing system; the little lines and circles. Don't feel bad if
you didn't know that--Pictogens is a word I just learned from Fernando's
last post. It means the smallest parts in a picture. Oh, and phonetics is
about the smallest parts of the sound code. ;-)
the visual modality = seeing (instead of hearing). hearing = auditory
processing. We linguists are starting to sound like real stuffed shirts,
huh? That paragraph says: "you wouldn't have to know any sounds to read
it."
>> With this established (has it been?) it would seem possible to
>> determine if in fact the congenitally deaf are trying to do something
>> analogous using the graphemes of alphabetic writing--- and since that
>> system is designed around sound, the mapping would leave terrible gaps
>> and it would be hard to learn to read...
MEANS: we always talk about "mapping". For us (not for the rest of the
world) it means matching the parts of one code with the parts of some
other code. Here I'm saying maybe deaf kids learning to read try to match
what they see with the meaning. In Chinese you can do this with a few
words, because the Chinese character looks like a picture of something.
But for English, it wouldn't work; words don't look like the things they
mean.
> Yes, indeed. My hunch is that that is what they try to do at first, but
> later they realize that they'd better learn to segment reading at a
> morphemic level. And so they do and their reading and writing
> consequently takes off.
MEANS: when it doesn't work, the kids start to try to break the words up
into parts that mean things (morphemes).
(intra-morphemic letter order = spelling)
> The problem is that by doing so they find no help with respect to
> intra-morphemic letter order and, most importantly, with respect to the
> syntactic dimension, which remains relatively uncovered by that strategy
> (even though there is some relevant coverage, but only to short phrases,
> with long ones it becomes impracticable).
MEANS: "syntactic dimension" = grammar; you can only read short sentences.
> The funny thing is that the phenomenon seems universal and thus happens
> with hearing people also. When hearing people engage in a
> phono-articulatory suppression task (repeating a sequence of non-word
> syllables in a long string), their ability to identify
> grammatically-distorted sentences in reading decreases sharply in
> relation to their intact ability to identify semantically-incorrect
> sentences (e.g., by descending stairs, one reaches the attic). This shows
> the importance of phonological recoding in phonological working memory
> for syntactic analysis during reading, and just how ineffective morphemic
> analysis by itself may be when it is left unassisted by phonology (or
> cherology, for that matter).
MEANS: uuuhhhh
Wow! I just read that about ten times, and this is what I figured out:
They had people read, and at the same time the people had to keep
repeating some meaningless words over and over. This makes it hard to
read. They found that when people did this, they couldn't notice if there
was bad grammar or not. But, they could notice if the stuff they were
reading didn't make sense (like that stairs example). What this tells us
is that that little inner person saying the words in our heads is real
important for getting our grammar right.
That's for hearing people.
>> This seems nearly the same question you are investigating; in reading
>> your message it wasn't always clear if you were separating "ideographic
>> reading" from "non-aural phonological decoding."
MEANS: I didn't know what he meant. Never heard of ideographic reading.
Had to go look it up in one of those references he sent me. "Non-aural
phonologic recoding," well, I made that up myself. :-) It would be doing
the same thing to the signwriting symbols that we do with the alphabet
symbols. I was worried here, see. A lot of linguists think that phonology
is only for sounds--yes, they should know better--(but it isn't; if
Signing is a real language, then it has to have its own phonology, like
all languages do). So I wanted to see how he would answer. See if he gave
me a stupid answer or not. (He didn't.) When I looked up ideographic
reading it turned out to be...see the pattern of lines on paper, recognize
it, know that it means something. No breaking it up into little parts;
like when we see a really, really common word like "the," we don't sound
it out into a t-sound, an h-sound, and an e-sound. We just recognize it.
A crazy linguist's word for this breaking things up is "analytic." Nobody
else on earth says that. And "global," "holistic," "gestaltic"--
GESTALTIC???? Nobody says that! All three words mean "all-at-once."
(Linguists, sigh.)
> Yes, they seem to be different processes. Ideographic reading seems to
> be more of a global, holistic, gestaltic reading. The best term for it
> would be recognition. Whereas non-aural phonological decoding is
> analytic. That is why it is called decoding. Right hemisphere performs
> pattern recognition in a global fashion (i.e., parallel processing),
> whereas left hemisphere performs encoding and decoding (i.e., recoding)
> in an analytic fashion (i.e., serial processing). That is why recognition
> by right hemisphere is limited to single ideograms, so that when you have
> sentences, you need left hemisphere analyses.
MEANS: the right half of our brain sees things all-at-once, and the left
half breaks things down into parts.
> An important question is related to the analytic limits and units that
> may characterize the right hemisphere. My hunch is that that is the
> realm of morphemics.
>> If these aphasic readers can match heterographic homophones (I assume
>> that means in an alphabetic script?) then it's matching pictures of
>> (written) words with pictures of things; they should be able to read
>> signwriting, possibly using only the right hemisphere. The difference
>> between deaf readers and lesioned readers...I get confused.
MEANS: I get confused by my own writing, see? Well, he's asking how much
the brains break things up, and thinks maybe it is into parts that mean
something, whatever that part might be. What I'm saying kind of has
nothing to do with that. (^_^) heterograph = written different. Homophone
= sounds the same. I'm wondering if people with brain damage who can only
use the right half--the half that does pictures and not language--could
read signwriting.
> Yes, we are talking about alphabetic script. The right hemisphere is
> capable of matching written words to their corresponding pictures, which
> demonstrates that it is capable of some reading. The question is what
> kind of reading that is: purely visual (ideographic) or phonetic? One of
> the eloquent findings relevant to that question is that the right
> hemisphere cannot match heterographic homophones. Because it cannot
> evoke the sound images corresponding to the written words, it does not
> realize that they are the same (homophones). It sees them as different
> because in fact they are so from a purely visual standpoint (they are
> heterographs). Therefore, the right hemisphere reads ideographically.
> There is additional evidence, though, but this is a quite compelling
> finding already.
MEANS: People who have brain damage so they can only use the right,
non-language half of their brain are called aphasics. Be glad you are not
an aphasic, because if you were, you would have thousands of linguists
trying to study you all the time. Never get any peace and quiet. Anyway,
aphasics can read a little. We don't know how. Maybe they only do that
"ideographic reading"; their brain can see a picture and recognize what it
is, so maybe they see the written word as a little picture. (Probably a
really bad one, but they recognize it as supposed to be a "cat" or a "dog"
or whatever.) They shouldn't be able to break the word down into alphabet
sounds, put the sounds together, and get the word for cat or dog, because
they can't break things up that way.
>> Anyway, I'm wondering about this prediction: Naive readers who know
>> Sign should be able to read signwriting.
> Yes, to a certain extent, at elementary levels of cheremic awareness.
> Systematic instruction on the correspondences between cheremes and SW
> pictogens would be required to raise that awareness level.
MEANS: I predict that people who Sign, but have never learned signwriting,
should be able to read signwriting. (A little, anyway.) (**And by the way,
Lourdes, if you have any opinions on this I'd love to hear them.) He
agrees with me--we think some of the symbols need to be taught. You'd need
to be taught that the little plus sign means to grab something. But we
think that people should be able to figure out many of the signwriting
symbols without being taught, because they are mostly little pictures.
Deaf, hearing, or aphasic--all could do this.
>> They shouldn't be able to read an alphabet-based writing system like
>> Stokoe notation. Then after exposure to "grapheme-phoneme correspondence
>> instruction," they should be able to read that too.
> Yes, I agree.
>> Because they would see how to map the picture of the word onto the
>> picture of the referent; i.e., pick out the (visual) phonological
>> parameters and map them onto the manually-produced articulatory
>> movements that these graphemes represent. Again, with no auditory
>> involvement at all. They could read the signwriting without instruction
>> because it is so highly motivated (and that may involve phonology, or it
>> might just be drawing pictures of pictures....I dunno.)
> Yes, I agree.
MEANS: people can't read a word written with an alphabet, like English,
until they are taught what the symbols stand for. Same for Stokoe
notation--it works like the alphabet. But aphasics couldn't do it.
Signwriting is different, not like an alphabet. The aphasics could do
that, or maybe not. That is what he is doing experiments to find out.
And he will find out another thing. He will be able to prove whether deaf
people do that "phonological recoding"--break the signwriting symbols up
into parts and put them together to make mental pictures of the words--and
that they do this without using sound. This has never been proven before
in an experiment.
> However, I'd just like to say that all this pertains to the
> neurolinguistically intact brain, not to the aphasic brain. Coelho and
> Duffy have demonstrated that the idea of using sign language (or
> blissymbols) as a tool for aphasics is not feasible. We can extend that
> to SignWriting too. Sign language requires the linguistic processing
> capabilities that are damaged in the aphasic brain, and thus SignWriting
> is not a viable tool in that regard.
>> Sure is hard to be concise when discussing this stuff. What I'd really
>> like to know is if anyone has tested reading in any script designed to
>> represent signed language--thus accessing phonologic structure while
>> bypassing the auditory channel. (so logograms don't count. ;-)
> That is precisely the issue of a research project application that I
> filed with a Brazilian research agency two months ago.
MEANS: In order to use Sign language, people need to use the language part
of the brain. So if people had brain damage and couldn't use that part of
the brain, then they couldn't use sign language, and so they couldn't use
SignWriting either.
But for his research he wants to test how people read signwriting. It
should be that same "phonological recoding" as hearing people use for
sounds--look at the parts of the signwriting, break it down into parts,
put them together to make the sign, get a mental picture of the sign, then
understand the meaning. Only it would be done using the parts of the
signwriting (which have no sound at all).
Whew! More clear now?
Hope so :-)
joe
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Val ;-)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Valerie Sutton
SignWritingSite...Lessons Online
https://www.SignWriting.org
SignBankSite...Databases Online
https://www.SignBank.org
Deaf Action Committee For SignWriting
Box 517, La Jolla, CA, 92038-0517, USA