NPR has a cute little animation (via BoingBoing) about the story of writer Howard Engel. Engel suffered a minor stroke which left him unable to read words, but he could still write (and read what he wrote for a short period, before it faded into incomprehensibility).
Fascinatingly, he has taught himself to read again using touch. By tracing a letter with his finger (or, eventually, with his tongue on the roof of his mouth or the back of his teeth) he could identify it; he is now fast enough with his tongue that he can read about half of the subtitles in a foreign movie!
There are several things that are interesting about this. He is clearly able to detect the shape of the letters visually (his movements to trace them are visually guided); the stroke only impaired his ability to treat those visual shapes as meaningful. He can still get to that meaning via other means (e.g. tracing the shape) and he can still generate those meaningful shapes (i.e. writing).
That tracing helps is interesting in a couple of ways:
First, unlike Braille (where the raised bumps have been customised for quick access via touch), he has to trace the spatially extended shape to gain access to the whole thing via touch. The act of perceiving these shapes haptically is therefore different from perceiving them visually, but the difference is dictated only by the need to access specific information.
Second, this relates interestingly to work on artificial handwriting recognition systems (like the early Apple Newton PDA). Research here shows that handwriting recognition is improved when the system has information about the process by which a letter is formed, rather than simply storing a static template of the letter and trying to match it. This, as I understand it, is true of people too - one of the reasons reading Chinese is so hard for an English speaker is that we have no knowledge of the process of drawing a character; all we have access to is the static shape. Providing information about the process aids character recognition (e.g. Yim-Ng et al., 2000 - not the article I was looking for, but it demonstrates what I mean). Reading is impaired if all you have is a static trace; it's facilitated if information about the process that created the trace is available in it.
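The template-vs-process distinction can be sketched in a few lines of code. This is purely illustrative (not the Newton's actual algorithm, and the 'X' coordinates are made up): two ways of drawing the same letter produce an identical static shape, but a process-sensitive comparison - here, dynamic time warping over the ordered pen trajectory - tells them apart.

```python
import math

def dtw_distance(a, b):
    """Dynamic time warping distance between two ordered 2-D pen trajectories."""
    n, m = len(a), len(b)
    inf = float("inf")
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = math.dist(a[i - 1], b[j - 1])
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

# Two ways of drawing an 'X': same points on the page, different stroke order.
stroke_order_1 = [(0, 0), (1, 1), (2, 2), (2, 0), (1, 1), (0, 2)]
stroke_order_2 = [(0, 2), (1, 1), (2, 0), (2, 2), (1, 1), (0, 0)]

# A static template throws the ordering away: both drawings look identical.
static_shape_1 = sorted(stroke_order_1)
static_shape_2 = sorted(stroke_order_2)
print(static_shape_1 == static_shape_2)  # True: static traces match

# The process-aware comparison keeps the ordering and distinguishes them.
print(dtw_distance(stroke_order_1, stroke_order_2) > 0)  # True: processes differ
```

The point mirrors the Chinese-character example: a recogniser (or reader) that only sees the final static trace loses exactly the stroke-order information that tracing restores.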
Engel mentions in the video that when he writes words he is able to read them visually for a brief period. This is very cool; writing creates non-visual information which persists for some time and supports the visual recognition of the letters.
This story is also interesting because it reveals what reading actually is. It's not intrinsically a visual skill - it just happens to be visual because vision is so important to us, and because vision lets you access the information at a distance (hence billboards). Reading is actually the act of perceiving information about the shapes of some objects; it's information, not anatomy, that dictates whether you can perform the task. Braille makes the same point, although there the shapes-to-be-perceived have been customised to suit touch.
I was tempted to admit this might be a case where an imaging study would be interesting, to identify exactly where his injury is. But a) you'd only need a structural, not functional, MRI scan, and b) the results would almost certainly be interpreted as 'this is the seat of word recognition'. His deficit is clearly much more interesting than that, so I'm still not a fan of encouraging that sort of thinking.
References
Yim-Ng, Y., Varley, R., & Andrade, J. (2000). Contribution of finger tracing to the recognition of Chinese characters. International Journal of Language and Communication Disorders, 4, 561-571. DOI: 10.1080/136828200750001296
Thursday, 24 June 2010