To most competent readers, reading feels natural, much like walking or talking: a skill we have developed to the point of automaticity. Because we seem unable to look at text without gaining meaning from it, we are rarely aware of the cognitive processes that go into this most complex of skills.
In fact, when we read, our eyes move forward very rapidly, stopping a number of times along each line of written text. Each of these rapid movements is called a saccade, and it is these saccades that carry the eyes forward from one part of the text to another in staccato fashion. I say ‘staccato’ because between each of these saccades the eye pauses and becomes relatively still, and it is in these moments of stasis, known as ‘fixations’, that we gain information from whatever it is we are reading.
According to Ashby and Rayner*, each fixation lasts for about a quarter of a second [see also Crystal’s Encyclopedia of the English Language, p. 218], making the reading process, as they put it, ‘similar to a slide show, in which the text appears …, is interrupted briefly by a saccade, then reappears, and so forth’.
There is good reason why this happens. The human visual system gives us our greatest acuity at the centre of the visual field, in what is known as the fovea, the area of the retina which offers the best visual detail, hence the need to fixate on a limited group of letters before moving on to fixate on the next group. Outside the fovea, in the parafoveal and peripheral regions of the retina, our visual receptors are unable to discriminate the detail of letters well enough to distinguish one from another: in other words, the further from the fovea, the poorer our perception of difference in detail.
The authors of the piece liken the visual field to a bull’s eye, with the fovea at the centre, surrounded by the parafovea, which is, in turn, encircled by the peripheral region. However, to be more accurate, you would need to imagine a bull’s eye skewed or attenuated to the right for readers of English. Thus, perceptual spans are not symmetrical, extending three or four letters to the left and, in the case of skilled readers, ‘only seven or eight letters to the right of fixation to support their recognition of upcoming words’. However, as they point out, much of that information is parafoveal. [Perceptual span also varies according to the writing system, with perceptual span in Arabic and Hebrew, for instance, operating in the reverse direction.]
For obvious reasons, fixations are also influenced by factors such as word length and lexical access. So, when a word is encountered less frequently, or contains a less frequent spelling, fixations are longer. This would appear to link eye movements to the ‘lexical access processes that operate very efficiently during skilled reading’. Furthermore, skilled readers are, as you would expect, able to glean syllable information parafoveally during a fixation.
So, how do young, beginning readers differ from skilled readers? The answer is that their eye movements reflect the problems they have in decoding words in connected text: fixations last longer, their perceptual span is shorter, and they tend to regress more often. Nor, in the beginning stages of learning to read, is the perceptual span skewed to the right in the way it is in more fluent readers.
What the authors speculate is that more fixations and shorter saccades, as well as a more restricted perceptual span, limit the amount of text a reader can hold in foveal view.
What are the implications of this for teaching beginning readers? In my next posting, I shall be looking at some of the suggestions put forward by the authors.
* Ashby, J. & Rayner, K. (2006), ‘Literacy Development: Insights from Research on Skilled Reading’, in Dickinson, D.K. & Neuman, S. (eds), Handbook of Early Literacy Research, Vol. 2, London: Guilford Press, pp. 52–63.
I think you would benefit from looking at this article:
http://www.microsoft.com/typography/ctfonts/WordRecognition.aspx, which clearly sets out the research on eye movements and reading. Rayner's research shows that the eyes focus in the middle of a word and sometimes skip words altogether. This is evidence that skilled readers do not decode graphemes left to right but recognise words from less extensive information because the words are already in their lexicon. Function words are not necessarily read at all.
You need to remember that Rayner was responding to a world in which whole-language theories were influential. He was able to show that readers look at details, not whole word shapes. But this doesn't mean they decode left to right through every word. As readers, we know that is not true: we can read text speak and words where parts are missing for other reasons.
The consequence of these observations is that automatic reading of words does not entail decoding of graphemes. While decoding is useful for accessing words when they are unknown, once they are known automatically this process is no longer used, and might even impede fluent reading.
Once again, Ruth, you manage to confuse some of what skilled adult readers do with what beginning readers need to learn to do.
I don't dispute that we can read short, grammatical words ('up', 'down', 'was', 'the', etc.) in a single 'fixation' – when we are skilled readers. The fact that young, beginning readers cannot simply underscores the need to teach children left to right orientation and to teach them to decode accurately so that they can check what they 'hear' against their oral/aural lexical repertoire.
Neither do I agree that function words are not read at all. This is absurd! We would consistently misread text if we were 'not reading' function words and substituting whatever came into our heads.
Text speak is still processed from left to right, not, as you seem to think, in any which way. And, again, adult skilled readers are able to make sense of some text when letters are omitted, but that is because they bring their processing ability and their knowledge of reading together. However, when trying to read a much more complex text in a domain with which they are unfamiliar, they begin to face the same problems as novices.
I also don't think you understand the concept of automaticity – i.e. processing that occurs so rapidly it takes place below the level of conscious attention. We only become consciously aware of what is happening when we are confronted with a word that may be outside our lexical repertoire or that contains infrequent sound–spelling correspondences. This, too, is quite different for beginning readers, who are confronted by this problem with almost every word, unless they are given extensive practice in reading decodable readers commensurate with a good-quality phonics programme.
Yes, you will see that I agree that unfamiliar words have to be tackled by a decoding strategy. As many words are unfamiliar to children, they have to access them by decoding. However, the evidence is silent on how this decoding occurs – for instance, whether it occurs through the SSP route at present recommended to be used exclusively, or by other routes. Of course, if left to right decoding were essential, using a font which would enable a child to see the whole word would be irrelevant.
You will find that Rayner identifies that some short words are skipped in skilled reading; after all, it is a fairly automatic response to know that 'was' is likely to be present between 'Jane' and 'skipping' in the sentence 'Jane was skipping' in a text in the past tense. It's not so surprising.
I disagree that text speak is processed by decoding left to right. Why would it be different from non-abbreviated words? At first encounter the reader would decode, using their knowledge of the word it represents (phonics not always necessary by any means), and subsequently they would match the word to the lexicon. By lexicon I mean the internal word store, not the aural or oral one (read Dehaene on this). No doubt text in text speak would come to be automatically recognised by focusing within the words in the same way as normal text.
You'll notice that I don't say words are read 'any which way'; that is a misrepresentation. The eyes move left to right because text runs left to right in English. This doesn't mean that words are processed by looking at each grapheme in turn rather than picking up the whole word by focusing on a position a few letters in.
I believe that your response is a glorious example of how your thinking is governed by your own skilled reading practice. To ALL young children (at some point in their education), ALL words are unfamiliar, which is why high quality phonics programmes are essential.
You said: However, the evidence is silent on how this decoding occurs – for instance, whether it occurs through the SSP route at present recommended to be used exclusively, or by other routes. Of course, if left to right decoding were essential, using a font which would enable a child to see the whole word would be irrelevant.
The evidence is NOT silent on how decoding occurs! Children say the sounds and listen for the word. It couldn’t be plainer than that. [Do you know how phonics is taught these days?] After much practice (or less, for some), the process becomes more and more automatic. And, on the issue of font, you have obviously missed the point of what I was saying in my second posting.
You said: You will find that Rayner identifies that some short words are skipped in skilled reading; after all, it is a fairly automatic response to know that 'was' is likely to be present between 'Jane' and 'skipping' in the sentence 'Jane was skipping' in a text in the past tense. It's not so surprising.
You choose the example of ‘was’. ‘Was’ is part of a very familiar pattern in English – ‘watch’, ‘swan’, ‘swap’, ‘wan’, etc., etc. – and, with the correct teaching, will be decoded immediately. However – and this is obviously where you lack experience of actually teaching young children, or children who have been taught whole language – the latter are very often likely to read the word as ‘saw’, while the former could easily substitute ‘is’ if they were being asked to 'make sense' of what they were reading. For reasons I’ve already explained in the blog, and which both Ashby and Rayner are at pains to acknowledge, the multiple complexities involved in reading text require great accuracy or meaning is lost.
Text speak is not different from non-abbreviated words: it is processed from left to right by skilled (i.e. well-taught) readers. Why would reading text speak be any different from reading any other text in English?
I don’t agree with Dehaene on this either, and I don’t quite know why you would patronise me by assuming I hadn’t read his Reading in the Brain.
Finally, eyes do move from left to right – if they are trained to move in that direction. I would stress, though, that directionality isn't something that simply comes naturally.