Differences distract from reading - true or not?

Denis_Masharov's picture

I participated in Facebook discussions about the differences in the details of letters (drops, terminals, etc.).
There were two opinions:

1. Legibility should be based on the ability to perceive small differences between characters: the more alike the letters, the more difficult they are to read.

2. Even the smallest differences in the details "separate" a letter from the word, drawing attention to it and thus slowing reading.

Legibility - does it help or hinder readability?

In both views there is a grain of truth... I'm confused...
Help me understand; please feel free to express your opinion.

Andreas Stötzner's picture

I think it is about achieving a balance. The letters of a writing system should be ‘of one stock’, i.e. shaped out of some common ground pattern/model in order to form a unit. But on the other hand the single parts need to be sufficiently different, to make them distinguishable enough and avoid confusion. Both issues will distract the reader’s eye from comfortable recognition: too much similarity, and too much diversity.

I once found the thorough study of Bembo quite enlightening about those questions.

flooce's picture

I give you my opinion as a reader, as I am not working in this industry. I find an overly “reductionistic” font bland, in a way that the letters or words actually “blur” more; it is harder for me to read. On the other hand a font which is skillfully crafted but too fancy might seem inappropriate for the context, but was never hard to read for me. As far as I know we do not identify single letters one after another but word shapes or syllable shapes, and so I think adding more character to the individual letters will help to distinguish syllables and words from each other, and will not distract. This is because (afaik) we do not pay too much attention to the individual letter, but to groups of letters.
This is based on my reading experience with serif fonts. With sans I find this question harder to answer. Do humanistic models have more diversity of letter shapes compared to geometric sans faces? Or do they follow different principles?

ralf h.'s picture

I struggled with this question and all these vague definitions of legibility and readability myself.
Here is how I sorted it out: http://opentype.info/blog/2011/04/25/the-onion-layer-model-of-legibility/

Denis_Masharov's picture

@Ralf Herrmann - excellent essay, I'm impressed, thank you!

William Berkson's picture

Based on my own work and on the work of Matthew Luckiesh, which I rediscovered, I think that differentiation is important up to a threshold, but beyond that other factors become more important. For example, serifs help by differentiating the extremes of letters, which, according to research, seem to be more salient in recognizing letters. But beyond a threshold, further differentiation of letters becomes 'noise': it doesn't help and can be distracting. Above the threshold of adequate differentiation of individual letters, more important are issues of weight and rhythm of the characters and of the spacing.

Here is an account of my own explorations.

And here is a discussion of Luckiesh's work.

hrant's picture

It's always nice to see the opposition of legibility
and readability being discussed. It seems like a
paradox, but I've believed in the difference between the
two for a while now. In terms of visualizing how they
relate, I find it useful to refer to geometry: imagine two
vectors at a right angle, so separated by 90 degrees. So
they're both pointing towards the same "quadrant" but
they also oppose each other to an important extent,
and the balance one needs to find depends on how far
one wants to push into immersive reading; the further
you push into it the more letters need to contribute to
divergent boumas, as opposed to merely being divergent
on their own.

So for example sans letterforms are easier to identify,
but serif forms "talk to each other" better, essentially
becoming less themselves. And if you look at Legato
for example, the whites of the letters talk to each other
instead of with their own blacks, creating what I see
as much stronger bouma bonding.

It's also important to realize that it's quite trivial to ensure
legibility: people can read almost any letterform if given
enough time. And this is where the expressive dimension
of type (think of display type) nicely comes in. This doesn't
mean text fonts can't be expressive, it's that they express
themselves more at the level of overall texture.

BTW, here's something light I once put together:
http://www.themicrofoundry.com/ss_read1.html
For a much deeper treatment you could read
my article in issue #13 of Typo magazine.

hhp

hrant's picture

Sorry, I forgot to address an important aspect of the question:
How much divergence is too much? To me this answer is simpler
than one might think: the divergence is too much when the reader's
eye jumps too far ahead (to a subsequent line) because it's distracted
by an unusual texture (which often happens when a "pushy" glyph is
doubled - often the binocular "g", especially the closed-bottom form).

hhp

John Hudson's picture

Can anyone point to empirical evidence that making letters less individually recognisable ever results in improved word recognition? Or vice versa? Or that making either letters or words less recognisable ever decreases reading fatigue? [Word recognition and comfort over time seem to me the cornerstones of any measurable notion of readability.]

I don't think there is an opposition between legibility and readability if one assumes spatial frequency and stylistic harmony, which is a general assumption of type design and typography: we don't make or use 'ransom note' typefaces for reading, we make and use types which operate within single spatial frequency channels and that share stylistic features that allow the necessarily individuated forms of letters to be grouped without individual letters standing out. In order for legibility to be even independent of readability -- never mind 'in opposition' -- one first needs to throw out the spatial frequency and stylistic harmony, hence disrupting word recognition. In other words, you need to throw out typography.

Hrant: So for example sans letterforms are easier to identify, but serif forms "talk to each other" better, essentially becoming less themselves. And if you look at Legato for example, the whites of the letters talk to each other instead of with their own blacks, creating what I see as much stronger bouma bonding.

Less themselves? That would imply that the letters that 'talk to each other better' -- and I agree that this is one of the virtues of Legato -- are less individually recognisable than letters that talk less well to each other and are, by your logic, more themselves. This makes absolutely no sense to me whatsoever. There is no opposition, there is no zero sum game in which working well to form recognisable words -- or 'stronger bouma bonding' if you insist -- requires the individual letters to be less legible. There is no reason at all why better recognisability of individual letters and better recognisability of words do not go hand-in-hand.

hrant's picture

To me it makes sense because reading is parallel:
the bouma is fighting to be recognized as fast as
the individual letters; without help it will usually
lose, and since reading a bouma saves time/effort,
this slows down reading. The good news is that
legibility is so easy to ensure that we have a lot
more room to play than we might think.

hhp

Nick Shinn's picture

Right, Andreas.
Legibility vs. readability is not a discussion worth having in general or theoretical terms, it only makes sense when applied to the instance.
One might just as well say strength vs. weight, with regard to cars or wine glasses.
It’s the designer’s task to optimize both, within the specific constraints of a product.

William Berkson's picture

John, your question about legibility raises the question of what 'legibility' means. Luckiesh wanted to avoid the term because he thought it was too unclear. Tinker used it to refer basically to reading speed. Lieberman defines it (following some work in the '60s) as ease of distinguishing individual letters. That could be tested by a minimum time threshold for recognition of letters with accuracy to a specified level. Or you might mean something tested by a contrast threshold. There is also the question of at what distance you can recognize a character. These might not all be the same.

Then speed of individual letter recognition and word recognition might not totally go together, because of issues of coherence, spacing and rhythm in the font. And comfort in reading lengthy text is yet again a different question.

Above I wrote about differentiation of letters, rather than legibility, but I suspect that also legibility of individual letters affects readability up to a threshold, and then other factors dominate.

quadibloc's picture

I would think the answer to the issue raised in the initial post is "obvious".

The differences between individual letters that people expect to find in order to distinguish them need to be readily visible. This is why, for example, learning to read Hebrew would be more difficult than learning to read Armenian - some letters have very tiny differences.

But any irrelevant differences are distracting. If the descenders are of different lengths on a p and a q, since descender length isn't one of the expected distinctions between the two letters, this difference will only be a distraction.

John Hudson's picture

Bill: Then speed of individual letter recognition and word recognition might not totally go together, because of issues of coherence, spacing and rhythm in the font.

Oh, absolutely: good recognisability of individual letters cannot compensate for bad typography. But as I said, typography presumes a lot of stuff that naturally brings letter recognition and word recognition into alignment: spatial frequency and stylistic harmony (coherence) and, of course, spacing and -- duck, Hrant -- rhythm. The latter are so much presumed that I didn't even mention them.

Yes, if you isolate and disrupt any of these normal features of text typography -- if you change font weights suddenly in the middle of a word, or mix serif and sans serif letters in a word, or track out the spacing so that the letters don't knit together, or vary the spacing or widths of individual letters so as to break up the rhythm -- then word recognition will fail long before individual letter recognition fails. But then we're not talking about typography any more.

Kevin Larson's picture

John: Can anyone point to empirical evidence that making letters less individually recognizable ever results in improved word recognition?

As has been hinted at, there is empirical evidence that individual letters are recognized more accurately in time and distance threshold tasks when there is more space between letters, but that reading speed is faster when there is less space between letters (but more between words).

In most cases I believe that faster and more accurate letter recognition will lead to faster and more accurate word recognition.

hrant's picture

> there is empirical evidence that .... reading speed
> is faster when there is less space between letters

I missed that! Could you provide the reference?
Also, could you explain it?

hhp

John Hudson's picture

Kevin, to clarify, when I referred to 'making letters less individually recognisable', I meant making their design less individually recognisable, not altering their setting. It makes sense that individual letters are quicker to recognise without crowding but that words composed of those letters are quicker to recognise when the letters are closer together and the words clearly spaced. But that is about the setting of the letters, not their individual recognisability. What I am suggesting is that letters that are difficult to recognise individually will make words similarly difficult to recognise, and letters that are easy to recognise individually will make words easy to recognise. What I'm enquiring about is counter-evidence to this, i.e. letters that are slower to recognise individually producing faster word recognition. Hrant's legibility/readability opposition hypothesis seems to suggest this possibility, even that it should be the case.

William Berkson's picture

John, the confusion between the lower case l and upper case I in many fonts is an example of where legibility problems of letters don't seem to affect readability of words. I would expect that with additional ambiguous letters you would have more problems.

Peter Enneson's theory is that, given the way letters are designed, the normal tightness of setting makes them less recognizable individually, but the sub-letter units, arranged in a regular rhythm, are more recognizable as a whole word pattern. If he's right, that may explain why letters that are confusable in isolation don't pose problems within words. I've seen this with fancy script caps also—no problem reading them in context, but out of context you don't know what the letter is. The readability of the word is not simply the sum of the legibility of the individual letters.

It might be that more differentiation would make an isolated letter easier to distinguish, but it may be harder to read in the crowded situation of words. For example some blackletter with a lot of hairlines, or some script fonts.

I haven't thought this through yet, but it's an interesting question, particularly as a possible way to test Peter's theory.

Chris Dean's picture

@Kevin: “…there is empirical evidence that individual letters are recognized more accurately in time and distance threshold tasks when there is a more space between letters, but that reading speed is faster when there is less space between letters…”

References?

Chris Dean's picture

@Kevin: “Can anyone point to empirical evidence that making letters less individually recognizable ever results in improved word recognition?”

Not to my knowledge, but after a quick scan, I came across these two, which I thought I’d throw out there for discussion (the second author of the first paper caught my eye — pun intended).

————

Bouwhuis, D. & Bouma, H. (1979). Visual word recognition of three-letter words as derived from the recognition of the constituent letters. Perception & Psychophysics, 25(1), 12–22.

Abstract:
Word recognition is one of the basic processes involved in reading. In this connection, a model for word recognition is proposed consisting of a perceptual and a decision stage. It is supposed that, in the perceptual stage, the formation of possible words proceeds by separate identification of each of the letters of the stimulus word in their positions. Letter perception is taken to be conditional on position because of interaction effects from neighboring letters. These effects are dependent on both position in the word and retinal eccentricity, which are of particular relevance in reading. The letter-based approach rests on the strong relationship between the results from single-letter recognition in meaningless strings and in real words. Next, in the decision step, the many alternatives generated in the perceptual stage are matched with a vocabulary of real words. It is supposed that the final choice from among the remaining words is made in accordance with the constant ratio rule; frequency effects are not separately incorporated in the model. All predictions of the model are generated by means of data from earlier experiments. Despite being not optimally suited for this purpose, the predictions compare favorably with responses in word-recognition experiments.

————

Legge, G. E., Mansfield, J. S. & Chung, S. T. L. (2001). Psychophysics of reading. XX. Linking letter recognition to reading speed in central and peripheral vision. Vision Research, 41, 725–743.

Abstract:
Our goal is to link spatial and temporal properties of letter recognition to reading speed for text viewed centrally or in peripheral vision. We propose that the size of the visual span — the number of letters recognizable in a glance — imposes a fundamental limit on reading speed, and that shrinkage of the visual span in peripheral vision accounts for slower peripheral reading. In Experiment 1, we estimated the size of the visual span in the lower visual field by measuring RSVP (rapid serial visual presentation) reading times as a function of word length. The size of the visual span decreased from at least 10 letters in central vision to 1.7 letters at 15° eccentricity, in good agreement with the corresponding reduction of reading speed measured by Chung and coworkers (Chung, S. T. L., Mansfield, J. S., & Legge, G. E. (1998). Psychophysics of reading. XVIII. The effect of print size on reading speed in normal peripheral vision. Vision Research, 38, 2949–2962). In Exp. 2, we measured letter recognition for trigrams (random strings of three letters) as a function of their position on horizontal lines passing through fixation (central vision) or displaced downward into the lower visual field (5, 10 and 20°). We also varied trigram presentation time. We used these data to construct visual-span profiles of letter accuracy versus letter position. These profiles were used as input to a parameter-free model whose output was RSVP reading speed. A version of this model containing a simple lexical-matching rule accounted for RSVP reading speed in central vision. Failure of this version of the model in peripheral vision indicated that people rely more on lexical inference to support peripheral reading. We conclude that spatiotemporal characteristics of the visual span limit RSVP reading speed in central vision, and that shrinkage of the visual span results in slower reading in peripheral vision.

————

I think a simple series of RSVP word recognition tasks using trigrams and different fonts for individual letters would be a step towards answering this question.
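
[As a rough outline of what the per-trial structure of such a task might look like, here is a Python sketch. Everything specific in it is a placeholder of mine: the two font labels, the 200 ms exposure, and the console-based presentation. A real experiment would need actual rendered typefaces, precise display timing and a proper mask, e.g. via a psychophysics toolkit.]

```python
import random
import string
import time

# Hypothetical "font" conditions; real stimuli would be rendered in the
# typefaces under test, not labelled console text.
FONTS = ["Font A", "Font B"]

def random_trigram():
    """Random three-letter string, in the spirit of Bouwhuis & Bouma's stimuli."""
    return "".join(random.choice(string.ascii_lowercase) for _ in range(3))

def run_trial(font, exposure_s=0.2):
    """Flash one trigram briefly, then ask the subject to report what they saw."""
    trigram = random_trigram()
    print(f"[{font}]  {trigram}")
    time.sleep(exposure_s)
    print("\n" * 40)                      # crude 'mask' / screen clear
    response = input("What did you see? ").strip().lower()
    return response == trigram

def run_block(trials_per_font=20):
    """Recognition accuracy per font condition."""
    scores = {font: 0 for font in FONTS}
    for font in FONTS:
        for _ in range(trials_per_font):
            scores[font] += run_trial(font)
    return {font: hits / trials_per_font for font, hits in scores.items()}

if __name__ == "__main__":
    print(run_block(trials_per_font=5))
```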

#ifyoureagradstudentijustgaveyouyourthesis

hrant's picture

RSVP is highly unnatural.

hhp

Nick Shinn's picture

RSVP is weird.
The whole point of reading is that you’re in control, creating meaning, probing, traversing swaths of text at your own speed and in your own manner, with carefully aimed saccade bursts, not being force-fed. [Crossed post!]

Chris Dean's picture

Rubin, G. S. & Turano, K. (1992). Reading without saccadic eye movements. Vision Research, 32(5), 895–902.

Abstract:
To assess the limitation on reading speed imposed by saccadic eye movements, we measured reading speed in 13 normally-sighted observers using two modes of text presentations: PAGE text which presents an entire passage conventionally in static, paragraph format, and rapid serial visual presentation (RSVP) which presents text sequentially, one word at a time at the same location in the visual field. In Expt 1, subjects read PAGE and RSVP text orally across a wide range of letter sizes (2X to 32X single-letter acuity) and reading speed was computed from the number of correct words read per minute. Reading speeds were consistently faster for RSVP compared to PAGE text at all letter sizes tested. The average speeds for text of an intermediate letter size (8X acuity) were 1171 words/min for RSVP and 303 words/min for PAGE text. In Expt 2 subjects read PAGE and RSVP text silently and a multiple-choice comprehension test was administered after each passage. All subjects continued to read RSVP text faster, and 6 subjects read at the maximum testable rate (1652 words/min) with at least 75% correct on the comprehension tests. Experiment 3 assessed the minimum word exposure time required for decoding text using RSVP to minimize potential delays due to saccadic eye movement control. Successive words were presented for a fixed duration (word duration) with a blank interval (ISI) between words. The minimum word duration required for accurate oral reading averaged 69.4 msec and was not reduced by increasing ISI. We interpret these results as an indication that the programming and execution of saccadic eye movements impose an upper limit on conventional reading speed.

————

In a growing market of small-screen mobile devices I suspect that more research will be done in this area. Now if there were only some way to ease and expedite the grant application process and ethics protocols…

hrant's picture

I actually believe that saccade-free reading is not only
achievable, but also highly desirable. However this does
not mean using RSVP to figure out how we currently
read has any merit.

BTW, high-performance saccade-free reading cannot
involve machine pacing; what I mean is that the reader
must remain in control of the refreshing of the display*.
Furthermore, the main advantage I see to saccade-free
reading would be the displaying of increasingly large text
in proportion to the distance from the fixation point; this
would be in order to overcome the acuity decrease of the
retina, basically allowing for the recognition of boumas
much deeper into the field of vision.

* How? Good question.

hhp

Chris Dean's picture

Regarding pacing, I saw a fascinating demonstration where the time a word was displayed was increased or decreased depending on its importance (i.e. words such as ‘of’ and ‘the’ were displayed for shorter times). While not controlled by the user, it resulted in significantly faster reading rates. I believe it was in a presentation from my old supervisor. I’ll see if I can dig it up. It’s quite striking. (A rough sketch of such a pacer follows at the end of this post.)

Defining word categories and durations for and between them should be achievable. This has most certainly been done, but it would require a little more looking (however, it’s getting late, and I’m only two episodes away from the series finale after re-watching all of Cheers. Season 11, episode 22, is still funny enough to bring me to tears ;)

|_|)
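
[A minimal sketch, in Python, of the kind of importance-weighted pacer Christopher describes above. The word categories and the duration values are invented for illustration; the actual demonstration's categories, timings and method of ranking importance are not given in this thread.]

```python
import time

# Hypothetical per-category exposure times (ms); the demonstration's actual
# categories and values are not known here.
DURATIONS_MS = {"function": 120, "content": 280}

FUNCTION_WORDS = {"of", "the", "a", "an", "and", "to", "in", "is", "it", "was"}

def categorize(word):
    """Crude importance test: short closed-class words get shorter exposure."""
    return "function" if word.lower().strip(".,;:!?") in FUNCTION_WORDS else "content"

def rsvp(text):
    """Present one word at a time in place, varying exposure by word category."""
    for word in text.split():
        print("\r" + " " * 40 + "\r" + word, end="", flush=True)
        time.sleep(DURATIONS_MS[categorize(word)] / 1000.0)
    print()

rsvp("The time a word was displayed was increased or decreased depending on its importance.")
```

[The loop itself is trivial; the interesting design choice is the importance ranking, which in practice could draw on frequency lists, part of speech or semantics.]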

John Hudson's picture

Bill, what I asked was whether there was any evidence that making letters less legible ever results in improved word recognition, i.e. made it better than when the letters were more legible. Yes, there are some difficult to recognise or ambiguous letter forms that, in the context of words, do not pose the same level of difficulty as in isolation, presumably because of linguistic context clues. But this is not the same thing at all as suggesting that less legible letters might improve word recognition, which seems to be the implication of Hrant's opposition of legibility and readability, or the converse: that optimising letters for individual recognisability might actually hamper readability.

hrant's picture

> the time a word was displayed was increased or decreased

I would posit that the minimum duration a word needs
to be exposed to be read with near-total accuracy is roughly
proportional to some compound of its frequency and bouma
distinctiveness. Roughly, because foveal decoding is different
than parafoveal decoding, with the latter playing a very large
role in reading, at least according to the model I believe in.
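
[To make the posited relationship concrete, here is a toy Python sketch. The functional form, the constants, and the idea of a single numeric "bouma distinctiveness" score are all invented for illustration; nothing here comes from the thread or from any study.]

```python
import math

def min_exposure_ms(freq_per_million, bouma_distinctiveness, base_ms=300.0, gain=35.0):
    """Toy model: the minimum exposure a word needs falls as the word becomes
    more frequent and as its overall shape (bouma) becomes more distinctive.
    All constants are invented for illustration."""
    freq_term = math.log10(max(freq_per_million, 0.1))  # log frequency: common words gain a lot
    return max(40.0, base_ms - gain * freq_term - gain * bouma_distinctiveness)

# A very frequent, fairly distinctive word vs. a rare, bland one:
print(min_exposure_ms(30000, 0.8))   # short exposure suffices
print(min_exposure_ms(2, 0.2))       # needs much longer
```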

hhp

John Hudson's picture

Hrant: I actually believe that saccade-free reading is not only achievable, but also highly desirable.

Hmm. A saccade is a zero-input eye movement between fixations. The shorter the saccades and the longer the fixations, the slower the reading. The longer the saccades and the shorter the fixations the faster the reading. The upper limit of reading speed has always been comprehension: the longer the saccades and the shorter the fixations, the greater the chance of comprehension error and the more regressive saccades necessary to correct false readings. Even at these highest, most error prone speeds, though, the mechanism of reading remains the same: fixation and saccade. Not only do I not see grounds to consider 'saccade-free reading' achievable, I can't even imagine what it would be like. If you're getting rid of saccades, that implies you are also getting rid of fixations, since the two are part of a single mechanism. So what is left? What is the mechanism of 'saccade-free reading'? How does it work?

Chris Dean's picture

Perhaps I don’t understand the question, “What is the mechanism of 'saccade-free reading'? How does it work?”

You put on eye tracking equipment, look at a dot, present words in an RSVP fashion, and ask the subject questions about what they just read. Look at the eye-tracker data, their eyes haven’t moved, and presto, saccade-free reading. It’s just one long fixation. It’s clearly possible to attend to a single location without eye movements, I watch people in a lab do this all day. If their eyes move, the trial ends, and they try again. Are we on the same page?

Regarding “I would posit that the minimum duration a word needs to be exposed to be read with near-total accuracy is roughly proportional to some compound of its frequency and bouma distinctiveness.” It is my understanding that frequency and semantics play the primary roles in determining duration. Again, I’d have to do a little more digging to find references to support this, but I know there’s a lot out there on just that.

hrant's picture

The main problems with RSVP:
- The parafovea is ignored.
- The pacing is artificial.

Just one of those would be enough to kill RSVP in terms of
providing true insight into immersive reading. With two the
second one can sit on the grave and make sure nobody lays
any flowers. But some people still have a shrine at home.

BTW, the reason boumas (assuming that's even being tested
as a data source) seem irrelevant in RSVP is because in the
fovea you don't need boumas: the individual letters are clear
enough to unambiguously compose the word.

hhp

Denis_Masharov's picture

I understand it this way: the important thing is not to cross the line between the "recognizability" and the "personality" of the letter.

William Berkson's picture

John, I was trying to argue that there is not a linear relationship between the legibility of individual letters, summed, and the recognition of those letters in a word, even if they are well spaced. I don't know if it's true, but I can imagine that ornate, highly differentiated letters might be easier to distinguish, but do worse in word tests. By letter legibility I mean that the letters could be recognized in very short, millisecond times, without many errors. And the same for "word legibility": recognition in short time periods, reliably. I don't understand Hrant's theory, but on Peter's that might be possible.

dezcom's picture

Has there ever been any research, or even discussion, on what is "readable enough" for a given task? One end of a continuum might be a phone book number and another might be "War and Peace". We could place things like brochures, ads, posters, etc., wherever they may fall.
My point in this is that I assume there are points where the optimum for one task differs from another. There is a place where the form of the typeface may aid communication of the message of the text yet reduce continual reading. There may be a place where the form of the text has so little communication of content (beyond just as a letterform) as to be a non-factor.
When designing a typeface for a given target use, it might be valuable to know where these breaks occur.

quadibloc's picture

@hrant:
I actually believe that saccade-free reading is not only
achievable, but also highly desirable.

This comment might be more controversial than it deserves to be, because saccade-free seeing is not possible. If you stare at one point fixedly enough to suppress saccades, within seconds your visual field will narrow to a small point.

However, while eye movement is an essential component of vision, and hence of reading, and, on top of that, the saccade is typically a largely involuntary motion... you're still not nearly as wrong as all this makes you sound.

Because, of course, when reading, one doesn't stare at a single word, with one's eyes wandering back and forth along that word, the way one watches a movie or TV screen, or even admires a painting or sculpture.

One's gaze is in constant motion, but even with occasional saccades, in general, over the majority of words, one's gaze will pass unidirectionally, if one is reading at a good speed. Only if one is reading very slowly will a saccade be likely to take place during the reading of the majority of the words, or, say, even the majority of the words longer than three letters.

Thus, despite the universality of the saccade, the time scale of that phenomenon needs to be kept in mind. Normal, healthy reading of extended texts (i.e. where the reader is not moving his lips) is mostly saccade-free, and proper typography will facilitate that.

@Christopher Dean:
You put on eye tracking equipment, look at a dot, present words in an RSVP fashion, and ask the subject questions about what they just read. Look at the eye-tracker data, their eyes haven’t moved, and presto, saccade-free reading. It’s just one long fixation.

Possibly I misunderstood Hrant's comment, if in order to engage in the saccade-free reading of a book, it is necessary to somehow have its pages presented to me as one word at a time flashed on a screen.

But in checking, I see I misunderstood the word "saccade", and was mistakenly restricting it to involuntary movements of which we are generally not consciously aware which allow us to look at a wider visual field than our fovea can handle.

Those shouldn't normally be part of reading, but the voluntary movement of the eye from one word or part of a word to the next also counts as a saccade - and, indeed, some words are long enough not to be contained in the fovea if presented in a legible size.

One could still see the whole shape of a word and recognize it without having applied acute foveal vision to every letter of the word, but I think it would take a dedicated speed-reading enthusiast to be able to employ such a technique to advantage.

hrant's picture

> If you stare at one point fixedly enough to suppress saccades,
> within seconds your visual field will narrow to a small point.

I'm assuming that doesn't happen when the view
is changing, otherwise RSVP couldn't even exist.

> healthy reading of extended texts (i.e. where the
> reader is not moving his lips) is mostly saccade-free

?

--

If there could be a good way for a reader to trigger the refreshing of
the display* then we should be able to read [even more] immersively
by not having to saccade (even if my technique of enlarging** text in
proportion to parafoveal depth isn't implemented).

* And a way to trigger a regression...

** And probably also loosening.

hhp

quadibloc's picture

While you (Hrant) were replying to my post, I was editing it to reply to Christopher Dean's post as well as yours. Hopefully, things are clarified a bit.

John Hudson's picture

Christopher, I wasn't talking about RSVP -- and I don't think Hrant was either. Based on previous discussions going back several years, I understand Hrant to believe it is possible to massively increase typical reading speeds without presenting text in new ways à la RSVP. So I'm interested to know how Hrant sees 'saccade-free reading' working.

[Personally, I consider RSVP a kind of parlour game, sort of interesting in that it shows that our word recognition skills are immediately adaptable to 'reading' text presented in novel ways. But it doesn't seem to me very relevant to how we actually encounter text, to how we are likely to continue to encounter text, or to understanding a reading mechanism broken down into saccades and fixations.]

hrant's picture

Uh, if we agree that "massive" is like 20%, OK. :-)
That's if we start making fonts like we're supposed
to, as opposed to how we've been making them.

To go beyond that I believe we do in fact need
to move to the "active display" of text, instead
of forcing the reader to keep moving his field
of vision, not to mention his neck, hands, etc.

hhp

John Hudson's picture

Aside:

If you stare at one point fixedly enough to suppress saccades, within seconds your visual field will narrow to a small point.

Which can produce some very cool optical effects. I was staring at the Blessed Sacrament exposed at Brompton Oratory (http://farm6.static.flickr.com/5257/5461485268_403245c316.jpg), and suddenly a massive baroque church all became disconcertingly parafoveal.

John Hudson's picture

Bill, I don't think there is a strictly linear relationship between letter recognition and word recognition, but I do think they are mutually supporting, not in opposition, and improve or degrade more or less in parallel. Linguistic context gives word recognition an advantage that enables some letters to be more easily recognised in words than in isolation, but I don't see any basis to assume that such letters result in better word recognition than more individually legible ones would.

I do think, however, that individual letter recognisability probably gives diminishing returns to word recognition, which is why reading speeds seem only marginally affected across a significant range of typeface designs and styles. In other words, endlessly refining individual letter recognisability will obtain smaller and smaller gains in reading speed and accuracy. But the point is that they do seem to continue to move in the same direction, albeit at increasingly disproportionate rates. There doesn't seem to be a point at which improving individual letter recognisability degrades word recognisability.

hrant's picture

OK, let me try an actual image (although I feel that the
imaginative requirement of textual communication is
generally more fruitful)...

Taking the invented phrase from my (admittedly
extremely over-simple) "How We Read" page*:

* http://www.themicrofoundry.com/ss_read1.html

Imagine this is what an "active text display" (which I
now dub AcTeD :-) would show when the fixation is at
"wasn't". In addition though I would probably change
the serif-ness of the text, from none at the fovea to full
serifs in the parafovea. (But I'm not yet sure the "He"
there makes sense).

hhp

John Hudson's picture

Hrant: Uh, if we agree that "massive" is like 20%, OK.

In Thessaloniki you were claiming c. 700 words per minute. No?

Average adult reading speed in English is 250 words per minute. More significant to my mind is that average comprehension at that rate is only 70%. Comprehension demonstrably drops as reading speed increases, with tested 'speed readers' averaging a mere 50% in comprehension.

Now, I'm of the perhaps idiosyncratic opinion that the purpose of reading is to understand, so I'm only interested in increases in reading speed that do not reduce comprehension, and most of the time wish that people would take the time to read more slowly.

John Hudson's picture

Hrant, that's an interesting image. One thing I wonder is how the brain would cue the next fixation if the text setting is changing so completely during the saccade. And, of course, how would the AcTeD system know where that next fixation would be until it happened -- by which point it is too late to change the display?

hrant's picture

Well, I actually believe experienced readers (the best and only real
benchmark) can do over 400 wpm now, with existing typography.
Those much lower numbers are from flawed (far too fovea-centric)
experiments. But maybe 700 was sensationalist, sorry. Although I
do think that an AcTeD system could exceed that.

BTW, I do wish Kevin would point to the experiment that
he said shows improved reading with tighter spacing.

> I'm only interested in increases in reading
> speed that do not reduce comprehension

Me too.

--

Good questions (including some I already
wondered about above). Let's figure them out.

hhp

William Berkson's picture

>the experiment that he said shows improved reading with tighter spacing.

Kevin may (or may not) be alluding to this paper, which found that the word superiority effect disappears with very widely spaced letters. I predicted that this should happen on Peter Enneson's theory. Peter then did a literature search and found this paper, which had already demonstrated this. Kevin was kind enough to repeat some of the experiments, with variations, but his conditions didn't have the same precision as the early ones, done with a tachistoscope and printed matter, rather than on a computer, whose screen refresh is on the order of the times involved in flashing the words. And the results were not very clear. Peter's view is that to get more informative results we need a good control on what is called the SOA (stimulus onset asynchrony). But it's probably too much to try to get into this here.

quadibloc's picture

Since one thing has emerged from this discussion:

letter recognition bad
word recognition good

it has occurred to me how this new insight can be translated into a weird new typeface designed to supercharge the readability of text!

Since descenders and ascenders play such an important role in determining the shape of a word, have them become gradually bolder as they extend away from the x-height region of the letter.

Of course, the difference between, say, a, v, and r individually, and c, e, and o on the other hand, also affects word shape, so something might be done to exaggerate the general shape characteristics of small letters.

William Berkson's picture

>Since descenders and ascenders play such an important role in determining the shape of a word, have them become gradually bolder as they extend away from the x-height region of the letter.

They're called serifs :) And with old styles there is often swelling toward the serifs...

Kevin Larson's picture

I haven’t quickly found a reference showing that letters are more accurately recognized in time and distance threshold tasks with more space between them. I suspect that Bouwhuis & Bouma first showed this and that Pelli has shown it in his crowding studies. I am confident that Sheedy’s lab has done this with a distance threshold task, but I haven’t found a publication of that finding, and I have replicated it in an unpublished time threshold study. I’ll be talking about that study at the upcoming Digital Reading conference.

Rayner’s lab has published some of the findings showing that reading speed is faster with Consolas and Georgia when letter space is decreased and word space is increased.

Rayner, K., Slattery, T.J., Bélanger, N. (2010). Eye movements, the perceptual span, and reading speed. Psychonomic Bulletin & Review, 17 (6), 834-839.

Abstract:
The perceptual span or region of effective vision during eye fixations in reading was examined as a function of reading speed (fast readers were compared with slow readers), font characteristics (fixed width vs. proportional width), and intraword spacing (normal or reduced). The main findings were that fast readers (reading at about 330 wpm) had a larger perceptual span than did slow readers (reading about 200 wpm) and that the span was not affected by whether or not the text was fixed width or proportional width. In addition, there were interesting font and intraword spacing effects that have important implications for the optimal use of space in a line of text.

Kevin Larson's picture

Again, I agree with John: I believe that word recognition will generally follow letter recognition. Easy-to-recognize letters will lead to easy-to-recognize words.

But in the interest of adding meat for those who disagree, Arnold Wilkins has an interesting line of research in which he demonstrates that fonts with very high periodicity (e.g. the picket fence effect) lead to slower reading speeds. He has also demonstrated that modifying the letter shapes to reduce the picket fence effect increases reading speed. He didn’t look at letter recognition for his modified letter forms, but I assume they would perform more poorly on a letter recognition test.

Wilkins, A., et al. (2007). Stripes within words affect reading. Perception, 36, 1788–1803.
http://www.essex.ac.uk/psychology/overlays/2007-177.pdf

quadibloc's picture

@William Berkson:
And with old styles there is often swelling toward the serifs...

Yes, but I'm thinking of something on the order of the recent attempt at a typeface designed to mitigate dyslexia. Here is the illustration I was drawing to modify my earlier reply:

John Hudson's picture

That image amuses me. It makes heavier, and draws attention to, elements that in 5 out of 6 ascenders are identical despite the fact that they are part of different letters. How is that supposed to help anyone, let alone dyslexics?
