properties of human visual processing play a dominant role in constraining the distribution of print sizes in common use

enne_son:

I was sent a pre-print of the paper “Does print size matter for reading? A review of findings from vision science” by Gordon Legge and Charles Bigelow, after asking Legge about some critical comments he made on Miles Tinker’s work.

Now it’s available from the Journal of Vision here:

The comments below are mostly cobbled together from Legge and Bigelow’s own words.

Legge and Bigelow present evidence supporting the hypothesis that the distribution of print sizes in historical and contemporary publications falls within a psychophysically defined range of fluent print size — the range over which text can be read at maximum speed. While economic, social, technological, and artistic factors influence type design and selection, they conclude that properties of human visual processing play a dominant role in constraining the distribution of print sizes in common use.

They begin by discussing metrics for print size used by typographers and vision scientists. “Confusion over definitions of print size has been an impediment to communication between the two disciplines, but common ground is necessary to understand our hypothesis.” This is probably the most thorough and helpful discussion on type measurement I’ve read.
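For readers unfamiliar with the vision-science side of that metrics discussion, the key quantity is angular size: the x-height of the type expressed in degrees of visual angle, which depends on both the physical print size and the viewing distance. A minimal sketch of that arithmetic (the 0.5 x-height ratio and the 40 cm viewing distance are illustrative assumptions of mine, not values taken from the paper; x-height ratios vary by typeface, roughly 0.4 to 0.6):

```python
import math

def xheight_visual_angle_deg(point_size_pt, xheight_ratio=0.5, distance_cm=40.0):
    """Angular subtense (degrees) of the x-height for a nominal point size.

    point_size_pt : nominal (body) size in DTP points (1 pt = 1/72 inch)
    xheight_ratio : x-height as a fraction of body size (assumed, not measured)
    distance_cm   : reading distance (40 cm is a common nominal value)
    """
    # Convert the x-height to centimeters: points -> inches -> cm.
    xheight_cm = point_size_pt * xheight_ratio * (2.54 / 72.0)
    # Angular size of an object of height h viewed from distance d.
    return math.degrees(2 * math.atan(xheight_cm / (2 * distance_cm)))

print(xheight_visual_angle_deg(10))  # 10 pt type at 40 cm
```

At these assumed values a 10 pt face subtends roughly a quarter of a degree of x-height; the fluent range Legge and Bigelow discuss is defined in exactly these angular terms, which is why viewing distance matters as much as point size.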

Legge and Bigelow go on to discuss theoretical concepts from vision science concerning visual size coding that help inform an understanding of historical and modern typographical practices. Topics in this section are: Oculomotor limitations; Spatial frequency representation of letters; and Visual span and crowding. Included in this section are observations on optical scaling.

In their historical survey, Legge and Bigelow found three main trends: “(1) extension of type size range from a narrow cluster of fluent sizes in the 15th century to a broader range including several subfluent sizes in the 16th and 18th centuries; (2) nearly stable size range from the 16th to 18th century; (3) proliferation of type sizes in the subfluent range, from zero (for roman types) in the 15th century to 37% of the sample of typefounders’ specimens in the 18th century.”

I think the paper is ‘must reading’ for anyone interested in where typography and the science of reading meet.

Peter Enneson

Nick Shinn:

…less fatiguing…

The concept of fatigue is somewhat loaded, as during reading the blink rate drops to about one quarter of the normal (non-concentrating) rate.

Therefore if you blink more, you are closer to normal.

NOT blinking is in fact the stressful activity.

But as people can read for long stretches at one quarter the normal blink rate, reading at none of the weights is fatiguing.

Blink rate measures concentration, not fatigue.

William Berkson:

Nick, have you read any of Luckiesh? It might be appropriate to read him before dismissing him for making an elementary mistake—which he in fact didn't make.

The normal suppression of blinking during reading slowly breaks down with time-on-task, namely as we read continuously for longer times. This is the basic finding of Luckiesh. He also found increases in blink rate for very small text, low light and other conditions that you would expect to cause fatigue. Also the people involved in these tests report feeling more fatigued.

For Luckiesh, blink rate is an *inverse* indicator of readability: a greater increase in blink rate indicates lower readability.

Luckiesh was acutely aware of the fact that many things affect blink rate other than visual fatigue, and he tried to control the other variables.

So yes, not blinking is stressful, and causes a build up of fatigue as time goes on, and we blink more. Luckiesh argued that there are other stressors also involved, including non-optimal leading, line length, typeface, boldness etc. And these show up in blink rate.

The key idea here, which Luckiesh got from Ponder and Kennedy, is that blinks serve not only to wash the eye, but also as some kind of mental relief and refreshment. My own hypothesis, based on a number of discoveries in the past 30 years, is that blinks allow for the circulatory system to refresh brain chemicals such as dopamine used in neuro-transmission in the visual cortex.

russellm:

Yell louder or speak more clearly.

flooce:

A fascinating discussion.

Interestingly enough, just four days ago a type family directly influenced by readability studies appeared on MyFonts: Pyke.

An early version of the typeface was subjected to experimental legibility investigations of distance and time threshold methods*. Participants were exposed to different variations of the most frequently misread lowercase letters.

*Beier, S. & Larson, K. (2010) ‘Design Improvements for Frequently Misrecognized Letters’, Information Design Journal, 18(2), 118-137

William Berkson:

Thanks Florian for the link!

Not impressed by Pyke as any advance on readability.

First of all, as I said two years ago when Kevin Larson reported on some of this research at a TypeCon panel about readability, I very much doubt that ambiguity at thresholds is a big factor in readability, particularly for those with normal vision. There is no good scientific reason to think that ambiguity at thresholds has any big effect *suprathreshold*. There seems to be an untested assumption here that the effects are linear, and I think that is very unlikely to be true, given that other type effects are not linear with optical scaling.
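To make the nonlinearity worry concrete, here is a toy numeric illustration (every number in it is invented for illustration, not taken from any study): if reading speed saturates above a critical size, a typeface that is measurably worse at threshold can be practically indistinguishable at ordinary text sizes.

```python
import math

def reading_speed(size_deg, threshold_deg, v_max=300.0, k=10.0):
    """Toy saturating curve: speed (words/min) is zero at the acuity
    threshold and plateaus at v_max well above it. Purely illustrative."""
    if size_deg <= threshold_deg:
        return 0.0
    return v_max * (1.0 - math.exp(-k * (size_deg - threshold_deg)))

FONT_A = 0.10  # hypothetical acuity threshold, degrees of x-height
FONT_B = 0.12  # hypothetical font that is 20% worse at threshold

for size in (0.15, 0.30, 1.00):  # near threshold -> ordinary text size
    gap = reading_speed(size, FONT_A) - reading_speed(size, FONT_B)
    print(f"{size:.2f} deg: speed gap = {gap:5.1f} wpm")
```

Near threshold the gap is tens of words per minute; at one degree it is negligible. The point is not that this particular curve is right, only that a real difference at threshold is compatible with the difference shrinking to nothing at reading sizes—which is why the linearity assumption needs testing rather than presupposing.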

The impact of even some suprathreshold ambiguity is open to question. For example, the fact that 'l' and 'I' are confusable (suprathreshold) in many sans faces does not seem to have impeded the acceptance of these faces much at all, or lowered their readability noticeably. It doesn't seem to be much of a problem except for those of us who come from Illinois, like me. When letters are combined in words, it seems that the impact of even suprathreshold ambiguity is greatly attenuated.

The paper of Legge and Bigelow, I think, reinforces the point that suprathreshold factors are critical by showing that the critical reading size is twice the visibility threshold size. What happens to produce that difference is a big part of the story. Following the results of Luckiesh and astute analysts from Theodore Low De Vinne (1885) to Frutiger (today), I suspect that readability factors have more to do with rhythm (affected by character width) and the weighting of strokes. Again, this is reinforced by this paper, which looks to periodicity (tested using gratings) and crowding as critical factors in reading.

The other modifications of the Bodoni style that Sofie Beier undertakes do, I think, contribute to readability. But these features are all already present in old style and even in transitional type. And Beier doesn't introduce the descending f or hooked l in the roman, so the "removal of ambiguity" part applies only to opening up the counters—which is already a feature of old style and transitional serif typefaces. And the lower thick-thin contrast and fewer parallel vertical edges are also features of old style.

So what Pyke does is outfit Bodoni with some old style features while keeping the proportions. Personally, I don't think the wide proportions of Didones are conducive to readability, on the grounds of rhythm and spacing. I acknowledge that this is very debatable and hasn't been tested. But I don't think there is any reason, based in either research or type history, to think that Pyke is any more readable than any good old style or transitional text face.

Let me be clear: reducing ambiguity is a good thing, in my view, but it is far from the main issue. And the reduction in ambiguity in Pyke seems to be no greater than in old styles and transitional types, nor, for example, in the recent types of Gerard Unger.

Incidentally, Fedra Sans (which also has very open counters) did introduce a descending f in the roman. I personally like it, but it is interesting that Bilak has produced a version without the descending f, evidently in response to comments from users. So the descending f, *in spite of familiarity* from its use in the italic, and despite less ambiguity at the threshold of visibility, was trumped by the familiarity of the normal roman alphabet.

Nick Shinn:

I'd better not say anything.

Kevin Larson:

Nick, and everyone else interested in psychology, I strongly recommend reading Keith Stanovich’s book How to think straight about psychology. It is a fantastic, easy-to-read book that can answer many of the questions you ask here. I think it should be part of every Psych 101 class.

flooce:

Thank you, William, for your account. It is convincing that one cannot extrapolate linearly from a size at threshold to a bigger size. After all, there are fonts especially designed for this threshold region, like one very recent typeface family with cuts only for very small sizes, from 3 pt to 7 pt; I forgot the name. I agree with your "not impressed" assessment; I just wanted to add it to the discussion here, to give an example where research influenced design.

Other than that, reading is such a learned and cultural thing too, and so many of our capabilities exist because we adapt to our environment; for that very reason I wouldn't be too sure that it is possible at all to find a "hardwired" connection from design to how one processes the read media.

Kevin Larson:

Bill, I hope you will read Sofie's and my paper on letter recognition. I think it’s a really great collaboration between typographer and psychologist. Though at this point, I think Sofie could rightly claim to be both. A common critique of studies comparing Times New Roman to Helvetica is that they are not comparable because they differ in so many dimensions. In this paper we compare letters created within the same typeface style. Three typefaces were designed for and examined in the paper, including Pyke. Interestingly, the findings were fairly consistent across the three typefaces. I am surprised by your strong critique of threshold methodologies. Luckiesh used threshold methodologies, and data collected from threshold methodologies is known to correlate with data collected from reading speed studies.

Nick Shinn:

Nick, and everyone else interested in psychology, I strongly recommend reading Keith Stanovich’s book How to think straight about psychology.

Kevin, I could recommend several books on things you should think straight about.
Which "101" books have you read about art, design, advertising, publishing or typography?
BTW, do you have any experience or qualifications in these fields?
How far did you study these subjects at high school?

William Berkson:

Kevin, I look forward to reading the paper. Luckiesh used both threshold measures and suprathreshold measures, and in his work they did not match.

My critique is not that thresholds are irrelevant, but that they need to be interpreted with great caution, as what happens above threshold is not linear. Also there are many different thresholds—contrast, time, distance, etc.—and different aspects above threshold, such as reading speed, reading comprehension, and reading fatigue. Can you give me some references—preferably links—on thresholds and reading rates being highly correlated? Do you have a link to your paper with Sofie Beier?

Luckiesh measured visibility thresholds—essentially contrast thresholds—with his visibility meter, which, however, extrapolated above threshold. Mark Rea, a current leading illumination expert, was critical of the extrapolation above threshold, though, and said he would not use the visibility meter today to measure visibility.

Whatever the reliability of the visibility meter—and I am not clear about this—visibility and readability (a suprathreshold measure, here measured by blink rate) differed. For example, Memphis Bold was more visible than Memphis Medium, but less readable. Reading speeds for the regular and bold were nearly the same. As for the general correlation of these various factors with reading speed, I don't know about this, but I do know that Luckiesh measured reading speed two different ways, and found it less sensitive than blink rate, with proper controls on other factors. And readability as measured by blink rate did not follow the same curves. He measured normal, unhurried reading speed, and also maximum reading speed using text rotating through a window—basically similar to the RSVP method of today, referred to in Legge and Bigelow's paper here.

As you know, I think Luckiesh's work needs to be repeated and extended to see to what extent it is valid and what we can learn from it.

What I am confident about is that more than one measure, not only reading speed, is desirable. One of the things that excited me about Luckiesh's work is that he used three or four different measures. He at times wrongly minimized the importance of reading speed, but he actually used it regularly, as well as blink rate and visibility. I think we are going to get really robust results on readability when we have better theories—which would include the grating and crowding effects in Legge & Bigelow—and tests using more than one variable. That is also the recommendation of a leading methodologist in psychological testing, the late Donald T. Campbell.

Kevin Larson:

Nick, I see that you have taken offense at my recommendation. I can assure you that I did not mean anything derogatory by a 101 level book. I used 101 to mean a great starting place. I believe that was the first book that I asked Sofie to read as her Ph.D. advisor at the Royal College of Art. While I would not consider myself a typographer, I have taken college level classes in art, design, advertising, and typography. I am also an avid reader and would guess that I have a better than average chance of having read any book about typography that you can name.

Bill, I don’t think the publisher has made the paper freely available on line. If you send me an email I would be happy to send you a reprint.

Nick Shinn:

Right Kevin, and thank you for your gracious response.

John Hudson:

Florian: so many of our capabilities exist because we adapt to our environment; for that very reason I wouldn't be too sure that it is possible at all to find a "hardwired" connection from design to how one processes the read media.

I think you have that backwards. We only adapt to environment insofar as we have a 'hardwired', i.e. genetic, capability to do so. Reading is a remarkable human capability that sits on top of hardwired capabilities of pattern recognition and cognition that must, evolutionarily, have preceded the invention of writing, probably by hundreds of thousands of years. [Recent studies on human nutrition that I have been reading suggest that the human genome has seen no significant evolution for 60,000 years, and we can pretty safely assume, I think, that core perceptual capabilities that would have been essential to hunter-gatherer survival over two million years are much older than that.] So there is, at least on the perceptual side, a hardwired component to reading which is not 'a learned and cultural thing', which in turn suggests that there may be hardwired performance benefits to certain kinds of shapes, or certain kinds of spacing, or particular levels of dissimilarity of form and similarity of spatial frequency, etc.

quadibloc:

@William Berkson:
Not impressed by Pyke as any advance on readability.

I thought that Pyke Micro looked interesting in that regard.

However, I was puzzled by their description of the face.

They said that the "i" with a tail, as in the italic, was more legible, and that they incorporated this finding into the text version of the face. I checked, and the "i" was conventional in the "text" version as well as in the others I looked at.

And they admit that Didones are not terribly legible, and they tried to retain the spirit of the Didone with legibility improvements, instead of starting from something that was already legible. So instead of a new frontier in legibility, apparently all it is supposed to be is a humanized Didone.
