Legibility/Aesthetics - Improving the reader experience.

Chris Allen's picture

Afternoon all,

I'm currently looking at how studying the effects of aesthetics on legibility can help improve the reader experience.

Now I know this is a much-debated area: the whole question of how you measure legibility (http://typophile.com/node/41365 is a great thread on this), plus the fact that aesthetics are subjective (as I believe legibility is, to a point: reader preference, familiarity, etc.).

What I am looking at is whether conducting legibility tests, followed by subjective aesthetics tests (in which samples with varying levels of creative elements would be created, with the same measures applied as in the legibility studies, along with preference ratings, etc.), makes it possible to find a balance point between the two. The theory is that if you can find the point where legibility is maximised, and then find the point where the aesthetics do not begin to degrade that legibility, you can effectively improve the reader experience.

My thought on the possible application of this is in material such as study texts and required reading. For example, if you can increase the aesthetics to the point where they are maximised without decreasing the legibility of the text, can you in theory improve the reader experience, and would it have an additional effect on other aspects such as retention and comprehension?
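Purely to make the idea concrete, here is a toy sketch of the selection rule I have in mind (the numbers and the tolerance are entirely invented, not from any study): among samples with increasing levels of decorative treatment, keep the most decorated one whose measured legibility stays within a small tolerance of the best legibility score.

# Toy sketch of the "balance point" selection -- hypothetical data and threshold.
samples = [
    # (creative_level, legibility_score, aesthetics_rating) -- invented numbers
    (0, 0.95, 2.1),
    (1, 0.94, 3.0),
    (2, 0.93, 3.8),
    (3, 0.85, 4.2),
]

TOLERANCE = 0.03  # how much legibility loss we are willing to accept
best_legibility = max(score for _, score, _ in samples)
candidates = [s for s in samples if best_legibility - s[1] <= TOLERANCE]
balance_point = max(candidates, key=lambda s: s[2])  # most aesthetic of the acceptable set
print(balance_point)  # -> (2, 0.93, 3.8)

The real work, of course, would be in getting trustworthy legibility and aesthetics measurements in the first place; the selection itself is trivial once you have them.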

I know this possibly sounds a little vague, but I'm really interested in people's thoughts on this.

Thanks

John Hudson's picture

Nick: Letters and ligatures are no doubt seen as discrete objects, but it’s not logical to argue that a collection of non-contiguous, non-overlapping things are seen as a singular item.

Let's back up a bit here. The conventional word space is a visualisation of the word, a mechanism that indeed allows us to see ‘a collection of non-contiguous, non-overlapping things’ as a unit based upon proximity of those things within the unit and relative distance from other collections. This isn't in any sense ‘not logical’. A ‘collection’ is a singular noun: your own term implies unitisation of multiple things grouped together.

We have strong evidence that the length of words perceived in the parafovea influences saccade distance, so that is one obvious way in which the visualisation of words, as separated by spaces, contributes to how we read. But that is a discrete phenomenon, separate from word recognition, which is demonstrable by the fact that saccade distances remain predictable whether the words ane os arn nct rcaclahlc. What remains at issue is the cognitive process by which either word shape, or sub-word shapes other than individual letters, contribute to word recognition. Denis Pelli's study indicates a quantifiable contribution of word shape to reading speed, which is considerably smaller than the contribution of letters (as I would expect) but still significant.

What I understand Peter and Bill to be questioning is how the contribution of the letters takes place, positing that familiar patterns of proximal shapes may be recognised as units between letter and word. This isn't unreasonable, and certainly not illogical, since we know that individual letters are recognisable if key features remain identifiable even if the rest of the letter is degraded, and these features are not necessarily contiguous. In other words, we already know that ‘non-contiguous, non-overlapping things’ can be recognised as ‘singular items’ -- in this case letters -- so it isn't unreasonable to posit that e.g. a pair of letters might be recognised in a similar way.

[None of this should be taken to imply that I accept the models that Peter and Bill put forward as true. Only that I don't think they are illogical.]

William Berkson's picture

Eben, I think calligraphy is not such a good comparison, unless it is a formal hand. The formal scribal hands, intended for extended reading, are more regular than other handwriting. I don't know about Arabic, but I would bet that anything for extended text would be much more regular than when the purposes are more decorative. And type is generally more even than handwriting, because of the greater ability to adjust the weights of strokes and joins.

Japanese is a special case because they regularly mix more than one script. I think I remember someone writing here on Typophile that the different scripts are deliberately differentiated, so that the reader can immediately recognize which script is being used and interpret it accordingly. So it is an interesting case, and I don't know enough to say anything about it. Perhaps if there are any Japanese graphic designers here, they could tell us about the issues of color and mixing scripts.

John Hudson's picture

Bill: I don’t know about Arabic, but I would bet that anything for extended text would be much more regular than when the purposes are more decorative.

The standard formal literary hand for most Arabic text, and the model for Arabic text typography no matter how debased, is the naskh style. Like all the classical styles, naskh is rule-based and the proportions of letters are strictly defined relative to the pen width (which makes the idea of a bold or light naskh meaningless: if you change the weight-to-size proportions of the letters you no longer have naskh). But the nature of the rules is such as to allow considerable variation in performance through the use of discretionary alternate forms, elongations, etc., as the rules permit. Complexity gives the illusion of freedom, but once one starts analysing what is happening at the level of letters and their connections, the rules become apparent.

As announced at TypeCon, Brill Academic will be publishing Tom Milo's Grammar of the Arabic Naskh Script, which describes the rules of the naskh style in terms of competence and performance.

enne_son's picture

John Hudson: “None of this should be taken to imply that I accept the models that Peter and Bill put forward as true.”

[reacting to your “put forward as true”]

Ludwig Wittgenstein / Vermischte Bemerkungen: “Das eigentliche Verdienst eines Kopernikus oder Darwin war nicht die Entdeckung einer wahren Theorie, sondern eines fruchtbaren neuen Aspekts.” [ What a Copernicus or a Darwin really achieved was not the discovery of a true theory, but of a fertile point of view. ]

John: “[…] positing that familiar patterns of proximal shapes may be recognised as units between letter and word[…]”

[not sure if I can identify with that]

What I'm positing is that, when it comes to familiar words, recognition for skilled readers occurs directly from role-units and role-unit evoked forms, “without mediation through [internal] letter representations.” [ Paller & Gross / 1998 ] Letters contribute by supplying the role units (bowls and stems) and the role-unit evoked forms (counters). I think what Bill and I are proposing is that recognition doesn't happen through a complex combinatorial arithmetic that proceeds in hierarchical steps from simple features to entire words. Recognition happens simply and directly on the basis of a read-out of identity, local combination and location at the role-unit level. Something like the combinatorial arithmetic may happen at various levels when we are learning to read, encounter unfamiliar words or words in unfamiliar scripts, but not in what I'm calling the 'rapid automatic visual word-form resolution' that characterizes the immersive reading of motivated fluent readers.

Maybe this doesn't feel like a fertile point of view to those who don't already share it, or can't readily assimilate the terms.

William Berkson's picture

Copernicus and Darwin were so fertile because they had theories that did capture fundamental truths, even if they weren't totally correct. Wittgenstein doesn't get it right, as usual.

I'm not ready to declare Peter to be Copernicus, but if he's right, and I suspect he is, it is because he is on to something fundamental about reading.

What that something is, as Peter noted to me recently using a phrase in yet another paper, is that visual processing of words does not stop with identification of letters. In the slot processing view, once letters are identified visually, everything else is cognitive. There is a shift to abstract units, the letters, and these are used in some kind of combinatorial look-up process. But the fact that computers are good at spell-check in this way doesn't mean that our brains are.

In fact almost all of us are pretty pathetic when it comes to arithmetic, compared to a computer. And I have no problem reading all kinds of words that I would probably misspell, as I am a poor speller. All of which says to me it is likely that something different is going on in the brain, more like the matrix resonance model I explained above. In this, visual processing is not restricted to identifying letters, but a whole visual word pattern of sub-letter units is represented in a matrix. And then that matrix paints our whole visual word memory bank, and when something resonates, bingo, the meaning attached is sent on to further cognition.
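To make that concrete, here is a toy sketch of the kind of matching I mean (the feature codes, the tiny lexicon and the word width are all invented for illustration; this is not any published model). A word is encoded as a matrix of sub-letter features by position, and recognition is simply whichever stored word-matrix "resonates" (correlates) best with the input. In the toy the features are derived from letter identities only because we have no raster input to extract them from; the matching itself never consults letter slots.

# Toy sketch of a matrix resonance match -- invented feature codes, tiny lexicon.
import numpy as np

FEATURES = ["stem", "bowl", "ascender", "descender", "counter"]
# Made-up sub-letter feature assignments, purely for illustration.
LETTER_FEATURES = {
    "b": {"stem", "bowl", "ascender", "counter"},
    "d": {"stem", "bowl", "ascender", "counter"},
    "o": {"bowl", "counter"},
    "g": {"bowl", "descender", "counter"},
    "l": {"stem", "ascender"},
    "t": {"stem", "ascender"},
}

def word_matrix(word, width=6):
    # Encode a word as a features-by-position matrix.
    m = np.zeros((len(FEATURES), width))
    for pos, letter in enumerate(word[:width]):
        for i, feature in enumerate(FEATURES):
            if feature in LETTER_FEATURES.get(letter, set()):
                m[i, pos] = 1.0
    return m

def resonance(a, b):
    # Normalised correlation between two whole-word matrices.
    a, b = a.ravel(), b.ravel()
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

LEXICON = ["bold", "gold", "told", "blot"]
TEMPLATES = {w: word_matrix(w) for w in LEXICON}

def recognise(stimulus):
    # Return the lexicon word whose template resonates most with the stimulus;
    # the comparison is against whole-word feature matrices, not per-letter slots.
    m = word_matrix(stimulus)
    return max(TEMPLATES, key=lambda w: resonance(m, TEMPLATES[w]))

print(recognise("bold"))  # -> "bold"

The point of the sketch is only that a whole-pattern match over sub-letter features and their positions can do the job in one step, without an intermediate stage of abstract letter identities being looked up.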

To me the fact that we are bad at arithmetic compared to a computer, and great at pattern recognition compared to a computer, means that something significantly different is going on in my head from the way the computer I'm typing on now is programmed.

enne_son's picture

The paper I quoted is "Cracking the Orthographic Code: An Introduction," by Jonathan Grainger. In it Grainger writes: “Individual words are undeniably the building blocks of the reading process, and a word is primarily an orthographic object [...]” [my emphasis], and later, “[…] the central idea here is that there is a pre-emption of visual object processing mechanisms during the process of learning to read.” That is, visual processing gets us to spellings, and then one of various complex and computationally heavy post-perceptual schemes takes over.

It's the pre-emption of visual processing idea that's got me up in arms.

The word-as-primarily-an-orthographic-object idea is in stark contrast to the Noordzij notion of the word as primarily a visual rhythmic object constituted by the whites of the word and the black of the letters. In the scheme Bill and I are proposing, processing can 'stop with' information at the role-unit (black) and evoked form (white) level.

John Hudson's picture

Peter: What I’m positing is that, when it comes to familiar words, recognition for skilled readers occurs directly from role-units and role-unit evoked forms, “without mediation through [internal] letter representations.” [ Paller & Gross / 1998 ] Letters contribute by supplying the role units (bowls and stems) and the role-unit evoked forms (counters).

That's pretty much exactly what I meant by 'familiar patterns of proximal shapes may be recognised as units between letter and word'. But you express it much more precisely. Thank you.

William Berkson's picture

>processing can ’stop with’ information at the role-unit (black) and evoked form (white) level.

I think it's important to add that the information about letter parts or aspects also includes their relative location within the whole word. That's what the model of the 'matrix' is supposed to express: the location in the matrix would correspond to relative position in the visual representation of the word.

enne_son's picture

Thanks John. It's important that what we're proposing here be clear and not confused with conventional word-shape views.

Bill, I think that's right. The information that needs to be compiled includes information about relative location within the whole word or bounded map. Also included is 'local combination' detection. This involves where, on the whole word template or map, joins and vertices sit. (The latter qualification might make it seem as if in my scheme letter identity is coded after all. But I don't think that's necessarily implied.)

John Hudson's picture

I wonder what Mr William of Occam would have to say about all this?

It seems to me that we have a number of reasonable hypotheses about what might happen during reading, any or a mix of which might well be true, but I'm not sure if any of them are necessary.

William Berkson's picture

Occam's principle only applies if you can't distinguish between alternative hypotheses by experiment. And even then its importance is debatable.

Here I am suggesting experiments whose results will, I think, seriously question the slot processing view, and follow from the matrix resonance model. Hence Occam is irrelevant.

As to what is necessary, no hypothesis is necessary. As Bertrand Russell said about the "economy of thought" view of science (Ernst Mach): "The greatest economy of thought is not to think."

But if you want to learn, then developing alternative hypotheses and testing them against one another has proven extremely helpful in advancing our understanding of the world and ourselves.

John Hudson's picture

I was responding to Peter's use of the word 'need': 'The information that needs to be compiled includes information about relative location within the whole word or bounded map.' To be fair, though, his comment should be read as presuming the validity of your hypothesis and not, as it first struck me, as begging the question.

This is a very long thread, and I did not read all of the middle section. Can you direct me to the post that outlines your proposed experiment(s)?

William Berkson's picture

John, in a long post on Nov 15 I lay out the matrix resonance model; it is a model mechanism for the kind of visual processing that Peter has been arguing for. On Nov 16 and 17 I describe the experiments. On Nov 24 I reply to Denis Pelli. On Nov 26 is my rebuttal to his reply. On Nov 30 I propose some additional experiments.

All of these are alas very long posts, but if you print them out they will be easier to read. Of course that's an indication that there's more to reading than slot processing of letters :)

Hmmm. That was a joke, but it just occurred to me that it would be interesting to test printed material of the same coarse dpi as the screen, and see if, e.g., the ability to proofread and the reported reading comfort deteriorate the same way they do on screen. This would get at whether the back-lighting is a problem with screen text, or whether it's mainly a resolution issue. Not a test of orthographic processing vs gestalt processing, but it could be revealing.

Kevin Larson's picture

it would be interesting to test printed material of the same coarse dpi as the screen, and see if, e.g., the ability to proofread and the reported reading comfort deteriorate the same way they do on screen. This would get at whether the back-lighting is a problem with screen text, or whether it's mainly a resolution issue.

My guess is that resolution is the primary problem and not the backlight. To finish the test design you would want printed copy at high and low resolution and backlit text at high and low resolution. To get high resolution backlit text, you can print to transparency material. When you put transparencies onto a white screen, the text looks amazing. So very crisp. Interestingly, you get color fringing effects created by the RGB subpixel structure on the screen.

I would love to see this study done as it is an interesting question, but don’t have the time to do it myself right now. If there is a student interested in taking this up as a project, I would be happy to collaborate with them. I think we have code around here for printing text at lower resolution than the printer’s native resolution. Relatedly, there was an article in the October issue of Human Factors on optimal text versus background color combinations. I haven’t read it yet.
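For anyone wanting to prototype the print side of this, here is a minimal sketch (using Pillow, with a font path you would have to supply yourself; this is not the code I mentioned above) of one way to fake "printed text at screen resolution": rasterize the text at a coarse dpi, then scale it up to the printer's dpi with nearest-neighbour resampling so the coarse pixel grid survives on paper.

# Minimal sketch: simulate coarse screen-resolution text for high-resolution print.
from PIL import Image, ImageDraw, ImageFont

SCREEN_DPI = 96     # the coarse resolution being simulated
PRINTER_DPI = 600   # the printer's nominal resolution
POINT_SIZE = 12     # nominal type size in points

# Rasterize at screen resolution: 12 pt at 96 dpi is a 16 px em.
px_size = round(POINT_SIZE * SCREEN_DPI / 72)
font = ImageFont.truetype("Georgia.ttf", px_size)  # hypothetical font file path
low = Image.new("L", (800, 100), color=255)
ImageDraw.Draw(low).text((10, 30), "The quick brown fox jumps over the lazy dog.",
                         font=font, fill=0)

# Upscale without smoothing so each coarse pixel becomes a visible block in print.
scale = PRINTER_DPI // SCREEN_DPI  # 6x here, so the effective print dpi is 576
high = low.resize((low.width * scale, low.height * scale), resample=Image.NEAREST)
high.save("coarse_text_for_print.png", dpi=(PRINTER_DPI, PRINTER_DPI))

That gives the printed half of the 2x2 design (high vs low resolution, reflective vs backlit); the transparency trick described above would cover the high-resolution backlit cell.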
