Differences distract from reading - true or not?

Denis_Masharov's picture

I participated in Facebook discussions about the differences in the details of letters (drops, terminals, etc.).
There were two opinions:

1. Legibility should be based on the ability to see small differences between characters: the more alike the letters, the more difficult they are to read.

2. Even the smallest difference in the details "separates" a letter from the word, draws attention to it, and thus slows reading.

Legibility - does it help or hinder readability?

In both views there is a grain of truth... I'm confused...
Help me understand; please feel free to express your opinion.

John Hudson's picture

Kevin, I'm trying to understand what changes Wilkins made to letterforms, but am having trouble making sense of his description and illustrations. In the abstract, he writes

By subtly distorting the horizontal dimension of text, and thereby reducing the first peak in the horizontal autocorrelation, it is shown that the speed of word recognition can be increased.

From this and the illustration, it looks as if spacing is being changed, not shapes, but maybe I'm misunderstanding.
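Wilkins's "first peak in the horizontal autocorrelation" refers to the periodicity of vertical strokes along a line of text (the "picket fence" discussed later in this thread). A minimal sketch of how that peak could be measured, assuming the input is a 1-D ink-density profile of a rendered line (the profile here is a made-up periodic example, not data from the paper):

```python
import numpy as np

def first_autocorrelation_peak(profile):
    """Return (lag, strength) of the first peak in the horizontal
    autocorrelation of a 1-D ink-density profile.

    A strong peak at a small lag indicates regularly repeating
    vertical strokes - the 'picket fence' effect."""
    x = np.asarray(profile, dtype=float)
    x = x - x.mean()                       # remove the DC component
    ac = np.correlate(x, x, mode="full")   # full autocorrelation
    ac = ac[ac.size // 2:]                 # keep non-negative lags only
    ac = ac / ac[0]                        # normalize so ac[0] == 1
    # find the first local maximum after lag 0
    for lag in range(1, len(ac) - 1):
        if ac[lag] > ac[lag - 1] and ac[lag] >= ac[lag + 1]:
            return lag, ac[lag]
    return None

# A perfectly periodic 'picket fence': a stroke every 4 pixels.
fence = np.tile([1, 0, 0, 0], 20)
lag, strength = first_autocorrelation_peak(fence)
# the peak falls at lag 4, the stroke period
```

On this reading, Wilkins's horizontal distortion works by breaking up the stroke period, which lowers `strength` at the first peak without changing the letters' identities.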

hrant's picture

Kevin, thank you for those.

> Digital Reading conference.

I just looked online but was unsure what
event you mean - could you provide a link?

John S:
That's the sort of thinking we need. For example if we really
want to test the idea that legibility and readability do exhibit
a digression and are not "parallel", we could make a font
where the letters are hard to decipher but they come together
to form great boumas. Hmmm, what about Blue Island?
http://www.myfonts.com/fonts/adobe/blue-island/

Furthermore, taking John H's correct observation that your
example mainly emphasizes uniform elements (which means
it only hinders reading) to its terminus, one can only really
arrive at one place: alphabet reform! :-) My dearest -if lately
mostly dormant- project.

hhp

rs_donsata's picture

I just couldn't resist commenting. Excuse my empirical and light-hearted assumptions.

As I understand it, legibility is mainly concerned with what could be considered the efficient recognition of a given signal (a message or part of a message). For a signal to be best recognized, it needs to contrast with its ambient (noise) and to hold enough meaningful information in a simple way. Readability is concerned not only with the efficient recognition of a signal but also with its correct interpretation.

A message (a signal or set of signals) can be very legible (recognizable as a message) through contrast and simplification, but excessive contrast and simplicity (let's say a monocase, square modular sans serif) will often harm readability because it will hold low redundancy and probably low context. Likewise, a set of highly contrasted signals (an irregular type style) will introduce noise (meaningless or even distracting signals).

I had a big fire in my house last November, so I lost most of my books, but I recall reading one about legibility and semiotics (maybe by François Richardeau or Roy Harris, but I'm not really sure) that stated something like this:

As long as the typesetting is fairly decent, there are many factors more important to readability than typeface style, such as context and redundancy, because reading, as a perceptive process, is the way the mind makes sense of the chaotic set of signals it receives through the senses. Basically, our mind is always trying to guess, to anticipate and to connect the dots to make sense, to find meaning.

There are many theories about how reading takes place: letter by letter, phonetic recreation in the mind, by word shapes, by meaningful chunks... This book stated that our mind actually uses a mix of various strategies to read a text, with the sole purpose of guiding and feeding the perceptive process of guessing and confirming to build meaning.

So a text written in an engaging way (to keep interest), with a predictable syntactic structure (to aid anticipation) and with lots of contextualization and information redundancy, will help readability more than a perfectly and painstakingly typeset text of, let's say, financial business babbling. This of course is highly dependent on the audience, because different audiences will have different degrees of context, redundant information and interest in the text.

Now, there are also many reading levels: when we scan a list, when we are deeply engaged in a novel, or when we are scanning the columns of a newspaper. And it is true that different type styles aid each reading level in ways other type styles can't:

* A typeface for a phone-book listing will try to make letters stand out more individually, because you want to identify as fast as possible the small signals that will help you navigate and find your information.

* A typeface for a book will try to create a visually easy texture with high graphic redundancy (that is, many dull, patterned cues that help you distinguish different signals) to avoid fatigue and distraction, and to help you focus your mind on the fast building and understanding of meaning rather than on the correct identification of every signal. I agree with John Hudson here that the limit to reading speed is comprehension.

* A display typeface for a billboard or for magazine headlines will try to create more unique word contours to help you guess the words without having to go letter by letter, because all the context you have (photos and graphics) does not directly relate to the linguistics of the word. So your mind may have a vague idea of the meaning of the words from the graphic context, but needs visual cues as to which specific word it is.

hrant's picture

And an hour or so later the ATypI email arrives!
http://www.rit.edu/cias/readingdigital/

hhp

Kevin Larson's picture

John about the Wilkins paper: "From this and the illustration, it looks as if spacing is being changed, not shapes, but maybe I'm misunderstanding."

I see distortions of the letter shapes here. They look like they are being horizontally compressed and expanded. I’ve seen other proposed stimuli where the vertical strokes were angled to further reduce the picket fence. I believe that Wilkins was working on this project with Paul Luna.

1996type's picture

I've tried to study legibility for a short while. And then I stopped. I've decided that legibility is simply too illusive to discuss theoretically. Best trust your intuition, and don't be a chicken, like so often in type design.

All I know is that nothing reads like Lexicon.

Denis_Masharov's picture

This is all of course very interesting, but no one has replied to my question.
On the one hand, a letter is recognized by its characteristic details; on the other hand, an overly specific detail draws attention to the letter, makes the eye stop on it, and thus interferes with reading.

The more specific the details - the faster the recognition of letters.
The more specific the details - the slower the reading.

This is the contradiction.

Té Rowan's picture

@John – Why do I get the feeling that you are looking at @quadibloc's image as a finished design?

@Denis – That seems to be one of the few things agreed on, that the glyphs in a text face are choir singers.

@Hrant – But isn't that RSVP thingy a foveal-only test?

hrant's picture

> illusive

Do you mean elusive? With that I would agree, but readability is
no illusion (some other things in type design certainly are though).

Chicken? That's Intuition Island's entire demographic.

> nothing reads like Lexicon.

That's puppy-love. Beauty does not equate to functionality.
If you do have a preference for things Dutch, look instead to
the work of Bloemsma. Although I guess he was half Greek.

> But isn't that RSVP thingy a foveal-only test?

Yes, and that was my complaint against it. But since people
can read for relatively long periods using RSVP (just not very
realistically) John S's point about the "collapse" associated with
staring seems to be moot when the view is changing. I also
think blinking might have something to do with it (which Bill
will enjoy hearing, since it might tie in Luckiesh).

Denis, threads have a way of drifting! Your main original point
is very valid, there's just no firm answer yet*... because in fact we
first need to work out the nitty-gritty this thread drifted towards.

* Andreas's "balance" comes closest.

hhp

hrant's picture

Denis, let me try:

1) Make each letter contribute as much individuality
to boumas* as possible, without triggering an errant
fixation from a previous line - I mean when the eye
jumps too far ahead to check out a feature/pattern
that's standing out; I believe that the issue of shapes
"belonging" with each other is merely a byproduct of
this, at least in terms of immersive reading.
2) Get the whites of the letters talking to each other
as much as the blacks are.

* The shape of a word, or more
generally of a cluster of letters.

hhp

rs_donsata's picture

Denis, reading a long text requires a big amount of mental effort, and it is best done when the text has redundancy, that is, when there are many cues pointing in the same direction. That is why serif types are commonly read more comfortably in long texts: they contain more visual information. That redundancy, however, needs to be part of a muted pattern, because individually standing-out features will draw attention away from the process of understanding what you read and into the process of identifying the quirk, or even into the effort of systematically trying to ignore it.

However, if you are not immersed in a long read but rather scanning a list, a table, or even reading a headline or a logotype, then you have more room for individuality among letters.

So my direct answer is: legibility is not unidimensional. It depends on the kind of perceptive process, because legibility is a part of readability, and so there are ways of making a text legible that will hinder the readability of long texts, but also ways that will help it.

hrant's picture

> serif types .... contain more visual information.

1) On the other hand serif forms have lower legibility.
2) The serifs also serve to bind the blacks, basically
creating a higher layer of information.

hhp

William Berkson's picture

Hrant, who says serif forms have lower legibility? If they do, it would argue for the kind of tension between legibility and readability that Denis raises. The strong preference for serifs in extended printed text, in spite of widespread use of sans, I take as probably indicating higher readability, though this is not yet proven.

hrant's picture

I think it's been shown empirically (which is easy to
imagine, unlike the empirical proof of the superiority
of serif fonts for extended reading). And that "tension"
is something I believe in.

hhp

quadibloc's picture

@John Hudson:
How is that supposed to help anyone, let alone dyslexics?

It wasn't suggested that this was designed to help dyslexics. They were mentioned for comparison with the other recent typeface that was so designed, which was roundly criticized.

As for helping other people to read faster: while this wasn't a serious proposal, the theory behind it is the following - yes, five letters all have the same kind of ascender. But the purpose was not to make individual letters easier to recognize, it was to make individual words easier to recognize.

Not all words have letters with ascenders located in the same part of the word.

John Hudson's picture

Denis, I think your premises are mistaken, and hence lead to a mistaken contradiction. What is important to the legibility of individual letters is not the shape or size of particular details -- a term I would reserve for features that contribute to the style of a letter, e.g. terminal shapes, kind of serifs, etc. -- but the proportional positioning of structural features, i.e. shape. This is well demonstrated by the Fiset et al. bubble testing, which reveals those structural elements that are relied on for letter identification. If this is paired with the conclusions of Denis Pelli's work on spatial frequency, and the presumption made that letters and their features within an individual typeface design will be handled in a way that keeps them within a single spatial frequency channel (unlike the heavy extenders type that Quadibloc illustrated), then there is no ground to think that legibility relies on differences in details that, if emphasised, would diminish readability. Legibility relies on structural features, which do not need to be emphasised in ways that detract from word recognition.

John Hudson's picture

Not all words have letters with ascenders located in the same part of the word.

True, and there is a demonstrated role of extender position in guessing the identity of words at the edge of the fovea. But there's a trade-off between increasing the prominence of features to aid guessing at the identity of words at the edge of a fixation and doing what is best for word recognition within the fovea. Making the extenders significantly heavier affects the spatial frequency of the letters, pushing some letters or parts of letters into a different spatial frequency channel, which is one of the few things I am fairly sure is Not a Good Idea for readability.

hrant's picture

Hey, don't knock guessing! It's what humans do best.

hhp

John Hudson's picture

I'm not knocking guessing, which is clearly part of the back end of reading, albeit the least reliable part. I'm saying that making changes to type to make correctly guessing the identity of words in the parafovea easier may have a detrimental effect on word recognition in the fovea, or on reading comfort, or on overall speed, if these changes disrupt things like core legibility or spatial frequency. If it were possible to make these changes on-the-fly and only in the areas where they would help, then they might be helpful, but this introduces the same issues that I believe sink your AcTeD idea: you can only make such changes during a saccade, and until the saccade has ended you won't know where the fixation will be. Now, it might be possible to train text display software linked to eye-tracking to anticipate an individual reader's typical saccade length, and even to adjust for text content, e.g. lengthening saccades over short common words and shortening them for longer uncommon words, but I reckon that's the level of precision you would need to make the system at all viable, because morphing the appearance of text during fixations is going to be terribly distracting with, I reckon, a resulting major hit on speed.

By the way, important though guessing can be, I disagree that it is what humans do best. The inverse relationship of reading speed increase and comprehension accuracy indicates that we do pattern recognition better than we do guessing. The more we rely on guessing, the faster we can read but the more mistakes we make. The more we rely on pattern recognition, the slower we read but the fewer mistakes we make. We often favour guessing because it is faster, but it is a false economy if we're guessing correctly no more than half the time.

dezcom's picture

Why is reading faster better?

John Hudson's picture

It isn't absolutely better, only better relative to comprehension accuracy. For many kinds of texts, if you can increase speed while not reducing accuracy, that would be a net benefit on the grounds that our mortal time is limited, so by reading quicker we could read more or, indeed, have more time for not reading.

I say 'for many kinds of texts' because I strongly believe that reading speed should vary according to what you are reading. Much depends on what you want to get out of the reading experience, and I can think of plenty of things that I'd want to read slowly, including a lot that I'd want to read out loud (most poetry is in that category).

Rob O. Font's picture

Wilkins' figure 9 is very interesting. Distorting the parts of words to the left of word center via Photoshop, or whatever they were using, without changing word length, helped literacy-challenged subjects reading Times by 4%.

Then they say that we, type designers, could make contextual substitutions available to help with gatherings of straight lowercase strokes at the focus point of a word. That's the problem: the "in" in "ninja", the "ini" in "minimum", and "in" itself.

So, I think we're supposed to make sure that the inter-letter spaces between straight-stemmed letters are clearly smaller than the counter space. None of this is particularly difficult in proportional fonts, but it's of course impossible in monospaced fonts.
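That rule can be sketched as a simple metrics check. A minimal illustration with hypothetical font-unit values (the sidebearings and counter width below are made-up numbers, not measurements of any real font):

```python
def picket_fence_ok(first_rsb, second_lsb, counter_width):
    """Check the rule above: the inter-letter space between two
    straight-stemmed letters (right sidebearing of the first glyph
    plus left sidebearing of the second) should be smaller than the
    internal counter space of a letter like 'n', so that the gap
    between letters never reads like a counter."""
    gap = first_rsb + second_lsb
    return gap < counter_width

# Hypothetical font units for an 'n' followed by an 'i':
n_rsb, i_lsb = 30, 28   # sidebearings of 'n' and 'i'
n_counter = 110         # internal white of 'n'

assert picket_fence_ok(n_rsb, i_lsb, n_counter)   # 58 < 110: ok
assert not picket_fence_ok(60, 60, n_counter)     # 120 >= 110: picket fence
```

In a monospaced font the gap is fixed by the advance width, which is why the check can fail there no matter how the designer draws the counters.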

Additionally, assurance of picket-free reading is not possible in common low res environments and I'll leave that right there.

But these study results, and most reading comprehension theory has set up a conflict loop at the center of the readability/design circus; on one side, we're told to make lowercase with careful spacing and differentiation of letters without loss of familiarity – I call this serif text type.

On the other side, we're faced with applications that don't want to know what widths the type designer wants in crude media at each size, renderers that have made feature location a hinting epic, and slowpoke adoption of contextual substitution – which I think all calls for sans serif text type.

I did both serif and sans small-size designs with crowded inter-letter spacing and open counters already. Which is my last point — using Times, a cruel Jobs joke, to test reading is to ignore five generations of design and four generations of technology — and I wish they'd stop. Times Roman is broken when it comes to picket-fence avoidance and cannot be fixed without hints or a redesign.
