readerability / readability

enne_son's picture

John Hudson /
December 6 2004 / Legato & points
... reader-ability so far outweighs readability as a factor in reading that insisting on addressing the readability in typeface design is like trying to say that all shoes should be designed primarily to make walking easier, as if walking were difficult...

Tue, 2005-07-26 12:34 / Shinn / Hrant challenge
... the mass of everyday experiential evidence indicates that readerability matters much more to the reading process than the readability of what is read...

Wed, 2005-07-27 23:33 / Shinn / Hrant challenge
... before we can meaningfully debate the readability of typeface design, we need to have a reasonable appreciation of the typical ability of the reader, which I maintain is very high indeed....

Thu, 2005-07-28 11:16 / Shinn / Hrant challenge
...my theory that readerability is the dominant factor in reading is based on my experience designing for multiple writing systems. The variety of h...

Thu, 2005-07-28 17:20 / Shinn / Hrant challenge
... readerability is really what makes the immense richness of typography possible; if reading were difficult and required the designer's attention to be completely focused on maximising readability, there would probably only be two or th...

I think it is apt to insist that the typical ability of the average reader is very high and makes the immense richness of typography possible, but I think it is a mistake to oppose readerability and readability. John, you do this with phrases like 'so far outweighs' and 'matters much more' and 'is the dominant factor'.

Phenomenologically speaking, the two must go hand in hand: we can't read what is unreadable. Or, to say the same thing differently, we are not able to read what is not able to be read. I suspect we tend to forget that, in normal everyday use, readability is a gross measure. Writing in an illegible hand is unreadable.

In perceptual processing terms I see the reader's ability as primarily twofold.
1) an ability to see many (even eccentric) varieties of a letter as a given letter (for example a Raffia A as an A);
2) an ability to visually integrate ensembles of orthographically regular clusters of stimulus units (letters on a page) into familiar, object-like, perceptually molar, sense units.
Both involve perceptual learning, and the second ability is underdeveloped in people with dyslexia.
The perceptual learning underlying reading ability has critical neurological learning components, extending through the multiple layers of the visual cortex up to an area that has been identified by magnetic resonance imaging as the visual word form area (VWFA) in the psychological literature.

It has been demonstrated that the spatial frequency channel used in reading is one in which what I've called role-architectural components (and how they combine) are the grist for the perceptual-processing mill in reading (and letter recognition). This means: stems, bowls, counters, cross-bars, diagonals, and how they join. In typographical terms this is the province of letterform construction. So I would say that 'construction' 'outweighs' / 'matters more than' contrast manipulation--'is the dominant factor'--in basic reader-ability and gross readability.

I think contrast manipulation (and spacing) affects the level in the visual cortex just below the level at which 'role-architectural statistics' are compiled and integrated. This is a level where lateral inhibition and facilitation operate. This, I submit, corresponds to Hrant's subconscious level of reading and is still a bit of a dark area for empirical research. But issues of positive or negative noise and cue-value enhancement are critical here. (For background on these perceptions, slog through to my Typo#13 contribution.) Much of the Typophile discussion about the finer points of readability is relevant to this level of processing. And it is important, because too much superfluous spiking, in neurological impulse terms, affects the 'effortlessness' experience in reading; it affects the subconscious perception of 'transparency'.
Hrant makes a case against chirography in 'a priori' conceptual terms: the priority chirography gives to the shape-wise integrity of the black cannot (IHHO) logically meet the demands of total notan. (Absolute notan is presumed--erroneously?--to be good for bouma integrity.)
I prefer to deal with the chirography question in functional anatomical terms.

Perhaps we need to distinguish gross readability ([g]readability) and fine readability ([f]readability). Chirographically regular fonts are wonderfully [g]readable; but ideological consequential chirography (rigorously executed in practical terms) is perhaps anti-progress in [f]readability terms.

BartvanderGriendt's picture

Thoroughly written, Peter. Thanks for the abundance of neurological insight into reading.

----------------------------------------------------
My work is a game. A very serious game [M.C. Escher]

hrant's picture

Peter, great stuff.

--

Footwear is actually an incredibly strong parallel to type (much more so than something like architecture). Comfortable shoes are important in proportion to the distance attempted. It's easy to walk 5 yards (a poster) in virtually anything. But it's hard to walk 5 miles (a book) in anything but really well-designed walking shoes - and these tend to be less stylish. No matter how sexy they are, when the shoes aren't good enough for walking long distances, you're liable to give up before you get to where you want to be. That subvisible, long-haul, subconscious stuff is the relevance of readability, not the ability to decipher individual letters given many seconds, and certainly not how pretty we think certain shapes (chirography) are.

Dismissing the relevance of readability is dismissing the craft of type design.

hhp

John Hudson's picture

I for one never dismissed readability. I completely agree with Peter: we cannot read what is unreadable. My point is that what is readable is vast and varied and not characterised by a single approach to the design of letterforms. This is why I consider readability to be a prerequisite for type design, just as it is for reading, and not a design goal. Peter says that phenomenologically readerability and readability must go hand in hand. To which I now counter that type design and readability must go hand in hand too. If you have not got readability, you have not got type design, just a collection of unintelligible abstract forms. Readability is the sine qua non of type design. If you set out to design a typeface and you succeed in making it readable, congratulations, you've achieved the prerequisite. Now what?

John Hudson's picture

but ideological consequential chirography (rigorously executed in practical terms) is perhaps anti-progress in [f]readability terms.

Can you give some examples of what you mean by 'ideological consequential chirography'? Noordzij seems to come closer than anyone to an ideological stance on chirography. Is this what you mean?

enne_son's picture

John, I can't think of any type designer who is ideologically committed to the principle of sticking rigorously to the hand when it comes to contrast manipulation, or even when it comes to the logic of construction. Even Gerrit Noordzij. (Noordzij's formalisms claim no more than being convenient reference dimensions for a precise gauging of stroke contrast and glyph construction: one can get a good handle on Hrant's Paphos J by analysing how it violates the type of stroke that a swept object or moving front proceeding in translation or with rotation might produce.) My goal in writing the sentence you asked about was to give Hrant's dictum 'chirography is anti-readability' a less extreme expression. Doing so produces a judgement I can live with, but one which perhaps describes a case for which there is no exemplification (to my knowledge); in other words, a straw man (or woman).

When I said 'phenomenologically readerability and readability must go hand in hand' I meant a 'must' of phenomenological necessity, not a prescriptive 'must'. Readerability and readability are partners in one and the same perceptual subject-object equation. Your 'to which I now counter' sentence doesn't counter but adds to my thought.

The range of what is comfortably readable is vast and varied. Nevertheless, optimizing a type for smooth flow through the visual cortex and for effortless bouma-formation is very much like optimizing a type for high-quality on-screen implementation--the kind of thing you no doubt experienced with ClearType. In both cases a kind of rasterization occurs and squelching / averaging mechanisms operate.

I also want to float this notion: it isn't type per se that's readable, it is the texts we produce with it. Characters in a font can be legible, and legibility is a threshold issue, but only blocks of text can be readable, and there are degrees of readability.

William Berkson's picture

>Characters in a font can be legible, and legibility is a threshold issue, but only blocks of text can be readable, and there are degrees of readability.

A level between individual letters and blocks of text is individual words. Clearview has tested as more legible or readable--I'm not sure which applies--than the FHWA Series E Modified (I think that's it) for individual words on a sign. This is, I believe, a matter of both letter form and spacing.

But blocks of text do have their own demands. Matthew Carter, critiquing a text face at TypeCon: "Type designers design really not in Fontographer or whatever, but in Quark." The idea being that you need to print out masses of text to really assess a text font.

I agree with Peter that there are degrees of readability.

John, are you really holding that once a certain degree of readability is achieved, there is not anything a designer can do on this issue?

To me there are degrees of readability, which I think of as how inviting the text block is, and how fatiguing it is. For example, I think a page of text in Helvetica is readable, but the same text in Times New Roman is a lot more readable. I think it would prove less fatiguing, quicker to read, and less prone to mistakes.

Once you are on the level of Times New Roman, I admit the differences between text faces as far as readability goes may be small, but I still think they are present, and significant enough to be worthy of work by designers.

enne_son's picture

I said: "I can’t think of any type designer who is ideologically committed to the principle of sticking rigorously to the hand when it comes to contrast manipulation, or even when it comes to the logic of construction. Even Gerrit Noordzij.

I can perhaps think of type designers who seem to be de facto oriented to 'chiro-referentiality' at a contrast_manipulation or at a logic_of_construction level in specific designs or in their body of work.

Would Hrant want to extend his anti-[f]readability claim to chiro-referentialists? Probably. His reason is notan. For Hrant's 'notan' I would substitute appropriate_cue-value_toward_visual_wordform_resolution. Counters and the shapes evoked between letters (the white); stems, crossbars, ascenders, diagonals (the black) must have a proper salience in real-word situations so their appropriate cue-value toward visual wordform resolution in perceptual processing is optimally realized. Does chiro-referentiality obstruct that? I think not. Notan in type I regard more as an aesthetic referential axis than a functional requirement.

I can however see some sense in Hrant's use of 'notan' when I consider that the salience of any cue-value bearing component (white or black) must not be so disproportionate relative to any other (black or white) that it ruptures the visual integrity of the word image or bouma. Cue-value bearing components must blend into, or subserve, the bouma or word image and not be like a blemish the perceptual system stumbles over or 'reads' as an 'attractor'.

I think in practical terms, the competent type designer is constantly evaluating and adjusting salience vis a vis bouma-integrity. There is a small body of psychological research evaluating the cue-value-richness of role architectural zones, but most of it relates to individual letter recognition rather than visual wordform resolution.

hrant's picture

> ... Paphos J ...

Exactly. What I've done there is take the superficial aesthetic appeal of chirography (its redeeming quality), but prevent it from exerting its damaging modularity. Is it working? Well, Gary Munch -an expert chirographer- thought I was "coming around" when he saw it... Only a guru at GN's theories can see the violations. I can't even see them - probably because I'm not looking. So I think it's safe to say that laymen -the whole point- have no problem with this, and I feel they actually benefit.

--

I am quite confident that chirography is anti-readability. The two pieces of the readability puzzle that I'm still missing however are: the relevance of "cue-value structures" (where I'm following your lead); and the true nature of familiarity (where there seems to be very little to follow). The bouma-based nature of reading though makes superb sense; and a general admission of that would make the field of typography so much more comfortable with itself. That's what it's all about anyway - virtually none of us saving lives or anything.

hhp

Giampa's picture

The Giampa Challenge

CHOOSING A TYPEFACE

If it turns out that the most readable typeface is also "ugly", which I suspect is the direction this is taking, that typeface will not be used by designers (I hope), and therefore will not even be read (I hope); therefore one must consider the readability of the unseen.

TYPESETTING FAILURES

Any type set poorly, or is that badly (take a risk), can be made unreadable. Rather a good jockey riding a slower horse than a bad jockey riding the fastest.

Is there a moral to the story?

enne_son's picture

Hrant, you keep making the bald and sweeping claim "chirography is anti-readability". Do you mean chirography in any form, including chirography in the sense of what I tried to call chiro-referentiality? And do you mean anti-readability pure and simple in any sense of the term readability, or anti-progress in further optimizing readability at its high end? And if you mean it in the most sweeping and general sense possible, what makes your confidence that the claim can be sustained in even its baldest form more valid than my suspicion that it needs to be tempered and surrounded with a number of qualifications to be sustainable, and more importantly, fertile for action and actionable?

Nick Shinn's picture

As part of designing Handsome Pro (a fully cursive script) recently, I did what I usually do when working on fonts, which is to print out text and hold it up to the light, viewing as a mirror image.

While it's not too difficult to read text in the Classic version (a high contrast "pen" nib), it's quite difficult to discern individual characters.

However, I found it almost impossible to read the monoline versions of the typeface "flipped", which led me to marvel at one's ability to read monoline joined script the right way round at all! I mean, how are we able to separate the joining strokes from similar "upstroke" marks within glyphs? Impressive, the power of readerability.

hrant's picture

Gerald:
1) There is no "most readable" type.
2) Yes, the setting of type is more important than the type itself. But in the microcosm of type design, the setting is moot. One could also say that the good setting of type is a waste of time considering all the more pressing needs of the world. How each of us spends his time is a balance of what makes us happy versus what other people need, and a result of circumstance and genes.

--

Peter, that statement is sort of a brutal simplification, I admit. But it does contain the important truth. I remember recently writing a longer, gentler variant of it, and you liked it a lot. What was it?

> what makes your confidence that the claim can be sustained

I'm not confident it can be "sustained"; mostly because I think nothing can. What I am confident about is that it makes sense. Practical applications? A better "y" for Unibody. UC "A"s with symmetrical tapering arms. Thousands of things.

> how are we able to separate the joining strokes
> from similar “upstroke” marks within glyphs?

Peter can attempt a good -scientific- explanation of this.

hhp

Nick Shinn's picture

Another thought on reading handwriting/scripts:

I once attended a talk by a dance theorist who had asked himself the question "what's in it for the dance audience?". As part of his research, he discovered that the part of the brain which takes care of dancing is activated when you're watching dancing.

Perhaps something similar happens when we read handwriting -- we recognize our own actions, as it were, which aids comprehension.

John Hudson's picture

John, are you really holding that once a certain degree of readability is achieved, there is not anything a designer can do on this issue?

I'm saying -- have said repeatedly -- that once a base level of readability is achieved, as is achieved in the large and varied collection of what we call text faces, the measurable difference in speed and accuracy between different types is minimal and insignificant. Hrant introduces the additional criterion of comfort, which is partly subjective but also measurable in terms of eye strain, and I think there is probably more space, in type design terms, to significantly address this criterion than speed or accuracy. But I read a heck of a lot every day, and there are few typefaces that have actually caused comfort problems, so even in this area I think that what is readable must be considered vast and varied.

What I object to in Hrant's programme is his prescriptive and ideological tendency: the implication that the demands of readability need to be met by a specific design agenda (his). In its most extreme form, this programme becomes simply bizarre with the claim that the traditional approaches to type design that produced the huge number of highly readable typefaces that we read every day are actually 'anti-readability'. I counter this with the idea of readerability, which explains just how it is possible for us to read quickly, accurately and comfortably across a wide range of typefaces designed according to a variety of methods, traditions and theories. Whatever internal merit Hrant's 'notanal type design' has as a method of designing new and culturally progressive typefaces -- and I believe that it has much merit -- it cannot be realistically opposed to a straw ideological chirographic method that is anti-readability when we have such massively overwhelming evidence that people can and do read types that are both more and less chirographic.

It is enough, I think, to make the point that type design does not have to be chirographic, and to help designers to realise that understanding the mechanics of reading liberates them to think about the construction of letters, their light and dark patterns of structure and contrast, in new and creative ways. This is a significant insight and one worth embracing. It is neither realistic nor helpful to try to make the further claim that type design must not be chirographic, and that chirographic design is in any significant way less readable than notanal design. We have the opportunity, in the understanding of how we read, to extend the vocabulary of type design, to add to the variety of ways in which letterforms are designed. Trying to replace all other ways of designing type with one ideologically correct design method is a vain revolutionary project.

Dan Weaver's picture

What you don't address is subject. A mechanic who is interested in how to adjust a widget on a Lexus will care less about the legibility of the type as long as he gets the information; but if the same mechanic came to Typophile and even saw the word glyph, he might be asleep before Hrant finished his explanation. Content and audience, and how it's written: 5 cent words or 10 dollar words.

hrant's picture

John, we think we're reading comfortably because we haven't seen anything better. People used to think horses were the ideal method of transportation. Furthermore, a lot of people have a lesser ability to read (like youngsters), and every little bit of improvement helps them finish that book, while enjoying it more (being more comfortable). Another factor is deficient environments, like reading on the metro, or in poor light. It is not casual reading, but reading in the "fringes" that we need to worry about.

Good type design is about subtlety after all. Saying readability doesn't matter is a little bit like saying we don't need more (text) fonts.

--

The reason I oppose chirography (and only in text face design - let the distinction be clear) is not because it makes people miserable. In fact it makes a number of type designers quite happy, and most other people are oblivious anyway. I oppose chirography as a matter of principle, as a proponent of cultural progress. And I detest the continued, convenient denial of its deficiencies, and the ideological lethargy of making yet more chirographic fonts.

I don't mind somebody saying "I make chirographic fonts because they sell better, I know how to make them, I think they're pretty, and my buddies pat me on the back for it." It would make me sad, but not angry. What makes me angry is when somebody says "Nothing else matters."

hhp

hrant's picture

> If a mechanic who is interested ...

Indeed. We Read Best What We Want To Read Most. ;-)

But:
1) Not everything we read is always absolutely interesting. Sometimes we have to read something uninteresting, or semi-interesting. And discomfort might make us give up. The typeface is as important as the content is uncompelling.
2) That's still not an excuse for a type designer not to care about making the experience as comfortable as possible. It's our duty.

hhp

enne_son's picture

Hrant, I would be happy if I could get you to see that there is a world of difference between 1) saying that chirography is cultural progress-inhibiting in improved type-functionality terms and 2) saying that chirography is [intrinsically] anti-readability. The one I find an interesting proposition, the other fatally flawed if readability is to have anything like the sense we attach to it in normal everyday usage.

Nick, I'm intrigued by your observations about flipped type, especially the detail about not being able to discern individual characters while still being able to read the text. I want to check this out for myself and think it through in perceptual processing terms.

John, I think your statement that "understanding the mechanics of reading liberates [type designers] to think about the construction of letters, their light and dark patterns of structure and contrast, in new and creative ways" is entirely apt.

John Hudson's picture

I should add one other comment about my readerability argument. It is in essence a defensive argument, intended to preserve the creative space around traditional type design even while expressing interest in new approaches based on a growing understanding of the reading process. I want to be able to embrace both chirographic and notanal type design as equally legitimate methods, and the easiest way to do this is to point out that both presuppose readability and that this readability is generously provided for by readerability. If our pattern recognition skills were not so good, the scope of what is readable would be much narrower; but they are so good, and the scope of what is readable, especially if one considers all the diversity of type styles and handwriting across all the world's scripts, is huge.

It may seem from my last post that I am specifically positing the readerability argument contra Hrant. But I should be clear that I would use the same argument to defend the creative space of traditional type design against other challenges. Consider, for example, this not-unlikely scenario:

Cognitive psychologists produce the results of empirical studies that show that loose letterfitting results in higher reading speeds due to greater accuracy in word recognition and hence fewer regressions. Hrant's response is predictable, since it is the response he has made several times to empirical studies that seem to contradict what he believes: 'My theory of the notan-bouma relationship predicts that tighter letterfitting is more beneficial to reading; therefore, there must be something wrong with your studies.' My response would be the same as I have made to Hrant: readerability makes it unnecessary to distort general typographic practice in response to such results, since it is evident that we read well enough in the current accepted range of letterfitting. Like Hrant's notanal type design, such results may broaden the accepted range of practice, or may provide a basis for edge cases of typography such as design for low resolution or information-critical typesetting such as pharmaceutical labelling. They would not constitute grounds for overthrowing 550+ years of experience, any more than Hrant's theories. This, to me, is the value of having a sensible appreciation of readerability: it allows one to be generous to the incorporation of new ideas in type design while not having to sign up to revolutionary programmes.

enne_son's picture

"Cognitive psychologists produce the results of empirical studies that show that loose letterfitting results in higher reading speeds due to greater accuracy in word recognition and hence fewer regressions."

Is this a possible scenario John, or have such results actually been produced? If the latter, how is looseness gauged?

hrant's picture

> I want to be able to embrace both chirographic and
> notanal type design as equally legitimate methods

Yes, the former more for display type, the latter more for text. More than that, sorry, I want a lot of things too that I can't have. BTW, "notanic", please.

> If our pattern recognition skills were not so good

There is no "good" and "bad". Our pattern recognition skills, not to mention our heuristic cognition, are fallible, especially in the parafovea. That's all you need to consider in concluding that good technique in type design matters. It's not all cultural expression.

> Cognitive psychologists produce the results of empirical
> studies that show that loose letterfitting results in higher
> reading speeds due to greater accuracy in word recognition
> and hence fewer regressions.

What would be predictable is what I foresaw when Kevin revealed that he's testing for letterspacing: that the same tired old flawed testing methods would remain stuck in shallow immersion, and continue to produce flawed data that will be misinterpreted yet again to support the half-world of parallel-letterwise decipherment. And the little tell-tale blips that indicate a problem with the Master Plan will continue to be brushed under the carpet. Basically, a waste of time and money, not to mention damaging to proper appreciation and application of typography. I have always explained why this is the case. If you keep seeing my stance as "it must be wrong because it contradicts me", you're just not paying attention. The PL model fails to explain too many things (like typos, regressions, skipped words). It also fails to make sense in the context of how humans think. It is half the story. And the boring, obvious half. But you're saying none of it matters anyway... You just want to keep designing the way you like to. That's not craft.

BTW, you will not reduce regressions (because that's the mind's inherent heuristic correction mechanism). You might increase saccade length. And you won't do that either.

> how is looseness gauged?

The only practical meaning I think would be "looser than practitioners anecdotally believe is optimal". Kevin and some others would love to disprove both centuries of anecdotal evidence as well as my theories. For that I don't blame them though. I do however blame them for not harboring enough self-doubt, and not using their instincts -or at least those of others- to better effect.

hhp

Giampa's picture

Hrant,

I have a solution for the time-impaired. Why don't we teach the world to use "shorthand"?

Probably you are not aware that great minds have already figured this out. I find the conclusion a little confusing but maybe you could explain it to all of us.

"Florida Pennysavers"http://www.floridapennysavers.com/newsmb/prepress.htm

TYPEFACES - SERIF OR SANS SERIF?

"Sans Serif typefaces are the best choice for news-print reproduction. They easily reproduce with desired clarity and readability. Type faces with thin or delicate serifs and strokes, and non-uniform character thickness, should be avoided."
•••••••••
http://www.tarleton.edu/~unews/marcomtypography.html
TYPE FACE SELECTION

San serif type faces are neither more nor less readable than serif type faces. Readability is enhanced by selecting common type faces, such as Helvetica, Times, Garamond, Palatino, and Univers, to name just a few. Common type faces have become common type faces because of their versatility and readability. And, because they are common, their letter shapes are immediately recognizable to most readers.
••••••••••
I don't have my references with me, but my reading suggests that there is
actually a loss of readability with increasing point size after a certain
point, just as there is with decreasing point size.

Optimum readability is 10-11 pts with most serif type faces. The reason for
the recent popularity of 12 point has more to do with the use of computers than
with readability on the printed page. Most monitors display 12 point text
much better than they display 10 or 11 point. Of course, monitors also
display sans-serif better than serif type faces, which may account for
the recent trend toward the readable sans serif Arial, too.

The main point is, it depends on where the text will be read. If you
plan to display the text on a monitor, such as with html or help systems,
12 point sans serif is more readable. If you plan to print the text on
paper and deliver it in the traditional manner 10 or 11 point serif is
more readable ... to the point of actually increasing reading speed
for the average reader.

Regards,
Misti Anslin Tucker
••••••••

If this has not put your mind to rest, let me put yours to sleep.

I have summarized conclusions made by your kindred spirits sharing your concerns, although all of their conclusions sound, in my opinion, idiotic.

1.) Use only Sans Serif Types.
2.) Use only Serif Types or Sans Serif Types.
3.) Use only Helvetica, Times, Garamond, Palatino, and Univers.
4.) Use only 10 to 11 point serif types.
5.) Use only 12 point sans serif types for monitors.

These 5 customer classes are waiting for your typefaces. Make that four. They already have Helvetica, Times, Garamond, Palatino, and Univers. Pardon me, make that three classes: they already have 10 and 11 point for print. Make that two classes: they already have 12 point for monitors.

These people seem as interested in readability as any in the forum and make as much sense.

John Hudson's picture

Is this a possible scenario John, or have such results actually been produced? If the latter, how is looseness gauged?

It is a hypothetical scenario, but one based on the fact that Kevin Larson, as a result of our group discussions in Thessaloniki, has started conducting some tests involving varying spacing. I don't know what his procedure is yet, and I have not seen the results, but one of his proposed presentations for ATypI was entitled I'm coming for your sheep, in reference to the well known Goudy quote. This might suggest that his findings favour looser settings.

Unfortunately, there is only room on the conference programme for one of his proposed talks, so he has opted to present what I understand is the more advanced of his recent research: Measuring the aesthetics of typography.

John Hudson's picture

Hrant, I design typefaces using the methods I determine fit for a given project, and which of these methods meet your highly idiosyncratic criteria for 'craft' isn't a concern. Sometimes a project calls for a more chirographic approach, sometimes for a more notanal one, and I've never designed a typeface that was entirely one or the other, because my care is for the thing I'm making, not theories or ideology. And care for the thing being made is a definition of craft that more craftsmen will recognise than yours.

Glyn Adgie's picture

At the risk of being entirely out of my depth here, I would like to add an observation in support of John Hudson's views. A text typeface has a primary function: to convey information when used to form words. However, this function, important as it is, does not fully define how a typeface should be made.

I would like to make the analogy with food and the art of cooking. The primary function of food is to convey nutrition. But cooking defined purely in terms of nutritional value is likely to be bland and ultimately unsatisfying. We also know of fad diets based on the latest half-baked nutritional theories. People have been producing nourishing food for thousands of years, without the need for such theories. No doubt science may help us improve our diet, but God forbid that it should replace the art of cooking.

William Berkson's picture

John, I thought that Hrant had argued in the past that 'advanced' type design could increase reader speed by many fold. My view has rather been that the main thing is ease or comfort in reading, but maybe Hrant has argued that equally as well. I am very skeptical about the possibility of any great increase in speed of reading of printed texts.

In any case, while the differences between good text fonts may not be great on this score, I still think it is well worth trying to be on the upper end of ease and comfort in reading. So I don't agree that once a threshold is reached, the readability issue should take a back seat--if that is indeed your view.

And I agree with Hrant that one of the critical factors is the balance of white and black, and spacing. I believe that Frutiger long ago said that type designers look more at the white than the black, so Hrant's view of the importance of 'notan' is a familiar, and I think accurate view. This is confirmed by Clearview, one of whose key properties is that it is somewhat less bold than the FHWA gothic it is replacing--also creating more open counters.

And I think that the reason book designers stopped so regularly using standbys such as Baskerville and Bembo is that the digital versions were too light--an issue of balance of black and white. For otherwise the faces are largely the same. And they turned to Scala because it hit the black/white and spacing issues in a good balance.

I certainly don't agree with Hrant that the influence of the hand is bad, though 'moving front' shapes should be modified because of optical considerations. Nor do I agree that aesthetics is of minor importance in a text face, if that is indeed your view, Hrant.

So superb comfort and ease of reading being a goal for text fonts even after a threshold of good readability is reached, yes. And consideration of what the hand would draw, without letting it dictate, yes.

hrant's picture

> "Measuring the aesthetics of typography"

I wish I could be there for that! :-(

> ... does not fully define how a typeface should be made.

Well of course. Who said anything about "fully"?
Anything to do with humans is deeply gray, always.

I for one have always said that aesthetics are important, yes, even in a text face (William). As I've pointed out before, I have applied some proportions to certain glyphs in Patria that I believe actually hurt readability! This, to amplify the "mood" that I wanted the face to convey. Craft is about balance. That said, I do believe the balance between aesthetics and function is very different in a text face compared to a display face.

"Fully" is an attitude relied upon more by those who
would dismiss readability (think of the Emigre mantra).

> could increase reader speed by many fold.

?
My most aggressive estimate has been around 20%. But really, it's not the number, it's the principle. BTW, comfort and speed end up having the same effect in the end. On this Kevin and I agree.

> ‘moving front’ shapes should be modified
> because of optical considerations.

Let me ask: modified how, exactly?

hhp

William Berkson's picture

>Let me ask: modified how, exactly?

I don't have a general theory, but one key thing is for even color. Handwriting of a Roman is going to violate this too much in the characters. That's what Jenson and Garamond figured out, and modified for even color. Garamond's z as opposed to Jenson's is the most obvious example, but such modifications are all over the place. But you are still left with a letter form that comes from the pen, and when you try to mess with it too much it starts to be less recognizable and readable.

hrant's picture

> you are still left with a letter form that comes from the pen

Are you really?
My point is: Why bother?

A lot of my own type design looks chirographic... to people wanting to see that. So I think it's all in our heads, and the fewer illusions we subscribe to the better our fonts will be.

> it starts to be less recognizable and readable.

How do you know this? (I mean "know" in the loose sense of the word, admittedly.)

hhp

dberlow's picture

"I think it is apt to insist that the typical ability of the average reader is very high and makes the immense richness of typography possible,"

Fresh Air. . . And true.

Actually, I feel that the Low Reader is not worth attaining, or attending. By low reader, I mean someone who's just "not up to" or "not into" the content of a long document. Then, people who are "up to" or "into" fall into three main categories when it comes to long reading requirements -- but that's not important! I think of myself and my like-minded designers as serving the highest of the three categories, the Willing & Skilled, the reader who knows what they are doing and is doing what they like. The rest, who friggin' cares? relatively. While I serve this group of W&S readers, I make the Most Familiar Form Possible in the Form that Reader Uses. If no such thing exists, and I must be sure about this, then I invent, a little. (.). Thus almost none of the above applies to long text on screens. I cannot, because it is impossible, broaden the class of readers to include everyone and then find a proof of what can help them all. Sorry, I guess I'm not a type designer...

"‘role architectural statistics’,
" bit of a dark area for empirical research",
"stems, bowls, counters, cross-bars, diagonals, and how they join."
AKA: Things (those which "make the letter the letter" even if the highest possible "contrast" imaginable destroys all else), the connections between these things, the terminations of these things, the intersection of these things and the nothing(s) - if you want to get down to the atomic nomenclature/reality of it, this is what we read in a hierarchically arranged design of object and space, and it's universal to all scripts and readers.

Do we see word shapes? Absolutely, I.... a....and a few more are some of them. ANYONE who does not believe we do Some word shape recognition is not to be trusted to understand the main parts of the problem. I do not and I don't know any Real type designer ever who said We read MOSTLY or read even significantly by "word shape." Just think about it for a second, and you know it's not a possible method for Our Script, Ya? Kommen mit me, see? Shickkelnickermakenfüsser or Shickkelnicklermakenwüsser would slow all Germans to such a pace that nothing'd ever gotten done in Germany... ever....and we know that ain't the case, so, proof positive and complete. Chinese on the other hand? ANYONE who does not understand the importance of recognizing the physiological oneness of the human species when studying the universal methods used in reading the variety of scripts is not to be trusted to understand the main parts of the problem.

The techno-iceberg that we, as a reading culture, have multiply Titanic'd into is resolution. "It's just an iceberg, sir..." "Well, Helmsman, hit it again, and call all the other ships in the area, let them hit it too!" Matthew and the good skiff Verdana took a safe route around said iceberg some years ago now, but still they batter away. So, when I hear "I'm coming for your sheep", not only do I want to start rounding up shepherd hunters (the bounty hunter in "Raising Arizona", Nicolas Cage's "brother", comes to mind), but of course the vice squad should be called ;) Microsoft, you see, has yet to produce either a test or test results (or ideally both, together with effective type and typography) that make their type-related decisions hold any more water than plain old good power-marketing.

I have great hope though. Meanwhile, da culture has left in another set of skiffs for text messaging, email, buttons, and a host of short type tasks for low resolution. That is where they "row" now, not in the long texts sinking on yonder 13th wonder of the world. They don't and will not read books and newspapers, or longer articles, online, for more reasons than the type, but type remains an icesue to be repeatedly rammed. (And for best results in the software world, why not use a "scientific approach"? You end up with patents & fonts.) And all the while I hope the bull has at least two, better three, tails by which the end users can wag that Longhorn.

*Which reminds me to ask, with apologies for going off topic ;-) : Does anyone here have enough of a math background to know if there is such a thing as an "imaginary imaginary number"? I'm working on a high tech symbol font. . .

enne_son's picture

> how is looseness gauged?

The reason I asked this is that I think Fourier transforms provide a relatively stable gauging mechanism. The interesting thing is that transforms using the 'bundled' spacing of professionally spaced fonts exhibit a characteristic spatial frequency distribution profile along the horizontal dimension.

The profile is different in detail for each font, but similar in structure for the variety of professionally spaced fonts I checked. The profile is different for each font because each font uses Cartesian space in its own way.
The profile for professionally spaced fonts shows a high concentration of role architectural components in a clearly delimited area around a single peak spatial frequency. (In fonts that are generally considered highly readable the concentration is high but not super-high right at the peak frequency. In Bodoniesque fonts or condensed sans serifs like Univers 57 the concentration at the peak spatial frequency is significantly higher, and the clearly delimited area around the peak frequency becomes more compact.)

When spacing is loosened or tightened beyond a range of about +3 to -2 QXP tracking units, the spatial frequency distribution of role architectural material becomes less compact (more diffuse), shifts to slightly higher or lower frequencies, and concentrations at other (competing) spatial frequencies start to assert themselves.

I will predict that qualified tests of perceptual processing efficiency will show an advantage for fonts and spacings falling within the spatial frequency distribution profile I described for professionally spaced readable fonts.

This has an important practical consequence: fonts whose proportions and components fall consistently 'on grid' in spatial distribution terms will be perceived as less readable by skilled readers than fonts whose proportions and key vertical components hover around a rhythmic mean.

Why is this so? I speculate it is because the visual cortex depends on off-grid information within a relatively narrow spatial frequency band (slightly different for each font) to compile its word form resolutional statistics efficiently at a neuronal level, i.e., to ease its perceptual processing task.
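
The kind of measurement I have in mind can be sketched in a few lines of Python (a minimal sketch only, using NumPy and Pillow; the font path and the tracking values are placeholders, and this is a crude stand-in for the transforms I actually ran):

    # Render one line of text at a given tracking, collapse it to a
    # horizontal "ink profile", and look at where the energy sits in the
    # spatial frequency domain.
    import numpy as np
    from PIL import Image, ImageDraw, ImageFont

    def horizontal_spectrum(text, font_path, size=48, tracking=0):
        font = ImageFont.truetype(font_path, size)
        img = Image.new("L", (40 * len(text), size * 2), color=255)
        draw = ImageDraw.Draw(img)
        x = 10
        for ch in text:  # advance glyph by glyph so tracking can be varied
            draw.text((x, size // 2), ch, font=font, fill=0)
            x += draw.textlength(ch, font=font) + tracking
        ink = 255 - np.asarray(img, dtype=float)   # ink = high values
        profile = ink.sum(axis=0)                  # collapse rows to 1-D
        spectrum = np.abs(np.fft.rfft(profile - profile.mean()))
        return np.fft.rfftfreq(profile.size), spectrum  # cycles/px, magnitude

    # Professional spacing should keep the energy compact around one peak;
    # loosening (or tightening) should diffuse and shift that peak.
    for t in (0, 4, 8):
        freqs, spec = horizontal_spectrum("perceptual processing",
                                          "/path/to/AFont.ttf", tracking=t)
        print("tracking=%d px: peak at %.4f cycles/px" % (t, freqs[spec.argmax()]))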

hrant's picture

> The rest, who friggin’ cares?

Well, some people. In fact there's certainly more money in research aimed at dyslexics for example than "normal" readers.

I myself would instead state that readers who have enough of a combination of desire and ability to finish reading something (while not suffering undue discomfort) will take care of themselves, and our skills are better used to help out all other people/situations (like onscreen reading).

> ANYONE who does not believe we do Some word shape recognition
> is not to be trusted to understand the main parts of the problem.

I don't know about [mis]trust. There are people (like Kevin*) who mean well and have the ability and resources to reveal the true nature of reading. We need to help them gain the necessary typographic sensitivity (or at least leverage it in others), simply because they can help us. We need to get THEM to trust US.

* BTW, I just encountered an interesting parallel to Kevin and me:
http://news.bbc.co.uk/2/hi/health/3745498.stm
"He said that the methods used were considered controversial by
some archaeologists, because they do not find direct evidence
of the medicine in use, but their findings were always
corroborated by other experts."

BTW, the whole words you're thinking of are what I call the "Taylor 60" (which aren't necessarily limited to 60). Yes, I agree that these are seen as wholes (it makes perfect sense after all), and the blank space's role as delimiter is what makes them stand out. They constitute one of the "blips" in the imperfect data (see above). But there's more. There must be more. It makes more sense to consider the existence of boumas: clusters of letters that are recognized as wholes. These are sometimes words, basically when they're sufficiently short, frequent, and distinctive in silhouette (or at least notan).

> Microsoft, you see, has yet to produce either a test or test results ...

True.
But the good news is that they have the people and the money necessary to make it happen for real, in spite of the "corporate culture". Their type people are really our best hope.

Also, I actually believe that there's a lot of room for improvement even within these confines of resolution. I'm not talking about matching print, just approaching it. And hey, maybe we can even match it, or pass it! How? The screen is dynamic and [potentially more] interactive - maybe we can use that to great effect? Think for example of text that's sans in the fovea (where letterwise decipherment rules) but serif in the parafovea (where "binding" is important), with the style switching depending on the reader's fixation point. This is not nearly as sci-fi as many other things which have become reality.
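
To sketch how that might work (a toy illustration only, in Python; the fovea window, the fixation values, and the sans/serif split are all assumptions standing in for a real eye tracker and live renderer):

    # Tag each character of a line by whether it falls inside an assumed
    # foveal window around the current fixation point.
    FOVEA_RADIUS = 8  # characters on either side of fixation (assumed)

    def style_line(text, fixation):
        return [(ch, "sans" if abs(i - fixation) <= FOVEA_RADIUS else "serif")
                for i, ch in enumerate(text)]

    # Console stand-in: upper case marks the "sans" (foveal) region; as the
    # (hypothetical) tracker reports a new fixation, the line is restyled.
    for fix in (5, 20):
        tagged = style_line("reading is a perceptual process", fix)
        print("".join(c.upper() if s == "sans" else c for c, s in tagged))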

> Fourier transforms

Yes, that does seem promising.
I for one would love to see a well-illustrated essay.

hhp

Nick Shinn's picture

>qualified tests of perceptual processing efficiency

Perhaps this could be configured as a video game, where players get to choose their font at the start, and then race round various tracks (aka "texts") competing with one another for maximum comprehension and reading speed.

enne_son's picture

... configured as a video game ...

Sure! Why not?

... a well-illustrated essay ...

Sure! Why not?

... or a 'poster' (see: http://www.psych.nyu.edu/pelli/posters.html)

Now all I need to do is lay down $700.00 US to get beyond using the FoveaPro 3.0 demo (http://www.reindeergraphics.com/foveapro/) on a different machine in my studio every month (I'm on my second--and last--month).

nepenthe's picture

What is the significance of speed reading in this debate? This is when someone scans very quickly over the page and can read several times faster than normal. Searching the site showed no mention of speed reading techniques. It seems to me that if someone can learn to read any kind of typeset page very quickly, regardless of the typeface, then John's position on the minimal legibility requirement is right, provided the reader brings with him/her sufficient reading skills. Why has no one brought this up before?

(Sorry to throw such a short post in the midst of all these great essays!)

dberlow's picture

"they have the people and the money necessary to make it happen "
REALLY? HOW OLD WILL I BE THEN? Or should I be asking how long you plan on working there? ;-) My experience of 25 years with companies with "type in their hearts" of the large variety says you don't get the finest, you want the yes-sir-est and the days when cash won type design and quality contests ended in the late 70's.

enne_son's picture

Among the Fourier transforms I tried was one comparing the black and white in three fonts: a regular serif, a bold sans, and a sans ultrabold. See: http://www3.sympatico.ca/penneson/blk_versus_wht.pdf

If you look at the Meta bold, you will see that the spatial frequency distribution profile is similar for both black and white. Not so in the other two. For Meta bold the black and white are equally valent (equi-valent) in spatial frequency distribution terms. Does this mean only Meta bold is in notan? That only bold weights with the boldness of Meta bold are in notan?

But is this what we are after when we talk about attending to the white / not prioritizing the black? If we are to discuss the concept of notan in the typographical domain vis a vis text-weight fonts, don't we have to specify more fully what it means in this context?

This is why I prefer to think in terms of proper salience and appropriate cue value of role architectural particulars (stems, bowls, arms, etc.: the black) and role architecturally evoked material (counters, the shapes between letters, etc.: the white). (One-hundred dollar words, I know, but worth buying into.) Both the black and the white carry wordform resolutional information; the visual cortex 'reads' both.

Competent type designers have a fine eye for, and ability to manage, proper salience, as well as an intuitive understanding of cue value. Empirical studies can perhaps inform us about which parts of letters in word contexts have greater or lesser relative cue value for reading tasks. For example, is it the counter of the lower case 'o' or the black of the letter that the visual cortex depends on most? And if the counter, does absolute notan obscure it too much?

(What has cue value importance in wordform resolutional tasks might not correspond exactly to what has importance in individual letter recognitional tasks.)

hrant's picture

> HOW OLD WILL I BE THEN?

Think TypeCon-2055, so you can pull a Rondthaler. ;-)

Dunno, David - I'm just trying. And who else besides MS is too? Adobe? Apple?!
Yes, they might not succeed. But Yoda was full of crap: trying is what counts.

> http://www3.sympatico.ca/penneson/blk_versus_wht.pdf

Nice.
Question: why in both the Garamond (Jannon) and the Gill do the two settings have the same "polarity"? What I mean is, since one is lighter than Meta (which has equi-valence) and one is darker, shouldn't the "black"/"white" results have an inverted relationship between those two fonts? They seem to have the same relationship - or maybe I'm not seeing straight.

As for the issue of what is "ideal": I'd have to think about it some more, but I (and others, like Smeijers) have in fact expressed that the contemporary mainstream Regular is a bit light. Typography is still hung over from the waify 70s.

And another question: is it OK to do the reverse-field only on the x-height? Shouldn't you do full white-on-black setting?

> This is why I prefer to think in terms of proper salience
> and appropriate cue value of role architectural particulars

I agree.
Total black/white coverage is not enough.
On the other hand, I think it's plausible that such coverage can be optimal or not, meaning that the color range for optimal reading can indeed be quantified, maybe quite narrowly. A lot like interletter spacing, actually.

But Peter, I'm not even going that far, and have encountered stubborn resistance: the main thing I'm saying for now (since I haven't managed to delve deeper yet) is that creating the black (chirography) cripples ideal notan. People who don't even accept that will certainly resist your particular "constructivist" ideas... :-/ For example, they want the bowls of "b"/"d"/"p"/"q" to be structurally similar. Game over.

hhp

enne_son's picture

Hrant, it's not about coverage; it's about distribution of visual information in the spatial frequency domain. Lots of light greys and white blotches in Fourier transforms don't mean lots of white in the type. They mean lots of visual information at the particular frequency at which the whitest blotches occur.
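
To see the difference in miniature, here is a synthetic Python/NumPy illustration (no fonts involved, and no claim about any particular face): two 'ink profiles' with identical coverage but very different spectra.

    # Two synthetic 1-D "ink profiles" with the same amount of black but
    # very different spatial frequency distributions.
    import numpy as np

    n = 512
    even = np.zeros(n)
    even[::16] = 1.0                      # regularly spaced "stems"
    rng = np.random.default_rng(0)
    clumped = np.zeros(n)
    clumped[rng.choice(n, size=int(even.sum()), replace=False)] = 1.0

    for name, row in (("even", even), ("clumped", clumped)):
        spec = np.abs(np.fft.rfft(row - row.mean()))
        peak = int(spec.argmax())
        print("%s: coverage=%.3f, peak bin=%d, peak share=%.1f%%"
              % (name, row.mean(), peak, 100 * spec[peak] / spec.sum()))

Same coverage in both cases, but the regular pattern concentrates its energy at one frequency while the clumped one spreads it out.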

Forrest L Norvell's picture

After reading this thread and the thread started by Nick Shinn about his reworking of Eunoia into a "text face", I have a few comments.

For starters, I think almost all of the discussions Typophile has seen about readability, legibility, and "readerability" uneasily bridge the divide between science and scientism. The amount of woolly, inductive reasoning and obscurantist jargon that gets brought to bear on these discussions makes it very difficult to determine what's meaningful and what is, to be blunt, pretentious hokum. I'm not saying that it's all one or the other, just that it's very hard to evaluate which is which.

If we're to get anywhere in these discussions, we're going to have to put them on a scientific footing. To me, that means gathering quantifiable data, stating disprovable hypotheses that can be tested and consolidated into theories, and using those theories to suggest further areas for investigation. To me, even in the recent postings by Mr. Enneson there's an excess of the trappings of scientific discourse and a lack of firm linkages between data (such as the Fourier transforms on text) and any sort of disprovable hypothesis.

As flawed as Hrant (and others) may believe Kevin Larson's studies have been, part of the reason why Hrant's even able to criticize his studies is that he's described what he's done and how he's done it very clearly. I agree that it's difficult to draw conclusions about word recognition, readability, and the role that readerability plays while running test subjects through contrived environments distantly removed from the relaxed reading of text on a printed page in a comfortable environment. However, I find what Larson's done FAR more valuable than making unsubstantiated claims about what is and isn't readable based on what you "know" to be true -- claims that are nearly impossible to prove or disprove -- and calling it "scientific".

This may come across as an attack, and if so, please forgive me, it's intended in the spirit of a challenge rather than a putdown: I would love to not only understand how reading works, but to have some meaningful guidelines, to follow or ignore, on how to design type to be more easily, speedily, and accurately read. I don't believe type designers have figured out all the answers to these problems and just "know" how to design type for optimum readability. People used to say the same things about physics and medicine, and science has transformed both of those fields beyond recognition. Closer to home, interface design and usability have both moved forwards considerably since the advent of controlled, user-directed testing. I'm certain typecraft can benefit in similar ways.

But if we're going to talk about the scientific basis for readability, we're obligated to do some real science to provide that footing, and having studied just enough cognitive psychology and neurology to freak myself out (sanity's much more illusory than I'm comfortable imagining most of the time), I feel like I have enough of an idea of how fantastically convoluted it is that I can suggest that only the most rigorous and incremental studies are going to get us where we want to go. Fourier transforms have yielded some really interesting information about the history of type design, but I still don't see what meaningful things they're telling us right now about readability. They tell us that some typefaces, massed on the page, have certain relationships of white space to black text, that's true. But that information on its own says nothing meaningful about the role notan may play in readability, and it provides little substantiation one way or the other to any claims we might make along those lines.

enne_son's picture

Perhaps I was wrong to introduce Fourier transforms into the discussion without a thorough introduction to what they do and what they reveal. Individual transforms do not tell us anything specific about the relationship of white space to black text, as you suggest they do. Rather, they tell us meaningful things about the spatial distribution (at various orientations) of the visual information on a surface.

I said, or meant to say, no more with my Fourier transforms argument than that Fourier transforms of text under conditions of professional spacing show a characteristic profile, and that this might serve as a benchmark for the kind of test Kevin Larson seems to be conducting. It might serve as a norm against which to measure or define loose and tight spacing.

And I tried to use the kind of information supplied by comparing transforms of the black to transforms of the white to ask questions about what black / white balance (if that is what we are after) might mean in typographical terms--to ask questions about the meaning of notan.

I did not mean to imply that Fourier transforms proved anything about readability.
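
For anyone who wants to experiment, here is a minimal sketch of the procedure in Python (numpy and Pillow are assumed to be installed; the font file name and sample text are only stand-ins): render a small block of set text, take the two-dimensional Fourier transform, and reduce it to a radially averaged power spectrum. It proves nothing; it just shows the kind of profile I mean.

# Sketch only: the characteristic spatial-frequency profile of set text.
# Assumes numpy and Pillow; "SomeTextFace.ttf" is a placeholder font path.
import numpy as np
from PIL import Image, ImageDraw, ImageFont

def render_block(lines, font, tracking=0, width=900, leading=30):
    """Render lines of black text on a white ground, adding `tracking`
    extra pixels between letters; return ink as floats in 0..1."""
    img = Image.new("L", (width, leading * (len(lines) + 1)), 255)
    draw = ImageDraw.Draw(img)
    for i, line in enumerate(lines):
        x = 10.0
        for ch in line:
            draw.text((x, 10 + i * leading), ch, font=font, fill=0)
            x += draw.textlength(ch, font=font) + tracking
    return 1.0 - np.asarray(img, dtype=float) / 255.0

def radial_spectrum(ink):
    """Radially averaged power spectrum of the 2D Fourier transform."""
    power = np.abs(np.fft.fftshift(np.fft.fft2(ink - ink.mean()))) ** 2
    cy, cx = np.array(power.shape) // 2
    ys, xs = np.indices(power.shape)
    r = np.hypot(ys - cy, xs - cx).astype(int)
    counts = np.bincount(r.ravel())
    return np.bincount(r.ravel(), power.ravel()) / np.maximum(counts, 1)

font = ImageFont.truetype("SomeTextFace.ttf", 24)   # placeholder font
lines = ["the quick brown fox jumps over the lazy dog"] * 8
normal = radial_spectrum(render_block(lines, font, tracking=0))
loose = radial_spectrum(render_block(lines, font, tracking=6))

Plotted on log-log axes, `normal` and `loose` should diverge in the band where letter and word spacing live; the profile of a professionally spaced setting is the sort of benchmark I have in mind.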

Many of my terms--salience, cue value, response bias, role--come from the world of cognitive psychology. Others come from a feeling that some of the terms of reference used in empirical investigations of perceptual processing in reading (and interpreted by cognitive scientists using perhaps suitable / perhaps unsuitable paradigms) are inadequate and confounding. My use of the cumbersome phrase 'visual word form resolution' in place of the seemingly more direct 'word recognition' is a case in point. I hold that the perceptual processing task involved in reading is more effectively described as visual wordform resolution than as word recognition (which in the cognitive psychology literature generally means matching abstract orthographic sequences to entries in the mental lexicon--a cerebral cortex based computational operation, rather than a visual cortex based perceptual processing operation).

I would prefer of course that my use of terms not be perceived as obscurantist, or my inclinations as scientistic. In forums such as this I lapse into a kind of condensation of my argument, which is perhaps unwise. My main interest in all of this is in exploring my conviction that there are sound perceptual processing reasons for type designers to be as concerned as they have always been about strategic construction, contrast manipulation, scaling, space craft--that typographical 'aesthetics' has a firm foundation.

Nick Shinn's picture

>I don’t believe type designers have figured out all the answers to these problems and just “know” how to design type for optimum readability.

That's not the way design works.
Keep your Stalinists away from my sheep.

hrant's picture

> it’s not about coverage; it’s about distribution of visual information

Isn't it both? I admit to not fully grasping those graphs, but when you say that Meta-Bold is equi-valent, are you basing that on the observation that the positive/negative graphs are structurally similar? Doesn't that mean the overall luminosity of each has to have a certain relationship too? What would be nice is if you went into more detail explaining what we need to see in those graphs. Sure, they don't prove anything, but they seem to be a very promising tool. Like my "encapsulations" of scripts (or more accurately script+language):
http://www.themicrofoundry.com/ss_rome1.html (far right image)

> it’s very hard to evaluate which is which.

True. And/but I feel there is no Answer [anyway]. BTW, even harder -and more useful- is evaluating the merits of the hundreds of potentially pertinent empirical research studies! :-/ There is so much junk out there, but luckily/sadly they each contain something useful... It's a really long, cold, endless slog.

> we’re going to have to put them on a scientific footing.

Do you have proof you won't get run over by a truck when you leave the house? Does this stop you from leaving? Different people have different thresholds of moving from assumption to action. You might think I'm wooly, I might counter that you're a prude. I mean, do you refuse to set type until you know how many microjoules the reader will expend reading it? We do the best with what we know. I think we know enough to do certain things, like use serif type for books, make more fonts like Legato, etc. Is this a cardinal sin?

That said, I'm all for more -but please, better- empirical research into readability. I'm certain that if the testing is good, boumas will come out of the woodwork. And I'm certain only because the confluence of everything I've been exposed to points to that. What other reason could I have?

> part of the reason why Hrant’s even able to criticize his
> studies is that he’s described what he’s done and how he’s
> done it very clearly.

Oh, totally. I couldn't count how many times I've expressed my gratitude for Kevin's presence. The reason I mention him so often is because he's a key ingredient in the quest to help millions of users. And this is somewhat parallel to how I feel about GN btw: his formalizations of chirography are what helped me nail down exactly what's wrong with chirography. Thank you, sincerely. The difference though is that Kevin is open-minded, which I guess he can afford to be, and even has a professional duty to be. That is in fact one difference between the Artist and the Scientist.

> calling it “scientific”.

Let's not quibble about terminology, at least not too much. Some people define Science quite narrowly. Others might see in it more of what the Ancient Greeks did, which involved heavy doses of introspection and intuition. And it's not crazy to state that the greatest scientists leverage their intuition heavily - think of Einstein: he knew he was right before he had concrete proof. Come on, Forrest, humans make all their daily decisions based on what they think they know - what other choice do we have? Formal proof is severely over-rated; don't let it shackle you.

> I don’t believe type designers have figured out all the
> answers to these problems and just “know” how to design
> type for optimum readability.

Of course you're right.

> Keep your Stalinists away from my sheep.

But they're so nice and wooly!

hhp

John Hudson's picture

'Nepenthe', tests of speed readers have generally shown that their accuracy and retention is poor compared to people reading at typical speeds. Speed reading isn't really reading at all: it is a form of systematic scanning in which one tries to pick up as much raw information as possible in the shortest amount of time. Reading is an engagement with a text, i.e. with the way in which the information and ideas in the text are structured and expressed. One of the benefits of such engagement is that the information is retained longer, because it is associated with structured thoughts.

hrant's picture

> Speed reading isn’t really reading at all

Agreed.

hhp

Nick Shinn's picture

>Speed reading isn’t really reading at all

I disagree.
All reading is predictive to an extent, and edits the text accordingly.
Speed reading is part of the spectrum, at the other end from spelling out every letter.
It's unscientific to dismiss speed reading without analysis as to how it works: the extremes can be revealing.
Given how little one actually remembers of what one reads, speed reading would seem to be an efficient strategy.

If you're reading a text, and you've got a good idea what the writer's on about, and your first saccadic scan of a paragraph suggests that it's redundant, you skip to the next. This is how reading works at every level: you only read a letter, or a word, or a sentence, to the depth it takes for closure to occur on signification.

enne_son's picture

"This is how reading works at every level: you only read a letter, or a word, or a sentence, to the depth it takes for closure to occur on signification."

Lovely!

dberlow's picture

Yoda? He saves the univers doesn't he? Fine thanks he got.

A good reading Latin script for print text is slightly more densely “populated” along the x-height region (across the page) than it is along the baseline region (also across the page), while both are more dense than the ascent and descent regions, with the addition of Caps boosting the density of the ascent area over that of the descent. A lot can be learned and discussed about the evolution of this system by looking at its dychographic (sp?) shifts and changes for technological purposes, from Charlemagne’s to Linotype’s Times, but the evolution of the x-height region, and of expanding speeds in western reading and writing, go together, in scientific terms, like sunshine and growth without even knowing about photosynthesis.

Looking at any good Latin l.c. x-height region without the bottom, very scientifically and effectively “proves” the importance of the x-height region. You can also show its importance by messing with the denizens of the x-height region: e.g. removing the ball from the “r”, flaring the top of the “a” to a chisel instead of a ball or teardrop, diminishing the top serifs of c and s to stubs, and using a modern g instead of the old one... and the x-height region, and readability, slowly go, in scientific terms, pffft. (See Hrant’s scientific pictures of brains on various types: pffft is represented by brains looking mushroom-cloud-like in plan view, while anti-pffft is a more relaxed, spiraling, nebular brain.)

The readability equivalent of the plague comes to town when resolution is lowered down to our screens and applied to type: the hierarchy of regions is "flattened out" by rounding so brutal that the x-height region is hardly more pixel-populated than the baseline region. The champion serif fonts of print text, no matter how well hinted, have been rejected in favor of sans. Why? Because the x-height region of a sans is "slightly" more readable at low resolution: there are no serifs in the baseline region, and few in the x-height region, to out-dense the x-height region as resolution collapses all the black features to one pixel.
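
One crude way to put numbers on this hierarchy, and to watch it collapse, is a sketch only (Python, with numpy and Pillow assumed; the font path is a placeholder, and this is an illustration, not a description of anyone's actual method): sum the ink in each pixel row of one rendered line, once at a print-ish size and once at a screen-ish one.

# Sketch only: ink per pixel row of one rendered line of text.
# Assumes numpy and Pillow; "SomeTextFace.ttf" is a placeholder path.
import numpy as np
from PIL import Image, ImageDraw, ImageFont

def row_ink_profile(text, font_path, size):
    """Render `text` once and return the total ink in each pixel row;
    the bulge should sit in the baseline-to-x-height band."""
    font = ImageFont.truetype(font_path, size)
    img = Image.new("L", (size * len(text), size * 2), 255)  # white ground
    ImageDraw.Draw(img).text((10, size // 2), text, font=font, fill=0)
    ink = 1.0 - np.asarray(img, dtype=float) / 255.0  # 1.0 = solid black
    return ink.sum(axis=1)

def show(profile, width=60):
    """Crude terminal histogram of the row profile."""
    scale = width / max(profile.max(), 1e-9)
    for y, v in enumerate(profile):
        print(f"{y:3d} {'#' * int(v * scale)}")

line = "the quick brown fox jumps over the lazy dog"
show(row_ink_profile(line, "SomeTextFace.ttf", 48))  # print-ish size
show(row_ink_profile(line, "SomeTextFace.ttf", 11))  # screen-ish size

At the large size the x-height band should clearly out-dense the rest; at 11 pixels the profile should look much flatter from region to region, which is the "flattening out" described above.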

Look At The Type On Your Screen. What's missing versus good print text? It’s the Bonneville Type Flats, and that is just the vertical side of the story. There is an equally important horizontal one.

Now, maybe I have a different definition of the word TRYING than Hrant does, but it seems like trying to be dumbest. Study one: screen fonts of different sizes AND designs, proving conclusively that people don’t like screen fonts that are too small. Study two: “does the average mall visitor like Italic type fuzzy or sharp!” Actions so far have included decisions in the CT rasterizer that horizontal resolution is of more importance than vertical (or is it the other way around this week?), and that hints need no longer apply in the future, “the rasterizer has gotten that good, but not below 15-17 ppem(!)”. Couple that with the obviously biased and narrow-minded “academic” marketing/research and the impending hilarity of the next set of “studies”, and I’d say maybe trying has a different definition for me than for others...

Nick Shinn's picture

>The champion serif fonts of print text, no matter how well hinted, have been rejected in favor of sans. Why? Because the x-height region of a sans is “slightly” more readable at low resolution:

There are also cultural reasons, "pull" factors such as Cool being desirable and Old-fashioned being anathema. It doesn't matter that uncool is less efficient (like walking on stiletto heels). I think part of the reason that serifed fonts don't look right in so many comps is that traditional serif text types rule the roost in conservative periodicals, and a lot of font development follows the money, directed at this market. However, it's interesting that since the Millennium a number of new-looking sans serif faces have come onto the retail market, and I believe that art directors will start to use these more and more.

A "push" cultural reason is accessibility: I recently did some inquiry concerning a local government publication where the text was 14 pt Gill Sans Light, tightly leaded and tracked. It transpired that there was a legibility standard specifying a mimimum size, and sans serif style, to aid the reading-challenged. To meet the requirement, and yet still retain a modicum of grace, and fit enough copy on the page, the designer had followed the letter of the law, but not the intent, producing something that looked good in layout, but was hard to read for everyone.

They had come across the idea that sans serif is easier to read (seems logical enough, get rid of the unnecessary old-fashioned petticoat decoration), and preferred to follow this specious "common sense" rather than look in the large print section of the library/bookstore, where it can be seen that all the books are set in serifed type, which is the user choice, implemented by the economic Darwinism of the marketplace.

oldnick's picture

Yoda? He saves the univers [sic] doesn’t he? Fine thanks he got

Well, it is a sans-serif font...
