Ligatures & Children's books

Miguel Sousa's picture

I'm a graphic designer at a Portuguese publishing company that specialises in children's books.

My question is:
How many of you consider it correct to use ligatures (like fi or fl) in this kind of book?

The point is:
I want to set beautiful and typographically (more) "correct" layouts, making good use of the features I have available. However, I don't want to confuse young readers who are still developing their reading skills.

Thanks in advance for your time & opinions :-)

hrant's picture

There are different classes of ligatures, and aesthetically the only ones that really make sense to me are the ones that fix problems: like the beak of the "f" touching the dot of the "i". The other ones (like "ct") are usually too decorative.

On the other hand, I feel that ligatures have an obscure functional advantage: they diverge boumas*. But it's very hard to use them properly in this manner, and the gain is admittedly small.

* http://www.themicrofoundry.com/ss_read1.html

In the functional realm, it's worth pointing out that children only start reading boumas around the age of 9. So if the books are for younger kids, there's a very good reason not to use ligatures: individual letters have to be highly decipherable, so the dot of the "i" can be very important for example. Choose a font where the "f"-"i" sequence sets well by default - that usually means a shy lc "f", but...

hhp

Miss Tiffany's picture

While I appreciate John's comment, I would have to say that you shouldn't use ligatures. Children, especially those learning to read, are also developing writing skills. I would think that ligatures would introduce confusion. At that level it would be difficult to depend upon conformity in the typography of the books they are reading.

It is also important, I think, to keep dyslexia in mind. Children with that disorder already struggle enough; ligatures could be especially damaging for them.

John Hudson's picture

I knew what they were, I just didn't know they were called boumas.

As far as I know, only Hrant and people in discussions with Hrant call them boumas.

jfp's picture

I still don't know, and don't want to know, this new religion of Bouma and his friends, sorry.

About children's books: as a true lover of ligatures, as the father of a 6-year-old child who just learned to read this past year, and as President of the jury set up by the "Ministère"…

hrant's picture

> As far as I know, only Hrant and people in discussions with Hrant call them boumas.

Hey, you can never prove something doesn't exist. :-)

It's not easy getting other people to use a new word. I know, 'cause I've been trying since my pre-teen years. When I moved back to Beirut after living in Palo Alto for 2.5 years, I tried to introduce "moron" to my classmates. I think like three of them were using it profusely by the following year. But "bouma" is much harder, because you never hear it on TV, plus the target group is adults. The advantage is that the concept itself is pretty old by now - even though it's never been explained properly, at least not in the context of typography.

Anyway, I think there are now almost a dozen people who use it without quotes or italics or anything, which is great. And it's possible they've passed it on to people who haven't been in my "vicinity" - who knows? Like this was pretty "unprovoked": http://www.typophile.com/forums/messages/29/7060.html

But really, it's not a matter of vanity: "bouma" is much better than the only alternative, "word gestalt", which:
1) Sounds like a German lobotomy clinic;
2) Is too long;
3) Is misleading: we don't read words, we read letter-clusters. The space is certainly the strongest delimiter, but the longer the word, the less a bouma is a word.

I like the term "bouma" (which I didn't invent - it's a simplification of "Bouma-shape", used by at least two "real" scholars) simply because it's an elegant and effective solution for talking about this stuff.

I really should do some Google exploration and see what I can dig up...
Here's something: http://cgm.cs.mcgill.ca/~luc/readability.html

And if I remember correctly, you've used the term yourself! :-)

--

> this new religion

?
It's just part of good craftsmanship.

hhp

John Hudson's picture

...we read letter-clusters

Hmm. I've got an idea, why not call them 'letter clusters'? :-)

hrant's picture

> why not call them 'letter clusters'?

Because that emphasizes the perceived prominence of individual letters, whereas -in terms of immersive reading- you need to actually demote that idea, in favor of emphasizing the "nebulous" nature of reading. A new term helps make a clean break from legacy misconceptions.

hhp

Mark Simonson's picture

I recall noticing fl and fi ligatures when learning to read (first grade, I think). (I actually have copies of some of the books that were used, so I know they were there.) I don't recall being confused or alarmed by them. It was more like a discovery about how things are done.

John Hudson's picture

Because that emphasizes the perceived prominence of individual letters, whereas -in terms of immersive reading- you need to actually demote that idea, in favor of emphasizing the "nebulous" nature of reading. A new term helps make a clean break from legacy misconceptions.

I think this is why I have a problem with your boumas. I think understanding of reading has advanced considerably in the past hundred years or so, but it is not complete, and one of the areas that we still don't fully understand is exactly how we recognise these clusters of letters. I'm not aware of evidence that eradicates the possibility that we are assimilating multiple data including the sub-pattern of individual letters within the larger pattern of the cluster. I don't think there are grounds to over-promote the 'nebulous' by forcibly demoting individual letters beyond consideration. The 'legacy misconception' is that we read by moving from letter to letter; this is clearly bunkum and has been known to be false for a long time, but recognising this misconception is not the same thing as saying that individual letters do not contribute to the recognition of, ahem, bouma-shapes. Obviously individual letters do contribute to this recognition, because they define the overall bouma-shape, but I don't think we know enough, yet, about how this recognition takes place.

hrant's picture

> it is not complete

Totally. And really, it never will be.

> I'm not aware of evidence that eradicates the possibility that we are assimilating multiple
> data including the sub-pattern of individual letters within the larger pattern of the cluster.

But it is there. Herman Bouma did a lot of it.

There is empirical evidence of the retina's acuity distribution, empirical evidence of the degree of recognizability of individual letters within the parafovea, and empirical evidence that saccades exceed the range of the (hi-res) fovea by about a factor of three. One can only conclude that individual letters are simply not used more than half the time. Bunches of people have concluded this.

So for example Larson's mistake -I presume- is to assume that ClearType text is fully immersive, and that throws everything off, in many ways.

> forcibly demoting individual letters beyond consideration.

You're right - not beyond consideration. In fact, when you run into a brand-new word, you resort to letter compilation. But that's rare. And really, the best reason not to compile letters into words is good old-fashioned efficiency: letter distribution is not flat, and frequent groups of letters greatly reduce the overall recognition time of a text, because things proceed in clumps. Even in the hi-res fovea, boumas are used primarily, simply because it's much faster. The brain is a quick and dirty little devil.
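If anyone doubts how skewed letter-group frequencies are, it's easy to check with a few lines of Python - a minimal sketch, where "corpus.txt" stands in for any plain-text file of running English you have handy:

from collections import Counter
import re

# Count letter bigrams to show that letter-group frequencies are
# heavily skewed: a handful of clusters dominate ordinary prose.
# "corpus.txt" is a hypothetical plain-text file of running English.
text = re.sub(r"[^a-z]+", " ", open("corpus.txt", encoding="utf-8").read().lower())
bigrams = Counter(w[i:i + 2] for w in text.split() for i in range(len(w) - 1))
total = sum(bigrams.values())
for gram, n in bigrams.most_common(10):
    print(f"{gram}: {100 * n / total:.2f}%")

On typical prose the most common pairs (th, he, in, er...) take a disproportionate share of the total - exactly the kind of recurring clumps a bouma reader can exploit.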

The basic valuable conclusion of the bouma model is that legibility of individual letters and readability of real texts are somewhat opposed. Not totally opposed of course, but think of it as vectors 90 degrees apart. So a letter that is too much itself is bad for reading, and a letter that easily decomposes into components -in the low-frequency (blurry) layer of perception-, for example thanks to stroke contrast, is good for reading. This is a core reason why some studies have found that sans fonts are more "legible", while others have found the opposite.

hhp

John Hudson's picture

There is empirical evidence of the retina's acuity distribution, empirical evidence of the degree of recognizability of individual letters within the parafovea, and empirical evidence that saccades exceed the range of the (hi-res) fovea by about a factor of three. One can only conclude that individual letters are simply not used more than half the time. Bunches of people have concluded this.

Well, 'not more than half the time' is a heck of a lot more than I thought you were giving individual letters credit for. Consider, though, your own examples of readability in para-foveal vision on your website: apart from the very obvious benefit of clear ascenders, there are evident benefits of certain letterforms within the bouma-shapes that limit fixations and regressions. The lowercase k in your images seems to me a good example of this. To say that a bouma-shape in one typeface is more readable than a bouma-shape in another typeface is necessarily to say that individual letterforms have a profound impact on bouma-shape recognition.

I'm not debating legibility vs. readability; I largely agree with you on that front. I'm saying that individual letterforms provide the underlying sub-patterns of all bouma shapes, and this needs to be incorporated into a useful description of reading. There is a very simplistic and reductionist view that says bouma-shapes are recognised by their 'outline', but the examples on your own website clearly show the importance of internal shapes and space within the bouma-shape. When multiple data are contributing to pattern recognition in a complex way, I don't think it is a good idea to talk about 'demoting' the importance of a particular kind of datum, even if the intent is to correct an historical conceptual imbalance, albeit one to which almost no one still holds. Why not simply try to be accurate and, when necessary, acknowledge the incompleteness of our description of reading and, hence, the uncertainty of our practical responses? I'm not advocating paralysis: there is still a lot we can learn from the description and apply to type design.

ponofob's picture

I wonder if I've misunderstood, but the whole bouma thing made me think of the big debate that went on in France (I don't know about other countries) about how reading is taught. There was a method that preferred to teach reading by shadowed words, or something like that, as opposed to the old one, which basically taught syllables one after another to build words. The "new" method was interesting in that it let children read some things early (their name, common words such as "mother"…

hrant's picture

> 'not more than half the time'

I was being tentative for a change, gimme a break! :-)
It's not crazy to put letter-wise compilation as low as 10-15%.

> there are evident benefits of certain letterforms within the bouma-shapes that limit fixations and regressions

Ah, but you're looking at it "deliberatively": you have all the time to consciously analyze each letter and figure things out - that's not how immersive reading works: for the sake of overall speed, the brain follows a "lossy" model where faults are tolerated, even absorbed.

> individual letterforms have a profound impact on bouma-shape recognition.

Well, sure. It's just a matter of understanding how a letter participates in reading. The distinction between the letter-wise and bouma models affects the ideal x-height size for example.

> There is a very simplistic and reductionist view that says bouma-shapes are recognised by their 'outline'

Yes, that is somewhat simplistic, especially in the foveal region (which still uses boumas primarily). But it's a fact that the brain recognizes shapes outside-in: it starts with the silhouette and works its way inward as needed. The point is that the better the silhouette, the faster the reading.

Again, avoid seeing the forms "deliberatively": that's in fact the difference between legibility and readability.

Good text type design relies on the proper balance between letter-wise and bouma decipherment, where the latter should get the lion's share of attention.

--

> the big question .... about reading learning.

In fact the same "battle" has been raging in US schools for a long time. Some people say you have to "sound-out" each letter in a word to teach children to read, others say that -after you teach them the individual letters- you simply show them words and tell them what they are, and they'll become immersive readers sooner.

But over here I haven't heard of trying any "special effects" to words like that... maybe that's too confusing to the New World people. ;-)

hhp

steve_p's picture

On the original question, I agree with Yves:
> Choose a typeface where the lc 'f' doesn't clash into the dot of the lc 'i'.

I can't see any reason to take that issue any further. (Most kids who are beyond really simple abc books would have no problem with ligatures, but some might and I don't see the point of confusing them for no benefit).

> the big question .... about reading learning
This went on in England for a while, but now it seems to have reverted to the traditional method, where children are encouraged to recognize word shapes (boumas, letter-clusters...) very early on. Flash cards are used with words that have a distinctive pattern of ascenders, descenders, etc.
When reading text blocks, it's only when the child fails to recognise a word that it is broken down.

>children only start reading boumas around the age of 9
I have no empirical research for this but my experience of my own children is that they appeared to use 'boumas' much earlier - maybe even 4 or 5.

hrant's picture

> my experience of my own children is that they appeared to use 'boumas' much earlier

Since the bouma model is the "traditional method" where you are, I can believe it!

hhp

steve_p's picture

OK, maybe I'm using a different timescale from you when I say traditional. What I mean is that the method I learnt at school (thirty-something years ago) was based on (what teachers called) word shapes.
By the early 1980s I was aware of some controversy over 'new' teaching methods, where letter recognition was more popular.
With my own children learning to read about 7 or 8 years ago, I noticed that the word-shapes approach had returned.

ponofob's picture

In France it is called the "global method". It seems that now they mix it with the syllabic one.
I don't think there is a universal way of learning to read, not even one that works for all Latin-alphabet users. The "word shapes" approach may work in English, which has far fewer words that are structurally similar except for small differences (between genders, for example), but it works less well in French, in my view.

hrant's picture

Well, certainly different scripts have different degrees to which they support the move from the decipherment of elemental shapes to the decipherment of groups of shapes (and the use of the parafovea) as a person is learning to read, but the tendency of the human brain to process information as quickly as possible is universal. Getting used to patterns in written language (frequent groupings of letters), and using as much of the information available as possible (through the entire retina, not just the fovea) happens naturally for any normal person.

It would be very interesting to compare English and German, for example. My guess is that, since the latter has longer words on average, it reads more slowly.
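(This is easy enough to measure crudely. A minimal sketch, where "english.txt" and "german.txt" are hypothetical plain-text samples of running prose in each language:

def avg_word_length(path):
    # Average word length of a whitespace-tokenized text sample.
    with open(path, encoding="utf-8") as f:
        words = f.read().split()
    return sum(len(w) for w in words) / len(words)

for path in ("english.txt", "german.txt"):
    print(path, round(avg_word_length(path), 2))

Crude, since punctuation and compounds muddy things, but enough for a first comparison.)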

hhp

John Hudson's picture

Good text type design relies on the proper balance between letter-wise and bouma decipherment...

That's basically what I was asking you to acknowledge: the balance.

...where the latter should get the lion's share of attention.

We can probably argue for a good long time about what the appropriate balance is. But I worry that you are still misunderstanding my basic premise. You write:

The distinction between the letter-wise and bouma models affects the ideal x-height size for example.

This suggests that you are still thinking in terms of two distinct and opposed models (doubtless appealing to your love of adversarial relationships). But I'm talking about the rôle…

steve_p's picture

Maybe we all tend to write in a kind of forum-shorthand, without qualifying statements as much as we might in a more formal setting, so apologies if you are already taking the following as read.

Isn't it the case that all of the questions you seek answers to depend not just on the language, but also on:
the level of reading experience of the reader
the familiarity of the reader with the subject matter
the frequency of uncommon words, technical phrases, abbreviations
the motive of the reader for reading the material (are they relaxing with a novel, or magazine article, or must they understand complex concepts in order to pass an exam etc)
the quality of the reader's eyesight
lighting conditions
the duration of a reading session
and probably many other factors. So when you ask, for instance, "does the brain (re)construct info from lossy gleanings?" or "what is the ideal x-height?", there really is no answer except "it depends on the conditions".

As an example, reconstructing meaning from lossy information may be faster but risks misreading the text. If I'm reading a film review I can afford to skim the text, because it doesn't really matter if I get the wrong idea about the plot.
Now if I'm reading the instructions on how to pack a parachute I think I'm going to read very differently. I think every single word is going to come very slowly and deliberately under the centralised glow of my foveal gaze.
Then again, if that film review makes lots of references to unfamiliar, perhaps foreign, surnames or place names, then I might just have to go back and read it again...

hrant's picture

> you are still thinking in terms of two distinct and opposed models

Well, it depends on what you mean by "model". To me there is one model of reading - but it's very complex. The heart of the complexity is in fact that there are two poles: letter-wise and bouma decipherment. I'm not sure how "opposed" they are, but they are certainly different things.

Nothing exists in a pure state, and any given reading instance lies somewhere between total letter-wise and pure bouma decipherment. My argument is that as you're learning to read (a given script/language) you progress from the former to the latter, and you spend the aggregate of your adult life much closer to the latter.

> I'm talking about the rôle…

John Hudson's picture

I base my view -- that the majority of reading, in the sense of cognitive processing of perceptual elements (as opposed to the wider sense that includes liminal recognition of peripheral cues, regression, etc.), takes place in foveal vision -- on what studies indicate about the phenomenon of lateral masking. This is the phenomenon by which adjacent shapes in parafoveal vision actually distort perception of each other: note, not blur each other, as the illustrations on your website suggest, but distort. This distortion, because it is the result of interaction between letters, is present not at the individual letter recognition level but at the bouma-shape level. The lateral masking of adjacent letters in parafoveal vision results in radically underdetermined bouma-shapes, with no recourse to internal details because it is the distortion of these details that is affecting the bouma-shape as a whole.

That said -- but without undermining the view that most cognitive processing during reading involves perception in foveal vision -- the quality of cues available in parafoveal vision is affected by type design, which can certainly make it harder or easier to anticipate fixations. Mansfield, Legge and Ortiz (University of Minnesota) published an interesting study in which they addressed the 'paradox' of lateral masking: the fact that lone letters in parafoveal vision, without flanking letters, are easily recognised, but strings of letters become unrecognisable either as individual letters or as groups of letters. They measured the phenomenon of lateral masking as the space between shapes with and without 'joining flanks' was reduced, with some interesting implications re. the usefulness of serifs. Lateral masking is released when shapes with 'joining flanks' get close enough to form a single figure. In other words, letters with 'joining flanks', e.g. serifs, are more likely to release lateral masking and form recognisable letter clusters, but note the central implication of all this: bouma-shapes are built up from individual letters, whose form and interaction determine the recognisability of the bouma-shape.

This is the conclusion of the Mansfield, Legge and Ortiz paper, with direct implications for your 'nebulous' reading concept: A 'blurring' explanation cannot account for the release of masking when the flanks and target form a single figure. Our finding suggests that lateral masking occurs only after figural segmentation.

hrant's picture

Lateral masking (LM) is difficult to grasp. I have to admit that it's one of the two big holes remaining in my understanding of reading*. On the other hand, what I do understand about it indicates that it doesn't decimate parafoveal information the way you seem to be implying. Three things:
1) LM doesn't affect all shape-adjacencies, just certain ones where curves and straights are in certain configurations.
2) As far as I understand, even foveal vision isn't immune to LM.
3) Without any empirical measurements of saccade length versus foveal span, you could plausibly use LM to think what you do. But the empirical measurements are in fact there to give credence to the view that the parafovea is the major player - the evidence is where the idea comes from, to me. How could saccades span three times more text than the fovea can see while enabling us to still read? Again: what kind of data from the parafovea can provide such a good location to saccade to next, while not providing significant usable textual information, especially in such a wide span?

* The other -actually the much bigger one- is something that nobody seems to understand yet (and I'm not holding my breath that it'll happen in my lifetime): how the brain maps a shape to a concept.

> strings of letters become unrecognisable

But other experiments have shown otherwise. Who do you choose to believe? Not any single researcher, but the "converging" model.

And actually, reading the rest of your elaboration makes me think that the UoM researchers* validate the bouma model! You can't single out individual letters in the parafovea, but when multiple letters cluster together they form a nice big shape (that also happens to be more efficient for reading). And serifs are very interesting: my view is that they help readability in effect because they hinder legibility! :-/

* Whose work I think I have a copy of, but haven't read yet - I have yet to muster the forces to climb the LM mountain. :-) BTW, you might also check out this: "Enhancing the perception of form in peripheral vision" by G Geiger and J Y Lettvin, '85 - it contains some very interesting LM research.

Also, the issue might very well be that they were measuring deliberative decipherment but in the parafovea (not the fovea where it happens naturally), and that's not something that happens in normal reading: you don't stare at a point and try to figure out what's a few centimeters to the right of it! Where consciousness fails -by worrying too much about absolute recognition- the subconscious might very well say: Who cares if I'm not sure? It's close enough - let's just assume the most probable case based on the semantic context of the actual phrase, and move on, and if we were wrong, we'll regress (which in fact does happen). Speaking of regressions: they could only happen if reading is in fact a "lossy" process.

As for merely blurring, yes, it is indeed naive - it's just a start, and I hope to have subsequent pages on my site to delve deeper.

hhp

hrant's picture

Ah, Peter... :-)

> shouldn't the question be: what happens where, and when?

It would indeed be great to know! But in the absence of such depth of understanding, we could still try to extract some usefulness out of what we do know. The nature of the balance between letter-wise and bouma decipherment (which I feel can be gleaned with what we already know) does affect type design, for example in the determination of x-heights: if letter-wise is the main player, why not make x-heights as large as possible (without for example making the "h" look too much like an "n")?

> what Hrant refers to as boumas

Hey, you call them "bouma"s when we talk in private! :-)

> is information about internal shapes accessible to the parafoveal processing mechanisms used in immersive reading, or is it masked there?

Whether it's masked or not, most of the parafovea is inherently too blurry to convey the jumbled internal details. And again: the brain processes shapes outside-in, and presumably stops moving in when the recognition of a given shape is adequate.

> 97.3% of words in an essay-length text are uniquely identifiable!

Which source is this? I have a bunch of photocopies yet to read, not to mention "process"! :-/

But anyway, is that percentage in terms of the overall text, accounting for the semantic context? That's believable. I'll also note that the proportion of regressions to forward saccades makes this number very suspicious. I suspect it might have been arrived at through tests of deliberative reading, not immersive.

> the expansion of the zone of perceived acuity?

Are you saying the fovea is bigger than it seems? By a factor of three?!

Lastly:
What about the view that boumas are the primary mechanism in the fovea as well?

hhp

John Hudson's picture

The nature of the balance between letter-wise and bouma decipherment (which I feel can be gleaned with what we already know) does affect type design, for example in the determination of x-heights: if letter-wise is the main player, why not make x-heights as large as possible...

No no no, that doesn't follow. You're regressing to the legibility vs. readability duality that we left behind many discussions ago. What Peter and I are talking about is the rôle…

hrant's picture

> we left behind

Who's "we"? To me it's all related.

> What Peter and I are talking about is the rôle…

sham's picture

Can "children" handle ligatures? I suppose it depends on the age and experience of the child. It also depends on the book itself. Some books are meant to specifically teach reading skills, others are intended for parents to read to their kids. Children range from 0-17 years, but I'll assume you're talking about young children.

Personally, my parents bought me books from all over the world when I was a child. Many were set in both English and another language. Usually these books used ligatures, and they were even more magical because the typography was special. In fact, I credit my current passion for type and design to my early inspiration from these "children's books."

My point is this: children's books are very important, and grown-ups shouldn't underestimate the readers of these books. Do you really think a five-year old will be harmed by an "fi" ligature? This doesn't mean you should incorporate every possible ligature, but if it really increases legibility, it will most likely go unnoticed.

hrant's picture

> grown-ups shouldn't underestimate the readers of these books.

Good point.

--

Auto-Correction:
> the acuity of the fovea

Better: the acuity distribution of the retina

hhp

John Hudson's picture

Hrant, I'm at the point of giving up this discussion. Every time I try to explain what I'm talking about, you just shoot back these disrupted one-line responses to snippets cut from my message, typically cutting out the central point (in this case, noting the phenomenal difference between letter recognition in isolation and in bouma-shapes and, hence, that the latter does not imply giant x-heights or other design practices associated with the former). Rather than considering this or moving the discussion forward, you just keep blurting out the same thing over and over again. I don't feel I'm engaged in much of a conversation.

hrant's picture

> I don't feel I'm engaged in much of a conversation.

And I don't feel like I'm discussing this with somebody who knows the field well enough - sorry. Hey, I don't know enough about encoding, so I tend to trust your own views on that, since your logic is generally sound. I've been studying this stuff for over 5 years, and your arguments are too preliminary.

Nonetheless, I'm trying to develop things by asking elaborative questions based on your statements, but you're not answering any of them. Silence is not a valid answer, but "I don't know" would do just fine!

My "one-liners" encapsulate my ideas. If you don't agree with any of them, say so, and explain -specifically- why they're flawed - that's what I've been doing to you. Like I keep saying "the brain recognizes shapes outside-in" (which is huge in terms of bouma superiority), but you look the other way. And I keep saying that saccades are simply too long for the fovea to play the bigger role - same [non-]reaction.

hhp

hrant's picture

John: I'm sorry if I was in some way insulting. Just to be clear, I think your logic is generally solid, but I also feel you have a problem accepting something when it comes from Hrant Papazian, and my contention is that when there's a gray area you'd rather strike down my ideas than elaborate openly and patiently. If you feel frustrated, please at least know that so do I.

hhp

John Hudson's picture

Hrant, thanks for the apology, because the previous message was insulting. I think the problem here is that you are raising points (length of saccade, outside-in shape recognition) that support your view -- and I acknowledge that they support your view -- and I am raising points (lateral masking, insufficiency of envelope, importance of sub-patterns to bouma-shape recognition) that, at least, suggest that your view is simplistic and far from complete. I don't think you are wrong; in fact, I still think we agree on more than we disagree about. You've repeatedly stated that you are interested in what is useful, specifically what is useful to type design, and I share this interest. Increasingly, it seems to me that the usefulness of the envelope stops somewhere around 'make sure ascenders are obvious and the x-height isn't too big'. This would be an important and radical insight if we were still living in an age of gigantic x-heights, but how many recent text typefaces suffer from this problem? So I'm interested in the more complex implications for type design that emerge from the acknowledgement that individual letter shape does play a rôle…

hrant's picture

Brian, your DIY testing is interesting. I've learned that astronomers actually use parafoveal vision consciously on occasion (it helps them see flicker in celestial bodies, since the fovea is less sensitive to movement), and one day I hope to explore that myself.

> distortions

The distortions are complex, sometimes mysterious, and they happen in different realms. In terms of raw acuity (resolution), biological and perceptive tests have revealed that the retina has a small central area of high acuity, beyond which acuity plunges deeply, and then plateaus. Picture a narrow bell curve with the top sheared flat. So in terms of rod/cone "hardware", the fovea has superb resolution, while the parafovea starts blurry and gets blurrier. In addition to blurring, there's a strange phenomenon called Lateral Masking which causes certain shape adjacencies to implode, so to speak, reducing decipherability. LM is not very well understood yet, but from what I understand it happens at a higher level than the retina. Lastly, there's the issue of deliberative/conscious versus immersive/subconscious perceptual processing. When you get somebody to focus on a point, and you then ask him to deliberately describe what he sees some distance to the side (usually the right), how accurately does that reflect what happens when we read "naturally"? It can be said -and I would agree- that the amount of information the immersed mind needs to feel satisfied that a bouma has been deciphered is much less than the conscious mind needs to feel like it's answering an explicit decipherment question adequately.
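To make that profile concrete, here's a toy numerical model of acuity versus eccentricity - the constants are illustrative assumptions, not measured values:

import numpy as np

# Toy model of retinal acuity versus eccentricity (degrees from the
# center of gaze): a high plateau in the fovea, a steep plunge, then
# a low plateau - the "sheared bell curve" described above.
# All constants are illustrative, not empirical.
def acuity(ecc_deg, foveal_radius=1.0, floor=0.1, steepness=2.0):
    return floor + (1.0 - floor) * np.exp(-steepness * np.maximum(ecc_deg - foveal_radius, 0.0))

for e in (0, 1, 2, 4, 8, 16):
    print(f"{e:2d} deg: {acuity(e):.2f}")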

In terms of empirical findings, Herman Bouma has done tests to measure the acuity of the parafovea (both in the recognition of individual letters as well as the recognition of component letters of [short] words), and a few researchers have done work towards understanding lateral masking.

hhp

hrant's picture

> insufficiency of envelope
> importance of sub-patterns to bouma-shape recognition

Could you elaborate on these two?

> it seems to me that the usefulness of the envelope stops somewhere around 'make sure ascenders are obvious and the x-height isn't too big'.

That is perhaps the most obvious result of accepting the prominence of boumas, and of course there are no resultant hard numbers one can lean on to make better fonts.

I would say however that there are other important results too, things like: the importance of tight spacing, the positive role of serifs, etc. The realization that we primarily read the silhouettes (your "envelopes") of clusters of letters really has deep and multifarious implications.

Also, I would add that a "culture of understanding" is perhaps the most valuable result of learning something specific like how boumas work.

> how many recent text typefaces suffer from this problem?

It's hard to say, since:
1) Readability performance is on a curve - there are no plateaus. There is always a point (although it moves) where an x-height is optimal.
2) Who knows how small an x-height should be? Maybe the average contemporary x-height is indeed too large. I tend to think it is, if not ridiculously so like it used to be.

But really, more important is that overall "cultural" attitude. It's pretty clear to me that type designers as a whole haven't been taking the technical aspects of the craft seriously enough, perhaps as a result of the "romantic" 90s. This atechnical attitude is contagious, and it permeates type users as well as designers. With more understanding of why boumas for example matter, readers will end up benefiting.

A specific example that sticks in my mind is Bilak's Eureka, and how it was [mis]used in One (?) magazine. Eureka is a wonderful font in many ways, including a modest x-height, but it's too loose to form good boumas, and this is a direct result of Bilak's announced attempt to make the individual letters more legible*. Now, this looseness could always be largely corrected through negative tracking in the layout app, but of course you'd have to know that's the problem. One magazine soon abandoned Eureka and went with something ultra-conservative (Minion or something). Although I have no proof, I do have a strong suspicion that they realized its readability was low, but had no idea how to fix it, so they panicked and decided to play it super-safe. All they really needed to do was apply negative tracking - but they didn't know what that really does to a text face.

* The good news is that his more recent work, like Fedra-Serif, is much more well-rounded - and yet doesn't forsake his wonderful originality.
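To make the tracking point concrete, here's a rough Pillow sketch of what negative tracking does mechanically. The font path is hypothetical, and note that naive per-glyph rendering like this discards kerning and ligatures - it's only an illustration:

from PIL import Image, ImageDraw, ImageFont

# Render a line glyph by glyph, shrinking each advance by `tracking`
# pixels, to mimic negative tracking in a layout app.
def draw_tracked(text, font_path="SomeTextFace.ttf", size=48, tracking=-2):
    font = ImageFont.truetype(font_path, size)
    img = Image.new("L", (size * len(text), int(size * 1.5)), 255)
    draw = ImageDraw.Draw(img)
    x = 10.0
    for ch in text:
        draw.text((x, size // 4), ch, font=font, fill=0)
        x += draw.textlength(ch, font=font) + tracking
    return img

draw_tracked("typography", tracking=-3).save("tighter.png")
draw_tracked("typography", tracking=0).save("default.png")

Compare the two images from a couple of meters away: the tighter setting clumps into word-shapes sooner.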

hhp

hrant's picture

Just to elaborate a bit more on what I see as the state of the craft:

The discussion here is informal, but the level is pretty high compared to what type designers usually get exposed to. Forget the intricacies of lateral masking, the bulk of the design community has yet to grasp the basics. This boumas stuff is pretty radical to most people, and when you listen carefully to what even some of the most highly-regarded designers really have to say, you realize that they often have no idea about this stuff. They make the most basic of mistakes, like believing that serifs create some kind of "flow"... So how do they still manage to make good fonts? Simple: follow the established -and generally sound- conventions. Mimic.

hhp

hrant's picture

> your negativism about its present prospects

No, I think the prospects are better than they've been in 30 years! :-) But I am indeed negative about where they've been recently, and still largely are.

Together, we are trying to lift the craft out of this quagmire. Let's see if our arms hold up.

--

I guess our main divergence of opinion here concerns the balance of importance of the silhouette (envelope) versus the internal details of a bouma. Larson notwithstanding.

As happens often enough to keep me online, thanks to this thread (and countless other threads on many forums) as well as some private exchanges, I've clarified my own views on this particular matter, unifying many disparate ideas:

The retina provides the brain with a great amount of information. Perhaps to manage this overload efficiently, the brain processes this information in layers of frequency: starting with "blurry" low-frequency versions that can be analyzed quickly, and moving to sharper higher frequency versions that take longer. The number of such layers the brain resorts to depends on need. In terms of reading text (composed of horizontal lines of well-defined b&w forms), there may only be two layers of frequency needed: a coarse one that conveys sufficient detail of silhouette, and a sharp one that conveys fully unambiguous data (where individual letters could be -but are generally not- fully deciphered).

My view is that the low-frequency data is usually adequate to decipher a bouma, considering the rich context of actual language. This view is substantiated by the fact that the parafovea (which cannot convey the higher frequencies that the fovea can, but which is larger) covers most of the reading surface during immersive reading. It is also substantiated by research which has shown that the brain processes shapes "outside-in". In truth this outside-in business is misleading, because there's really no such "geographic progression": it's a matter of subsequent processing of increasingly higher-frequency data, where internal details become increasingly clear. However, the level of detail needed to strike a balance in reading between speed and limiting errors is sufficiently expressed in the parafovea, up to ~3 times the span of the fovea.
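The low-frequency layer is easy to simulate, by the way: render a word and low-pass filter it. A minimal Pillow sketch (the font path is hypothetical):

from PIL import Image, ImageDraw, ImageFont, ImageFilter

# Simulate the coarse, low-frequency "layer" by blurring rendered text:
# the silhouette (envelope plus coarse white) survives the blur, while
# letter-level detail drops out. The font path is hypothetical.
font = ImageFont.truetype("SomeTextFace.ttf", 64)
img = Image.new("L", (600, 120), 255)
ImageDraw.Draw(img).text((20, 20), "reading", font=font, fill=0)

for radius in (0, 2, 4, 8):  # 0 = the sharp, high-frequency version
    img.filter(ImageFilter.GaussianBlur(radius)).save(f"bouma_blur_{radius}.png")

The blurred versions approximate what the parafovea gets to work with.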

hhp

hrant's picture

Append:
And it's not really a matter of parafovea versus fovea, since the latter also conveys the low-frequency data first (because it's processed faster). However, the fovea can convey the higher-frequency version of the data when needed: when the silhouette is not good enough; or when the bouma is unknown and letter-wise decipherment/compilation (the slowest of the slow) becomes unavoidable. Note that in the former case: if the silhouette is in the parafovea it results in a fixation upon it; if it's already in the fovea it means internal details will in the end be needed for adequate decipherment.

So:
The better the bouma, the less often the higher-frequency layer is needed - and that translates into greater reading speed/comfort.

hhp

John Hudson's picture

I guess our main divergence of opinion here concerns the balance of importance of the silhouette (envelope) versus the internal details of a bouma.

I believe the silhouette is shot through with white: the white of the counters. When I have some time over the next couple of days, I'll post an explanation of why I believe this high-contrast pattern is important and, also, why I think it is not only perceived in immersive reading but is actually unavoidable.

.00's picture

...

hrant's picture

> I believe the silhouette is shot through with white

:-/
OK, I'll start using "envelope" from now on...

But I'll be waiting for your elaboration, certainly.

--

> increasing the letterspacing improved both recognition and legibility

But reading road signs is deliberative, not immersive.

BTW, whoever is going to TypeCon: If you make it to the James Montalbano, Kent Lew, and/or Peter Bilak presentations, could you please take some notes? Those are the three presentations I'll be missing the most, and I'd really appreciate as much "proceedings" as possible.

hhp

anonymous's picture

I would have to say that it depends on the level of the book. If the reading is going to be done at the sentence level, the kids are old enough to understand a ligature. If the book is quite short, where reading is done on a per-letter basis, i.e. b-a-l-l, then I would definitely skip the ligature.

As I understand it, good typography is about readability. Ligatures aid that by clarifying places in a word where the combination of letterforms is awkward. Everyone likes that. And I can imagine that as a kid, ligatures would seem fancy and interesting.

As a father I would appreciate any addition of quality and sophistication to my children's books. I believe that they will adapt to any level of sophistication they are presented with. For me ligatures are a go

Bald Condensed's picture

I follow Just van Rossum's opinion in these matters: don't use ligatures if they're not warranted by the design of the typeface. Choose a typeface where the lc 'f' doesn't clash into the dot of the lc 'i'.

It's not "correct" typography to always use ligatures; they
are not compulsory as far as I know.
It is more correct to use them when they're needed or as
an artistic choice.
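If you're not sure whether a given font even offers the standard ligatures, a quick fontTools sketch can tell you - the file name here is hypothetical:

from fontTools.ttLib import TTFont

# List the OpenType layout features a font exposes; "liga" is where
# standard ligatures like fi/fl normally live. File name is hypothetical.
font = TTFont("SomeTextFace.otf")
gsub = font["GSUB"] if "GSUB" in font else None
tags = sorted({rec.FeatureTag for rec in gsub.table.FeatureList.FeatureRecord}) if gsub else []
print("features:", tags)
print("has standard ligatures:", "liga" in tags)

(Fitting, perhaps, that fontTools is Just van Rossum's own handiwork.)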

Bald Condensed's picture

Cross-posting is a bitch... Look at my silly little entry compared to John's or Hrant's.

THX for the link, Hrant. Now at last I know what boumas are. (I knew what they were, I just didn't know they were called boumas.) I can die in peace now. ;)

Joe Pemberton's picture

I just trade-marked LetterCluster™ John. I expect royalties too.

Too bad we can't just use Word™, because Microsoft owns that trademark.

=)

anonymous's picture

I agree with those of you who have said that ligatures would cause problems for some children at various developmental stages. I did make an exception for the developmental level of the audience; special needs should definitely be considered.

Still, I would buy the book with ligatures. Then I could take it to play groups and show how much better my kids were. Everyone would be so impressed.

Bald Condensed's picture

Joseph, LOL

Hrant, you DO have a point. Consider me converted. ;)

John, LOL too

anonymous's picture

In relation to the side-thread here about perceptual processing in reading, shouldn't the question be: what happens where, and when? rather than who the 'major player' is, or where the majority of reading takes place. Specifically, in the context of what Hrant refers to as boumas: since by definition bouma-shapes include information about shapes internal to an aggregate of letters we process as a unit, as well as information about the envelope, the question is, is information about internal shapes accessible to the parafoveal processing mechanisms used in immersive reading, or is it masked there? Empirical evidence from at least one of Hrant's sources suggests that when there is discrimination up to the bouma-shape level, 97.3% of words in an essay-length text are uniquely identifiable! Does the way the parafovea is used in reading include discrimination to that level? Or does such a level of discrimination only happen during fixations, triggering word recognition and the expansion of the zone of perceived acuity?
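One can get a rough feel for that kind of figure with a crude sketch: classify each letter by its vertical shape class and count how many words in a list end up with a unique pattern. Python, with "words.txt" as a hypothetical one-word-per-line list; the published figure also leans on sentence context, so don't expect to reproduce it exactly:

from collections import Counter

# Crude bouma-envelope proxy: map each letter to ascender (A),
# descender (D) or x-height (x), then count how many words are
# uniquely identified by length plus shape pattern alone.
# "words.txt" is a hypothetical one-word-per-line list.
ASC, DESC = set("bdfhklt"), set("gjpqy")

def shape(word):
    return "".join("A" if c in ASC else "D" if c in DESC else "x" for c in word)

words = [w.strip().lower() for w in open("words.txt", encoding="utf-8") if w.strip().isalpha()]
patterns = Counter(shape(w) for w in words)
unique = sum(1 for w in words if patterns[shape(w)] == 1)
print(f"{unique}/{len(words)} words have a unique shape pattern")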

pe

Joe Pemberton's picture

My two cents: children are amazing. They will surprise you every time. So, it's worth a bit of user testing if you're serious about the ligatures. (Post a sample, I'll have my 5-year-old try it out. =)

However, I would suggest using ligatures only if they're a core part of your concept for the book -- such as typo-illustration or visual word play or even a puzzle of sorts.

anonymous's picture

In relation to the original question for this forum, I would guess that for readers who have mastered the writing, recognition, naming and orthographic assembly of a particular alphabet's placeholders (letters), the curve for gaining familiarity and readerly comfort with ligatures is quite shallow. As a child I delighted in such clever concoctions.

It is clear that, for many typefaces, ligatures assist in guaranteeing the rhythmic cohesiveness of the orthographic clusters the letters help form.

Our perception of whether ligatures have a long-term perceptual processing benefit for immersive reading might or might not depend on how divergences in opinion about the mechanics of perceptual processing in reading -- such as the ones one senses in the above -- get resolved. I for one do not see a ready-to-hand answer emerging from what I think I understand about the process, and as a result feel hesitant about subverting this forum to that end. Perhaps the Mansfield, Legge and Ortiz material John Hudson cites can provide an oblique clue.

The Hrant source I referred to in my earlier post is Chapter 9 "Letter and Word Recognition" of Taylor & Taylor's _The Psychology of Reading_.

I use the term bouma because I like the emphasis it places on the word as a *perceptual* entity defined by more than just its 'envelope' and parts; this rather than as an orthographic cluster, whose identity as a word presumes a subattentive cognitive assembly process every time it is encountered. I see a similar emphasis in Gerrit Noordzij's talk about the word image and word-blindness. (Of course the word is more than that, but that is the side of it the type designer cares about most.) Most studies of reading I have consulted don't take this dimension into account when they frame their investigative strategies and draw their conclusions. This is why so much of what is surmised here is based on extrapolation (of which I think Hrant's #3 in post 2582 is a misfired example). I don't necessarily think my formulation or the term states well enough what is at stake here, but they go a ways.

pe

anonymous's picture

I agree profoundly with what you say, Hrant, about a developing 'a culture of understanding', but feel uncomfortable with your negativism about its present prospects.

I believe the study of perceptual processing in reading should be given disciplinary status within the encyclopedia of the sciences, and that reading in the discipline should be an educational requirement for type-smiths. Even if only to convince us that space-craft (Kindersley) is more than a nicety: it is a perceptual-processing requirement in texts meant for immersive reading.

First we will have to get clear about first principles, as this forum demonstrates.

pe
