Why do CAPITAL LETTERS so annoy us?

dberlow's picture

>Because your experimental set up did not really test what we were looking for...

Bill, maybe it'll be better if I ask you: did that study include any actual reading conditions? Did it contain any systematic study of real-world word-shape destruction, along the lines of Quartz or ClearType?

Cheers!

William Berkson's picture

>did that study include any actual reading conditions?

The tests involved regular and widely spaced three-letter combinations, in Minion, at a pretty big size but viewed from a distance, so low-resolution screens were probably not an issue. There were both words and non-words.

The complication was that Purcell and colleagues originally used a tachistoscope, which can expose a stimulus for intervals as short as 1 millisecond, to get their finding that the Word Superiority Effect disappears with wide spacing. Kevin was using a computer screen, whose refresh interval, if I remember correctly, is about 17 milliseconds. As a result, I think, Purcell's results couldn't be duplicated with his mask of overlaid Xs and Os.
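
A rough sketch of the timing constraint, assuming the screen ran at roughly 60 Hz: exposures and SOAs can only be whole multiples of the frame time, which is why the 1-millisecond control a tachistoscope offers is out of reach.

# Illustrative only: how a monitor's refresh interval quantizes the
# stimulus onset asynchronies (SOAs) an experimenter can actually use.
refresh_hz = 60.0                      # assumed refresh rate
frame_ms = 1000.0 / refresh_hz         # ~16.7 ms per frame, the "17 milliseconds" above

# A tachistoscope can expose a stimulus for as little as 1 ms; on screen,
# exposure durations and SOAs round to whole multiples of the frame time.
achievable_soas = [round(n * frame_ms, 1) for n in range(1, 5)]
print(achievable_soas)                 # [16.7, 33.3, 50.0, 66.7]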

The idea behind the experiment:

Peter Enneson had argued that the finding of a word superiority effect for mixed case--which seems to refute the existence of any gestalt or global "word image" effects--was flawed. The experimenters had not controlled for what is called the "Stimulus Onset Asynchrony." The SOA is an awkward name for the time interval between exposure of the stimulus (the word or non-word) and the exposure of the mask. After the exposure of the mask, subjects identify a letter in the word or non-word. If they are better at identifying letters in words, that is a 'word superiority effect'.
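
To make the paradigm concrete, here is a small sketch (the trial data are invented, not from Kevin's experiment) of how the effect is computed: flash a word or non-word, wait one SOA, show the mask, ask for the probed letter, and compare accuracy for words against non-words.

# Invented example data: one record per trial of the masking task described above.
trials = [
    {"kind": "word",    "correct": True},
    {"kind": "word",    "correct": True},
    {"kind": "word",    "correct": False},
    {"kind": "nonword", "correct": True},
    {"kind": "nonword", "correct": False},
    {"kind": "nonword", "correct": False},
]

def accuracy(kind):
    hits = [t["correct"] for t in trials if t["kind"] == kind]
    return sum(hits) / len(hits)

# A positive difference is a word superiority effect: letters are reported
# more accurately when they appeared inside a real word.
wse = accuracy("word") - accuracy("nonword")
print(f"WSE = {wse:+.2f}")   # +0.33 with this toy data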

Because we read mixed case more slowly--at 80% speed according to Frank Smith--Peter felt that a different reading process was going on, and that this confounded the results. And Marilyn Jager Adams had mentioned that mixed case required a longer SOA.

Following Peter's analysis, I hypothesized that there are two different Word Superiority Effects, one that works at very short times, which involves word image or gestalt, and one that works at longer times, when the word already gets into the brain's memory banks, and involves memory of spelling.

Since the experimental set up Kevin used had longer time intervals, and didn't have any difference in SOA for lower case and mixed case--unlike Marilyn Jager Adams--I figure that what I was looking for was not really being tested.

I also suggested other tests, involving disrupted spacing--as mentioned above--and also using missing letter parts, which I think would be the best test of all. However, the one Kevin started with involved words vs nonwords, at regular and wide spacing, both with all lower case and mixed case. And, as I said, I don't think the results were as revealing as I had hoped, because of the lack of fine control over the SOA.

Chris Dean's picture

An idea to assist in following threads of this academic nature:

The first time you mention a study, provide a full reference (APA or otherwise):

Majaj, N. J., Pelli, D. G., Kurshan, P., & Palomares, M. (2002). The role of spatial frequency channels in letter identification. Vision Research, 42(9), 1165–1184.

Thereafter, provide a name and date:

As seen in Majaj et al. (2002), it is obvious that…

I suggest this because, several pages in, it becomes a bit confusing to keep track, and if it's an exceptionally long thread, without dates (2002) it's very difficult to follow unless you start reading the entire thread from the beginning.

@ Berkson: which Purcell study are you referencing?

William Berkson's picture

The Purcell papers I am referring to are:

Purcell, D. G., Stanovich, K. E., & Spector, A. (1978). Visual angle and the word superiority effect. Memory & Cognition, 6(1), 3–8.

Purcell, D. G., & Stanovich, K. E. (1982). Some boundary conditions for a word superiority effect. Quarterly Journal of Experimental Psychology, 34A, 117–134.

Originally I predicted the vanishing of the WSE based on Peter's theory. In a literature search, Peter found that the experiment I had suggested had already been done, with the results I predicted, in these two papers.

However, why the WSE vanishes has never been explained, as Purcell was mainly concerned to remove spacing as a confounding factor in other experiments, not to investigate it.

We want to investigate it and other effects I have mentioned systematically, as we think they are a clue to a deeper understanding of reading. We think it will also refute the current notion that reading is all a matter of legibility of individual letters, and establish the importance of other factors, including gestalt effects.

dberlow's picture

>The tests involved ...

..no actual reading. :(

Cheers!

Chris Dean's picture

This is a great thread. A lot of good material.

@ Berkson: Thanks for the references. Re your MixEd CaSe aND STylE slide, have you conducted/published an experiment using these IVs (independent variables)?

William Berkson's picture

>..no actual reading. :(

I don't have access to a lab, or the financial backing to do psychological research.

I would love someone to do the timed SAT reading tests with different fonts and layouts, such as I mentioned above. These would be real reading tests.

However, the tests with flashed words and masks may help reveal some things about the reading process that we can't easily find out other ways. And the new insights may be later confirmed by real reading tests. So I don't think such tests are in principle useless.

What happens in the time interval between flashing a word very briefly, and flashing a mask? Answering that question may give us insight into how the brain--both visual cortex and language areas--processes words. Peter's view now is that we need to study this in more depth, by seeing how step-by-step small increases in exposure times and SOAs affect the Word Superiority Effect.

Kevin was generous enough to do these experiments, even though he was very skeptical about our hypothesis of the two levels of Word Superiority Effect. That is really a tribute to his open-mindedness and good will. Thank you Kevin!

Unfortunately, the experimental apparatus wasn't adequate to get the base result: the difference between lower case and mixed case words in the SOA needed for word recognition. Thus unfortunately the set up was not able to test what we were looking for.

Christopher, no, as I do not have access to a lab, I haven't done any experiments with mixed case and style. I do think it is pretty obvious, even from your example, that this is going to slow down reading very significantly, probably even more than mixed case alone. And this, I think, shows at the very least that the approximate "grid fitting" of letters is important to readability.

The shifting of the angle of the grid seems to have a particularly damaging effect on reading. I also have noticed that after working on an italic design for hours, when I switch to Roman, it appears slanted, even though it is actually upright. I think that shows that the brain has a "switch" for which grid it imposes when looking for letter features on, and departures from, that grid.

Chris Dean's picture

@ Larson: Have you written/published this work? I would love to see the paper.

Kevin Larson's picture

> Have you written/published this work?

The plan was for Bill and Peter to write this paper, but the paper frequently doesn’t get written when data doesn’t turn out conclusive one way or the other. No one is willing to publish (or read) papers that don't have strong conclusions.

> I don’t have access to a lab, or the financial backing to do psychological research.

I would like to emphasize here that reading tests don’t have large out-of-pocket costs. The only equipment that you need is a room with adequate light. I use a software program that costs a few hundred dollars (E-Prime), but that’s not even necessary for most studies. The only other cost is recruiting people to come and participate in your study. Friends or students on a college campus can be recruited cheaply.
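
For anyone who wants to try this without E-Prime, here is a rough sketch of a single flashed-word-plus-mask trial using PsychoPy, a free package; the words, durations, and screen settings are only placeholders, not the setup of any study discussed in this thread.

# Hedged sketch: one trial of a flashed-word-plus-mask task in PsychoPy.
# For real timing you would lock durations to screen refreshes rather than
# use core.wait(), but this shows the basic structure cheaply.
from psychopy import visual, core, event

win = visual.Window(size=(800, 600), color="white", units="pix")
word = visual.TextStim(win, text="world", color="black", height=32)
mask = visual.TextStim(win, text="XOXOX", color="black", height=32)

word.draw(); win.flip(); core.wait(0.050)   # ~50 ms exposure; SOA here is ~50 ms
mask.draw(); win.flip(); core.wait(0.200)   # mask replaces the word
win.flip()                                  # blank screen

keys = event.waitKeys()                     # collect the participant's response
print(keys)
win.close(); core.quit()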

Kevin Larson's picture

> ..no actual reading. :(

Reading a single word is reading.

For many questions it makes more sense to look at reading single words than reading sentences or pages. Earlier in this thread I described decoding and comprehension as two separate components of reading. Typography has a clear impact on decoding, but comprehension is impacted by the quality of the writing and the readers’ background knowledge.

Reading single words is a good task because it measures the ease of decoding separate from the quality of the writing or readers’ background knowledge.

There are typographic variables that extend beyond single words, and longer reading tasks are needed to investigate these variables, but these are more difficult to measure because of the impact of comprehension.

dberlow's picture

>The only equipment that you need is a room with adequate light.

Couldn't each participant just stay at home? You could save a bundle and have more diversity in readers than just the unusual suspects. Can't you do this online yet?

>Reading single words is a good task because it measures the ease of decoding separate from the quality of the writing or readers’ background knowledge.

Well, then! You now have ideal conditions for studying readability (of Chinese). Not to mention that reading single words, for the most part, loses those logically pesky saccades. Let's say, more than 7 saccades is reading, and less is leging.

It seems eager enough to lump our interaction with long passages of text & our interaction with the single word into the general meaning of 'reading' in the context of these 'conversations'?

Decoding and comprehension, described as two separate components of reading, can't be proven to be studyable one at a time.

Decoding and comprehension break down into topics that cross your definitions. This was proven in one such study that showed the Helvetica e more 'readable' than the old-style Dutch e. That is false in terms of 500 years of readability, true in terms of 50 years of legibility (and eye-doctor visits).

I would always say you are studying signage, or to be more technical, legibility. Whenever you get to readability, you'll know what to do.

Cheers!

Chris Dean's picture

"Couldn’t each participant just stay at home? You could save a bundle and have more diversity in readers than just the unusual suspects. Can’t you do this online yet?"

Yes, but that's an internal vs external validity trade off. Both have their benefits.

John Hudson's picture

David: Let’s say, more than 7 saccades is reading, and less is leging.

Why not 6? or 8? or any other arbitrary number?

We can, of course, assign names to any arbitrary phenomenon we want, even deciding to call reading a sentence in isolation different from reading a sentence embedded in a longer text, even if the same number of saccades are employed. But what we call things is not the question. The question is what is actually happening when we read.

I'm very much aware that quantitative changes can result in qualitative changes, so maybe 7 saccades is the magic number beyond which something different happens. But I wouldn't assume that this is the case or, even, that there is a magic number, i.e. that our perceptual and cognitive functions have, in effect, more than one way of reading.

I also think that we can become subjectively aware of experiential changes in consciousness that are independent of unconscious functions. In other words, how we experience reading a page of text might be consciously different to us from how we experience reading a head word at the top of a dictionary page, but that does not imply that we are using different perceptual and cognitive functions.

Chris Dean's picture

"…maybe 7 saccades is the magic number…"

Miller, G. A. (1956). The magical number seven, plus or minus two: some limits on our capacity for processing information. Psychological Review, 63, 81–97.

enne_son's picture

Nice!

"The input is given in a code that contains many chunks with few bits per chunk. The operator recodes the input into another code that contains fewer chunks with more bits per chunk."

Isn't this what happens in learning to read, where letters make up a word?
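
If I remember the paper right, Miller's own illustration of recoding is turning a string of binary digits (many chunks, one bit each) into octal digits (fewer chunks, three bits each); a quick sketch of that arithmetic:

# Miller-style recoding: 18 one-bit chunks become 6 three-bit chunks.
bits = "101000100111001110"
octal_chunks = [str(int(bits[i:i+3], 2)) for i in range(0, len(bits), 3)]
print(len(bits), "chunks ->", len(octal_chunks), "chunks:", "".join(octal_chunks))
# 18 chunks -> 6 chunks: 504716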

dberlow's picture

Enne>Isn’t this what happens in learning to read, where letters make up a word?

And aren't we learning 'till the day we stop reading? It would be naïve, in my opinion, to put any other façade on it.

John>But what we call things is not the question.

Good, then I'll do a study where the definition of dressed is a toe ring, monitor the vital signs of my subjects observing a dressed and undressed model, and not understand why there is no vital sign delta in my subjects.

At least we can all have a good time.

>But I wouldn’t assume that this is the case or...

On normal adrenaline it is seven. Why do CAPITAL LETTERS so annoy us?

We have a preprogram, a font program, a glyph program (which only runs when the decomposer fails), and a decomposer.

So, all caps bother us here, because the preprogram determines what here is, and thinks we're reading shouting.

They don't bother us in signage because the preprogram recognizes them as normal, the font program makes them useful, and, as there are never too many for us to decompose with ease, the glyph program rarely runs.

They bother us in long passages of text because, after a while, they disturb the decomposer, which is 'used' to more varied tops and bottoms in the letterforms than all caps provide. I think we can read more caps in a heightened state, and fewer in a depressed state, but that'll have to wait 'till I combine my reading experiment with my dressing experiment.

We are using different perceptual and cognitive functions because type (and dressed) is not one thing. But I don't think 'people' have much trouble with anything but the most generally imagined use of that word, do they? Do we need to improve the way people read eye charts? Mall signs? Headlines? James & Co. have taken care of road signs.

The last frontier seems to be ye olde arbitrary computer screen and long small stuff. ;)

Cheers!

John Hudson's picture

David: They don’t bother us in signage because the preprogram recognizes them as normal, the font program makes them useful and as there are never too many for us to decompose with ease, the glyph program rarely runs.

But Kevin would say that the glyph program is running all the time, that the glyph program is, in fact, the decoder, that there is no word recognition without letter recognition. And he'll cite the studies that back up this position. And you'll say that none of those studies involve ‘real’ reading. And then we'll go round again and everyone remember to wave at Mummy and Daddy.

John Hudson's picture

David: We are using different perceptual and cognitive functions because type, is not one thing.

Or are we using the same perceptual and cognitive functions that happen to be highly adaptive? The latter seems much more likely to me.

enne_son's picture

I'll be cryptic: local combination detection (Laurent Cohen and Stanislas Dehaene) for role-unit-level features (i.e., aspects of the glyph program) is running all the time, but so is the global positioning device for these features. This allows a single-tiered integrative gathering in which incipient recognitions for letters are impeded. Matrix resonance does the rest.

Kevin's boundary and moving window studies imply, under a more circumspect interpretation, that outside the uncrowded window (Denis Pelli), where crowding destroys letter identification at normal typographic spacing, accurate ensemble statistics (Patrick Cavanagh) are being compiled.

dberlow's picture

>Kevin would say that the glyph program is running all the time

Was it running all the time when Gutenberg was composing with more than 250 l.c. glyphs?

Then when did it start?

Cheers!

Kevin Larson's picture

> Was it running all the time when Gutenberg was composing with more than 250 l.c. glyphs?

Yes. Our visual system developed long before Gutenberg. Our visual system simplifies the task of recognizing complex objects by recognizing simpler parts and building those parts up into a whole. When we look at a human face, we don’t look at a whole Gestalt, but look separately at eyes, nose, mouth, and other contours, and use those parts to recognize a face.

It is a whole lot easier for the visual system to identify 250 glyphs than a whole Gestalt for each of the 50,000 words that the average college-educated adult knows.
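
A back-of-envelope comparison, with my own rough numbers rather than anything from a study, of what each strategy would demand:

# Illustrative arithmetic only: templates and bits needed under each strategy.
import math

glyphs = 250        # distinct letterforms in a Gutenberg-sized repertoire
words = 50_000      # vocabulary of an average college-educated adult

print("bits to pick one glyph:", round(math.log2(glyphs), 1))   # ~8.0
print("bits to pick one word: ", round(math.log2(words), 1))    # ~15.6
# Letter-by-letter recognition reuses the same 250 templates for every word;
# whole-word Gestalts would need 50,000 separate templates.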

William Berkson's picture

Just a word on the reading words vs reading lines--and multiple lines--question. It seems to me quite possible that the preview of upcoming letters in the parafovea helps first in planning the next saccade, and even sets up a template for further interpretation when the eye moves, and the word comes into the area of the fovea. If either of these is operating, then other factors than simply what makes a word readable will be operating in reading lines. For sure word spacing is important, and maybe extenders enable better pre-identification in the parafovea, speeding up interpretation when we look directly at the word. And in reading multiple lines, for sure leading is important.

Methodologically, the key thing is not to assume that one theory is right when conflicting theories are consistent with evidence. Good scientific method would involve inventing tests to see which of the conflicting theories holds up to observation.

enne_son's picture

It hasn't been possible so far for me to give my intuitions about processing -- or the proposals of Edmund Burke Huey, the early pre-Goodmanian Frank Smith, and the early Neal F. Johnson -- the elaboration that would allow them to be perceived as consistent with the bulk of the evidence.

John Hudson's picture

David: Was it running all the time when Gutenberg was composing with more than 250 l.c. glyphs?

Of course. It's been running ever since the first marks associated with language were scratched in clay or daubed on a wall. This is what we do: recognise variable forms of letters as letters. Where we run into trouble isn't when the glyphs vary -- as the entire history of writing and most of the history of typography clearly demonstrates -- but when spacing and texture are messed up, i.e. when we have difficulty making the step up from letters to words. This is why I am unimpressed when people squawk about sub-pixel positioning resulting in different renderings of the same letter; yeah, so what?

enne_son's picture

Don't different renderings mess up feature or role-unit spacing and texture above all?

John Hudson's picture

Spacing no -- the whole point of sub-pixel positioning is to improve spacing -- but texture sometimes, if the rendering mechanism can't maintain reasonably consistent stroke density. This is the key difference between ClearType's colour filtering, which seeks to retain as high a stroke density as possible for better shape/ground contrast, and full fuzz rendering which allows thin strokes to grey out. But stroke density is an issue in all antialiasing, regardless of subpixel positioning or the consistency of the rendering of glyphs: glyphs antialiased on full pixel widths may be consistently rendered badly.
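
To make the stroke density point concrete, a toy illustration (not a model of any particular rasteriser): under plain greyscale antialiasing, a one-pixel-wide stem whose edge lands at a fractional position splits its ink across two pixel columns, so it renders as two lighter greys instead of one solid black.

# Toy coverage calculation for a 1-pixel-wide vertical stem.
import math

def column_coverage(stem_left, stem_width=1.0):
    # ink coverage of the two pixel columns the stem can touch
    first = math.floor(stem_left)
    cov = []
    for col in (first, first + 1):
        overlap = max(0.0, min(stem_left + stem_width, col + 1) - max(stem_left, col))
        cov.append(overlap)
    return cov

print(column_coverage(3.0))   # [1.0, 0.0] -> one solid black column
print(column_coverage(3.5))   # [0.5, 0.5] -> two half-grey columns; the stem greys out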

My point is that reproduction of identical letter shapes has never, ever been a requirement of reading. If it had been, no pre-typographic writing would have been readable, none of our handwriting would be readable, and the vast majority of typography itself would be unreadable. The exact reproduction of identical shapes that briefly became possible with full-pixel width aliased and antialiased text on screen is, in the history of writing and reading, freakish.

I reckon a certain amount of biodiversity in letterforms is probably a very good thing, so long as that diversity does not disrupt spatial frequency consistency and spacing. As with other kinds of diversity, it makes us adaptable. Imagine a person who has reached maturity only ever having read a single typeface and always with each letter perfectly consistently reproduced. Would he be able to read anything else? Would he have developed the kind of adaptability that allows us to read so widely across so many different conditions?

dberlow's picture

>...the whole point of sub-pixel positioning is to improve spacing...

...relative to the print metrics, not the screen metrics.

>... reproduction of identical letter shapes has never, ever been a requirement of reading...

We are so different, I doubt you will ever get it. Styles of writing developed so strictly that people were once flayed alive for departing from the exactitude of each letter.

>The exact reproduction of identical shapes that briefly became possible with full-pixel width aliased and antialiased text on screen is, in the history of writing and reading, freakish.

And is that true of all print of satisfactory resolution -- freakish? You're going to say the letters there are not identical. I am going to say the intent is identical.

>When we look at a human face, we don’t look at a whole Gestalt,

Not even our parents?

Cheers!

enne_son's picture

Yes, the visual system abstracts, but to abstract efficiently requires that information is properly phase-aligned, the criterial features have properly coordinated salience and a weight appropriate to their relative cue-value.

Also, reduction in resolution (or a reinterpretation on a differently calibrated monitor) removes the specifics that contribute to the gestural-atmospheric individuality -- the distinctiveness of the personality -- of the font.

I sympathize with David's luminous non-alignment.

Chris Dean's picture

@ Enneson: "…the proposals of Edmund Burke Huey, the early pre-goodmanian Frank Smith, and the early Neal F. Johnson…"

To which proposals are you referring?

For the thread, I came across an interesting biography on Edmund Burke Huey.

enne_son's picture

Edmund Burke Huey [1908] proposed that ‘simultaneous co-activation of determining letter parts’ sparked recognition. He further hypothesized an ‘inhibition of incipient recognitions for letters’ during normal word reading by skilled readers of continuous text.

Frank Smith [PhD. dissertation / 1967] proposed that a word is identified by the distribution of features across its entire configuration. This is an ongoing theme in his articles and books.

Neal F. Johnson [mid-to-late 1970s and '80s on into the 1990s] proposed that words are single-unit patterns, and that the functional components of word patterns are features rather than letters. He echoes Huey's hypothesis of an ‘inhibition of incipient recognitions for letters.’

John Hudson's picture

David: Styles of writing developed so strictly people were flayed alive once for departing from the exactitude of each letter.

Sure, precise formal styles of writing in some cultures were based on exactitude, and the writing manuals and exemplars of those cultures are endlessly reproduced in books on calligraphy. But that neither implies that all the writing in those cultures was so exact nor that the exactitude was a requirement of reading. Nor does it imply that reading was more difficult in less exact styles of writing, or that cultures that were less obsessive were non-literate or even less literate than those with a highly exact calligraphic aesthetic. And aesthetic is the key word here: exactitude in the reproduction of letters is a stylistic decision, not a functional requirement.

Further, Ottoman writing of Arabic script was among the most obsessively exact of all scribal traditions, yet the styles in which these scribes worked involve extra variant forms of almost every letter, such that the same text written in the same style by a dozen different scribes will look very different, despite the correctness of the form of individual letters within the bounds of that style. Exactitude does not imply uniformity and, again, nowhere is uniformity a requirement for reading.

You’re going to say letters there are not identical. I am going to say the intent is identical.

Who cares about intent? What we're talking about is what is necessary or not necessary for reading. The fact that some people would like to make every instance of a letter identical does not mean that this is a requirement for reading.

John Hudson's picture

Peter: Yes, the visual system abstracts, but to abstract efficiently requires that information is properly phase-aligned, the criterial features have properly coordinated salience and a weight appropriate to their relative cue-value.

But this is almost tautological, because the best test we have for whether ‘information is properly phase-aligned, the criterial features have properly coordinated salience and a weight appropriate to their relative cue-value’ is whether someone can read the result. What we don't have is a functional understanding of what ‘properly’ means in these contexts. We're back to ‘in order to be read, text has to be good enough to be read’.

enne_son's picture

John, the point is that different renderings disturb phase alignments, relative weights and the rhythmic coherence of salient whites.

I can read the results, but I suspect these disturbances have a computational efficiency cost.

John Hudson's picture

Peter, just to be clear, this is the sort of thing we're talking about:

As you can see from the enlargement, the subpixel positioning does result in different colour values being used to render vertical stems at different positions on the line. But can you really say, looking at the actual size text in the image, that phase alignment is disturbed, weight varies or that the rhythm of the salient whites is incoherent?

Rather, I'd say that this level of rendering variation is much less than encountered in most of the printed typography of the past 550 years and certainly less than encountered in most smaller manuscript writing. If there is a computational efficiency cost, I would be surprised if it is significant relative to readability.

David: ...relative to the print metrics, not the screen metrics.

In practice, yes. Of course, it shouldn't need to be that way; subpixel widths could be addressable via hints. [Insert usual complaints about the fact that they are not.] But on a typical low-resolution 96 dpi screen, subpixel positioning gives you something approaching the spacing refinement of a 300 dpi printer: far from perfect, but a heck of a lot less crude than whole pixel screen metrics. I guess the question is ‘Where is the dividing line between metrics appropriate for print and metrics appropriate for screen?’ It is, of course, a line that shifts relative to type size, resolution and individual typeface, which is why I don't think 300 dpi virtual resolution justifies a general move toward print metrics on screen without providing a mechanism to hint subpixel widths. On the other hand, for a lot of fonts at a lot of sizes on a lot of screens it is going to provide better spacing than rounding to full pixel widths.
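
The 'approaching 300 dpi' figure is just the arithmetic of three addressable subpixels per pixel, assuming a standard RGB-striped panel:

# Effective horizontal addressability with subpixel positioning (assumed RGB stripe).
screen_dpi = 96
subpixels_per_pixel = 3
print(screen_dpi * subpixels_per_pixel)   # 288, close to a 300 dpi printer's spacing grid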

John Hudson's picture

A comparison that might be helpful, Peter: top, CT rendering on whole pixel widths; bottom, CT rendering on subpixel widths.


dberlow's picture

John,

If you want to compare the quality of the last 500 years, and not the intent, or compare quality in aliased vs. antialiased, I'll have nothing to do with it.

On your excellent illustrations:

The difference in gamma is startling. My Mac uses the additional size-on-em of Verdana, the increased resolution of 96 dpi, and antialiased text rendering to get to an approximation of the print (i.e. resolution-independent) image of that font, vs. your Windows Vista (?) rendering, which seems much closer to OS 9 and pre-ClearType XP, i.e. the aliased rendering of past decades.

... and in the sub-pixel vs. pixel spacing example, which I 'held' to be self-evident, did you make that yourself, or did a computer do it?

>But can you really say,

We'll see.

>... looking at the actual size text in the image,

16-17 px?

>...that phase alignment is disturbed,

Only very slightly, though, in these highly specialized fonts you show, at the size you show...
(Others: be sure to try and Read the big l.c. glyphs down, not across, to See what he means, a little).

>...weight varies or that the rhythm of the salient whites is incoherent?

Did anyone say incoherent? With all due respect, and to credit all aliased-to-antialiased OS leapers, saccade hiccups are not causing incoherence. Users are by and large using specialized fonts, like the ones you show, for everything except long reads.

More sizes, more fonts, shown cross-platform, and it's not so neat, before the long reads. Not that what you show is neat. The heart of the little web letter is, in my opinion, no longer displayable or testable via an instance. It's barely even discussable.

Cheers!

John Hudson's picture

David, your description of the difference between the Mac and Vista renderings seems to me fair, but I'm not sure what your opinion is, or if you implied one. What seems remarkable to me about the Mac rendering is the fuzziness of the horizontals and the inconsistency of the stroke density. When I see sequences like the 'eme' in elementary, I have to wonder what effect this kind of stroke inconsistency has on spatial frequency tuning.

The illustrations show Meiryo, not Verdana. I specifically chose this because Matthew took the Meiryo Latin off the pixel grid of Verdana, making it a good candidate to demonstrate subpixel rendering and spacing as applied to a font that was not specifically designed to a whole pixel grid but which has many good features of a type for screen. The comparison was prepared in a test tool from MS that allows me to simulate different versions and settings of the CT renderer. The whole pixel positioning rendering is, I believe, the same as the Vista rendering; the subpixel positioning rendering is akin to WPF minus y-direction AA.

Did anyone say incoherent?

Peter suggested that variant renderings, among other things, ‘disturb ... the rhythmic coherence of salient whites’.

The heart of the little web letter is, in my opinion, no longer displayable or testable via an instance. It’s barely even discussable.

I agree. But I wasn't testing or discussing ‘the heart of the little web letter’; I was demonstrating for Peter that the level of rendering variation produced by subpixel positioning does not -- not necessarily at any rate -- result in the kind of disturbance with which he is concerned. I demonstrate this by comparing two settings of the same font that differ only in whether they are whole or subpixel positioned, i.e. I isolate this difference for comparative purposes.

Where comparison with other platforms would now be of interest is in determining what features of a rendering system are necessary in order to maintain good phase alignments, relative weights and rhythmic coherence of salient whites, to adopt Peter's criteria. I'm quite sure that subpixel positioning can result in rendering variations that disturb all these things in a system that fails in these other features; the trick would be to isolate this effect from other problems in the rendering system. Take the Mac rendering you show above, for example: I would argue that, despite the consistency of the rendering of the individual glyphs, the inherent inconsistency between the letters due to the full-fuzz rendering, which results in stroke density degradation, is already disturbing all the things about which Peter is concerned, and trying to isolate the impact of subpixel positioning on top of such rendering would be more difficult than testing on top of a system that sucks less.

dberlow's picture

>I specifically chose this because Matthew took the Meiryo Latin off the pixel grid of Verdana...applied to a font that was not specifically designed to a whole pixel grid...

I rendered Verdana at 12.6 point on the Mac, and it looked size-wise pretty close to whatever you rendered, but I'm flattened by my error. Nevertheless, though it does round easily to 11 ppm, Verdana is not designed to any whole pixel grid except 2048, and that's a fact. I'm pretty sure going to another version of Verdana will demonstrate nothing, unless one looks at a range of sizes to see the problem web designers and users face on this issue. A size of a font made for the purpose is not what's about to be unleashed now, is it?

I completely agree that building readable web fonts on top of a web system that sucks is a problem. If you look at the whole web system problem, however, and at what's ever likely to be solved, making fonts, good phase alignments, relative weights and rhythmic coherence of salient whites for web sites on the Mac sucks way considerably less than making these things on Windows. You can ask anyone, but that's my opinion having done it several times. In fact, the opinion popped up that MS should be embarrassed to the gamma.

I don't agree and I tell them why, but again, the unleashing is here and the performance of Verdana/Meiryo is not enough to counteract the onrushing epidemic of saccade hiccups — no more than twiddling half pixels randomly as you show is going to make words according to St. Adrian. ;)

Cheers!

Chris Dean's picture

Mewhort, D. J. K., & Johns, E. E. (1988). Some tests of the interactive-activation model for word identification. Psychological Research, 50(3), 135–147. doi:10.1007/BF00310174

John Hudson's picture

David: ...making fonts, good phase alignments, relative weights and rhythmic coherence of salient whites for web sites on the Mac sucks way considerably less than making these things on Windows.

Even considering this?

We're talking here about a font that was designed and produced specifically for on screen reading. And this is what the Mac does to it. Where's the crossbar?

What I do see in your comparison is that the spacing of the Mac example is very similar to the spacing of the CT with sub-pixel positioning, and that both these are superior to the spacing of the CT on whole pixel widths. Look especially at left-side bowls following tall verticals as in the le in ‘schooled’ and elsewhere. In the CT on whole pixel widths example the spacing of these is always too tight and out of rhythm with the other spacing (so much for salient whites). In the CT with sub-pixel positioning example, the spacing of these shapes is fractionally wider, and is consistent with the rest of the spacing. This is also the case in your Mac example, but this is where comparing Verdana with Meiryo is misleading, since the spacing of Verdana rounds better to the whole pixel width; I'd be intrigued to see Meiryo rendered on the Mac.

mike_duggan's picture

john: I’d be intrigued to see Meiryo rendered on the Mac.

If you are running Windows and have Meiryo, you can install Safari and get the same rendering as if you were on a Mac.

stormbind's picture

I would like to point out that, not so long ago, capital letters were the staple of electronic communication. Furthermore, snapshots of the past still appear friendly.

Drawing on this historical evidence, I tentatively suggest that word shape is not responsible for the aggressive tone discussed in the opening article - or at the very least, not entirely responsible.

I am not an expert, but do you think it's the spacing between blocks of text that makes a difference? For example, the attached image shows numerous shades of colour that are fairly evenly distributed on the page. In contrast, modern day capital letter 'shouting' is not carefully arranged on a screen.

In other words, perhaps it is the total screen composition that offends some people? :)

dberlow's picture

John quoting David: ...making fonts, good phase alignments, relative weights and rhythmic coherence of salient whites for web sites on the Mac sucks way considerably less than making these things on Windows.

John> Even considering this?

Yes, even considering that e, John. I was showing you rendering, not scaling.

Maybe.... you should do some work in this area, instead of not thinking with your eyes.

Below, we selected a font nearly randomly (a client chose it) and did the easiest thing possible to prepare it for use as large text, small display and large display on the web (we did no hinting). We linked to that same font, in its two (too) required formats, from IE on Vista (left side of page) and Safari on the Mac (right), on the same hardware.

Take a stroll beyond the fonts that don't even matter. Making unique readable identities online, with good phase alignments, accurate weights and rhythmic coherence of salient whites for web sites on the Mac sucks way considerably less than making these things on Windows.

If both platforms gave me the option of controlling both dimensions with hints, I'd have another opinion for you. But since they don't, and phase alignment, accurate weight and rhythmic coherence of salient whites are denied in the x dimension on both, then gimme good gamma on both and let it be. John, we're talkin' billions of users and thousands of fonts. If the scaling wars have ended with the trenches where they'll always be, I give up; you can blow the whistle on some detail of scaling some more if you like, but I think we are all still sitting in the middle of a font tech war zone, moving on to rendering, with no benefit to us, the reader, or... the commerce that it now constrains.

Cheers.

John Hudson's picture

Can I see the real-size version of that comparison, David?

Fonts with no hinting... yeah, they look better on Mac than on Windows. But fonts with hinting are going to look just the same on the Mac, while having the potential to look better on Windows. So what you appear to be saying is that making fonts look good on the Mac is easier and cheaper because you don't have to hint them. I guess that's one definition of ‘sucks less’. Sure, if a system ignores all hints, shipping fonts without hints is easier and cheaper. The quality issue in this case -- as distinct from the ease and cost issue -- is what Windows does when it encounters a font without any hints. What it does at the moment sucks. What it should do is recognise the absence of hints and apply a Mac-type rendering. Karsten and I were giving Greg Hitchcock grief about this recently, and he agreed that something like this would be the better approach.

As you say, we're in the middle of it. Which is why talking is worthwhile.

dberlow's picture

>As you say, we’re in the middle of it. Which is why talking is worthwhile.

I was going to say, we are not going in circles, as you suggested. Many, many people are gathering, and a circle is forming around these issues.
UNHINTED ITC FRANKLIN BOLD ON WINDOWS


UNHINTED ITC FRANKLIN BOLD ON MAC

Feast your eyes.

Cheers!

dberlow's picture

P.S. I should add that I tried very hard to get MS to move off this spot.

>what Window[s]... should do is recognise the absence of hints and apply a Mac-type rendering.

It was my hope to move MS out of their rabbit hole of quality, so they would not have to keep following Apple. This might jolt Apple when they see the quality firms like ours produce on Windows. Remember, I like Windows rendering, with hints, better than the Mac 'with hints'. I like FreeType most of all, regardless of rendering. Make an instctrl setting for CT that behaves well with ALL HINTS; that is my five-year-old advice.

Cheers!

John Hudson's picture

David, is that Windows rendering example ClearType? The colouring doesn't look at all typical.

dberlow's picture

ClearType. When security improves, I'll give you the URL; you can color for yourself.

Cheers!

John Hudson's picture

So there are two problems with the rendering of the unhinted font on Windows, as shown in your comparison.

One problem is specific to the absence of hints: inconsistent y-direction distances at smaller sizes and drop-out at the smallest size; also nicely documented by Karsten and the reason he and I were pestering Greg about this. [I have to admit that fonts completely without hints came as a surprise to me: as long as some rasterisers continue to observe and apply some hints in some way, I can't really imagine shipping a font without any hints at all. That would seem to be asking for trouble, but we'll take it as read for now that such fonts exist and Windows could do a better job when it encounters them.]

The second problem is specific to the GDI ClearType rasteriser: absence of any y-direction antialiasing. The obvious answer to that is to replace the GDI rasteriser with the WPF rasteriser, which applies greyscale y-direction AA in combination with x-direction CT, for what I genuinely think is the best rendering for larger type I have seen anywhere. Unfortunately, Microsoft didn't take the obvious answer, and instead suggests the less obvious 'Re-write GDI applications for WPF'.

David, if you are willing to send me the unhinted ITC Franklin Bold, I can provide a screen shot of the rendering with y-direction AA enabled. I think this would be a useful comparison with the images above.

fatjellypenguin's picture

BECAUSE THEY SOUND ANGRYYY!!!

gives me a headache just looking at them on the page

...ever wonder if illiterate people get the full effects of alphabet soup???
