Help on dissertation, Legibility versus readability

daniel_hall's picture

Hello everyone, I am a graphic design student in my third and final year, currently writing my dissertation.
My dissertation is on the current trend for hand-rendered typography over more traditional typefaces, looking into how well this style communicates.
I hear there is an article in RayGun, designed by Mr Carson, which was set entirely in Webdings. I have been searching for this article for a long time now, and wonder if anyone could tell me whether it does indeed exist and, if so, in which edition.
This would be most helpful in one of my chapters on how the motivation of the audience helps in determining the readability of a piece.
Any help would be gratefully received.

John Hudson's picture

Point of information: while some people, especially native English-speaking typographers it seems, have taken to distinguishing between legibility and readability, this seems an anomaly, both professionally and linguistically. As I discovered when moderating the legibility panel discussion in Thessaloniki this summer, cognitive psychologists studying reading do not generally make a distinction between legibility and readability. If they do make a distinction, it is not the same as that made by typographers. So while we, as typographers, might use 'legibility' to refer to letter decipherment and 'readability' to refer to word recognition, a cognitive psychologist might make no distinction, or might reserve the term readability to refer to document-level phenomena such as page layout, structure and even prose style. Many languages do not offer even a vocabulary distinction for legibility versus readability.

I think it would be a good thing for typographers to recognise that the distinction we've taken to making between legibility and readability is really begging the question: the distinction itself assumes that letter decipherment and word recognition are somehow separate and only the latter constitutes reading. No cognitive scientist is going to accept such a loaded distinction, since empirical studies are at best inconclusive and tend, in fact, to link letter recognition and word recognition very closely.

union's picture

Hi Daniel

The RayGun article you mentioned was published at least eight years ago (so I am not sure how relevant it is to current design trends). It seemed a pretty pointless exercise at the time, speaking as a reader of the magazine.

But that article in RayGun is probably a good example of why legible type is in no danger of being written off. Ask yourself why the whole magazine was not set in Webdings, every month!

Because, as a consumer, you would go up to the magazine stand and say: I am not paying $8 for some pictures and some text that doesn't mean anything, even if it is designed by David Carson.

Design is all about communication, and while a hard-to-read display font or a piece of hand lettering may be funky for a magazine headline, the body text has to communicate clearly with the reader.

I'd suggest that you research the other side of the argument too. For example: if distressed-looking type is so popular and trendy, why are the classic fonts the best sellers (looking at sales figures)? Why are companies like Sea Design ( http://www.seadesign.co.uk ) and Browns ( http://www.brownsdesign.co.uk ) in design magazines every month, and winning all the top awards (in the UK, where I see you are), with clean Helvetica-based work if this style isn't popular? And why are newspaper typefaces so conservative if readers love hard-to-read text?

Please keep us posted on your progress, as I am sure lots of us would love to read the finished thing.

Jim

hrant's picture

Another Hrant FAQ entry, it seems...

> cognitive psychologists studying reading do not generally
> make a distinction between legibility and readability.

And that of course is the biggest clue that they don't know what they're talking about! :-)

It's notable that the cognitive psychologists in Thessaloniki did nothing to change the mind of Richard Southall, a person who is certainly not anti-empirical. The bulk of empirical research into readability is essentially flawed. If the scientists want to make a dent in reality, they have to explain various things, such as the importance of serifs, tight letterspacing, etc. They have not. And if they keep following this course, they never will.

The difference between readability and legibility parallels the difference between a type designer who makes real text fonts and a designer who is simply very good at faking it.

hhp

union's picture

Today I had an eye test, and though I expected to need glasses, I was told that I had excellent vision. I wondered if the fact that I spend a lot of time looking at letterforms made it easy for me to recognise the blurred letters on the chart, and made for an inaccurate assessment.

I guess what I am getting at, is that legibility and readability differ from person to person....

dewitt's picture

Jim

That's the funniest thing I've heard all day.

If only, in the days before Lasik, this information had been available to all those boys and girls who would have chopped their arms off to avoid wearing the dreaded ocular devices. You'd have had to dissuade children from becoming typographers, there would have been such a mountain of them.

Daniel

I'd like to suggest focusing on the differences between the way we write and what we read. Most of us are taught not to write the two-story 'a', etc. Cursive hands are even more extreme in their differences.

If what we read affects our ability to read, then how we write must also have some bearing. Typography has been a largely male-dominated business, but women--in my experience--have much better handwriting. Of course that doesn't have much to do with why it's male dominated, but does the ability to write well influence reading? If you have horrible handwriting, it stands to reason that you'd have an easier time reading badly kerned or misshapen type.

Just something I've been pondering. I don't know of any studies.

shylodog's picture

Too much "mouth" is spent on this topic.

Si_Daniels's picture

>My dissertation is on the current trend of hand rendered typography over more traditional typefaces ...

Is this really a 'trend' - if anything I think the trend is in the other direction.

Si

hrant's picture

They have modems on dinghies now?

BTW, just to clarify something before anybody throws a hissy-fit:
When I say the scientists "don't know what they're talking about", I'm referring to immersive reading specifically. They know plenty about legibility, certainly more than me. But the problem is they [generally] don't see the other half of the world we live in.

hhp

shylodog's picture

Simon Daniels,

Certainly, you would be right. Little is left of hand-rendered anything. Anyone who would consider it is an outcast from this circle in any event.

William Berkson's picture

>the distinction itself assumes that letter decipherment and word recognition are somehow separate

John, it seems to me pretty obvious that they are separate to some degree. I have seen (in a book whose title I forget) how you can slow down recognition of words set in perfectly legible characters by giving them wildly irregular letter spacing. I would expect also that you could easily slow reading speed with extremes of leading and line length, again with legible characters.

The question of how much serifs help, etc., is likely a more difficult problem to test, but I agree with Hrant that the distinction is not only valid but pretty obviously so.

F or ex a mp l e yo u w il l n ot re a d t his orwillyoureadthisphrase as quickly as this.

John Hudson's picture

William, the model of word recognition that Kevin Larson describes is one in which word-identity is guessed at in the brain based on all available information and, particularly, the simultaneous identification of the letters in the word. Obviously, if you fragment words by excessive or inconsistent letterspacing, then you break whatever model of word recognition you are working with, but this doesn't imply that the letter-recognition -> word-recognition model is inadequate, because obviously the key to word-recognition would be letters in a word. If you can't easily tell where a word begins or ends, letter recognition isn't going to function as it should as part of word recognition.

Studies on the phenomenon of lateral masking on letter recognition are interesting in this regard, because they suggest how letter spacing -- and form -- affect the ability to recognise individual letters. The basic phenomenon is that letters in the parafovea are easier to recognise in isolation than when sandwiched between other letters; the effect of the other letters is called lateral masking. What is interesting is that there is a totally unintuitive improvement in letter recognition when parts of the letters actually touch, which Peter Enneson, for one, thinks may contribute to an understanding of the value of serifs, as elements that make connections between letters.

One of the results of the legibility panel in Thessaloniki is that Kevin is now very interested in the matter of letter spacing, and is beginning to test specifically in this area. I've talked with him a little bit about his initial results, but I don't want to steal his thunder so will let him make his findings public when he is ready.

William Berkson's picture

> this doesn't imply that the letter-recognition -> word-recognition model is inadequate

What kind of relationship does your arrow indicate? You are conceding that there is more to readability than letter recognition. That establishes the distinction. The question is, then, how important is the distinction?

The model of parallel processing sounds right, but it does not contradict the importance of additional factors in reading beyond the legibility of individual letters. For example, the masking effects you refer to and the letter spacing both have to do with how we recognize words: how words are recognized apart from simply being able to identify individual letters quickly.

So the new research you cite - and it looks like Larson has learned of the importance of new problems from type designers - seems to me to support the importance of the distinction that typographers have long made.

shylodog's picture

William,

Your contributions to the Typophiles are, I am sure, recognized by more than just myself.

My wishes for you are the very best.

Gerald Giampa

hrant's picture

> obviously the key to word-recognition would be letters in a word.

But not individually, not necessarily, not immersively. Obviously.

BTW, I think you're overplaying the lateral masking issue. Not least because it actually seems to help parafoveal recognition sometimes!

As for Kevin's research "evolution", it certainly is encouraging that he's open to suggestions from practitioners. However this can instead turn into [further] false assurance that the letterwise model is valid if he never even approaches deep immersive reading, for example by using deficient test subjects, or Arial, or... ClearType! :-/

As long as you're not promoting deep immersion, all the data will still point to letterwise decipherment; because in that realm (slow, fovea-centric, shallow immersive) the letterwise model is indeed totally valid. Just not for real reading.

When Kevin asked Richard Southall if he would change his practice if his tests showed that tight letterspacing is irrelevant, he candidly said No. And I'm with him. And so are many others. But not because we reject empiricism; it's because we reject deficient instances of empiricism, or at least false interpretations of data. It's very hard to measure real reading, in a wise typographic context.

hhp

hrant's picture

Daniel, about the Carson thing, here's something:
That dingbats article certainly does exist. Over the years there have been many explanations, some romantic, some pedestrian, of why it happened. I've heard two that are believable: one, that Carson didn't agree with the content, so he made it unreadable; the other, that the text they had was only a first draft, they never got the final version in time, and they didn't have time to make an intelligent decision (it's hard for Patrick, after all), so he made it unreadable. The stuff about making a point about legibility seems like typical post-rationalized designer claptrap to me.

hhp

rjohnston's picture

Re Carson: -- <a href="http://www.amazon.co.uk/exec/obidos/ASIN/1856693511/qid=1102498136/sr=2-2/ref=sr_2_11_2/026-9574742-8306864" target="_blank">20th Century Type</a> has a reproduction of a spread from the Dingbats article (p153 in my edition). It's an interview with Bryan Ferry, and according to the notes the use of unreadable Dingbats is "Carson's reaction to the dull text".

R

rjohnston's picture

Sorry, don't know what's going on with that link. But it's kind of appropriate to have a bunch of garble, given the subject matter ...

R

John Hudson's picture

> What kind of relationship does your arrow indicate? You are conceding that there is more to readability than letter recognition. That establishes the distinction. The question is, then, how important is the distinction?

The arrow indicates the role of letter recognition in word recognition, according to the parallel letter recognition model: letter recognition underlies word recognition. So, yes, there is a distinction between letter recognition and word recognition, but proponents of the parallel letter recognition model would consider this a distinction of a part from a whole. The way the terms legibility and readability are used by many typographers suggests an unproven distinction of kind: that letter recognition and word recognition are separate activities.

John Hudson's picture

Carson spoke about the dingbat-set article at the ATypI conference in The Hague in 1996. He said that the article was crap.

Erik Spiekermann, who was standing next to me at the time, voiced my own view: Carson's attitude reversed the traditional role of the typographer as servant of the text. The text was now the servant of the ego of the designer. Of course, it was an untenable reversal, and like all whom the gods would bring low, Carson was first made fashionable.

hrant's picture

> an unproven distinction

Unfortunately for the formalists and scientists -but quite fortunately for most humans- people don't make their decisions (especially not the important ones) based [mostly] on lab results.

Letterwise decipherment makes sense only in very shallow reading. Boumas make sense in the big picture, when performance matters. It's really that simple, and no amount of bad science (or at least badly interpreted science) is going to change that, because humans are not algorithmic.

hhp

John Hudson's picture

Hrant, I'm just reporting, as correctly as I'm able, the opposing viewpoint. I'm not concerned with how people make their decisions, but I am concerned that people make assumptions and encode those assumptions into their language. Making a distinction between legibility and readability along the lines we have in recent years now seems to me presumptuous. Sure, it may be useful and it may even turn out to be correct, but you can't expect any scientist to accept such an a priori assumption. Science is slow and takes a long time to come to a conclusion. Jumping to conclusions is obviously quicker. When we have to select a typeface for a project and decide whether it is appropriately spaced, we must jump to a conclusion, based on intuition, experience, etc. But this is a different process from accurately determining how we read. I wouldn't base a decision about typeface use on a partial empirical understanding of how we read, but nor would I base a theory of how we read on intuition and anecdotal typographic evidence.

> ...humans are not algorithmic

This seems to relate to something I said to Peter Enneson in Thessaloniki. The parallel letter recognition model seems to offend some people because it seems algorithmic; certainly, it doesn't seem holistic. And we like to think that we're oh so holistic, because that is how we experience thought. But all evidence of brain activity points to a crude, algorithmic device that gets by on sheer processing power: leaving us with the uncomfortable notion that what we experience as holistic may actually be the product of a massively complex algorithmic machine. The holistic appearance of thought may, in fact, be a function of the complexity. This is to what I was referring in my chess game metaphor at Thessaloniki: the experience of reading differs from the mechanics of reading. This raises all sorts of interesting questions. To what degree is the experience of reading directly addressable via the mechanics of reading?

Imagine, for example, that you changed the rules of chess to insert an element of chance into the game. For instance, the players would flip a coin that would determine which of them would miss his third move. Suddenly, chess would no longer be a function: it would become a game. But since we don't experience chess as a function anyway -- we experience it as a game --, such mechanical tinkering doesn't significantly affect our experience.

Likewise, it seems to me that whether one accepts the parallel letter recognition model or Hrant's bouma theory of reading -- or produces evidence of some other mechanism -- it isn't obvious that this choice, or the type design practices arising from it, will significantly affect the experience of reading. And it is this, I think, that Richard Southall was acknowledging. Type designers and typographers are predominantly interested in the experience of reading. So when Richard told Kevin that he would not necessarily adapt his design practices to empirical evidence suggesting that loose letterspacing improved readability, one must also assume that he would not necessarily adapt his design practices to Hrant's recommendations based upon the bouma theory.

John Hudson's picture

By the way, Hrant, I think your continued use of the phrase 'letterwise decipherment' is unfair, since it is not an accurate representation of the parallel letter recognition model, and is too easily confused with the long-discredited sequential letter recognition model.

Actually, I think the term 'decipherment' should be dropped entirely, because it isn't even accurate applied to sequential letter recognition.

William Berkson's picture

>The way the terms legibility and readability are used by many typographers suggests an unproven distinction of kind: that letter recognition and word recognition are separate activities.

John, you seem to be assuming that the distinction is based on the extreme view that 'bouma' or word shape is everything, which has been refuted by the examples recently discussed of mixing letters within words.

But your understanding of the usage of these terms is, I think, not correct, at least historically. For example, here is the distinction as explained in my late Uncle J. Ben Lieberman's book 'Types of Typefaces' (1967):

***
"Legibility" is based on the ease with which one letter can be told from another.
[Here follows a graphic with b's and h's, including the old style italic h, which can be confused with a b].

"Readibility" is the ease with which the eye can absorb the message and move along the line. The choice of typeface is not the only thing that determines readibility. The size of the letter, the spacing between letters, the amount of "leading" (spacing) between lines, the width of the line itself, the size of the margins around the type block, the quality of inking, the effect of the process used - including the amound of "sock" [in letterpress] the texture or finish of the paper stock, the color of the paper and ink - all these are involved, both in affecting the appearance of the particular typefaced used and in the resulting readibility.
***

I should add that Ben Lieberman was in contact with the leading lights of his day in the type world, so this definition almost certainly reflects widespread practice in the type world. So we are talking about usage that probably dates back to at least the 1930s.

This definition does not make any prejudgments about the relative importance and exact role of deciphering individual letters versus whole words. It just says that there is more to readability than quickly distinguishing individual letters.

Furthermore, it does seem to me from the discussions here on Typophile - I don't know the literature - that psychological research on these additional factors beyond letter decipherment has been rather neglected.

As irritating as I find Hrant's extreme and insulting way of putting it, I think he has a sound point that a lot of research remains to be done to enlighten us on these additional factors. I do agree with you, and not with Hrant, in thinking that in the end these results are not likely to affect normal typographic practice or speed of comprehension very much.

By the way, I would like to split readability, as defined by Lieberman, further. There is one aspect of readability which is reading speed and comprehension, and another which is comfort. I may be able to read a paragraph on screen as fast and with as much comprehension as a printed version. But it is common knowledge that it is more fatiguing to read low-resolution screens, and that if you want to proofread something, you'd better print it out. The degree to which a setting is welcoming and comfortable to read is, it seems to me, extremely important to typography, beyond legibility or readability. And it, too, could be studied scientifically.

hrant's picture

No, in the end reading speed and reading comfort are the same.
This is one important thing that Kevin and I agree on.

I'll reply to John's points in print. :-)

hhp

John Hudson's picture

> No, in the end reading speed and reading comfort are the same.

Well they're obviously not the same, since one can comfortably read slowly if one so chooses. If something is particularly well written, I often choose to read it slowly so as to relish every phrase, just as I will more slowly chew something really tasty.

Where you, Kevin and I agree, I think, is that high reading speed is possible only when reading comfortably. So one can be comfortable and read slowly, but one cannot be uncomfortable and read fast.

> I'll reply to John's points in print.

No fair! :-)

John Hudson's picture

William, thanks for the Lieberman quotes. I had not realised that the typographic distinction between legibility and readability dates back that far, but it doesn't surprise me. Yes, the distinction is widespread in the type world: I'm questioning whether it should be, and to what degree it represents an a priori assumption about how we read.

As I noted earlier, many languages do not provide vocabulary for such a distinction. When Gerard Unger writes or lectures about what we would term 'readability' in type design, according to Lieberman's definition and common typographic distinction, he calls it 'legibility' since this is the common translation of the Dutch word he uses.

And as I discovered in Thessaloniki, our friends in cognitive psychology simply do not make the distinction. The majority of what we would be obliged, following Lieberman's definition, to call readability studies, are conducted and published as legibility studies.

hrant's picture

We both think they're the same when it matters: long-duration, necessity-driven reading. And this is in fact where slight improvements in performance matter a lot, no matter that readers don't consciously notice a difference in the "experience".

You can eat anything if you put enough ketchup and pepper on it.
You can read any font when you don't mind the fatigue/delay.

hhp

John Hudson's picture

> We both think they're the same when it matters: long-duration, necessity-driven reading.

No, they are not the same. That is just sloppy language. Same means identity.

Speed and comfort are very closely related -- regardless of the kind of reading -- and one (speed) is not possible without the other (comfort). One is a necessary condition of the other. This is not the same thing as saying that speed and comfort are the same thing.

If we're going to talk about these things, let's at least try to be precise in our use of language.

William Berkson's picture

>I'm questioning whether it should be, and to what degree it represents an a priori assumption about how we read.

As a rule of thumb, when a distinction is widespread, it represents something real. People may be wrong about what exactly it represents, but I don't think language evolves distinction-without-difference concepts, except perhaps in the realm of persuasive speech, where all kinds of crazy things happen.

Furthermore, the effect of spacing on reading speed shows that quickness in identifying individual letters is not the only factor, so the distinction is valid at least that far.

>our friends in cognitive psychology simply do not make the distinction

That is not necessarily due to greater wisdom. There are competing research programs in science, and they are full of all sorts of biases about what research is going to be fruitful. It is the positivist's mistake to elevate whatever the current biases happen to be into a substitute religion.

It is healthy to have competing research programs, as this sharpens research. So it would be bad to dismiss either the assumption that word recognition is an insignificant problem or the assumption that it is of overwhelming importance. The thing to do is develop both models into testable theories, and test them against each other. That is the way we can really learn from experiment.

What words we use - "legibility," "readability" - is not important. The idea that there is more to help the reader read than clear letters is important, and worthy of scientific study.

hrant's picture

Indeed.

"In questions of science, the authority of a thousand is
not worth the humble reasoning of a single individual."
- Galileo Galilei

This is what great scientists are made of; not hordes with powerful computers.

hhp

John Hudson's picture

I may be idealistic, but what attracts me about science is the idea that science is testing hypotheses, not trying to build a case for a particular bias. So what appeals to me about the attitude of people like Kevin Larson and Mary Dyson -- who disagree with each other in a number of areas -- toward the relationship of letter recognition and word recognition is that they seem to be avoiding assumptions and want to test things. I think the interaction between the experience and anecdotal evidence of typographers and the empirical methodology of cognitive psychologists will be most valuable in determining what to test and how to test it.

William Berkson's picture

>avoiding assumptions

I agree, a beautiful thing in science is the willingness to test hypotheses, rather than just dogmatically push biases.

My late teacher, the philosopher of science Karl Popper, argued that science grows with bold hypotheses and severe empirical tests of them. He said that we cannot avoid assumptions and biases, and the scientific way to deal with them is to articulate them, develop them into testable hypotheses, and test them, to see whether they stand or fall. So I think that 'avoiding assumptions' is somewhat misleading. It is more a matter of articulating what they are, and testing them.

John Hudson's picture

Yes, that is well put.

This idea of articulating assumptions seems to me very important, hence my comments regarding the assumption that might accompany, or be embedded in, our trade distinction between legibility and readability. It is akin to Gadamer's approach to hermeneutics: acknowledging and articulating the unavoidable context of interpretation.

42ndssd's picture

Would most people consider a script font such as Zapfino legible? Readable? IMO it's not terribly readable, and I would be loath to read a book written in it. I do think it's quite legible in that individual letters and even words are easily decoded, and I am confident it would become readable with practice. (And Zapfino would be more readable than, say, P22 Cezanne, but I don't find either one illegible.)

Double ditto with Comic Sans MS, which has been used by far too many half-crazed office weasels to encode lengthy memos. It is entirely legible, but not eminently readable--especially when printed at 75dpi without antialiasing. (Despite plenty of practice I never did become entirely comfortable reading it.)

I believe there is a useful distinction between the two terms, but I also strongly agree they need to be scientifically defined. Such studies will add value to typography, if for no other reason than to better describe the limits of art versus practical usability. (And if it helps to suppress lousy office memos, I'm all for it.)

I would go so far as to say that illegibility does not preclude readability! I suspect many of us have known husbands and wives where the husband could read his wife's writing so long as he "knew what it said", but the writing was so inherently illegible that it precluded reading unfamiliar words, telephone numbers, etc.

That's an extreme example, and not everyone would agree with that particular definition; yet I feel that is still a useful distinction to be made. Other examples might include bad bitmap fonts, low-resolution LED/LCD displays, or printouts from 8-pin dot-matrix printers. All of these may be easily "decodable" (which may be a better term than legible) but not necessarily easily read. Any of these may become more readable with practice, but that's equally useful information to know.

I can imagine situations which might necessitate an unreadable but legible font, like when many similar figures are being displayed and it's important that each one is distinctly noted and checked.

So, here's a bold definition with testable hypotheses. Legibility is how quickly an individual letter or word may be decoded; readability is the amount of practice required to learn to "read at full speed" without strain, and may include a herd of relevant factors such as type size and spacing. Both of these can be measured, although the latter is enormously more difficult than the former to quantify. (Those dratted test subjects have a nasty habit of learning without telling you.)
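
To make those two measures concrete, here is a minimal sketch in Python of how they might be operationalized. Everything in it is illustrative: the data format, the 95% threshold and the numbers are my own assumptions, not values from any actual study.

    from statistics import mean

    def legibility_score(decode_times):
        """Mean time (in seconds) to decode isolated letters or words:
        lower means more legible."""
        return mean(decode_times)

    def readability_score(session_wpm, baseline_wpm, tolerance=0.95):
        """Practice sessions needed before reading speed in the test face
        reaches a fixed fraction of the reader's speed in a familiar face.
        session_wpm is a list of words-per-minute values, one per session."""
        target = tolerance * baseline_wpm
        for session, wpm in enumerate(session_wpm, start=1):
            if wpm >= target:
                return session
        return None  # never reached full speed within the study

    # A reader at 250 wpm in a familiar face, practicing a script face:
    print(readability_score([140, 180, 210, 238, 245], baseline_wpm=250))  # -> 4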

As far as I can tell, the cognitive psychologists can't even definitively state there are different modes of reading or what the distinctions are--let alone determine if glyph-level or word-level legibility is a separate issue from ease of reading.

hrant's picture

John, idealism is great, but ignoring human nature is not. Data is always open to interpretation, and how you test something necessarily determines the results. Kevin and Mary make assumptions just like any other human - they have to. It's just a different kind of assumption than people like me have to make... But at least I admit the inescapable subjectivity which results.

> most valuable in determining what to test and how to test it.

And with Kevin's venture into letterspacing we seem to be finally making a dent in the former. Critically, though, without also making a dent in the latter (which I have tried but so far failed to do), all we'll get is just more data pointing to the parallel letterwise model. :-/

--

42nd, I think you're on the right track with a lot of what you wrote. And it is possible to define legibility and readability "scientifically" (if not measure them as precisely as formalists might like). To put it simply, to me legibility relates to how easy it is to decipher individual symbols when looking deliberatively (with no real time constraint), while readability pertains to the reliability with which boumas (clusters of letters* which are sufficiently frequent and distinctive) can be quickly mapped to words, as deep as possible into the parafovea.

* Noting that the cluster can be one letter, thus nicely covering the letterwise model.

hhp

John Hudson's picture

I'm the last person to ignore human nature, Hrant. But it seems to me -- apropos of the unavoidability of interpretation -- that 'human nature' is something that you have chosen to interpret in a very particular way.

hrant's picture

I really don't see what's "peculiar" about pointing out that the LP model goes against the generally accepted* view of human cognition - specifically here that the subconscious uses all the information available to it (for example blurry -but context-rich- shapes deep into the parafovea) to slowly and fuzzily build a base (ie boumas) for the conscious half. Your stance that people do/should make decisions (like choose a wife) based on formal Proof is certainly more peculiar.

* If "inconvenient" to empiricists...

hhp

John Hudson's picture

Hrant, don't move the goalpost: you were talking, as you frequently do, about 'human nature', not about human cognition. You cite 'human nature' in almost any context to dismiss opposing arguments -- i.e. anything you say is based on an accurate interpretation of human nature, while your opponents, to a man it seems, ignore human nature. It is one of your most frequent dodges (recently seen in the Typographica discussion on type design education).

And I did not use the adjective 'peculiar'. I said particular.

And I never said that people should make decisions such as choosing a wife on the basis of formal proof. In fact, I never made any mention of formal logic. What I said is that I think it is foolish to choose a wife on a romantic basis, since romanticism tends toward the delusional. I said that one should use reason when deciding whether to marry someone. And my wife agrees.

Kevin Larson's picture

Sorry I'm so late to the party, this has been an excellent and lively conversation.

William, I envy the excellent conversations that you must have had with Sir Karl Popper. I admire him greatly. It seems worthwhile to mention one of his more important contributions to the philosophy of science. Popper proposed that every good theory should be falsifiable. Basically, that it's possible to run an empirical test that could show the theory to be wrong. For example the parallel letter recognition model predicts that longer words (words with more letters) will take longer to recognize than shorter words because of the additional computation of putting together more letters. If we ran a test and found that long words are recognized faster, then the parallel letter recognition model would be proven false. A theory like creationism is a poor one because there is no test that can ever demonstrate the theory to be wrong. I would likewise put many of Freud's theories into the same category.
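
To make the word-length prediction concrete, here is a minimal sketch in Python of the kind of check that keeps it falsifiable: correlate word length with recognition time. The data below is invented purely for illustration; a robustly negative correlation in real, clean data would count against the parallel letter recognition model.

    def length_time_correlation(words, times):
        """Pearson correlation between word length (in letters) and
        recognition time (in seconds)."""
        n = len(words)
        lengths = [len(w) for w in words]
        mx, my = sum(lengths) / n, sum(times) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(lengths, times))
        sx = sum((x - mx) ** 2 for x in lengths) ** 0.5
        sy = sum((y - my) ** 2 for y in times) ** 0.5
        return cov / (sx * sy)

    words = ["at", "dog", "house", "reading", "typeface"]
    times = [0.21, 0.22, 0.25, 0.27, 0.28]  # invented recognition times
    print(f"r = {length_time_correlation(words, times):.2f}")
    # positive r is consistent with the model; a robustly
    # negative r in clean data would falsify it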

What tests can we run that would falsify the bouma model?

Cheers, Kevin

Thomas Phinney's picture

More importantly, what tests can one design that would yield significantly different results for the two models? If the only tests that would falsify the bouma model would also falsify the parallel letter recognition model, and we strongly suspect that one of the two theories is true, then those tests aren't very informative. Of course, despite decrying existing tests and their interpretation, Hrant has not suggested anything himself.

As for John's comments, I don't see any need to make a simple binary choice between reason and romance in deciding whether to get married. Personally, I considered both emotional and rational factors in getting married. (As did my wife.)

Cheers,

T

John Hudson's picture

[Aside, since we've got enough going on without bringing religion into the mix: the idea of 'creationism' as any kind of scientific theory is simply daffy. And I say that as someone who believes that God created all things except Himself and the other persons of the Trinity.]

John Hudson's picture

> I don't see any need to make a simple binary choice between reason and romance in deciding whether to get married. Personally, I considered both emotional and rational factors in getting married.

Of course. But recognising what is called, with some justification, 'true love' requires some amount of reason in itself. Our feelings are submitted to rational judgement, unless we're adolescents (of whatever age). I don't even remember when or in what context the original exchange with Hrant took place, but my recollection is that I simply asserted the importance of reason in choosing whom to marry. What I likely said, because it is a construction I have used at other times, is that love is a necessary but not a sufficient condition in marriage. And note that I say in marriage, not for marriage; I think there is sufficient evidence of success in arranged marriages to suggest that love may develop within a marriage rather than before it. Which is not to say that I think marrying someone you have never met is generally a good idea!

Anyway, back to the legibility...

William Berkson's picture

Kevin, yes, it was a gratifying and unforgettable experience being around a truly great thinker. In my highly biased opinion the best book on Popper's relevance to cognitive psychology is 'Learning from Error', which I wrote with John Wettersten. The first half, which I mainly wrote, is on Popper and cognitive psychology.

Thinking back on those days brings to mind one of the many things he said that stuck with me at the time - 'filed for later reflection' - and that may be relevant to your work. He said he thought that the periodic reversal in the Necker cube optical illusion was due to the mind rapidly trying out hypotheses to interpret the incoming data, refuting one, trying another, and refuting that in turn. The Necker cube causes an unusual 'loop' in the process. I'm sure Popper would say that some kind of rapid testing process is involved in identifying words.

The bouma model in its more extreme form - that word shape is much more important than individual letters - is refutable, and refuted. As I mentioned earlier, the research mentioned in the letter-reversal discussion here showed that you can mix up letters within a word and still read quickly. So the more extreme form is dead.

It is important to understand here, I think, that models in the vague sense of an idea that guides research are not testable. For example the continuous vs atomic views of reality are still vying with one another after several hundred years. As Popper emphasized, what are testable are theories.

A model that is a heuristic idea, like 'you can explain reading in terms of word shapes', is too vague to be testable, even though it can inspire testable theories. Some of Popper's followers, such as Agassi (whom I was also a student of), made clear that both research programs - inspired by 'models' and other things - and testable theories are important in the growth of science. But when it comes to evaluation, you can only empirically evaluate theories; research programs can only be evaluated indirectly, by whether or not they are fruitful of new discoveries.

As Thomas mentions, the most informative tests are 'crucial experiments' that will discriminate between two competing theories. But both theories must be testable to be so tested.

As Agassi pointed out, there is a danger in not distinguishing between research programs and theories. You can always cite confirmations of research programs, but experiments are only scientifically significant when they are potential refutations of theories.

On the 'bouma' model: those who are partisans of that research program should come up with theories to test. Better still, anyone researching should come up with testable versions of theories they disagree with, as well as ones they agree with, so they can do crucial experiments.

Personally, I think there is more to word recognition than letter recognition, but the challenge is to put forward alternative theories on what that 'more' is, and then to test them.

The discussion on emotion vs reason and marriage decisions is something I have written about, but I really don't want to get into it, as I would be hours writing out the whole issue - and it's not exactly relevant here.

I will just mention though that a lot of the evolution debate is muddied by a failure to distinguish between testable theories and research programs - evolution 'theory' involves both.

hrant's picture

> It is one of your most frequent dodges

While I think accusing me of moving the goalposts is one of yours! :-)
There are no goalposts. The field is the whole scope of Life, and nobody can win. Just play.

> I never said that people should make decisions such
> as choosing a wife on the basis of formal proof.

I've asked you before if you chose your wife based on Proof. You said yes. No way. Reason, even evidence, of course I couldn't agree more*. But that's exactly what makes me disbelieve the LP model and believe the bouma one, in spite of lack of Proof.

* And OF COURSE emotion too!

If the two models were women (and I weren't already married) I'd chase Bouma to the ends of the earth; and I'd get a restraining order on that damn control-freak chick.

--

> What tests can we run that would falsify the bouma model?

That's a very good angle, I think. Because as you're trying to disprove something you will learn more about what is happening. I couldn't reasonably ask more than for you to assume boumas exist simply to try to prove they don't.

The big obstacle -and I really don't mean to give anybody an impossible task, I really like empiricism, if only for what it can do- is that the scientific methods used (not just by you) simply can't get there from here. Like I said, it's great that you're looking into the effects of letterspacing, but as long as you're not promoting deep immersion you're never going to get any data that refutes your model, because your model is sound for deliberative reading. All you'll see is little blips of strangeness, like short and frequent words being skipped more or less - not enough to shake you from your position, not enough to give Doubt real steam.

--

> Hrant has not suggested anything himself.

It's hard. I've always said that. It's the PL people claiming they've "arrived", thanks to their shiny computers.

I can think of a few things though, but the basic thing -again- is implementing a fundamental change in the way data is collected. As I suggested to Kevin in Thessaloniki (and people like Kinross have always maintained this), you can't really measure reading by attaching devices to people at some exact distance from some exact screen and telling them they're guinea pigs. They'll never read naturally that way. That's why Kevin's data shows such low reading speeds, for example.

What I've suggested to Kevin is this: get avid readers in their 40s or so, pay them to sign a consent form allowing you to: provide them with reading material*, and secretly observe their reading at any time during let's say a one year span; then if you get enough data about reading times, you'll see all the boumas come out of the woodwork, like little spirits lurking on the page, waiting for people not to look. I promise.

* Maybe books of their choice but re-typeset in certain ways. Like a string of chapters with tight spacing and a string with loose.

The big problem is that this entails a lot of funding. The good news -and the final piece of the puzzle which makes me excited to have Kevin around- is that MS is filthy rich.
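
To sketch what the analysis of such logs might look like - with invented numbers, and assuming each entry is one reader's unobtrusively logged words-per-minute for one chapter - something like this:

    from statistics import mean

    # Invented words-per-minute logs for one reader, one value per chapter.
    tight_wpm = [262, 255, 271, 248, 266]  # chapters set with tight spacing
    loose_wpm = [241, 236, 252, 229, 244]  # chapters set with loose spacing

    diff = mean(tight_wpm) - mean(loose_wpm)
    print(f"tight {mean(tight_wpm):.0f} wpm vs loose {mean(loose_wpm):.0f} wpm "
          f"({diff:+.0f} wpm)")
    # A consistent difference of this kind, across many readers and many
    # chapters, is the signal this proposal is after; a real analysis would
    # use a paired test per reader rather than bare means.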

--

> Our feelings are submitted to rational judgement

John, you can't seriously think that everybody feels this way. There's something called Intuition, and it's not the same thing as Reason. Or maybe you could say it's a Reason of the Subconscious. But that's totally different than Logic.

--

William, that was a great post.

To try to clarify what the bouma model in my head is: it certainly doesn't say we read whole-word shapes all the time. That's nuts. It simply says that the brain uses as much as it can as fast as it can. So for example when it sees some totally clear letters in the fovea, it compiles them in parallel into words. This is Kevin's stuff. Here boumas are single letters. But it also says that the brain takes "educated guesses" about blurry clusters of letters some way off to the right of fixation (notably where individual letters are undecipherable, even if you give a reader extra time to consider the shape consciously - in fact consciousness might actually be an obstacle there). It can do this if the cluster (maybe a word, maybe a part of a word) is crisp, frequent and distinctive enough, given the experience of the reader at hand, for the brain to have enough faith (never certainty) to consider it read. This is why typos are missed so much. This is heuristics. And yes, it makes empiricists quiver.

hhp

William Berkson's picture

PS: I can't find the thread where Kevin came in earlier, but I did suggest a hypothesis on word recognition there, though I don't remember exactly what it was. I think it was related to an ideal range for letter spacing. Maybe you, Kevin, are testing this and other ideas, from what John says. I (like others) also believe that serifs help the eye identify and hold a distinct line of text, which is why serif text needs less leading. I think there is a lot more...

hrant's picture

Leading totally overpowers any line-forming ability that serifs have*. Serifs help bouma bonding; they actually help letters become less themselves in order to promote good notan.

* Maybe if you use too-little leading then serifs (if they're long enough) can help out a little, but I think by then you'd have totally trashed decent readability anyway, because word spaces (which have to be a certain minimum size themselves, serifs or no) will overpower the leading.

Coming back to "my" model, I just wanted to add this:
It's the model that assimilates everything I've encountered since starting this research in '98, including all the empirical data I've seen (in the context of validity), all the anecdotal evidence, plus beliefs about human thought. What choice do I have but to believe it? Just ask yourselves, what is more probable: that the empirical data that points to the PL model is essentially limited to deliberation (or very light immersion), or that virtually all the anecdotal evidence (that a book set in sans is a more laborious read, that there's such a thing as a too-big x-height, etc.) is totally wrong? A big problem with Formal Science is its tendency to treat anecdotal evidence as Old Wives Tales. There has to be more give-and-take, on both sides. Hey, like marriage again!

hhp

Kevin Larson's picture

Hrant, now we're getting somewhere.

> then if you get enough data about reading
> times, you'll see all the boumas come out of
> the woodwork, like little spirits lurking on
> the page, waiting for people not to look. I
> promise.

The first phrase is a hedge. If I don't find what you expect then it sounds like you'll say that I didn't look long enough. This is exactly the kind of statement that makes a theory unfalsifiable.

The rest of it is in the direction of making your theory a legitimate falsifiable scientific theory. But it's not there yet. You should say exactly what it is that I will find. Will people suddenly start fixating only once per line? Will people stop making regressive saccades? In the boundary study, will people read matching word shapes in the parafovea as fast as the actual word?

Cheers, Kevin

Kevin Larson's picture

William, I am not currently investigating differences between serif and sans serif fonts. Ole Lund reviewed the many studies investigating performance differences between the two font styles, and did not find compelling evidence to support that case. Clearly the different font styles carry emotional differences which are independently valuable. I am involved in investigating performance differences with different amounts of spacing between letters. The work is quite slow going. I believe that you proposed that the optimal letter spacing should be in proportion to the amount of counterspace - this is also the view that Robert Bringhurst has supported in the past.
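
For what it's worth, the proportionality idea can be written down in one line; in the sketch below the constant k is a free parameter to be estimated from testing, not an established value.

    def side_bearing(counter_width, k=0.5):
        """Sketch of 'spacing in proportion to counterspace': a sidebearing
        as a fixed fraction of the face's average internal counter width,
        in font units. k is a placeholder, not a measured constant."""
        return k * counter_width

    # e.g. a face whose 'n' counter is 220 units on a 1000-unit em:
    print(side_bearing(220))  # -> 110.0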

Cheers, Kevin

hrant's picture

I don't hedge. I want to learn (and I want to teach/convey truth). When I first found out about Javal's discovery, I was in total shock! It opened a Pandora's box. But the phantasms only made me smile in awe... and then jump into the box!

If you convince me, you convince me. Then I'll be the loudest anti-typographic-anecdotalist out there. And I personally will stop making serif text fonts (unless somebody pays me to :-). "My" theory is very much falsifiable, I know that for sure.

I have no problem admitting I am/was wrong. If I am wrong, I had very good reason to be - I didn't know enough, and not through lack of trying. And hey, people still admire Tschichold. :-)

> You should say exactly what it is that I will find.

Well, I'm willing to try that.

Some things you'll see:
- Longer saccades.
- Skipping over words (and not just the "Taylor 60").
- Fixations on the latter parts of words. Maybe.
- MORE regressions.

Most notably, the first two have already shown up in your hard data, just very very faintly. You've noted saccades that are "too long" (like 15!), or words like "and" being totally skipped. That's the tip of the iceberg.
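
Those signatures are at least checkable in the data. A hypothetical sketch, assuming each fixation record is a (word_index, incoming_saccade_length_in_letters) pair; the record format and the 12-letter threshold are assumptions of mine:

    def reading_signatures(fixations):
        """Count long saccades, regressions, and skipped words in one trace."""
        long_saccades = sum(1 for _, length in fixations if length > 12)
        regressions = sum(
            1 for (prev, _), (cur, _) in zip(fixations, fixations[1:])
            if cur < prev  # moved back to an earlier word
        )
        fixated = {word for word, _ in fixations}
        skipped = max(fixated) + 1 - len(fixated) if fixated else 0
        return long_saccades, regressions, skipped

    trace = [(0, 7), (1, 6), (4, 15), (3, 3), (6, 9)]  # invented fixation trace
    print(reading_signatures(trace))  # -> (1, 1, 2)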

> Ole Lund reviewed the many studies investigating performance
> differences between the two font styles, and did not find compelling
> evidence to support that case

Come on, dude, he said more than that. He said it's inconclusive because the crushing majority of empirical testing has been fatally flawed!

> the optimal letter spacing should be in proportion to the amount of counterspace

Yes, because we read notan.

hhp
