bouma slam

enne_son's picture

I was born Gereformeerd. Gereformeerd is, or was in my youth, a denomination within the Dutch Protestant orbit stemming from the Calvinist response to 16th-century Catholicism. While keeping our distance from denominational expressions of Christianity, my immediate family and I still keep to some of the observances ensconced in the wider Christian tradition. For Christmas I asked for and received copies of Stanislas Dehaene’s recent book Reading in the Brain: The Science and Evolution of a Human Invention, and Oliver Sacks’s The Mind's Eye. The Mind's Eye describes, among other things, the case of Howard Engel, the Canadian detective-story writer who lost his ability to read due to a lesion, caused by a stroke, in what Dehaene, the author of the other book, and his research associates have dubbed the Visual Word Form Area (VWFA) of the brain. In Reading in the Brain the VWFA is called “The Brain's Letterbox”. Reading in the Brain describes: how we read; the VWFA; the neuronal basis for reading; the evolution of writing; learning to read; the dyslexic brain; reading and symmetry; and the author’s own pet theory of neuronal recycling; and it speculates about neuronal recycling as the basis of culture.

Along the way Dehaene forcefully (his word) blasts (my word) the Whole Language approach to reading instruction, mostly on account of the apparently well-documented damage its “whole-word” instructional bias has caused to literacy. The bugaboo is the approach’s sidelining of the systematic inculcation of phonemic awareness. “Cognitive psychology directly refutes any notion of teaching via a ‘global’ or ‘whole language’ method.” Dehaene incidentally keeps his description of this method vague.

I haven’t finished reading the entire book, but noticed in passing that, as part of this critique, Dehaene mentions that the emphasis on the global shape of words “also invaded the world of typography, where the term ‘bouma’ (named after the Dutch psychologist Herman Bouma) was coined to refer to the contours of words.” Dehaene adds, “In the hope of improving readability, typographers intentionally designed fonts that created the most distinctive visual ‘boumas.’” His source for this observation is — wouldn't you know it — Kevin Larson's 2004 “The Science of Word Recognition.”

I of course find this somewhat disappointing, because the slight involves a misreading of what “bouma shape” was actually coined to refer to — something more than the “contour” or “global shape” of words. Perhaps the slight, or slam, is excusable, but digging deeper into the psychological literature where the term originated (Insup and Martin M. Taylor’s The Psychology of Reading), into Paul Saenger’s Space Between Words, or into the peregrinations on Typophile surrounding the term might have induced Dehaene to explore the matter a bit more fully.

The book is wonderfully provocative, though, and it has its uses. In some places the language positively sparkles. And it has an endorsement from Oliver Sacks. Moreover, it’s a fairly manageable and, for the most part, internally cohesive compendium of empirical rummaging in various domains. It’s also a bit of patchwork in places, in that there are disparities between several of the ways of seeing reading that Dehaene recruits to build his case. For example, Dehaene passes along the interactive activation model, but his own approach, based on local combination detectors, doesn’t fully mesh with the interactive activation account’s basic premises.

But somewhere in the section on “The Brain’s Letterbox” I had to take a break. The reasons were: 1) my ever-mounting frustration with the — to me — too rudimentary notion of how words are encoded in the brain — i.e., according to their spellings — became insurmountable, as did 2) an intense feeling that I had to make a more significant concession to the parallel letter recognition scheme than I had felt able to make before. In other words, I felt I needed to rethink the “inhibition of incipient recognitions for letters” idea I had found in Edmund Burke Huey. I felt the need to try to plot a “convergence of perspectives.”

The long and short of this is that my thinking is entering a bit of a new phase. Basically, the idea is that in the new iteration of my scheme, categorical perception at the letter level can and does occur, but it doesn’t normally or necessarily lead to an independent downstream labeling in the higher regions of the brain. This is a kind of paradox that might require further elaboration. A simulacrum of parallel letter recognition happens, but parallel letter recognition as currently schematized isn’t the central or foundational mechanism. The central mechanism remains for me the variety of “feature analytic processing” or “simultaneous co-activation of letter parts” I tried to describe in the recent “Monitor on Psychology” thread, and before that at TypeCon Atlanta, and the central event for me remains Bill’s notion of matrix resonance.

I’m thinking of starting a typophile blog page to describe 1) how I now see the structure of the underlying matrix — the inner bouma, if you will, 2) how I think it becomes established, and 3) how the connection with it is made in normal reading contexts.

I think I might call my scheme “distal ring plus proximate ring coding” or some such thing. DRAPE-coding for short.

I'll do that when I get a chance, and if my intuitions about this stick.

enne_son's picture

Chris, if this were Facebook I would press the ‘like’ button several times.

dezcom's picture

Thanks, Peter!

Nick Shinn's picture

The "lassooed" shape is problematic from a fractal perspective (e.g. "how long is the coastline of Britain").
The longer the lassoo, the more it comes to represent individually discernible letters.
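
A toy sketch of that coastline effect, with a Koch curve standing in for an ever-more-detailed word outline (the divider method and all parameters are illustrative stand-ins of my own, nothing from the reading literature):

```python
import math

def koch(points, depth):
    """Refine a polyline with the 4-segment Koch generator; a stand-in
    for a word outline that gains detail at every scale."""
    for _ in range(depth):
        new = [points[0]]
        for (x0, y0), (x1, y1) in zip(points, points[1:]):
            dx, dy = (x1 - x0) / 3.0, (y1 - y0) / 3.0
            a, c = (x0 + dx, y0 + dy), (x0 + 2 * dx, y0 + 2 * dy)
            apex = (a[0] + dx / 2 - dy * math.sqrt(3) / 2,
                    a[1] + dy / 2 + dx * math.sqrt(3) / 2)
            new += [a, apex, c, (x1, y1)]
        points = new
    return points

def divider_length(points, ruler):
    """Approximate divider (compass) method: plant a pin, walk the
    outline, replant at the first vertex at least `ruler` away."""
    pin, steps = points[0], 0
    for p in points[1:]:
        if math.dist(pin, p) >= ruler:
            pin, steps = p, steps + 1
    return steps * ruler

outline = koch([(0.0, 0.0), (1.0, 0.0)], depth=6)
for ruler in (0.3, 0.1, 0.03, 0.01):
    print(f"ruler {ruler:.2f} -> measured length {divider_length(outline, ruler):.2f}")
```

The measured perimeter keeps growing as the ruler shrinks, so "the shape of the word" is not a single well-defined quantity.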

A similar difficulty occurs with open counters: the boundary between internal white space and letter spacing is fuzzy.

But at some point in the perceptual process, a decision must be made which fixes such values, in terms of their textual meaning.

In the language of quantum physics, this might be described as decoherence -- the untangling of multiple quantum states (possible words) into a single macroscopic meaning, or reality as it is read.

enne_son's picture

I sent a link to this thread to Gerrit Noordzij, thinking he might be interested. He commented:

Yes, the discussion is interesting for me. The participants argue at a considerable level. The weak point is the separation of type design from other kinds of lettering and from handwriting. Under general conditions you make the counters by enclosing them between consecutive strokes; by isolating pieces of shape from the world wide background. It does not make any difference whether successive strokes belong to the same letter or to the space between letters. Typography breaks the pattern of strokes. The counters of the letters are prefabricated in typefaces. Only typesetting makes the white shapes between the letters. Rolf Rehe demonstrated impressively how a self-indulgent typographer may spoil the equilibrium. The jammed shapes can’t dance anymore. We should regard typography as a special instance of lettering that obeys the same laws.

In a subsequent e-mail he added: Feel free to throw my reaction into the ring.

dezcom's picture

"...It does not make any difference whether successive strokes belong to the same letter or to the space between letters...."

That is it in a nutshell! Neighbors semi-enclose the sum of their adjoining negative spaces, thereby creating a new shape that describes their union as much as it described the glyphs individually before. Letter pairings, or groupings, make distinct shapes which are probably not describable as a specific object but which nonetheless contribute to reading a word or words.

ChrisL

Nick Shinn's picture

...It does not make any difference

Indeed, readers can manage letter combinations which deal with white space in quite different ways:


And both can appear in the same typeface.

dezcom's picture

There is no hard rule that says these interactions of space must be the same in every encounter. It makes sense that they not take over to such a degree that they dominate the sum of all the parts.
Nick's first "ea" appears to create a dominating shape, but "ea" does not live in a vacuum. Perhaps if we add, say, an s or a t to the mix, this domination falls away and we see a word harmony that is not interrupted by the finger-like shape the e and a alone make?

Rob O. Font's picture

>Only typesetting makes the white shapes between the letters.

Amen!

dezcom's picture

Nick,

Here is a case of the shape being recognized to some degree in its native, unadorned state, then pushed to dominate the word-read, but in a way that ADDS communication value rather than detracting from it:

daniele capo's picture

Peter,
Can I ask why you find that the hierarchical model used by Dehaene has difficulty with your description of our lived-experience of words?

I'm asking this because, when I read the part of the book where D. explains this hierarchical model (if I remember: neurons that can discriminate between black and white, groups of neurons that can see the difference between more complex things, etc.), I thought: this is the little door through which someone can re-introduce the 'wordshape'. You only need to 'enter' at the level of 'word'. (To give an example: when I see a tree, I have no doubt that a single neuron only knows about really simple stimuli, but from my 'normal', 'natural' knowledge – our lived-experience – I see a whole tree.)

Moreover, if we think about it, a text (set with type or written) is hierarchical because (as someone said before) we have spaces that define words, spaces that define lines, paragraphs, etc. Organizing phenomena with hierarchy seems to me the 'normal' way our perception works.

("My" point of view is that white spaces are there to define hierarchy.)

By the way, there are also other interesting things in the book: what he calls proto-letters (simple shapes recognized by primates), and graphic 'invariants' in writing, for example.

quadibloc's picture

Incidentally, there are some optical illusions that show that when we look at something with "white space", our brains ignore the white space, so that if there are obvious shapes in the white space, we are slow to notice them.

Instead, we notice the shapes of the things that are supposed to be meaningful, so that an equally apparent shape in the wrong place is all but invisible - or, at least, its perception is delayed.

So while word outlines are used in reading, white spaces between letters... not so much.

daniele capo's picture

So while word outlines are used in reading, white spaces between letters... not so much.

White spaces between letters, lines, etc. define (or contribute to the definition of) the contour of the 'object'. That's why I think that the function of white spaces is the definition of hierarchy. (Moreover, you notice white spaces when they are 'wrong': wrong letter spacing, etc.)
By the way, if you remove the white spaces it is very hard to read.

Another thing about white space: longer ascenders and descenders, for early typographers and letterpress printers generally, also had the side effect of a bigger gap between lines (they didn't have 'leading', right? So this was the only way to achieve a bigger line height) and hence a better line definition. This side effect, maybe, can partially explain the improvement in legibility of longer ascenders/descenders.

But I'm not an expert

enne_son's picture

Daniele,

Letter, word, line, paragraph and section definition are important things and a hierarchy of white space helps delineate them for vision — see Gerrit Noordzij, Letterletter (the book) page 126. Such delineations help us navigate the text.

In Dehaene’s hierarchy, anything beyond V4, starting with his “abstract letter detectors” but including identity as a word, exists outside or beyond the visual cortex. The only representation of shape that is responded to — beyond V4 — is ‘orthographic shape,’ and that is done by way of local (open) bigram detection. Global shape is not a feature that is used. You can’t enter the recognition hierarchy at the top level; you have to come up from the bottom.
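
For concreteness, here is a toy version of such a bigram code; the gap limit and the set-based output are my own guesses at the flavour of the scheme, not Dehaene’s actual detectors:

```python
from itertools import combinations

def open_bigrams(word, max_gap=2):
    """Toy open-bigram code: every ordered letter pair with at most
    `max_gap` letters intervening. The gap limit is an assumption."""
    return {word[i] + word[j]
            for i, j in combinations(range(len(word)), 2)
            if j - i <= max_gap + 1}

print(sorted(open_bigrams("word")))
# ['od', 'or', 'rd', 'wd', 'wo', 'wr'] -- relative order survives
```

Note what drops out: the code preserves relative letter order but carries nothing of the word’s envelope, which is why global shape plays no role at this stage.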

When I used “unitary” above, I didn’t only mean “as an independent unit.” I also had in mind “whole,” in a gestalt sense of figural, with a unique internal structure of intrinsically interconnected parts.

As far as I know, all letterpress printers have strips of lead for leading, but I'm not sure when these strips were first invented.

William Berkson's picture

>so that if there are obvious shapes in the white space, we are slow to notice them. ...So while word outlines are used in reading, white spaces between letters... not so much.

Interesting analysis. I think you have here a valuable insight, and a mistake. You have a good point that "figure" and "ground" are probably treated differently in the brain. I suspect you are right that the shapes of the whites themselves are not "read" by the brain *as shapes*. And this is where Frutiger's statement that type design is about "designing the whites" is misleading. And I suspect that Peter's "role units" as "atoms" making up a word pattern don't include white shapes themselves.

Where I think you are mistaken is on the importance of the white spaces, in spite of their shapes not being "read" directly. That is because the ground helps define the "figure". Getting the width of the letter spaces right is important to reading, as it is important to get right the rhythm of the blacks and the overall density—"typographic color"—of a word, and of the text. Rhythm and color are both important to readability. There are also other issues in the relation between black and white, related to optical illusions such as Mach bands and the picket-fence effect.

Finally, I should note that your use of the term "word outlines" is not clear. If it means the outer perimeter of the word, as in Kevin's citations, then it has little to do with Peter's theory, or with how we actually read. I prefer the phrase "spatial pattern of sub-letter features across the whole word" to explain how skilled readers normally recognize words.

enne_son's picture

Daniele, a further (cryptic) thought: hierarchical coding has its uses. Extrinsic stabilization — which is what happens through teaching in reading acquisition — combined with narrow phase alignment and the rhythmic formatting of the whites inside the word gives rise to intrinsic integration at the level of role-units.

I'm going to describe an encoding of order that places information about fine-grained recognitional parts in concentric rings around salient centres (of the coarser granularity of letter-wholes). Sets of these combine to form highly redundant matrices. These become synaptically entrenched. We identify familiar words in immersive reading not by climbing ladders but through matrix resonance.

The idea is that there’s another way of schematizing the encoding of order that makes better sense of how we experience words.
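
To give the flavour of what I mean, here is a deliberately crude sketch. It uses whole letters where I really intend sub-letter role-units, and the ring count and the overlap score are arbitrary placeholders of my own, not a worked-out model:

```python
from collections import Counter

def ring_code(word, rings=2):
    """Toy 'distal ring plus proximate ring' code: for each letter
    (a salient centre), record which parts sit at each signed distance,
    out to `rings` positions away. Illustrative only."""
    code = Counter()
    for i, centre in enumerate(word):
        for r in range(1, rings + 1):
            if i - r >= 0:
                code[(centre, -r, word[i - r])] += 1
            if i + r < len(word):
                code[(centre, +r, word[i + r])] += 1
    return code

def resonance(probe, stored):
    """Crude stand-in for matrix resonance: overlap between two codes."""
    return sum((probe & stored).values()) / max(sum((probe | stored).values()), 1)

lexicon = {w: ring_code(w) for w in ("word", "work", "ward", "cord")}
probe = ring_code("word")
for w in lexicon:
    print(f"{w}: {resonance(probe, lexicon[w]):.2f}")
```

The redundancy is the point: each part is registered several times over, from several centres, so a match can succeed without any independent letter-labeling step in between.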

Christian Robertson's picture

William is right. When it comes to extended reading, I find it more useful to think about rhythms and textures than white space. When the metrics of a type are too regular, it becomes difficult to decode (the picket-fence problem). The opposite problem, where no underlying grid or rhythm is established, can also make text hard to read.

dezcom's picture

While I understand Bill's reasoning and Christian's preference for looking at a different order of magnitude to describe how he works, I don't think we can ignore what the negative space does at any level of description. We can debate how much we "choose" to infer from negative vs. positive shapes, but we cannot deny that form is described simultaneously by both! As much as the black describes the white, the white describes the black. There is an interlocking that occurs without the need for a hierarchy. The better the two interact with each other, the more descriptive the form and the more accurate the read. My guess would be that the earliest humans (and probably all animals that survive partially on vision) developed this keen sense of quickly extracting a form from its surrounding noise if for no other reason than detecting both predator and prey. I surmise that this early "instinct," which evolved further into a "skill," survives today in all of us as "that which makes visual decoding of all things practical" to us. This includes reading, sign recognition, logo recognition, and still includes "that eerie feeling" about walking down a certain fear-inducing path. We read the shadows carefully, even if the object casting them is not in view.

daniele capo's picture

Peter, thanks for your replies. I'd like to read more about that, but I think it is too 'technical' for me. (If I understand correctly, your concern about D.'s model is that words are encoded outside the visual cortex.)

Leading: I think that early printers didn't use strips of lead to space lines, but my point was that longer ascenders/descenders always increase the 'white' between the lines, which in turn can make the text more legible, or readable (I don't know why I introduced the early printers' leading problem).

William Berkson's picture

Chris, I think we agree, and I didn't make myself clear enough. I do think the whites are very important, just not the exact shapes that they make. An exception is very bold faces, where figure and ground start to be ambiguous.

Frutiger said (p. 217 of Typefaces. The Complete Works), "The realization that the balance of the counters was responsible for the actual beauty of a type face was, for me, a revelatory experience."

I don't think I can go as far as Frutiger on the balance of counters being *wholly* responsible for "beauty". But the balance of counters (letter spaces included) is astoundingly important, and a fundamental insight of type design.

When it comes to designing, for example, a swash stroke, the beauty of the shape of the black is more important than the shape of the white ground that it cuts through. That's because the black is "read" more than the white is.

Both black and white are important. I think what's very interesting here, and what John's (Quadibloc's) post raised, is that what is important in the white and the black are different. The eye-brain combo handles them differently, so the designer is better off doing so as well.

John Hudson's picture

Chris: I don't think we can ignore what the negative space does at any level of description. We can debate how much we "choose" to infer from negative vs positive shapes but we cannot deny that form is described simultaneously by both! As much as the black describes the white, the white describes the black.

There is good evidence that we perceptually distinguish figure and ground and that we give preference to figure as shape. Indeed, given our narrow field of focus and the need to identify shapes against backgrounds, it shouldn't surprise us that we treat figure and ground differently. Yes, the figure is defined relative to the ground, but that is not the same thing as suggesting that figure and ground are perceptually equal.

dezcom's picture

"...but that is not the same thing as suggesting that figure and ground are perceptually equal."

But they perceptually create each-other! There is no equality, one requires the other to exist. If you choose to call that "superiority" that is your choice. If you take away the one you feel has the "inferior" role, then the other more superior one also disappears.

John Hudson's picture

Chris: But they perceptually create each-other!

No, they don't perceptually do anything. We do things perceptually.

We see figures that are defined by contrast with ground: the stronger the contrast, the more defined we say that a figure is. The role of the ground in perception of the figure is to contrast with the figure; the shape of the ground is not like the shape of the figure, because what we recognise is the shape of the figure, not the shape of the ground. This is why we have notions of figure and ground, and not merely of positive and negative. Remember that reading is unusual in that figure and ground are on the same focal plane; but I see no reason to think that this changes the dynamic by which we perceive the shape of the figure against a ground that is ignored. The ground has to be there in order to make perception of the figure possible, but that is not the same thing as saying that the ground is perceived in the same way as the figure. And I'm perfectly comfortable characterising this as giving perceptual precedence to the figure.

John Hudson's picture

Here's an experiment: give someone a piece of black paper and some white pigment, and ask them to paint a letter A on the paper. How many people do you think will paint the counters and the white space around a black A? How many people do you think will paint a white A on the black ground?

I predict that almost everyone -- especially non-type designers -- will paint a white A on a black background, because the letter is a shape on the ground, and the ground is what is given.

dezcom's picture

"No, they don't perceptually do anything. We do things perceptually."

Fine, make this about semantics or philosophy if you want to stray away and ignore the concept completely. Just rewrite the statement in any way with the euphemism of your choice to cloud the issue instead of confronting it. We are talking about perception! If you remove the perception of the ground then you remove the perception of the figure. The perception of the one is created by the perception of the other.

"I predict that almost everyone -- especially non-type designers -- will paint a white A on a black"

So what! I am not talking about paint. Sure they would. They can paint the A with 2 strokes that way and they would need many more to paint out the background with any color. Pick any 2 colors, one for figure and one for ground. Another way to look at it is to give them a smooth wooden board and a chisel and hammer. Most people would cut away the thing we call the figure much the same way ancient Greeks and Romans carved letters into stone. By so doing, they are removing some part of the ground to reveal the figure--oh and both are the same "color," wood. The glass is half full--the glass is half empty. ;-)
This is not about efficient writing; it is about seeing.

No, you don't perceptually do anything. You do things semantically.

William Berkson's picture

Chris, maybe John's thought experiment is not very persuasive, but surely we do "do things" perceptually. Our visual cortex and brain have an extremely active process for extracting figures and objects from a visual field.

It has been shown by studying cats' optic nerves and brains that the visual cortex starts by identifying edges. And at the end of the process there is not a reproduction of exactly what is "out there." This is shown by optical illusions, and these are important in type, as you well know--the overshoot on round letters so they look the same size as square ones, dividing letters above the middle so the halves look equal, etc. The brain is contributing a lot to what we see, and it seems to treat figure and ground differently.

That is the point John S. was making above: that we usually don't identify shapes in the ground. I am grateful for it—thanks, John S.! It identifies something that has bothered me for a long time about the idea of "designing the whites," without my being able to put my finger on it.

An example is the "arrow" in the Fedex logo. People usually don't see it until it's pointed out. How much a subliminal effect it has is open to debate, but if it were a black arrow everyone would see it right away. Hence, to me John S.'s point is made.

John Hudson's picture

Chris: If you remove the perception of the ground then you remove the perception of the figure. The perception of the one is created by the perception of the other.

What can you mean by ‘remove the perception of the ground’? Let's say that you mean remove the contrast between the ground and the figure, because it is the contrast that enables us to perceive the difference between the two and hence perceive the recognisable shape. [The alternative would be to somehow remove the perception itself, i.e. mess with someone's eyes or brain.] Diminishing the contrast obscures the figure relative to the ground, and if the contrast is removed completely then we could say that the figure is completely obscured, and hence not available to be perceived. Perception of the figure relies upon contrast with the ground, but that doesn't say anything at all about perception of the ground. To say that 'removing the perception of the ground removes the perception of the figure', by which I understand you to mean remove the contrast between them so that the one cannot be distinguished from the other, is simply a truism. So what? It doesn't say anything about the process by which, given sufficient contrast, we perceive and recognise the figure. Nothing at all. It certainly doesn't imply that the figure shape is not hierarchically preferenced in our perception, or that we do not habitually ignore shapes in the ground.

No, you don't perceptually do anything. You do things semantically.

Of course we do things perceptually, and much of what we do in terms of semantics, when we assign meaning to what we perceive, is dependent on and subsequent to perception. Where else do you think perception resides if not in our visual cortex and our brains? Perception is not a quality of the object perceived. And I am not being euphemistic or trying to ‘cloud the issue instead of confronting it’, I am trying to be precise -- anything but euphemistic -- and not to play fast and loose with words and concepts. Far from trying to avoid confronting ‘the issue’, I am saying that there is no issue in what you say, only a poorly worded truism.

johnnydib's picture

The brain is just an organ and the neurons don't have a mind of their own.

I would not be surprised if people who can't hear have more difficulty reading and writing than those who can hear (even if I haven't noticed anything firsthand to suggest that), since the alphabet is inherently phonetic and, to a lesser degree, orthographic.

Which brings me to "word superiority" in Arabic. Whether it's an unfamiliar word (it could be familiar to hear, or to read in the Latin script, but unfamiliar in the Arabic script), or bad handwriting, or a really nice and ornamental piece of calligraphy, I find myself spelling it out phonetically, decoding the individual letters, before understanding and reading the word. The same thing happens in English when you read the name of the Icelandic volcano, or shorthand, or overly done logos. Reading in Arabic is very similar to reading in English; there's somewhat of a difference in writing.

The major building block of reading an alphabet is recognizing individual letters, then seeing them in context to figure out how they're pronounced, then seeing them in pairs, and then in groups. Therefore letter recognition is much more important than word recognition, and this is exactly why type designers will sometimes design around words or pairs of letters: they don't want the r to look like an n, or the v to touch the a, in case that slows down the recognition of any of the letters.

Christian Robertson's picture

As designers we often need to train ourselves to see the ground in ways that we wouldn't naturally. This is why we do things like turn the letterforms upside down, invert them, concentrate deliberately on the white space, squint to blur the forms, etc.

I wonder if some of our focus on figure/ground is an artefact of our need as designers to override the perceptual shortcuts our mind takes as we process visual information. Think of the first graders' art lesson where the teacher reveals with a ruler that the eyes are not, in fact, positioned at the top of the head but in the middle.

I recently heard a great interview with Chuck Close, the famous photorealist painter, in which he described his struggles with a condition called "face blindness". Apparently a small fraction of the population doesn't have the specialized wiring in the brain connected with facial recognition, which makes it almost impossible for them to distinguish people by their faces. I wondered if his ability to dispassionately observe the contours of the face contributed to his photorealistic portraits.

dezcom's picture

"I am saying that there is no issue in what you say, only a poorly worded truism."

Thank you, John. You just proved my point. Your gift with semantics serves you well. Instead of making a respectable argument, you just attack the source of disagreement. I am sorry that you don't have a clue to what I am talking about and insist on changing it to something you prefer. I won't trouble you any further regarding this.

dezcom's picture

Christian, my youngest son has some degree of the affliction you mentioned. It always amazes me that he can instantly find the correct way to get me un-lost when we are driving in a strange area. (He has memorized every street, road, path, and trail in the northeastern US but can have a devil of a time recognizing a person he knows if they have dyed their hair or are even wearing clothes he is not accustomed to seeing them in.) Nature finds a way of letting each of us overcome a weakness by connecting via a different path than what we expect as "normal." Reality may truly be in the eyes of the beholder :-)

quadibloc's picture

@johnnydib:
The brain is just an organ and the neurons don't have a mind of their own.
Yes, but there are pieces of the brain that are doing things to signals from the eyes before they're passed on to the other parts of the brain in which our conscious minds reside.
So while the brain doesn't have a mind of its own, it does do things on its own apart from what we do with our own minds.

Rob O. Font's picture

JH>The alternative would be to somehow remove the perception itself, i.e. mess with someone's eyes or brain.

Diminished contrast, disturbed rhythm, clouded issues... all easier done than said with ClearType.

enne_son's picture

“we […] don't identify shapes in the ground” [Bill et al.]

Perhaps, but in the letter “o” the size of the counter and the roundness of its expression on both the left and right have tremendous cue-value. This is augmented by the full closedness of the ring-like black that contains it.

Now take the letter “n”. The size of the counter and the greater squareness of its expression to the left and right, as well as its lack of containment at the bottom, have critical cue-value. This is augmented by the bilateral disparity, lack of closure at the bottom, and conjunctivity of and in the black that contains it.

I can devise similar statements for other letters.

Perception — or better, feature-analytic processing — relies on cues.

At the level of distinctive features it does not matter if the cues are provided by the black or provided by the white, by what is “figure” or what is “ground”.

Perceptual discrimination affordance or visibility relies on a differentiation between “figure” and “ground,” but in feature analytic processing, both the black and white are used.

There have been studies to determine what things have cue-value. I don’t have them at my fingertips, and I don’t remember their exact content, but I know my impression about the whites comes in part from having had a look at them.

dezcom's picture

"...don't identify shapes in the ground”

We may not have an agreed-upon name for these shapes, true; we may not consciously attend to these shapes to the degree that we would if they were an agreed-upon object, true. That does not mean that they do not play a role in our perception of the agreed-upon figure. Look at a script that is completely foreign to you, one that you cannot read and whose glyphs you don't even recognize. To me, the negative shapes in Arabic, Indic, and Asian scripts are fascinating beyond belief, because I am not encumbered by "knowing the code". I am free to just see shapes as they have evolved over time and as they vary from one scribe to the next. I fear that the day I can read these scripts will be the day I lose this phenomenal joy.

enne_son's picture

I wonder if the import of my post was clear enough. By saying that “in feature analytic processing, both the black and white are used,” I wanted to convey that they are used in reading.

In my functional ontology of reading, feature analytic processing is part of what the visual cortex does when it encounters text with an eye to making out what the text says.

Nick Shinn's picture

Not a fair comparison, considering word breaks.

William Berkson's picture

Bert, we are able to read text without word spaces, in mixed case, and so on, so we clearly have the capacity to first identify letters, then words. However IT DOES SLOW DOWN the reading process. I put that in caps because John Neatrour recently noted to me that the slowing in reading mixed case is not accounted for by the "received view" of interactive activation, and it seems it cannot be explained. If so, the slowing down of ReAdiNg MiXEd CasE is a refutation of the view that we always first identify letters, then words. Since we are familiar with the caps, we should not be slowed. Saying that we are not familiar with the mixing shouldn't make any difference, because if we go first to an identification as an "a", then to the word, the particular shape shouldn't matter.
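
To put the point sharply: if letter units abstract away case before anything reaches the word level, then a toy version of that assumption treats the stimuli identically and predicts no cost at all (the lexicon and function names here are mine, purely illustrative):

```python
# Toy of the "abstract letter identities" assumption in interactive
# activation: case is discarded before letters feed the word level,
# so MiXeD cAsE should activate exactly the same word unit as lowercase.
LEXICON = {"reading", "mixed", "case"}

def abstract_letters(stimulus):
    # Stand-in for case-blind letter detectors (the received view's
    # assumption, not a claim about the brain).
    return stimulus.lower()

for stim in ("reading", "ReAdiNg", "rEaDING"):
    word = abstract_letters(stim)
    print(f"{stim!r} -> {word!r} | recognized: {word in LEXICON}")
```

On this sketch all three stimuli land on the same word unit in the same number of steps, which is exactly why the observed slowdown embarrasses the received view.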

david h's picture

>...is a refutation of the view that we always first identify letters, then words.

Tihs is bcuseae the huamn mnid deos not raed ervey lteter by istlef, but the wrod as a wlohe.

Letter transposition:
Cambridge University versus Hebrew University

William Berkson's picture

Thanks, David. That is interesting, and makes sense. I think the low redundancy of a consonant-only language is also why Hebrew, it seems, always had some kind of visual indication of word division: mid-dot or word space. I think if you do the kind of thing in Hebrew that Bert does above, the Hebrew reader will be slowed down more than the English reader.

I don't see this as bearing on the question of whether skilled readers routinely use whole word patterns of sub-letter features.

enne_son's picture

Bill, the only way the slowing effect of mixed case would be a refutation of an interactive activation model that includes a letter level is if this model actually had an explanation for this slowing down. The interactive activation account doesn't account for this slowing, so there's no refutation. For there to be a refutation, the account must develop a conjecture about what produces the effect, one that makes new or different testable predictions than another model does.

I can imagine that interactive activation theorists may invoke some external explanation, not intrinsic to the scheme, but this makes the model weak, especially in the face of a model in which the slowing is expected on intrinsic-to-the-model grounds.

John Hudson's picture

Chris: Instead of making a respectable argument, you just attack the source of disagreement.

What disagreement? This is what I am, honestly, trying to figure out from what you have written: whether or not there is actually any disagreement, and as far as I can tell what you have said about figure and ground is both unarguably true and without implications for the perceptual issues that were being discussed. The fact that the figure shape is defined relative to the ground does not tell us anything about how we perceive either.

William Berkson's picture

Peter, I'm not sure about that. It may be a consequence of the interactive activation view, starting with letter identification first, that there would be no difference in word identification with similarly familiar letters, even if the letters are differently shaped. I guess I don't know the theory well enough to say.

You could argue, as in my article on readability, that to identify letters we need regular rhythm, and with mixed case rhythm is disrupted. Then you could test this explanation by mixing in lower-case letters stretched to a similar size as the caps. I would expect such stretching would not slow reading down as much as mixing in the caps.

Chris Dean's picture

@ William & Peter: Regarding mixed case, I have posted this in other threads but will do so again here so we have it in the same place.

Besner, D. (1989). On the role of outline shape and word-specific visual pattern in the identification of function words: None. The Quarterly Journal of Experimental Psychology: Human Experimental Psychology, 41(1-A), 91–105.

(I have not had the time to follow this thread in its entirety)

enne_son's picture

Christopher, what does Besner mean by “word-specific visual pattern?”

Rob O. Font's picture

And what does Besner mean by "none" ;)

david h's picture

Derek Besner -- Abstract:

A common assumption in the reading literature is that function words are often identified wholly or in part on the basis of outline shape and/or word-specific visual pattern. This assumption was examined in three perceptual identification experiments, a speeded oral reading experiment, and in a lexical decision task where the visual pattern of function words and control items (content words, non-words and control strings) was systematically distorted by cAsE aLtErNaTiOn. In all five experiments distortion failed to impair performance more on functors than on various control items. This result is argued to be inconsistent with the view that outline shape and word-specific visual pattern typically play a role in the identification of isolated function words.

Shape distortion impaired function words less than controls in perceptual identification, but impaired function and content words equally in speeded naming and lexical decision. These contrasting effects of shape distortion are discussed in terms of a distinction between the uptake of information (lexical access) and phenomenal perception.

And see this paper:

Does “whole-word shape” play a role in visual word recognition?

enne_son's picture

WSVP (Word-Specific Visual Pattern)

Derek Besner [1989]: that is, (a) visual information that might span letters (e.g., the shape of the spaces between letters) and/or (b) visual information that forms part of the feature list that specifies a word without recourse to letter identification (e.g., components of letters).

In a chapter of the 1991 book Lexical Representation and Process [ed. William Marslen-Wilson], Besner and James C. Johnston explain that these features could be “the set of all component strokes in the proper arrangement, or they might be more exotic aspects, such as junctions between strokes, the shape of spaces between letters, or any other property of the pattern.”

So on this view the visual information that might be used in word identification is the “entire set of visual features in a word,” collectively constituting “a word-specific visual pattern or WSVP.”

David Berlow asks, “[a]nd what does Besner mean by ‘none’?” Aye, there’s the rub. In the case of the WSVP it’s at best a qualified “none”. I actually laid down a 35 spot to find this out.

enne_son's picture

More about the NONE:

“The careful reader will have noted that words are more impaired than non-words by format distortion in Experiment 5. At first glance this seems to offer support for some form of wholistic analysis. This result is not new, however, and is discussed at length in Besner (1983), Besner & McCann (1987), and Besner and Johnston (1989), where it is argued to reflect a form of wholistic recognition, as opposed to wholistic identification.”

Note that the word “identified” is used in Besner's title.

We get an idea of what wholistic [Besner’s spelling] recognition versus wholistic identification entails from the following: “Deleterious effects of case alternation may arise in part after abstract letter identities have been computed and an attempt is made to integrate abstract orthographic information with the original sensory pattern so as to afford a linguistically interpreted phenomenal percept (Besner, 1983).”

The idea is that in the uptake of visual information case mixing doesn’t affect performance, but in the formation of a stable percept it does. The Perea/Rosa paper explores this further, but also makes the following claim about Besner: “[…] the multiple-process model of Besner and Johnston (1989) predicts that when the visual format is familiar, lexical decision responses could be made on the basis of ‘word-specific visual pattern information’ (i.e., it would reflect a form of holistic recognition, as opposed to holistic word identification, as in the Allen et al., 1995, model). However, when the format is unfamiliar (e.g., alternating-size stimuli), participants will ordinarily use the letter-level routes (i.e., the orthographic familiarity assessment process or the unique identification of the word).”

Lexical decision responses are no / yes responses when the subject is asked to decide if the presented letter-sequence is a word (using short exposure times). This is “wholistic recognition,” as opposed to identification.

So the NONE relies on a distinction between recognition, which can rely on word-specific visual pattern, and identification, which can’t.

What’s to be made of this?

Rob O. Font's picture

I.O.U. $17.50?
