Defining notan's edge

enne_son's picture

Hrant wrote: Painting involves making filled marks; drawing is defining
notan’s edge.
See http://typophile.com/node/88004?page=2#comment-488377

I think this is too black and white.

In “notan drawing” filling in with a marker defines notan’s edge. See: http://emptyeasel.com/2008/08/12/seeing-notan-how-to-make-stronger-compo... Here notan drawing is a reductive operation that resolves a gradated scene or manifold into a light / dark composition.

In writing no such reduction is required. If the paper is white and the ink is black there is only the binary light / dark. In writing, notan’s edge is directly defined by the moving front.

In punch-cutting notan’s edge is defined by the subtractive process of cutting away.

Notan’s edge can be sharpened or recalibrated in digital technologies, but notan doesn’t have an edge until an area has been filled in or part of it cut away. Until an area is filled in the only thing that can be controlled is the trueness and action of the curve or line.

According to Kevin Larson, ganglion cells (which carry signals from the rods and cones in the human eye toward the visual cortex, by way of the lateral geniculate nucleus) look for edges. As far as I can find, they are also sensitive to surface polarity.

Does writing with a hand-held tool misinform?

Our internal representations of bounded maps of letters are role-unit based and feature-tolerant. That is, they are highly tolerant of variability in contrast styling and somewhat tolerant of role-architectural drift. These tolerances are enhanced by the cortical dynamics of font-tuning.

What we don’t know for sure is whether writing with the hand-held tool optimally calibrates stress.

Here is a non-ideological but hypothetical context for assessing this.

My sense is that the sub-attentive parafoveal pre-processing that occurs in immersive reading responds to coarse-grained notanic composition, not fine-grained notanic equilibrium. It responds to this in deciding what words to skip and where to land when it makes its next saccade. It responds to salient areas of disturbed or undisturbed expressedness, or divergent aspect, or projection beyond the x-height range.

And my sense is that effortless, effective and automatic fovea-based processing relies on fine-grained rhythmic co-ordination of the blacks and whites. Fine-grained rhythmic co-ordination of the blacks and whites according to their relative saliencies and their allocation of weight in the cue-value domain optimizes the efficiency of cortical integrational routines within the visual cortex.

This imposes countervailing pressures on the sharpening and calibration or recalibration of notan’s edge.

Peter Enneson

dezcom's picture

Just as the mechanics of the human eye react to such things as size, distance, color, difference ratio, and proximity to neighbors, I would think that the whole system reacts to and accommodates whatever basis the form follows--tool based or otherwise. The human brain is an amazing tool. It can "read" very well far, far beyond optimum and trains itself to adapt to whatever it finds. The magic is in the ability to adapt. All life forms have survived by their ability to adapt. Humans are the most adaptable creatures except for viruses. I feel as though the search for the holy grail of "optimal" is not going to make reading much easier, or comprehension much faster, than it already is with several of the typefaces out there.

A few years ago, I designed a very strict 90-degree-only bauhaus typeface. At first, I thought it would be impossible to read. I was amazed how quickly I adapted to the limited forms, complete with picket-fence effect and very minor differentiation. The mind is a beautiful thing. If that same mind is given free rein to design a typeface, I am sure that some very readable ones will come out. The problem with waiting for the optimal research is the danger of staying away from innovation for fear of missing the optimal. Type designers are type designers and research scientists are research scientists. We each have our role. Waiting for one or the other to get there is of no use. Scientists measure what is available to measure. If we stop trying things and stay with historic conventions, we limit what scientists will have to measure. Whether a form is tool based or not is not the issue so much as making many different forms to see if there is a logic not yet discovered.

I would say, just keep working in whatever way your own mind leads you. At some point, scientists may be more capable of making sense of it, but giving them more options to test gives them more tools to work with.

Just do it.

hrant's picture

> just keep working in whatever way your own mind leads you.

That's not Design, that's Art.

Peter: Something eventually.

hhp

dezcom's picture

That is not true in the least, Hrant. Some minds will design better than others but that has no bearing on Art. It just has a bearing on the probabilities in the distribution of skills and abilities among those who attempt the task. Total failure is just as possible as total success. Ad nauseam blandness has the greatest chance of happening, but the mediocre is soon forgotten and the complete failure is a great teacher for others. The point is that there is no chance for progress without enough attempts. Let there be failures and boring nothingness to be forgotten. Out of all of the collective effort, there will also be some degree of success.

William Berkson's picture

>What we don’t know for sure is whether writing with the hand-held tool optimally calibrates stress.

I think when you say "hand-held tool" that is too wide. There are a lot of different tools. For example, the pointed Chinese brush works a lot differently from the pen. But interestingly they do similar sorts of things with thick and thin, even color, and so on. The thing is, the mind is guiding the tool, and trying to make it work for the eye.

If the question is whether the broad-pen-drawn Carolingian minuscule optimally calibrates stress, I would say that the evidence of history is that it handles stress very well for the eye, but not ideally. The Jenson, Griffo, Garamond line of type makers systematically changed the stress of the letters, and I think made them more readable. And publishers and readers certainly followed. For example, the top-left to bottom-right diagonals are lighter than they would be if pen drawn. There's a lot of other subtle modulation that's different also, but the diagonal thing will do for a start.

John Hudson's picture

What we don’t know for sure is whether writing with the hand-held tool optimally calibrates stress.

Or if there is an optimally calibrated stress. We should consider the very real possibility that our reading ability is evolutionarily grounded in perceptual skills that by their nature are non-specialised and do not require particular input; indeed, they need to be able to handle a great variety of input with little performance variation. Presuming, reasonably I think, that reading is a free rider on shape recognition abilities that evolved in the context of negotiating the natural world, why should we think that it requires optimisation of particular kinds of shapes, when what we're confronted with in nature is phenomenal diversity?

William Berkson's picture

>why should we think that it requires optimisation of particular kinds of shapes, when what we're confronted with in nature is phenomenal diversity?

We know we can compromise speed and comfort of decoding letters by too tight or irregular spacing, for example. There is no question we can still decode with too tight spacing. The question is how fast we can do it, and at what psycho-physical cost. The slowdown in reading speed is well documented with markedly sub-optimal type. So is fatigue or discomfort in reading, if Luckiesh's work with blink rate is sound. So I think there is reason to believe, as Dwiggins and Frutiger have said in the past, that there is some kind of ideal. And if Luckiesh is right, it does involve weight. He didn't test modulation of stroke, which Dwiggins thought an important question, or evenness of color, which I would like to see tested.

I would concede that it's not a matter of shape as such, as different scripts have greatly different shapes. But I do think such issues as spacing, evenness of color, and even modulation of stroke may make a difference. I think there is no doubt that even an "ideal" for text type (lengthy passages at 8-12 point printed) will have a range, not a pinpoint, and allow for expressive and aesthetic variation. But I do think there is reason to suspect that such an ideal range exists. The recent article by Legge and Bigelow puts the fluent reading range at 4-40 points, but I think a lot more can be done to narrow that in a number of ways.

enne_son's picture

[editorial comment: cross-posted with Bill's comment just above.]

John, the term feature-tolerant accounts for the performance variation part of what you have to say. I adapted the term from a 2011 paper by Andreas M. Rauschecker, Reno F. Bowen, Lee M. Perry, Alison M. Kevan, Robert F. Dougherty, and Brian A. Wandell called “Visual Feature-Tolerance in the Reading Network” [Neuron 71, 941–953 / available online]. Another term that’s used is invariant recognition.

One of the areas of current debate is to what extent reading is a free-rider on recognition abilities “evolved in the context of negotiating the natural world.” The debate centers around the question of specialization for words in the ventral occipital-temporal circuitry, sometimes referred to as the Visual Word Form Area (VWFA). Laurent Cohen and Stanislas Dehaene (author of Reading in the Brain) are prominent names on one side; Cathy Price and Joseph Devlin on the other.

I tried to place the defining notan’s edge question into a more inclusive framework than the one Hrant uses, but this probably needs to be further developed to become clear.

The stress-factor probably has more to do with ‘perceptual psychophysical costs’, the term that came up around the discussion of the Luckiesh / Tinker divide.

Optimal is probably the wrong word to use when talking about contrast styling. I think there is no visual-circuitry-related reason why the allocation of stress in western scripts, typical of at least the best broad-pen variants of writing, should exact undue perceptual psychophysical costs. I think this is where the debate about type of stress can most profitably touch down.

In my mind, I’ve begun to relate the notan part of these discussions to Luckiesh and Moss’s blink-rate results in assessing relative boldness, a result that Dwiggins embraced. Is there a better notanic balance or distribution in the somewhat bolder range than in the typical regular weight? Does this have to do with the better visibility of both the black and the white? And how does this relate to perceptual psychophysical costs? Perceptual psychophysical costs have to do, as far as I can see, with the efficiency of cortical integrational processing routines inside the visual cortex.

dezcom's picture

"We should consider the very real possibility that our reading ability is evolutionarily grounded in perceptual skills that by their nature are non-specialised and do not require particular input;"

BRAVO, John!!!

William Berkson's picture

As far as relative boldness, Peter, my hypothesis is that what is working here is the interplay of two things: the figure/ground distinction, and visibility. To "read" the figure it needs to be clearly differentiated from the ground. So the ground needs to have clearly more of its color than the figure. This would argue that lighter strokes would be better. Bolder strokes are, however, more visible. Yet they may crowd out "ground", so that the figure/ground relationship is upset. So somewhere between very bold and very light is best. Frutiger thought the optimum was, in the x-height area, having the black strokes take up about 30% of the white space.
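To make the 30% figure concrete, here is a rough sketch of how one could check a setting against that rule of thumb. It is only an illustration, not Frutiger's own procedure: the function name and the toy stripe image are mine, and it assumes you already have a binarized image of the text with known x-height and baseline rows.

```python
import numpy as np

def black_to_white_ratio(img, x_height_top, baseline):
    """Ink-to-counter ratio inside the x-height band.

    img          -- 2D boolean array, True where there is ink
    x_height_top -- row index of the x-height line
    baseline     -- row index of the baseline (x_height_top < baseline)
    """
    band = img[x_height_top:baseline, :]
    black = band.sum()
    white = band.size - black
    return black / white

# Toy stand-in for a scanned text line: 2-px stems on a 10-px pitch
# between rows 10 and 30, so black is a quarter of the white.
img = np.zeros((40, 200), dtype=bool)
img[10:30, 0::10] = True
img[10:30, 1::10] = True
print(f"black / white in the x-height band: {black_to_white_ratio(img, 10, 30):.2f}")
```

On a real face you would rasterize a block of lowercase text and read the band limits off the font metrics; the toy pattern above comes out at 0.25, i.e. black at 25% of the white.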

As far as modulation, one thing I can think of is that keeping the joins relatively even in color with the strokes requires thinning one or the other of the strokes, or both. If I've got it right, Chinese also follows this rule, as does Hebrew. I don't know about other scripts. The broad pen does some of this automatically, which is an advantage. So that would explain why modulation is better.

dezcom's picture

" So the ground need to have clearly more of its color than the figure. This would argue that lighter strokes would be better"

Why? I know it is your "theory" but what prompts you to hypothesize this?

hrant's picture

William, don't forget leading!

hhp

William Berkson's picture

Chris, to me what makes it easy to recognize as the "ground" is that there is more of it, and it's an even color. So when the ground is less than 50% locally, there is more possibility of the eye taking it for figure. That's why I think it's reasonable that less-than-very-bold weights are less taxing to read in extended text. But how much less isn't clear, at least to me. The countervailing force is the need for visibility. According to Luckiesh, bolder is more visible.

Hrant, you are right to note leading. That's one of the indications that there's a range as far as weight goes, not one ideal. There's a very interesting comparison in Mitchell & Wightman's Book Typography. There they show that a light type, Spectrum, works well with less leading, but a dark type, Quadraat, works better with more leading. So the interlinear space does affect what the eye takes as "normal" or comfortable text weight. Of course, extenders are another factor affecting what works best as far as leading. So when designing a text type, what leading it can or cannot take well is a consideration, and both the darkness and the extenders will affect that.

Té Rowan's picture

When you have less 'ground' than 'figure', you have entered the Realms of Blackletter, incidentally independent of the Realms of Blackadder.

enne_son's picture

I am still stuck on the idea that the entire word is figure. Both the black and the white shapes inside the word, that is, both the stroke-units and the counters, are information for vision in a real and active way. Both contribute actively to visual word-form resolution.

Designing letters so that the entire word can pop out as if it were an internally cohesive figure against a ground is to me part of the challenge facing type designers. Designing the whites recognizes that both the stroke-units and the counters are information for vision in a real and active way.

I think there is a hierarchy of figure / ground relationships in text: one centered on the stroke, another on the letter, a third on the word, a fourth on the line, a fifth on the column, and a sixth on the page. All require attention. The word-related one I think is central.

hrant's picture

It makes a lot of sense that parts of letters take part in reading.
But since they can only exist in relation to the white, even parts
of letters suffer from chirographic constraints.

hhp

William Berkson's picture

Peter, I think that the figure and the ground operate differently as far as psychological processing goes. One of the most basic perceptual functions is recognizing an "object." That involves identifying edges, as you say, but also putting these together to form an object, or in this case a letter, that is distinct from its background. The distinction between figure and ground is thus critical for recognition.

I don't have a clear theory of how white is handled, but I am pretty sure it must be different. My feeling is that handling the whites well for text is mainly a matter of getting the black to be detected easily. There, evenness of color and consistency of form are important. I have the feeling that only in the case of issues such as whether a form is closed or open does white play a direct role in character recognition.

dezcom's picture

The whites [negatives] play a very strong role in defining the blacks [positive]. I don't see the value in pulling them apart.

William Berkson's picture

Chris, I think they may function differently in reading, so that to understand brain processing in reading, the distinction may be important. For example, it may be that a slight variation in the width of the black stems, or positive, has a bigger impact on recognition or ease of reading than a slight variation in the negative, such as the space between letters.

What exactly an understanding of the differences will tell us I don't know, but I'm pretty sure it would turn up something of interest to designers.

For example, what makes the white seem more vivid in some counters of bold letters? Is it figure/ground ambiguity, or something else? And what does that tell us about when this is a good or bad thing?

enne_son's picture

Bill, to my mind reading involves edge detection as well as detection of surface polarity, in this case black and white. As well, to my mind, neither the visual cortex nor any other part of the brain puts these edged shapes together to form an object. They are already together. To my way of seeing it, all the visual system has to do is resolve, by eye-fixations inside a word, what stroke-units are connected to what other stroke units and counter units (local combination detection), and how the stroke-units in their combinations are distributed across the figure or word-object as a whole.

The problem as I see it is not just to get the black and / or white to be detected easily, but to have the entire word be seen and processed as one cohesive, weight-, contrast- and spatial-frequency-co-ordinated thing. Unless they are meaningful markers, unevenness of colour and inconsistency of formal logic create a signal-versus-noise confound for the visual cortex, which it has to neutralize. They present roadblocks to efficient processing in the visual cortex.

hrant's picture

I think much of what you say (or at least the way you're saying it)
is in the realm of consciousness, so outside of immersive reading.
For example, there's no time/need to do edge detection. All that is
detected during immersive reading is black and white shapes (which
are in fact inescapably one thing). Also, uneven color is nothing
more than contrast, which is the meat-and-potatoes of information.
There's only too much of it if it causes errant fixations.

hhp

William Berkson's picture

Peter, I didn't mean to imply that letter recognition has to precede word recognition. I agree with you that aspects of different letters being identified, and a response to the whole pattern of these across the word, may well be what is going on. But even in identifying the aspects, I suspect that the black has priority. For example, the brain is looking for joins, or branches, in the black. I think there is a putting together of the glyph or word in the way that an object is put together.

There may be differences from object recognition because you are dealing with decoding to reach meaning, as in listening to language, but still I see a priority on the structure of the blacks. For example, I don't think that the shape of the whites between letters has the same *salience* or impact as the shape of the blacks in glyphs. And I think this could be shown, as I suggested to Chris, by a greater sensitivity to blacks than whites, except for identifying open and closed structures.

For example, if you show me a word in a new typeface, then hide it, I and other type designers might be able to reproduce the distinctive features of the letters fairly quickly and well. But if you ask us to reproduce the shapes of whites between the letters, without drawing the glyphs, I think the result would be much slower, and worse.

enne_son's picture

Bill, probably at least three things are going on simultaneously: quantization into role-units from edge and surface-polarity information, which draws on both the black and the white; local combination detection, which relies on alertness to joins and branching behaviours in the blacks; and global distribution detection, which has to do with the design's spatial-frequency-modulated use of Cartesian space in both the black and the white.

William Berkson's picture

Peter, you're not addressing my point about the salience of the blacks. I think that if you track one line out by 10% compared to those above and below, the readers might well not notice it. If you increased all the stem widths by 10%, I think readers would notice it. If I am right, how would you explain that without ascribing a different role to whites and blacks, and greater salience of blacks?

hrant's picture

1) A reader noticing something means little, and can in fact be misleading.
2) Whites simply have more open forms; that doesn't mean they're less
important, it just means there's a different skew (but no qualitative
difference). Legato BTW is a great example of the whites being linked
in the way you ascribe to the blacks.
3) Maybe this is happening because fonts are poorly designed, ignoring notan?
4) Since the black and white cannot exist independently, this is an illusion.

hhp

enne_son's picture

Bill, along the same lines as the skew issue mentioned by Hrant, I would think the blacks and whites occupy different bandwidths within the same basic frequency channel. A 10 percent change to material attended to in a lower spatial frequency band would have a smaller optical effect compared to a 10 percent change to material attended to in a higher spatial frequency band.

I would ascribe salience to features, such as expressedness, which can be circular or n-shaped, disturbed as in the e and s, or undisturbed as in the o and the dpqb group. Other examples of features are extension beyond the x-height range; or aspect, which is oblique in the x; or closure and the lack of it. Things like expressedness and lack of closure are co-defined by the black and the white.

enne_son's picture

[Bill] I don't have a clear theory of how white is handled, but I am pretty sure it must be different.

Working further on the idea that the entire word is figure, and that both the black and the white shapes inside the word are information for vision in a real and active way, here is something about the white that occurs to me:

Not all the whites are quantizable into fully realized role-units. Closed counters seem to be (o, bpqd); those in the m and n and u as well. But some open counters, like those in the lower part of the a and the s, are perhaps only quantizable into what might be called demi role-units. Between-letter whites aren't quantizable into role-units at all. Probably they are used as word-integral reference points for compiling distributional statistics.

It seems to be the case that information of different orders and polarities, existing in at least two spatial frequency bands, is compiled in a single feed-forward cascade.

William Berkson's picture

Peter, I agree with you that some whites, such as those in opbdq, might well be "read" while the spaces between letters are not. The spaces would set the scale of the "grid" that the brain lays over the word, and keep the blacks in the rhythm (periodicity) expected by the eye. Still my point would stand, following what you theorize, that blacks are likely treated by the brain differently than background whites.

hrant's picture

Why would the eye expect any such thing?

And why would any information be ignored?
The mind is like Chinese cuisine - even the chicken's feet are used.

hhp

William Berkson's picture

>Why would the eye expect any such thing?
The eye would expect periodicity (partial) because it is a feature of our good text fonts, as confirmed by Fourier analysis of text blocks. (And even built into our scripts.) And the periodicity helps the brain know where to look for salient letter features that will help identify the word.

>And why would any information be ignored?
The brain has to pick the salient information out of the visual field. I don't think "ignore" is exactly the right word; "winnowing" is closer, and winnowing is essential to isolating and decoding text in the visual field. What I'm saying is that the winnowing of much of the white space is different from that of the black. The spacing around the letters and lines helps us set the grid, but the shapes of the white around words and between lines are not, I suspect, generally registered as *shapes.*

enne_son's picture

Bill, I think we disagree. I think everything is ‘read’ in the sense that everything is information for vision in a real and active way. Not everything is ‘read’ in the sense of being quantized into role-units, though. The black is only special in the sense that it has a different polarity than the white, works within its own spatial frequency band, and allows for exhaustive quantization at the role-unit level. From the point of view of the visual cortex, the whites between letters aren't background, though they flow into the ground against which the word is figure.

hrant's picture

I think those Fourier images show people what they want to see...
"Our good text fonts"? How do you know they're good? Is "good"
good enough, for a type designer? And Bill, why is this periodicity
absent in Williams Caslon?

This "rhythm" business simply doesn't make any sense.
It's a romantic construct of our self-important consciousness.

> the winnowing of much of the white space is different from the black.

You might not be under-estimating the relevance of the white,
but you might very well be over-estimating that of the black!

hhp

William Berkson's picture

Peter, I think in conceding that whites between letters are not "quantized on a role unit level," you are conceding that whites and blacks are not treated the same, which is my point. [For those not familiar with Peter's terminology: role unit = salient sub-letter feature, such as a branch, an open counter, etc.]

The greater salience of the blacks compared to whites between letters and above and below the line is my point. The eye and brain processing starts with a bunch of visual information, and has to winnow it down to identify the code, and decode it. So it starts with both black and white, but the mechanism is after features of the black. That's why we don't consciously notice or remember the shape between the c and a in ca, but we do remember the c and a.

Hrant, by good text font, I simply meant those fonts widely recognized as good, and widely used in extended text, such as Garamond, Times New Roman, etc.

Here, thanks to Peter, is a strip of a Fourier image done on a text block of Minion:

Objectively, there is a lot of regular periodicity, indicated by the evenly spaced white bands, which as I remember correspond to the concentration of black stems. But there is also a significant amount of departure from strictly regular periodicity. Here there is room for interpretation. My interpretation is that the letters define enough regularity to give the eye a grid to read off of, but that the letters depart from it this way and that, which is what we "read". We need both the underlying meter and the departure from it, like meter vs rhythm in a song.
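For anyone who wants to try the analysis rather than argue about the picture, here is a rough sketch of the kind of transform being discussed. It does not use Minion (a synthetic, slightly jittered stripe pattern stands in for the binarized text line), so it only illustrates the method: project the line onto the horizontal axis and look at the magnitude spectrum, where regular stem spacing shows up as a peak at the stem pitch.

```python
import numpy as np

# Synthetic stand-in for one binarized text line: 2-px "stems" on a roughly
# 10-px pitch, jittered by +/-1 px to mimic the departures from strict
# periodicity that real letterforms introduce.
rng = np.random.default_rng(0)
width = 512
profile = np.zeros(width)
x = 0
while x < width - 2:
    profile[x:x + 2] = 1.0                 # a stem
    x += 10 + int(rng.integers(-1, 2))     # jittered pitch: 9, 10 or 11 px

# Remove the DC component, then look at the magnitude spectrum.
spectrum = np.abs(np.fft.rfft(profile - profile.mean()))
freqs = np.fft.rfftfreq(width, d=1.0)      # cycles per pixel
peak = freqs[np.argmax(spectrum)]
print(f"dominant spatial frequency: {peak:.3f} cycles/px "
      f"(about {1 / peak:.1f} px pitch)")
```

A 2-D transform of a whole text block, like the Minion strip, adds the line-spacing periodicity on the other axis, but the reading is the same: sharp bands mean regularity, and the smear around them is the departure from it.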

You can have an alternative theory of why Fourier transforms of text look the way they do. But if you want to deny significant regular periodicity, I think you have gone into the land of loony denial.

As to the m in Williams Caslon Text, yes I did make the counters slightly narrower than the n and thinned the middle vertical slightly, as is a common trick. I think it saves a little space and differentiates the m a little better. But I didn't make it much narrower than two superimposed n's, which is also common in many fonts. That's because I felt the departure from regularity hit the right balance of being close to the underlying meter (which is around half the width of the n), while departing slightly from it. It's probably what Caslon did also, but right now I don't remember.

hrant's picture

All this talk of white versus black makes no sense when you
take into account that there's nothing in between, so the two
are in fact one thing: notan. I can't imagine how one could
focus on one aspect or another of the black or the white and
design with that in mind without adversely affecting the other.
And that's why we should look at the border and not the bodies.

> those fonts widely recognized as good

Now that sounds like the high priests who confronted Galileo...
And here's an opportune Galileo quote: "In questions of science, the
authority of a thousand is not worth the humble reasoning of a single
individual." My whole point is that we don't really know what's good.
And I feel safe in saying that this stuff isn't.

> The Fourier images are a mathematically objective indication of periodicity.

Well all that says is that those fonts are [somewhat] periodic!
If you could couple that to data -good data, not the sort of thing
we've seen so far- showing that the more periodic a font the greater
its readability then you have a case.

The problem with data is that the more sophisticated it is,
the harder it is to interpret, and the more room is left for
subjective interpretation...

> We need both the underlying meter, and the departure
> from it, like meter vs rhythm in a song.

This is post-rationalization.

A song that deviated from a rhythm at almost every note is not something
most people want to listen to! That's because music appreciation is a
conscious activity. But a text font is like the sounds of the jungle. If there's
any rhythm, it's like chaos theory, and beyond our conscious appreciation.
Which means we can't design for that, and we must forgo rhythm.

So to me it's not that we need some illusion to depart from, it's that we
need to find a new destination. The big mountain range that rises much
higher than our plateau, with the big body of water in between...

hhp

William Berkson's picture

>A song that deviated from a rhythm at almost every note is not something
most people want to listen to!

Hrant, you seem not to understand the way rhythm works in music, which is maybe the source of your objection to the metaphor in type. There are two different things going on in music. One is the underlying beat, which is normally extremely even, in a single meter: the same number of beats to each measure. Played against the underlying beat is the rhythm of the melodic line, which can vary a lot from the beat. In syncopated music, which is common in dance music, the rhythm of the melodic line constantly plays with going on and off the beat. And normally in dance music you have a steady beat of the drum, so you can clearly hear both the rhythm of the melodic line and the beat, playing against one another.

What I am suggesting is that the fairly regular bars of the verticals are analogous to the beat in music, and the departures from it like the rhythm that plays against the underlying beat.

enne_son's picture

Bill, going back a few steps in this thread, I make a distinction between features and role units. At TypeCon / Atlanta I said:
the visual system
(1) breaks stimulus words down into oriented lines and curves, to the point where
(2) responsiveness to aspect, closure and expressedness accumulates and
(3) resolution or ‘quantization’ into role-units occurs.

#1 needs refinement in relation to what I said in my opening post about ganglion cells looking for edges and registering surface polarity.

#2 relates to features and is the focus of feature analytic processing. This is what Frank Smith based his ideas about perceptual processing in reading on. Quantization into role-units is one of the things that happens in feature analytic processing. Closure is a feature, bowls and counters are role-units. So instead of saying the mechanism is after features of the black, I would say the mechanism uses features of the black / white composite to isolate role-units, many of which are black, some of which are white. It doesn't only isolate role units, it reads the black and white relationally in parallel*.

*I'm working on the idea that this is the essence of what Hrant means when he says we read notan. We read the black and white relationally in parallel. Optimizing relational processing with a focus on the black and white composition is it seems to me one of the key things Hrant is after.

Apparently the neurophysics of the visual cortex has an on-center / off-surround structure. Without going into all the details (some of which I haven't been able to assimilate fully), I think it can be said that optimal relational processing across the word using both blacks and whites occurs when the on-center / off-surround routines in the two polarity domains intermesh.

I think the intermesh issue underlines the relevance of Hrant's quest. It is one of the things the visual cortex needs for efficient processing of words.

hrant's picture

> the mechanism is after features of the black.

1) The mechanism is after anything it can possibly use.
2) Any feature of the black is also a feature of the white,
although often on another (maybe superior :-) level.

> That's why we don't consciously notice or remember the shape
> between the c and a in ca, but we do remember the c and a.

1) Consciousness is not relevant during immersion.
2) We don't remember the shapes of "c" and "a", we
simply remember the lexical units.
3) We remember something about the "c" and "a" because
that's what our consciousness was taught in school. And
maybe if we were taught to respect the white as children
we'd be over this parachirographic hump already! :-)

> Played against the underlying beat is the rhythm of
> the melodic line, which can vary a lot from the beat.

Yes but they're both grids. I hope you're not saying we
need more grids in type! The first step is to take to heart
that immersive reading is not conscious (while music is).
Jungle sounds don't have beats or rhythms or anything
artificial like that.

To me drawing a parallel between music and reading
is much like drawing on a blackboard with a burin!

hhp

hrant's picture

Peter, re-reading your original post:

When I try to define the terms "painting" and "drawing" it's not
an ideological move - it's just to promote clear communication.
Also, any concept is necessarily absolute; however any application
of any concept is never absolute.

> In writing no such reduction is required.

What I would say is that in writing (what I call "painting", to
remove any ambiguity, since "writing" might imply the formation of
the skeletons - so in a way, ductus) no such objective is achievable.

> notan doesn’t have an edge until an area
> has been filled in or part of it cut away.

No, notan is defined as soon as the edge is marked.
It however does not exist until the black/white is marked.
Type designers do the former. If they do (or even factor in)
the latter first (painting) they cannot achieve the former.

> Our internal representations of bounded maps of letters are role-unit
> based and feature-tolerant. That is, they are highly tolerant

The human body is highly tolerant of grapefruit. And it tastes great.
But if you have too much you may lose a member or get breast cancer.
http://news.bbc.co.uk/2/hi/health/7978418.stm
http://news.bbc.co.uk/2/hi/health/6900482.stm

> [The parafovea helps decide] what words to skip and
> where to land when it makes its next saccade.

You know I've said this before, but it was a while back, so:
Information is information. If information in the parafovea
is used to decide what word a bouma is (and that's clearly
happening because at the top end only 1/3 of text is foveated)
then that's reading.

Oh, and again, the only thing that's effortless is dying.

hhp

Té Rowan's picture

Oi, Hrant! You left an unfinished sentence up there...

> My whole point is that we don't really know what's good.

... but we always know what isn't.

John Hudson's picture

I'm with Bill on figure bias, which I think is part and parcel of recognising things as distinct from the space around or through them. The fact that text typically involves a radically simplified figure/ground relationship, of a kind that we seldom encounter in nature except when a bird is silhouetted against a bright sky, doesn't suggest to me a variant mechanism: we still bias the figure, the thing to be recognised, over the ground, the background against which the thing stands. This is also one of the reasons why I remain suspicious of notions of word recognition distinct from letter recognition, because I think 'word' is a fuzzier concept than 'letter', such that most people asked to describe a letter will describe its figure form, i.e. conceptualise a letter as a thing; whereas, in this discussion, a word is put forward as a gestalt of figure and ground, but also as a collection of letter things in a particular periodic relationship.

[Thanks, by the way, Bill, for introducing the notion of periodicity, which provides the opportunity to talk about regularised and flexible spatial relationships without using the more contentious term 'rhythm'.]

enne_son's picture

[hrant] not an ideological move

I made my “a non-ideological but hypothetical context” comment in the context of addressing the issue of whether or not writing with a hand-held tool informs or misinforms. In the ductus thread you wrote: “I'd also substitute "misinform" for "inform" — that is an ideological issue.”

About your other comments: defining notan’s edge and defining notan are different but related things. The two go hand in hand. I’ll stick for now with what I said about defining notan’s edge. If optimal notan is posed as an objective to be achieved, we have to have a handle on what it is in the domain of reading and writing. I tried to do this with my intermeshing on-center / off-surround comments above.

The only way to gauge this intermeshing ‘on the ground’ in design is, as far as I can see, by optically assessing phase alignment in the blacks and optically assessing the rhythmic cohesion of the whites, as well as optically assessing if one pole drowns out the other, that is if rhythmic cohesion in the whites has come at the expense of phase alignment in the blacks and vice versa. Optimizing notan ‘on the ground’ is a highly iterative process involving working at notan’s edge.

Another way to gauge how well the black versus white on-center / off-surround routines intermesh is, I suspect, by using fast Fourier transforms, though I’m not sure just yet exactly how.
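One crude, purely optical proxy I can imagine (this is my own stand-in, not a worked-out version of the on-center / off-surround idea): take a single scanline through the x-height zone, measure the run lengths of the blacks and of the whites separately, and compare how evenly each polarity is distributed. Something like the sketch below, where the function names and the toy scanline are only illustrative.

```python
import numpy as np

def run_lengths(scanline, value):
    """Lengths of consecutive runs of `value` in a 1D boolean scanline."""
    padded = np.concatenate(([False], scanline == value, [False]))
    edges = np.flatnonzero(np.diff(padded.astype(int)))
    return edges[1::2] - edges[::2]

def unevenness(runs):
    """Coefficient of variation of run lengths (0 = perfectly regular)."""
    return runs.std() / runs.mean()

# Toy scanline: 2-px stems separated alternately by 7-px and 9-px gaps,
# i.e. perfectly regular blacks over slightly irregular whites.
scan = np.array(([True] * 2 + [False] * 7 + [True] * 2 + [False] * 9) * 10)
print("black unevenness:", unevenness(run_lengths(scan, True)))   # 0.0
print("white unevenness:", unevenness(run_lengths(scan, False)))  # 0.125
```

Whether lowering both numbers at once, rather than trading one off against the other, really tracks what the visual cortex wants is of course exactly the open question.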

Your comment on my feature-tolerant remark seems designed to suggest it’s not worth mentioning.

The point of my mentioning this was not to encourage satisfaction with what we’ve achieved but to contextualize over-reaching statements about dark ages and inverse relationships between making progress and encouraging awareness of ductal arithmetic. A criterion level of readability is fairly easy to achieve. Pelli’s work suggests there is a wide plateau. Legge and Bigelow show there is an ample fluent range. The importance of Luckiesh and Moss is that they were able to push beyond this. Their work shows there are peaks and valleys on the plateau.

So we are not in the dark ages but there is more to do.

About your “1/3 of text is foveated” comment: here I think you are counting letters, not words. If I remember correctly you use a low-end estimate of how many letters can sit in the fovea at one time at a decent viewing range, and a high-end estimate of how many letter spaces a typical saccade jumps over. A more useful concept for schematizing how much info can be picked up during a single fixation might be Pelli’s idea of the uncrowded span.

I’m still searching for a good term to describe what kind of gathering occurs in parafoveal preview. The closest I can find to something I think works is the notion of “accurate ensemble statistics” [Patrick Cavanagh] or “summary statistics” [Benjamin Balas], though neither feels quite right.

hrant's picture

John:
1) Some things have holes in them; and the holes are part of them.
2) The subconscious brain, being driven by efficiency, tries to eat
up text in the largest chunks possible.
Hence boumas (which can actually even span word spaces).

hhp

enne_son's picture

John, isn’t the thing to be recognized in reading the word, and the ground against which the word stands out during a fixation the line? My idea is that in reading, the response bias is indeed to the figure, which is the word. As a figure, the word has a composite structure. The letter has this too*. But in the word the composite structure is compounded. I think this is a more productive approach when thinking through the processing mechanisms occurring in the visual cortex.

*Noordzij begins The Stroke by saying that a letter is two shapes, one light, one dark. Because the white is a shape it becomes figural. Noordzij uses this to talk about the history of writing, and in his hands it becomes more than a design conceit. To treat some instances of “word-blindness” Noordzij ran across in school, he provided exercises that prompted the visual cortex to imprint on the white shapes.

Noordzij contends that educators focus on the black. My extension of this is that cognitive and perceptual scientists make this mistake too.

In this context Hrant is more Noordzijian than you.

dezcom's picture

It is like saying, when talking about water, which is the more important, the hydrogen or the oxygen. Someone then might respond, the hydrogen of course because it has 2 parts to one of oxygen. The truth is, it is the relationship of the bond that counts not the quantity. That is why there can be both thin and thick type but the ratio for each is dependent on the desired output.

hrant's picture

> About your “1/3 of text is foveated” comment. Here I think ....

Peter, we've done this all before...
But maybe this time you'll manage to make me see the flaw in my logic.

I'm counting linear length, via number of letters (a conversion that
I doubt introduces an unreasonable skew). I define the fovea as the
central 3-4 letter area where individual letters can be identified
after many seconds of conscious evaluation, with anything beyond the
fovea dropping quickly in acuity and being useless for small things
like letters*; if anything this is generous. As for the length of saccades,
they can easily go over 10, sometimes to around 15. So I have to think
1/3 is conservative, and in fact anything less than 1/1 means the parafovea
is reading boumas. And really, if it's good enough to determine the next
point of fixation, why would it be shy about how it got that determination?

* But still OK for clusters of letters; and the more
distinctive and frequent a cluster the deeper into
the parafovea it can be recognized. I mean really,
is that elegant or what?

Not that Pelli's research is pointless, but I honestly don't
need to grasp it simply to draw the basic conclusion above.

hhp

enne_son's picture

[Hrant] [M]aybe this time you'll manage to make me see the flaw in my logic.

Later, and probably in another thread.

Peter Van Lancker's picture

How about white letters on a black background, or white letters on a complicated and colored background like in movie subtitles?
How about hinting as the ultimate way of bringing rhythm into text, and kerning doing the opposite? And why does one often compare rhythm in typography to extremely simplistic pop or even dance music (I would associate this rather with "MTV typography"), whereas in contemporary classical music rhythm often takes a different role?

enne_son's picture

The flaws in Hrant’s parafoveal reading of boumas logic.

Estimates of acuity and saccade distance vary. The Wikipedia article on eye movement in language reading says that around the fixation point only 4 to 5 letters are seen with 100 percent acuity. The drop-off is quite rapid. The distance the eye moves in each saccade is between 1 and 20 characters, with the average being 7 to 9 characters. A new saccade places 2 to 3 of the 7 to 9 characters crossed in a saccade into foveal vision during the new fixation. This means that, all in all, on average 4 to 5 of 7 to 9 characters have become available to foveal vision after a single saccade. Saccades of 1 or 20 characters are highly unusual, I would think.

So I think it’s wrong to suggest that only 1/3 of text is foveated.

The conclusion Hrant appears to want to draw from this calculation is that the parafovea is or can be reading boumas at a rate of 2 out of every 3 words. The conclusion that should be drawn is that with short words 2 out of every 3 words are available to parafoveal vision with each fixation, but since the resolution capabilities of parafoveal vision don’t allow genuine visual word-form resolution, at least one of those two must be brought into foveal vision in a subsequent fixation. Typically only short, frequent connective-tissue words, like the and in and on, are skipped.

Pelli has characterized what parafoveal vision delivers as jumbled percepts. What the coarse-coding capabilities do allow seems to be what I might want to call jumble statistics. Jumble statistics would then account for what is commonly referred to as the parafoveal preview benefit. Jumble statistics, which probably can capture where salient features like areas of disturbed or undisturbed expressedness lie, for instance, can undoubtedly affect “saccade planning.”
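For what it’s worth, the disputed fraction is easy to lay out as arithmetic. The little function below just divides the spans quoted in the last few posts; the simplification (each fixation contributes its foveal span of fresh characters for every saccade length advanced) is mine, and the numbers are only the ones already on the table.

```python
def foveated_fraction(foveal_span, saccade_length):
    """Rough fraction of characters landing in the fovea, assuming each
    fixation contributes `foveal_span` fresh characters for every
    `saccade_length` characters advanced."""
    return min(foveal_span / saccade_length, 1.0)

# Hrant's figures: a 3-4 letter fovea and saccades of 10-15 letters.
print(foveated_fraction(3.5, 12.5))   # 0.28 -- roughly "1/3 or less"

# The Wikipedia figures cited above: 4-5 letters at full acuity,
# average saccades of 7-9 letters.
print(foveated_fraction(4.5, 8.0))    # 0.5625 -- well over half
```

So the two positions differ less in the arithmetic than in which saccade lengths are treated as typical, and in what is assumed about how much the parafovea contributes beyond the foveated span.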

hrant's picture

> How about white letters on a black background

That would indeed make for an interesting test.

> Later, and probably in another thread.

It's gonna have to be, since that didn't work at all... :-)

A text face designer interested in making a non-trivial increase in
reading performance isn't concerned with how people usually read;
he's concerned with how they might read under the right conditions.
So the fact that most saccades are short means virtually nothing;
the significant thing is that they can be very long without adversely
affecting comprehension. The only way to explain this that I can think
of is that boumas can be picked up in the parafovea, and probably at
a rate greater than in the fovea. And when you say "the resolution
capabilities of parafoveal vision don't allow genuine visual word-
form resolution", I would say:
- You don't know that.
- How do you explain the existence of very long saccades?
- How do you explain regressions?
Concerning the last: to me regressions happen when the guesswork
(something the brain is great at) based on the blurry parafovea is
simply wrong; the fovea doesn't make mistakes like that. And the
more experience the "firmware" acquires the better a balance it can
strike between speed and regressions; a total absence of regressions
is actually a bad thing - it means it's not trying hard enough.

As for foveal acuity, I guess the number range I'm working with is fine.

As for Pelli, all I can say is that it seems he's been measuring low-
performance reading (like Larson, and pretty much everybody else).
I define that as reading by people who are not motivated to read fast
or whose "firmware" does not yet have the experience to leverage the
parafovea; it is known for example that people's reading improves
well into middle age and beyond. And the little bouma-hunting fairies
know better than to come out for casual reading tasks.

Lastly, if you don't think the parafovea is active in actual reading
of boumas, your case for boumas is... poof! As Larson has convinced
me at least, the fovea does not need more than individual letters.

hhp

hrant's picture

> boumas can be picked up in the parafovea and
> probably at a rate greater than in the fovea.

So the question becomes: how do we make boumas more distinctive?

Maybe by mapping a language and diverging the boumas of
word pairs* that are close; for example in the "quest"/"guest"
pair, one (probably the former) could be given an "st" ligature.
But of course this has to be done consistently**, and we might
even develop common standards.

* Or actually, groups.

** Although if familiarity is picked up
quickly enough the consistency could
be limited to one font in a long book.

hhp
