The good ship Notanic

ebensorkin's picture

This thread is not just another Notan / Bouma / Readability ramble. Instead it's a place for us to post visual examples of excellent (or vile) Notan to discuss. Samples could be from a font, be just a glyph pair, or come from non-digital sources such as stone carving, calligraphy, etc. Who's first in line?

enne_son's picture

[again, relative to 'valence' ratio]: I raise the asymmetry thing because equal, or equivalent, valence, pure and simple, might not be 'what the body needs.' I think the body needs a well-formedness of the white, though that doesn't necessarily mean geometrical well-formedness. But perhaps the white needs a more positive valence in a coarser spatial frequency register than the black, while the black needs the stronger valence at a higher spatial frequency register or channel.

And this might be because our neuro-synaptic codes for words or morphemes might have the form of compressed / compact large-cell or magnocellular encodings with specifying parvocellular (small-cell) codelets or attachments.

Outside of this, dark / light (notanic) balance or equivalence might be just another term for good colour, and the dark / light of it isn't really leveraged in terms of 'what the body needs.'

enne_son's picture

"BTW, when you take into account that English for example has a redundancy of over 50% (and apparently almost 70% according to one researcher) you start realizing that the parafovea playing over 60% of the role in reading (in optimal situations) is entirely plausible." [Hrant]

I doubt this because the magnocellular encodings alone are insufficient for visual wordform resolution. The redundancy--or better: the overdetermination--is at the rap / raevf* level. This means that only selective foveal activation of the parvocellular codelets has to occur, except where a word is skipped.

* role-architectural particular / role architecturally evoked form
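
For readers new to the figure quoted above: 'redundancy of over 50%' is Shannon redundancy, one minus the ratio of actual to maximum entropy. A minimal sketch of the first-order (single-letter) version, using standard published frequency approximations; the 50%-plus figures require modeling longer-range structure than this:

    # First-order Shannon redundancy of English, from single-letter
    # frequencies alone. Longer-range structure (digrams, words,
    # grammar) is what pushes the full figure past 50%.
    import math

    freq = {  # approximate letter frequencies for English text
        'e': .127, 't': .091, 'a': .082, 'o': .075, 'i': .070, 'n': .067,
        's': .063, 'h': .061, 'r': .060, 'd': .043, 'l': .040, 'c': .028,
        'u': .028, 'm': .024, 'w': .024, 'f': .022, 'g': .020, 'y': .020,
        'p': .019, 'b': .015, 'v': .010, 'k': .008, 'j': .002, 'x': .002,
        'q': .001, 'z': .001,
    }
    h = -sum(p * math.log2(p) for p in freq.values())  # entropy, bits/letter
    h_max = math.log2(26)                              # 26 equiprobable letters
    print(f"first-order redundancy = {1 - h / h_max:.0%}")  # about 11%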

[me] Why force a choice?

[you] Not a choice, an awareness, of differences and relative merits.

If this is your intent, I can understand your rhetorical practices, while bristling at your logic, labels and methods, and worrying about their productiveness in building a common understanding.

hrant's picture

> the magnocellular encodings alone, are
> insufficient for visual wordform resolution.

I don't know what magnocellular is. What is it, and what about it makes
you believe that we can't* decide--if "only" through heuristic guesswork--
what a bouma (not a word) probably is?

* And what about the skipped words (like sometimes "and") in the
empirical evidence? What about Taylor & Taylor's claim that about
60 short and common (and I would add distinctively-shaped)
English words are often skipped?

> worrying about their productiveness

Many people (especially conservative/established ones)
don't discuss openly unless they have something to lose.
They weren't discussing things until about 7 years ago.

hhp

enne_son's picture

[magnocellular] "What is it[?]"

The visual cortex is a mosaic of large-cell and small-cell neuro-transmitters, existing in different layers. The term for the large-cell neuro-transmitters is magnocellular, and the term for the small-cell neuro-transmitters is parvocellular. It is known that the parafovea projects to the one and the fovea mainly to the other. The magnocellular stream codes information at a coarser granularity and spatial frequency than the parvocellular.
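
As a rough intuition pump for coarse- versus fine-register coding (an analogy only, not a model of cortical wiring), one can split a glyph image into a low-spatial-frequency band and a high-frequency residual. A minimal sketch, assuming a grayscale array as input; the two-stem pattern stands in for a real glyph image:

    # Split a 'glyph' image into a coarse (low spatial frequency) band
    # and a fine (high spatial frequency) residual -- a loose analogy
    # to the magnocellular vs parvocellular division described above.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def split_bands(glyph, sigma=4.0):
        """glyph: 2-D grayscale array, 0.0 = white, 1.0 = black."""
        coarse = gaussian_filter(glyph, sigma)  # blurred envelope of the shape
        fine = glyph - coarse                   # residual: edges, serifs, detail
        return coarse, fine

    img = np.zeros((64, 64))
    img[10:54, 20:24] = 1.0  # a vertical stem
    img[10:54, 40:44] = 1.0  # a second stem
    coarse, fine = split_bands(img)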

"what about it makes you believe that we can’t decide--if “only” through heuristic guesswork--what a bouma probably is?"

1) I don't know what 'heuristic guess-work' is in perceptual processing terms. Is it 'probability summation' on the basis of selective activation?
2) The information is too coarse to be of use for visual wordform resolution without supplementation, except when an n+1 or n+2* item in the visual stimulus stream is highly probable in grammatico-syntactic sense-following terms. It is too coarse because anything beyond ensemble statistics is unavailable, due to greater lateral interference or crowding in rhythmically spaced bounded maps with distance from fixation during immersion, rising sharply at the foveal / parafoveal boundary. Lateral interference or crowding is very well documented, but the precise nature of the ensemble statistics isn't--hence my surmise about the coding of boumaform material being bi-valent: involving both magnocellular (coarse) and yoked fine parvocellular elements. Taylor's statistics also show that perceptual access to the fine elements is necessary for sufficient wordform differentiation to make reading possible.

* n = the word or word-part being fixated; n+1 or n+2 are the next items in the stimulus stream.
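
'Probability summation', for what it's worth, has a standard psychophysical reading: several weak, independent cues combine so that detection succeeds if any one of them does. A toy sketch of that arithmetic, with invented cue probabilities:

    # Toy 'probability summation': independent cues for a word's
    # identity; identification succeeds if at least one cue fires.
    def probability_summation(cue_probs):
        p_all_fail = 1.0
        for p in cue_probs:
            p_all_fail *= 1.0 - p  # chance this cue also fails
        return 1.0 - p_all_fail    # chance at least one succeeds

    # Invented numbers: a coarse word-shape cue plus two letter-feature cues.
    print(probability_summation([0.4, 0.3, 0.25]))  # approx. 0.685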

enne_son's picture

[regarding heuristic guesswork]: It might be useful to add that a simple 'template matching' construction of how this might be schematized is all but totally abandoned, even by the most conservative, and certainly by the most established vision and cognitive neuro-scientists.

enne_son's picture

See my italicized elaborations in the 11:03 (18 November, 2006) post two posts up.

ebensorkin's picture

Peter, your points have to do with Notan in as much as Notan is a matter of resolution, and so foveal / parafoveal dependent - yes?

I think we should look at vwy & VWY combinations next. What do you all say?

William Berkson's picture

Eben, let me try to explain what I am driving at without using the terms 'color' or 'notan', which may be getting in the way. Then I will relate what I am saying to these.

I think we all agree that some kind of dynamic balance and harmony is needed in a good text face, and it involves both the blacks and whites.

This harmony is not only a question of how black and white relate, but also how black relates to black, and white to white.

For example, if you thicken a hairline in a character, it has a dramatic effect on the look and balance of the character, and if you do it systematically, a big effect on the whole of a typeface. Have you changed the whites? Yes, but barely--it could be less than 1%. On the other hand, the black might be increased by 100% or more. And the eye is more sensitive to this 100% increase in a black than to the same absolute amount of change, which might be, say, a 1% change in the width of the white of a counter.

Hence in judging how good the balance is in a type, the designer would do well to be attentive not only to how the black and white relate to each other and affect one another, but also the interaction of black and black.

One of the features of a good text face is that the features of the letters that 'read' come across to the readers, but nothing else obtrudes or interferes with that information. Thus a black feature such as a stem or a terminal should be clearly noticeable, but not force itself to attention more than other parts of the letter. In order to do this, the different black parts of the letter need to be in balance with one another. This is partly a matter of how dark each is, and how distant they are from one another--both factors affect whether the overall effect is balanced or not.

Furthermore, in general one letter should not pop out as much more dominant than another. They need to be in balance with one another so that what is readily recognized is words, rather than individual letters.

In a while I will post further applying this to the concepts of ‘even color’ and ‘notan’.

ebensorkin's picture

Have you changed the whites? Yes, but barely--it could be less than 1%. On the other hand, the black might be increased by 100% or more.
When I first read this I was shocked, but then I began to realize that you don't mean in the 'glyph space', where the ratio would be 1:1, but on the page, the exact ratio of which is impossible to know. But I do get your point, I think. Also, while I might take issue with the odd semantic issue here or there in what you're saying, in general I say 'fair enough'. Go ahead & build your argument.

black relates to black, and white to white
By this you mean things like stem to stem & counter to counter - yes?

Peter, about the valence ratio* and the asymmetry: I have re-read your phrase several times now and I don't feel I have even started to understand it. Can you elaborate for me? The closest guess I can hazard is that you are wondering whether working for good color or 'balanced' notan is what we need on a functional basis for reading, or whether it's just a kind of related but side issue, or maybe just an artsy pleasure trip.

* a relative capacity to unite or react or interact (ratio) - apparently

I was saying something related earlier when I was talking to William about the 'disappearing' glyph. I had said that some glyph systems were easier to harmonize than others, and that the value of this harmony was lower in the systems that were too easy. Yes, there might be Notanic balance there, but it probably wasn't getting us the cue value needed.

What I had meant to go on to say then was that complex notanic balances made necessary by what might look initially like kooky glyph shapes probably add to the cue value offered by those kooky shapes. In other words, the value of getting a glyph to become part of the family and stop drawing undue attention to itself may be closely related or even parallel to the difficulty in getting it to happen. There is the O after all - it's quite a bit less kooky than an L, which is mostly difficult, or a double-storey g, which is to my mind hyper-wacky. I don't know - maybe suggesting that the power of the g's cue is stronger than the O's is silly. What do you think?

About 'forcing choice' - I agree with Hrant. I don't want to tell anybody what terms to use. I just want to make the case for Notan. If you are a student of type you shouldn't really avoid learning about the term color. But once you have, you can choose your own tools. One - the other - or both. Everybody can (& will) do what they want. I am hoping that by dicing the Notanic issues fine (after all, Notan is a big idea and should not be swallowed whole!) and showing how I see it at work, I will learn more about it and maybe make the idea & its utility more widely accessible & then maybe accepted.

About the parafovea: you are saying that Hrant's emphasis on it is not borne out by the studies you have read. Is that it?

AzizMostafa's picture

Thought Provoking?!

AzizMostafa's picture

Explore:
1. Black lines on a white background, or white lines on a black background?
2. How are the lines spaced where?
___ Linearly, exponentially, randomly, oddly or evenly?
___ Do they show divergence or convergence?
3. What direction do the lines run where?
___ Horizontally, vertically, diagonally, clockwise or CCW?
4. Differences in the lines' distribution on the various parts of the body?
5. Any More?

ebensorkin's picture

Aziz, where are you going with this? The categories you mention amount to little more than a list of observable factors. Don't you have an opinion (other than the back of a zebra...) or a specific observation to share?

AzizMostafa's picture

Eben Sorkin, in black and white:
Frustrated and feeling like a deaf man at a feast carnival, I posted the back of the zebra in the hope of giving the visitors of this seemingly never-ending thread and myself a break...
Though I read all the previous postings (some twice), I got nothing save a headache!
Am I getting old? Are Typophiles posting puzzles?
Your advice please: should I delete or maintain it?

William Berkson's picture

Here is an example of the changes in a hairline I was talking about. And yes, I was talking about changes within the glyph, not the whole page.

Here the hairline at the top and bottom of the right character is 100% thicker than that of the left. But the area of white space within the right counter is only 13% less than the left.

Hence the need to compare the blacks, and, e.g., attend to how hairlines and other strokes and terminals balance within characters and from character to character.

According to the book, "The Yin/Yang of Painting", in the traditional Chinese philosophy of painting, nóng dàn (notan in Japanese) is one of four balances characterizing the overall balance of yin-yang within a painting: "value (light/dark) and texture (thick/thin) to color (warm/cold) and brushwork (sharp/blurred)" which together form "a true balance of opposites."

Note that thick-thin is one of these that is traditionally regarded as a somewhat independent variable, as I have noted earlier and illustrated here.

Evenness of color is an additional concept special to reading scripts, because it helps the key cues from the letters, and nothing else, to be easily detected, without 'noise' from other variations of dark and light.

Hrant's caricature of evenness of color as meaning reducing everything to grey is a misunderstanding of the concept. You will note that Hrant boasted about the great 'notan' in his Patria. Then he acknowledged, though, that my point about the serif on the L being too small was correct. In fact, as I said my analysis was based on 'color', not notan. The notan in Patria is generally good, but there are all kinds of color problems, IMHO.

Properly understood, even color sometimes promotes *greater* diversity of form, as in this case, where the L often should take the biggest serif in the font for color reasons. As you referenced, I noted Caslon's variations in letter forms in my presentation at TypeCon. What I pointed out was that some of these variations actually promote even color better than more uniformity would.

The reason why the 'grey' argument misses the point is that we are dealing with strokes, not pixels sprayed over an area. The question becomes how the strokes and terminals and their weights in different parts and orientations will balance best, within the constraint of an apparently uniform average black/white density from letter to letter. A (usually asymmetrical) balance of blacks within the letter is also desirable, for the same reason of keeping out 'noise' that distracts from readability, and promoting easy identification of those salient features of the letter that enable words containing it to be easily read.

enne_son's picture

Eben, I have warmed up to Hrant's introduction of the term 'notan' into type talk. What has helped is to think of this in terms of 'leveraging' the light / dark, or leveraging the black and white of the item under consideration.

What I've resisted is a too simple schematization of what 'good' or 'ideal' notan means in the typographic domain. You seem to be attuned to this as well. The constructs 'equivalence' and 'balance' when applied to the 'black and white of it' in type are 'floating signifiers'. What specifically do they call for? If it's something spatial, do lights and ultrabolds have non-ideal notan? If 'equivalence' is 'equi-valence' or equivalent valence, do we take our cue from zebras, camouflage, Shirokuro, Neville Brody's typeface State? Must the valence of the white be as strong as the valence of the black in 'psycho-active' (visual cortex centered neurophysical activational) terms? Or is the situation more like what Bill suggests, where the whites must be well formed and in some kind of synchronicity, and the blacks as well?

We all have some good intuitions about this on a practical level, but at a certain point these intuitions diverge. My effort is to try to give the best intuitions perceptual processing 'teeth'. This can lead into areas as complicated as particle physics, and sometimes I let out ideas that are highly provisional (but that don't come out of the blue!). And it can lead to questions about how parafoveal processing is schematized, what its role in immersion is, and what the prospects of an enhancement of its role are. Because if the parafoveal component is of pre-eminent concern, an assessment of what it needs (perhaps in abstraction from or opposition to what foveal vision needs or wants) will skew our answer. It can also lead to questions about whether the information provided by the whites is dealt with in a different spatial frequency channel than the information coming from the blacks.

I don't really want to be the lone hunter at this Hrant+1 level, but I do want to keep it in front of us, so that we don't foreclose too soon on the answer to the question of what in the typographic domain leveraging notan means.

But, let me repeat, the lack of a final answer doesn't preclude intelligent action. Acknowledgement of the relevance of the reference axes indexed by the term goes a long way.

ebensorkin's picture

Aziz: You said: Frustrated and feeling like a deaf man at a feast carnival - does this mean that you are interested & feel like you are being left out? Or something else? I can assure you that I have to read & re-read much of this myself. It's only by asking a lot of questions & people being kind enough to answer them that I have my limited understanding. Of course there are other threads here & a whole internet. But you are here. You must hope for something. What is it? I don't know that anybody here can help. But we can try. And - maintain your posts!

William: You said: And yes, I was talking about changes within the glyph, not the whole page. Your math has me puzzled then. Just to be sure, do you mean within the UPM? Or the area of a line of text? Maybe you are talking about the cumulative effect of the small (1%) change. Maybe there are 100 letter 'o's on a page & the 1% change yields a 100-fold change. Any closer? Perhaps the maths don't matter. I had an idea that they were important to getting your point though. In any event - your post deserves a more complete response, which I will have to compose later on. Peter, likewise. And a nice post too! I think it's very pleasant the way that (despite various frictions) we have managed to cross-pollinate many of our ideas. I feel richer for it. Thanks!

William Berkson's picture

Eben, the top points on the horizontal arches of the left 'o' are 20 units apart and those on the right 'o' are 40 units apart. That's a 100% increase in the width of those black strokes. Every other point in the outlines of the two o's is the same.

If you compare the area of the white within the counters of the two o's, then the white area within the 'o' on the right is 13% smaller than the white area within the 'o' on the left.

Clear?

ps I probably could have got the 100% increase and 1% decrease I mentioned above by starting with a thinner hairline, but this is more typical.
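
The asymmetry is easy to check with back-of-envelope numbers. A minimal sketch, idealizing the counter as an ellipse and using invented dimensions, so the output is in the same ballpark as, not identical to, the 13% measured from the actual outlines:

    # Doubling a thin hairline is a 100% change in that black stroke,
    # but only a small percentage change in the white counter area.
    # Dimensions are invented; a real counter is not an ellipse.
    import math

    def counter_area(width, height):
        return math.pi * (width / 2) * (height / 2)

    counter_w = 360.0  # counter width, font units (invented)
    counter_h = 560.0  # counter height before thickening (invented)
    hairline = 20.0    # original top/bottom hairline thickness

    before = counter_area(counter_w, counter_h)
    # Doubling the hairline at top and bottom eats 20 units of counter each.
    after = counter_area(counter_w, counter_h - 2 * hairline)

    print("black stroke change: +100%")
    print(f"white counter change: -{(1 - after / before) * 100:.0f}%")  # -7%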

ebensorkin's picture

That’s a 100% increase in the width of those black strokes.

Oh! I see what you're after. This makes perfect sense to me now. Of course 100% is just one relevant number or measure of what's happening. There is also the increase in black on the page overall, assuming more than two glyphs & more than two 'o's. And the increase in black for the glyph in the em overall is not 100%, even though it happens twice! Still, I think you could be justified in saying that, because of where the change is occurring, you are getting a disproportionately large effect for the size of the shift. Are you saying this? If so, I would agree about that. The import of this change is bigger than you would get from a numerically equivalent change in the thick verticals. Where I think you are going with this is: "here is an example where the black changes a lot & the white does not." Yes? And that the white & black do not shift 'evenly', hence a crack in the concept of Notan. Yes? Just guessing.

Nick Cooke's picture

Most of this thread is over my head (and I don't have time to read it all), and it's very lacking in examples, as asked for.

Here's a thing I did about 10 years ago - part of my first (and only) family with FontShop. It is actually based upon a Japanese typeface which was made from paper cut-outs in a positive form. I wanted to create type that would appear as negative spaces from random black shapes, just as an experiment to see how it would work.

I've never seen it used anywhere!

Nick Cooke

hrant's picture

William, I'm really not trying to strike you down indiscriminately, but you keep forging in the wrong direction, and not trying to listen, maybe even trying not to listen. Comparing percentages?! Jeezus. I would spend the time detailing the problems in your latest posts, but only if it benefits people besides you (since you yourself are not listening to me at all) and I'm not sure that would be the case. Guys, this is a group effort, and if I'm going to be the only one trying to curtail irrelevance (not to mention outright mischaracterization - "boasting about Patria's notan" indeed - complain to Peter, mensch boy) I'm not going to bother.

--

> the parafovea projects to the [magnocellular stream]
> and the fovea mainly to the [parvocellular].

I did know that rods and cones (to use the more accessible "street" terms... hint, hint :-) have a different distribution in the fovea versus the parafovea, one being able to pick up fine detail and color, the other being better at picking up movement (look at a CRT monitor askance, and you are much more likely to see the flicker). But unless you can show that they map to different areas of the brain there's not much to talk about (and you'd further need to show that the area of the brain that the cones map to has some sort of bouma-decipherment impediment). If there is anything to talk about here, I might talk about how that fact might be a notch against your theory of boumas being picked up in the fovea (as well).

> I don’t know what ‘heuristic guess-work’ is in perceptual
> processing terms. Is it ‘probability summation’ on the
> basis of selective activation?

I don't want to get too Rumsfeld, but it sort of refers to knowing you don't know. It's exactly not anything to do with summing probabilities, or anything deterministic. It's what happens when the brain needs to balance the risk of failure against speed, which does not [need to] happen in the fovea.

> The information is too coarse to be of use for visual wordform resolution without supplementation

1) But you do have the "supplementation" in the form of grammatical and letterwise redundancy. Although these are rarely enough algorithmically, they are enough heuristically.
2) If it wasn't enough, we would never completely saccade over words. We do, and you shouldn't choose to ignore that.

> What I’ve resisted is a too simple schematization

It's a matter of educational progression. You don't teach high-schoolers Einstein's laws, you teach them Newton's laws, which are known to be "wrong", but "right enough". This is especially true if you don't even know Einstein's laws yourself! :-/ It's not about Truth, it's about progress.

> very lacking in examples as asked for.

Indeed.

hhp

ebensorkin's picture

Nick, all typefaces are (have) Notan. No entrance fee, no qualifying, no judging & no prize. Notan is not a special rarefied state or quality. It's just a way of looking at one aspect of type. Go back & read a bit more. I think it may get a little clearer. Thanks for showing us your type!

hrant's picture

It's hard to read this crap. There's too much noise, too much terminological divergence, and too much guesswork. You really have to have a vested interest (psychologically) to bother filtering through it all.

hhp

ebensorkin's picture

Time for some more examples I think. Hrant, would you stick around for those? I won't get them up today but maybe Tuesday.

hrant's picture

Stick around? Yes, I don't plan on causing a massive freeway pile up. This week.

hhp

William Berkson's picture

>yourself are not listening to me at all

Wrong. I just don't agree, and gave good reasons why. I did fuse Peter's praise of Patria with your response. Sorry about that. But if insults are your only response to carefully considered arguments, the fruits of hard work on type, then I can't be bothered to discuss type with you further, as I have probably unwisely tried in this thread.

>And that the white & black do not shift ‘evenly’ hence a crack in the concept of Notan.

I think it is a bit off to call it a 'crack in the concept of notan', as if notan in type design were a very clear and established concept and doctrine--which it is not. I wouldn't say 'crack' because that would be conceding that the concept of 'notan' is already a success in explaining a great many things. But I think it can only usefully explain some aspects of type design. I would say rather that you are trying to stretch the concept of notan to phenomena it can't really explain--including the greater impact of variation in hairlines vs. in whites.

What exactly about type design do you think 'notan' is supposed to explain? All good design? Part of it? What part or aspect?

I like the notion of 'notan' as describing what Chris called 'locking together' the whites and blacks. Or Cyrus Highsmith's description of 'becoming part of the paper instead of sitting on it.' It is indeed a very important part of good type design.

But as a general theory of type design what does it tell us? You seem to be advocating, if I don't misread you, that balancing black and white is pretty much the whole story.

My arguments were to the point that evenness of color is another important consideration. The point I made about small changes in hairlines having a big impact on color is only one point. There are many more points concerning color that I don't see how to reasonably explain by 'notan'.

First of all, there is the issue of consistency of blacks. The eye will pick up much more readily a variation in the thickness of blacks from character to character than it will a variation in, e.g., the width of counters between the b and the d.

Second, there are all kinds of things done with terminals to give balance to a seriffed font. These are quite understandable in terms of balancing of the black of a character, but I don't see how you can 'get there from here' using 'notan'.

As an example, look at a typical E--here from Caslon 540--with serifs cut off and then with serifs:

If you use considerations of 'color' then the function of the heavy vertical serifs (much heavier than the horizontal) is easy to explain: they balance the character within its side bearings, so the black is distributed more evenly left and right, up and down. Without the serifs it is obviously unbalanced, as you can see.

Now in the abstract, every time you change the black you change the white, so that theoretically you could give equivalent explanations with only the black, only the white, or both. But in practice the eye feels the weight of the blacks more powerfully than the whites, so looking to that balance--even a physical balance as if they had real weight--is natural and I think likely reflects the way the brain works. And no, I am not saying that this balance excludes notan, it is just that notan cannot do all the work.
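
That 'as if they had real weight' reading can be made literal in a toy way: treat each black pixel as unit mass and watch the centroid. A minimal sketch with an invented blocky E, nothing measured from Caslon:

    # Treat black pixels as unit masses and compare the centroid of a
    # crude E with and without right-side serifs. Shapes are invented.
    import numpy as np

    def centroid(img):
        ys, xs = np.nonzero(img)
        return xs.mean(), ys.mean()

    E = np.zeros((50, 30))
    E[0:3, 0:26] = 1     # top arm
    E[23:26, 0:20] = 1   # middle arm
    E[47:50, 0:26] = 1   # bottom arm
    E[:, 0:4] = 1        # heavy left stem

    E_serifed = E.copy()
    E_serifed[0:8, 22:26] = 1    # vertical serif, top right
    E_serifed[42:50, 22:26] = 1  # vertical serif, bottom right

    print("sans serifs:", centroid(E))          # x-centroid hugs the stem
    print("with serifs:", centroid(E_serifed))  # x-centroid moves toward center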

So how do you explain why the serifs on the E help by comparing black and white, with no other concept? And if other concepts are needed what is wrong with 'even color' as one of them?

I don't have any problem with notan as an important concept in type design. My problem is with 'notan imperialism', which wants to replace other traditional and valuable ideas, such as even color.

As I said, within the Chinese philosophy of art nóng dàn is only one of at least four balances; it seems to me that trying to make 'notan' do too much work in explaining good type design either broadens the concept to where nobody knows what it means, or pushes out other useful concepts.

hrant's picture

> carefully considered arguments

You do yourself too much credit. For an argument to be carefully
considered it must consider the other arguments carefully.

> as if notan in type design were a very clear and established concept and doctrine

Hmmm, you wanna start with this one? No you don't.
I know because we tried, and it was a non-starter.

> in practice the eye feels the weight of the blacks more powerfully than the whites

Or maybe we should start with this pure illogic? Nope, not that either.

Oh, I know, let's start by not talking about notan at all! Because that's the only thing that would really satisfy you. Because your world is fixed, and there's no room left. "Traditional Values", hear, hear!

hhp

enne_son's picture

"I might talk about how that fact might be a notch against your theory of boumas being picked up in the fovea (as well)."

It might be, but what you attribute to me is not an adequate characterization of what I claim. My theory is that boumas (bounded maps) are indexed in parafoveal vision and some ensemble statistics gathered about them there, with foveal vision detecting selective criterial specifying information essential to visual wordform resolution. In most cases sense-context cannot narrow down the field of possible next boumas sufficiently for just those ensemble statistics to push the probability summation to good enough certainty about identity to support sense following. Exceptions are the 60 most skipped words, which are mostly syntactically highly probable from context. 'Picking up' of boumas in immersive reading is a two-step operation--unless by picking up you mean what I've called indexing. Probability summation is by definition not an algorithmic thing.

And, by the way, rods and cones are not 'street' terms for large-cell and small-cell neuro-transmitters, that is, for 'magnocellular' and 'parvocellular' transmitters. The retina is a mosaic of rods and cones, with different distributions and densities at different retinal locations. The visual cortex is a mosaic of large-cell and small-cell neuro-transmitters, existing in different layers of the brain, with different connectivities at different locations. Clusters of rods and cones project to clusters of neuro-transmitters once they have passed through a ganglion-cell layer, where they are summed and averaged. The mechanics and ratios of how many rods and cones project to how many and which neuro-transmitters is a rapidly emerging area of research. How supportable your schematisms are is in large part dependent on what such research uncovers.

This is tangential to the notan issue but not irrelevant to the perspectival issues predisposing you to a certain 'take' on notan.

hrant's picture

Peter, I was simplifying your theory [as well], sorry.

> Exceptions are the 60 most skipped words

It makes no sense for there to be a limit of 60. And it makes no sense for there not to be a mechanism (like the one I describe) that reads things in the parafovea. Even if the limit of words is low, there has to be something else going on.

> a rapidly emerging area of research.

Let's hope it's rapid enough so it helps before we're too old. :-)
Could MS maybe speed it up?

hhp

enne_son's picture

"And it makes no sense for there not to be a mechanism (like the one I describe) that reads things in the parafovea. Even if the limit of words is low, there has to be something else going on."

Maybe not to you.

The mechanism you sum up as 'heuristic guess-work' needs to be formalized or operationalized or elaborated in perceptual processing terms that take into account the mechanisms known or thought to exist inside the visual cortex. Unless you think the heuristic guesswork happens beyond the visual cortex, in some higher area of the brain; then you need to specify what perceptual features the cerebral cortex works from, how they are assembled in the visual cortex, and what guessing looks like in neuro-mechanical terms in the cerebral cortex, and where it occurs.

The limitations you like to believe are not hardwired are imposed by the constraints placed upon parafoveal detection by what Denis Pelli calls 'crowding' and others call 'contour interactions' or lateral 'interference'. Study for yourself the literature on lateral interference, and study the literature on neuro-mechanical stimulus-information pooling and relative receptive-field densities as information moves through the visual cortex, and I believe the functional anatomical constraints that make the limitations make sense will become apparent to you.
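
For reference, the crowding literature Peter points to has a famous rule of thumb: Bouma's (1970) finding that flankers interfere out to roughly half the target's eccentricity. A toy calculation; the 0.5 factor is the commonly cited approximation and varies across studies:

    # Bouma's rule of thumb for crowding: letters interfere when spaced
    # closer than about half their eccentricity (distance from fixation).
    def critical_spacing_deg(eccentricity_deg, bouma_factor=0.5):
        return bouma_factor * eccentricity_deg

    for ecc in (1, 2, 4, 8):  # degrees of visual angle from fixation
        print(f"{ecc} deg out: spacing must exceed "
              f"~{critical_spacing_deg(ecc):.1f} deg")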

I've claimed--in contrast to most of vision science--that the crowding in the parafovea that is antagonistic to letter recognition in the parafoveal field serves a critical function in reading, because it biases the response of the visual cortex towards the whole, and provides some critical pre-recognitional information about it: just enough to make the selective activation of criterial letter-featural information provided by foveal vision at and around the point of fixation massively effective. The criterial featural information is set up by the black and white in concert, as is the pre-recognitional information that the parafoveal viewing extracts.
