David Kindersley on Spacing

William Berkson's picture

I recently got hold of the monograph 'Optical letter spacing for new printing systems' (1976) by David Kindersley. I was looking for enlightenment on his ideas, but unfortunately I find the presentation rather difficult to understand.

His basic idea seems to be that the amount of space a letter should occupy--advance width times vertical extension--should equal the total white space within the letter, as bounded by its black extremes left, right, top, and bottom, plus a fixed amount.

One determines the left-right placing of the letter within this space the following way:
1. Slide a vertical bar left and right over the letter; the position at which the total white space to the left equals the total white space to the right is the correct 'optical center'.
2. Place the letter so that its optical center sits equidistant from the two extremes of the advance width.
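
In pseudo-code terms, as far as I can reconstruct it, the procedure is only a few lines. A minimal sketch, assuming the glyph comes as a boolean bitmap already cropped to its black extremes; the names are mine, not Kindersley's:

import numpy as np

def optical_center_x(ink):
    # `ink` is a 2-D boolean array (True = black), already cropped
    # to the black extremes on all four sides
    white_per_col = (~ink).sum(axis=0)   # white area in each column
    cum = np.cumsum(white_per_col)
    # position of the sliding bar where the left and right
    # white areas are equal (step 1)
    return int(np.searchsorted(cum, cum[-1] / 2.0))

def left_sidebearing(ink, advance):
    # step 2: put the optical center midway across the advance width
    return advance / 2.0 - optical_center_x(ink)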

I'm not even sure whether I've got these basics right. Does anyone know more about Kindersley's ideas, and what happened to them? Did anyone take them up? Were they rejected? How do they relate to Tracy's principles in 'Letters of Credit'?

k.l.'s picture

Given that the idea of ‘optical centre’ has something to it, I think Kindersley’s most interesting stuff is on ‘designing to the centre’.
... plus the examples of flatter- and rounder-sided o's, and the L forms with longer/shorter horizontal bar and with/without serif.

My first thought was -- you can view this from an optical-center point of view, but you can also view it from a compactness-of-letterforms point of view! (The older thread which touched on the spacing of italics.) The first looks at letters and their center; the second looks at inter-letter spacing. This might also shed some light on David Kindersley's vs URW/Adobe's spacing method.
From the second point of view, it wouldn't even be necessary that the L "serif" be that fat. It would be more important that the serif not be too short, so that it can "close" the letterform. With that in mind, the E C L Z of serifed alphabets are easier to space than their sanserif equivalents, which are more "open" (say, Baskerville C vs Gill Sans C).

How important is the optical-center aspect really, in spacing practice? I cannot help regarding it as a mere side effect. It says nothing about the amount of space actually needed -- and I was told that Mr Kindersley indeed did not care whether letters were spaced tighter or rather open. However, with his method he would run into serious trouble when spacing tightly, i.e. what for today's eyes is "normal" spacing.  :)

What I find most helpful, though, is the "how to" example he gave in the Penrose Annual article: OI / OII / OIIIO / OI(anyletter)IO. I added IO(anyletter)OI, and this is the method by which I space uppercase letters for an uppercase context.
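
For anyone who wants to try it, the control strings are trivial to generate -- a sketch; the function name is mine. The fixed steps establish the O-I and I-I distances first, then the letter is judged in both contexts:

def control_strings(letter):
    # O-I and I-I distances first, then the letter between
    # O..I and I..O
    return ["OI", "OII", "OIIIO",
            "OI" + letter + "IO",
            "IO" + letter + "OI"]

print(control_strings("L"))   # ['OI', 'OII', 'OIIIO', 'OILIO', 'IOLOI']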

Karsten

William Berkson's picture

Karsten, I don't know exactly what the basis of the URW/Adobe method is. If it is based on 'electrostatic repulsion', as one person in Raph's linked thread mentioned, then it mainly pushes the contact points apart as they near each other. I can see that a heavier serif on the L shouldn't make much of a difference in this method. But would even the sans L be spaced that differently from a serif L in the URW method? I think Kindersley's point about the L was that the whole cap alphabet could then be tracked closer, as L is a limiting character, and still have balance. As for its placement between side bearings, it means you can shift it relatively further left, compared to a sans L of the same dimensions. Would the URW method do that? I don't think Tracy's method would recommend it. Would it be good?

The OI OII etc. stuff is also in 'Optical Spacing', but it seems less connected with the theory, which is why I didn't go into it. It does seem quite useful, though.

Is the optical center stuff a mere side effect? Again, since according to Tracy's method the side bearings are related to the inner space of the n or H, there is some degree of equivalence between the two approaches. I think where Kindersley may have something helpful is on changing the weights of strokes and serifs, which I don't see Tracy giving direction on.

I certainly don't want to be dogmatic about who has it right. I was just excited that here is a fellow with a different approach, and it seems to give some different insights. But I would be happy to hear where he is wrong.

I don't see why his method would get into trouble with tight spacing, as he gives some examples of it. He does advocate having the L overlap the A in LA, rather than pushing it out, if the rest of the characters are tracked tightly enough to demand it. Is this what you mean by trouble?

As far as practicality goes, I think Tracy is the way to go for a first or second pass. But when it comes to making adjustments, especially modifying a character's design for the sake of spacing, it looks to me like Kindersley's ideas could be helpful.

John Hudson's picture

A note regarding the spacing of Helvetica: the digital Helvetica inherits its advance widths from the unitised spacing of Linotype's phototype version. This accounts for the irregularities in the spacing.

k.l.'s picture

Yes, I had the LA overlap in mind when calling this combination problematic. L is indeed a "special case", but it may be similar for V W Y if Mr Kindersley's method is combined with tighter spacing than in his examples. Of course one can overlap letters -- if one likes that. Of course one can add heavy serifs to sanserif L's -- if one likes that.  ;-)

Is the optical center stuff a mere side effect? Again, since according to Tracy’s method the side bearings are related to the inner space of the n or H, there is some degree of equivalence between the two approaches.

There is no contradiction. According to a more recent (1998) article by Peter Karow, the "atomic model" (repulsion/attraction) element was completely dropped from the URW spacing tool, which thus only uses the concept of inter-letter space as a "viscose liquid" (Karow's term). And -- here we are with Kindersley, Tschichold, Tracy again -- the space this viscose liquid covers should approximate the counter of the O for uppercase letters, or of the n for lowercase letters, and vary slightly with type size.
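
Not Karow's actual code, of course, but the "viscose" idea is easy to caricature in a few lines: clip how far the liquid may run into an open counter, then solve for the gap that makes the remaining area match a target such as the counter of the O. The clipping rule and all names are my guesses, and both bitmaps are assumed to share the same height:

import numpy as np

def side_gaps(ink):
    # per-row white run-in from the left and right bounding-box
    # edges (full width where a row has no ink at all)
    w = ink.shape[1]
    has_ink = ink.any(axis=1)
    lgap = np.where(has_ink, ink.argmax(axis=1), w)
    rgap = np.where(has_ink, ink[:, ::-1].argmax(axis=1), w)
    return lgap, rgap

def gap_for_target_area(left_ink, right_ink, target_area, max_pen):
    # clip how deep the 'liquid' penetrates (the crude stand-in
    # for viscosity), then solve rows * gap + run-ins = target_area
    _, rgap = side_gaps(left_ink)    # right side of the left glyph
    lgap, _ = side_gaps(right_ink)   # left side of the right glyph
    pen = np.minimum(rgap, max_pen) + np.minimum(lgap, max_pen)
    # a negative result would mean the bounding boxes overlap
    return (target_area - pen.sum()) / float(len(pen))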
I really like the Kindersley model because it allows a fresh look at spacing, but it has its limits, which you pointed out in your posts. To me this is not a question of right or wrong, or Kindersley vs URW; what is of interest to me is which method would work. Earlier I had hoped that in the future it would be possible not to space letters at the font level at all, but just to define some "core spaces" -- between uppercase letters, between lowercase letters, and maybe some other contexts, plus a factor for type size -- and the type rasterizer would do the actual spacing job based on something like the URW method. Meanwhile I doubt that this is possible, or wanted.

As to Tracy, it is some time since I read the text. This was when I also studied Kindersley's and URW's methods. In direct comparison with these, I couldn't help finding Tracy's a bit trivial, helpful only as a very basic introduction. Maybe it's time to re-read it.

Karsten

William Berkson's picture

>viscose liquid

Is this the idea that you 'pour' sand or a liquid between the letters, and the space filled up should be the same--one value for caps, one for lc? Presumably they make a decision about how far the liquid is assumed to penetrate open counters like the C.

It was some dissatisfaction with this that led me to look at Kindersley. I think his idea of having the character look balanced between the other letters is not the same as the 'area' approach. In particular, my feeling is that where the counter sits between the letters is important (this is me, not Kindersley), and this may 'fight' with the standard of keeping the area between letters equal.

So I am looking for some rationale for how to compromise between these competing ideals--both of which are probably desirable. What Kindersley has made me aware of is that the way the black is distributed has an effect as well.

At present, I don't have any guideline or principle to come out of this, but when I eyeball this stuff, I'm now looking at the area between the letters, the position of the counter, and the distribution of the black. And I consider how modifying the shape of the letter might make the different ideals not fight with each other.

Incidentally, I do suspect that in the cold-metal punchcutting days, designers were concerned with the balance of the individual letter between its left and right extremes, as Kindersley has it. This is because setting side bearings was a separate production process, involving 'justifying' the matrices.

k.l.'s picture

Is this the idea that you 'pour' sand or a liquid between the letters, and the space filled up should be the same--one value for caps, one for lc? Presumably they make a decision about how far the liquid is assumed to penetrate open counters like the C.

In principle, yes. As to the URW approach, the adjective "viscose" is essential -- the space has some resistance. This addresses the C. And to my knowledge, a couple of exceptions are required. The "different spaces for LC and UC", however, is my addition; I don't know whether the URW engine makes that distinction.
What I like about this idea is that it can be -- that it has been -- translated into a program. I cannot imagine how to do this with the Kindersley approach; some variables seem to be missing.

[Added:] This touches, ironically, on the essence of Kindersley's idea, the distribution of black (and white). Under the microscope, there needs to be rhythm/pattern so that type becomes readable, to make individual letterforms distinct, to form recognizable word shapes -- that is, unevenness in the distribution of black and white. From a distance, text should look more even (not completely even of course), which might be translated into a value of grey. This might in turn influence the distances between particular letters, or the placement of one letter between its two neighbors. And everything starts to blur. Is the idea of placing a letter between its neighbors related to just two neighboring letters? This looks like too narrow a context.

dezcom's picture

"From the distance, text should look more even (not completely even of course), which might be translated into a value of grey. "

To me, the clue is whether it is disturbing and makes you take note of it. If you are reading and a word shape is disturbed by a hole caused by a spacing/kerning issue, then it is inhibitive and bad. The same holds true for a black spot where glyphs appear darker than the rest of the word. The solution may lie in refining either the space or the outline, or both. There are always devilish problems like double "g" situations and double cap "T" situations.

ChrisL

William Berkson's picture

>Is the idea of placing a letter between its neighbors related to just two neighboring letters? This looks like too narrow a context.

As I mentioned above, Kindersley actually qualifies this by the way he judged--or calculated--the advance width. A given advance width should have a letter that gives the same overall density (qualified by the weighting in his 'moment' stuff). And it seems that the wider advance widths would have similar density, though I'm not sure this is actually the case given his math. This even color applies to the whole alphabet, not just letters next to each other.

By the way, I am questioning not the importance of the optical center, but whether it needs to be in the center of the letter. If you look at the typical weighted 2, its optical center will be to the right of the center of the letter, meaning that the 2 should be shifted left between its side bearings. But this is exactly what should happen because of the opening to the left, according to the Tracy and URW approaches. So the idea would still be striking an asymmetrical balance, but within the side bearings, rather than necessarily within the letter itself.

This still has implications for design, but they would be different from what Kindersley says.
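
To make the 2 example concrete: a minimal sketch, using the horizontal center of gravity of the black as a crude stand-in for the optical center -- my simplification, not Kindersley's light-box moments:

import numpy as np

def ink_center_x(ink):
    # horizontal center of gravity of the black pixels
    return np.nonzero(ink)[1].mean()

def left_sidebearing(ink, advance):
    # put the ink center midway across the advance width; a
    # bottom-heavy 2 whose center sits right of the geometric
    # middle then shifts left, as described above
    return advance / 2.0 - ink_center_x(ink)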

k.l.'s picture

I think I understand what you mean.

raph's picture

I think I have a new way of seeing Kindersley's "moment" approach, one that suggests a mechanism by which it works reasonably well, and that suggests a hybrid approach which takes proximity into account as well. In short: wavelets.

The process of sliding a photographic mask behind another image (in this case, of a letter shape) is essentially convolution. Finding a peak or optimum in this process is closely related to edge-finding, which has been an intense focus of human vision research.

To see Kindersley's approach in a wavelet context, first consider the spacing between two letters rather than just the optical center of one. In Kindersley's approach there are then two "moment masks", one behind each letter. Consider combining the two by crossfading, so that the left one doesn't affect the right letter very much, and vice versa. The resulting mask will strongly resemble a 2nd-derivative Gaussian wavelet (first image of the third row in this image).

I propose that adjusting the spacing between the letters in the two-letter image, and sliding the wavelet mask, in order to get a global optimum (peak response), will (a) give a visually appealing spacing between letters and (b) fairly closely approximate what you'd get from an application of Kindersley's method.

I also propose that the wavelets are much more powerful and general (albeit much more computationally expensive than was practical for Kindersley). By using wavelets at different scales, and localized in the y direction as well as the x, you can detect peak responses for proximity as well. I also think that you can adjust the scale of the wavelets to take into account the size of the type, to get different spacing results for text and display settings.

Hopefully I'll have time to code up a rough implementation of my ideas before too long.
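
For now, here is what the core step looks like reduced to one dimension. The white-profile reduction, the brute-force search over gaps, and all names are illustrative only; the real thing would work in 2-D, at multiple scales, as described above:

import numpy as np

def ricker(n, a):
    # second-derivative-of-Gaussian ('Mexican hat') wavelet,
    # n samples wide, scale a; zero mean by construction
    t = np.arange(n) - (n - 1) / 2.0
    return (1.0 - (t / a) ** 2) * np.exp(-0.5 * (t / a) ** 2)

def pair_response(left_ink, right_ink, gap, scale):
    # white-per-column profile of the two-letter image with the
    # glyphs (boolean bitmaps of equal height) set `gap` columns
    # apart, convolved with the wavelet; the response peaks when
    # a white gap flanked by black fills the wavelet's central
    # lobe, and falls off once the gap outgrows it
    h = left_ink.shape[0]
    ink_cols = np.concatenate([left_ink.sum(axis=0),
                               np.zeros(gap),
                               right_ink.sum(axis=0)])
    wavelet = ricker(8 * int(scale) + 1, scale)
    return np.convolve(h - ink_cols, wavelet, mode='same').max()

def best_gap(left_ink, right_ink, scale, gaps=range(1, 80)):
    # brute-force the gap with the strongest peak response;
    # `scale` is the knob that would vary with type size
    return max(gaps, key=lambda g: pair_response(left_ink, right_ink, g, scale))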

William Berkson's picture

Raph, fascinating.

I don't know if this is at all important, but Kindersley moved the letter, not the mask. When the light on the two sides of the light box equalized, that was the optical center.

One thing that Kindersley was--I think rightly--concerned with was producing both a letter and a spacing for it such that it would space excellently next to every other letter in its alphabet. He called this making the letters (with their side bearings) 'interchangeable'.

Would your 'wavelet' approach catch any of these considerations?

raph's picture

I don’t know if this is at all important, but Kindersley moved the letter, not the mask.

Nope, not at all important. One of the great things about linear systems is the many symmetries, invariants, and equivalences. Slide the letter or mask, invert positive and negative, scale the letter (relative to the mask), and the "optical center" as found by Kindersley's machine is always going to be in the same place.

Would your ‘wavelet’ approach catch any of these considerations?

Yes, but I see this "interchangeability" criterion as overly limiting. If you've got an algorithm to measure or adjust spacing, don't you want it to be sensitive to the individual spacing between two letters, rather than just between a letter and a virtual, generic neighbor? Doesn't VA need to space tighter than AA?

William Berkson's picture

>Doesn’t VA need to space tighter than AA?

Well, Kindersley wanted the basic spacing of caps wide enough that no kerning was necessary. Then, if tighter tracking made kerning necessary, the LA, RA, etc. would overlap. Obviously, now, with digital kerning and OpenType substitutions, there are more options. Still, I would think that consistency between the design of a letter and its spacing would help readability.

I suspect that whatever you do, there will be some compromises needed between the various standards of ideal spacing--relating to proximity, area between letters, relation of counters, overall weight of black, etc.

One of the things that might come out of a good mathematical analysis of the spacing problem would be to find out--by looking at the results--which of these should have the most weight. This would tell us something about the design of the letters also.
