Understanding TT hinting

Frode Bo Helland's picture

Suffice it to say, the documentation of TT hinting is either too shallow (not explaining enough, as in the case of the FontLab manual) or highly advanced (The Raster Tragedy). I have some questions that I'll just get right to:

1: What is the difference between single and double links?
2: The Raster Tragedy seems to say that with the correct hinting, very few deltas are needed, but doesn't describe a method (or I just couldn't decode the advanced language) to do so. Take for example an "o": with simple links, the stems are easily controllable, but the northeast/northwest/southwest/southeast areas will still have stray pixels or odd widths. How do you deal with those without deltas?
3: How far can the interpolation links move a point?
4: How do you approach diagonals?
5: How do you approach intersections, like in an "x"?
6: Is there a way to move an alignment zone one pixel for the whole font at a given ppm, or do I have to do this manually for each glyph?
7: How do you hint an "s"?
8: How do you hint diagonal terminals?

I realize this is a lot, but I've read all the documentation I've been able to find inside and out 10+ times now, and this baby ain't giving up her tricks.

Frode Bo Helland's picture

Posting at nighttime and in summertime can bury stuff pretty fast, so I'm bumping this back into attention. I know there are many trying to understand this.

twardoch's picture

> 1: What is the difference between single and double links?

With a single link, the origin point of the link needs to be "touched" by a different instruction (align, another single link, an interpolation instruction). So a single link cannot exist "by itself"; it needs to exist within a chain of instructions. If you add a new single link to an untouched point, FLS will automatically add an alignment instruction first. But if the point from which you start a single link is already touched by a different instruction, then FLS won't add the attachment.

With a double link, you can specify a distance between two points, but neither of those points needs to be previously "touched".
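To make the "touched" bookkeeping concrete, here is a toy Python model of these rules (my own illustration; the names are invented and this is not actual TrueType bytecode):

    class Point:
        def __init__(self, name):
            self.name = name
            self.touched = False

    def align(p):
        # An alignment instruction touches a point by itself.
        p.touched = True

    def single_link(origin, target):
        # A single link needs a touched origin; FLS auto-inserts an
        # alignment if the origin is still untouched.
        if not origin.touched:
            align(origin)
        target.touched = True

    def double_link(a, b):
        # A double link constrains the distance between two points;
        # neither point needs to be touched beforehand.
        a.touched = b.touched = True

    left, right = Point("left"), Point("right")
    double_link(left, right)          # legal on two untouched points
    single_link(right, Point("top"))  # origin already touched, no extra align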

Example of double link (in the middle stem):

Example of interpolation + single link (in the middle stem):

As for hinting strategies for diagonals, intersections or "s": there are many. I hope that some more experienced hinters tune in here.

Best,
Adam

Frode Bo Helland's picture

Thanks Adam. I didn’t really understand when I should use those instead of the single links. Maybe in the crossbar of "e", as I can’t connect the single links to the bottom (or top) anchor in a horizontal row. If I do use double links, how can I still control its (the crossbar’s) position?

Frode Bo Helland's picture

I’m beginning to realize how much design decision-making there is in hinting. Stray pixels are one thing, sure, but questions like when circular characters should go from square sides to rounded, when to suppress design features and when not to, spacing etc. are much more important IMO.

Frode Bo Helland's picture

Answering my own question:
6: Is there a way to move an alignment zone one pixel for the whole font at a given ppm, or do I have to do this manually for each glyph?

Yes, you can use the yellow arrow on each alignment zone to adjust it for the current ppm. The FontLab manual misled me here because it talked about using this trick to fix single letters that were off on the x-height.

twardoch's picture

Here's an idea for how an extra "stem" can be used to get some basic control over the diagonals.

I've started hinting this glyph from the center outwards (in the x-direction). First, I've created a double link on the bottom "terminal", and interpolated the cusp point on it.

I've created a new stem and made single links to the outside glyph boundaries and to the sidebearing points, plus single links inwards on the top terminals.

Now, in the TrueType hinting options dialog, I can control the ppm3 value to decide how steep the diagonal should be at a given ppm. (Scroll the images to the right to see the TT hinting option dialogs. Observe the ppm3 value for the "X: v 393" stem, which is the "stem" that controls the width of the diagonals.)


In a "v" shaped like this, starting from the middle with a double link and going outwards seems the most effective way to ensure the symmetry of the glyph while also maintaining control over the width of the flat terminals.

twardoch's picture

Similar technique for "w": starting off with a double link for the top middle terminal, and working outwards:

Since I've created this additional "stem" (X: w 330), in very small sizes (8-10 ppem) I can make the "w" really wide so it doesn't "collapse" and remains readable.

Frode Bo Helland's picture

Wow, Adam, this is extremely valuable!

dberlow's picture

FF > I realize this is a lot, but I've read all the documentation I've been able to find inside and out 10+ times now, and this baby ain't giving up her tricks.

Anything you find valuable might be good, or it could lose you the path to greater knowledge, depending on the limits of the teacher. But you never get her to give up her tricks UNLESS it begins with a Specification, as I've said before.

No one should even try to answer unless they know: are you hinting for aliased rendering only? Is the goal good-looking glyphs or readable text? Can you load up on deltas? Do you want to learn to hint, or learn to hint a simple sans? How much time do you have? What do you want to spend?

And also, are you trying to do it all in FL where the right result for some specifications is impossible, or is the best you can do in FL as far as you ever want to go?

Frode Bo Helland's picture

Dave, I want to learn how to hint. I take into consideration the various rendering engines, both with and without hinting, meaning I’m also adjusting outlines and experimenting with various techniques to aid rendering regardless of hinting. In this case, the fonts are workhorse text faces (a serif w/ italics and two sans serifs) for a typical PC user. The goal is readable text, yes. I’d like to avoid deltas as much as I can. I can’t spend money on additional software, but I’d love to spend some on lessons, and I have enough time (given that this (digital type design) is what I want to do for a living).

Frode Bo Helland's picture

And don’t think I didn’t know this was a steep hill to climb.

jasonc's picture

I’m beginning to realize how much design decision-making there is in hinting.

I'm going to get that on a poster in my office. Or maybe a t-shirt. ;)

David's question about software (and, related to it, time) is important. Visual TrueType is a free (Windows-based) tool which (IMHO) can give you a lot more control, and better results, than FontLab. But the learning curve is very steep.

Jason C

dberlow's picture

FF >... typical PC user

Being XP with GDI CT, 96 dpi, 9–12 pt?

Get VTT, and let me know. If you already have it, make sure your unhinted TT fonts install and function on Windows; if you are FontValidator error-message aware, validate them there instead. If you tell me you only use the Mac and can't afford a PC, then you are not going anywhere, ever, in the kind of TT hinting you've specified.

For anyone meeting the above criteria and wanting to pursue this: open your font in VTT, select "Prepare Font" from whichever menu has it, agree to whatever it asks you to do, and save as something else. Then, with the glyph window and "control value table" (cvt) open, follow the documentation in the cvt, concentrating exclusively on alignments, and, using the simple measuring tool, measure the appropriate glyphs and fill those values into the cvt.

Once complete let me know and I'll tell you the six kinds of TT hints you need to write, where, why and how.

Frode Bo Helland's picture

I purchased a Win PC yesterday, David.

Frode Bo Helland's picture

Unrelated to VTT, but here are my serif results so far, with no deltas whatsoever.

ClearType

Greyscale

Frode Bo Helland's picture

Re: spacing. For ClearType+, you’d want as little horizontal hinting as possible, right?

dberlow's picture

0 x hints work as well as some since none are used by the CT rasterizers.

Looks like there are spacing issues to resolve before hinting.

Controlling the diagonals is not working in the GS specimen and should be ignored.

Looks like problems with the x-height serifs being inconsistent and always different from the ascender serifs.

Frode Bo Helland's picture

Yep, many problems. I’m learning as I go.

Beat Stamm's picture

Quoting David: “0 x hints work as well as some since none are used by the CT rasterizers.”

This is NOT TRUE! The basics behind the CT path of the Windows TrueType rasterizer are described in §4.1 ClearType & “Legacy Fonts” and §4.2 ClearType & “Legacy Applications,” and I’ll repeat what I highlighted in §4.1.2 Deltas “Galore:”

ClearType rendering does not generally ignore TrueType instructions in x-direction. It merely rounds them differently, and it bypasses distorting deltas.

Greg has included some of my internal notes at the time in his post Backwards Compatibility of TrueType Instructions with Microsoft ClearType. Some of these are gory details from a tight-rope walk between broken “hinting” code and making the result “look nice” anyway. But most of these “jury rigs” can be switched off as described in Greg’s post and in §4.1.2 Deltas “Galore;” the latter should be safe to use on any version number of the TT rasterizer.

The exception to said switch is rounding to sample grid, and in tandem the CVT cut-in to samples. This is needed to implement Fractional Stroke Weights and Fractional Stroke Positions, which none of the existing fonts did, and at the time this was the preferred implementation. It also was my understanding that subsequently core fonts would undergo extensive “delta tweaks” to ensure sufficient Stroke Rendering Contrast. Some core fonts did get these “tweaks” (e.g. Arial, Verdana), at least at some sizes and for RGB LCD Sub-Pixel Structures, while more recent fonts didn’t (e.g. Calibri, Corbel)—font-makers’ artistic preferences vs end-users’ preferences (cf. §6.3.7 “Hinting” Respects End-Users).

Frode Bo Helland's picture

Thanks for chiming in, Beat. A little word from Monsieur Connare now and I'll have had the whole TT royalty stop by.

I'm doing some experiments with smoothed Greyscale rendering at the moment. It has a tendency to turn out too light, but locking stems to the grid and adding a little weight to the rounds (with middle deltas) looks real nice to me. In fact, I find this more readable than binary rendering. Take for example italics: binary rendering is a disaster even with great hinting. Any opinions on that?

Frode Bo Helland's picture

Btw, for those of you who, like me, are starting from scratch, I recommend this PDF. Once I learned the basic tools, it was really helpful in explaining the basics.

jasonc's picture

>>Take for example italics: binary rendering is a disaster even with great hinting. Any opinions on that?<<

Hinting for binary rendering of italics can only be described as making the best of a bad situation. And if that was your target, you'd have to concern yourself with phase control to get the best stepping patterns, which is just really complex.

Jason C

Beat Stamm's picture

Seconded. For fonts with a substantially uniform italic angle, you can make the overall appearance of text more consistent by replicating the same stepping pattern, conceptually speaking, across the entire font. Once you have found a set of criteria that yields the “best” (least offensive) stepping pattern (phase control, a number theory problem, etc.), you’ll find that not every italic stroke extends from the baseline to the x-height or cap height—not to mention ascenders, descenders, and serifs. Mathematically, this can be extrapolated as appropriate, while numerically this faces the “multiple rounding jeopardy” (cf. §4.0.2) of the TT rasterizer, and to be consistent, this must be exact (cf. §1.3.3 Precise vs Exact).
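To give a flavor of the “number theory problem” involved, here is a rough Python sketch (my own simplification, not the rasterizer's logic) that computes the stepping pattern of a thin italic stroke and shows how its phase depends on the fractional start position:

    import math

    def stepping_pattern(angle_deg, x0, rows):
        # For each pixel row, the rounded x-position of an italic
        # stroke edge that starts at x0 on the baseline.
        dx = math.tan(math.radians(angle_deg))
        return [round(x0 + row * dx) for row in range(rows)]

    # Same angle, different fractional start: different phase, hence a
    # different (potentially more offensive) stepping pattern.
    print(stepping_pattern(12.0, 3.0, 14))
    print(stepping_pattern(12.0, 3.4, 14))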

Beat Stamm's picture

@Frode, about gray-scaling: This sounds like a reasonable start. I’d suggest having a look at the seminal paper by Roger D. Hersch, Claude Bétrisey, Justin Bur, and André Gürtler, Perceptually Tuned Generation of Grayscale Fonts, IEEE Computer Graphics and Applications (Nov 1995) for inspiration. Roger was the co-referee of my PhD dissertation and Claude (another of Roger’s PhDs) the colleague who brought me on board with Microsoft Typography at the time. Guided by André Gürtler from the School of Design in Basel (my hometown), they devised a set of guidelines on, informally and very loosely put, where to “put the gray.” In summary, they recommend

  1. to sufficiently reinforce thin strokes (cf §1.1),
  2. to allocate the gray on the trailing edge of vertical strokes, leaving the leading edge as sharp as possible,
  3. to allow for sufficient black pixels on diagonal strokes (cf. “black core” in §3.2.0),
  4. to allocate the gray on the outside of round strokes,
  5. not to overemphasize serifs (cf §3.2.2), and last but not least,
  6. to use a consistent pattern of contrast profile across the entire font (cf §3.3).

In other words, you don’t necessarily have to put both the left and right edges of a stem on the grid (cf Fractional Stroke Weights), but if you decide not to do so, their recommendations are e.g. to “put the gray” on the right of vertical straight stems (in a left-to-right script, #2 above) and on the outside of (vertical) round strokes (#4 above).
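As a concrete illustration of recommendations #2 and #4, here is a minimal Python sketch (my own, assuming a simple coverage-based gray-scale model) that keeps the leading edge of a stem sharp on the grid and lets the trailing edge carry the gray:

    def snap_leading_edge(left, width):
        # Snap the left edge to the pixel grid; keep the fractional
        # stem width, so any gray falls on the trailing (right) edge.
        snapped = round(left)
        return snapped, snapped + width

    def coverage(px, left, right):
        # Fraction of pixel column px covered by the stem [left, right).
        return max(0.0, min(px + 1, right) - max(px, left))

    left, right = snap_leading_edge(3.3, 1.25)   # a 1.25 px wide stem
    print([coverage(px, left, right) for px in range(2, 6)])
    # -> [0.0, 1.0, 0.25, 0.0]: column 3 is solid black, and the
    #    extra quarter pixel of "gray" lands on the right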

Frode Bo Helland's picture

Ah, yes, dberlow’s "someone else's theory of sharp left edges", I suppose. I’ve been aiming for most of these unconsciously: that’s just what gave me the best results. I’ll read your comments carefully, Beat.

English is not my mother tongue, but until I came across your article I’d rarely had to use a dictionary :)

dberlow's picture

"This is NOT TRUE!"

Prove it! And prove the worth of rounding and trying to carry typographic information to the user on a grid other than the one they are trying to read on.

"The Raster Tragedy..."

Is ClearType-specific.

"“Hinting” Respects End-Users."

But not when that hinting rounds to a fairy grid they know nothing about.

Beat! Since all previous discussions, most foundries seeking to solve the Windows rendering problem have learned to lightly y-hint and be done with it. You can claim anything you wish; it's a free forum. But facts is facts, and no one has proven x hints or a kabillion deltas to be effective against the Windows rendering tragedy.

I wish crude media quality was your first language ;)

Beat Stamm's picture

Not sure what you’re trying to challenge?

The validity of the Sampling Theorem? This has been around for a while and is the foundation behind all things digital.

My insight into the TT rasterizer? I know for a fact that it executes instructions when the projection vector is in x (perpendicular to the long side of the LCD sub-pixels), because I wrote that part of the CT code path.

The option to round to pixel grid? I have illustrated that in §6.1.0 “Trueness” to the Outlines, in Strategy 1, and I will repeat some of the illustrations here, enlarged, and showing the outlines:

Calibri, 6 to 21 ppem (γ = 1.0)

None of the above illustrations required any deltas. Not a single one.

Frode Bo Helland's picture

Whoever said Greyscale rendering wasn't readable was telling a big fat lie.


Beat Stamm's picture

Seconded. In plain gray-scaling, there are no color fringes to deal with, no issues with rendering color-on-color, no issues with rotating your screen into portrait orientation, and last but not least, downsampling with a box filter can get you the sharpest strokes with maximum contrast due to absence of “bleeding.” All of the above with a mere 4 bits/pixel (Windows implementation).

It’s a lateral shift of compromises: I think ClearType and related sub-pixel anti-aliasing methods get you smoother italics, but at the expense of the color issues and 32 bits/pixel when including y-anti-aliasing (hybrid sub-pixel anti-aliasing).

What works for one end-user may not work equally well for another one.

Personally, during my last two years or so at Microsoft, I used “Standard Font-Smoothing” and an auto-hinted version of Corbel as my UI font. Such a nice and clean font, but on my 96 dpi desktop I couldn’t handle the blur incurred by the unoptimized stems even in full-pixel positioning (cf §3.1 Anti-Aliasing “Exclusion Principle”). It was pronounced enough that I was questioned if I hadn’t inadvertently left my CT tuner in BGR mode! On my 144 dpi laptop, plain gray-scaling got me smooth characters in both x- and y-direction, notably before DirectWrite. Looked sooo nice…

But like I said, what works for one end-user may not work for another one.

dberlow's picture

BEAT: Not sure what you’re trying to challenge?

The contention of unproven amateur type developers with dry feet and wet tongues.

BEAT: "On my 144 dpi laptop, plain gray-scaling got me smooth characters in both x- and y-direction, notably before DirectWrite. Looked sooo nice…"

Lol, and make sure to match 144 dpi up with the use of a yellow background, just like Mr. Hudson finally admitted three years into this same old argument from 2004!

BEAT: "But like I said, what works for one end-user may not work for another one."

Yes, so the eternal problem of Windows WEB FONTS is the gap between the cheap fonts for the ivory-desktop users and the expensive fonts for the cheap ebony desktop…

In your zen theory, e.g., you just spent the entire budget on a few sizes of 3/450ths of the font. You have proven nothing until you have made an ENTIRE FONT FAMILY and SERVED it UP, dude, for cross-browser web use!

Beat Stamm's picture

“0 x hints work as well as some since none are used by the CT rasterizers.”

First, let me make sure I understand you:

Do you finally acknowledge that the CT rasterizer executes x “hints” (instructions)?

Yes, or no?

dberlow's picture

First no, I have seen no indication that CT executes size independent x instructions, mirps and mdrps, the same way B&W or GS does. Second no, I have seen no indication that CT executes size independent x instructions, mirps and mdrps, along with rounding variables the same way B&W or GS does. Third no, I have seen no indication that CT executes size independent x instructions, mirps and mdrps, along with minimum distances, the same way B&W or GS does. And I've seen no indication CT does all these things that form the perfect Berlow hinting environment from the reader's perspective.

Why is all that zen trying to fall on whole pixels with stray light blue and light yellow fringing?

dberlow's picture

Beat's predecessors "allocate the gray on the trailing edge of vertical strokes, leaving the leading edge as sharp as possible,"

Frode's assumption from a previous statement of mine: "yes, dberlow’s "someone else's theory of sharp left edges" I suppose."

Which came from a guy named Adrian Frutiger long before anyone else in this conversation.

Beat Stamm's picture

You’re dodging my question.

My question was not whether you approve, disapprove, like, dislike, understand, don’t understand, or don’t want to understand how the CT path in the MS TT rasterizer was implemented—neither in the part that executes the instructions, nor in the part that does the downsampling (CT filter).

My question is simply:

Do you acknowledge that the CT rasterizer executes x “hints” (instructions) at all?

Yes, or no?

Once I understand you on this point, we can take it from there.

(Hint: In the preceding illustration, I chose size independent x and y instructions to deliberately tie the outline to whole pixels—for demonstration purposes)

dberlow's picture

Yes

Beat Stamm's picture

Thanks!

First, let me address the rounding in the presence of anti-aliasing methods like plain gray-scaling (I’ll bring back the colors for ClearType and related sub-pixel anti-aliasing methods in the following post).

To render intermediate stroke weights, one can add a commensurate amount of gray e.g. to the trailing edge of stems, or as I’ve observed in your example, on the inside of crossbars:

To do so requires to put the respective edge on intermediate or fractional pixel positions, like so:

How to do this with size independent x and y instructions (i.e. without a bazillion deltas)?

  1. Use “super-round” (SROUND[])? This will only go as far as “double-grid” (RTDG[]), for a precision of 1/2 pixel. But the above example illustrates it could use 1/4 pixel (this is Windows gray-scaling specific, ClearType requires different fractions).
  2. Leave the trailing edge unrounded? This is not unlike unhinted bi-level, except that in unfortunate cases this will give you another shade of gray on the trailing edge, instead of an extra row or column of pixels. The width of the stems below differs by a mere 1/64 of a pixel:

    Not very good if you want to render like strokes with like grays or colors (Hersch & al’s “consistent pattern of contrast profile”), plus few (if any) of the existing fonts did something to the effect of “if bi-level then round else don’t.”

  3. Rasterize with a DPI commensurate with the required pixel fraction and subsequently turn multiple pixels into gray (or color)? If all the fonts were hinted correctly, this might have worked. But many fonts had (too many) glyphs that didn’t work under these conditions. Following is an example:

    There are several causes for these “explosions” (typically on italics or diagonals):

    • numerical failure (with the freedom vector almost perpendicular to the projection vector)
    • bogus TrueType code (e.g. setting the dual projection vector perpendicular to a pair of points that have not been both touched in both x and y)
    • the IP[] instruction (with the [dual] projection vector neither in x nor in y) doesn’t work correctly with non-square aspect ratios (cf. §4.3.1 Italic IP[] & Overscaling)

What to do? Fix all (!) faulty fonts, painstakingly agonizing over pixel-by-pixel backwards compatibility in black & white rendering? I don’t recall we ever seriously talked about this route in the ClearType team. Rasterize with a square aspect ratio, leaving all outlines to fall on whole pixels, like the above “zen?” We called it some name that may be too politically incorrect for publication. Hence I proposed that, for ClearType, with the projection vector not parallel to the long side of the LCD sub-pixels, any rounding instruction shall round to the required pixel fraction.

This fixed the “explosions” and it dodged the need to add code like “if bi-level then round to entire pixel else round to fractional pixel” all over existing fonts; the latter assuming there was a simple way to round to fractional pixels in the first place. What I didn’t like about this solution is that it made some strokes blurrier than others, and at the time I was not the only one to notice. Deltas were added to “sharpen” the strokes, lots of deltas, at least to the core fonts, and at least at some sizes. The motto was “Hinting puts the ‘Clear’ into ClearType.” Fair enough with me, at least at the time.

I gather you don’t like this solution, and by now I’m not particularly happy about it, either, particularly since meanwhile stroke “sharpening” seems to have fallen out of favor. In 20/20 hindsight, I could have added another SCANTYPE[] switch to revert rounding-to-pixel-fraction back to rounding-to-pixel, along with a new “super-duper super-round” instruction, to make it easier for TT experts like you to implement their preferred rounding strategy and hinting environment. At the time, this engineer had to compromise for the masses.
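For readers following along, a minimal Python sketch of the rounding change just described (my own illustration; 1/4 px is the gray-scaling fraction mentioned above, and ClearType would use its own fraction):

    def round_to_fraction(v, fraction):
        # Round a coordinate (in pixels) to the nearest multiple of
        # `fraction`: 1.0 = whole pixel, 0.5 = double grid (RTDG[]),
        # 0.25 = the sample grid of Windows gray-scaling.
        return round(v / fraction) * fraction

    for f in (1.0, 0.5, 0.25):
        print(f, round_to_fraction(2.33, f))   # -> 2.0, 2.5, 2.25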

Beat Stamm's picture

In this second post, I’ll address the anti-aliasing filters. A disclaimer upfront: I’m not an expert in the field of signal processing (that’s the name for the part of computing that deals with anti-aliasing and its filtering methods), but so far I managed to understand it well enough to get my job done. Accordingly, I’ll limit the discussion to describing the phenomena.

There are two main aspects: “bleeding” vs “non-bleeding” filters, and muting the color fringes.

Conceptually, for plain gray-scaling in Windows, characters are rasterized as if for black & white rendering, but at 4 times the DPI in both x and y direction. Subsequently, these large characters are “tiled” into 4×4 squares. The anti-aliasing filter operates on these squares. For each square, it counts the number of pixels that are “on.” This yields a number in range 0 through 16. Finally, these numbers are associated with gray values: 0 gets white, 16 gets black, and 1 through 15 get intermediate shades of gray, with 1 a very light gray (almost white) and progressively darker through 15, a very dark gray (almost black). See the Raster Tragedy, §2.1 Full-Pixel Anti-Aliasing for illustrations.

This simple anti-aliasing filter is called a box filter. For the purpose of this discussion, what’s important about the box filter is that it does not appear to “leak” or “bleed” ink or color outside the boundaries of each “tile.” A 1 px wide stem or a crossbar that has its outlines aligned with the pixel boundaries will be black:

That same crossbar, anti-aliased with a sinc filter, instead of a box filter, appears to “leak” ink outside its boundaries, and its core is a dark gray, but not quite a solid black:

I’m not expert enough to explain the theory behind this in simple terms, but this is the kind of filter that the Sampling Theorem (which is the foundation behind all things digital) requires. What I see is that the sinc filter appears to “leak” while the box filter doesn’t. Compared to a box filter, the overall appearance of text rasterized and filtered with a sinc filter may be smoother, but slightly blurrier and with less rendering contrast: The core of the crossbar is less than black. I won’t argue for one filter over the other, because to me this comes down to personal preference.
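For the curious, the box filter described above fits in a few lines of Python (a toy version, assuming a 0/1 bitmap rasterized at 4× the target resolution in both directions):

    def box_filter_4x4(big):
        # Each 4x4 tile of the oversampled bitmap yields a count of
        # 0..16 "on" pixels, i.e. one of 17 levels from white to black.
        rows, cols = len(big), len(big[0])
        return [[sum(big[y + dy][x + dx]
                     for dy in range(4) for dx in range(4))
                 for x in range(0, cols, 4)]
                for y in range(0, rows, 4)]

    # A 1 px stem aligned with pixel boundaries stays solid black:
    big = [[1 if 4 <= x < 8 else 0 for x in range(12)] for _ in range(4)]
    print(box_filter_4x4(big))   # [[0, 16, 0]] -- no "bleeding"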

Enter the colors of ClearType and related sub-pixel anti-aliasing methods. The claimed benefit of sub-pixel rendering is that one can position e.g. a 1 px wide stem at fractional positions relative to the actual pixel boundaries. I’ll start with 3 such stems, positioned at offsets −1/3, 0, and +1/3 of a pixel, relative to the pixel boundary (left to right). In other words, they are positioned on the sub-pixel boundaries. The following illustration shows these 3 stems completely unfiltered and without gamma correction (see the Raster Tragedy, §2.2 Asymmetric Sub-Pixel Anti-Aliasing for more illustrations):

The complete absence of filtering causes strong color fringing for the 2 stems offset by ±1/3 of a pixel, yet all 3 of them are made up of 3 adjacent black sub-pixels!

To make this into a workable solution, it is necessary to mute these color fringes down to a level that makes them hard to notice. To do so, informally put, the filter needs to “smudge the line” between adjacent unlit (black) sub-pixels and lit (colored) sub-pixels. In the process, the colors get muted, but so does the black. For the erstwhile black stem this means that it will become off-black, with the difference seemingly “leaking” out to adjacent pixels. Following is how ClearType does the filtering:

This is blurrier than the unfiltered stems but with color fringing that is muted. Notice that all 3 stems are now made up of only 1 black sub-pixel; the remaining two are dark but in colors. That’s one part of “smudging the line”—the other part is the “leaking” or “bleeding.”

Is this enough muting? According to the theory elaborated by MS Research, it should be, at least for most people at typical viewing distances. But I’ve come across individuals who noticed color fringes at 144 DPI and a viewing distance of what I remember as about 2 feet (0.6 m). Accordingly, for these individuals the color fringing has to be muted down even further. Following is another filter that tends in the direction of more muting:

This may be perceived as even blurrier than ClearType but with color fringing muted to an even lower level. Notice that all of the 3 stems now contain no black sub-pixels whatsoever, and more color appears to have “leaked” on the left and right stems.

I’ll repeat the above 3 illustrations at their original size, in hopes that this illustrates a trend (to be viewed on an RGB screen in native resolution, “landscape” orientation, and at 100% zoom):

Top to bottom decreases color fringing but increases blurring. Also, the middle stems appear sharper to me, relative to the left ones, and trailed by the right ones. If all of the above looks more or less blurry, try the BGR version below:

I don’t know the exact filters that were implemented in DirectWrite or Quartz. There are a few “tricks” that experts at signal processing can use to improve rendering contrast (the “black core”) or “sharpness” of stems. As far as I understand, all these “tricks” affect intermediate shades of gray or color, in a way not entirely unlike gamma correction (hence I’d encourage anybody reading this post to try the various gamma buttons at the end of §5.3 Gamma Correction to develop some intuition of where this might go). But from my experience I wouldn’t be surprised to learn that optimizing in one direction usually comes at the expense of another direction.
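To make the “smudging” tangible, here is a hedged Python sketch of a 5-tap sub-pixel filter; the 1-2-3-2-1 weights are the ones commonly cited for ClearType-style filtering and are used here purely for illustration:

    def lcd_filter(cov):
        # cov: ink coverage (0.0..1.0) per R, G, B sub-pixel along a
        # scanline. Each output sub-pixel averages its neighbors,
        # muting color fringes at the cost of "bleeding".
        taps, n = (1, 2, 3, 2, 1), len(cov)
        return [sum(w * cov[min(max(i + k - 2, 0), n - 1)]
                    for k, w in enumerate(taps)) / 9
                for i in range(n)]

    # An unfiltered 1 px stem: 3 fully inked sub-pixels, hard edges.
    stem = [0, 0, 0, 1, 1, 1, 0, 0, 0]
    print([round(v, 2) for v in lcd_filter(stem)])
    # -> [0.0, 0.11, 0.33, 0.67, 0.78, 0.67, 0.33, 0.11, 0.0]:
    #    the core is no longer fully inked, and ink "leaks" sideways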

To recap, I’m not trying to claim that ClearType is the “one size fits all” solution. It would require no “rocket science” to offer end-users a choice of filters—in addition to the choices I have proposed in §6.3.7. If I’m not mistaken, even Apple does this—offer a choice of filters—nota bene the same company that advocates the 1-button mouse just so that you can’t click the wrong button. But I have no control over these decisions. The best I can do is to work with the existing filters and instruction set, and point into the direction of what could be if …

dberlow's picture

So, yes or no? Is there actually something you think you can do about this at this incredibly late date?

Yes or No?

Beat Stamm's picture

Something? Yes! A couple of things come to mind:

  • Even if the MS TT rasterizer makes it hard to implement CT stroke “sharpening” in a size independent way, it can be done, as I’ve shown.
  • Even if a font maker decides that this is too hard, CT can take advantage of knowledgeable x instructions. Leaving them out altogether can lead to spacing accidents, as illustrated in §3.3.0 Constraint Priorities (the CT cap ‘O’ towards the end of that section). Insufficient x instructions can also lead to spacing accidents, even with CT’s “Natural Widths,” as illustrated in §4.2.1 “Natural” Advance Widths in Practice (the string “lellellel” towards the end of that section).
  • Even if the application decides to lay out text by fractional advance widths/sub-pixel positioning (not my favorite in the 8 to 18 ppem range—more like an accessibility issue), x instructions can be used to assert a minimum stem width of 1 px. As discussed in another thread, this requires doubling the amount used for SMD[], but it can be done to help what little bit can be helped and not succumb to the verisimilitude-at-any-price dogma.
  • Media tablets encourage changing orientation from landscape to portrait—a candidate for plain gray-scaling, with room to develop strategies on how much gray to allocate, and where to put it.

Not every word is read on a device controlled by hard- and software made by Apple and Microsoft. There are 3rd party browsers and there is at least one 3rd party TT rasterizer.

Spread the message: Font hinting is an opportunity to get on-screen text rendering as close to the art of printing as the available screen technologies allow.

dberlow's picture

I just read the four pillars of fantasy.
Dot1 you showed what!?
Dot2 one cannot hint x in the 11 to 34 px range without an "agreement" with the composition.
Dot3 does your dogma bite?
Dot4 tablets are, in general, beyond the resolution of tragedy
Best of luck!

Richard Fink's picture

Spread the message: Font hinting is an opportunity to get on-screen text rendering as close to the art of printing as the available screen technologies allow.

Interestingly put.
I happen to like hinted TT fonts in DirectWrite a lot. Much more than what I see on my Mac desktop.
However, on smaller screens - iPad, iPhone Retina display - if it's closer to the art of printing (didn't know printing was an art, thought it was a craft) then it's certainly less pronounced.

Personally, I couldn't care less about hinting as a way of reconciling a font designed with print in mind to the demands of the screen by adding hinting data for that purpose.

The two kinds of fonts have forked - fonts for print are one thing, fonts for the web, another.

But what I do like is that hinted TT fonts in DirectWrite have a look that says "this is HTML text". It renders text in a way that's as visually appealing as, yet different from, the "look" of non-HTML text - text done with images or Flash replacement, or whatever. On the Mac, fonts look more like text as images.

No text on screen looks like printing to me. And emulating print doesn't seem to me like the right goal, either. (The Kindle, maybe, looks like print.)

Why look to simulate print onscreen?

Beat Stamm's picture

Point taken. This may be understood differently from what this non-native writer had in mind. I seem to have an unfortunate bias with the word “craft”—it conjures up connotations of “bricolage” (“handicraft afternoon” in elementary school) before any connotations of the skills learned for a trade like punchcutting or typesetting, let alone any aspects of art involved therein. Hence I tried to raise it above that (perceived) level and used the word “art.”

Art of printing is my oversimplification in an attempt to come up with a <META> tag for the Raster Tragedy home page, from where I borrowed the line without sufficient thought at the end of the post and day. What I really meant was the skilled métier of cutting the punches and casting the type to best adapt the type design to the available combination of metal, ink, paper, and type size. Today, hinting does (at least part of) this adaptation, and the screen replaces ink and paper.

Hence, depending on the screen technology and/or customer preference, a font may be hinted for B&W, gray-scaling, ClearType, DirectWrite, or a combination thereof (or none of the above). One of the differences to the days of hot metal is that, strictly speaking, hinting is programming, it’s software, hence it can cut different punches “on-the-fly” depending on the targeted ink and paper—and, of course, depending on the flexibility of the hinting.

Interesting point about the “look of HTML text.” For reading text online, I certainly prefer “live” text over images or Flash. Flash seems stuck at 72 dpi, which makes it an accessibility issue for me, and images are a similar handicap. As to DirectWrite, I like the y-anti-aliasing (I prototyped it during an exhausting weekend late 2000, after having been told that this has been tried already and it didn’t work—hence the weekend), but depending on the combination of type size and resolution I have a hard time with the blur (a statement to that effect from this post has been taken out of context, quoted in Wikipedia, and used as some kind of “proof” in many forums on why ClearType is always blurry).

The long and the short of it, no, I don’t advocate simulating print on screen, certainly not as the “one size fits all gold standard.” But repeating what I wrote above, hinting is software, hence it harbors the opportunity to cater to a plurality of applications (print preview, web reading, …), screen devices (desktop, laptop, media tablet, smart phone, …), and end-user preferences (or accessibility requirements).

Beat Stamm's picture

@dberlow
Did you also follow the provided links, the illustrations mentioned, and their histories of origin, or are you merely trying to ridicule whatever I’m writing?

But thanks for the good wishes.

dberlow's picture

Beat, you misread. I have followed all the links and read all.

I will respond privately as time permits.

Bendy's picture

Frode, thanks for that pdf link above. I'm starting to think one day I'll understand a little about hinting.

Frode Bo Helland's picture

I want to write a basic introduction one day. If I can be of any help, please don't hesitate to ask!

Frode Bo Helland's picture

So David, am I on to something?

Zero deltas, of course. (Edit: noticed some wrong anchor connections on t and z.)

JM Solé's picture

I wonder how to solve the spacing problems one can see at some sizes between the "m" and "i" in Optimized, or between "a" and "r" in ClearType. Are those rounding errors, as discussed in the “Natural” Advance Widths in Practice section of the Raster Tragedy Revisited? That last line about how difficult it is to “fix” that always intrigued me.

John Hudson's picture

JM, for some reason the link you provided is directing me to the wrong part of Beat's document. This works if you paste it in the address bar:
http://www.beatstamm.com/typography/RTRCh4.htm#Sec21

Beat Stamm's picture

Difficult (if not impossible) to say without looking at the font in VTT (or a similar tool).

In “natural” widths, VTT will show you a right side-bearing point on a fractional x-position and hence you’ll have a fractional advance width, but subsequent layout will be to full pixels (if the RSB point happens to be integer, it's plain luck with the numbers). If the fractional part is less than 32/64 (= less than 0.5), it will be rounded down—you’ll get that part of the right side-bearing “truncated.” If it is 32/64 or more, the fractional part will be rounded up—you’ll get the right side-bearing “padded” by the round-off. Neither will help the spacing.

Chances are that text laid out to full pixels will use “compatible” widths, which may fix some of the spacing issues, at the expense of messing with the glyph proportions (this may seem odd but that’s what this “lesser evil band-aid” called “compatible” widths is supposed to do).
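A tiny numeric illustration of that truncation/padding (hypothetical values, using the rasterizer's 1/64 px units):

    def layout_width_px(aw_64ths):
        # A fractional advance width laid out to full pixels: fractions
        # below 32/64 truncate the side-bearing, 32/64 and above pad it.
        return (aw_64ths + 32) // 64

    print(layout_width_px(7 * 64 + 20))   # 7.31 px -> 7 px (truncated)
    print(layout_width_px(7 * 64 + 40))   # 7.63 px -> 8 px (padded)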

To double-check what the correct advance width should be, in pixels, evaluate

px = (pt × dpi × aw)/(72 × emHeight)

and round the result. If you allow integer ppem sizes only (VTT forces this flag—I was told to do so) then first determine your ppem size

ppem = (pt × dpi)/72

round the result, and then evaluate

px = (ppem × aw)/emHeight

and round the result—again (“Double Rounding Jeopardy”). This may yield a different advance width in (rounded) pixels, but that’s because e.g. 11 pt at 96 dpi gets rounded to 15 ppem while the actual ppem size would be 14 2/3.
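For anyone who wants to check the arithmetic, here is a small Python sketch of both evaluation orders (the aw and emHeight values are hypothetical):

    import math

    def rnd(x):
        # Round half up, as the rasterizer does (Python's built-in
        # round() is round-half-to-even, so spell it out).
        return math.floor(x + 0.5)

    pt, dpi, aw, em = 11, 96, 1024, 2048   # hypothetical aw/em values

    # Single rounding: scale the advance width directly.
    px_direct = rnd(pt * dpi * aw / (72 * em))   # 7.33.. -> 7

    # Double rounding: integer ppem first (as VTT enforces), then width.
    ppem = rnd(pt * dpi / 72)                    # 14 2/3  -> 15
    px_via_ppem = rnd(ppem * aw / em)            # 7.5     -> 8

    print(px_direct, px_via_ppem)   # 7 8: "Double Rounding Jeopardy"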
