Extremely well-spaced & kerned fonts?

twardoch's picture

Hello everyone,

for research & experiment purposes, I’m looking for a list of 60-100 digital fonts for the Latin alphabet (upright and italic, primarily serifs and sans serifs of various kinds), as well as 3-4 digital fonts for each of the major non-Latin scripts, which you would consider extremely or exceptionally well spaced and kerned.

I’d like to exclude from that list all digitizations of typefaces that were designed on a unit-based system, which made compromises necessary. In other words, I’m looking for fonts designed with “high UPM resolution” in mind.

We’re experimenting with various alternatives to the current spacing and kerning model, and would like to verify those models we’re prototyping with existing data (i.e. fonts) of high visual quality.

Please list fonts, preferably by their family and style name, plus the foundry name. Please give more details regarding the format, version, source vendor or release year, etc., if there is a possibility of confusion.

Many thanks in advance,
Adam

twardoch's picture

> BTW, I assume that anybody who uses no kerns less than a given value should have all the kerns in a multiple of that value*.

I don’t agree with that assumption. Remember that kerns don’t create spacing by themselves; they correct the default spacing provided by the sidebearings.

People may choose not to provide small kerns for efficiency in the design process (speed of reviewing and correction, and easier management of a large set of data), for size reduction (still an issue if you have a very large character set; there are also the infamous limitations on OpenType GPOS subtable size), and because tiny adjustments may lie below a certain visual threshold of perception (especially if the fonts are primarily intended for small sizes).

Think of a string on a guitar: if it’s hopelessly out of tune, you’ll want to tune it, but if it’s off just a tiny bit, you yourself may not hear it, and the audience often won’t either. Similarly, if the default spacing provided by the sidebearings is “off tune” just a tiny bit, you may neglect a small-value kerning pair, but if it’s further off, then you’ll need to tune it. The goal of that tuning is getting it back “on tune”, not changing the pitch by some preconceived shift.

Another example: if a wall that’s supposed to be “straight” is off by less than 5 degrees, then perhaps you won’t necessarily fix it, but if it’s off by more than 5 degrees, then you will want to fix it. Yet that doesn’t mean your corrections will always come in steps of 5 degrees.

I’d put it differently: if you assume that you don’t kern by less than 5 units in a 1000 UPM font, it means that your tolerance for spacing errors is 0.5% of the point size, so in 12 pt type you agree that the acceptable deviation from the “ideal” spacing is less than 0.06 pt. I.e. you agree that your spacing may be off by 1, 2, 3 or 4 units.

So to be consistent, you would also accept that your other kerning pairs may be “off by up to 5 units”, so for example, you could cluster all the kerning values which differ by less than 5 units. I.e. if you have a kerning pair of –41, a pair of –43 and another of –44, you might change them all to –43. But it doesn’t mean that you’d need to make them –45 or –40 (i.e. in steps of 5).
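
To make the clustering idea concrete, here is a minimal Python sketch (the greedy single-pass grouping and all names are purely illustrative, not any existing tool’s behavior):

    def cluster_kerns(kerns, tolerance=5):
        """Snap kerning values that lie within `tolerance` of each other
        to a shared representative (the rounded cluster mean)."""
        clusters = []                   # each cluster is a list of (pair, value)
        for pair, value in sorted(kerns.items(), key=lambda kv: kv[1]):
            if clusters and value - clusters[-1][-1][1] < tolerance:
                clusters[-1].append((pair, value))
            else:
                clusters.append([(pair, value)])
        result = {}
        for cluster in clusters:
            rep = round(sum(v for _, v in cluster) / len(cluster))
            for pair, _ in cluster:
                result[pair] = rep
        return result

    kerns = {("A", "V"): -41, ("A", "W"): -43, ("A", "Y"): -44, ("T", "o"): -70}
    print(cluster_kerns(kerns))
    # {('T', 'o'): -70, ('A', 'Y'): -43, ('A', 'W'): -43, ('A', 'V'): -43}

Note that such a greedy chain can let one cluster span a bit more than the tolerance end to end; a real tool would want a smarter grouping.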

hrant's picture

Good points and analysis.

hhp

dezcom's picture

Adam,
I am not sure that I quite understand the point you are after, and I have not studied the classic kerning systems mentioned. I am not saying that what I do is any great mystery or even better than the average bear. I would just say that I approach kerning/spacing with a naive but purposeful plan. Naive, because I have made no attempt to study the methods of master craftsmen throughout history. Purposeful, because my intent is to marry the processes of spacing and kerning, simply to make the glyphs fit with little effort. I certainly use positive kerns when they make the job easier. I positively kern oo and even HH simply because it reduces the number of kerning pairs overall. I don't recall ever making a kerning pair of less than 4 units, either positive or negative. I tend to fit beginning capitals on their right side with lowercase glyphs' left sides, because this is the most common occurrence of mixed case.
Since I started my journey into type design within the past 8 years, it has all been digital and therefore not based on solutions to problems brought to bear by the older technologies. Is this a better way? Who knows. I just say that it comes only with its own baggage rather than including baggage inherited from past technology.
Not to add credence to right-wing politicians, but I am a firm believer in "Family Values", meaning classes by shape in the typical class-kerning scheme. I consider some classes twins and other classes just siblings; that is to say, their behavior only partially mimics a sibling's, but in a predictable way. The "u" and "n" glyphs are siblings, etc.
I make no claim that my method is any better than anyone else's. I just say that this is what makes sense to me in my attempt to revisit my inner child. If this is of any use to you in your experiments, let me know and we can discuss it further.

Chris

Mel N. Collie's picture

Dez, I think Adam is trying to say that Type Design, and spacing of type, involve quite a bit of difference thresholding and cluster quantizing, because Hrant was learning about them by merging such thresholding and quantizing into one thing.

Dez: "...consider some classes twins and other classes just siblings..."

Twins are so because they're quantized, siblings have been thresholded from each other.

It's kinda a big issue if you think about all things type, from a pair of counters to the weights in a font family, where this pair of activities ("quantholding", perhaps) is much of the creative activity, as well as being a big part, in reverse, of TT hinting.

dezcom's picture

David, I can't envision type design as separated from the spacing of type. I can see letterform drawing perhaps as a beast of a different class, but I have difficulty prying apart the design of the spaces between letters from the spaces within a given letter. To me, there are times to solve a space-to-neighbor problem by adjusting the drawing of the glyph instead of kerning. This may seem like more work, but it may also be critical work.
Space between glyphs can be quantified to a point and solved in groups. There are instances when the proximity of the terminals of neighbors can override the quantified space (v to y, for example). The tension between these points has far less room for error and is often destroyed by end-user tracking. It is like placing two magnets next to each other at the exact point where they stay in that relationship regardless of polarity: movement in one direction causes immediate collision, and in the other direction, disengagement. This tension does not live alone. It is compounded by spatial volume and the shape of the space relating to nearby spaces. By this I mean that the space between parallel forms, like A against V, can create more of a magnetic attraction than unlike forms, like S and H. When the W, A, V slopes become too far from parallel, the counter forms play in a different key from each other and require a different spacing plan to play in harmony.

hrant's picture

BTW, since most of my work is in non-Latin I had never been involved in the production of fonts with very large character sets until I collaborated on Ernestine. I only made the Armenian glyphs but because we chose to put them in the same font as the Latin I hit the kerning-limit wall very early - basically I was told to limit the number of kerns within the Armenian somewhat severely, not least because some of the Armenian characters had to kern with some of the Latin ones (mostly the numerals and some punctuation). In contrast when you're making a font with not a very large character set the "kerning philosophy" can be qualitatively simpler, in a way more pure. Anyway now I realize first-hand how tricky the kerning-limit situation is (and frankly it's a bit surprising). Will it be fixed?

hhp

twardoch's picture

Hrant,

unfortunately, the 64K limit (65,536) is deeply tied into the OpenType font format specification. For example:

* An OpenType font can only hold up to 64K glyphs; that may appear to be a large number, but the Unicode Standard 6.1 encodes 110,181 characters. Some people may say that it’s “not necessary” to include glyphs for all Unicode characters in one font, but that, indeed, depends on the subjective notion of necessity.

* The OpenType “kern” table, which holds the “old-style” TrueType pair kerning information, can include multiple subtables, each of them limited to 64KB in size, which translates to at most 10,920 kerning pairs. If you were to kern each glyph with every other one, you could only provide a kerning table for 104 glyphs using this mechanism. Of course you don’t kern each glyph with every other one, but the limit still means that you can only provide kerning for about 100 glyphs that need kerning. The kerning can be placed into multiple subtables, but not all operating systems and applications can access more than one subtable in the “kern” table. (The arithmetic behind these figures is sketched after this list.)

* The OpenType “GPOS” table, which holds the “new-style” OpenType kerning, can also include multiple subtables (each limited to 64KB in size). Kerning in the “kern” feature can be expressed using class kerning pairs and individual glyph kerning pairs, and can be split into multiple lookups (that are executed one after another). Also, a special ExtensionType GPOS lookup can be used to address subtables with an offset larger than 16 bits, but the subtable size is still limited to 64KB. The combination of multiple lookups (simple and extension), multiple subtables and class kerning theoretically lifts the limit on kerning size, but is difficult for the designer to handle. (Writing all these lookups, subtables and classes so that they just fit into the technical limitations of OpenType is as if you were forced to write your novel with just the words and syntax that allow Google Translate to automatically translate your novel into a different language.) Also, various operating systems and applications do not support multiple lookups in the “kern” feature, or the extension-type lookups. So in addition, you need to prioritize your kerning values: put the largest adjustments into the first subtables or lookups, and put the minor adjustments “further down the stream” in anticipation that they will get ignored by some apps. (A sketch of this prioritization also follows after this list.)
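
First, the arithmetic behind the 10,920-pair and 104-glyph figures above, assuming a format 0 “kern” subtable (a 14-byte header followed by 6-byte pair records):

    SUBTABLE_LIMIT = 65536   # 64 KB, imposed by the 16-bit length field
    HEADER = 14              # version, length, coverage, nPairs, searchRange,
                             # entrySelector, rangeShift: seven uint16 fields
    PAIR_RECORD = 6          # left glyph ID + right glyph ID + kern value

    max_pairs = (SUBTABLE_LIMIT - HEADER) // PAIR_RECORD
    print(max_pairs)         # 10920

    # If every glyph kerned against every other, n * n pairs must fit:
    n = int(max_pairs ** 0.5)
    print(n, n * n)          # 104 10816 (105 * 105 = 11025 would not fit)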
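
And a minimal sketch of that prioritization: sort the pairs by magnitude and pack the largest corrections into the earliest lookups. The capacity number here is a placeholder; real packing would also have to respect class structure and subtable byte sizes.

    def prioritize_kerns(kerns, pairs_per_lookup=8000):
        """Split {(left, right): value} into successive lookups, ordered so
        that the biggest (most visible) adjustments come first."""
        ranked = sorted(kerns.items(), key=lambda kv: abs(kv[1]), reverse=True)
        return [dict(ranked[i:i + pairs_per_lookup])
                for i in range(0, len(ranked), pairs_per_lookup)]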

Type design has always been subject to the limitations of the currently available technology. But the limitations imposed by the OpenType font format should be easily solvable for today’s software. After all, we have gigahertz of CPU power and gigabytes of RAM at our disposal. Unfortunately, we have yet to see a software developer write a good, bullet-proof algorithm for expressing any kerning table, regardless of its size and complexity, within the boundaries of the OpenType reality. We at Fontlab Ltd. are working on such a solution, but the results aren’t yet ready for prime time.

Personally, I believe the current paradigm of spacing (setting sidebearings) and kerning to be a mechanism of compression. If you look at image formats such as PNG and JPEG, they include a “progressive” flavor, where a high-resolution image is compressed and, in addition, lower-resolution versions (also compressed, but with less detail visible) are provided in the same file. The difference between PNG and JPEG is that PNG is “lossless”, i.e. after decompressing you get exactly the same pixels as before compression, while JPEG is “lossy”, i.e. some neighboring pixels are made the same during compression and the pixel-perfectness gets degraded (by a factor determined by the user). The more loss, the more efficient the compression.

We’re working on transferring these concepts onto glyph spacing and kerning. In my view, sidebearings provide the “topmost” rule for spacing glyphs (“this is how this glyph spaces to all other glyphs including glyphs from other fonts”), class kerning provides spacing corrections to pairs of glyph groups (“this is how this glyph spaces to many other glyphs, correcting the previous spacing”), and individual pair kerning (class kerning exceptions) provide additional corrections to pairs of individual glyphs (“this is how this glyph spaces to this other glyph, correcting the previous spacing even more”). Following this paradigm, contextual kerning (which is possible in “GPOS”) provides correction in yet more isolated cases.

This is almost like the “progressive” image formats. “Just sidebearings” are the lowest-resolution version of the image, and sidebearings plus class kerning plus exceptions are the highest-resolution version. The way the sidebearings, the kerning classes, the subtables, the lookups, the individual pairs etc. are set up in a font is primarily a technical matter of dealing with compressing all spacing situations between all glyphs into a specific compressed format.
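
As a toy illustration of this layered model, a Python sketch (all names are invented, and an individual exception here simply replaces the class value, which is one common convention):

    def pair_adjustment(left, right, class_of, class_kerns, exceptions):
        """Kern correction applied on top of the sidebearing spacing."""
        if (left, right) in exceptions:          # highest-"resolution" layer
            return exceptions[(left, right)]
        key = (class_of.get(left), class_of.get(right))
        return class_kerns.get(key, 0)           # group-level correction, or none

    class_of = {"V": "V_like", "W": "V_like", "A": "A_like"}
    class_kerns = {("A_like", "V_like"): -40}
    exceptions = {("A", "W"): -35}               # exception overrides the class value

    print(pair_adjustment("A", "V", class_of, class_kerns, exceptions))  # -40
    print(pair_adjustment("A", "W", class_of, class_kerns, exceptions))  # -35
    print(pair_adjustment("o", "o", class_of, class_kerns, exceptions))  # 0 (sidebearings only)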

Along the same lines, if PNG is lossless and JPEG is lossy, then the cluster quantizing of spacing or kerning values is a method for lossy compression.

Note that some image editing applications, when producing the lower-resolution versions of “progressive” image formats, or when producing lower-resolution thumbnails for images, apply some image-processing filters — notably sharpening. This is very similar to the quantization and hinting techniques which David Berlow has been talking about for a long time.

Type designers and font engineers have become accustomed to drawing optical sizes, performing outline quantization, hinting, setting sidebearings, making kerning classes, setting the class kerning values, adding kerning pair exceptions etc. — mostly by hand, and sometimes as semi-automated steps.

However, all these steps have been mostly performed separately from each other. I believe that a lot of knowledge that has been gathered in image-processing research with regard to compression, low-resolution optimization, progressive image encoding etc., can be applied to type design and digital font production.

Sometimes I think that type design and font production today work as if a digital photographer were hand-editing each region of a compressed JPEG image, shuffling various compression parameters and applying filters by hand to all the different regions of the image. Or as if a font engineer were compressing a JPEG image by hand, writing each compressed segment himself (and running a somewhat imperfect, simplified version of the compression algorithm in his head).

While this may be a good working method for some, I don’t think it has to be the only working method for all. Image editing software has made tremendous progress in developing all kinds of smart algorithms, mostly because images arrived on the web while bandwidth was scarce.

Unfortunately, fonts arrived on the web at a time when bandwidth is no longer so scarce. And fonts are relatively small compared to images. I think if webfonts had taken off back in the IE/EOT times of 1996, we would be in a much better place now when it comes to software tools.

But I still believe that there is a lot to be done. We should work on automatic algorithms for “electronic type compression and low-size optimization” (which includes autohinting as much as intelligent handling of spacing/kerning).

Some weeks ago I found this Typophile thread on alternative spacing paradigms from two years ago. Much of what I read there was in sync with my own thoughts and with concepts that I’ve been developing over the last two years myself.

So I started this current thread to gather some additional input data and get some discussion going — so that I can work on some models that, on one hand, could improve the current paradigm of doing spacing/kerning (by possibly separating it from the internal details of how this is handled in the OpenType font format), and on the other hand, could finally address the “compression” problems properly.

Because I’m really unhappy to see the “subtable break” problem being reported for the 987th time. But the “subtable break” problem cannot be reasonably addressed in isolation, just by itself. The whole thing needs a fresh look.

So I’m wiping my glasses now. :)

Thanks for the discussion so far,
Adam

twardoch's picture

Ps. The executive summary of the above: if you want to compress your data by hand (which is what the current spacing and kerning model is like), feel free to keep doing so. But I believe it shouldn’t be the only possible way of working.

froo's picture

I don't know if you plan an interactive module or just a "compressing" tool, but I am very impressed.

Ever wondered why we graphic designers hated Quark, and why we migrated en masse to InDesign so quickly? Well, I guess, because it was not a pre-press program. It was a program about pre-press.
Definitely: as it was with Quark, the current font creation technology is too focused on telling a story about itself, about its history and current state. We think with pre-computer paradigms, because our new tools still mimic the old workshop. But wherever possible, the mechanically repeatable, technology-specific issues should be hidden from the user and displayed only on request (as it is with JPEG compression settings/previews), because when a man does the work of a machine, he is a machine. It's the effect that counts, not the way the classes were set. Any tool that helps one focus solely on the work will bring great relief.

charles ellertson's picture

Adam, I don't think it fits your model, but another tool for letterfit is using alternate glyphs. For example, an "f" with a fairly large negative right sidebearing allows you to skip kerns with many x-height lowercase letters, at the cost of problems with glyphs having ascenders (or "ink", as with quotes) on the left, and with word spaces. A second "f" with a somewhat shorter terminal and a somewhat less negative sidebearing can be quite useful. The trick is to make the difference between the two glyphs at least unobjectionable, if not unnoticeable.

* * *
The way I think about kerning is that any kern value between a pair of letters is limited by the worst-case situation. For example, take a look at the string

A',"

in Adobe's Warnock. Any two of those in combination look just fine. But the four together -- which can happen in a text -- create a mess.
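
To illustrate, here is a toy Python sketch that lays out a string with pairwise kerns, so a cluster like the one above can be inspected for cumulative crowding (all widths and kern values are invented):

    # Each pair kern may look fine on its own, but in a run they accumulate.
    widths = {"A": 700, "quoteright": 250, "comma": 260, "quotedbl": 400}
    kerns = {("A", "quoteright"): -180, ("quoteright", "comma"): -120,
             ("comma", "quotedbl"): -100}

    def positions(glyphs, widths, kerns):
        """Pen position of each glyph after advances and pair kerns."""
        x, placed, prev = 0, [], None
        for g in glyphs:
            x += kerns.get((prev, g), 0)
            placed.append((g, x))
            x += widths[g]
            prev = g
        return placed

    for glyph, x in positions(["A", "quoteright", "comma", "quotedbl"], widths, kerns):
        print(glyph, x)     # A 0, quoteright 520, comma 650, quotedbl 810

Capping each pair's kern at what the worst plausible context can tolerate avoids that pile-up.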

I'm stealing from my chapter in a forthcoming book here. A few more Typophile posts and the chapter will lose all value. But what you're after is important, I think. As you define the problem, I worry it will force sacrifices in the shapes of some glyphs, but that's a worry, not a fleshed-out thought.

Charles

twardoch's picture

Marcin,

We plan both. Basically, if we decide to break the welded bond between how spacing and kerning are expressed technically and how they can be defined by the designer, then we will have to come up with good solutions for both parts: a good design tool which is free of the technical constraints, and a good “compression” mechanism.

Charles,

this fits my model perfectly. I’m glad we’re thinking along the same lines!

Adam

Nick Shinn's picture

Trajan is well spaced, but it employs a methodology that only works for capitals.

Accordingly, it might be a good idea to have two sets of caps in u&lc fonts, same glyphs but different sidebearings: one set for all-caps settings and the other for u&lc.

Not that strange, really, as we already have bi-spaced figures (tabular and proportional).

The trick would be to integrate this into the FontLab interface without cluttering up the main glyph panel; nesting or layers, perhaps?

Mel N. Collie's picture

Dez: "David, I can't envision Type Design as separated from spacing of type."

Did I suggest such a thing is even remotely possible?

Adam: " But I believe it shouldn’t be the only possible way of working"

I agree :) The only caution I'd give is not to go too far with the image compression analogy. Or explain: is an image likened unto a font's glyph, to a font, or to a font family?

And also don't forget that for years I've been saying that simplified PostScript font and hint creation and editing "shouldn’t be the only possible way of working". So I'm surprised you're taking a giant step in spacing before FL drawing gets the more sophisticated capabilities of TT contours and hints.

We should all be pushing, however we can, for more sophisticated drawing, spacing, compositing and hinting tools, integrated with each other and with capabilities at, above and beyond those of the puny human formats we have now, shouldn't we!?

Nick: "Clockwork"

What clockwork!? "ocr" repeats thrice, spaced once for each word. :)

Nick Shinn's picture

Things are fine the way they are.
Why redesign the core to fix a few peripheral glitches?
If you do make changes, please keep classic as default.
I hate it when “upgrades” turn me into a senile novice.

How about slant-angle guides?
That would help kerning italics.
Especially in combination with the Measurement Line.

twardoch's picture

David,

we’re re-imagining our font editing products: in our new codebase (codenamed Victoria), we have rewritten everything from scratch, completely expanded its possibilities, developed a new flexible storage system for working files, and removed the limitation of integer coordinates. We’ll be introducing a new drawing environment with improved drawing tools, some completely new drawing tools for both PostScript and TrueType outlines, and a fully new approach to guidelines and the grid. It’s under heavy development now, parallel to the development of FontLab Studio 6, which will still be based on the current FontLab Studio codebase, with some improvements within the existing paradigm. So we’re working two-fold (short-term and longer-term).

We believe that Victoria will be a radical improvement for type design. For some time, both FontLab Studio 6 and Victoria-based products will likely co-exist, because users will need to re-learn the tools when switching from FLS to Victoria. Also, it’s likely that in the first iteration of Victoria, not all of FLS’s functionalities will be replicated right away.

I myself think of FontLab Studio as being like Adobe PageMaker, and Victoria as being like Adobe InDesign. InDesign 1.0 was, on one hand, much more flexible than PageMaker and free of many of its limitations, but it still took some time until InDesign incorporated (and improved on) all of PageMaker’s capabilities.

As for hinting, there are numerous projects going on right now in the world (there are VTT-based workflows, there is FreeType’s ttfautohint, there is RoboHint, and there are our own FontLab TT hinting tools). We’d love to create a common project with all the parties involved that would allow at least basic interoperability between the various tools. For example, we’d like to be able to use the same basic information that RoboHint or VTT uses. We’d like to be able to read the “wings”, even if perhaps not all of the tools’ functionalities would be the same.

Since we’ve supported Werner Lemberg financially in the development of the FreeType ttfautohint project, and intend to continue to do so, I think it’d be very likely that we could come to an agreement, at least between RoboHint, FreeType and FontLab, on some interoperable notation. Do you know if RoboHint would be interested in open-sourcing or publishing at least some aspects? I think the actual contents of the wings and winglets could be owned by the various developers, but the basic underlying data could be shared. We’d love to get involved in this.

I agree that the image compression metaphor shouldn’t be transferred literally to font editing. For me, the “uncompressed image” is analogous to a “spacing matrix”: the relative positions of each glyph in a font to every other glyph in the font (with the “text run left boundary” and the “text run right boundary” being pseudoglyphs in that matrix). Such a spacing matrix is a finite, “decompressed” representation of glyph positioning in a font. Such a spacing matrix can be expressed or approximated with (i.e. “compressed into”) various combinations of sidebearings and kerning pairs. If the glyph pairs are prioritized by their frequency of appearance in real text, this would influence the compression mechanism: the most frequent combinations should preferably be expressed by sidebearings, while the less frequent combinations could be expressed by kerning. Some very rare combinations, or ones that practically never occur in real life, could be eliminated from kerning entirely, especially if the kerning values were small.

It’s a bit similar to how some audio compression mechanisms work: the frequencies that are less audible to the human ear are compressed more aggressively (more lossily), while the frequencies to which the human ear is more sensitive are compressed more moderately (less lossily).
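
To make this a bit more concrete, here is a toy Python sketch of the idea (the even split of each ideal gap into two sidebearing halves, the frequency weighting and the threshold are all illustrative choices, not a finished algorithm):

    def weighted_avg(pairs):
        """Weighted mean of (value, weight) tuples."""
        total = sum(w for _, w in pairs)
        return sum(v * w for v, w in pairs) / total

    def compress(ideal, freq, glyphs, threshold=4):
        """ideal[(l, r)] = desired gap between glyphs l and r;
        freq[(l, r)] = how often the pair occurs in real text.
        Returns right/left sidebearings plus residual kern pairs."""
        # Each sidebearing is the frequency-weighted average of half of
        # each ideal gap the glyph participates in on that side.
        rsb = {g: weighted_avg([(ideal[(g, r)] / 2, freq[(g, r)]) for r in glyphs])
               for g in glyphs}
        lsb = {g: weighted_avg([(ideal[(l, g)] / 2, freq[(l, g)]) for l in glyphs])
               for g in glyphs}
        kerns = {}
        for (l, r), gap in ideal.items():
            residual = round(gap - (rsb[l] + lsb[r]))
            if abs(residual) >= threshold:   # lossy: small residuals are dropped
                kerns[(l, r)] = residual
        return rsb, lsb, kerns

    glyphs = ["n", "o", "v"]
    ideal = {(l, r): 60 for l in glyphs for r in glyphs}
    ideal[("v", "v")] = 30              # slanted pair wants to sit much closer
    freq = {pair: 1 for pair in ideal}
    rsb, lsb, kerns = compress(ideal, freq, glyphs)
    print(kerns)
    # {('n', 'v'): 5, ('o', 'v'): 5, ('v', 'n'): 5, ('v', 'o'): 5, ('v', 'v'): -20}

Raising the threshold makes the compression more lossy: more of the small residuals are simply absorbed into the sidebearing approximation.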

That’s roughly my direction of thinking. Comments?

Cheers,
Adam

twardoch's picture

Ps. In a message that I cannot find right now, I compared the glyph design and the layout/positioning in a font to the video and audio tracks in a video file. A video file typically has one video track but can have multiple audio tracks (for different languages, and with different mastering quality: mono, stereo, Dolby 5.1).

An analogy could be made that a font usually has one set of glyph images but can have several kinds of layout/positioning info (TrueType “kern” table, OpenType GSUB+GPOS tables, TrueType GX/AAT tables, SIL Graphite tables etc.). The TrueType and OpenType layout models are a bit like English mono (no kerning), English stereo (TrueType “kern” table kerning) and English Dolby 5.1 (OpenType GPOS+GSUB), while TrueType GX/AAT and SIL Graphite are a bit like Norwegian and Slovenian audio tracks. Browsers that can display a font but ignore all kerning and features are like audio equipment that can only play mono tracks.

Of course, these are all just coarse analogies, but if we stick to this one (video+audio tracks vs. glyph design+layout/positioning), we can assume that, just as a video file can use quite independent compression and optimization algorithms for video and audio, the glyph optimization and compression mechanisms (different hinting, embedded bitmaps, different outline flavors, different UPM sizes, outline quantization etc.) are different from the possible layout/positioning optimization and compression mechanisms (that I talked about before). But of course, the tracks still need to be in sync.

I mostly draw up these analogies for myself, so that I can explore existing approaches, processes, algorithms or methods employed by people who work in the fields I borrow the analogies from. Sometimes I find that approaches developed for areas completely unrelated to type can potentially be employed in type design and font production, though of course they often need to be adapted to the specifics of type design. And of course, sometimes my analogies are moot. Yet it’s rather refreshing to still draw them :)

Bendy's picture

>Different outline flavors

I'd be interested to see if there could be any implementation of Spiro/cornu curves in a future FontLab.

twardoch's picture

Bendy,

theoretically, yes. However, I believe Raph Levien’s policy is to grant free licenses for his Spiro patent only to GPL projects, while requiring royalties for the use of his Spiro patents in commercial products.

I have not yet analyzed the sources of open-source fonts made with FontForge to see how many of them have actually used Spiro (which is included in FontForge). I think it would be a reasonable indicator of whether the technology has sufficient interest in the community.

Bendy's picture

I'm planning to give FontForge/Spiro a try.

It could be a bit of a chicken-and-egg problem: commercial foundries, who mainly use FL, may not be aware of Spiro if FL doesn't yet support it. One wouldn't want to exclude that interest if one is trying to gauge it. But then, I don't know how much the foundries look around at other workflows and software. Maybe they have tried FontForge/Spiro.

dezcom's picture

Sorry, David. I misread: "Dez, I think Adam is trying to say that Type Design, and spacing of type, involve quite a bit of difference thresholding and cluster quantizing..."
I misunderstood "difference" as "different".

In the words of SNL's Emily Litella: "Never mind."

brianskywalker's picture

Besides Raph's fonts, Dave Crossland's Cantarell used spiros for the initial drawing (if I remember properly), and I've used spiros to greater and lesser degrees in all my font projects. Sometimes it just depends on what kind of shape I want. Some unpublished things I have been working on use spiros almost exclusively. What would really make spiros useful would be the ability to use them in conjunction with cubic splines at the same time; as in, the ability to set the spline type per contour rather than per glyph.

dezcom's picture

What I would like to see are plus/minus 0-100% slider preferences for typical spacing and kerning patterns. This would allow a type designer using the software to make educated kerning changes character-set wide. Examples would be o-o, n-n, n-u, v-n, v-v, etc.
This would allow for a table of neighboring-shape interactions which could be portable and editable. It would greatly reduce the time required by reducing the number of interactions needed to kern or set sidebearings for a given font. The computer could read the left and right shapes of glyphs for categorization, but allow manual override by the user. When finished, the software could then set classes and metrics globally. Changes could be made globally as well. The user-set slider settings would vary with font width, serifs, contrast, etc., as the user sees fit.
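
Something like this toy Python sketch, perhaps (the shape categories, base values and scaling rule are all invented, just to show the idea of a portable, editable table driven by a slider):

    SHAPE_PAIRS = {                      # base kern at slider = 100%
        ("round", "round"): -10,         # o-o
        ("stem", "stem"): -4,            # n-n, n-u
        ("diagonal", "stem"): -16,       # v-n
        ("diagonal", "diagonal"): -24,   # v-v
    }
    shape_of = {"o": "round", "n": "stem", "u": "stem", "v": "diagonal"}

    def pattern_kern(left, right, slider_pct):
        """Kern for a glyph pair, scaled by the user's slider (0-100%)."""
        base = SHAPE_PAIRS.get((shape_of[left], shape_of[right]), 0)
        return round(base * slider_pct / 100)

    print(pattern_kern("v", "v", 100))   # -24
    print(pattern_kern("v", "v", 50))    # -12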

HVB's picture

@Adam - re: removing the requirement for integer coördinates...

What would be the impact on rendering engines (if any)?

Mel N. Collie's picture

HVB: "...impact on rendering engines (if any)...?"

I'd guess no impact. Sub-integer coordinates get rounded to integer coordinates for the generation of TT or PS fonts. TT and PS fonts are rendered from these integer values, usually today with sub-pixel values being determined from the integer values of the outline coordinates.

AT: "I mostly draw up those analogies for myself... "

I think that's best, as the true nature of type design, from the first scratch in a tool to the deployment of device-, layout- and language-independent typography, is less like anything else, the more one understands it. :)

...and: "For example, we’d [FL] like to be able to use the same basic information that RoboHint or VTT uses."

In VTT's case, you're better off dealing with Microsoft, but we're talking about a documented and extendable font format (SFNT)*, so usability by other programs is entirely up to the program's programmer. VTT adds a table (or tables) that VTT interprets from VTT Talk to TT, and you can read and interpret that talk in FL if you write an interpreter, no?

In RoboHint's case, as Petr presents it, he's just finishing the translation of all the TT hints to Python, implementing scaling, dash-boarding the graphic state (which VTT lost), displaying all the popular renderings and debugging the inter-hint translators. (It's not in alpha yet, but RoboHint's use is not intended to be limited to me or by me.)

On Winglet (a feature-specific, single-dimensional, output-specific hint set) and Wing (a complete single-dimensional, output-specific hint set): they should "technically" be followed by the word "Command", and are followed up in a hierarchy by "Glyph Commands", "Glyph Sub-Group Commands", "Glyph Group Commands" and up to "Font Family Commands" (or they could be "Winglet Command", "Wing Command", "Stroke Command", "Sub-radical Command", "Radical Command", etc., whatever one needs to order their parametric scaling, actually). These are just terms invented to contain Python code.

So, I don't think anything technical should stop a marriage of use if FL is up to date in Python versions, reads and writes TT instructions in one of the common tongues, and if vfb maintains close contact with the capabilities of TT & UFO as they evolve, which is all up to FL, no?

JanekZ's picture

"What would be the impact on rendering engines (if any)?"
I made an experiment and there is the difference in rendered outline. [I made two fonts where one point 1) has integer coordinate and 2) moved a bit. winXP + CorelDraw and FontForge, as FF does not round to integer coordinates for generation of TT 0r PS fonts.]

Té Rowan's picture

@Nick – Why redo the core? Well, since you are not a programmer, you have never seen source code grow scar tissue over time as patches and bug fixes are applied until it becomes an unwieldy, unmanageable mess.

twardoch's picture

Nick,

> Why redesign the core to fix a few peripheral glitches?

As you are a type designer, you’ll know that there are situations when you can, indeed, “fix some peripheral glitches”, such as improving some kerning pairs or fixing a few odd outlines, as I’m sure Robert Slimbach did with Adobe Garamond when he was working on Adobe Garamond Pro.

But then, there is also a time for a comprehensive re-imagining, redrawing everything from scratch, where you don’t need to worry about backwards compatibility, document reflow etc. Which is what Slimbach did with Garamond Premier Pro.

Both Adobe Garamond Pro and Garamond Premier Pro are on sale, and we intend to make incremental improvements to FontLab Studio as it exists today, but also to have a new platform which allows us much greater flexibility. On top of that, the new platform is by design much more stable, because it follows a different style of programming. It’s easier to fix problems or change things, because they’re all cleanly separated from each other rather than being deeply interwoven.

We consider the FontLab Studio codebase “mature”, in both positive and negative connotations of that word. The positive aspect of maturity is that it’s well-known, familiar and predictable, but the downside is that it’s much less agile and flexible, and it takes a lot of time to change things. The Victoria codebase is “young and learning”, currently in its teens. It still needs to learn quite a few things (including learning from FontLab Studio), but has immense potential.

The codebases are 15 years apart, and since software ages a bit faster than people, that’s roughly equivalent to one human generation. There is a time in which both generations (parents and children) are “professionally active”, and we’re approaching the moment where this starts. At some point, the old codebase will be retired and only the younger generation will be active, but it’s still a *long* way to that point.

Best,
Adam

Nick Shinn's picture

As a software user, I wasn’t thinking of the underlying codebase, but the design of the interface.

What not to do: Apple’s recent redesign of its Mail app replaced the text on its main buttons with icons. The previous “Reply All” text has been replaced by an icon I’ve always identified as “Fast Rewind”. “Get Mail” is now represented by an envelope, but why is that get rather than send? Both actions involve envelopes. And so on.

So please, don’t redesign the way the interface looks, or the way tools work.

I could do with a larger palette of Mark colors, though, and the ability to name them, e.g. Yellow = Small Caps.

riccard0's picture

In 1985, after a year of finding that pretty but unlabeled icons confused customers, the Apple human interface group took on the motto "A word is worth a thousand pictures."

http://www.asktog.com/columns/038MacUITrends.html
