A (very) simple question about OpenType Stylistic Sets

piccic's picture

Although I have read about the issue occasionally in a few threads, I am unsure about this question:
Stylistic sets are ideally created to address a range of glyph substitutions, and I also think it works better to use them for full-range substitutions (e.g. swashed forms, mediaeval forms, etc.).
Conversely, I may just wish to have a single alternate glyph, which may allow the user to have, let's say, a more traditional [t] than the one I have decided to be standard.

In these cases, how do you think it's better to place that simple alternate?
I know some designers (Karsten, for example) like the idea of using stylistic sets even for a single glyph, but I'm not so attracted by this "multiplication of the sets"… :)
Would it be fine to use [salt], or could I maybe just include the glyph and not address the automatic handling of the substitution at all?
After all, in lead we just set the single alternates by hand… :=)
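
Either way, the code involved is tiny. A minimal sketch in feature syntax, with t.alt as a hypothetical name for the traditional form:

feature salt {
    # one-from-many alternate substitution
    sub t from [t.alt];
} salt;

# the same alternate exposed as a one-glyph stylistic set
feature ss01 {
    sub t by t.alt;
} ss01;

The practical difference lies mainly in how applications expose the two features to the user, not in the font code itself.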

twardoch's picture

John,

so n and ń are "the same letter" in the context we are talking about. Are ȷ and j the same letter? Are a and ą the same letter? Are L and Ł the same letter?

I guess it would be useful if we could work out some reliable terminology and distinctions in this realm, especially because the conceptual or "ideal" set-up is often not quite the same as the de-facto or technical set-up. The best example may be something like the ogonek: is ą a base letter with a mark or is it a new letter? Technologically, there is no clear answer: some fonts provide an ą as one solid outline, others as a composite glyph consisting of a and the ogonek, and yet others include a solid outline for ą but in addition include OpenType GPOS mark attachment code for dynamic positioning of ogonek under any letter.
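
For illustration, the third set-up might look roughly like this in feature syntax (a sketch only; the glyph name ogonekcomb and the anchor coordinates are hypothetical):

markClass ogonekcomb <anchor 220 0> @BOTTOM_MARKS;

feature mark {
    # attach any bottom mark dynamically under the base letter
    position base a <anchor 250 0> mark @BOTTOM_MARKS;
} mark;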

So is your criterion the spatial separation of ink? Is it the fact of touching or overlapping that constitutes identity? Is the glyph's identity a "closed shape"?

If "á" is "the same" as "a", but "ą" is different from "a", what's with "Å" (where in some typefaces the ring is part of the letter and in others it is a mark)? What's with the semicolon? (Spatial separation of ink).

In other words: if we agree on a working term of glyph identity to separate it from the font-technological meaning of "glyph" and to denote that certain unique characteristic that makes one glyph differ inherently from another, I gather that your position is that a stylistic set can be formed only if there are two or more glyph identities involved. So now we need a clear agreement on what exactly this glyph identity is :) (And perhaps a different term.)

A.

twardoch's picture

John,

up until 2006, the Swedish language considered V and W to be diacritic forms of the same letter. In Polish orthography, Ą is considered a diacritic form of A but Ł is not considered a diacritic form of L.

twardoch's picture

> That they are also separate characters in Unicode and
> may be rendered using separate glyphs is simply a technical
> accident.
(...)
> Imagine all text is subject to Unicode normalisation
> form D (fully decomposed),

Right. So you say Unicode is "technically accidental" on some counts but should be used as an authority on others? That relativist approach does not quite work when at the same time you're trying to set some sort of objective standards such as "The set is a concept, the contents of which are recognisable by their relationships."

You know that I am a relativist, or a subjectivist at the very least. But then of course I also say to type designers "do what you want with your stylistic sets" (because, indeed, they are yours).

Of course, I, too, like objective standards, simply because we work in a service-oriented industry and our products (fonts) should meet some expectations. So I'd be interested in setting down some objective rules for this particular case.

Thomas Phinney's picture

a and á — and à â ã ä å etc. — do not form a stylistic set

Whether their distinctiveness is "accidental" or not is completely irrelevant to me. IMO, if you want the same effect on all of them (which I'd think you normally would), then it would make plenty of sense to group them in a stylistic set.

If you were constructing those forms through combining marks, you'd get the functionality of changing them all at the same time. It should be practical to get the same effect without having to use combining marks....
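
In feature code, that grouping is a single class-based rule. A sketch, assuming hypothetical .alt forms for each precomposed glyph:

feature ss01 {
    # the base letter and its precomposed diacritic forms change together
    sub [a agrave aacute acircumflex atilde adieresis aring]
     by [a.alt agrave.alt aacute.alt acircumflex.alt atilde.alt adieresis.alt aring.alt];
} ss01;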

Cheers,

T

John Hudson's picture

Adam, read what I wrote in response to your first set of ‘is this the same letter as this?’ questions. Also note that I used the term letterforms, since a letter is an orthographic concept: the same diacritic might be considered a letter in one alphabet but not in another. The term letterform clarifies that we're talking about shapes and the effect of variant substitutions on shapes.

So, once again: If the letterform-part of a diacritic is identical to the base letterform and to other diacritic forms of that letterform, and therefore consistently follows the same stylistic variation patterns, then the mapping of that letterform to a variant form is a single substitution, not a set substitution. Nowhere did I say that this was a matter of spatial separation of ink: it is a matter of whether there are corresponding letterforms and whether (see last paragraph below) the designer wants those corresponding letterforms to be affected in the same way by the substitution.

Remember, I am making these points in light of the upcoming registration of one hundred new features for character variation, and trying to come up with a distinction that will guide developers in deciding what to include in those features and what to include in the stylistic set features. This is a separate issue from whether it is ‘kosher’ for someone to put a à á â ã ä å into a stylistic set feature today. In the future, we are going to have distinct features for distinct purposes, and we need to determine what kind of substitutions belong with which features.

Let's come at this from the other side of the future: let's suppose that the character variant features are registered, published and supported. Now, should the substitution that maps double-storey a à á â ã ä å -- and also ā ă ą ǻ ȁ ȃ ȧ and any precomposed-glyph representations of a plus combining mark -- to corresponding single-storey a variants be put into a character variant feature? Or into a stylistic set feature? Remember, the substitution only affects the a letterform (consider that we're dealing with a typeface like Helvetica in which the lowercase g is already single-storey, so there is no multi-letterform set to be formed, just this one lowercase character and its diacritic forms).

It seems completely obvious to me that this is a single character variant feature substitution, changing the appearance of a single letterform. And this is the only basis on which it is sensible to make the distinction between the purpose of the character variant features and that of the stylistic set features.
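
Expressed in feature syntax, the whole mapping would live in one character variant feature. A sketch, using the then-unregistered cv01 tag and hypothetical .single names for the single-storey forms:

feature cv01 {
    # one substitution covering the letterform and its diacritic forms
    sub [a agrave aacute acircumflex atilde adieresis aring amacron abreve aogonek]
     by [a.single agrave.single aacute.single acircumflex.single atilde.single adieresis.single aring.single amacron.single abreve.single aogonek.single];
} cv01;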

Now, as Thomas indicates, what matters in the development of a specific font is what the maker wants to happen when features are applied. And it is this desire that determines what letterforms and diacritics should be grouped within a single character variant feature. So your question about whether a and ą are the same letter is the wrong question: the right question is whether the font maker wants ą to change in the same way as a when a particular character variant feature is applied. But what I think is obvious is that the choice is between putting both substitutions in one character variant feature or in separate character variant features, and not whether the substitutions belong in stylistic set features. If the character variant features are to have a meaningful distinct purpose, then variations that only affect single letterforms must not be treated as stylistic set substitutions; otherwise, you will have a permanent grey area between stylistic sets and character variants.

piccic's picture

Oh? That’s news to me. Educate me.
[…]
(Seriously.)

Seriously? :)

Reading the definitions it seems to me that a letter modified by a diacritic may be treated either as a new, distinct letter or as a letter–diacritic combination, so in the end it seems what really matters is the identity of a letter.

Although I hardly understand the reason for which (for example) you should treat a letter with a modifier as accented in one language and as another letter in another.
If [ł] is not an [l], what sound does it represent? Because if the sound is completely different, who called it "lslash", and why? And if the sound is l-based, it remains an [l] which just has a variation.

That relativist approach does not quite work when at the same time you’re trying to set some sort of objective standards such as “The set is a concept, the contents of which are recognisable by their relationships.”

"Relativist approaches" never work, but we do not hold "absolute approaches", and I simply can't understand your line of thought, that's all.

twardoch's picture

Claudio,

“lslash” was a name given to the glyph by Adobe. An American company. Why? Because they needed to give some glyphs some names, for technological reasons.

"ł" in Polish represents a sound that is pretty identical to the English "w", i.e. a labial consonant that sounds like a short "u" (not really very similar to "l"). I.e. the Polish "lot" sounds more-less like the English "lot" but the Polish "łot" sounds pretty much like the English "what".

It is futile to assume that letters have some universal similarity in regard to the sounds that they represent.

In Polish, the letter "w" stands for the same sound that in English is written "v", while the letter "ł" stands for the same sound that in English is written "w". In German, the letters "w" and "v" stand for the same sound, but in English they represent two very different sounds. In German, the letter "e" represents some vowels, some of which are similar to what this letter represents in English (e.g. in English "error") while being radically different from what it stands for in some other English words (e.g. "erase").

In Polish and in German, the letter "c" represents a sound that in English can be written as "ts" or "tz", and in German as "z", while in English, the letter "c" represents sounds that in German or in Polish can be written using "k" or "ss"/"ß". In German, the letter "s" is mostly used to write a sound that in English and in Polish is written as "z". Etc.

Diacritic marks have been introduced in many languages (Romance, Germanic and Slavic) because the Latin alphabet had enough letters to represent the sounds found in the Latin language but was definitely not sufficient to represent all the sounds found in those languages.

Other alphabets took a different path. For example, in Slavic languages using the Cyrillic script (e.g. Russian), the completely distinct and graphically unrelated letters с, ш, з, ж represent sounds that in Slavic languages that use the Latin script are written using Latin letters with or without diacritic marks (in Czech, respectively, s, š, z, ž) or digraphs (in Polish, respectively, s, sz, z, rz/ż), and other European languages use yet different letters to represent the same sounds (in French, respectively, s, ch, z, j; in German, respectively, ß, sch, s, — and the last one does not have any representation at all).

In short, using phonetics as a means to make assumptions about relationships between letterforms is a dead end.

Best,
Adam

John Hudson's picture

Claudio...

Adam...

All of which is a good demonstration of why I make a distinction between letterforms (shapes) and letters (phonetic signs).

Cristobal Henestrosa's picture

Actually, "1 or 2" in Spanish would be written "1 ó 2" and the accent is placed for precisely this reason.

Reviving this old thread just to say that the Real Academia Española recently announced that the accent between figures is no longer necessary, so now you are allowed to write "11 ó 12" or "11 o 12".

More info (in Spanish, sorry) here. They are saying that this is because “the computer has eliminated the danger of confusion between the letter o and the zero, which is bigger”.

Cristobal Henestrosa's picture

…So I guess they don’t know about oldstyle figures.

piccic's picture

I missed this. Thanks to Cristobal.

So an "lslash" is not an "l" at all?

I would like to have the type awareness of John Hudson… :-(

JanekZ's picture

And Ż is not Z
It is the last letter in our (Polish) alphabet and is always in indexes, phonebooks, etc. as a separate LETTER
<------ see here ;-)

twardoch's picture

The fact that certain sounds in a language are written with an "old letter" that comes from the original Latin alphabet, or a "newer letter" of the Latin alphabet (K, W, Z), a newly designed letter which does not resemble any of the existing ones, a digraph (two letters), or an "old" letter with some added graphical mark — this does not determine whether that given graphical sign is a "real letter" or not. Those are all real letters, and they're equally important.

Every language uses a different set of letters. In a way, the difference between L and Ł is of the same kind as the difference between E and F. Or between A and Ä. They are different letters; they just share some common graphical elements and differ in others.

The naming of glyphs, or the naming of letters in the Unicode Standard is just a sketchy convention that allows us to communicate about the graphical forms, and is usually based on the English language. In Polish, nobody calls Ł a "letter L with stroke" or "Lslash" or whatever. People just call it Ł (in English, the name could be represented, in a simplified way, as "ehw" or "ehoh").

Cheers,
Adam

piccic's picture

So, it should have been called "Ehw", or "Ehoh". A transliteration is surely better than an "Lslash". I am bugged by this now that I know it…
You said it was Adobe's choice, but when was the name first used? In Unicode it would have been nice to use more "correct" letter names.

Theunis de Jong's picture

The Unicode consortium chose mainly descriptive names -- some "same" characters are called differently in other languages. Actually, that goes for the 'regular' alphabet as well. What's a dubya? I call it a 'wee'.

("Would a Wee by any other name smell as sweet?" Nah... doesn't have the same ring.)

twardoch's picture

> So, it should have been called "Ehw", or "Ehoh". A transliteration is surely
> better than an "Lslash".

No, it isn't. The same character can be used in different languages and can be pronounced differently. "Z" describes a different sound for Polish and for German, and "Ó" describes a very different sound for Spanish, Czech, and Polish. And what about "Ż" and "Ź" — in Polish, their pronunciation is quite close: clearly distinct to a Polish ear but often not distinguishable to foreigners. There are no effective ways to transliterate this. Letters are letters, graphical marks. We need language to describe them. But every description of a graphical mark is inadequate, just like every description of a sound is inadequate. Yet it's the best we can get in order to communicate using language.

This is how language works, fundamentally.

The glyph names were picked in very early PostScript days, decades ago. This also applies to many other glyphs, e.g.: “ (LEFT DOUBLE QUOTATION MARK, quotedblleft) might imply that this quotation mark is to be used on the left side of the quotation. But this is only true for English. In German, this mark is actually the closing quotation mark, to be used on the right side of the quotation.

Both PostScript glyph names and Unicode character names have a long history, and especially the earliest additions are full of inconsistencies. Today, we should just take them as they are. Just DON'T assume that the inventors of these conventions were always right, and your life will be easier. Don't treat these names as something they aren't: full descriptions, or information about the shape or the use. Treat them just as names.

Best,
Adam

piccic's picture

Okay, but as a Polish person which name would you suggest for that letter?
That’s what I am interested in. I don’t think it makes sense to keep a description if it is inconsistent with other descriptions.
An "Eth" is not a "Dhyphen", so I expect an "Ehw" not to be an "Lslash".
Sure, there are things which have been named in some way, but this does not mean we couldn't attain some consistency by asking the people who actually have those letters as representatives of sounds in their own language, right?

charles ellertson's picture

but this does not mean we couldn't attain some consistency by asking the people who actually have those letters as representatives of sounds in their own language, right?

It's not a perfect world.

A number of characters came to have meaning simply because they were available. Phoneticians, being fairly clever, didn't threaten to hold their breath until they turned blue unless type foundries made up new characters for them. They adapted what was available. It is understood those characters have a technical meaning to signal sounds.

Along come anthropologists and use those phonetic symbols (in written form) to represent sounds in languages that had no written form. They're already there, and the anthropologists want to publish, right?

Along come people proud of their ethnicity, who -- after the fact -- derive a written language.

Example: L-slash is used in the written form for some Central American native languages. The written forms are late 20th century; the language is ancient.

Question: What should Unicode call those characters? In other words, who should be offended?

agisaak's picture

@Piccic:

I think 'Eth' represents a special case since AFAIK it only occurs in Icelandic and Faroese (as well as a number of Germanic languages which are no longer spoken), both of which refer to it by the same name. Lslash, on the other hand, is not restricted to any single language or even language family; naming it Ew rather than Lslash might appeal to Poles but not to the Navajo (who would undoubtedly prefer it be called ła, pronounced similarly to lla while pretending to be Welsh).

André

dezcom's picture

"Those are all real letters, and they're equally important."

Well said, Adam!

oldnick's picture

I think 'Eth' represents a special case since AFAIK it only occurs in Icelandic and Faroese (as well as a number of Germanic languages which are no longer spoken)

In the "no longer spoken" category, one should include Old English...

agisaak's picture

In Old English, it also was called 'eð' -- I'm unaware of any language which uses this character but which calls it by another name.

André

Theunis de Jong's picture

Charles mentioned phonology; and that's another area where Unicode chose to describe form, rather than function (at least in some cases).

Take the phoneme 'ɔ'. Its phonological name is "open o", and, as it happens, Unicode also calls it that. But another one, the phoneme 'ʌ', is a variant of 'a' and goes by the ungainly name of "open-mid back unrounded vowel". Unicode calls this one a "small letter turned v", although it has nothing to do at all with the letter 'v' -- its shape has been derived from a small caps 'A' without the crossbar.

--

Theoretically, it's possible to assign a couple of different names to each of the Unicode characters. But I think they are trying to avoid that, so the confusion of 'what name describes what character' doesn't get any more complicated than it already is.

agisaak's picture

Theunis de Jong writes:

Take the phoneme 'ɔ'. Its phonological name is "open o", and, as it happens, Unicode also calls it that. But another one, the phoneme 'ʌ', is a variant of 'a' and goes by the ungainly name of "open-mid back unrounded vowel". Unicode calls this one a "small letter turned v", although it has nothing to do at all with the letter 'v' -- its shape has been derived from a small caps 'A' without the crossbar.

'Open-mid back unrounded vowel' is a phonetic description of the vowel, but I wouldn't consider that to be its name (the usage I'm most familiar with is simply to call it 'wedge'). Calling it 'turned v', though, is rather accurate insofar as that is how it was commonly produced in metal. I don't think that there is any standardised set of names for IPA characters (Pullum and Ladusaw would likely be the closest thing to a standard, but many of the names they use don't reflect colloquial usage among linguists and, like the names adopted by the U.C., tend to be graphic descriptions -- the one which always gets me is 'turned f' to describe [ɟ]).

Nick Shinn's picture

In the "no longer spoken" category, one should include Old English...

"I don't think I have ever told you what an unforgettable experience it was for me as an undergraduate, hearing you recite Beowulf. The voice was the voice of Gandalf." -- W.H. Auden, in a letter to J.R.R. Tolkein.

Surely subsequent professors of Anglo-Saxon have picked up the torch?

JanekZ's picture

Thanks all for excellent info!
This is my attempt to resolve OSF problems with stylistic sets:

[vide: http://typophile.com/node/73413 ]
ss01 substitutes [one] by [one UC I like: \one.osf.alt]

lookup ss01StyleSet1lookup6 {
lookupflag 0;
sub \zero by \zero.osf ;
sub \one by \one.osf.alt ;
sub \two by \two.osf ;
sub \three by \three.osf ;
sub \four by \four.osf ;
sub \five by \five.osf ;
sub \six by \six.osf ;
sub \seven by \seven.osf ;
sub \eight by \eight.osf ;
sub \nine by \nine.osf ;
} ss01StyleSet1lookup6;

hope it works...

twardoch's picture

Janek,

the problem with your code is that when a user activates "onum" and "ss01", he'll get the default oldstyle figures, not the alternates, which I guess is not what the user would expect. I'd suggest moving the "ss01" feature definition before "onum".
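
In feature code (a sketch; lookups apply in the order in which the features are defined in the file):

# ss01 defined before onum, so its lookup runs first while \one is still present
feature ss01 {
    sub \one by \one.osf.alt;
    sub \one.osf by \one.osf.alt; # safety net in case onum has already fired
} ss01;

feature onum {
    sub \zero by \zero.osf;
    sub \one by \one.osf;
    # ... \two through \nine likewise ...
    sub \nine by \nine.osf;
} onum;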

A.

twardoch's picture

Also, on an unrelated note, I'd recommend:
(1) Using proper codepoints (U+2160–U+216F) and glyph names (uni2160–uni216F) for the Roman numerals. For the CC glyph, that would be "uni216D216D" or "uni216D_uni216D".
(2) Designing the Roman numerals a bit like small capitals: somewhat smaller than the full capitals, and with some more letterspacing.
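
The CC glyph could then be produced by a ligature substitution. A sketch, assuming the naming above and a liga feature:

feature liga {
    # two Roman-numeral C glyphs form the CC ligature
    sub uni216D uni216D by uni216D_uni216D;
} liga;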

piccic's picture

Hi Adam,
what's the problem in using personalized glyph names, as long as you respect the Unicode values of a glyph?

Anyway, I still think it doesn't make sense to have some letters named according to one criterion, and others according to other criteria, within the same system, which is Unicode. Of course, "perfection" may be deceiving, but in the case of the "slashed l" the Navajo use is an adoption for transcribing an existing oral language, so it's not what I asked about. The name, in this case, is OK as it was previously defined, since we use a graphic sign which already existed for a Polish alphabet letter.

twardoch's picture

Piccic,

The rationale is given here:
http://typophile.com/node/2994#comment-23796

With some additional explanations here:
http://typophile.com/node/29469#comment-404691

Glyphnames have been part of PostScript since 1982, long before Unicode was conceived (around 1991). This is why there is a set of glyphs that have "mnemonic" glyphnames such as "A", "asterisk" or "zero". The "uniXXXX" convention was introduced in OpenType (1999), after Unicode had become the dominant standard for text and font encoding, because it did not make sense to come up with mnemonic glyphnames for all Unicode characters.

If Unicode had existed before PostScript, there would probably have been no glyphnames; Unicode codepoints would be the only identifiers. That would then introduce other problems, such as the inability to include multiple alternate glyphs for one Unicode character.

You're free to engage in an academic debate about an ideal glyph identification scheme. Such exercises have been done in the past (e.g. Apple's "Zapf" TrueType table), but have not been widely adopted. There is nothing academic about PostScript, Unicode, OpenType, glyph naming or character naming. All these things come from a fairly long technological evolution, and carry a lot of evolutionary weight.

Admittedly, the PDF-compatible glyph naming scheme may be one of font technology's (many) coccyxes. I think that it does not make much sense to debate whether it is necessary; we should just acknowledge that it's there and move on.

Best,
Adam

JanekZ's picture

Hi Adam,
Thanks! Done: Roman.num - first, OSF with ordinary 'one' - second and OSF with oldschool 'one' - third. (Thanks for the help; as I can see, one can always count on you :)
Yours sincerely
J

JanekZ's picture

errata: Roman.num - first, OSF with oldschool 'one' - second and OSF with ordinary 'one' - third.

piccic's picture

Hi Adam,
isn't it better to have glyph names? If they are just for convenience, and the software addresses the Unicode values, I can't see why, for example, in Arabic or Hebrew I should drive myself mad trying to remember each Unicode value whenever I have to design a glyph.

I understand most of these things came out of a situational layered history of technologies, but this does not mean we are forced to stick with that.
For example, what happens if I change just the name (within the font design software) of Hebrew glyphs?

I cannot engage in any academic debate, I have no background. I just try to figure out if I can make things more recognizable. I’m not a technician, although I attended a technical school, but as you say, every field contributed to this evolution; my questions are just for personal consistency, and possibly clarity, when I work.

Thomas Phinney's picture

I take it you didn't actually read the threads in question?

If you change the glyph names to completely non-standard ones, in some situations people making PDFs from your font will end up with the underlying text in the PDF being trashed: broken for search, broken for copy/paste into another doc, etc. It will look fine as long as the font is embedded, but won't be usable otherwise.

If you don't care about that, then go right ahead.

T

twardoch's picture

On top of what Thomas has written, I should add that in older versions of Mac OS X (10.4 and I believe even 10.5), if you don't have PDF-compatible glyphnames in OpenType PS (.otf) fonts, some glyphs may not even display in applications such as TextEdit, Keynote or Pages. (This is because the font renderer in those OSes used glyphnames rather than the font's "cmap" table to map Unicode codepoints to glyphs. This has been resolved in Mac OS X 10.6).

Thomas Phinney's picture

Thanks, Adam.

I'd almost forgotten about that odd Apple decision. I never did understand it. The relevant folks at Apple originally claimed that glyph names were more reliable than cmaps, for determining encoding in OpenType CFF fonts. I found that hard to believe. I long wondered if for some reason to do with code paths, it was easier for them to treat OpenType CFF the same as Type 1, and of course Type 1 fonts had no encoding cmap in the font. Anyway, I'm glad they stopped doing that.

Regards,

T

piccic's picture

Thanks, but all this does not change the fact that glyph naming is totally inconsistent.
I guess the only way to have them descriptive is to change them, and then change them back to numeric values before generating the typeface. Bah.

agisaak's picture

That's a fairly common strategy -- if you are using the AFDKO, tools are provided for doing this using a GlyphOrderAndAliasDB file.
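
For anyone who hasn't used one: the GlyphOrderAndAliasDB is a plain-text file with up to three whitespace-separated columns per line: the final (production) name, the working name used during design, and an optional Unicode override. A hypothetical fragment:

# final name       working name       Unicode override (optional)
uni0123            gcommaaccent
lslash             lslash             uni0142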

Within FontLab, you can also edit your standard.nam file to include your preferred names prefixed with an exclamation mark. FontLab will then correctly assign Unicode values based on those names but, when asked to generate names from Unicode values, will use the more standard ones. Unfortunately, this solution isn't really viable if you are using lots of alternate glyphs, since it will only rename glyphs which are mapped to Unicode values.

André

blokland's picture

Adam: [...] I believe even 10.5

From: OpenType Status 2009:

In OS X 10.3, 10.4: Carbonized applications access codepages via internal mappings. This doesn't always work correctly.

For Cocoa/AAT the Unicode number was recalculated from the PostScript glyph name, although these fonts have a Unicode cmap!?!

Name problems: Gcedilla – Gcommaaccent; Greek fonts: mu, Delta, sigma1, Omega; encoding problems with CFF custom encoding.

In OS X 10.5 and 10.6 Apple introduced Core Text and replaced the older text APIs.
The problems have been fixed. (A good reason to upgrade!)

FEB

dezcom's picture

"(A good reason to upgrade!)"

Except if you still have a pre-Intel Mac :-)

Mark Simonson's picture

10.5 works on pre-Intel Macs. You lose Classic, though. 10.6 is the only one that's Intel-only.

twardoch's picture

I'd say that if "gcommaaccent" only works on Mac OS X 10.5 and 10.6, while "uni0123" works in all Mac OSes dating from 10.3 or probably earlier, there is no reason to ship "gcommaaccent".

The current STANDARD.NAM and all the encodings that ship with *Fontographer 5* follow the updated naming guidelines, so "gcommaaccent" will be changed to "uni0123" if you choose "Make PDF-compatible naming" in FOG, and -- if you install Fontographer 5 on the same machine as FontLab Studio 5 -- will also work when you choose "Generate Names" from Glyph / Glyph Names in FLS5.

As I wrote earlier, I recommend installing Fontographer 5 demo if you're using FontLab Studio 5. You'll get updated glyphname-to-Unicode lists and encoding files as well as a set of new "OpenType" encodings ("OpenType Std", "OpenType LatPro", "OpenType LatPro SC", "OpenType LatCyrGrk", "OpenType LatCyrGrk SC", "OpenType LatCyrGrk WGL4" and "OpenType LatCyr Asia").

A.

dezcom's picture

"I recommend installing Fontographer 5 demo if you're using FontLab Studio 5"

OK, but this sounds quite strange, Adam. Can't we just get a copy of the revised standard.nam file?

twardoch's picture

I've made those available here:
http://www.fontlab.net/downloads/FontLabDataFiles2010.zip
but this location is not official/permanent (since this ZIP does not come with a nice installer etc.). But they're the same files as those bundled with FOG 5.

Best,
Adam

agisaak's picture

Frank,

I was hoping you could clarify something from the .pdf you linked to (I realise you weren't the author, but perhaps you know the answer since I assume you were involved in the conference).

The file states "Always use the available Unicode [value] even if the glyphs are selected through an OpenType feature".

I'm interpreting this to mean that in a substitution such as

feature hist {

  sub s' @Letters by s.hist; # s.hist = long s

} hist;

s.hist should be encoded as U+017F. Am I interpreting this correctly? The impression I had received from reading other discussions on this forum was that s.hist should be left unencoded and that a separate (duplicate) glyph, uni017F, should be encoded at U+017F.

Do you know the rationale behind Dr. Willrodt's recommendation?

Thanks,

André

dezcom's picture

@Adam "But it's the same files as those bundles with FOG 5"

Does that mean that FontLab Studio 5 will recognize and accept the same .nam files as FOG5? or do I need to edit the file at all?

agisaak's picture

@Chris,

Yes, the .nam file is the same for both. No editing needed.

André

dezcom's picture

Thanks André!

twardoch's picture

Chris,

yes, one of the things we changed in FOG5 is to drop its old encoding formats and switch to FontLab's formats (.nam and .enc, not .cpg, but also vendor.dat, though no alias.dat yet).

A.

dezcom's picture

Great! Thanks, Adam! Now I can just grab it from my FOG5 folder!
But what effect, if any, will it have on my old legacy files already done using the old NAM file?

ChrisL
