Coming soon (letters & symbols, preview)

Andreas Stötzner's picture


The forthcoming new version of Unicode will provide new blocks for symbol and pictographic characters. A new era of fontography beyond lettering is about to dawn.

This new font will feature a large number of new Unicode 6.0 codepoints for symbols. It is the first font to support letters and pictograms alike, based on a large-scale standardized encoding scheme.

Nick Shinn's picture

The forthcoming new version of Unicode will provide new blocks for symbol and pictographic characters

Code bloat.
Will this make an "honest man" of Wingdings?

Andreas Stötzner's picture

Will this make an "honest man" of Wingdings?

No it won’t.
Honest efforts have been made not to repeat the “Zapf Dingbats” mess in the code charts.
I’ll report on some details later.

Xavez's picture

Ooh. Now I'm curious.

Igor Freiberger's picture

I'm somewhat surprised to see charts from the Unicode 6.0 beta. Lots of new glyphs, especially emoticons and common signs. But it still lacks important accented characters used in minority languages – such as Yoruba and Tupi – and also symbols for Cartography, Chemistry, Economy, Medicine, Work Safety and Ergonomics. All of these are far more important than some of the emoticons proposed.

I understand Unicode is based on proposals. So the problem is not with the Unicode Committee but with the organizations which handle these matters and must ask for additions.

Andreas: the signs you are previewing here are wonderful. Hope to see more very soon.

Andreas Stötzner's picture

it still lacks important accented characters

… which have a chance of getting encoded *only* if you manage to make a case that it is not possible to handle them with combining character sequences.
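To make the distinction concrete, here is a small sketch using Python's standard-library `unicodedata` module, contrasting a letter that has its own precomposed codepoint with one that can only exist as a combining sequence. (The Yoruba example is my own illustration, not one from this thread.)

```python
import unicodedata

# French é exists both precomposed (U+00E9) and as a combining sequence
# (U+0065 + U+0301); NFC normalization folds the sequence into one codepoint.
combining = "e\u0301"
assert unicodedata.normalize("NFC", combining) == "\u00e9"
assert len(unicodedata.normalize("NFC", combining)) == 1

# Yoruba ẹ́ (e + combining dot below + combining acute) has no single
# precomposed codepoint: NFC composes only as far as U+1EB9 (ẹ), and the
# acute accent remains a separate combining mark.
yoruba = "e\u0323\u0301"
nfc = unicodedata.normalize("NFC", yoruba)
assert [hex(ord(c)) for c in nfc] == ["0x1eb9", "0x301"]
```

This is exactly why such letters no longer need new codepoints: rendering them correctly is a font and shaping-engine matter, not an encoding one.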

…symbols for Cartography, Chemistry, Economy, Medicine, Work Safety and Ergonomics

In principle, you’re right. The question is how to deal with it. Unicode/ISO offer a fair procedure for getting such things done. But the point is: who is going to do the work? Who is willing to fund the labour necessary to write a proposal for 500 characters?
Ask M. Everson about his efforts to get Egyptian Hieroglyphs encoded. Ask a cartographer about the willingness of his colleagues to undertake encoding work in their own subject. – You will get very disappointed.
However sensible an ALIEN MONSTER (U+1F47E) may be as a character in *our eyes*, someone proposed it and made a case for its usage. Therefore it gets in.

I hope to be able to explain a bit more about the pending additions in the near future.
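As a quick check, the U+1F47E character mentioned above can be looked up in Python's bundled Unicode character database. (This assumes a Python build whose `unicodedata` module carries Unicode 6.0 data or later; older builds raise `ValueError` for a then-unassigned codepoint.)

```python
import unicodedata

# Look up the character that Unicode 6.0 assigned at U+1F47E.
ch = "\U0001F47E"
print(unicodedata.name(ch))      # ALIEN MONSTER
print(unicodedata.category(ch))  # So (Symbol, other)
```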

Igor Freiberger's picture

Thanks, Andreas. As I said, the problem lies with the organizations (or researchers) who handle these matters. It's a pity that interest in Unicode, one of the most important standards, is not as wide as it should be. As an example, there is the little issue I mentioned in another thread: the Brazilian currency sign was dropped in 1967, but Unicode still lists it as current. I'm pretty sure nobody in Brazil ever contacted the Unicode Consortium to correct this.

hesi quetions's picture

Hi, wow! That's great – the addition of new Unicode 6.0 codepoints to fonts.
Success to you.
Congratulations,

quadibloc's picture

I do remember reading that Burmese, having been encoded along the lines of Unicode purism, is therefore very difficult to implement.

Accented letters for French, German, and Italian are in Unicode even though they could be handled by combining sequences - and what about Korean? Instead, Unicode should handle all languages equally - it should be as simple and convenient to process Tupi, Yoruba, and Burmese by computer as French or Korean.

And assumptions about the technology in use shouldn't be made either - first encode glyphs, then characters (e.g., for Arabic) - so that one can always print a desired set of glyphs, even if they don't follow the usual rules of the language. (Tables of initial, medial, and final forms, anyone?)

jch's picture

I think there's no doubt that if anyone on this list were to redesign Unicode from scratch, he'd make a better standard than the official one. That's not the point, though -- the value of Unicode is that it's a standard that everyone in the world uses, whatever their country or their OS. (And don't get me started about TRON.)

There are good political reasons why Unicode turned out the way it has. Originally, it was meant to be just the way you envision -- 16 bits wide, no precomposed characters, CJK entirely unified. In order to make Unicode compatible with the ISO 10646 charter, a few principles needed to be sacrificed:

  • large numbers of precomposed characters ("accented letters"), for compatibility with legacy encodings;
  • de-unification of parts of CJK, S with comma accent, etc.;
  • presentation forms (Arabic shaping forms, the fi and fl ligatures, etc.).
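Those compatibility characters can still be folded back onto the "pure" characters today via NFKC normalization. A minimal Python illustration, using the standard `unicodedata` module:

```python
import unicodedata

# The fi ligature and the Arabic isolated-form alef got their own
# codepoints only for legacy round-trip compatibility; NFKC maps them
# back to the underlying characters.
assert unicodedata.normalize("NFKC", "\ufb01") == "fi"      # U+FB01 → f + i
assert unicodedata.normalize("NFKC", "\ufe8d") == "\u0627"  # U+FE8D → plain alef
```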

So could you design a better universal coding? You bet. But could you get all of the world to accept it without sacrificing at least some of your principles?

--jch

Khaled Hosny's picture

AFAIK, most "sacrificing" of Unicode principles was for political rather than technical reasons. The whole idea of round-trip conversion (which is nonsense, if you ask me) was a purely political decision to convince national standardisation bodies that they could convert back and forth between Unicode and their existing national standards. Now that Unicode is past this stage, and whoever does not play by Unicode's rules is left behind, they can enforce the original principles (not exactly now – this has been going on for a while).

Andreas Stötzner's picture

Unicode 6.0 is official as of today. Read more here.
