Pronunciation Characters

Stephen Coles's picture

What are these diacritics/characters called? They are commonly used by dictionaries to indicate pronunciation and I assumed they were part of the IPA, but no.

Trying to find fonts that include these characters but it's difficult to search without knowing the proper name.

Tomi from Suomi's picture

You could get in touch with this guy; he is a linguist and works with all kinds of languages.

info[at]jltypes.com

hrant's picture

The double-oh with the wide breve and
the red zigzag is called "Furious Unibrow".

hhp

Tomi from Suomi's picture

Unicode: 0_/0

John Hudson's picture

The top four rows are pretty common diacritics found in a wide range of natural orthographies. The macron and breve typically indicate, respectively, long and short vowel sounds. The diaeresis (double-dot) most commonly indicates that a pair of adjacent vowel letters are pronounced separately; the diacritic mark is usually applied to the second letter, e.g. naïf. [In German and some other languages, the double-dot accent represents a different vowel sound; in that context it is referred to as an umlaut rather than a diaeresis.] As used in dictionary pronunciation guides, these diacritics are most likely to perform these common functions, since the whole purpose of dictionary pronunciation conventions is to be easier for non-specialist readers than IPA, which is much more precise and complicated.

The circumflex is used for a lot of different purposes in various languages, and I'm not sure how it would be likely to be used in dictionaries. In a lot of African orthographies, it represents a rising-falling tone. In French it indicates that the word used to be spelled with an s following this vowel (how's that for etymological spelling retention!)

The ŋ is usually called eng and represents the ng consonant sound as in the English word ‘sing’.

I'm guessing that the double oo might be used in some situations to indicate the vowel of words like ‘smooth’ (long) and ‘coup’ (short). [English speakers often think of this as a long u, but it is actually a different sound, and the true long u is as in ‘truth’.]

I've no idea what the slashed letters might be used for. Do you have samples of these and the others in actual use? It should be possible to deduce their function.

aric's picture

Unfortunately, there's no single pronunciation spelling system for dictionaries; every dictionary publisher kind of makes up their own. You might find this article on pronunciation respelling useful.

John Hudson already identified the characters and diacritics. Those in the first four rows should all be defined in the Unicode Standard as precomposed characters. I don't believe any of the characters below those rows are defined as precomposed characters, except for ɇ (Latin small letter e with stroke, U+0247). The others would have to be made with the help of the combining double macron (U+035E), combining double breve (U+035D), combining long solidus overlay (U+0338), and, assuming that thing next to the eng is an acute accent and that it's supposed to be over the eng, the combining acute accent (U+0301). Not many commercial fonts support such far-out diacritics, but I'm guessing Andron Mega Corpus does. Tiro Typeworks is also developing some fonts which I believe would support these.
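
If it helps anyone experimenting with these, here is a minimal Python sketch (purely illustrative; the base letters are arbitrary choices, not taken from the sample image) showing how those forms can be assembled from the code points listed above. Whether they render well depends entirely on the font.

```python
import unicodedata

DOUBLE_MACRON = "\u035E"  # COMBINING DOUBLE MACRON
DOUBLE_BREVE  = "\u035D"  # COMBINING DOUBLE BREVE
SOLIDUS       = "\u0338"  # COMBINING LONG SOLIDUS OVERLAY
ACUTE         = "\u0301"  # COMBINING ACUTE ACCENT

samples = [
    "o" + DOUBLE_MACRON + "o",  # the double macron sits between the two base letters
    "o" + DOUBLE_BREVE + "o",   # double breve, same placement rule
    "i" + SOLIDUS,              # slashed i (no precomposed form exists)
    "\u0247",                   # LATIN SMALL LETTER E WITH STROKE (precomposed)
    "\u014B" + ACUTE,           # eng with a combining acute accent
]

for s in samples:
    print(s, "=", " + ".join(unicodedata.name(ch) for ch in s))
```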

eliason's picture

I think the question is what are these called collectively.

Jongseong's picture

They fall under the broad category of phonetic symbols, and you could always be more specific and call them phonetic symbols commonly found in dictionaries for English. Each dictionary published in the U.S. seems to adopt its own set of ad hoc symbols to indicate pronunciation, although slowly some are adopting the IPA. I've seen most of the symbols above in one system or another, although I don't believe there is a single system using all of those symbols. The slashed characters are baffling, but I suppose they could indicate optional omission as in 'Com(p)ton' or vowel reduction as in 'penc(i)l'.

The International Phonetic Alphabet is a standardized set of phonetic symbols designed to describe the sounds of all natural languages, not just to indicate the pronunciation of English. Most dictionaries of English published outside the U.S. nowadays use variants of the IPA to show pronunciation.

Since the symbols you want are not part of a standardized system, the dictionaries that use such symbols probably use customized versions of fonts if they require uncommon symbols. Nevertheless, the letters with circumflex (3rd row) or diaeresis (4th row) are included in most fonts with good Western language coverage.

The macrons (1st row) and breves (2nd row) are trickier. A font covering the Baltic languages should have most of the macrons (except the 'o'). Scholarly transcriptions of classical Greek and Latin, the Hepburn Romanization of Japanese, and Pinyin for Chinese use the macron, although it is hard to find fonts for these purposes. Vietnamese uses the breve for 'a', and the McCune-Reischauer Romanization of Korean uses it for 'o' and 'u'. Many fonts include the macron and breve separately, though, even if they don't have the precomposed forms. Adobe fonts are useful for this, since their EULA allows some modification by the user.

The eng (1st letter of the 5th row) is used in the IPA as well as for some obscure orthographies that are not likely to be served by most fonts available. I do not recall seeing the double macron and double breve required for 'oo' in any font, though you may try Aric's suggestions.

> English speakers often think of this as a long u, but it is actually a different sound, and the true long u is as in ‘truth’.

As long as we're talking about the sound, the vowels in 'smooth' and 'truth' are identical for most modern varieties of English, and will be so indicated in dictionaries. That is because the 'oo' in 'smooth' represents /uː/ while the 'u' in 'truth' represents /juː/ (pronounced like 'you'), but most speakers drop the 'y' sound if it comes after 'r'. An important exception is Welsh English, which does pronounce something like 'tryooth'.

A better example might be 'hue' (long u) and 'Hoo' (long oo), still pronounced differently by most English speakers.

hrant's picture

> Adobe fonts are useful for this, since their
> EULA allows some modification by the user.

Theuth bless them. Not too long ago I was commissioned to add about 20 special characters for Sanskrit transliteration to Garamond Premier Pro. The only limitation was that I couldn't splice into the kerning classes with FontLab [at the time].

hhp

canvasjoe's picture

Hello everyone,

This thread has been exciting for me to read because it is verifying everything I am studying on my own about American English phonetics.

Stephen Coles, who started this thread, did so as a continuation of our exchange of emails regarding a single font that would conveniently contain all the diacritical marks used by American dictionaries, for the purpose of teaching phoneme/grapheme association to children and adults.

(Phonemes are the smallest distinct sounds, e.g. /t/ in tin, and graphemes are the single letters, or double or triple letter combinations, that represent those distinct sounds, for example the two letters ph for the /f/ sound or the single letter b for the /b/ sound.)

I am a high school science teacher with an interest in American English phonetics. I have taught at high schools for over 15 years, and the saddest condition of students today is that many cannot read well (nor spell or write) and therefore lose all interest in study. Content is also dumbed down to meet their lower skill set.

I have discovered, however, that the teaching of spelling and reading in some private and home-based schools is conducted in terms of grapheme/phoneme association, with greater success than the pedagogical methods for reading used in public school. So I am pursuing research in this not only for my own personal skill acquisition but also in the hope of developing an effective curriculum based on this approach.

I am dismayed that educated Americans do not know that there are 44 sounds in our language and about 70 common graphemes that represent them. This should be taught in grades 1-3. We are unconscious of the depth and mechanics of our own language. If I ask a fellow school teacher how many sounds the letter s makes, they’ll answer that it makes the /s/ sound (as in small). When I show them it makes 3 other sounds as well, /z/ bells, /zh/ treasure, /sh/ sure, they are amused. They have never thought of that. We are not taught that in school (at least not in such a way that it is retained as common knowledge). Another example: what sounds does the grapheme ea make? Short e as in head, long e as in each, and long a as in steak.

Now you can see where a convenient font with diacritical marks would come in the picture.

It is true that the diacritical marks that Stephen Coles posted at the beginning of this thread are used in the ad hoc, keep-it-simple-for-our-readers, Pronunciation Keys of American Dictionaries. These Pronunciation Keys are not standardized, and are used as a tool to help readers pronounce the words in the dictionary. This is all well and good.

But it is also a key to teaching reading and spelling. Students are taught to sight-read, that is, look at the word and memorize it like an image, but not to attack or decode it, that is, decipher the letters into sounds. Word attack, decoding letters into sounds, is poorly taught in public education, and the consequences of this are indeed unfortunate. Children and adults learning English need to know from the beginning what the 44 sounds are and the 70 common graphemes that represent them. Students are in dire need of the ability to sound out words, spell them, know their meaning, comprehend the written word, and write well. The current rate of failure to teach and learn this fundamental skill set is too great.

And what would help immensely is a font system that provides the American English diacritical marks. This basic font system would become indispensable, and yet such a convenient ready-to-use font cannot be found. This can be a highly specialized font that does not need capital letters or numbers. Read on.

Examples:

On the left is the English spelling, and on the right are the phonetic representations (graphemes) of the sounds that comprise those words.

English sounds (phonemes) each have a key grapheme: each of the 44 sounds of the English language has one KEY grapheme that represents only that sound. Hence there are 44 sounds and 44 key graphemes to represent those sounds. The 44 key graphemes are the actual, true “phonetic” alphabet of the English language, but we don’t use it. We use instead 26 letters.

The 44 key graphemes are actually what the American dictionaries attempt to represent in their Pronunciation Keys. The Pronunciation Key is usually at the very beginning of the dictionary, and it typically has 44 symbols (key graphemes) that the dictionary uses to represent, in a purely phonetic manner, the words it defines. Each sound (phoneme) in the word is represented by one key grapheme. Well, this would also be a profound teaching method in the early grades, but there is no font to help teachers and students use this diacritical system to learn to spell and read!

Here is where the strike-through slash comes in. Many letters in English are silent. In the practice of word attack (decoding sounds from letters), students could use a strike-through slash to assist.

The above example is a word attack exercise where the letters that are not purely phonetic are represented with their purely phonetic key graphemes directly above them (with diacritical marks). This could be just one small part of a method of teaching conscious word attack and spelling. Notice the silent letters have a slash through them.

I created these diacritical marks with “font software” from linguistsoftware.com. It is a good program. It creates a virtual “second keyboard” on my computer, and I have to manually switch to it to write the marks. I was displeased with the software at first and hence called Stephen to find alternatives, which I did not find. Even Ergonis PopChar was not helpful. But later they generously offered to customize the software for me at a very reasonable price.

Still, I am convinced that a simple font could be made that contains all 44 key graphemes, the letter-slashes, and any other marks, symbols, or glyphs needed. This font would not need capital letters, numbers, or punctuation. Just the key graphemes appropriately placed throughout the keyboard. The font would be used for phonetic exercises and in that sense would be highly customized.

I downloaded the trial version of Font Creator 6.1 to see if I might make such a font. But one step at a time. I’m still in the beginning phase of research.

By the way, in the image of the marks at the very beginning of this thread, the mark next to the “n” letter is an “accent” mark to represent the stressed syllable in a word. You see them in dictionaries (of course). The “n”, as mentioned in an earlier post, represents the /ng/ sound as in sing, sink, or anchor.

Allow me to make one more example.

We use the common grapheme i, that is, the letter i, every day. But how many sounds does it represent? Guess... Four!

The letter i is a grapheme that represents four phonemes, four distinct sounds. Those four sounds (for pedagogical purposes) can be expressed in written form by 4 of the 44 key graphemes (as you see above). Our children need to be taught this thoroughly and teachers and students need a font to help.

Joe Panico

Jongseong's picture

Joe, this is a fascinating project. Here is my take on it.

I am inclined to agree that students should be taught the idea of phonemes. The major stumbling block for English, of course, is that there is no standardized way of expressing these phonemes as single graphemes outside of the IPA.

The IPA has its drawbacks; while the phonemic inventory of English is mostly consistent across all the spoken varieties, the actual pronunciation of each of these phonemes varies considerably. So a long o, which is a single phoneme in English, may be realized in very different ways by those who speak General American (GA), British Received Pronunciation (RP), Scottish English, and Australian English. Even within the U.S. there are regional differences.

That said, for GA and RP, there is a general consensus on how to represent the English phonemes in IPA. The long o is written /oʊ/ for GA and /əʊ/ for RP. There are numerous advantages of using an internationally standardized system such as the IPA that we should consider before designing yet another set of pronunciation symbols.

All the modern pronunciation dictionaries of English I know use the IPA, and the Longman Pronunciation Dictionary includes helpful spelling-to-sound guides that address questions like what sounds are represented by an 's' or 'ea', all the while employing IPA.

That said, a set of symbols where the grapheme-phoneme connection is made more transparent, such as you suggest, may turn out to be advantageous for the purpose of teaching students to read.

One thing that worries me is that this might veer into narrow dogmatism. There is more variety to English pronunciation than one may realize, and language is always changing. I see you are using the same symbol (u breve) for the vowel in 'cut' and schwa, which will seem illogical for RP speakers to whom the vowel qualities are clearly different (GA is generally considered to have fewer differentiated vowel phonemes than RP).

The statement that there are exactly 44 phonemes in American English is highly problematic. I refer you to Wikipedia:

The number of speech sounds in English varies from dialect to dialect, and any actual tally depends greatly on the interpretation of the researcher doing the counting. The Longman Pronunciation Dictionary by John C. Wells, for example, using symbols of the International Phonetic Alphabet, denotes 24 consonants and 23 vowels used in Received Pronunciation, plus two additional consonants and four additional vowels used in foreign words only. For General American, it provides for 25 consonants and 19 vowels, with one additional consonant and three additional vowels for foreign words. The American Heritage Dictionary, on the other hand, suggests 25 consonants and 18 vowels (including r-colored vowels) for American English, plus one consonant and five vowels for non-English terms.

canvasjoe's picture

Jongseong,

Thank you for commenting. Yes, I agree with all your thoughts and concerns. The language does have many dialects, and other than perhaps the IPA there may not be a truly complete representation system.

My thought is, though, that we have to start somewhere. In the world of small children attending grades 1-3, a simple phonetic system similar to the dictionaries’ will, I think, suffice. The goal is to get children decoding words and reading and spelling well. Whatever phonetic system(s) are effective toward that goal should be developed. Obviously for the early grades the IPA is infeasible, but a dictionary-style system seems doable.

As was alluded to earlier in this thread, when a dictionary grapheme symbol is pronounced by, say, a Bostonian, the sound may be different than when pronounced by, say, a Californian. Regional dialect causes this. Because all the Bostonians are pronouncing it the same way… the system works for them. The same is true of the Canadians and Californians. The regional dialects will interpret the symbols in terms of their own speech pattern. Now, this probably would not work for a linguist who is truly mapping the sounds of all languages and dialects and must have two different symbols for the two different pronunciations.

But this is irrelevant in the immediacy of teaching little Sally how to spell. Everyone in little Sally’s world speaks the same dialect.

There is a formidable distance between the practice of scholarship and practical classroom methodology. However, this distance must be bridged with a gradual reduction of perfection, a morphing into a nuts-and-bolts system that guides 7-year-olds to grasp phonetic awareness.

We are not trying to produce the King’s English but rather provide a system for students to fully understand the phoneme/grapheme association map - regardless of their dialect - for spelling and reading. Their dialect will permeate and determine their own use of such a system. And the system must be flexible enough to meet the needs of dialects.

I personally think the schwa is a wasted symbol. It denotes that some dialects unstress vowels to such a degree (that is, pronounce them so quickly and without clear enunciation) that they come out as almost inaudible little uhs (represented by a schwa). Fine, that is the effect of dialect; however, the word does have a vowel in it, so show the vowel, indicate its proper pronunciation, and then, after the child learns the word, if the vowel gets an inaudible uh in everyday speech, so be it. Dialect will always win.

There has to be a phonetic system provided. It is very likely that if I or someone else created a font and a curriculum based on phoneme/grapheme association, then soon someone else would do the same with their own twist. However, I would like to think there could be a general consensus on a flexible and practical grapheme/phoneme set. One set, or several sets, not written in stone, but a general standard that can be used by teachers, curriculum designers, and education material publishers.

I know that the 44 sounds are not a standard. I did not mean to imply they are an exact number. In fact, I was hoping someone would bring up this point, as you have. I’ve heard 45 sounds, 42 sounds, etc. Maybe teachers in Canada will say there are 47 sounds and those in California 43. Who cares? Just teach the children to read and spell! The number of sounds doesn’t matter when you consider a young struggling student learning to read, a daily phenomenon far removed from university scholars, linguists, and publisher boardrooms. If a system gets Johnny to read and spell with confidence before he reaches the 4th grade, there is the making of a potential professional, and a happier person. Crime and lack of education are correlated.

Right now reading teachers don’t even know that there are 44 (or so) sounds and 70 (or so) common graphemes in our language, let alone have they acquired a conscious mastery of how they associate. Nor do they have a phonetic reading curriculum to teach - and - a font system tool with which to teach it. The point is that something must start now. The font is a smart start. We need the tool to even begin.

I have seen several private groups and individuals who have developed a phonetic system like this to make a livelihood from it. They make their system a proprietary venture. They box themselves into their own world and say they are better than anyone else with a similar system. They mistrust anyone who wants to learn their system, fearing that person could copy and alter it and start their own proprietary system. The only students who benefit are the few who find them. A strictly proprietary approach is not the way to go.

The phonetic system of the English language as applied to pedagogy must exist within an “open source”, public domain arena. Fonts need to be made. Teachers need to get their hands on them, learn the system, and pick their own approach to teaching it. The boxed proprietary approach will not be sufficient. There does need to be some form of industry standard so the variances do not get too far afield. This can be done. Maybe teachers in New York will favor one phonetic font system over several others to choose from. None of this is primary. Getting it started is important. Everything else will develop through the sincerity of the teaching and curriculum community.

The role of proprietorship can co-exist with an industry standard. The market can and must be both public domain and private enterprise. But proprietary fonts, curriculums, online software, or phonetic content publications can all follow the general guidelines of an industry standard for phoneme/grapheme pedagogy.

I look at it like the web. The W3C establishes web standards, and responsible web developers implement the standards so the industry will stay together. There is plenty of creative expression within the framework of the standards. Privately owned companies comply with the W3C standards and their businesses thrive.

Dogmatism is destructive. Teachers are very independent and resourceful folk. Give them the tools and let them go to work.

University courses in Teaching Reading must move in this direction. No one even has a font to begin the work.

Joe Panico

Strabismus's picture

Wow, you guys are amazing for non-linguists! Good job!
Merry Christmas!

Thomas Phinney's picture

Joe:

Generally, I think you make many very good points.

If I may pick one tiny nit, I'll point out that geographic pronunciation differences *can* matter with dictionaries, even with people mapping the dictionary to their particular pronunciation; the dictionary may conflate two sounds that are differentiated in a particular form of the spoken language, or may differentiate two sounds that are conflated in a particular spoken form.

Cheers,

T

John Hudson's picture

> Dogmatism is destructive. ... University courses in Teaching Reading *must* move in this direction. [My emphasis.]

The close conjunction of these statements made me laugh.

Joe, my wife learned to read and write in an experimental programme using ITA (the Initial Teaching Alphabet), which in some respects resembles your proposed phonetic spelling system. The problem with such programmes always seems to be the transition to standard spelling. Children do learn to read phonetic systems very quickly, but unless the transition to standard spelling is well managed, this early advantage will be quickly lost.

Jongseong's picture

> I personally think the schwa is a wasted symbol. It denotes that some dialects unstress vowels to such a degree (that is, pronounce them so quickly and without clear enunciation) that they come out as almost inaudible little uhs (represented by a schwa). Fine, that is the effect of dialect; however, the word does have a vowel in it, so show the vowel, indicate its proper pronunciation, and then, after the child learns the word, if the vowel gets an inaudible uh in everyday speech, so be it. Dialect will always win.

The schwa has to be considered from the perspective of the English language as a whole. Vowel reduction in English occurs to such a degree that several simple vowels and diphthongs weaken to produce a single, indistinguishable sound, the schwa. This new sound has a vowel quality distinct from any of the "strong" vowels. You simply cannot claim that the vowel of "cut" is the original strong vowel that reduces to the schwa in all words that contain the sound.

Now, in General American, the quality of the "cut" vowel and the schwa is closer than in other major dialects. In addition, the reduction to schwa occurs far more readily than in Received Pronunciation, for example. The "cut" vowel, when it comes in an unstressed syllable, always reduces to the schwa. The effect of all this is that to the GA speaker, the "cut" vowel and the schwa are pretty much simply versions of the same sound that occur in stressed syllables and unstressed syllables respectively. The vowel qualities of the GA "cut" vowel and the schwa are still slightly different, but many Americans don't hear the difference.

I've seen older American dictionaries claim this as well, stating that they cannot hear any difference between the "cut" vowel and the schwa except for the stress, and proceeding to use the same symbol for both sounds.

But it is important to understand that this is a feature of the American dialect and doesn't apply to English as a whole. In fact, in modern RP, the "cut" vowel has drifted even further from the schwa (from which it already was distinct) to something approaching how some Americans would pronounce the first vowel in "father".

In addition, there are minimal pairs that prove the "cut" vowel is distinct from the schwa for many RP speakers. They distinguish "unavailable" and "an available" through the difference in vowel quality only. GA speakers, who lack this contrast, have to stress the "un" in "unavailable" to make the distinction.

Welsh English is interesting in that it uses a stressed schwa due to the influence of the Welsh language which uses that vowel. Listening to the stressed schwa used by Welsh or Welsh English speakers will be instructive for GA speakers who have trouble hearing the difference between their own "cut" vowel and the schwa.

Your statement about the schwa being a wasted symbol makes sense in the context of General American. After all, many dictionaries using the IPA, such as the Longman Pronunciation Dictionary, use the same symbols for the "bit" vowel and the unstressed final vowel in "roses", even though the actual vowel qualities are slightly different. This is because no major variety of English distinguishes between the two vowels in the same environment, so they can be seen as versions of the same sound.

Be aware, however, of the limitations of your approach. It doesn't reflect the etymology (the schwas in "alone", "computer", etc. were never pronounced with the "cut" vowel at any stage of the history of the English language) and it fails to accommodate other major varieties of English.

aric's picture

Joe, your idea is interesting, but as an academic endeavor it really needs to be approached more academically. Before you worry about fonts, you need to do a thorough, graduate-level literature review of the history and current state of reading instruction in the United States and other English-speaking countries. You aren't the first to propose a phonetic orthography for teaching English. Have previous attempts been demonstrably more successful at teaching people to read and write conventional English than methods (phoneme-based or otherwise) that begin with normal spelling? Is it really true that current reading instruction eschews phonics? (As the father of a kindergartner, I can tell you that at my son's public school a lot of time is spent on letters and the sounds they make...) Is it really true that current reading instruction is an utter failure? If so, are there any other possible explanations for the failure besides the nature of the orthography or the instructional method? You owe it to yourself to seek out detailed, balanced answers to these questions before you concern yourself with developing a special font. (You also need to conduct a thorough study of American phonology and dialectology. As Thomas Phinney pointed out, there really are phonemic differences from dialect to dialect such that materials prepared for one particular dialect will be wrong for certain other dialects. One example: in the Western US, "cot" and "caught" are homophones; in many New England dialects, they are not.)

If you do determine that reading should be taught based on a modified orthography, and if you can back up that determination with published evidence, then you can develop your font (which could be done in a day or two, assuming you've studied out what symbols should be included) and then you should test it rigorously and publish your results in a forum where they will undergo stringent peer review. It's a lot of work--a lot more work than just throwing a font together and inflicting it on some poor, unsuspecting child--but it's vital if you want to develop the best possible system and have anyone adopt it. If you cut corners, it will only come back to bite you in the long run. Before any university class will move in the direction you suggest, you must convincingly demonstrate that your instructional method is more successful at teaching children to read conventionally written English than the best current methods (most of which presumably begin with conventional spelling).

I'm skeptical that your proposed approach is really the best way forward (although some good solid research could convince me otherwise). If I were working on early reading instruction, I'd focus my efforts on educating parents about how they can better help their kids learn to read--how the school and home can become better partners in this effort. If a kid who knows the basics of reading spends 10-20 minutes a day practicing reading with a supportive adult, in a few years that kid will be an excellent reader. As with anything worth learning, no magical teaching method can replace lots and lots of good practice on the part of the learner.

canvasjoe's picture

Thanks for the many replies,

I guess we all have a lot to say about phonics. I do appreciate all your ideas.
John, I looked at the ITA Foundation alphabet, and their choice of graphemes is exactly what I could never recommend: letters that don’t exist in the English alphabet, for a beginner of English. (Assuming this is what you referred to: http://www.itafoundation.org/alphabet.htm)

Of course a student of that grapheme system would have to learn how to apply it to real writing. Thirteen of the graphemes used in that system are not used in English. Why would a system teach them when there are already graphemes in the English alphabet for those 13 sounds? To me that is unintuitive.

Aric, yes, no doubt academic training in Reading Methods and/or Linguistics is a necessary step for me to continue this research and share it with others. (I am looking at potential grad programs.) This category of approach to phonics is growing, and there has been research on it. The Riggs Institute (http://www.riggsinst.org/Default.aspx) is a forerunner of one approach, using what they call explicit phonics. They’re teaching it in conjunction with other methods. The University of Arkansas is conducting research. I know of other groups. In general there is a growing awareness of the need for “phonetic awareness” programs in reading methods. I’m certainly not alone in this.

Any undertaking like this takes time, research, and patience. It is always collaborative, and any person who makes some advancement does so on the shoulders of the researchers/teachers who contributed before them.
However, I disagree about putting off an English phonetic font system. A publicly available phonetic font is nothing more than a tool that teachers, professors, and curriculum developers could use right now to help them research and develop methods. The need is ripe.

Teaching phonetic awareness is about teaching the 44 sounds and the 70 graphemes. In that sense it is very basic. All dialect issues are filtered out in the early grade levels. The more I learn and integrate English phonetics into my awareness, the more I literally see words in a new way. I find I instantaneously see words phonetically instead of by unconscious sight-recognition. I love it.

No one method is going to be the only method, or the best method, for all readers. Again, I emphasize that teachers need the flexibility to use phonetic systems in the creative ways they see fit. They are the ones in the trenches.

Anyway, the vibes are a little heated in this thread. That’s good. We all have valuable insight.

Thank you, Jongseong and everyone for your replies. This has been most helpful.

Joe

aric's picture

Joe, it sounds like you have a good idea what you're getting into. I went back and looked at your initial post more closely and realized that you're not proposing to create texts entirely from phonetic respellings, which makes me much more comfortable with your proposed method.

There are actually a number of free fonts out there that can (more or less) produce the kinds of characters you're interested in. (i with a slash seems to be problematic, but I suppose you could fix that problem by creating a ligature or something.)

Since these fonts use the standard Unicode code points for these characters (which I think is a good thing), you'd need to create a custom keyboard layout for producing them. This can be done for Windows using the Microsoft Keyboard Layout Creator and for Mac using the web applet at Alex Eulenberg's website (the Mac one can also be done by hand as an XML file, but who wants to do that?). In both cases, you can give other people copies of the keyboard layouts you create.
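
Just to make the idea concrete, a keyboard layout is essentially a key-to-output table; here is a rough Python sketch of the sort of mapping such a layout would encode (the key assignments are invented for illustration, not taken from any real layout):

```python
# Invented key assignments for a phonetic-teaching layer; a layout built with
# MSKLC on Windows or a .keylayout file on the Mac encodes this same kind of
# "key pressed -> Unicode string emitted" table.
PHONETIC_LAYER = {
    "a": "\u0101",   # ā, long a
    "e": "\u0113",   # ē, long e
    "i": "\u012B",   # ī, long i
    "o": "\u014D",   # ō, long o
    "u": "\u016B",   # ū, long u
    ";": "\u014B",   # ŋ, eng
    "/": "\u0338",   # combining long solidus, typed after a silent letter
}

def type_phonetic(keys: str) -> str:
    """Translate a sequence of key presses into phonetic characters."""
    return "".join(PHONETIC_LAYER.get(k, k) for k in keys)

print(type_phonetic("a"))    # -> 'ā'
print(type_phonetic("b/"))   # -> 'b̸'  (the slash attaches to the preceding letter)
```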

You'll want to think long and hard about how to make it convenient to use your system. Typesetting stacked text is no walk in the park, and I imagine a lot of elementary school teachers could be put off by that kind of thing.

From a pedagogical perspective, I think you also want to consider the difference between letters which serve a diacritic purpose and letters which are just exceptional. For example, the e in wide and ounce, the a in boat and beam, and the u in guess are not simply silent letters; e and a alter the sound of the preceding vowel, and u prevents the g from making a j sound. On the other hand, letters like b in limb or gh in through are just historic artifacts with no bearing on the word's (modern) pronunciation. Simply marking all of these letters as silent obscures the fact that some of these letters give us hints on how to pronounce the word while others don't. I don't know how you want to handle that, but I think it's worth thinking about. If you need more symbols, you potentially have the whole set of combining diacritic marks at your disposal. Fonts that can produce a double breve can probably produce most or all of these.

Best of luck to you.

aric's picture

From the discussion on "silent" letters which serve a diacritic purpose, I should add that the e in ounce alters the sound of the letter c.

Tom Gewecke's picture

Joe, the easiest way to make a custom keyboard layout on a Mac for your project is with Ukelele:

http://scripts.sil.org/ukelele

I've made lots of them this way in case you need any help.

I think everything you require is already in the available fonts; it's really the keyboard which matters, so you can easily access what is in the fonts everyone already has on their machines.

I don't think it would make sense to devise new characters, as they will never get into Unicode and thus never be generally available in fonts like IPA and other symbols are.

aric's picture

Tom, Ukelele looks like a really nice tool, and I defer to your superior Mac knowledge. However, if anybody needs to create a Mac keyboard but doesn't own a Mac, the Eulenberg site is a viable option (among others, probably). By the way, kudos to the folks at Apple for using an XML-based format, which makes that possible.

I don't know if your comment on devising new characters is a response to my suggestion of creating a ligature, but let me clarify that suggestion in case it wasn't sufficiently self-explanatory. OpenType provides a way to specify that if two specific code points (say, U+0069 U+0338) occur in sequence, they should be rendered with a single precomposed glyph. My suggestion was to modify a font like Charis SIL to include a precomposed i with combining solidus overlay, in order to visually center the solidus on the i and assure proper sidebearings. The Unicode representation of the i with solidus would be no different whether one uses this modified font or some other font, but if one uses the modified font, the rendering might be more satisfactory.

This kind of thing can be done in FontForge, if you prefer to pay for your font editing software with time, effort, and mental anguish rather than money.
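
For the record, here is a rough sketch of how that modification might be scripted with FontForge's Python module. The font path, the glyph names ("i", "uni0338"), the choice of the 'ccmp' feature, and the offset are all assumptions; a real font may well name or position things differently.

```python
import fontforge

font = fontforge.open("CharisSIL-R.ttf")  # assumed file name

# A GSUB ligature lookup hooked to the 'ccmp' feature for Latin text.
font.addLookup("islash", "gsub_ligature", (),
               (("ccmp", (("latn", ("dflt",)),)),))
font.addLookupSubtable("islash", "islash-1")

# Build an unencoded precomposed glyph from references to i and the overlay,
# shifting the slash so it sits visually centred on the i (offset is a guess).
lig = font.createChar(-1, "i_uni0338")
lig.addReference("i")
lig.addReference("uni0338", (1, 0, 0, 1, 120, 0))  # translate slash 120 units right
lig.width = font["i"].width

# Substitute the sequence i + U+0338 with the new glyph.
lig.addPosSub("islash-1", ("i", "uni0338"))

font.generate("CharisSIL-islash.ttf")
```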

Tom Gewecke's picture

Aric -- good point about not needing a Mac with the Eulenberg site (though you really need to test these layouts on a Mac before distributing them, as it is so easy to make errors). My comment wasn't actually aimed at your ligature suggestion, but your explanation of how one can get better display of i plus solidus without creating a new "character" is of course correct. Hopefully OS X support for OpenType features is now good enough for that to work right too.

PS The i plus solidus is not bad using the OS X system font Lucida Grande, but the solidus position also varies with the character it is applied to.

http://homepage.mac.com/thgewecke/bepri.jpg
