Image by qi xna


A Brief Introduction to Hwayih Woen

Cogs Illustration


Broadly speaking, there exist three primary factors that characterize the Hwayih Woen system as a whole. In no particular order of importance, those three factors are:

  1. Junior script unity across Simplified and Traditional character sets

  2. Phonetic division based upon the “General American” accent (a.k.a. the Pacific Northwest accent)

  3. Extensive overlapping tolerances for phonetic representation

The first simply means that the minuscule characters are the same in both simplified and traditional. The reasons and significance of this design decision—and the distinction between minuscule and majuscule character forms—will be made fully apparent.

The second is fairly self-explanatory, particularly since the creator of this system is a native speaker of English as it is spoken in California, which is in turn a de facto national standard thanks to the centrality of Hollywood in the absence of any official language regulatory body in the United States. It is still worth mentioning, however, because point #2 presents a future opportunity for the Hwayih Woen system to grow. But what exactly does this mean? While there are only 26 letters in the English alphabet, there are more than 45 distinct sounds in the English language as it is spoken in California, and by extension more generally across the majority of the United States. The characters listed in the phonetic characters section of this website represent all the sounds of general/standard American English on a 1:1 symbol-to-sound basis. That is to say, each of these characters is assigned one of these 45+ sounds, which, when combined, can accurately and phonetically represent any word in general/standard American English. Being geographically specific in nature, the phonetic map is by definition incapable of fully representing, on a strictly phonetic basis, the full spectrum of English regional variation, for the simple reason that there exist sounds in, for example, Australian English or the Queen’s English that simply do not exist in the United States. However, should this system gain traction among its target audience, Hwayih Woen is perfectly poised to take on any number of “extension” lists that incorporate additional characters with phonetic values representing regional variants, so that in addition to the original phonetic map, there might also be an “Australian extension” or a “British extension” to the phonetic roster. But as stated before, such a project remains a future opportunity and beyond the scope of the current work.
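The “extension list” idea can be sketched as a small data model: a base inventory of phoneme labels that a regional extension may add to, but never redefine, so the 1:1 symbol-to-sound guarantee is preserved. Everything below is hypothetical illustration; the labels, counts, and extension names are invented for the sketch and are not the actual Hwayih Woen roster.

```python
# Hypothetical sketch of the "extension" mechanism described above.
# Phoneme labels here are invented placeholders, not the actual
# Hwayih Woen phonetic map.

GENERAL_AMERICAN = {"ae", "eh", "ih", "aw", "schwa"}  # abbreviated base set

def extend(base, extension):
    """Merge a regional extension into a base inventory.

    Extensions may only ADD phonemes; redefining a phoneme already in
    the base map would break the 1:1 symbol-to-sound guarantee.
    """
    overlap = base & extension
    if overlap:
        raise ValueError(f"extension redefines base phonemes: {sorted(overlap)}")
    return base | extension

# A hypothetical "Australian extension" adding sounds absent from
# General American English.
AUSTRALIAN_EXTENSION = {"au_non_rhotic_er", "au_bath_a"}

AUSTRALIAN_ROSTER = extend(GENERAL_AMERICAN, AUSTRALIAN_EXTENSION)
```

The design choice worth noting is that an extension is purely additive, which is what lets the original phonetic map remain stable no matter how many regional rosters are later layered on top of it.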

The third is arguably the most peculiar aspect of this system. While the phonetic map provides a 1:1 ratio between character and sound, this does not mean that there is only one way for the user to graphically represent any given word. In fact, there are often multiple ways to do so, and the implications shift slightly depending upon whether one is examining the junior script or the senior script. In the context of the junior script, it is often the case that a vowel within a given word can be substituted for another vowel without resulting in what would conventionally be considered a mispronunciation or an “accent.” This is simply because, within any given language, idiolects do in fact exist. As a point of comparison, what makes spoken English different from a language like Modern Standard Mandarin is that there has never been a comprehensive dictum from a central authority regarding how every word in the dictionary ought to be pronounced. A dictionary might append a pronunciation guide to each entry, but Anglo-sphere society and education regard these entries as suggestions from the publisher (assuming the annotations are even understood), with the more authoritative pronunciation guide for any given word being regional convention. As an example, William F. Buckley and Robert Kennedy were both white American men whose accents sounded nothing alike. But more significantly, neither man was ever in a position to regard the other’s accent as “mispronounced” or “non-standard.” The same cannot be said of a comparison between, for example, Aisin-Gioro Puyi and Chiang Kai-shek, because the former’s pronunciation clearly aligns with the universal phonetic annotation found in every Chinese dictionary while the latter’s is clearly divergent prima facie. The purpose of this comparison is to demonstrate that Hwayih Woen, or rather its “junior script,” is not a system of standardization, nor could it ever be.
Even among native English speakers from the same geographical region, the way certain vowels and consonants are handled is not identical, which is the very definition of an idiolect. As previously stated, there does not exist an objective authority in English to which an individual can appeal to arbitrate pronunciation differences, as there does in Modern Standard Mandarin, and there probably does not need to be, given how long English has gone without one. But more importantly, mapping the entirety of the English language is unnecessary for a new orthographic system to be functional. This is where the senior script demonstrates one of its many advantages over the junior script. That is to say, by incorporating the logographic use of characters beyond the base phonetic map, the Hwayih Woen user can sidestep the question of how to accurately represent the pronunciation of a given word, much in the same way that languages as different as Mandarin, Korean, and Japanese can look at the same character for the word “white” (i.e. 白) and arrive at completely different pronunciations, which is exactly the advantage of using a logographic system at all! This is precisely the meaning of “overlapping tolerances.” By leaving the question of how to accurately represent the pronunciation of a given word to the discretion of the individual user, Hwayih Woen incentivizes the user to increase senior script logographic character usage, the significance of which will be fully explained in the following section.

Chinese Warrior Statues


The genesis of this project began in 2017, when the author was teaching English to native Mandarin speakers in southern China. Despite Chinese having no connection to any alphabetically represented language, the use of Roman letters has been universal across the mainland since the dawn of the 21st century, in large measure because the Hànyŭ Pīnyīn romanization system is entrenched by central edict in every elementary and secondary school in the country. However, despite this universal familiarity with the names of the Roman letters, the students’ ability to understand the sound values inherent to the letters was consistently dysfunctional. This was easily predicted: the Hànyŭ Pīnyīn system ascribes non-English sounds to Roman letters, which, combined with the idiosyncrasies of the English orthographic system itself, created a multi-layered problem that the typical mainland student had to overcome all at once in order to make the meaningful progress required to avoid frustration, and ultimately quitting.

Arguably, the most immediate problem facing the students was the inability to visually represent on paper the aural phenomena they were expected to recognize and vocally reproduce. The inconsistencies of English spelling created a situation in which written English provided zero visual aid to the students, instead creating greater learning difficulties rather than serving as a legitimately effective pedagogical tool. Compounding this problem even further, the traditional American diacritic system (e.g. “ă, ĕ, ĭ, ŏ, ŭ” to indicate short vowel sounds) used to teach native English-speaking kindergartners phonics was also ineffective in the classroom, because the students were too young to adequately differentiate between a phonetic symbol and an English letter, a problem deriving from the simple fact that both systems are based upon the Roman alphabet and are therefore visually similar in the extreme. Combined with the Roman-alphabet basis of the Hànyŭ Pīnyīn system, this meant that Chinese students as young as 7 and 8 were having to simultaneously juggle three completely different orthographic systems that, from their perspective, were visually indistinguishable from one another, in addition to the square-block script of their own native language as well as the International Phonetic Alphabet, a system equally problematic as Hànyŭ Pīnyīn due to its visual similarity to the general Latin script, but nevertheless still serving as the universal default within the Chinese education system for notating English pronunciation when the spelling is unreliable.

The classroom needed a system that could adequately represent the sounds of the English language on a 1:1 sound-to-symbol basis while remaining visually distinct from the Roman alphabet. After some research, the author discovered that there did indeed exist historical systems that explicitly sought to accomplish this goal, the more notable ones being Deseret in the United States (a completely original alphabet developed by the LDS church under Brigham Young) and Shavian in the United Kingdom (an orthographic script developed by Ronald Kingsley Read in the mid-1950s). The objective of both these non-Latin scripts was the same: to provide a simple, phonetic orthography for the English language that would replace the difficulties of conventional spelling while avoiding the visual impression that the new spellings were simply “misspellings.” By these standards, either Deseret or Shavian would have been a perfect pedagogical tool for the mainland Chinese ESL student. However, when the instructor employed either Deseret or Shavian in the classroom, the students immediately resisted due to the visual unfamiliarity of the scripts. In other words, from the perspective of the students, having to learn an entirely new writing system was a task so daunting that the mental resources necessary to memorize the shapes of each new symbol impeded their ability to memorize the sound values those symbols were supposed to represent.

Fortuitously, this was exactly the time when the author discovered the work of Jonathan Stalling, a professor of English at the University of Oklahoma who created the Stalling Chinese Character Phonetic Transcription System, colloquially referred to as “Pinying” (拼英), an obvious play on the word “pīnyīn.” Like Deseret and Shavian, Pinying also seeks to account for all the sounds of the English language on the basis of a 1:1 sound-to-symbol ratio. However, unlike the former two systems, Stalling’s system ascribes the English sound values to the square tetra-graphs of the students’ native language, that is to say, Chinese characters. Upon discovery of Professor Stalling’s work, the genius of his system was immediately apparent. By using Chinese characters to represent English phonetics, the mainland ESL student had a way to fully reproduce sound on paper that was visually distinct from the Latin alphabet, yet also immediately accessible by virtue of his native language. But even more curious was the way in which Stalling’s system handled English sounds that do not exist at all in Mandarin (e.g. both variants of the “th~” sound). For those sounds unique to English, Professor Stalling invented brand-new, wholly original characters that do not actually exist within the standard Chinese lexicon, a practice with historical precedent: Russian Orthodox priests likewise “invented” new characters in the 19th century as a pedagogical tool to represent sounds unique to the Slavic languages. But even more astonishing was the fact that mainland ESL students, including those as young as 7 and 8 years old, did not reject Stalling’s artificially generated characters as they did the Deseret or Shavian phonetic symbols, a phenomenon readily explainable given the requirements for literacy in the students’ native language. But what exactly does this mean?
The Jonghwa Tzhhae (中华字海), the largest in-print Chinese character dictionary, compiled in 1994, consists of 85,568 different characters. This means that the vast majority of monolingual Chinese speakers go to the grave without having learned how to write or speak anywhere between 80~85% of their own language. Ergo, encountering wholly unfamiliar characters is a recurring fact of life that begins in kindergarten and technically never ends, regardless of how much schooling the native speaker may or may not achieve. What this creates in the mind of the native speaker is a disposition that is receptive to Chinese characters even when a character’s sound value is unknown. Whenever the native Chinese speaker encounters an unfamiliar character, he never asks himself “what is this?”, as he would with Deseret, Shavian, or any other non-Latin, non-Chinese script. Rather, the question is always “how do I pronounce this character?”, followed immediately by reaching for a dictionary. This reaction becomes so predictably entrenched that it becomes, in a sense, a free-standing reflex. The brilliance of Stalling’s system is that it uses this peculiar feature of the native Chinese speaker’s mind as a self-reinforcing mechanism, impelling the student to focus on unique English sound values on a 1:1 sound-to-symbol basis while simultaneously avoiding the inherent psychological rejection of unfamiliar symbols!

The implementation of Stalling’s system in the classroom proved an immediate success, providing the students with exactly the tools they needed to visually represent new and/or unfamiliar sounds on paper. However, technical compatibility issues surfaced immediately when the author attempted to scale the system up across the entire teaching staff under his management at the time. Specifically, while Pinying was perfectly feasible in the context of a single instructor writing examples on a whiteboard or students writing out exercises on sheets of paper, it was nearly impossible to create any kind of digitized documentation or customized teaching material, simply because the characters Professor Stalling created did not exist in Unicode. The initial measure taken to address this issue was simply to hard-substitute the characters that did not exist in Unicode with extremely rare characters that nevertheless did, the rationale being that by using rare characters that most native Chinese speakers did not themselves know, the instructor could hard-substitute the dictionary-mandated Mandarin pronunciation with an English phonetic value in the minds of the students (much in the same way that, for example, the letter Q in Hànyŭ Pīnyīn bears absolutely zero resemblance to how the letter is actually pronounced in English). This workaround, while viable, was technically a departure from Professor Stalling’s system. At this point, a design decision had to be made. If only a portion of Stalling’s lexicon were modified for Unicode compatibility, a version-tracking problem would be introduced for both the instructors and the students. That is to say, it would be visually unclear whether a string of Chinese characters was using Stalling’s original system or the author’s “customized” version of Pinying.
Furthermore, the vast majority of Stalling’s lexicon consists of commonly used characters, the justification being that they provide pronunciation hints to the students. Consequently, rather than alter Pinying’s original form, the author decided to create a parallel structure that mapped over Professor Stalling’s formulation, one consisting entirely of rarely used characters that nevertheless exist in Unicode. In other words, every sound in the English language can be represented by one of two different characters: the Pinying original, and an obscure yet Unicode-compatible character selected by the author on the basis of shared pronunciation with the former, similar to how each letter in the English alphabet can also be represented by two different forms, capital and lowercase. That is to say, Professor Stalling’s original character map was, for the most part, left unchanged and regarded as the majuscule form, while the Unicode-friendly character is regarded as the minuscule form.
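The parallel structure just described can be sketched as a simple lookup table in which every sound owns exactly two forms, analogous to a capital/lowercase pair. The character pairings below are invented placeholders chosen only to make the sketch runnable; they are not the real Pinying or Hwayih Woen assignments.

```python
# Illustrative sketch of the two parallel character rows described above:
# each English sound maps to a majuscule form (the Pinying-style original)
# and a minuscule form (a rare but Unicode-compatible character).
# The pairings below are INVENTED placeholders, not the real assignments.

PHONEME_FORMS = {
    # phoneme: (majuscule, minuscule) -- hypothetical pairings
    "k":  ("克", "剋"),
    "ae": ("啊", "呵"),
    "t":  ("特", "忒"),
}

def transcribe(phonemes, form="minuscule"):
    """Render a phoneme sequence in one of the two parallel rows."""
    index = 1 if form == "minuscule" else 0
    return "".join(PHONEME_FORMS[p][index] for p in phonemes)
```

Because both rows key off the same phoneme list, switching between majuscule and minuscule output never changes which sounds are represented, only which character row renders them.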

This was the seed of the entire Hwayih Woen system. It began as nothing more than a parallel structure founded upon Professor Stalling’s work, built to facilitate digital input as well as to wean students off visual hints when reproducing English sounds.

Image by cheng feng


Given the development history elaborated in the previous section, the utility of the system to the mainland ESL student should, at this point, be patently obvious. However, after the author completed the majuscule-minuscule binary structure, a further utility began to emerge, one directly applicable to the inverse demographic of the author’s original target audience: Anglo-sphere Overseas Chinese.

Throughout the entire Anglo-sphere, long-standing Chinese enclaves define the landscape of any sufficiently large metropolitan area, a fact proven by the existence of some form of “Chinatown” in every major urban center in the West. Entire generations of Chinese people have been born and raised inside these Chinatowns, giving rise to colloquialisms, such as ABCs (American Born Chinese), BBCs (British Born Chinese), CBCs (Canadian Born Chinese), etc., to refer to the native populations of these enclaves. Within these communities, often, but not always, centered around some kind of church congregation, language attrition is an inescapable fact of life and is usually the earliest indication of wholesale assimilation, a process that typically achieves irrevocable completion within about 3 to 4 generations. This predictably repeated process arises from a number of concurrent factors.

First, the linguistic environment of the Anglo-sphere is oppressively monolingual. This does not mean that large populations speaking non-English languages do not exist. On the contrary, the English-speaking countries by and large use the word “multiculturalism” as a tool to exalt themselves. However, these linguistic populations do not perpetuate themselves. If one already speaks a non-English language, one can certainly find opportunities to use it in the Anglo-sphere. However, if one does not already understand the language, the barrier to entry into these language populations is functionally impossible to surmount, because there exists no clearly defined pathway by which one can gradually increase linguistic prowess and eventually become a full-standing member of any given language community.

Second, education in the Anglo-sphere guarantees that the ONLY language possessing such a systematic path to proficiency is English and English alone. Dual-immersion private schools are a phenomenon that arose only after the year 2000, and even twenty years later, they remain a tiny fraction of the private education industry, to say nothing of how much further the percentages shrink when the public education system’s lion’s share of the general education market is taken into account. By and large, the Overseas Chinese born into these enclave communities attend English-medium schools from kindergarten all the way to university, whether public or private, a significant contrast to the Overseas Chinese communities in places like Malaysia or Indonesia, where “Hwayeu Shyueshiaw” (华语学校) have a long and entrenched history. This kind of environmental pressure massively incentivizes individual families to make English the language of the home in order to accord with the requirements of their children’s education (to say nothing of the commonly seen phenomenon in which children of immigrant parents refuse to speak their heritage language even if the parents attempt to make it the language of the home). The traditional counterbalance to this issue, assuming the community is centered around a church or other religious and/or community organization, is to provide what is known as “Chinese school” over the weekend, stereotypically after church services finish on Sunday afternoon. However, these programs are rarely effective, because the students see little purpose in adding to what is already a packed school and extracurricular schedule, which in turn renders forcing children into weekend Chinese school an unsustainable practice that usually ends well before the students even reach middle school.

Third, contrary to the assumption of nearly every single mainlander, Mandarin is in fact NOT the lingua franca of these enclave communities; it is usually Cantonese. But even then, linguistic uniformity is far from guaranteed, because of the massive sprachbund differences between the Hakka, Fukienese, and Teochew, all of whom have representation in the Overseas Chinese populations (to say nothing of the Mandarin-speaking minority from Taiwanese immigration). This creates a situation in which children born into these enclaves are not necessarily able to use the language they learned from their parents (assuming they bothered to learn it at all) with their peers, pushing the children to default to English all the more, because it is the only available common language.

All these factors combine to create the commonly seen archetype of the Overseas Chinese individual who is conversationally functional in neither Mandarin nor a regional dialect; or, if he is, he is essentially guaranteed to be wholly illiterate in characters, save for the three that comprise his own name. But ultimately, what does any of this have to do with the Hwayih Woen system?

As was already explained in the previous section, the junior script exclusively uses the majuscule or minuscule phonetic characters. However, what such a system provides, namely the capacity to phonetically represent any word in the English language, is the ability to incorporate any number of known characters into writing that is immediately deployable. The significance of this cannot be overstated. Countless Overseas Chinese students in the Anglo-sphere leave their Sunday-afternoon Chinese school tenure no more literate than when they entered, in large part because there is no way for the characters they learn in class to be reinforced via repeated use. When a student memorizes Chinese vocabulary, he has no idea how to use it, because the lack of sufficient grammatical and syntactic knowledge prevents him from doing so, which leads to forgetting previously learned vocabulary in the process of studying new words. The Japanese and Koreans traditionally never had this problem, because they both used Chinese characters to write their own languages, a feat made possible by the overlapping tolerances of their own domestic tetra-graphs. But what exactly does this mean?

The astute reader will immediately recognize that neither Japanese nor Korean uses Chinese characters exclusively. Neither Japanese nor Korean is an isolating language like the Sinitic languages; they are syntactically synthetic, requiring verb conjugations and case endings to indicate various parts of speech. Both these cultures natively developed phonetically representative orthographies based on the structural principles of Chinese characters (in the case of Hangul), if not upon Chinese characters outright (in the case of Hiragana). Taking written Japanese as the most relevant example in the modern era, one can technically write the entirety of the Japanese language using Hiragana alone, because all possible sound combinations are already accounted for within the system. However, the Japanese people do not actually do this, still preserving the use of Chinese characters in daily writing to this day, in large measure because their orthographic scripts, Hiragana and Katakana, are both square-block graphemes, meaning that one can write Chinese characters and the kana systems within the same line of text. Up until the 1970s and 80s, the situation was identical in South Korea, a country whose constitution is still formally written in a character-Hangul “mixed script” system. To put it as bluntly as possible, this is exactly what Hwayih Woen has accomplished for English.

To the astute observer, this promises a massive boon to those Overseas Chinese individuals who are attempting to become literate in characters but are failing because they have neither the educational nor the environmental reinforcement mechanisms to solidify vocabulary acquisition. As already mentioned, the junior script enables users to immediately and fully employ the language they already know (i.e. English), and then gradually intersperse logographic characters into their lines of text to represent English words, much in the same way that the Japanese intersperse Chinese characters into their lines of kana to represent Japanese words. This is the very definition of the Hwayih Woen senior script. However, the purposes of such an orthographic system are far from merely educational.

A commonly asked question in response to the Hwayih Woen system usually runs: “why would one go through all the trouble of writing English in Chinese characters rather than just writing directly in standard English orthography?” After the exposition above, it should be obvious that the majority of Overseas Chinese born and raised in the Anglo-sphere are functionally blocked from their own language communities, leaving them without any language community uniquely their own. Historically speaking, this was exactly the predicament in which the Ashkenazi Jews found themselves in medieval Europe. By that time, Hebrew had already been a dead language for several centuries, preserved only in text for liturgical purposes, much in the same way that nobody today “speaks” Latin in any functional sense despite active preservation on the part of the Roman Catholic clergy. The daily language of these Ashkenazi Jews was Middle High German, and so it remained for their descendants all the way into the modern era, facilitated at least in part by the linguistic evolutionary pressures motivating the community to use their ancestral script to transcribe and represent their adopted, though technically foreign, living language. The binding of a Semitic script to a Germanic language was so successful that Yiddish eventually became an inseparable artifact of Ashkenazi Jewish identity, so successful in fact that the Zionists actually referred to it as “the language of the exile” prior to the revival of Hebrew as a spoken language in Israel.

In this context, it should be obvious why the question of “just writing standard German” becomes irrelevant. Indeed, an enormous number of Jews, especially in the modern era, were native German speakers themselves, writing an extensive corpus of work in standard German. Nevertheless, Yiddish remained an identifying marker of who belonged to the Jewish community, used among themselves on a daily basis. There was never any contradiction between Yiddish and German, let alone an imperative that the former ought not to exist because the latter “already existed.”

This is the theoretical framework within which the author constructed the Hwayih Woen Senior Script system. This manual, in addition to being a pedagogical ESL tool, seeks to provide the Overseas Chinese trapped within the Anglo-sphere with a way to maintain a connection to their ancestral language community, just as Yiddish did for the Ashkenazim (or Ladino for the Sephardim). When it comes to the question of preserving the integrity of a group identity, the Overseas Chinese will always face a dichotomy: find a way to return to their ancestral lands, or assimilate wholesale into the local environment, because generational amalgamation guarantees it. However, in light of the Jewish example, this manual suggests that perhaps there might exist a third option. After centuries of Byzantine and later Ottoman rule, returning to the Holy Land on any kind of permanent basis was simply not an option for the vast majority of Jews, a situation that parallels what the vast majority of Overseas Chinese face today. Hwayih Woen by no means purports to be a comprehensive solution to the question of assimilation. However, it might be a small contribution toward preserving what it means to be Chinese for this and future generations as we, like the Jews under Moses and beyond, wander in exile across the many nations of the Earth, so that there might still remain Overseas Chinese who can indeed “return home,” should such a day ever come for some far-flung generation.



All Chinese character input methods divide into two broad categories: phonetic-based and shape-based. The categorical names are fairly self-explanatory. Phonetic input methods are based upon a character’s pronunciation, which makes for an easy learning curve, enables typing even when one has forgotten how to write a character by hand, and is therefore the preferred form of input among those who already “speak” the language, the Hànyŭ Pīnyīn input method being by far the most popular among contemporary Chinese speakers. However, due to the nature of the Chinese language, phonetic input methods are dialect-specific. Because the vast majority of phonetic input methods are based upon Mandarin, a computer user who is literate in characters but does not know Mandarin is completely unable to use something like the Hànyŭ Pīnyīn input method. Shape-based input methods, alternatively, are based upon how a character is visually constructed. As a general principle, a character is deconstructed by stroke and/or component according to the decomposition rules unique to the particular input method. The advantages of a shape-based input method are numerous, but the two most salient ones in this context are: A) swift input of obscure characters and B) linguistic poly-centrism.

Because of the massive number of characters within the canonical Chinese lexicon, homonyms literally number in the thousands, resulting in a situation where a single phonetic input yields a massive list of characters that are all pronounced exactly the same. Modern phonetic input methods get around this problem by including a predictive algorithm that filters the most commonly used characters to the top of the list, a solution that works perfectly well for writing Modern Standard Mandarin. However, if a character is not commonly used, or not used in Modern Standard Mandarin at all, the input method will invariably bury the relevant character somewhere at the bottom of the list. This problem is all too apparent for professional typists attempting to use the Hànyŭ Pīnyīn input method to write Cantonese: while they often know the Mandarin dictionary pronunciation of a Cantonese-specific character, having to alternate between keyboard and mouse for nearly every word makes the process extremely cumbersome. Shape-based input methods negate these problems entirely because, by definition, a character’s visual construction is unique to that character alone, allowing shape-based input methods to produce even the most obscure characters with (mostly) unique key combinations and making the input of non-Mandarin text very smooth and straightforward. One need not even know Mandarin or any Chinese dialect to effectively use a shape-based input method, which means that speakers of languages as different as Korean, Japanese, and even Vietnamese to a limited extent (for the Chữ Nôm script) have a degree of access that phonetic input methods simply cannot provide. This is precisely what the term “linguistic poly-centrism” means.
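The contrast between the two categories can be sketched as two toy lookup tables: a phonetic table in which one syllable fans out into a ranked list of homophones, and a shape table in which one code resolves to a single character. The candidate lists and codes below are simplified illustrations, not a real IME database or actual Cāngjié/Wǔbǐ decompositions.

```python
# Toy contrast between phonetic-based and shape-based input, as discussed
# above. Data is illustrative only, not a real input-method dictionary.

PHONETIC_TABLE = {
    # one spoken syllable -> many homophones, forcing a ranked candidate list
    "shi": ["是", "时", "事", "十", "石", "诗"],
}

SHAPE_TABLE = {
    # one shape code -> (mostly) one character, so no candidate list is needed
    "J":  "十",
    "MR": "石",
}

def phonetic_candidates(syllable):
    """Return every character matching a spoken syllable."""
    return PHONETIC_TABLE.get(syllable, [])

def shape_lookup(code):
    """Return the single character matching a shape code, if any."""
    return SHAPE_TABLE.get(code)
```

The design point the sketch makes is that the shape table needs no pronunciation at all, which is exactly why a Korean, Japanese, or Vietnamese user can operate it without knowing any Chinese dialect.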

With the advent of the personal computer and the proliferation of mass consumer digital technology, East Asian technologists have invented a wide variety of shape-based input methods; however, two decades into the 21st century, only two remain supreme in the arena of common use and cross-platform availability: Cāngjié (仓颉) and Wǔbǐ (五笔). A full exposition of the history and development of both these input methods is beyond the scope of this text, seeing as other sources and authors have already covered the subject sufficiently. Nevertheless, the author recommends a general awareness of the differences between the two systems to help the reader navigate the architecture of this manual.

The Cāngjié input method was invented in the 1970s and is therefore the older of the two systems, long predating Unicode itself. Because it is primarily used for writing Traditional characters, the primary user base of Cāngjié is located in Taiwan, Hong Kong, and Macau. The first generation of the Wǔbǐ input method, on the other hand, did not come out until 1986, and because its inventor designed it around deconstructing Simplified characters, the primary user base of Wǔbǐ is in Mainland China, though today it holds only a fraction of the computer user market due to the popularity of the Hànyŭ Pīnyīn input method, itself a result of state-mandated reliance upon Mandarin across the country.

The key point that the reader of this manual must note is this: both Cāngjié and Wǔbǐ are capable of writing their opposite character sets. That is to say, although Cāngjié was primarily designed for Traditional characters, it can write Simplified characters on the same keyboard. Conversely, although Wǔbǐ was primarily designed for Simplified characters, it can write Traditional characters, likewise, on the same keyboard.

The reader will note that this manual provides references in both Cāngjié and Wǔbǐ; however, the author strongly recommends that the new user who is wholly unfamiliar with any shape-based input method prioritize the former over the latter. This is because, as of this writing, the Wǔbǐ input method is still not natively available on the iPhone, despite both systems being available on Android as well as on Windows, Mac, and Linux computer operating systems, to say nothing of Cāngjié’s greater visual accessibility, since Wǔbǐ by and large still relies upon the Roman lettering of the QWERTY keyboard to demarcate its input codes. By prioritizing Cāngjié, the reader can develop true cross-platform usability regardless of whether he intends to use Hwayih Woen to write Traditional or Simplified characters.