Copyright (c) 2009, 2010 Bryan O'Sullivan
Character set normalization functions for Unicode, implemented as bindings to the International Components for Unicode (ICU) libraries.
- data NormalizationMode
- normalize :: NormalizationMode -> Text -> Text
- quickCheck :: NormalizationMode -> Text -> Maybe Bool
- isNormalized :: NormalizationMode -> Text -> Bool
- data CompareOption
- compare :: [CompareOption] -> Text -> Text -> Ordering
Unicode normalization API
The normalize function transforms Unicode text into an equivalent composed or decomposed form, allowing for easier sorting and searching of text. normalize supports the standard normalization forms described in Unicode Standard Annex #15: Unicode Normalization Forms, http://www.unicode.org/unicode/reports/tr15/.
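The examples that follow use a minimal setup along these lines. The module name Data.Text.ICU.Normalize and the qualified alias are assumptions for illustration (this page documents the functions, not an import path); OverloadedStrings is enabled only so that string literals can stand in for Text values.

  {-# LANGUAGE OverloadedStrings #-}

  import Data.Text (Text)
  -- Imported qualified because the module's 'compare' would otherwise
  -- clash with Prelude.compare.
  import qualified Data.Text.ICU.Normalize as ICU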
Characters with accents or other adornments can be encoded in several different ways in Unicode. For example, take the character A-acute. In Unicode, this can be encoded as a single character (the "composed" form):
00C1 LATIN CAPITAL LETTER A WITH ACUTE
or as two separate characters (the "decomposed" form):
0041 LATIN CAPITAL LETTER A
0301 COMBINING ACUTE ACCENT
To a user of your program, however, both of these sequences should be treated as the same "user-level" character "A with acute accent". When you are searching or comparing text, you must ensure that these two sequences are treated equivalently. In addition, you must handle characters with more than one accent. Sometimes the order of a character's combining accents is significant, while in other cases accent sequences in different orders are really equivalent.
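As a small sketch (using the assumed setup above), the two encodings really are different sequences of code points, even though they denote the same user-level character:

  composed, decomposed :: Text
  composed   = "\x00C1"       -- one code point: A with acute
  decomposed = "A\x0301"      -- two code points: A, combining acute

  -- Compared as raw Text, the two representations are not equal.
  rawEquality :: Bool
  rawEquality = composed == decomposed   -- False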
Similarly, the string "ffi" can be encoded as three separate letters:
0066 LATIN SMALL LETTER F
0066 LATIN SMALL LETTER F
0069 LATIN SMALL LETTER I
or as the single character
FB03 LATIN SMALL LIGATURE FFI
The "ffi" ligature is not a distinct semantic character, and strictly speaking it shouldn't be in Unicode at all, but it was included for compatibility with existing character sets that already provided it. The Unicode standard identifies such characters by giving them "compatibility" decompositions into the corresponding semantic characters. When sorting and searching, you will often want to use these mappings.
normalize helps solve these problems by transforming text into the canonical composed and decomposed forms as shown in the first example above. In addition, you can have it perform compatibility decompositions so that you can treat compatibility characters the same as their equivalents. Finally, normalize rearranges accents into the proper canonical order, so that you do not have to worry about accent rearrangement on your own.
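For instance, sticking with the code points quoted above, a sketch of what normalize does in each mode (the values shown in comments are the expected results):

  -- Canonical forms: composing U+0041 U+0301 yields U+00C1, and
  -- decomposing U+00C1 yields the two-character sequence again.
  acuteComposed, acuteDecomposed :: Text
  acuteComposed   = ICU.normalize ICU.NFC "A\x0301"   -- "\x00C1"
  acuteDecomposed = ICU.normalize ICU.NFD "\x00C1"    -- "A\x0301"

  -- Compatibility forms: canonical normalization leaves the FFI
  -- ligature alone, while NFKC replaces it with the letters "ffi".
  ligatureNFC, ligatureNFKC :: Text
  ligatureNFC  = ICU.normalize ICU.NFC  "\xFB03"      -- "\xFB03"
  ligatureNFKC = ICU.normalize ICU.NFKC "\xFB03"      -- "ffi"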
FCD, "Fast C or D", is also designed for collation. It
allows to work on strings that are not necessarily normalized with
an algorithm (like in collation) that works under "canonical
closure", i.e., it treats precomposed characters and their
decomposed equivalents the same.
It is not a normalization form because it does not provide for
uniqueness of representation. Multiple strings may be canonically
equivalent (their NFDs are identical) and may all conform to
without being identical themselves.
The form is defined such that the "raw decomposition", the recursive canonical decomposition of each character, results in a string that is canonically ordered. This means that precomposed characters are allowed as long as their decompositions do not need canonical reordering.
Its advantage for a process like collation is that all NFC texts (and many unnormalized texts) already conform to FCD and do not need to be normalized (to NFD) for such a process. The FCD quickCheck will return Just True for most strings in practice.

For more details on FCD, see the ICU collation design document.
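A sketch of that quick check, continuing with the setup above:

  -- Typical NFC text already satisfies the FCD conditions, so the
  -- quick check can usually answer definitively without normalizing.
  fcdCheck :: Maybe Bool
  fcdCheck = ICU.quickCheck ICU.FCD (ICU.normalize ICU.NFC "A\x0301")
  -- expected: Just True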
ICU collation performs either NFD or FCD normalization automatically if normalization is turned on for the collator object. Beyond collation and string search, normalized strings may be useful for string equivalence comparisons, transliteration/transcription, unique representations, etc.
The W3C generally recommends exchanging texts in NFC. Note also that most legacy character encodings use only precomposed forms and often do not encode any combining marks by themselves. For conversion to such character encodings, the Unicode text needs to be normalized to NFC. For more usage examples, see the Unicode Standard Annex referenced above.
- NFC: Canonical decomposition followed by canonical composition.
- NFKC: Compatibility decomposition followed by canonical composition.
- FCD: "Fast C or D" form.
Normalize a string according to the specified normalization mode.
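For example, under the setup above, a helper that puts user input into the composed canonical form might look like this:

  -- Canonicalize text before storing or indexing it.
  canonicalize :: Text -> Text
  canonicalize = ICU.normalize ICU.NFC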
Perform an efficient check on a string, to quickly determine if the string is in a particular normalization form.
A Nothing result indicates that a definite answer could not be determined quickly, and a more thorough check is required, e.g. with isNormalized. The user may have to convert the string to its normalized form and compare the results.
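A sketch of the usual pattern, using the assumed setup above: try the cheap check first and fall back to the definitive test only when it cannot decide.

  inNFC :: Text -> Bool
  inNFC t = case ICU.quickCheck ICU.NFC t of
              Just answer -> answer                     -- definite yes/no
              Nothing     -> ICU.isNormalized ICU.NFC t -- thorough check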
Indicate whether a string is in a given normalization form.
Unlike quickCheck, this function returns a definitive result. For the NFD, NFKD, and FCD normalization forms, both functions work in exactly the same ways. For the NFC and NFKC forms, where quickCheck may return Nothing, this function will perform further tests to arrive at a definitive result.
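For instance (same assumed setup), normalizing only when necessary:

  -- isNormalized always gives a definite answer, so the copy can be
  -- skipped for text that is already in NFC.
  toNFC :: Text -> Text
  toNFC t
    | ICU.isNormalized ICU.NFC t = t
    | otherwise                  = ICU.normalize ICU.NFC t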
- CompareIgnoreCase: Compare strings case-insensitively using case folding, instead of case-sensitively. If set, then the following case folding options are used.
- FoldCaseExcludeSpecialI: When case folding, exclude the special I character. For use with Turkic (Turkish/Azerbaijani) text data.
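As a sketch of how the two options combine (the constructor names CompareIgnoreCase and FoldCaseExcludeSpecialI follow the text-icu bindings and should be treated as assumptions here):

  -- Case-insensitive comparison options suitable for Turkic text.
  turkicOptions :: [ICU.CompareOption]
  turkicOptions = [ICU.CompareIgnoreCase, ICU.FoldCaseExcludeSpecialI]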
Compare two strings for canonical equivalence. Further options include case-insensitive comparison and code point order (as opposed to code unit order).
Canonical equivalence between two strings is defined as their normalized forms (NFD or NFC) being identical. This function compares strings incrementally instead of normalizing (and optionally case-folding) both strings entirely, improving performance significantly.

Bulk normalization is only necessary if the strings do not fulfill the FCD conditions. Only in this case, and only if the strings are relatively long, is memory allocated temporarily. For FCD strings and short non-FCD strings there is no memory allocation.
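A final sketch under the same assumptions, checking canonical equivalence with compare:

  -- The two encodings of A-acute from the introduction are canonically
  -- equivalent, so they compare as EQ even without prior normalization.
  sameChar :: Bool
  sameChar = ICU.compare [] "\x00C1" "A\x0301" == EQ

  -- Canonical equivalence, additionally ignoring case differences.
  equivalentCI :: Text -> Text -> Bool
  equivalentCI a b = ICU.compare [ICU.CompareIgnoreCase] a b == EQ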