Commonly used functions for Unicode, implemented as bindings to the International Components for Unicode (ICU) libraries.
This module contains only the most commonly used types and functions. Other modules in this package expose richer interfaces.
- data LocaleName
- data Breaker a
- data Break a
- brkPrefix :: Break a -> Text
- brkBreak :: Break a -> Text
- brkSuffix :: Break a -> Text
- brkStatus :: Break a -> a
- data Line
- data Word
- breakCharacter :: LocaleName -> Breaker ()
- breakLine :: LocaleName -> Breaker Line
- breakSentence :: LocaleName -> Breaker ()
- breakWord :: LocaleName -> Breaker Word
- breaks :: Breaker a -> Text -> [Break a]
- breaksRight :: Breaker a -> Text -> [Break a]
- toCaseFold :: Bool -> Text -> Text
- toLower :: LocaleName -> Text -> Text
- toUpper :: LocaleName -> Text -> Text
- data CharIterator
- fromString :: String -> CharIterator
- fromText :: Text -> CharIterator
- fromUtf8 :: ByteString -> CharIterator
- data NormalizationMode
- normalize :: NormalizationMode -> Text -> Text
- quickCheck :: NormalizationMode -> Text -> Maybe Bool
- isNormalized :: NormalizationMode -> Text -> Bool
- data CompareOption
- compare :: [CompareOption] -> Text -> Text -> Ordering
- data Collator
- collator :: LocaleName -> Collator
- collate :: Collator -> Text -> Text -> Ordering
- collateIter :: Collator -> CharIterator -> CharIterator -> Ordering
- sortKey :: Collator -> Text -> ByteString
- uca :: Collator
- data MatchOption
- data ParseError
- data Match
- data Regex
- class Regular r
- regex :: [MatchOption] -> Text -> Regex
- regex' :: [MatchOption] -> Text -> Either ParseError Regex
- pattern :: Regular r => r -> Text
- find :: Regex -> Text -> Maybe Match
- findAll :: Regex -> Text -> [Match]
- groupCount :: Regular r => r -> Int
- unfold :: (Int -> Match -> Maybe Text) -> Match -> [Text]
- span :: Match -> Text
- group :: Int -> Match -> Maybe Text
- prefix :: Int -> Match -> Maybe Text
- suffix :: Int -> Match -> Maybe Text
The Text type is implemented as an array in the Haskell
heap. This means that its location is not pinned; it may be copied
during a garbage collection pass. ICU, on the other hand, works
with strings that are allocated in the normal system heap and have
a fixed address.
To accommodate this need, these bindings use the functions from
Data.Text.Foreign to copy data between the Haskell heap and the
system heap. The copied strings are still managed automatically,
but the need to duplicate data does add some performance and memory overhead.
The name of a locale.
The root locale. For a description of resource bundles and the root resource, see http://userguide.icu-project.org/locale/resources.
A specific locale.
The program's current locale.
Text boundary analysis is the process of locating linguistic boundaries while formatting and handling text. Examples of this process include:
- Locating appropriate points to word-wrap text to fit within specific margins while displaying or printing.
- Counting characters, words, sentences, or paragraphs.
- Making a list of the unique words in a document.
- Figuring out if a given range of text contains only whole words.
- Capitalizing the first letter of each word.
- Locating a particular unit of the text (For example, finding the third word in the document).
The Breaker type was designed to support these kinds of tasks.
For the impure boundary analysis API (which is richer, but less
easy to use than the pure API), see the Data.Text.ICU.Break
module. The impure API supports some uses that may be less
efficient via the pure API, including:
- Locating the beginning of a word that the user has selected.
- Determining how far to move the text cursor when the user hits an arrow key (Some characters require more than one position in the text store and some characters in the text store do not display at all).
Line break status.
Word break status.
A "word" that does not fit into another category. Includes spaces and most punctuation.
A word that appears to be a number.
A word containing letters, excluding hiragana, katakana or ideographic characters.
A word containing kana characters.
A word containing ideographic characters.
Break a string on character boundaries.
Character boundary analysis identifies the boundaries of Extended Grapheme Clusters, which are groupings of codepoints that should be treated as character-like units for many text operations. Please see Unicode Standard Annex #29, Unicode Text Segmentation, http://www.unicode.org/reports/tr29/ for additional information on grapheme clusters and guidelines on their use.
Break a string on line boundaries.
Line boundary analysis determines where a text string can be broken when line wrapping. The mechanism correctly handles punctuation and hyphenated words.
Break a string on sentence boundaries.
Sentence boundary analysis allows selection with correct interpretation of periods within numbers and abbreviations, and trailing punctuation marks such as quotation marks and parentheses.
Break a string on word boundaries.
Word boundary analysis is used by search and replace functions, as well as within text editing applications that allow the user to select words with a double click. Word selection provides correct interpretation of punctuation marks within and following words. Characters that are not part of a word, such as symbols or punctuation marks, have word breaks on both sides.
Return a list of all breaks in a string, from left to right.
Return a list of all breaks in a string, from right to left.
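As a sketch of how the pure breaker API fits together (assuming the LocaleName constructor is named Current, as described above; everything else follows the signatures listed in this module), word segmentation might look like:

```haskell
import Data.Text (Text)
import qualified Data.Text.ICU as ICU

-- Each Break in the result carries the matched fragment (brkBreak) and a
-- Word status (brkStatus) saying whether the fragment was a run of letters,
-- a number, kana, an ideograph, or uncategorized text such as spaces and
-- punctuation.
segments :: Text -> [(Text, ICU.Word)]
segments = map (\b -> (ICU.brkBreak b, ICU.brkStatus b))
         . ICU.breaks (ICU.breakWord ICU.Current)
```

Filtering the result on its status field then yields, for example, only the letter runs of a document.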
Whether to include or exclude mappings for
dotted and dotless I and i that are given special Turkic mappings in
Unicode's CaseFolding.txt.
Case-fold the characters in a string.
Case folding is locale independent and not context sensitive, but there is an option for treating the letter I specially for Turkic languages. The result may be longer or shorter than the original.
Lowercase the characters in a string.
Casing is locale dependent and context sensitive. The result may be longer or shorter than the original.
Uppercase the characters in a string.
Casing is locale dependent and context sensitive. The result may be longer or shorter than the original.
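A brief sketch of the casing functions (assuming the LocaleName constructors are named Root and Locale, matching their descriptions above):

```haskell
import Data.Text (Text)
import qualified Data.Text.ICU as ICU

-- Locale-sensitive casing: in a Turkish locale, uppercasing a dotted "i"
-- yields the dotted capital I (U+0130), unlike in the root locale.
upperTurkish, upperRoot :: Text -> Text
upperTurkish = ICU.toUpper (ICU.Locale "tr")
upperRoot    = ICU.toUpper ICU.Root

-- Locale-independent case folding; the Bool chooses whether the special
-- Turkic I mappings are excluded.
folded :: Text -> Text
folded = ICU.toCaseFold False
```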
A type that supports efficient iteration over Unicode characters.
As an example of where this may be useful, a function using this
type may be able to iterate directly over a UTF-8 ByteString,
rather than first copying and converting it to an intermediate
form. This type also allows e.g. comparison between a Text and a
ByteString, with minimal overhead.
Canonical decomposition followed by canonical composition.
Compatibility decomposition followed by canonical composition.
"Fast C or D" form.
Normalize a string according to the specified normalization mode.
Perform an efficient check on a string, to quickly determine if the string is in a particular normalization form.
A Nothing result indicates that a definite answer could not be
determined quickly, and a more thorough check is required, e.g.
with isNormalized. The user may have to convert the string
to its normalized form and compare the results.
Indicate whether a string is in a given normalization form.
Unlike quickCheck, this function returns a definitive result.
For the NFD, NFKD, and FCD normalization forms, both functions
work in exactly the same way. For the NFC and NFKC forms, where
quickCheck may return Nothing, this function will perform
further tests to arrive at a definitive result.
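The relationship between the three functions can be sketched as follows (assuming the NormalizationMode constructor is named NFC; the function signatures are as listed in this module):

```haskell
{-# LANGUAGE OverloadedStrings #-}
import Data.Text (Text)
import qualified Data.Text.ICU as ICU

composed, decomposed :: Text
composed   = "\x00e9"   -- "é" as a single precomposed codepoint
decomposed = "e\x0301"  -- "e" followed by a combining acute accent

-- Both spellings should become identical after NFC normalization.
same :: Bool
same = ICU.normalize ICU.NFC composed == ICU.normalize ICU.NFC decomposed

-- quickCheck may answer Just True/Just False cheaply or give up with
-- Nothing; isNormalized always produces a definite Bool.
check :: (Maybe Bool, Bool)
check = ( ICU.quickCheck   ICU.NFC decomposed
        , ICU.isNormalized ICU.NFC decomposed )
```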
Normalization-sensitive string comparison
Compare strings case-insensitively using case folding, instead of case-sensitively. If set, then the following case folding options are used.
When case folding, exclude the special I character. For use with Turkic (Turkish/Azerbaijani) text data.
Compare two strings for canonical equivalence. Further options include case-insensitive comparison and code point order (as opposed to code unit order).
Canonical equivalence between two strings is defined as their
normalized forms (NFD or NFC) being identical. This function
compares strings incrementally instead of normalizing (and
optionally case-folding) both strings entirely, improving
performance significantly.
Bulk normalization is only necessary if the strings do not fulfill
the FCD conditions. Only in this case, and only if the strings
are relatively long, is memory allocated temporarily. For FCD
strings and short non-FCD strings, there is no memory allocation.
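A small sketch of normalization-sensitive comparison (the CompareIgnoreCase constructor name is assumed from its description above; compare itself is listed in this module):

```haskell
{-# LANGUAGE OverloadedStrings #-}
import qualified Data.Text.ICU as ICU

-- Canonically equivalent spellings of "é" should compare EQ, without the
-- caller normalizing either string first.
eqCanonical :: Ordering
eqCanonical = ICU.compare [] "\x00e9" "e\x0301"

-- Adding CompareIgnoreCase additionally case-folds while comparing.
eqFolded :: Ordering
eqFolded = ICU.compare [ICU.CompareIgnoreCase] "HELLO" "hello"
```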
Locale-sensitive string collation
For the impure collation API (which is richer, but less easy to
use than the pure API), see the Data.Text.ICU.Collate module.
String collator type.
Collators are considered equal if they
will sort strings identically.
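A sketch of locale-sensitive sorting with the pure collation API (assuming the LocaleName constructor is named Locale; collator, collate, and sortKey are as listed in this module):

```haskell
import Data.List (sortBy)
import Data.Text (Text)
import qualified Data.Text.ICU as ICU

-- Sort strings according to English collation rules rather than raw
-- code point order.
sortEnglish :: [Text] -> [Text]
sortEnglish = sortBy (ICU.collate c)
  where c = ICU.collator (ICU.Locale "en_US")
```

When the same strings are compared many times, precomputing binary sort keys with sortKey and comparing those ByteStrings byte-wise gives the same ordering at lower cost per comparison.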
Options for controlling matching behaviour.
Enable case insensitive matching.
Allow comments and white space within patterns.
If set, treat the entire pattern as a literal string. Metacharacters or escape sequences in the input sequence will be given no special meaning.
Control behaviour of '^' and '$'. If set, recognize line terminators within a string; otherwise, match only at the start and end of the input string.
Haskell-only line endings. When this mode is enabled, only '\n' is recognized as a line ending in the behaviour of '.', '^', and '$'.
Unicode word boundaries. If set, '\b' uses the Unicode TR 29 definition of word boundaries.
Warning: Unicode word boundaries are quite different from traditional regular expression word boundaries. See http://unicode.org/reports/tr29/#Word_Boundaries.
Throw an error on unrecognized backslash escapes. If set, fail with an error on patterns that contain backslash-escaped ASCII letters without a known special meaning. If this flag is not set, these escaped letters represent themselves.
Set a processing limit for match operations.
Some patterns, when matching certain strings, can run in exponential time. For practical purposes, the match operation may appear to be in an infinite loop. When a limit is set a match operation will fail with an error if the limit is exceeded.
The units of the limit are steps of the match engine. Correspondence with actual processor time will depend on the speed of the processor and the details of the specific pattern, but will typically be on the order of milliseconds.
By default, the matching time is not limited.
Set the amount of heap storage available for use by the match backtracking stack.
ICU uses a backtracking regular expression engine, with the backtrack stack maintained on the heap. This function sets the limit to the amount of memory that can be used for this purpose. A backtracking stack overflow will result in an error from the match operation that caused it.
A limit is desirable because a malicious or poorly designed pattern can use excessive memory, potentially crashing the process. A limit is enabled by default.
Detailed information about parsing errors. Used by ICU parsing
engines that parse long rules, patterns, or programs, where the
text being parsed is long enough that more information than an
ICUError is needed to localize the error.
A compiled regular expression.
Regex values are usually constructed using the
regex or regex' functions. This type is also an instance of
IsString, so if you have the
OverloadedStrings language extension enabled,
you can construct a Regex by simply writing the pattern in
quotes (though this does not allow you to specify any options).
Compile a regular expression with the given options. This
function throws a
ParseError if the pattern is invalid, so it is
best for use when the pattern is statically known.
Compile a regular expression with the given options. This is safest to use when the pattern is constructed at run time.
Return the source form of the pattern used to construct this regular expression or match.
Find the first match for the regular expression in the given text.
Lazily find all matches for the regular expression in the given text.
Capturing groups are numbered starting from zero. Group zero is always the entire matching text. Groups greater than zero contain the text matching each capturing group in a regular expression.
Return the number of capturing groups in this regular expression or match's pattern.
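The matching functions compose as in this sketch (assuming the MatchOption constructor is named CaseInsensitive, per its description above; regex, findAll, and group are as listed in this module):

```haskell
{-# LANGUAGE OverloadedStrings #-}
import Data.Maybe (mapMaybe)
import Data.Text (Text)
import qualified Data.Text.ICU as ICU

-- Collect the digits of every hex literal in the input, matching the
-- prefix case-insensitively. Group 0 would be the whole match "0x...";
-- group 1 is the parenthesised digit run.
hexDigits :: Text -> [Text]
hexDigits = mapMaybe (ICU.group 1) . ICU.findAll re
  where re = ICU.regex [ICU.CaseInsensitive] "0x([0-9a-f]+)"
```

Because regex throws on a bad pattern, a statically known pattern like the one above is the intended use; patterns built at run time should go through regex' instead.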
A combinator for returning a list of all capturing groups on a Match.
Return the span of text between the end of the previous match and the beginning of the current match.
Return the nth capturing group in a match, or Nothing if n is out of bounds.
Return the prefix of the nth capturing group in a match (the
text from the start of the string to the start of the match), or
Nothing if n is out of bounds.