| Safe Haskell | Safe-Inferred |
|---|---|
| Language | Haskell2010 |
Hpp.Tokens
Description
Tokenization breaks a String into pieces of whitespace,
constants, symbols, and identifiers.
Synopsis
- data Token s
- detok :: Token s -> s
- isImportant :: Token s -> Bool
- notImportant :: Token s -> Bool
- importants :: [Token s] -> [s]
- trimUnimportant :: [Token s] -> [Token s]
- detokenize :: Monoid s => [Token s] -> s
- tokenize :: Stringy s => s -> [Token s]
- newLine :: (Eq s, IsString s) => Token s -> Bool
- skipLiteral :: Stringy s => s -> (s, s)
Documentation
data Token s Source #
Tokenization is words except that the white space is tagged rather
than discarded.
importants :: [Token s] -> [s] Source #
Return the contents of only Important (non-space) tokens.
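As an illustration (not taken from the library's documentation), importants should coincide with filtering on isImportant and extracting contents with detok, both listed in the synopsis; the name importantsSpelledOut below is hypothetical, not part of Hpp.Tokens.

```haskell
import Hpp.Tokens (Token, detok, isImportant, importants)

-- Assumed equivalence: keep only the Important tokens, then pull out
-- their contents. The function name is ours, not part of the library.
importantsSpelledOut :: [Token s] -> [s]
importantsSpelledOut = map detok . filter isImportant
```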
trimUnimportant :: [Token s] -> [Token s] Source #
detokenize :: Monoid s => [Token s] -> s Source #
Collapse a sequence of Tokens back into a String. detokenize
. tokenize == id.
tokenize :: Stringy s => s -> [Token s] Source #
Break an input String into a sequence of Tokens. Warning:
This may not exactly correspond to your target language's
definition of a valid identifier!
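A minimal usage sketch combining tokenize, detokenize, and importants. It assumes that String satisfies the Stringy constraint; if your version of the library works over a different string type, substitute that type accordingly.

```haskell
import Hpp.Tokens (tokenize, detokenize, importants)

main :: IO ()
main = do
  -- An arbitrary line of input; assumes String has a Stringy instance.
  let src  = "x = y + 42"
      toks = tokenize src        -- whitespace is kept as tagged tokens
  -- detokenize . tokenize == id, so this prints the input unchanged,
  -- spacing included.
  putStrLn (detokenize toks)
  -- Only the non-whitespace pieces: identifiers, symbols, constants.
  print (importants toks)
```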
skipLiteral :: Stringy s => s -> (s, s) Source #
Skip over a string or character literal, returning the literal and the remaining input.
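A hedged sketch of calling skipLiteral. Whether the opening quote is expected at the head of the input, and exactly where the split falls, are assumptions rather than documented behaviour; the String instance of Stringy is assumed as above.

```haskell
import Hpp.Tokens (skipLiteral)

-- Hypothetical input: only the (literal, rest) pairing comes from the
-- documentation above; the precise split point is an assumption.
splitExample :: (String, String)
splitExample = skipLiteral "\"a string literal\" <> rest"
```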