fuzzy-parse

Copyright   : Dmitry Zuikov 2020
License     : MIT
Maintainer  : dzuikov@gmail.com
Stability   : experimental
Portability : unknown

Data.Text.Fuzzy.Dates

parseMaybeDay
  Tries to parse a date from the text.

Data.Text.Fuzzy.Tokenize

IsToken
  Typeclass for token values. Note that some tokens appear in the results only when the
  corresponding option is set: sequences of characters turn into text tokens or string
  literals, and delimiter tokens are simply removed from the results.

mkChar
  Creates a character token.

mkSChar
  Creates a string-literal character token.

mkPunct
  Creates a punctuation token.

mkText
  Creates a text chunk token.

mkStrLit
  Creates a string literal token.

mkKeyword
  Creates a keyword token.

mkEmpty
  Creates an empty field token.

mkDelim
  Creates a delimiter token.

mkIndent
  Creates an indent token.

mkEol
  Creates an EOL token.

TokenizeSpec
  Tokenization settings. Use mempty for an empty value and the construction functions below
  to change the settings.

eol
  Turns on EOL token generation.

esc
  Turns on character escaping inside string literals. Currently the following escaped
  characters are supported: " ' t n r a b f v.

addEmptyFields
  Raises empty field tokens (see the mkEmpty method) when no tokens are found before a
  delimiter. Useful for processing CSV-like data in order to distinguish empty columns.

emptyFields
  Same as addEmptyFields.

nn
  Turns off token normalization and makes the tokenizer generate a raw character stream.
  Useful for debugging.

sq
  Turns on single-quoted string literals. The character stream after a '\'' character is
  processed as a single-quoted stream, treating all delimiter, comment and other special
  characters as part of the string literal until the next unescaped single quote.

sqq
  Enables double-quoted string literals, in the same way as sq does for single-quoted strings.

noslits
  Disables separate string literals. Useful when processing delimited (CSV-like) data.
  Normally, sequential text chunks are concatenated together, but consecutive text and string
  literal chunks produce two different tokens, which may give odd results when the data is in
  a CSV-like format, e.g.:

  > tokenize (delims ":" <> emptyFields <> sq) "aaa:bebe:'qq' aaa:next::" :: [Maybe Text]
  > [Just "aaa",Just "bebe",Just "qq",Just " aaa",Just "next",Nothing,Nothing]

  Note that "qq" and " aaa" are turned into two separate tokens, which makes the result of
  CSV processing look wrong, as if there were an extra column. If you do not need to
  distinguish text chunks from string literals, this behaviour can be avoided with this
  option:

  > tokenize (delims ":" <> emptyFields <> sq <> noslits) "aaa:bebe:'qq:foo' aaa:next::" :: [Maybe Text]
  > [Just "aaa",Just "bebe",Just "qq:foo aaa",Just "next",Nothing,Nothing]

delims
  Specifies the delimiter characters used to split the character stream into fields. Useful
  for CSV-like separated formats. Support for empty fields in the token stream may be enabled
  with the addEmptyFields function.

sl
  Strips spaces on the left side of a token. Does not affect string literals, i.e. strings
  are processed normally. Useful mostly for processing CSV-like formats; otherwise uw may be
  used to skip unwanted spaces.

sr
  Strips spaces on the right side of a token. Does not affect string literals, i.e. strings
  are processed normally. Useful mostly for processing CSV-like formats; otherwise uw may be
  used to skip unwanted spaces.

uw
  Strips spaces on both sides of a token and collapses runs of spaces into a single space.
  The name comes from unwords . words. Does not affect string literals, i.e. strings are
  processed normally. Useful mostly for processing CSV-like formats. A short usage sketch
  follows this entry.
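The CSV-oriented options above compose with <>. Below is a minimal sketch of how delims,
emptyFields and uw might be combined; the input string and the commented result are
illustrative guesses based on the descriptions above, not verified output:

> {-# LANGUAGE OverloadedStrings #-}
> import Data.Text (Text)
> import Data.Text.Fuzzy.Tokenize (tokenize, delims, emptyFields, uw)
>
> -- Split on ':', emit empty-field tokens for empty columns,
> -- and normalize whitespace inside each field.
> row :: [Maybe Text]
> row = tokenize (delims ":" <> emptyFields <> uw) "  a   b : c ::"
> -- expected, per the descriptions above: [Just "a b", Just "c", Nothing, Nothing]

The [Maybe Text] result type relies on the library's IsToken instance for Maybe Text, in
which empty fields become Nothing, as in the noslits examples above.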
comment
  Specifies the line comment prefix. All text after the line comment prefix is ignored until
  a newline character appears. Multiple line comments are supported.

punct
  Specifies the punctuation characters. Each punctuation character is handled as a separate
  token, and any token is broken up at a punctuation character. Useful for handling... er...
  punctuation, like

  > function(a,b)

  or

  > (apply function 1 2 3)

  > tokenize spec "(apply function 1 2 3)" :: [Text]
  > ["(","apply","function","1","2","3",")"]

keywords
  Specifies the list of keywords. Each keyword is treated as a separate token.

indent
  Enables indentation support.

itabstops
  Sets the tab-expansion multiplier, i.e. each tab expands into n spaces before processing.
  It also turns on indentation. Only tabs at the beginning of a line are expanded, i.e.
  before the first non-space character.

tokenize
  Tokenizes a text. A combined usage sketch appears after the export index below.

Modules and exports (fuzzy-parse-0.1.2.0):

Data.Text.Fuzzy.Attoparsec.Day   : day, dayDMY, dayYMD, dayYYYYMMDD, dayDMonY
Data.Text.Fuzzy.Attoparsec.Month : fuzzyMonth, fuzzyMonthFromText
Data.Text.Fuzzy.Dates            : parseMaybeDay
Data.Text.Fuzzy.Section          : cutSectionOn, cutSectionBy
Data.Text.Fuzzy.Tokenize         : IsToken (mkChar, mkSChar, mkPunct, mkText, mkStrLit,
                                   mkKeyword, mkEmpty, mkDelim, mkIndent, mkEol),
                                   TokenizeSpec, eol, esc, addEmptyFields, emptyFields, nn,
                                   sq, sqq, noslits, delims, sl, sr, uw, comment, punct,
                                   keywords, indent, itabstops, tokenize
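To round things off, here is a rough sketch of a more language-like spec combining delims,
punct, keywords and comment. The argument shapes (comment taking the prefix text, punct
taking a string of punctuation characters, keywords taking a list of words) and the commented
result are assumptions based on the descriptions above, not verified against the library:

> {-# LANGUAGE OverloadedStrings #-}
> import Data.Text (Text)
> import Data.Text.Fuzzy.Tokenize (TokenizeSpec, tokenize, delims, punct, keywords, comment)
>
> -- Hypothetical spec: split on spaces, break tokens at the characters ( ) , ,
> -- recognize "let" and "in" as keywords, and drop everything after "--" on a line.
> spec :: TokenizeSpec
> spec = delims " " <> punct "()," <> keywords ["let", "in"] <> comment "--"
>
> toks :: [Text]
> toks = tokenize spec "let x (f a,b) -- trailing comment"
> -- plausible result, per the option descriptions: ["let","x","(","f","a",",","b",")"]

With the plain Text instance of IsToken (the [Text] result type used in the punct example
above), punctuation tokens come back as their textual content, and presumably keywords do
too, so the distinction mainly matters for custom token types built from the mk* methods.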