Text.Yaml.Reference (YamlReference-0.9)
Maintainer:  yaml-oren@ben-kiki.org
Stability:   alpha
Portability: portable

record |> field is the same as field record, but is more readable.
decode bytes automatically detects the Encoding used and converts the bytes to Unicode characters.
detectEncoding text examines the first few characters (bytes) of the text to deduce the Unicode encoding used, according to the YAML spec.
undoEncoding encoding bytes converts a bytes stream to Unicode characters according to the encoding.
combinePairs chars converts each pair of UTF-16 surrogate characters to a single Unicode character.
combineLead lead rest combines the lead surrogate with the head of the rest of the input chars, assumed to be a trail surrogate, and continues combining surrogate pairs.
surrogateOffset is copied from the Unicode FAQs.
combineSurrogates lead trail combines two UTF-16 surrogates into a single Unicode character.
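The surrogate arithmetic described above follows the standard formula from the Unicode FAQ. As an illustration only (not the library's actual definitions, though the names mirror the entries above), a self-contained version could look like this:

    import Data.Char (chr, ord)

    -- Constant from the Unicode FAQ: folds the subtraction of the lead
    -- (0xD800) and trail (0xDC00) surrogate bases into a single offset.
    surrogateOffset :: Int
    surrogateOffset = 0x10000 - (0xD800 * 0x400) - 0xDC00

    -- Combine a lead and a trail UTF-16 surrogate into one Unicode character.
    combineSurrogates :: Char -> Char -> Char
    combineSurrogates lead trail =
      chr (ord lead * 0x400 + ord trail + surrogateOffset)

    main :: IO ()
    main = print (combineSurrogates '\xD83D' '\xDE00')  -- prints '\128512' (U+1F600)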
hasFewerThan bytes n checks whether there are fewer than n bytes left to read.
undoUTF16LE bytes decodes a UTF-16-LE bytes stream to Unicode chars.
undoUTF16BE bytes decodes a UTF-16-BE bytes stream to Unicode chars.
undoUTF8 bytes decodes a UTF-8 bytes stream to Unicode chars.
decodeTwoUTF8 first bytes decodes a two-byte UTF-8 character, where the first byte is already available and the second is the head of the bytes, and then continues to undo the UTF-8 encoding.
combineTwoUTF8 first second combines the first and second bytes of a two-byte UTF-8 char into a single Unicode char.
decodeThreeUTF8 first bytes decodes a three-byte UTF-8 character, where the first byte is already available and the second and third are the head of the bytes, and then continues to undo the UTF-8 encoding.
combineThreeUTF8 first second third combines the first, second and third bytes of a three-byte UTF-8 char into a single Unicode char.
decodeFourUTF8 first bytes decodes a four-byte UTF-8 character, where the first byte is already available and the second, third and fourth are the head of the bytes, and then continues to undo the UTF-8 encoding.
combineFourUTF8 first second third fourth combines the first, second, third and fourth bytes of a four-byte UTF-8 char into a single Unicode char.
escapeString string escapes all the non-ASCII characters in the string, as well as the "\" character, using the "\xXX", "\uXXXX" and "\UXXXXXXXX" escape sequences.
toHex digits int converts the int to the specified number of hexadecimal digits.
showTokens tokens converts a list of tokens to a multi-line YEAST text.
initialState name input returns an initial State for parsing the input (with name for error messages).
setDecision decision state sets the sDecision field to decision.
setLimit limit state sets the sLimit field to limit.
setForbidden forbidden state sets the sForbidden field to forbidden.
setCode code state sets the sCode field to code.
returnReply state result prepares a Reply with the specified state and result.
tokenReply state token returns a Reply containing the state and token. Any collected characters are cleared (either there are none, or we put them in this token, or we don't want them).
failReply state message prepares a Reply with the specified state and error message.
unexpectedReply state returns a failReply for an unexpected character.
parser % n repeats parser exactly n times.
parser <% n matches fewer than n occurrences of parser.
decision ^ (option / option / ...) provides a decision name to the choice about to be made, to allow committing to it.
parser ! decision commits to decision (in an option) after successfully matching the parser.
parser ?! decision commits to decision (in an option) if the current position matches parser, without consuming any characters.
lookbehind <? matches the current point without consuming any characters, if the previous character matches the lookbehind parser (single-character negative lookbehind).
lookahead >? matches the current point without consuming any characters, if it matches the lookahead parser (positive lookahead).
parser - rejected matches parser, except if rejected matches at this point.
before & after parses before and, if it succeeds, parses after. This basically invokes the monad's >>= (bind) method.
first / second tries to parse first, and failing that parses second, unless first has committed, in which case it fails immediately.
(parser ?) tries to match parser, otherwise does nothing.
(parser *) matches zero or more occurrences of parser, as long as each one actually consumes input characters.
(parser +) matches one or more occurrences of parser, as long as each one actually consumes input characters.
decide first second tries to parse first, and failing that parses second, unless first has committed, in which case it fails immediately.
choice decision parser provides a decision name to the choice about to be made in parser, to allow committing to it.
prev parser succeeds if parser matches at the previous character. It does not consume any input.
peek parser succeeds if parser matches at this point, but does not consume any input.
reject parser name fails if parser matches at this point, and does nothing otherwise. If name is provided, it is used in the error message; otherwise the message uses the current character.
nonEmpty parser succeeds if parser matches some non-empty input characters at this point.
empty always matches without consuming any input.
eof matches the end of the input.
sol matches the start of a line.
commit decision commits the parser to all the decisions up to the most recent parent decision. This makes all tokens generated in this parsing path immediately available to the caller.
nextLine increments the sLine counter and resets sColumn.
with setField getField value parser invokes the specified parser with the value of the specified field set to value for the duration of the invocation, using the setField and getField functions to manipulate it.
parser `forbidding` pattern parses the specified parser ensuring that it does not contain anything matching the forbidden pattern.
parser `limitedTo` limit parses the specified parser ensuring that it does not consume more than the limit input chars.
nextIf test fails if the current position matches the forbidden pattern in the State or if the State's lookahead limit is reached. Otherwise it consumes (and buffers) the next input char if it satisfies test.
finishToken places all collected text into a new token and begins a new one, or does nothing if there are no collected characters.
wrap parser invokes the parser, ensures any unclaimed input characters are wrapped into a token (only happens when testing productions), ensures no input is left unparsed, and returns the parser's result.
consume parser invokes the parser and then consumes all remaining unparsed input characters.
token code parser places all text matched by parser into a Token with the specified code (unless it is empty). Note that it collects the text even if there is an error.
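To make the token-collection entries above concrete, here is a minimal, self-contained sketch of the finishToken idea: characters are accumulated in reverse and flushed into a token when the current token is finished. The CollectState record and its fields are simplified stand-ins invented for this sketch, not the library's types:

    -- Illustration only: simplified stand-ins for the richer types documented below.
    data Token = Token { tCode :: String, tText :: String } deriving Show

    data CollectState = CollectState
      { sChars :: String   -- (reversed) characters collected for the current token
      , sCode  :: String   -- code of the token we are collecting characters for
      } deriving Show

    -- Place all collected text into a new token and begin a new one,
    -- or do nothing if there are no collected characters.
    finishToken :: CollectState -> (Maybe Token, CollectState)
    finishToken state
      | null (sChars state) = (Nothing, state)
      | otherwise           =
          ( Just (Token (sCode state) (reverse (sChars state)))
          , state { sChars = [] } )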
fake code text creates a token with the specified code and "fake" text characters, instead of whatever characters are collected so far.
meta parser collects the text matched by the specified parser into a Meta token.
indicator code collects the text matched by the specified parser into an Indicator token.
text parser collects the text matched by the specified parser into a Text token.
nest code returns an empty token with the specified begin/end code to signal nesting.
patternTokenizer pattern converts the pattern to a simple Tokenizer.
parserTokenizer what parser converts the parser returning what to a simple Tokenizer (only used for tests). The result is reported as a token with the Detected code.
errorTokens tokens message withFollowing appends an Error token with the specified message at the end of tokens, and if withFollowing also appends the unparsed text following the error as a final Unparsed token.
commitBugs reply inserts an error token if a commit was made outside a named choice. This should never happen outside tests.
yaml name input converts the Unicode input (called name in error messages) to a list of Tokens according to the YAML spec. This is it!
pName name converts a parser name to the "proper" spec name.
tokenizers returns a mapping from a production name to a production tokenizer.
tokenizer name converts the production with the specified name to a simple Tokenizer, or Nothing if it isn't known (a minimal sketch of this lookup pattern follows the tokenizer entries below).
tokenizersWithN returns a mapping from a production name to a production tokenizer (that takes an n argument).
tokenizerWithN name n converts the production (that requires an n argument) with the specified name to a simple Tokenizer, or Nothing if it isn't known.
tokenizersWithC returns a mapping from a production name to a production tokenizer (that takes a c argument).
tokenizerWithC name c converts the production (that requires a c argument) with the specified name to a simple Tokenizer, or Nothing if it isn't known.
tokenizersWithT returns a mapping from a production name to a production tokenizer (that takes a t argument).
tokenizerWithT name t converts the production (that requires a t argument) with the specified name to a simple Tokenizer, or Nothing if it isn't known.
tokenizersWithNC returns a mapping from a production name to a production tokenizer (that requires n and c arguments).
tokenizerWithNC name n c converts the production (that requires n and c arguments) with the specified name to a simple Tokenizer, or Nothing if it isn't known.
tokenizersWithNT returns a mapping from a production name to a production tokenizer (that requires n and t arguments).
tokenizerWithNT name n t converts the production (that requires n and t arguments) with the specified name to a simple Tokenizer, or Nothing if it isn't known.
tokenizerNames returns the list of all productions (tokenizers).
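The tokenizer lookup functions above all share one pattern: a map from production names to tokenizers, consulted with a lookup that yields Nothing for unknown names. Below is a minimal, self-contained sketch of that pattern; the Tokenizer alias and the two sample productions are stand-ins for illustration, not the library's definitions:

    import qualified Data.Map as Map

    -- Stand-in for the real Tokenizer type (illustration only).
    type Tokenizer = String -> [String]

    -- Hypothetical table mapping YAML production names to their tokenizers.
    tokenizers :: Map.Map String Tokenizer
    tokenizers = Map.fromList
      [ ("c-printable", \input -> ["Text: "  ++ input])
      , ("b-break",     \input -> ["Break: " ++ input])
      ]

    -- Look up a production by name; Nothing if it isn't known.
    tokenizer :: String -> Maybe Tokenizer
    tokenizer name = Map.lookup name tokenizers

With this sketch, tokenizer "c-printable" yields Just a tokenizer, while tokenizer "no-such-production" yields Nothing.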
detect_utf_encoding doesn't actually detect the encoding; we just call it this way to make the productions compatible with the spec. Instead it simply reports the encoding (which was already detected when we started parsing).
na is the "non-applicable" indentation value. We use Haskell's laziness to verify it really is never used.
asInteger returns the last consumed character, which is assumed to be a decimal digit, as an integer.
result value is the same as return value, except that we give the Haskell type deduction the additional boost it needs to figure out this is wrapped in a Parser.

A Tokenizer converts a (named) input text into a list of Tokens. Errors are reported as tokens with the Error code, and the unparsed text following an error may be attached as a final token.
Chomp is the chomping method.
Context is the production context.
Match parameter result specifies that we can convert the parameter to a Parser returning the result.
State is the internal parser state. We don't bother with parameterising it with a "UserState"; we just bundle the generic and specific fields together (not that it is that easy to draw the line - is sLine generic or specific?). Its fields are:
  sName      - the input name for error messages.
  sEncoding  - the input UTF encoding.
  sDecision  - current decision name.
  sLimit     - lookahead characters limit.
  sForbidden - pattern we must not enter into.
  sIsPeek    - disables token generation.
  sChars     - (reversed) characters collected for a token.
  sOffset    - offset in characters in the input.
  sLine      - line number; builds on YAML's line break definition.
  sColumn    - column number; actually character number - we hate tabs.
  sCode      - code of the token we are collecting chars for.
  sLast      - last matched character.
  sInput     - the decoded input characters.
Each invocation of a Parser yields a Reply. The Result is only one part of the Reply. A Reply's fields are:
  rResult - parsing result.
  rTokens - tokens generated by the parser.
  rCommit - commitment to a decision point.
  rState  - the updated parser state.
The Result of each invocation is either an error, the actual result, or a continuation for computing the actual result.
A Parser is basically a function computing a Reply.
A Token is a parsed token; tCode is the specific token Code, and tText holds the contained input chars, if any.
Code enumerates the token codes.
Encoding lists the recognized Unicode encodings. UTF-32 isn't required by YAML parsers.
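Read together, the type entries above describe a simple shape: a Parser is a function from the State to a Reply, the Reply carries the tokens and the updated state, and the Result is an error, a value, or a continuation. The following schematic sketch shows that shape; the field names follow the documentation, while the Token and State placeholders and the type chosen for rCommit are assumptions made for the sketch rather than the library's actual definitions:

    -- Placeholders standing in for the richer types documented above.
    type Token = (String, String)   -- (code, text) stand-in
    data State = State              -- opaque stand-in for the parser state

    -- The Result of each invocation: an error, the actual result,
    -- or a continuation for computing the actual result.
    data Result a
      = Failed String
      | Result a
      | More (Parser a)

    -- Each invocation of a Parser yields a Reply; the Result is only one part of it.
    data Reply a = Reply
      { rResult :: Result a       -- parsing result
      , rTokens :: [Token]        -- tokens generated by the parser
      , rCommit :: Maybe String   -- commitment to a decision point (assumed type)
      , rState  :: State          -- the updated parser state
      }

    -- A Parser is basically a function computing a Reply.
    newtype Parser a = Parser (State -> Reply a)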