Text.Yaml.Reference (YamlReference-0.7)
Maintainer:  yaml-oren@ben-kiki.org
Stability:   alpha
Portability: portable

record |> field is the same as field record, but is more readable.
decode bytes automatically detects the Encoding used and converts the bytes to Unicode characters.
detectEncoding text examines the first few characters (bytes) of the text to deduce the Unicode encoding used, according to the YAML spec.
undoEncoding encoding bytes converts a bytes stream to Unicode characters according to the encoding.
combinePairs chars converts each pair of UTF-16 surrogate characters to a single Unicode character.
combineLead lead rest combines the lead surrogate with the head of the rest of the input chars, assumed to be a trail surrogate, and continues combining surrogate pairs.
surrogateOffset is copied from the Unicode FAQs.
combineSurrogates lead trail combines two UTF-16 surrogates into a single Unicode character (see the sketch below).
hasFewerThan bytes n checks whether there are fewer than n bytes left to read.
undoUTF16LE bytes decodes a UTF-16-LE bytes stream to Unicode chars.
undoUTF16BE bytes decodes a UTF-16-BE bytes stream to Unicode chars.
undoUTF8 bytes decodes a UTF-8 bytes stream to Unicode chars.
decodeTwoUTF8 first bytes decodes a two-byte UTF-8 character, where the first byte is already available and the second is the head of the bytes, and then continues to undo the UTF-8 encoding.
combineTwoUTF8 first second combines the first and second bytes of a two-byte UTF-8 char into a single Unicode char.
decodeThreeUTF8 first bytes decodes a three-byte UTF-8 character, where the first byte is already available and the second and third are the head of the bytes, and then continues to undo the UTF-8 encoding.
combineThreeUTF8 first second third combines the first, second and third bytes of a three-byte UTF-8 char into a single Unicode char.
decodeFourUTF8 first bytes decodes a four-byte UTF-8 character, where the first byte is already available and the second, third and fourth are the head of the bytes, and then continues to undo the UTF-8 encoding.
combineFourUTF8 first second third fourth combines the first, second, third and fourth bytes of a four-byte UTF-8 char into a single Unicode char.
escapeString string escapes all the non-ASCII characters in the string, as well as the "\" character, using the "\xXX", "\uXXXX" and "\UXXXXXXXX" escape sequences.
toHex digits int converts the int to the specified number of hexadecimal digits.
showTokens tokens converts a list of tokens to a multi-line YEAST text.
initialState name input returns an initial State for parsing the input (with name for error messages).
setDecision decision state sets the sDecision field to decision.
setLimit limit state sets the sLimit field to limit.
setForbidden forbidden state sets the sForbidden field to forbidden.
setCode code state sets the sCode field to code.
returnReply state result prepares a Reply with the specified state and result.
tokenReply state token returns a Reply containing the state and token. Any collected characters are cleared (either there are none, or we put them in this token, or we don't want them).
failReply state message prepares a Reply with the specified state and error message.
unexpectedReply state returns a failReply for an unexpected character.
parser % n repeats parser exactly n times.
parser <% n matches fewer than n occurrences of parser.
decision ^ (option / option / ...) provides a decision name to the choice about to be made, to allow committing to it.
parser ! decision commits to decision after successfully matching the parser.
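The surrogate recombination described for surrogateOffset and combineSurrogates above follows the standard formula from the Unicode FAQ. As a minimal, self-contained sketch of that calculation (not the package's actual definitions):

import Data.Bits (shiftL)
import Data.Char (chr, ord)

-- Offset used when recombining a UTF-16 surrogate pair, as given in the
-- Unicode FAQ.
surrogateOffset :: Int
surrogateOffset = 0x10000 - (0xD800 `shiftL` 10) - 0xDC00

-- Combine a lead and a trail surrogate into a single Unicode character.
combineSurrogates :: Char -> Char -> Char
combineSurrogates lead trail =
  chr $ (ord lead `shiftL` 10) + ord trail + surrogateOffset

-- For example, the pair U+D83D U+DE00 recombines to U+1F600:
--   combineSurrogates '\xD83D' '\xDE00' == '\x1F600'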
parser ?! decision commits to decision if the current position matches parser, without consuming any characters.
parser - rejected matches parser, except if rejected matches at this point.
before & after parses before and, if it succeeds, parses after. This basically invokes the monad's >>= (bind) method.
first / second tries to parse first, and failing that parses second, unless first has committed, in which case it fails immediately.
(parser ?) tries to match parser, otherwise does nothing.
(parser *) matches zero or more occurrences of parser, as long as each one actually consumes input characters.
(parser +) matches one or more occurrences of parser, as long as each one actually consumes input characters.
decide first second tries to parse first, and failing that parses second, unless first has committed, in which case it fails immediately.
choice decision parser provides a decision name to the choice about to be made in parser, to allow committing to it.
peek parser succeeds if parser matches at this point, but does not consume any input.
reject parser name fails if parser matches at this point, and does nothing otherwise. If name is provided, it is used in the error message; otherwise the message uses the current character.
nonEmpty parser succeeds if parser matches some non-empty input characters at this point.
empty always matches without consuming any input.
eof matches the end of the input.
sol matches the start of a line.
commit decision commits the parser to all the decisions up to the most recent parent decision. This makes all tokens generated in this parsing path immediately available to the caller.
nextLine increments the sLine counter and resets sColumn.
with setField getField value parser invokes the specified parser with the value of the specified field set to value for the duration of the invocation, using the setField and getField functions to manipulate it (see the sketch below).
parser `forbidding` pattern parses the specified parser, ensuring that it does not contain anything matching the forbidden pattern.
parser `limitedTo` limit parses the specified parser, ensuring that it does not consume more than the limit input chars.
nextIf test fails if the current position matches the State forbidden pattern or if the State lookahead limit is reached. Otherwise it consumes (and buffers) the next input char if it satisfies test.
finishToken places all collected text into a new token and begins a new one, or does nothing if there are no collected characters.
wrap parser invokes the parser, ensures any unclaimed input characters are wrapped into a token (only happens when testing productions), ensures no input is left unparsed, and returns the parser's result.
consume parser invokes the parser and then consumes all remaining unparsed input characters.
token code parser places all text matched by parser into a Token with the specified code (unless it is empty). Note it collects the text even if there is an error.
fake code text creates a token with the specified code and "fake" text characters, instead of whatever characters are collected so far.
meta parser collects the text matched by the specified parser into a Meta token.
indicator code collects the text matched by the specified parser into an Indicator token.
text parser collects the text matched by the specified parser into a Text token.
nest code returns an empty token with the specified begin/end code, to signal nesting.
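To illustrate the field-override pattern behind with (and, in the same spirit, the forbidding and limitedTo wrappers, which can be expressed as with applied to the sForbidden and sLimit accessors), here is a simplified sketch over a bare state-threading parser type. The module's real Parser and State types are richer, so this only shows the idea, not the actual code:

-- A deliberately simplified parser that just threads a state value.
newtype P s a = P { runP :: s -> (a, s) }

-- Run a parser with a field temporarily set to a given value,
-- restoring the previous value afterwards.
with :: (v -> s -> s)  -- setField
     -> (s -> v)       -- getField
     -> v              -- value to install for the duration of the call
     -> P s a          -- parser to invoke
     -> P s a
with setField getField value (P parser) = P $ \state ->
  let oldValue        = getField state
      (result, after) = parser (setField value state)
  in (result, setField oldValue after)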
patternTokenizer pattern converts the pattern to a simple Tokenizer.
parserTokenizer what parser converts the parser returning what to a simple Tokenizer (only used for tests). The result is reported as a token with the Detected code.
errorTokens tokens message withFollowing appends an Error token with the specified message at the end of tokens and, if withFollowing, also appends the unparsed text following the error as a final Unparsed token.
commitBugs reply inserts an error token if a commit was made outside a named choice. This should never happen outside tests.
yaml name input converts the Unicode input (called name in error messages) to a list of Tokens according to the YAML spec. This is it!
pName name converts a parser name to the "proper" spec name.
tokenizers returns a mapping from a production name to a production tokenizer.
tokenizer name converts the production with the specified name to a simple Tokenizer, or Nothing if it isn't known.
tokenizersWithN returns a mapping from a production name to a production tokenizer (that takes an n argument).
tokenizerWithN name n converts the production (that requires an n argument) with the specified name to a simple Tokenizer, or Nothing if it isn't known (see the sketch below).
tokenizersWithC returns a mapping from a production name to a production tokenizer (that takes a c argument).
tokenizerWithC name c converts the production (that requires a c argument) with the specified name to a simple Tokenizer, or Nothing if it isn't known.
tokenizersWithS returns a mapping from a production name to a production tokenizer (that takes an s argument).
tokenizerWithS name s converts the production (that requires an s argument) with the specified name to a simple Tokenizer, or Nothing if it isn't known.
tokenizersWithT returns a mapping from a production name to a production tokenizer (that takes a t argument).
tokenizerWithT name t converts the production (that requires a t argument) with the specified name to a simple Tokenizer, or Nothing if it isn't known.
tokenizersWithNC returns a mapping from a production name to a production tokenizer (that requires n and c arguments).
tokenizerWithNC name n c converts the production (that requires n and c arguments) with the specified name to a simple Tokenizer, or Nothing if it isn't known.
tokenizersWithNS returns a mapping from a production name to a production tokenizer (that requires n and s arguments).
tokenizerWithNS name n s converts the production (that requires n and s arguments) with the specified name to a simple Tokenizer, or Nothing if it isn't known.
tokenizersWithNT returns a mapping from a production name to a production tokenizer (that requires n and t arguments).
tokenizerWithNT name n t converts the production (that requires n and t arguments) with the specified name to a simple Tokenizer, or Nothing if it isn't known.
tokenizerNames returns the list of all productions (tokenizers).
detect_utf_encoding doesn't actually detect the encoding; we just call it this way to make the productions compatible with the spec. Instead it simply reports the encoding (which was already detected when we started parsing).
na is the "non-applicable" indentation value. We use Haskell's laziness to verify it really is never used.
asInteger returns the last consumed character, which is assumed to be a decimal digit, as an integer.
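The name-to-tokenizer lookup functions above all share one shape: a map from production names to (possibly parameterised) tokenizers, with the lookup partially applying the extra arguments. A small sketch of that shape, using stand-in types rather than the module's real Tokenizer and Token:

import qualified Data.Map as Map

-- Stand-in type for this sketch only; the real Tokenizer produces Tokens.
type Tokenizer = String -> String -> [String]

-- Productions that need an indentation argument n are stored as
-- functions from n to a Tokenizer.
tokenizersWithN :: Map.Map String (Int -> Tokenizer)
tokenizersWithN = Map.fromList
  [ ("s-indent", \n _name input -> [take n input])  -- placeholder body
  ]

-- Look up a production by name and partially apply n,
-- or return Nothing if the production isn't known.
tokenizerWithN :: String -> Int -> Maybe Tokenizer
tokenizerWithN name n = fmap ($ n) (Map.lookup name tokenizersWithN)

-- List of all productions handled by this table.
tokenizerNames :: [String]
tokenizerNames = Map.keys tokenizersWithN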
result value is the same as return value, except that we give the Haskell type deduction the additional boost it needs to figure out this is wrapped in a Parser.
A Tokenizer converts a (named) input text into a list of Tokens. Errors are reported as tokens with the Error code, and the unparsed text following an error may be attached as a final token.
Chomp is the chomping method.
Style is the scalar style.
Context is the production context.
Match parameter result specifies that we can convert the parameter to a Parser returning the result.
State is the internal parser state. We don't bother with parameterising it with a "UserState"; we just bundle the generic and specific fields together (not that it is that easy to draw the line: is sLine generic or specific?).
sName is the input name for error messages.
sEncoding is the input UTF encoding.
sDecision is the current decision name.
sLimit is the lookahead characters limit.
sForbidden is the pattern we must not enter into.
sIsPeek disables token generation.
sChars holds the (reversed) characters collected for a token.
sOffset is the offset in characters in the input.
sLine builds on YAML's line break definition.
sColumn is actually the character number (we hate tabs).
sCode is the Code of the token we are collecting chars for.
sLast is the last matched character.
sInput is the decoded input characters.
Each invocation of a Parser yields a Reply. The Result is only one part of the Reply (see the sketch below).
rResult is the parsing result.
rTokens are the tokens generated by the parser.
rCommit is the commitment to a decision point.
rState is the updated parser state.
The Result of each invocation is either an error, the actual result, or a continuation for computing the actual result.
A Parser is basically a function computing a Reply.
A Token is a parsed token.
tCode is the specific token Code.
tText is the contained input chars, if any.
Code is the set of token codes.
Encoding is the set of recognized Unicode encodings. UTF-32 isn't required by YAML parsers.
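To make the relationship between Parser, Reply and Result concrete, here is a schematic sketch. The field names follow the documentation above, but the State and Token records are heavily simplified stand-ins, not the module's actual definitions:

-- Heavily simplified stand-ins for the real record types.
data Token = Token { tCode :: String, tText :: String }
data State = State { sName :: String, sInput :: String }

-- The Result of an invocation: an error, the actual result,
-- or a continuation for computing the actual result.
data Result result
  = Failed String            -- error message
  | Result result            -- actual result
  | More (Parser result)     -- continuation

-- Each invocation of a Parser yields a Reply; the Result is one part of it.
data Reply result = Reply
  { rResult :: Result result -- parsing result
  , rTokens :: [Token]       -- tokens generated by the parser
  , rCommit :: Maybe String  -- commitment to a decision point
  , rState  :: State         -- the updated parser state
  }

-- A Parser is basically a function computing a Reply.
newtype Parser result = Parser (State -> Reply result)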