lexer-applicative-2.0: Language.Lexer.Applicative

```haskell
data TokenStream tok
  = TsToken tok (TokenStream tok)
  | TsEof
  | TsError LexicalError
```

A stream of tokens.

```haskell
data LexicalError
```

The lexical error exception.

```haskell
data Recognizer tok
```

A token recognizer.

Recognizer values are constructed by functions like `longest` and `longestShortest`, combined with `mappend`, and used by `token` and `whitespace`.

When a recognizer returns without consuming any characters, a lexical error is signaled.

```haskell
data Lexer tok = Lexer
  { lexerTokenRE      :: Recognizer tok
  , lexerWhitespaceRE :: Recognizer ()
  }
```

A Lexer specification consists of two recognizers: one for meaningful tokens and one for whitespace and comments.

Although you can construct Lexers directly, it is more convenient to build them with `token`, `whitespace`, and the `Monoid` instance, like this:

```haskell
myLexer :: Lexer MyToken
myLexer = mconcat
  [ token      (longest myToken)
  , whitespace (longest myWhiteSpace)
  , whitespace (longestShortest myComment)
  ]
```

```haskell
token :: Recognizer tok -> Lexer tok
```

Build a lexer with the given token recognizer and no (i.e. `mempty`) whitespace recognizer.

`token` is a monoid homomorphism:

```haskell
token a <> token b = token (a <> b)
```

```haskell
whitespace :: Recognizer a -> Lexer tok
```

Build a lexer with the given whitespace recognizer and no (i.e. `mempty`) token recognizer.

`whitespace` is a monoid homomorphism:

```haskell
whitespace a <> whitespace b = whitespace (a <> b)
```

```haskell
longest :: RE Char tok -> Recognizer tok
```

When scanning the next token, the regular expression competes with the other Recognizers of its Lexer. If it wins, its result becomes the next token.

`longest` has the following properties:

```haskell
longest (r1 <|> r2) = longest r1 <> longest r2
longest r = longestShortest const r (const $ pure ())
```

```haskell
longestShortest
  :: (pref -> suff -> tok)   -- how to combine the prefix and suffix
  -> RE Char pref            -- regex for the longest prefix
  -> (pref -> RE Char suff)  -- regex for the shortest suffix
  -> Recognizer tok
```

This is a more sophisticated recognizer than `longest`.

It recognizes a token consisting of a prefix and a suffix, where the prefix is chosen longest and the suffix is chosen shortest.

An example would be a C block comment:

```
/* comment text */
```

The naive

```haskell
longest (string "/*" *> many anySym *> string "*/")
```

doesn't work because it consumes too much: in

```
/* xxx */ yyy /* zzz */
```

it will treat the whole line as a comment.

This is where `longestShortest` comes in handy:

```haskell
longestShortest
  (\_ _ -> ()) -- don't care about the comment text
  (string "/*")
  (\_ -> many anySym *> string "*/")
```

Operationally, the prefix regex first competes with the other Recognizers for the longest match. If it wins, then the shortest match for the suffix regex is found, and the two results are combined with the given function to produce a token.

The two regular expressions combined must consume some input, or else a LexicalError is thrown.
However, any one of them may return without consuming input.

* * *

Once the prefix regex wins, the choice is committed; the suffix regex must match, or else a LexicalError is thrown. Therefore,

```haskell
longestShortest f pref suff1 <> longestShortest f pref suff2
  = longestShortest f pref suff1
```

and is not the same as

```haskell
longestShortest f pref (suff1 <|> suff2)
```

The following holds, however:

```haskell
longestShortest f pref1 suff <> longestShortest f pref2 suff
  = longestShortest f (pref1 <|> pref2) suff
```

* * *

Passing the result of the prefix into both the suffix and the combining function may seem superfluous; indeed, we could get away with

```haskell
longestShortest :: RE Char pref -> (pref -> RE Char tok) -> Recognizer tok
```

or even

```haskell
longestShortest :: RE Char (RE Char tok) -> Recognizer tok
```

This is done purely for convenience and readability; the intention is that the pref passed into the suffix is used to customize the regular expression, which would still return only its part of the token, and then the function will combine the two parts. Of course, you don't need to follow this recommendation. Thanks to parametricity, all three versions are equivalent.

```haskell
streamToList :: TokenStream tok -> [tok]
```

Convert a TokenStream to a list of tokens. Turns TsError into a runtime LexicalError exception.

```haskell
streamToEitherList :: TokenStream tok -> Either LexicalError [tok]
```

Convert a TokenStream into either a token list or a LexicalError. This function may be occasionally useful, but in general its use is discouraged because it needs to force the whole stream before returning a result.

```haskell
runLexer
  :: Lexer tok  -- lexer specification
  -> String     -- source file name (used in locations)
  -> String     -- source text
  -> TokenStream (L tok)
```

Run a lexer on a string and produce a lazy stream of tokens, each annotated with its source location.

Internally, `annotate` takes a source file name and the contents and produces, for each character, the character, its position, and the previous position.
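The pieces above can be put together into a small end-to-end lexer. The following is a minimal sketch, not part of the library's documentation: the token type `Tok`, the recognizers, the input string, and the file name are illustrative, and it assumes the 2.0-style three-argument `longestShortest` along with `unLoc` from the srcloc package to strip the locations that `runLexer` attaches.

```haskell
import Control.Applicative (many, some, (<|>))
import Data.Char (isAlpha, isDigit, isSpace)
import Data.Loc (unLoc)              -- from the srcloc package
import Language.Lexer.Applicative
import Text.Regex.Applicative

-- Illustrative token type, not part of the library.
data Tok = Word String | Number Int
  deriving (Eq, Show)

myLexer :: Lexer Tok
myLexer = mconcat
  [ token      (longest tok)
  , whitespace (longest (some (psym isSpace)))
    -- A block comment needs longestShortest so that
    -- "/* a */ x /* b */" is not swallowed as one comment.
  , whitespace (longestShortest
                  (\_ _ -> ())   -- don't care about the comment text
                  (string "/*")
                  (\_ -> many anySym *> string "*/"))
  ]
  where
    tok =  Word <$> some (psym isAlpha)
       <|> Number . read <$> some (psym isDigit)

main :: IO ()
main =
  -- runLexer yields a lazy TokenStream of located tokens;
  -- streamToList forces it, throwing LexicalError on bad input.
  print . map unLoc . streamToList $
    runLexer myLexer "example.txt" "foo 42 /* skip me */ bar"
    -- expected: [Word "foo",Number 42,Word "bar"]
```

Note how the comment's prefix regex `string "/*"` competes for the longest match alongside the word, number, and space recognizers, and only after it wins is the shortest suffix match committed to.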