(c) 2019 Composewell Technologies, BSD3, streamly@composewell.com, experimental, GHC

write FD buffer offset length tries to write data on the given filesystem fd (cannot be a socket) up to the specified length starting from the given offset in the buffer. The write will not block the OS thread, but it may suspend the Haskell thread until the write can proceed. Returns the actual amount of data written.

Keep writing in a loop until all data in the buffer has been written.

write FD iovec count tries to write data on the given filesystem fd (cannot be a socket) from an iovec with the specified number of entries. The write will not block the OS thread, but it may suspend the Haskell thread until the write can proceed. Returns the actual amount of data written.

Keep writing an iovec in a loop until all the iovec entries are written.

Discard any exceptions or value returned by an effectful action. (Internal)

(c) Roman Leshchinskiy 2009-2012, BSD-style

Mutable primitive arrays associated with a primitive state token. These can be written to and read from in a monadic context that supports sequencing, such as IO or ST. Typically, a mutable primitive array is built and then converted to an immutable primitive array using an unsafe freeze. However, it is also acceptable to simply discard a mutable primitive array, since it lives in managed memory and will be garbage collected when no longer referenced.

Arrays of unboxed elements. This accepts types like Double, Char, Int and Word, as well as their fixed-length variants (Word8, Word16, etc.). Since the elements are unboxed, a primitive array is strict in its elements. This differs from the behavior of Array, which is lazy in its elements.

Convert the primitive array to a list.

The empty primitive array.

Create a new mutable primitive array of the given length.
The underlying memory is left uninitialized.!streamlyDResize a mutable primitive array. The new size is given in elements.This will either resize the array in-place or, if not possible, allocate the contents into a new, unpinned array and copy the original array's contents.+To avoid undefined behaviour, the original ( shall not be accessed anymore after a ! has been performed. Moreover, no reference to the old one should be kept in order to allow garbage collection of the original  in case a new  had to be allocated."streamlyShrink a mutable primitive array. The new size is given in elements. It must be smaller than the old size. The array will be resized in place. This function is only available when compiling with GHC 7.10 or newer.$streamly$Write an element to the given index.%streamlyCopy part of a mutable array into another mutable array. In the case that the destination and source arrays are the same, the regions may overlap.&streamly1Copy part of an array into another mutable array.'streamlysCopy a slice of an immutable primitive array to an address. The offset and length are given in elements of type a$. This function assumes that the  instance of a agrees with the StorableR instance. This function is only available when building with GHC 7.8 or newer.(streamlysCopy a slice of an immutable primitive array to an address. The offset and length are given in elements of type a$. This function assumes that the  instance of a agrees with the StorableR instance. This function is only available when building with GHC 7.8 or newer.)streamly7Fill a slice of a mutable primitive array with a value.*streamly>Get the size of a mutable primitive array in elements. Unlike +@, this function ensures sequencing in the presence of resizing.+streamlySize of the mutable primitive array in elements. This function shall not be used on primitive arrays that are an argument to or a result of ! or ".,streamly7Check if the two arrays refer to the same memory block.-streamlyyConvert a mutable byte array to an immutable one without copying. The array should not be modified after the conversion..streamlyyConvert an immutable array to a mutable one without copying. The original array should not be used after the conversion./streamly0Read a primitive value from the primitive array.0streamly2Get the size, in elements, of the primitive array.1streamly2Lazy right-associated fold over the elements of a .2streamly4Strict right-associated fold over the elements of a .3streamly1Lazy left-associated fold over the elements of a .4streamly3Strict left-associated fold over the elements of a .5streamly3Strict left-associated fold over the elements of a .6streamlyTraverse a primitive array. The traversal forces the resulting values and writes them to the new primitive array as it performs the monadic effects. Consequently:[traversePrimArrayP (\x -> print x $> bool x undefined (x == 2)) (fromList [1, 2, 3 :: Int])12 *** Exception: Prelude.undefinedIn many situations, 6 can replace A, changing the strictness characteristics of the traversal but typically improving the performance. Consider the following short-circuiting traversal: incrPositiveA :: PrimArray Int -> Maybe (PrimArray Int) incrPositiveA xs = traversePrimArray (\x -> bool Nothing (Just (x + 1)) (x > 0)) xsThis can be rewritten using 67. 
To do this, we must change the traversal context to  MaybeT (ST s), which has a  instance: incrPositiveB :: PrimArray Int -> Maybe (PrimArray Int) incrPositiveB xs = runST $ runMaybeT $ traversePrimArrayP (\x -> bool (MaybeT (return Nothing)) (MaybeT (return (Just (x + 1)))) (x > 0)) xsBenchmarks demonstrate that the second implementation runs 150 times faster than the first. It also results in fewer allocations.7streamlyaFilter the primitive array, keeping the elements for which the monadic predicate evaluates true.8streamly_Map over the primitive array, keeping the elements for which the monadic predicate provides a .9streamlyWGenerate a primitive array by evaluating the monadic generator function at each index.:streamlyaExecute the monadic action the given number of times and store the results in a primitive array.;streamly+Map over the elements of a primitive array.<streamly3Indexed map over the elements of a primitive array.=streamly>Filter elements of a primitive array according to a predicate.>streamlyaFilter the primitive array, keeping the elements for which the monadic predicate evaluates true.?streamlycMap over the primitive array, keeping the elements for which the applicative predicate provides a .@streamlybMap over a primitive array, optionally discarding some elements. This has the same behavior as Data.Maybe.mapMaybe.AstreamlySTraverse a primitive array. The traversal performs all of the applicative effects beforeY forcing the resulting values and writing them to the new primitive array. Consequently:ZtraversePrimArray (\x -> print x $> bool x undefined (x == 2)) (fromList [1, 2, 3 :: Int])123 *** Exception: Prelude.undefined The function 66 always outperforms this function, but it requires a B constraint, and it forces the values as it performs the effects.Bstreamly:Traverse a primitive array with the index of each element.CstreamlyTraverse a primitive array with the indices. The traversal forces the resulting values and writes them to the new primitive array as it performs the monadic effects.DstreamlyGenerate a primitive array.EstreamlyKCreate a primitive array by copying the element the given number of times.Fstreamly[Generate a primitive array by evaluating the applicative generator function at each index.Gstreamly\Execute the applicative action the given number of times and store the results in a vector.HstreamlyCTraverse the primitive array, discarding the results. There is no N variant of this function since it would not provide any performance benefit.IstreamlyTTraverse the primitive array with the indices, discarding the results. There is no N variant of this function since it would not provide any performance benefit.JstreamlyKstreamlyLstreamlyMstreamlyNstreamlyALexicographic ordering. 
Subject to change between major versions.Ostreamly!streamlynew size"streamlynew size$streamlyarraystreamlyindexstreamlyelement%streamlydestination arraystreamlyoffset into destination arraystreamly source arraystreamlyoffset into source arraystreamlynumber of elements to copy&streamlydestination arraystreamlyoffset into destination arraystreamly source arraystreamlyoffset into source arraystreamlynumber of elements to copy'streamlydestination pointerstreamly source arraystreamlyoffset into source arraystreamlynumber of prims to copy(streamlydestination pointerstreamly source arraystreamlyoffset into source arraystreamlynumber of prims to copy)streamly array to fillstreamlyoffset into arraystreamlynumber of values to fillstreamlyvalue to fill with*streamlyarray9streamlylengthstreamly generator>streamlymapping functionstreamlyprimitive array?streamlymapping functionstreamlyprimitive arrayAstreamlymapping functionstreamlyprimitive arrayDstreamlylengthstreamlyelement from indexEstreamlylengthstreamlyelementFstreamlylengthstreamlyelement from indexGstreamlylengthstreamlyapplicative element producer1 !"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHI1 !"#$/-.&%'(),*+012345HI;<DE=@ABFG>?6C9:78 !(c) 2019 Composewell TechnologiesBSD3streamly@composewell.com experimentalGHCSafePstreamlyA P is a special type of Fold? that does not accumulate any value, but runs only effects. A PH has no state to maintain therefore can be a bit more efficient than a Fold* with '()' as the state, especially when PCs are composed with other operations. A Sink can be upgraded to a Fold, but a Fold! cannot be converted into a Sink.PQPQ (c) 2015 Dan DoelBSD3streamly@composewell.com non-portableNone2456FHMSVX-Vstreamly!Create a new small mutable array.Wstreamly5Read the element at a given index in a mutable array.Xstreamly6Write an element at the given idex in a mutable array.Ystreamly)Look up an element in an immutable array.7The purpose of returning a result using a monad is to allow the caller to avoid retaining references to the array. Evaluating the return value will cause the array lookup to be performed, even though it may not require the element of the array to be evaluated (which could throw an exception). For instance: Kdata Box a = Box a ... f sa = case indexSmallArrayM sa 0 of Box x -> ...x" is not a closure that references sa% as it would be if we instead wrote: let x = indexSmallArray sa 0And does not prevent sa from being garbage collected. Note that k is not adequate for this use, as it is a newtype, and cannot be evaluated without evaluating the element.Zstreamly)Look up an element in an immutable array.[streamlyRead a value from the immutable array at the given index, returning the result in an unboxed unary tuple. 
This is currently used to implement folds.\streamly/Create a copy of a slice of an immutable array.]streamly,Create a copy of a slice of a mutable array.^streamlyFCreate an immutable array corresponding to a slice of a mutable array.<This operation copies the portion of the array to be frozen._streamly!Render a mutable array immutable.hThis operation performs no copying, so care must be taken not to modify the input array after freezing.`streamlyFCreate a mutable array corresponding to a slice of an immutable array.<This operation copies the portion of the array to be thawed.astreamly"Render an immutable array mutable.GThis operation performs no copying, so care must be taken with its use.bstreamly8Copy a slice of an immutable array into a mutable array.cstreamly/Copy a slice of one mutable array into another.fstreamlyThis is the fastest, most straightforward way to traverse an array, but it only works correctly with a sufficiently "affine" D instance. In particular, it must only produce *one* result array. 56>-transformed monads, for example, will not work right at all.gstreamly*Strict map over the elements of the array.istreamly Create a Tw from a list of a known length. If the length of the list does not match the given length, this throws an exception.jstreamly Create a T from a list.lstreamlynstreamlyrstreamly|streamlyALexicographic ordering. Subject to change between major versions.}streamlystreamly Vstreamlysizestreamlyinitial contentsWstreamlyarraystreamlyindexXstreamlyarraystreamlyindexstreamly new elementYstreamlyarraystreamlyindexZstreamlyarraystreamlyindex\streamlysourcestreamlyoffsetstreamlylength]streamlysourcestreamlyoffsetstreamlylength^streamlysourcestreamlyoffsetstreamlylength`streamlysourcestreamlyoffsetstreamlylengthbstreamly destinationstreamlydestination offsetstreamlysourcestreamly source offsetstreamlylengthcstreamly destinationstreamlydestination offsetstreamlysourcestreamly source offsetstreamlylengthRSTUVWXYZ[\]^_`abcdefghijTURSVWXbcZY[\]^_`hadejigf1 !(c) 2019 Composewell TechnologiesBSD3streamly@composewell.com experimentalGHCSafe>EX/ !(c) 2019 Composewell TechnologiesBSD3streamly@composewell.com experimentalGHCNone "#>ESXg2streamly(Lazy right associative fold to a stream. J(c) 2019 Composewell Technologies (c) 2013 Gabriel GonzalezBSD3streamly@composewell.com experimentalGHCSafe6sstreamly A strict streamly A strict streamly#Convert strict Maybe' to lazy Maybe !(c) 2019 Composewell TechnologiesBSD3streamly@composewell.com experimentalGHCSafeEXA+streamlyMRepresents a stateful transformation over an input stream of values of type a to outputs of type b in  m.streamlyThe composed pipe distributes the input to both the constituent pipes and zips the output of the two using a supplied zipping function.streamlyiThe composed pipe distributes the input to both the constituent pipes and merges the outputs of the two.streamlyLift a pure function to a .streamlyfCompose two pipes such that the output of the second pipe is attached to the input of the first pipe. !(c) 2019 Composewell TechnologiesBSD3streamly@composewell.com experimentalGHCSafe "#>ESXDstreamlyLift a monadic function to a .(c) 2017 Harendra KumarBSD3streamly@composewell.com experimentalGHCSafeNstreamlyXRun an action forever periodically at the given frequency specified in per second (Hz).streamlyRun a computation on every clock tick, the clock runs at the specified frequency. 
It allows running a computation at high frequency efficiently by maintaining a local clock and adjusting it with the provided base clock at longer intervals. The first argument is a base clock returning some notion of time in microseconds. The second argument is the frequency in per second (Hz). The third argument is the action to run, the action is provided the local time as an argument.(c) 2019 Harendra KumarBSD3streamly@composewell.com experimentalGHCNoneMXtstreamlyERelative times are relative to some arbitrary point of time. Unlike - they are not relative to a predefined epoch.streamly;Absolute times are relative to a predefined epoch in time.  represents times using P which can represent times up to ~292 billion years at a nanosecond resolution.streamly8A type class for converting between units of time using  as the intermediate representation with a nanosecond resolution. This system of units can represent up to ~292 years at nanosecond resolution with fast arithmetic operations.NOTE: Converting to and from units may truncate the value depending on the original value and the size and resolution of the destination unit.streamly5A type class for converting between time units using  as the intermediate and the widest representation with a nanosecond resolution. This system of units can represent arbitrarily large times but provides least efficient arithmetic operations due to  arithmetic.NOTE: Converting to and from units may truncate the value depending on the original value and the size and resolution of the destination unit.8A type class for converting between units of time using  as the intermediate representation. This system of units can represent up to ~292 billion years at nanosecond resolution with reasonably efficient arithmetic operations.NOTE: Converting to and from units may truncate the value depending on the original value and the size and resolution of the destination unit.streamlyData type to represent practically large quantities of time efficiently. It can represent time up to ~292 billion years at nanosecond resolution.streamlysecondsstreamly nanosecondsstreamlyAn d time representation with a millisecond resolution. It can represent time up to ~292 million years.streamlyAn ` time representation with a microsecond resolution. It can represent time up to ~292,000 years.streamlyAn [ time representation with a nanosecond resolution. It can represent time up to ~292 years.streamly Convert a  to an absolute time.streamlyConvert absolute time to a .streamly Convert a  to a relative time.streamlyConvert relative time to a .streamly/Difference between two absolute points of time.streamlyDConvert nanoseconds to a string showing time in an appropriate unit.j(c) 2019 Harendra Kumar (c) 2009-2012, Cetin Sert (c) 2010, Eugene KirpichovBSD3streamly@composewell.com experimentalGHCNone7MX  streamlyClock types. A clock may be system-wide (that is, visible to all processes) or per-process (measuring time that is meaningful only within a process). All implementations shall support CLOCK_REALTIME. (The only suspend-aware monotonic is CLOCK_BOOTTIME on Linux.) streamlyThe identifier for the system-wide monotonic clock, which is defined as a clock measuring real time, whose value cannot be set via  clock_settime and which cannot have negative clock jumps. The maximum possible clock jump shall be implementation defined. 
For this clock, the value returned by  represents the amount of time (in seconds and nanoseconds) since an unspecified point in the past (for example, system start-up time, or the Epoch). This point does not change after system start-up time. Note that the absolute value of the monotonic clock is meaningless (because its origin is arbitrary), and thus there is no need to set it. Furthermore, realtime applications can rely on the fact that the value of this clock is never set. streamlyfThe identifier of the system-wide clock measuring real time. For this clock, the value returned by O represents the amount of time (in seconds and nanoseconds) since the Epoch.streamlysThe identifier of the CPU-time clock associated with the calling process. For this clock, the value returned by C represents the amount of execution time of the current process.streamlyuThe identifier of the CPU-time clock associated with the calling OS thread. For this clock, the value returned by E represents the amount of execution time of the current OS thread.streamly(since Linux 2.6.28; Linux and Mac OSX) Similar to CLOCK_MONOTONIC, but provides access to a raw hardware-based time that is not subject to NTP adjustments or the incremental adjustments performed by adjtime(3).streamly(since Linux 2.6.32; Linux and Mac OSX) A faster but less precise version of CLOCK_MONOTONIC. Use when you need very fast, but not fine-grained timestamps.streamlye(since Linux 2.6.39; Linux and Mac OSX) Identical to CLOCK_MONOTONIC, except it also includes any time that the system is suspended. This allows applications to get a suspend-aware monotonic clock without having to deal with the complications of CLOCK_REALTIME, which may have discontinuities if the time is changed using settimeofday(2).streamly(since Linux 2.6.32; Linux-specific) A faster but less precise version of CLOCK_REALTIME. Use when you need very fast, but not fine-grained timestamps.        (c) 2017 Harendra KumarBSD3streamly@composewell.com experimentalGHCNone,=>?@AEFHMSX_"streamlyA monad that can perform concurrent or parallel IO operations. Streams that can be composed concurrently require the underlying monad to be ".streamlyBuffering policy for persistent push workers (in ParallelT). In a pull style SVar (in AsyncT, AheadT etc.), the consumer side dispatches workers on demand, workers terminate if the buffer is full or if the consumer is not cosuming fast enough. In a push style SVar, a worker is dispatched only once, workers are persistent and keep pushing work to the consumer via a bounded buffer. If the buffer becomes full the worker either blocks, or it can drop an item from the buffer to make space.pPull style SVars are useful in lazy stream evaluation whereas push style SVars are useful in strict left Folds.iXXX Maybe we can separate the implementation in two different types instead of using a common SVar type.estreamly6Specifies the stream yield rate in yields per second (Hertz*). We keep accumulating yield credits at hS. At any point of time we allow only as many yields as we have accumulated as per h{ since the start of time. If the consumer or the producer is slower or faster, the actual rate may fall behind or exceed h. We try to recover the gap between the two by increasing or decreasing the pull rate from the producer. However, if the gap becomes more than j$ we try to recover only as much as j.g puts a bound on how low the instantaneous rate can go when recovering the rate gap. In other words, it determines the maximum yield latency. 
Similarly, i puts a bound on how high the instantaneous rate can go when recovering the rate gap. In other words, it determines the minimum yield latency. We reduce the latency by increasing concurrency, therefore we can say that it puts an upper bound on concurrency.If the h; is 0 or negative the stream never yields a value. If the j/ is 0 or negative we do not attempt to recover.gstreamlyThe lower rate limithstreamly"The target rate we want to achieveistreamlyThe upper rate limitjstreamlyMaximum slack from the goalkstreamlyVAn SVar or a Stream Var is a conduit to the output from multiple streams running concurrently and asynchronously. An SVar can be thought of as an asynchronous IO handle. We can write any number of streams to an SVar in a non-blocking manner and then read them back at any time at any pace. The SVar would run the streams asynchronously and accumulate results. An SVar may not really execute the stream completely and accumulate all the results. However, it ensures that the reader can read the results at whatever paces it wants to read. The SVar monitors and adapts to the consumer's pace.An SVar is a mini scheduler, it has an associated workLoop that holds the stream tasks to be picked and run by a pool of worker threads. It has an associated output queue where the output stream elements are placed by the worker threads. A outputDoorBell is used by the worker threads to intimate the consumer thread about availability of new results in the output queue. More workers are added to the SVar by  fromStreamVar" on demand if the output produced is not keeping pace with the consumer. On bounded SVars, workers block on the output queue to provide throttling of the producer when the consumer is not pulling fast enough. The number of workers may even get reduced depending on the consuming pace.{New work is enqueued either at the time of creation of the SVar or as a result of executing the parallel combinators i.e. <| and <|>< when the already enqueued computations get evaluated. See joinStreamVarAsync.pstreamlyhIdentify the type of the SVar. Two computations using the same style can be scheduled on the same SVar.ustreamly=Sorting out-of-turn outputs in a heap for Ahead style streamsystreamly7Events that a child thread may send to a parent thread.streamly0Adapt the stream state from one type to another.streamlysFork a thread that is automatically killed as soon as the reference to the returned threadId is garbage collected.streamlyThis function is used by the producer threads to queue output for the consumer thread to consume. Returns whether the queue has more space.streamly}This is safe even if we are adding more threads concurrently because if a child thread is adding another thread then anyway = will not be empty.streamly<In contrast to pushWorker which always happens only from the consumer thread, a pushWorkerPar can happen concurrently from multiple threads on the producer side. So we need to use a thread safe modification of workerThreads. Alternatively, we can use a CreateThread event to avoid using a CAS based modification.streamly]This is a magic number and it is overloaded, and used at several places to achieve batching: If we have to sleep to slowdown this is the minimum period that we accumulate before we sleep. Also, workers do not stop until this much sleep time is accumulated.hCollected latencies are computed and transferred to measured latency after a minimum of this period.streamly`Another magic number! 
When we have to start more workers to cover up a number of yields that we are lagging by then we cannot start one worker for each yield because that may be a very big number and if the latency of the workers is low these number of yields could be very high. We assume that we run each extra worker for at least this much time.streamlyGet the worker latency without resetting workerPendingLatency Returns (total yield count, base time, measured latency) CAUTION! keep it in sync with collectLatencystreamlyWrite a stream to an %Q in a non-blocking manner. The stream can then be read back from the SVar using fromSVar. !"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOSPQRTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~"pqrstGHIJ%&'()*+,-./0123456789:;<=>?@ABCDEFKLM#$~ !klmnoYZ[\]^_`abcd|}yz{uvwxefghijNOSPQRTUVWX(c) 2017 Harendra KumarBSD3streamly@composewell.com experimentalGHCNone%,/=>?@AHSXgqDstreamly&A monadic continuation, it is a function that yields a value of type "a" and calls the argument (a -> m r) as a continuation with that value. We can also think of it as a callback with a handler (a -> m r). Category theorists call it a codensity type, a special type of right kan extension.streamly7A terminal function that has no continuation to follow.streamlySame as .streamlyDClass of types that can represent a stream of elements of some type a in some monad m.streamly_Constructs a stream by adding a monadic action at the head of an existing stream. For example: M> toList $ getLine `consM` getLine `consM` nil hello world ["hello","world"] Concurrent (do not use  parallely to construct infinite streams)streamlyOperator equivalent of . We can read it as "parallel colon" to remember that | comes before :. C> toList $ getLine |: getLine |: nil hello world ["hello","world"]  let delay = threadDelay 1000000 >> print 1 drain $ serially $ delay |: delay |: delay |: nil drain $ parallely $ delay |: delay |: delay |: nil Concurrent (do not use  parallely to construct infinite streams)streamly The type  Stream m a/ represents a monadic stream of values of type a% constructed using actions in monad ma. It uses stop, singleton and yield continuations equivalent to the following direct style type: <data Stream m a = Stop | Singleton a | Yield a (Stream m a) CTo facilitate parallel composition we maintain a local state in an %W that is shared across and is used for synchronization of the streams being composed.The singleton case can be expressed in terms of stop and yield but we have it as a separate case to optimize composition operations for streams with single element. We build singleton streams in the implementation of $ for Applicative and Monad, and in  for MonadTrans.mXXX remove the Stream type parameter from State as it is always constant. We can remove it from SVar as wellstreamlyAAdapt any specific stream type to any other specific stream type.streamlyBuild a stream from an %Q, a stop continuation, a singleton stream continuation and a yield continuation.streamly*Make an empty stream from a stop function.streamly.Make a singleton stream from a yield function.streamly/Add a yield function at the head of the stream.streamlyuConstruct a stream by adding a pure value at the head of an existing stream. For serial streams this is the same as (return a) `consM` rM but more efficient. For concurrent streams this is not concurrent whereas  is concurrent. For example: 2> toList $ 1 `cons` 2 `cons` 3 `cons` nil [1,2,3] streamlyOperator equivalent of . &> toList $ 1 .: 2 .: 3 .: nil [1,2,3] streamlyAn empty stream. 
> toList nil [] streamly(An empty stream producing a side effect. '> toList (nilM (print "nil")) "nil" [] InternalstreamlyFold a stream by providing an SVar, a stop continuation, a singleton continuation and a yield continuation. The stream would share the current SVar passed via the State.streamlyFold a stream by providing a State, stop continuation, a singleton continuation and a yield continuation. The stream will not use the SVar passed via State.streamly The function fw decides how to reconstruct the stream. We could reconstruct using a shared state (SVar) or without sharing the state.streamly;Fold sharing the SVar state within the reconstructed streamstreamly(Lazy right associative fold to a stream.streamlyLike / but shares the SVar state across computations.streamly-Lazy right fold with a monadic step function.streamlyPolymorphic version of the  operation  of SerialT. Appends two streams sequentially, yielding all elements from the first stream, and then all elements from the second stream.streamlyDetach a stream from an SVarstreamly Perform a  using a specified concat strategy. The first argument specifies a merge or concat function that is used to merge the streams generated by the map function. For example, the concat function could be , parallel, async, ahead% or any other zip or merge function.**5555(c) 2017 Harendra KumarBSD3streamly@composewell.com experimentalGHCNone ,/=>?@ASXgstreamlySame as yieldM streamly .repeatM = fix . cons repeatM = cycle1 . yield 9Generate an infinite stream by repeating a monadic value.Internalstreamly fromFoldable =    Construct a stream from a  containing pure values:streamlyLazy right associative fold.streamly9Right associative fold to an arbitrary transformer monad.streamlyStrict left fold with an extraction function. Like the standard strict left fold, but applies a user supplied extraction function (the third argument) to the folded value at the end. This is designed to work with the foldl library. The suffix x is a mnemonic for extraction.JNote that the accumulator is always evaluated including the initial value.streamlyStrict left associative fold.streamlyLike foldx#, but with a monadic step function.streamlyLike " but with a monadic step function.streamlyLazy left fold to a stream.streamly1Lazy left fold to an arbitrary transformer monad.streamly >drain = foldl' (\_ _ -> ()) () drain = mapM_ (\_ -> return ())!streamlyIterate a lazy function f of the shape `m a -> t m a` until it gets fully defined i.e. becomes independent of its argument action, then return the resulting value of the function (`t m a`).QIt can be used to construct a stream that uses a cyclic definition. 
For example: import Streamly.Internal.Prelude as S import System.IO.Unsafe (unsafeInterleaveIO) main = do S.mapM_ print $ S.mfix $ x -> do a <- S.fromList [1,2] b <- S.fromListM [return 3, unsafeInterleaveIO (fmap fst x)] return (a, b) Note that the function f2 must be lazy in its argument, that's why we use unsafeInterleaveIO because IO monad is strict.'streamly/Extract the last element of the stream, if any.1streamly[Apply a monadic action to each element of the stream and discard the output of the action.Cstreamly7Zip two streams serially using a pure zipping function.Dstreamly:Zip two streams serially using a monadic zipping function.f      !"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHf      "#$%&'()*+0-./,123465789:;<=>?@ABCDFEGH!(c) 2017 Harendra KumarBSD3streamly@composewell.com experimentalGHCNone>oIstreamlyPull a stream from an SVar.KstreamlyWrite a stream to an %Q in a non-blocking manner. The stream can then be read back from the SVar using J.LstreamlyPull a stream from an SVar.IJKLMNJILMKNJ(c) 2019 Composewell Technologies (c) 2013 Gabriel GonzalezBSD3streamly@composewell.com experimentalGHCNone>EXPstreamlyFold   step   inject   extractQstreamly>Represents a left fold over an input stream of values of type a to a single value of type b in  m.$The fold uses an intermediate state s as accumulator. The step function updates the state and returns the new updated state. When the fold is done the final result of the fold is extracted from the intermediate state representation using the extract function.RstreamlyFold   step   initial   extractSstreamlyConvert more general type O into a simpler type QTstreamlyEBuffers the input stream to a list in the reverse order of the input.Warning!d working on large lists accumulated as buffers in memory could be very inefficient, consider using Streamly.Array instead.Ustreamly (lmap f fold) maps the function f on the input of the fold.?S.fold (FL.lmap (\x -> x * x) FL.sum) (S.enumerateFromTo 1 100)338350Vstreamly(lmapM f fold) maps the monadic function f on the input of the fold.Wstreamly2Include only those elements that pass a predicate.%S.fold (lfilter (> 5) FL.sum) [1..10]40XstreamlyLike W but with a monadic predicate.Ystreamly(Transform a fold from a pure input to a  input, consuming only  values.Zstreamly Take first n/ elements from the stream and discard the rest.[streamly@Takes elements from the input as long as the predicate succeeds.\streamly#Modify the fold such that when the fold is done, instead of returning the accumulator, it returns a fold. The returned fold starts from where we left i.e. it uses the last accumulator value as the initial value of the accumulator. Thus we can resume the fold later and feed it more input. > do more <- S.fold (FL.duplicate FL.sum) (S.enumerateFromTo 1 10) evenMore <- S.fold (FL.duplicate more) (S.enumerateFromTo 11 20) S.fold evenMore (S.enumerateFromTo 21 30) 465]streamly}Run the initialization effect of a fold. The returned fold would use the value returned by this effect as its initial value.^streamly[Run one step of a fold and store the accumulator as an initial value in the returned fold._streamlyVFor every n input items, apply the first fold and supply the result to the next fold.astreamlypGroup the input stream into windows of n second each and then fold each group using the provided fold function.For example, we can copy and distribute a stream to multiple folds where each fold can group the input differently e.g. 
by one second, one minute and one hour windows respectively and fold each resulting stream of folds. E -----Fold m a b----|-Fold n a c-|-Fold n a c-|-...-|----Fold m a c bstreamly&Combines the fold outputs using their  instances.cstreamly Combines the fold outputs (type b) using their  instances.dstreamly Combines the fold outputs (type b) using their  instances.estreamly,Combines the outputs of the folds (the type b) using their  instances.fstreamly,Combines the outputs of the folds (the type b) using their  instances.gstreamlyThe fold resulting from i distributes its input to both the argument folds and combines their output using the supplied function.hstreamly4Maps a function on the output of the fold (the type b).OPQRSTUVWXYZ[\]^_`aQROPSTUVWXYZ[a_`\]^G(c) 2018 Harendra Kumar (c) Roman Leshchinskiy 2008-2010BSD3streamly@composewell.com experimentalGHCNone %,=>?@AESXgYnstreamlypA stream consists of a step function that generates the next step given a current state, and the current state.pstreamlyA stream is a succession of ps. A q< produces a single value and the next state of the stream. s3 indicates there are no more values in the stream.xstreamlyMap a monadic function over a n|streamlyCreate a singleton n from a pure value.streamlyCreate a singleton n from a monadic action.streamly#Convert a list of pure values to a nstreamly%Compare two streams lexicographically%ijklmntopsqruvwxyz{|}~&psqrntotuvwyx|{z}~ijklm!(c) 2019 Composewell TechnologiesBSD3streamly@composewell.com experimentalGHCNonel streamly Convert a P to a Q`. When you want to compose sinks and folds together, upgrade a sink to a fold before composing.streamly8Distribute one copy each of the input to both the sinks. T |-------Sink m a ---stream m a---| |-------Sink m a > let pr x = Sink.drainM (putStrLn . ((x ++ " ") ++) . show) > sink (Sink.tee (pr "L") (pr "R")) (S.enumerateFromTo 1 2) L 1 R 1 L 2 R 2 streamly?Distribute copies of the input to all the sinks in a container.  |-------Sink m a ---stream m a---| |-------Sink m a | ...  > let pr x = Sink.drainM (putStrLn . ((x ++ " ") ++) . show) > sink (Sink.distribute [(pr "L"), (pr "R")]) (S.enumerateFromTo 1 2) L 1 R 1 L 2 R 2 4This is the consumer side dual of the producer side  operation.streamlyDemultiplex to multiple consumers without collecting the results. Useful to run different effectful computations depending on the value of the stream elements, for example handling network packets of different types using different handlers.  |-------Sink m a -----stream m a-----Map-----| |-------Sink m a | ... > let pr x = Sink.drainM (putStrLn . ((x ++ " ") ++) . show) > let table = Data.Map.fromList [(1, pr "One"), (2, pr "Two")] in Sink.sink (Sink.demux id table) (S.enumerateFromTo 1 100) One 1 Two 2 streamlyxSplit elements in the input stream into two parts using a monadic unzip function, direct each part to a different sink. s |-------Sink m b -----Stream m a----(b,c)--| |-------Sink m c > let pr x = Sink.drainM (putStrLn . ((x ++ " ") ++) . show) in Sink.sink (Sink.unzip return (pr "L") (pr "R")) (S.yield (1,2)) L 1 R 2 streamlySame as  but with a pure unzip function.streamly&Map a pure function on the input of a P.streamly)Map a monadic function on the input of a P.streamlyFilter the input of a P! 
using a pure predicate function.streamlyFilter the input of a P$ using a monadic predicate function.streamly@Drain all input, running the effects and discarding the results.streamly drainM f = lmapM f drain<Drain all input after passing it through a monadic function.PQPQJ(c) 2019 Composewell Technologies (c) 2013 Gabriel GonzalezBSD3streamly@composewell.com experimentalGHCNone "#>ESX.@streamlycMake a fold using a pure step function, a pure initial state and a pure state extraction function.InternalstreamlyMake a fold using a pure step function and a pure initial state. The final state extracted is identical to the intermediate state.Internalstreamly`Make a fold with an effectful step function and initial state, and a state extraction function.  mkFold = FoldWe can just use Q% but it is provided for completeness.InternalstreamlyMake a fold with an effectful step function and initial state. The final state extracted is identical to the intermediate state.Internalstreamly%Change the underlying monad of a foldInternalstreamlyAdapt a pure fold to any monad (generally = hoist (return . runIdentity)Internalstreamly4Flatten the monadic output of a fold to pure output.streamly/Map a monadic function on the output of a fold.streamlyApply a transformation on a Q using a .streamly _Fold1 step returns a new Q using just a step function that has the same type for the accumulator and the element. The result type is the accumulator type wrapped in 1. The initial accumulator is retrieved from the , the result is None for empty containers.streamlyRA fold that drains all its input, running the effects and discarding the results.streamly drainBy f = lmapM f drainlDrain all input after passing it through a monadic function. This is the dual of mapM_ on stream producers.streamly5Extract the last element of the input stream, if any.streamlyLike , except with a more general  return valuestreamly)Determine the length of the input stream.streamlyVDetermine the sum of all elements of a stream of numbers. Returns additive identity (0a) when the stream is empty. Note that this is not numerically stable for floating point numbers.streamly`Determine the product of all elements of a stream of numbers. Returns multiplicative identity (1) when the stream is empty.streamlyRDetermine the maximum element in a stream using the supplied comparison function.streamly  maximum =  compare *Determine the maximum element in a stream.streamlyJComputes the minimum element with respect to the given comparison functionstreamlyRDetermine the minimum element in a stream using the supplied comparison function.streamlyRCompute a numerically stable arithmetic mean of all elements in the input stream.streamlyZCompute a numerically stable (population) variance over all elements in the input stream.streamlydCompute a numerically stable (population) standard deviation over all elements in the input stream.streamly Compute an  sized polynomial rolling hash IH = salt * k ^ n + c1 * k ^ (n - 1) + c2 * k ^ (n - 2) + ... + cn * k ^ 0Where c1, c2, cn* are the elements in the input stream and k is a constant.>This hash is often used in Rabin-Karp string search algorithm.See *https://en.wikipedia.org/wiki/Rolling_hashstreamly-A default salt used in the implementation of .streamly Compute an + sized polynomial rolling hash of a stream. -rollingHash = rollingHashWithSalt defaultSaltstreamly Compute an D sized polynomial rolling hash of the first n elements of a stream. 
'rollingHashFirstN = ltake n rollingHashstreamly;Fold an input stream consisting of monoidal elements using  and . 6S.fold FL.mconcat (S.map Sum $ S.enumerateFromTo 1 10)streamly foldMap f = map f mconcatNMake a fold from a pure function that folds the output of the function using  and . 0S.fold (FL.foldMap Sum) $ S.enumerateFromTo 1 10streamly foldMapM f = mapM f mconcatQMake a fold from a monadic function that folds the output of the function using  and . <S.fold (FL.foldMapM (return . Sum)) $ S.enumerateFromTo 1 10streamly!Folds the input stream to a list.Warning!d working on large lists accumulated as buffers in memory could be very inefficient, consider using Streamly.Memory.Array instead.streamlyfA fold that drains the first n elements of its input, running the effects and discarding the results.streamly|A fold that drains elements of its input as long as the predicate succeeds, running the effects and discarding the results.streamlyLike , except with a more general  argumentstreamly&Lookup the element at the given index.streamly0Extract the first element of the stream, if any.streamly=Returns the first element that satisfies the given predicate.streamly!In a stream of (key-value) pairs (a, b), return the value b9 of the first pair where the key equals the given value a.streamlyConvert strict  to lazy streamly;Returns the first index that satisfies the given predicate.streamlyCReturns the first index where a given value is found in the stream.streamlyReturn  if the input stream is empty.streamly any p = lmap p or | Returns : if any of the elements of a stream satisfies a predicate.streamlyReturn / if the given element is present in the stream.streamly all p = lmap p and | Returns 1 if all elements of a stream satisfy a predicate.streamlyReturns 3 if the given element is not present in the stream.streamlyReturns  if all elements are ,  otherwisestreamlyReturns  if any element is ,  otherwisestreamlyCDistribute one copy of the stream to each fold and zip the results.  |-------Fold m a b--------| ---stream m a---| |---m (b,c) |-------Fold m a c--------| >S.fold (FL.tee FL.sum FL.length) (S.enumerateFromTo 1.0 100.0) (5050.0,100)streamlyWDistribute one copy of the stream to each fold and collect the results in a container.  |-------Fold m a b--------| ---stream m a---| |---m [b] |-------Fold m a b--------| | | ... BS.fold (FL.distribute [FL.sum, FL.length]) (S.enumerateFromTo 1 5)[15,5]4This is the consumer side dual of the producer side  operation.streamlyLike @ but for folds that return (), this can be more efficient than ' as it does not need to maintain state.streamly,Partition the input over two folds using an  partitioning predicate.  
|-------Fold b x--------| -----stream m a --> (Either b c)----| |----(x,y) |-------Fold c y--------| #Send input to either fold randomly:import System.Random (randomIO)Frandomly a = randomIO >>= \x -> return $ if x then Left a else Right aOS.fold (FL.partitionByM randomly FL.length FL.length) (S.enumerateFromTo 1 100)(59,41)3Send input to the two folds in a proportion of 2:1: zimport Data.IORef (newIORef, readIORef, writeIORef) proportionately m n = do ref <- newIORef $ cycle $ concat [replicate m Left, replicate n Right] return $ \a -> do r <- readIORef ref writeIORef ref $ tail r return $ head r a main = do f <- proportionately 2 1 r <- S.fold (FL.partitionByM f FL.length FL.length) (S.enumerateFromTo (1 :: Int) 100) print r  (67,33) 4This is the consumer side dual of the producer side mergeBy operation.streamlySame as $ but with a pure partition function.'Count even and odd numbers in a stream: >>> let f = FL.partitionBy (\n -> if even n then Left n else Right n) (fmap (("Even " ++) . show) FL.length) (fmap (("Odd " ++) . show) FL.length) in S.fold f (S.enumerateFromTo 1 100) ("Even 50","Odd 50") streamlyBCompose two folds such that the combined fold accepts a stream of  and routes the  values to the first fold and  values to the second fold. partition = partitionBy idstreamlySplit the input stream based on a key field and fold each split using a specific fold collecting the results in a map from the keys to the results. Useful for cases like protocol handlers to handle different type of packets using different handlers.  |-------Fold m a b -----stream m a-----Map-----| |-------Fold m a b | ... streamlyFold a stream of key value pairs using a map of specific folds for each key into a map from keys to the results of fold outputs of the corresponding values. > let table = Data.Map.fromList [("SUM", FL.sum), ("PRODUCT", FL.product)] input = S.fromList [("SUM",1),("PRODUCT",2),("SUM",3),("PRODUCT",4)] in S.fold (FL.demux table) input fromList [(PRODUCT,8),(SUM,4)] streamlySplit the input stream based on a key field and fold each split using a specific fold without collecting the results. Useful for cases like protocol handlers to handle different type of packets.  |-------Fold m a () -----stream m a-----Map-----| |-------Fold m a () | ... streamlyGiven a stream of key value pairs and a map from keys to folds, fold the values for each key using the corresponding folds, discarding the outputs. > let prn = FL.drainBy print > let table = Data.Map.fromList [("ONE", prn), ("TWO", prn)] input = S.fromList [("ONE",1),("TWO",2)] in S.fold (FL.demux_ table) input One 1 Two 2 streamlySplit the input stream based on a key field and fold each split using the given fold. Useful for map/reduce, bucketizing the input in different bins or for generating histograms. > let input = S.fromList [("ONE",1),("ONE",1.1),("TWO",2), ("TWO",2.2)] in S.fold (FL.classify FL.toList) input fromList [("ONE",[1.1,1.0]),("TWO",[2.2,2.0])] streamlyGiven an input stream of key value pairs and a fold for values, fold all the values belonging to each key. Useful for map/reduce, bucketizing the input in different bins or for generating histograms. 
> let input = S.fromList [("ONE",1),("ONE",1.1),("TWO",2), ("TWO",2.2)] in S.fold (FL.classify FL.toList) input fromList [("ONE",[1.1,1.0]),("TWO",[2.2,2.0])] streamlyLike & but with a monadic splitter function.streamlySplit elements in the input stream into two parts using a pure splitter function, direct each part to a different fold and zip the results.streamlyOSend the elements of tuples in a stream of tuples through two different folds.  |-------Fold m a x--------| ---------stream of (a,b)--| |----m (x,y) |-------Fold m b y--------| 4This is the consumer side dual of the producer side  operation.HQRTUVWXYZ[\]^_aHQRTUVWXYZ[a_]^\7!(c) 2019 Composewell TechnologiesBSD3streamly@composewell.com experimentalGHCNone%Q%Q!(c) 2019 Composewell TechnologiesBSD3streamly@composewell.com experimentalGHCNone>EXstreamlyAn  Unfold m a b. is a generator of a stream of values of type b from a seed of type a in  m.streamly Unfold step inject8!(c) 2018 Composewell TechnologiesBSD3streamly@composewell.com experimentalGHCSafe>Y!(c) 2019 Composewell TechnologiesBSD3streamly@composewell.com experimentalGHCNone,EFXstreamly;A monad that allows mutable operations using a state token.streamlyA  holds a single  value.streamlyCreate a new mutable variable.streamly$Write a value to a mutable variable.streamlyRead a value from a variable.streamlyQModify the value of a mutable variable using a function with strict application.9!(c) 2019 Composewell TechnologiesBSD3streamly@composewell.com experimentalGHCNone >EHVX!(c) 2019 Composewell TechnologiesBSD3streamly@composewell.com experimentalGHCNone"#>EFHVXstreamly;allocate a new array using the provided allocator function.streamlyAllocate a new array aligned to the specified alignmend and using unmanaged pinned memory. The memory will not be automatically freed by GHC. This could be useful in allocate once global data structures. Use carefully as incorrect use can lead to memory leak.streamly Allocate an array that can hold count3 items. The memory of the array is uninitialized.Note that this is internal routine, the reference to this array cannot be given out until the array has been written to and frozen.streamlyZAllocate an Array of the given size and run an IO action passing the array start pointer.streamly{Reallocate the array to the specified size in bytes. If the size is less than the original array the array gets truncated.streamly$Remove the free space from an Array.streamlyBReturn element at the specified index without checking the bounds.9Unsafe because it does not check the bounds of the array.streamlyBReturn element at the specified index without checking the bounds.streamlyO(1)" Get the byte length of the array.streamlyO(1)G Get the length of the array i.e. the number of elements in the array.streamlywriteN n folds a maximum of n' elements from the input stream to an .streamlywriteNAligned alignment n folds a maximum of n' elements from the input stream to an  aligned to the given size.InternalstreamlywriteNAlignedUnmanaged n folds a maximum of n' elements from the input stream to an  aligned to the given size and using unmanaged memory. This could be useful to allocate memory that we need to allocate only once in the lifetime of the program.InternalstreamlyLike n but does not check the array bounds when writing. The fold driver must not call the step function more than n times otherwise it will corrupt the memory and crash. 
This function exists mainly because any conditional in the step function blocks fusion causing 10x performance slowdown.streamly'Fold the whole input to a single array.-Caution! Do not use this on infinite streams.streamlyLike  but the array memory is aligned according to the specified alignment size. This could be useful when we have specific alignment, for example, cache aligned arrays for lookup table etc.-Caution! Do not use this on infinite streams.streamlyfromStreamArraysOf n stream< groups the input stream into a stream of arrays of size n. streamly Convert an  into a list. streamly Create an  from the first N elements of a list. The array is allocated to size N, if the list terminates before N elements then the array may hold less than N elements. streamly Create an . from a list. The list must be of finite size.streamly0GHC memory management allocation header overheadstreamlyDefault maximum buffer size in bytes, for reading from and writing to IO devices, the value is 32KB minus GHC allocation overhead, which is a few bytes, so that the actual allocation is 32KB.streamlyfCoalesce adjacent arrays in incoming stream to form bigger arrays of a maximum specified size. Note that if a single array is bigger than the specified size we do not split it to fit. When we coalesce multiple arrays if the size would exceed the specified size we do not coalesce therefore the actual array size may be less than the specified chunk size.streamly!groupIOVecsOf maxBytes maxEntries= groups arrays in the incoming stream to create a stream of  arrays with a maximum of maxBytes' bytes in each array and a maximum of  maxEntries entries in each array.streamlyWCreate two slices of an array without copying the original array. The specified index i( is the first index of the second slice.streamlySplit a stream of arrays on a given separator byte, dropping the separator and coalescing all the arrays between two separators into a single array.7     7     :!(c) 2019 Composewell TechnologiesBSD3streamly@composewell.com experimentalGHCNone"#EX streamly A ring buffer is a mutable array of fixed size. Initially the array is empty, with ringStart pointing at the start of allocated memory. We call the next location to be written in the ring as ringHead. Initially ringHead == ringStart. When the first item is added, ringHead points to ringStart + sizeof item. When the buffer becomes full ringHead would wrap around to ringStart. When the buffer is full, ringHead always points at the oldest item in the ring and the newest item added always overwrites the oldest item.When using it we should keep in mind that a ringBuffer is a mutable data structure. We should not leak out references to it for immutable use.streamlyCreate a new ringbuffer and return the ring buffer and the ringHead. Returns the ring and the ringHead, the ringHead is same as ringStart.streamlyLAdvance the ringHead by 1 item, wrap around if we hit the end of the array.streamlyInsert an item at the head of the ring, when the ring is full this replaces the oldest item in the ring with the new item. This is unsafe beause ringHead supplied is not verified to be within the Ring. Also, the ringStart foreignPtr must be guaranteed to be alive by the caller.streamlyLike  but compares only N bytes instead of entire length of the ring buffer. This is unsafe because the ringHead Ptr is not checked to be in range.streamlyByte compare the entire length of ringBuffer with the given array, starting at the supplied ringHead pointer. 
Returns true if the Array and the ringBuffer have identical contents.This is unsafe because the ringHead Ptr is not checked to be in range. The supplied array must be equal to or bigger than the ringBuffer, ARRAY BOUNDS ARE NOT CHECKED.streamly8Fold the buffer starting from ringStart up to the given  using a pure step function. This is useful to fold the items in the ring when the ring is not full. The supplied pointer is usually the end of the ring.>Unsafe because the supplied Ptr is not checked to be in range.streamly5Like unsafeFoldRing but with a monadic step function.streamlyFold the entire length of a ring buffer starting at the supplied ringHead pointer. Assuming the supplied ringHead pointer points to the oldest item, this would fold the ring starting from the oldest item to the newest item in the ring. z(c) 2018 Harendra Kumar (c) Roman Leshchinskiy 2008-2010 (c) The University of Glasgow, 2009BSD3streamly@composewell.com experimentalGHCNone"#%,=>?@AEFSXgkG"streamlyInterposeFirstYield s1 i1 streamlyInterposeFirstBuf s1 i1 streamlyICALFirstYield s1 s2 i1 streamlyICALFirstBuf s1 s2 i1 i2 streamlyInterposeSuffixFirstYield s1 i11streamly An empty n.2streamly An empty n with a side effect.3streamly#Can fuse but has O(n^2) complexity.7streamly Convert an  into a n by supplying it a seed.>streamlysCan be used to enumerate unbounded integrals. This does not check for overflow or underflow for bounded integrals.Lstreamly'Convert a list of monadic actions to a nYstreamly1Run a streaming composition, discard the results.wstreamly)Performs infix separator style splitting.xstreamly)Performs infix separator style splitting.|streamly1Execute a monadic action for each element of the n}streamlyconcatMapU unfold stream uses unfolds to map the input stream elements to streams and then flattens the generated streams into a single output stream.streamlyInterleave streams (full streams, not the elements) unfolded from two input streams and concat. Stop when the first stream stops. If the second stream ends before the first one then first stream still keeps running alone without any interleaving with the second stream. a1, a2, ... an[b1, b2 ...] => [streamA1, streamA2, ... streamAn] [streamB1, streamB2, ...] => [streamA1, streamB1, streamA2...StreamAn, streamBn] => [a11, a12, ...a1j, b11, b12, ...b1k, a21, a22, ...]streamlyInterleave streams (full streams, not the elements) unfolded from two input streams and concat. Stop when the first stream stops. If the second stream ends before the first one then first stream still keeps running alone without any interleaving with the second stream. a1, a2, ... an[b1, b2 ...] => [streamA1, streamA2, ... streamAn] [streamB1, streamB2, ...] => [streamA1, streamB1, streamA2...StreamAn, streamBn] => [a11, a12, ...a1j, b11, b12, ...b1k, a21, a22, ...]streamlyThe most general bracketing and exception combinator. All other combinators can be expressed in terms of this combinator. This can also be used for cases which are not covered by the standard combinators.InternalstreamlyCreate an IORef holding a finalizer that is called automatically when the IORef is garbage collected. The IORef can be written to with a  $ value to deactivate the finalizer.streamlyTRun the finalizer stored in an IORef and deactivate it so that it is run only once.streamly?Deactivate the finalizer stored in an IORef without running it.streamlyLike gbracket but also uses a finalizer to make sure when the stream is garbage collected we run the finalizing action. 
This requires a MonadIO and MonadBaseControl IO constraint.| The most general bracketing and exception combinator. All other combinators can be expressed in terms of this combinator. This can also be used for cases which are not covered by the standard combinators.Internalstreamly=Run a side effect before the stream yields its first element.streamly5Run a side effect whenever the stream stops normally.streamlypRun a side effect whenever the stream aborts due to an exception. The exception is not caught, simply rethrown.streamlyRun the first action before the stream starts and remember its output, generate a stream using the output, run the second action providing the remembered value as an argument whenever the stream ends normally or due to an exception.streamlyTRun a side effect whenever the stream stops normally or aborts due to an exception.streamlyWhen evaluating a stream if an exception occurs, stream evaluation aborts and the specified exception handler is run with the exception as argument.streamlyintersperse after every n itemsstreamlyZFold the supplied stream to the SVar asynchronously using Parallel concurrency style. {- INLINE [1] toSVarParallel -}streamly#Make the stream producer and consumer run concurrently by introducing a buffer between them. The producer thread evaluates the input stream until the buffer fills, it blocks if the buffer is full until there is space in the buffer. The consumer consumes the stream lazily from the buffer.InternalstreamlymCreate an SVar with a fold consumer that will fold any elements sent to it using the supplied fold function.streamly Take last n/ elements from the stream and discard the rest.streamlybeforestreamlytry (exception handling)streamlyafter, on normal stopstreamly on exceptionstreamlystream generatorstreamlybeforestreamlytry (exception handling)streamlyafter, on normal stop or GCstreamly on exceptionstreamlystream generatorntopsqruvwxyz{|}~!"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~psqrntot123465798=<IHKJ;:>@B?ACDEFG|LuwNO}~VXWmnYZ[\]^_`abcdefkhijgz{./0})*+,-~&'(!"#$%pqrstouvwxyz{|lvMPQRSTUyx!(c) 2019 Composewell TechnologiesBSD3streamly@composewell.com experimentalGHCNone "#>EPSXg!streamly,Map a function on the input argument of the .Internalstreamly+Map an action on the input argument of the .InternalstreamlyASupply the seed to an unfold closing the input end of the unfold.InternalstreamlySupply the first component of the tuple to an unfold that accepts a tuple as a seed resulting in a fold that accepts the second component of the tuple as a seed.InternalstreamlySupply the second component of the tuple to an unfold that accepts a tuple as a seed resulting in a fold that accepts the first component of the tuple as a seed.Internalstreamly Convert an  into an unfold accepting a tuple as an argument, using the argument of the original fold as the second element of tuple and discarding the first element of the tuple.Internalstreamly Convert an  into an unfold accepting a tuple as an argument, using the argument of the original fold as the first element of tuple and discarding the second element of the tuple.Internalstreamly Convert an ` that accepts a tuple as an argument into an unfold that accepts a tuple with elements swapped.Internalstreamly Compose an  and a Q . Given an  Unfold m a b and a  Fold m b c, returns a monadic action a -> m cB representing the application of the fold on the unfolded stream.InternalstreamlyConvert a stream into an &. 
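A small sketch of the resource-safety combinators described above, assuming the bracket and fromHandle combinators exported by Streamly.Prelude in streamly 0.7; "input.txt" is only a placeholder path:

import Streamly (SerialT)
import qualified Streamly.Prelude as S
import System.IO (IOMode (ReadMode), hClose, openFile)

-- The handle is closed when the stream stops normally or aborts with an
-- exception.
fileLines :: FilePath -> SerialT IO String
fileLines path = S.bracket (openFile path ReadMode) hClose S.fromHandle

main :: IO ()
main = S.mapM_ putStrLn (fileLines "input.txt")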
Note that a stream converted to an  may not be as efficient as an  in some situations.Internalstreamly=Convert a single argument stream generator function into an %. Note that a stream converted to an  may not be as efficient as an  in some situations.Internalstreamly9Convert a two argument stream generator function into an &. Note that a stream converted to an  may not be as efficient as an  in some situations.InternalstreamlySLift a monadic function into an unfold generating a nil stream with a side effect.streamly:Prepend a monadic single element generator function to an .InternalstreamlyCLift a monadic effect into an unfold generating a singleton stream.streamlyELift a monadic function into an unfold generating a singleton stream.streamly_Identity unfold. Generates a singleton stream with the seed as the only element in the stream. identity = singleton returnstreamly(Generates a stream replicating the seed n times.streamly0Generates an infinite stream repeating the seed.streamly#Convert a list of pure values to a nstreamly&Convert a list of monadic values to a nstreamlysCan be used to enumerate unbounded integrals. This does not check for overflow or underflow for bounded integrals.streamlyThe most general bracketing and exception combinator. All other combinators can be expressed in terms of this combinator. This can also be used for cases which are not covered by the standard combinators.InternalstreamlyThe most general bracketing and exception combinator. All other combinators can be expressed in terms of this combinator. This can also be used for cases which are not covered by the standard combinators.Internalstreamly=Run a side effect before the unfold yields its first element.Internalstreamly5Run a side effect whenever the unfold stops normally. Prefer afterIO over this as the aftert action in this combinator is not executed if the unfold is partially evaluated lazily and then garbage collected.InternalstreamlynRun a side effect whenever the unfold stops normally or is garbage collected after a partial lazy evaluation.InternalstreamlyARun a side effect whenever the unfold aborts due to an exception.InternalstreamlyTRun a side effect whenever the unfold stops normally or aborts due to an exception."Prefer finallyIO over this as the aftert action in this combinator is not executed if the unfold is partially evaluated lazily and then garbage collected.InternalstreamlyRun a side effect whenever the unfold stops normally, aborts due to an exception or if it is garbage collected after a partial lazy evaluation.Internalstreamlybracket before after between runs the before/ action and then unfolds its output using the between unfold. When the between4 unfold is done or if an exception occurs then the after# action is run with the output of before as argument."Prefer bracketIO over this as the aftert action in this combinator is not executed if the unfold is partially evaluated lazily and then garbage collected.Internalstreamlybracket before after between runs the before/ action and then unfolds its output using the between unfold. When the between4 unfold is done or if an exception occurs then the after# action is run with the output of beforeq as argument. 
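A short sketch of driving an Unfold with a seed via S.unfold (streamly 0.7). Streamly.Internal.Data.Unfold is an internal module, so the exact import path and signatures are assumptions based on the documentation above:

import qualified Streamly.Prelude as S
import qualified Streamly.Internal.Data.Unfold as UF

main :: IO ()
main = do
    -- fromList unfolds its seed (a list) into a stream.
    S.toList (S.unfold UF.fromList [1, 2, 3 :: Int]) >>= print   -- [1,2,3]
    -- replicateM repeats its seed action; here the action runs three times.
    S.drain (S.unfold (UF.replicateM 3) (putStrLn "hello"))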
The after action is also executed if the unfold is paritally evaluated and then garbage collected.InternalstreamlyzWhen unfolding if an exception occurs, unfold the exception using the exception unfold supplied as the first argument to .Internalstreamlybeforestreamlytry (exception handling)streamlyafter, on normal stopstreamly on exceptionstreamly unfold to runstreamlybeforestreamlytry (exception handling)streamlyafter, on normal stop, or GCstreamly on exceptionstreamly unfold to run00;!(c) 2019 Composewell TechnologiesBSD3streamly@composewell.com experimentalGHCNone>ESXg(c) 2017 Harendra KumarBSD3streamly@composewell.com experimentalGHCNone­ streamly  fromList =    LConstruct a stream from a list of pure values. This is more efficient than  for serial streams.streamly5Convert a stream into a list in the underlying monad.streamlyLike  #, but with a monadic step function. streamlyStrict left fold with an extraction function. Like the standard strict left fold, but applies a user supplied extraction function (the third argument) to the folded value at the end. This is designed to work with the foldl library. The suffix x is a mnemonic for extraction. streamlyStrict left associative fold. streamly&Lazy left fold to a transformer monad.!For example, to reverse a stream: OS.toList $ S.foldlT (flip S.cons) S.nil $ (S.fromList [1..5] :: SerialT IO Int)streamly3Strict left scan with an extraction function. Like scanl'y, but applies a user supplied extraction function (the third argument) at each step. This is designed to work with the foldl library. The suffix x is a mnemonic for extraction.streamly Compare two streams for equalitystreamlyCompare two streamsstreamly A variant of <= that allows you to fold a @ container of streams using the specified stream sum operation.  foldWith async $ map return [1..3]Equivalent to:  foldWith f = S.foldMapWith f id Since: 0.1.0 (Streamly)streamly A variant of 9 that allows you to map a monadic streaming action on a H container and then fold it using the specified stream merge operation.  foldMapWith async return [1..3]Equivalent to: =foldMapWith f g xs = S.concatMapWith f g (S.fromFoldable xs) Since: 0.1.0 (Streamly)streamlyLike d but with the last two arguments reversed i.e. the monadic streaming function is the last argument.Equivalent to: !forEachWith = flip S.foldMapWith Since: 0.1.0 (Streamly)           (c) 2017 Harendra KumarBSD3streamly@composewell.com experimentalGHCNone,/=>?@AHMVݸ streamly>An IO stream whose applicative instance zips streams wAsyncly.streamlyLike P but zips in parallel, it generates all the elements to be zipped concurrently. main = (toList . $ $ (,,) <$> s1 <*> s2 <*> s3) >>= print where s1 = fromFoldable [1, 2] s2 = fromFoldable [3, 4] s3 = fromFoldable [5, 6]  [(1,3,5),(2,4,6)] The 6 instance of this type works the same way as that of SerialT.streamly>An IO stream whose applicative instance zips streams serially.streamlystreamlyThe applicative instance of } zips a number of streams serially i.e. it produces one element from each stream serially and then zips all those elements. main = (toList . " $ (,,) <$> s1 <*> s2 <*> s3) >>= print where s1 = fromFoldable [1, 2] s2 = fromFoldable [3, 4] s3 = fromFoldable [5, 6]  [(1,3,5),(2,4,6)] The 6 instance of this type works the same way as that of SerialT.streamlyLike & but using a monadic zipping function.streamly7Zip two streams serially using a pure zipping function. M> S.toList $ S.zipWith (+) (S.fromList [1,2,3]) (S.fromList [4,5,6]) [5,7,9]  streamlyLike V but zips concurrently i.e. 
both the streams being zipped are generated concurrently.!streamlyLike V but zips concurrently i.e. both the streams being zipped are generated concurrently."streamly(Fix the type of a polymorphic stream as .#streamlySame as ".$streamly(Fix the type of a polymorphic stream as .%streamlySame as $.  !"#$% "$! #%!(c) 2017 Harendra KumarBSD3streamly@composewell.com experimentalGHCNone,/=>?@AHMV$+:streamly;streamly5An interleaving serial IO stream of elements of type a. See <! documentation for more details.<streamlyThe  operation for <B interleaves the elements from the two streams. Therefore, when a <> b is evaluated, stream aZ is evaluated first to produce the first element of the combined stream and then stream bl is evaluated to produce the next element of the combined stream, and then we go back to evaluating stream a4 and so on. In other words, the elements of stream a- are interleaved with the elements of stream b.Note that evaluation of  a <> b <> c does not schedule a, b and c9 with equal priority. This expression is equivalent to  a <> (b <> c)$, therefore, it fairly interleaves a with the result of b <> c. For example, RS.fromList [1,2] <> S.fromList [3,4] <> S.fromList [5,6] :: WSerialT Identity Int would result in [1,3,2,5,4,6]. In other words, the leftmost stream gets the same scheduling priority as the rest of the streams taken together. The same is true for each subexpression on the right.Note that this operation cannot be used to fold a container of infinite streams as the state that it needs to maintain is proportional to the number of streams.The W in the name stands for wideR or breadth wise scheduling in contrast to the depth wise scheduling behavior of ?. !import Streamly import qualified Streamly.Prelude as S main = (S.toList . C7 $ (S.fromList [1,2]) <> (S.fromList [3,4])) >>= print   [1,3,2,4] Similarly, the o instance interleaves the iterations of the inner and the outer loop, nesting loops in a breadth first manner. main = S.drain . C^ $ do x <- return 1 <> return 2 y <- return 3 <> return 4 S.yieldM $ print (x, y) (1,3) (2,3) (1,4) (2,4) =streamly>streamly'A serial IO stream of elements of type a. See ?! documentation for more details.?streamlyThe  operation for ?< behaves like a regular append operation. Therefore, when a <> b is evaluated, stream a7 is evaluated first until it exhausts and then stream b7 is evaluated. In other words, the elements of stream b( are appended to the elements of stream aL. This operation can be used to fold an infinite lazy container of streams. !import Streamly import qualified Streamly.Prelude as S main = (S.toList . @7 $ (S.fromList [1,2]) <> (S.fromList [3,4])) >>= print   [1,2,3,4] The  instance runs the monadic continuation+ for each element of the stream, serially. main = S.drain . @; $ do x <- return 1 <> return 2 S.yieldM $ print x  1 2 ?0 nests streams serially in a depth first manner. main = S.drain . @^ $ do x <- return 1 <> return 2 y <- return 3 <> return 4 S.yieldM $ print (x, y)  (1,3) (1,4) (2,3) (2,4) We call the monadic code being run for each element of the stream a monadic continuation. In imperative paradigm we can think of this composition as nested foro loops and the monadic continuation is the body of the loop. The loop iterates for all elements of the stream.)Note that the behavior and semantics of ? 
, including  and 6 instances are exactly like Haskell lists except that ?5 can contain effectful actions while lists are pure.In the code above, the @: combinator can be omitted as the default stream type is ?.@streamly(Fix the type of a polymorphic stream as ?.Bstreamly  map = fmap Same as . 5> S.toList $ S.map (+1) $ S.fromList [1,2,3] [2,3,4] Cstreamly(Fix the type of a polymorphic stream as <.DstreamlySame as C.EstreamlyPolymorphic version of the  operation  of <. Interleaves two streams, yielding one element from each stream alternately. When one stream stops the rest of the other stream is used in the output stream.FstreamlyLike E: but stops interleaving as soon as the first stream stops.GstreamlyLike EA but stops interleaving as soon as any of the two streams stops.HstreamlySame as E.IstreamlyBuild a stream by unfolding a monadic step function starting from a seed. The step function returns the next element in the stream and the next seed value. When it is done it returns  # and the stream ends. For example, let f b = if b > 3 then return Nothing else print b >> return (Just (b, b + 1)) in drain $ unfoldrM f 0   0 1 2 3 Internal:;<=>?@ABCDEFGHI?>@<;EFGCIBA=:HDH5"!(c) 2019 Composewell TechnologiesBSD3streamly@composewell.com experimentalGHCNone"#FXA vstreamly Create an  from the first N elements of a stream. The array is allocated to size N, if the stream terminates before N elements then the array may hold less than N elements.Internalwstreamly Create an e from a stream. This is useful when we want to create a single array from a stream of unknown size. writeN@ is at least twice as efficient when the size is already known.zNote that if the input stream is too large memory allocation for the array may fail. When the stream size is not known, arraysOfY followed by processing of indvidual arrays in the resulting stream should be preferred.Internalxstreamly Convert an  into a stream.Internalystreamly Convert an  into a stream in reverse order.InternalzstreamlyUnfold an array into a stream.{streamlyUnfold an array into a stream, does not check the end of the array, the user is responsible for terminating the stream within the array bounds. For high performance application where the end condition can be determined by a terminating fold.Written in the hope that it may be faster than "read", however, in the case for which this was written, "read" proves to be faster even though the core generated with unsafeRead looks simpler.Internal|streamly null arr = length arr == 0Internal}streamly )last arr = readIndex arr (length arr - 1)Internal~streamlyO(1)8 Lookup the element at the given index, starting from 0.InternalstreamlyO(1)c Write the given element at the given index in the array. Performs in-place mutation of the array.InternalstreamlyOTransform an array into another array using a stream transformation operation.InternalstreamlyFold an array using a Q.Internalstreamly,Fold an array using a stream fold operation.Internal   vwxyz{|}~  vw xyz{|}~>!(c) 2019 Composewell TechnologiesBSD3streamly@composewell.com experimentalGHCNoneXD*   z   z#(c) 2017 Harendra KumarBSD3streamly@composewell.com experimentalGHCNoneU streamlySpecify the maximum number of threads that can be spawned concurrently for any concurrent combinator in a stream. A value of 0 resets the thread limit to default, a negative value means there is no limit. The default value is 1500.  
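The depth-first (serial) versus breadth-first (wSerial) scheduling described above, as a runnable comparison using the serial and wSerial combinators exported by the Streamly module:

import Streamly (serial, wSerial)
import qualified Streamly.Prelude as S

main :: IO ()
main = do
    S.toList (S.fromList [1,2] `serial`  S.fromList [3,4 :: Int]) >>= print   -- [1,2,3,4]
    S.toList (S.fromList [1,2] `wSerial` S.fromList [3,4 :: Int]) >>= print   -- [1,3,2,4]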
does not affect  ParallelT5 streams as they can use unbounded number of threads.When the actions in a stream are IO bound, having blocking IO calls, this option can be used to control the maximum number of in-flight IO requests. When the actions are CPU bound this option can be used to control the amount of CPU used by the stream.streamly;Specify the maximum size of the buffer for storing the results from concurrent computations. If the buffer becomes full we stop spawning more concurrent tasks until there is space in the buffer. A value of 0 resets the buffer size to default, a negative value means there is no limit. The default value is 1500.CAUTION! using an unbounded : value (i.e. a negative value) coupled with an unbounded  value is a recipe for disaster in presence of infinite streams, or very large streams. Especially, it must not be used when  is used in  ZipAsyncM streams as  in applicative zip streams generates an infinite stream causing unbounded concurrent generation with no limit on the buffer or threads.streamly&Specify the pull rate of a stream. A   value resets the rate to default which is unlimited. When the rate is specified, concurrent production may be ramped up or down automatically to achieve the specified yield rate. The specific behavior for different styles of e$ specifications is documented under eN. The effective maximum production rate achieved by a stream is governed by:The  limitThe  limit5The maximum rate that the stream producer can achieve5The maximum rate that the stream consumer can achievestreamlySame as )rate (Just $ Rate (r/2) r (2*r) maxBound)YSpecifies the average production rate of a stream in number of yields per second (i.e. Hertz). Concurrent production is ramped up or down automatically to achieve the specified average yield rate. The rate can go down to half of the specified rate on the lower side and double of the specified rate on the higher side.streamlySame as %rate (Just $ Rate r r (2*r) maxBound)Specifies the minimum rate at which the stream should yield values. As far as possible the yield rate would never be allowed to go below the specified rate, even though it may possibly go above it at times, the upper limit is double of the specified rate.streamlySame as %rate (Just $ Rate (r/2) r r maxBound)sSpecifies the maximum rate at which the stream should yield values. As far as possible the yield rate would never be allowed to go above the specified rate, even though it may possibly go below it at times, the lower limit is half of the specified rate. This can be useful in applications where certain resource usage must not be allowed to go beyond certain limits.streamlySame as rate (Just $ Rate r r r 0)-Specifies a constant yield rate. If for some reason the actual rate goes above or below the specified rate we do not try to recover it by increasing or decreasing the rate in future. This can be useful in applications like graphics frame refresh where we need to maintain a constant refresh rate.streamlySpecify the average latency, in nanoseconds, of a single threaded action in a concurrent composition. Streamly can measure the latencies, but that is possible only after at least one task has completed. This combinator can be used to provide a latency hint so that rate control using  can take that into account right from the beginning. When not specified then a default behavior is chosen which could be too slow or too fast, and would be restricted by any other control parameters configured. 
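A sketch of composing the concurrency-control combinators above; the thread count, rate and delays are arbitrary illustration values:

import Streamly
import qualified Streamly.Prelude as S
import Control.Concurrent (threadDelay)

-- Evaluate an async stream with at most 4 worker threads, targeting an
-- average of 10 yields per second.
main :: IO ()
main =
    S.drain
        $ asyncly
        $ maxThreads 4
        $ avgRate 10
        $ S.replicateM 50 (threadDelay 100000)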
A value of 0 indicates default behavior, a negative value means there is no limit i.e. zero latency. This would normally be useful only in high latency and high throughput cases.streamly:Print debug information about an SVar when the stream ends  $!(c) 2018 Composewell TechnologiesBSD3streamly@composewell.com experimentalGHCNone %456HMVgstreamly Just like  except that it has a zipping  instance and no  instance.streamlyList a is a replacement for [a].streamly3A list constructor and pattern that deconstructs a ) into its head and tail. Corresponds to : for Haskell lists.streamly<An empty list constructor and pattern that matches an empty ). Corresponds to '[]' for Haskell lists.streamly Convert a  to a regular streamlyConvert a regular  to a   5%(c) 2017 Harendra KumarBSD3streamly@composewell.com experimentalGHCNone ,/=>?@AM streamly4A parallely composing IO stream of elements of type a. See  documentation for more details.streamlyBAsync composition with strict concurrent execution of all streams.The  instance of k executes both the streams concurrently without any delay or without waiting for the consumer demand and merges the results as they arrive. If the consumer does not consume the results, they are buffered upto a configured maximum, controlled by the  maxBufferk primitive. If the buffer becomes full the concurrent tasks will block until there is space in the buffer.Both WAsyncT and `, evaluate the constituent streams fairly in a round robin fashion. The key difference is that WAsyncTJ might wait for the consumer demand before it executes the tasks whereas [ starts executing all the tasks immediately without waiting for the consumer demand. For WAsyncT the  maxThreads limit applies whereas for % it does not apply. In other words, WAsyncT can be lazy whereas  is strict.t is useful for cases when the streams are required to be evaluated simultaneously irrespective of how the consumer consumes them e.g. when we want to race two tasks and want to start both strictly at the same time or if we have timers in the parallel tasks and our results depend on the timers being started at the same time. If we do not have such requirements then AsyncT or AheadT5 are recommended as they can be more efficient than . main = (toList . ; $ (fromFoldable [1,2]) <> (fromFoldable [3,4])) >>= print   [1,3,2,4] zWhen streams with more than one element are merged, it yields whichever stream yields first without any bias, unlike the Async style streams.Any exceptions generated by a constituent stream are propagated to the output stream. The output and exceptions from a single stream are guaranteed to arrive in the same order in the resulting stream as they were generated in the input stream. However, the relative ordering of elements from different streams in the resulting stream can vary depending on scheduling and generation delays.Similarly, the  instance of  runs all& iterations of the loop concurrently. import Streamly import qualified Streamly.Prelude( as S import Control.Concurrent main = drain .  
$ do n <- return 3 <> return 2 <> return 1 S.yieldM $ do threadDelay (n * 1000000) myThreadId >>= \tid -> putStrLn (show tid ++ ": Delay " ++ show n)  ?ThreadId 40: Delay 1 ThreadId 39: Delay 2 ThreadId 38: Delay 3 Note that parallel composition can only combine a finite number of streams as it needs to retain state for each unfinished stream.5Since: 0.7.0 (maxBuffer applies to ParallelT streams) Since: 0.1.0streamlynXXX we can implement it more efficienty by directly implementing instead of combining streams using parallel.streamlyPolymorphic version of the  operation  of " Merges two streams concurrently.streamlyLike 8 but stops the output as soon as the first stream stops.streamlyLike ? but stops the output as soon as any of the two streams stops.streamlyVGenerate a stream asynchronously to keep it buffered, lazily consume from the buffer.InternalstreamlymCreate an SVar with a fold consumer that will fold any elements sent to it using the supplied fold function.streamlyRedirect a copy of the stream to a supplied fold and run it concurrently in an independent thread. The fold may buffer some elements. The buffer size is determined by the prevailing  maxBuffer setting. h Stream m a -> m b | -----stream m a ---------------stream m a-----  C> S.drain $ S.tapAsync (S.mapM_ print) (S.enumerateFromTo 1 2) 1 2 fExceptions from the concurrently running fold are propagated to the current computation. Note that, because of buffering in the fold, exceptions may be delayed and may not correspond to the current element being processed in the parent stream, but we guarantee that before the parent stream stops the tap finishes and all exceptions from it are drained. Compare with tap.streamlyiConcurrently distribute a stream to a collection of fold functions, discarding the outputs of the folds.QS.drain $ distributeAsync_ [S.mapM_ print, S.mapM_ print] (S.enumerateFromTo 1 2) )distributeAsync_ = flip (foldr tapAsync) Internalstreamly(Fix the type of a polymorphic stream as .  &(c) 2018 Harendra KumarBSD3streamly@composewell.com experimentalGHCNoneBstreamlylTypes that can be enumerated as a stream. The operations in this type class are equivalent to those in the [ type class, except that these generate a stream instead of a list. Use the functions in )Streamly.Internal.Data.Stream.Enumeration module to define new instances.streamlyenumerateFrom from/ generates a stream starting with the element from, enumerating up to  when the type is 8 or generating an infinite stream when the type is not . => S.toList $ S.take 4 $ S.enumerateFrom (0 :: Int) [0,1,2,3] For c types, enumeration is numerically stable. However, no overflow or underflow checks are performed. >> S.toList $ S.take 4 $ S.enumerateFrom 1.1 [1.1,2.1,3.1,4.1] streamly3Generate a finite stream starting with the element from(, enumerating the type up to the value to. If to is smaller than from# then an empty stream is returned. /> S.toList $ S.enumerateFromTo 0 4 [0,1,2,3,4] For 3 types, the last element is equal to the specified to5 value after rounding to the nearest integral value. t> S.toList $ S.enumerateFromTo 1.1 4 [1.1,2.1,3.1,4.1] > S.toList $ S.enumerateFromTo 1.1 4.6 [1.1,2.1,3.1,4.1,5.1] streamlyenumerateFromThen from then, generates a stream whose first element is from, the second element is then3 and the successive elements are in increments of  then - fromD. Enumeration can occur downwards or upwards depending on whether then comes before or after from. 
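A runnable sketch of the strict parallel evaluation described above, using the parallel combinator; the delays are illustrative and the output order depends on scheduling:

import Streamly (parallel)
import qualified Streamly.Prelude as S
import Control.Concurrent (threadDelay)

main :: IO ()
main = do
    -- Both actions start immediately; whichever finishes first is yielded first.
    xs <- S.toList $ S.yieldM (threadDelay 200000 >> return "slow")
                       `parallel` S.yieldM (threadDelay 100000 >> return "fast")
    print xs   -- typically ["fast","slow"]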
For  types the stream ends when B is reached, for unbounded types it keeps enumerating infinitely. z> S.toList $ S.take 4 $ S.enumerateFromThen 0 2 [0,2,4,6] > S.toList $ S.take 4 $ S.enumerateFromThen 0 (-2) [0,-2,-4,-6] streamly enumerateFromThenTo from then to3 generates a finite stream whose first element is from, the second element is then3 and the successive elements are in increments of  then - from up to toC. Enumeration can occur downwards or upwards depending on whether then comes before or after from. o> S.toList $ S.enumerateFromThenTo 0 2 6 [0,2,4,6] > S.toList $ S.enumerateFromThenTo 0 (-2) (-6) [0,-2,-4,-6] streamly#enumerateFromStepIntegral from step6 generates an infinite stream whose first element is from3 and the successive elements are in increments of step.sCAUTION: This function is not safe for finite integral types. It does not check for overflow, underflow or bounds. > S.toList $ S.take 4 $ S.enumerateFromStepIntegral 0 2 [0,2,4,6] > S.toList $ S.take 3 $ S.enumerateFromStepIntegral 0 (-2) [0,-2,-4] streamly Enumerate an  type. enumerateFromIntegral from, generates a stream whose first element is from3 and the successive elements are in increments of 1+. The stream is bounded by the size of the  type. E> S.toList $ S.take 4 $ S.enumerateFromIntegral (0 :: Int) [0,1,2,3] streamly Enumerate an  type in steps. $enumerateFromThenIntegral from then+ generates a stream whose first element is from, the second element is then2 and the successive elements are in increments of  then - from,. The stream is bounded by the size of the  type. > S.toList $ S.take 4 $ S.enumerateFromThenIntegral (0 :: Int) 2 [0,2,4,6] > S.toList $ S.take 4 $ S.enumerateFromThenIntegral (0 :: Int) (-2) [0,-2,-4,-6] streamly Enumerate an  type up to a given limit. enumerateFromToIntegral from to3 generates a finite stream whose first element is from. and successive elements are in increments of 1 up to to. 7> S.toList $ S.enumerateFromToIntegral 0 4 [0,1,2,3,4] streamly Enumerate an % type in steps up to a given limit. (enumerateFromThenToIntegral from then to3 generates a finite stream whose first element is from, the second element is then3 and the successive elements are in increments of  then - from up to to. > S.toList $ S.enumerateFromThenToIntegral 0 2 6 [0,2,4,6] > S.toList $ S.enumerateFromThenToIntegral 0 (-2) (-6) [0,-2,-4,-6] streamly&Numerically stable enumeration from a  number in steps of size 1. enumerateFromFractional from, generates a stream whose first element is from2 and the successive elements are in increments of 12. No overflow or underflow checks are performed.This is the equivalent to  for  types. For example: H> S.toList $ S.take 4 $ S.enumerateFromFractional 1.1 [1.1,2.1,3.1,4.1] streamly&Numerically stable enumeration from a  number in steps. %enumerateFromThenFractional from then, generates a stream whose first element is from, the second element is then3 and the successive elements are in increments of  then - from2. No overflow or underflow checks are performed.This is the equivalent of  for  types. For example: > S.toList $ S.take 4 $ S.enumerateFromThenFractional 1.1 2.1 [1.1,2.1,3.1,4.1] > S.toList $ S.take 4 $ S.enumerateFromThenFractional 1.1 (-2.1) [1.1,-2.1,-5.300000000000001,-8.500000000000002] streamly&Numerically stable enumeration from a  number to a given limit. !enumerateFromToFractional from to3 generates a finite stream whose first element is from. and successive elements are in increments of 1 up to to.This is the equivalent of  for  types. 
For example: > S.toList $ S.enumerateFromToFractional 1.1 4 [1.1,2.1,3.1,4.1] > S.toList $ S.enumerateFromToFractional 1.1 4.6 [1.1,2.1,3.1,4.1,5.1] 7Notice that the last element is equal to the specified to. value after rounding to the nearest integer.streamly&Numerically stable enumeration from a ( number in steps up to a given limit. *enumerateFromThenToFractional from then to3 generates a finite stream whose first element is from, the second element is then3 and the successive elements are in increments of  then - from up to to.This is the equivalent of  for  types. For example: > S.toList $ S.enumerateFromThenToFractional 0.1 2 6 [0.1,2.0,3.9,5.799999999999999] > S.toList $ S.enumerateFromThenToFractional 0.1 (-2) (-6) [0.1,-2.0,-4.1000000000000005,-6.200000000000001] streamly for  types not larger than .streamly for  types not larger than .streamly for  types not larger than .Note: We convert the  to  and enumerate the ,. If a type is bounded but does not have a n instance then we can go on enumerating it beyond the legal values of the type, resulting in the failure of  when converting back to . Therefore we require a / instance for this function to be safely used.streamly "enumerate = enumerateFrom minBound Enumerate a  type from its  to streamly &enumerateTo = enumerateFromTo minBound Enumerate a  type from its  to specified value.streamly 4enumerateFromBounded = enumerateFromTo from maxBound for   types.'(c) 2017 Harendra KumarBSD3streamly@composewell.com experimentalGHCNone"#>HSX streamlytime since last event streamlytime as per last event!streamly!total number sessions in progress"streamlyheap for timeouts#streamlyStored sessions for keys$streamlyCompleted sessionsstreamlyLDecompose a stream into its head and tail. If the stream is empty, returns  &. If the stream is non-empty, returns  Just (a, ma), where a is the head of the stream and ma its tail.vThis is a brute force primitive. Avoid using it as long as possible, use it when no other combinator can do the job. This can be used to do pretty much anything in an imperative manner, as it just breaks down the stream into individual elements and we can loop over them as we deem fit. For example, this can be used to convert a streamly stream into other stream types.streamly 7unfoldr step s = case step s of Nothing -> 0 Just (a, b) -> a `cons` unfoldr step b Build a stream by unfolding a pure step function step starting from a seed sq. The step function returns the next element in the stream and the next seed value. When it is done it returns  # and the stream ends. For example, elet f b = if b > 3 then Nothing else Just (b, b + 1) in toList $ unfoldr f 0  [0,1,2,3] streamlyBuild a stream by unfolding a monadic step function starting from a seed. The step function returns the next element in the stream and the next seed value. When it is done it returns  # and the stream ends. For example, let f b = if b > 3 then return Nothing else print b >> return (Just (b, b + 1)) in drain $ unfoldrM f 0   0 1 2 3 When run concurrently, the next unfold step can run concurrently with the processing of the output of the previous step. Note that more than one step cannot run concurrently as the next step depends on the output of the previous step. 
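The integral and fractional enumerations above, consolidated into one runnable example:

import qualified Streamly.Prelude as S

main :: IO ()
main = do
    S.toList (S.enumerateFromThenTo 0 2 (6 :: Int))   >>= print   -- [0,2,4,6]
    S.toList (S.enumerateFromTo (1.1 :: Double) 4.6)  >>= print   -- [1.1,2.1,3.1,4.1,5.1]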
(asyncly $ S.unfoldrM (\n -> liftIO (threadDelay 1000000) >> return (Just (n, n + 1))) 0) & S.foldlM' (\_ a -> threadDelay 1000000 >> print a) ()  Concurrent Since: 0.1.0streamly Convert an - into a stream by supplying it an input seed.,unfold (UF.replicateM 10) (putStrLn "hello") Since: 0.7.0streamly yield a = a `cons` nil ,Create a singleton stream from a pure value.?The following holds in monadic streams, but not in Zip streams: #yield = pure yield = yieldM . pure In Zip applicative streams  is not the same as  because in that case  is equivalent to  instead.  and ( are equally efficient, in other cases G may be slightly more efficient than the other equivalent definitions.streamly yieldM m = m `consM` nil 0Create a singleton stream from a monadic action. *> toList $ yieldM getLine hello ["hello"] streamly 6fromIndices f = let g i = f i `cons` g (i + 1) in g 0 GGenerate an infinite stream, whose values are the output of a function f9 applied on the corresponding index. Index starts at 0. 5> S.toList $ S.take 5 $ S.fromIndices id [0,1,2,3,4] streamly 8fromIndicesM f = let g i = f i `consM` g (i + 1) in g 0 PGenerate an infinite stream, whose values are the output of a monadic function f7 applied on the corresponding index. Index starts at 0. Concurrentstreamly replicateM = take n . repeatM 1Generate a stream by performing a monadic action n times. Same as: drain $ serially $ S.replicateM 10 $ (threadDelay 1000000 >> print 1) drain $ asyncly $ S.replicateM 10 $ (threadDelay 1000000 >> print 1)  Concurrentstreamly replicate = take n . repeat Generate a stream of length n by repeating a value n times.streamly6Generate an infinite stream by repeating a pure value.streamly 0repeatM = fix . consM repeatM = cycle1 . yieldM CGenerate a stream by repeatedly executing a monadic action forever. drain $ serially $ S.take 10 $ S.repeatM $ (threadDelay 1000000 >> print 1) drain $ asyncly $ S.take 10 $ S.repeatM $ (threadDelay 1000000 >> print 1) &Concurrent, infinite (do not use with  parallely)streamly #iterate f x = x `cons` iterate f x !Generate an infinite stream with xT as the first element and each successive element derived by applying the function f on the previous element. 5> S.toList $ S.take 5 $ S.iterate (+1) 1 [1,2,3,4,5] streamly <iterateM f m = m >>= a -> return a `consM` iterateM f (f a) LGenerate an infinite stream with the first element generated by the action mG and each successive element derived by applying the monadic function f on the previous element.When run concurrently, the next iteration can run concurrently with the processing of the previous iteration. Note that more than one iteration cannot run concurrently as the next iteration depends on the output of the previous iteration. drain $ serially $ S.take 10 $ S.iterateM (\x -> threadDelay 1000000 >> print x >> return (x + 1)) (return 0) drain $ asyncly $ S.take 10 $ S.iterateM (\x -> threadDelay 1000000 >> print x >> return (x + 1)) (return 0)  ConcurrentSince: 0.7.0 (signature change) Since: 0.1.2streamly  fromListM =    PConstruct a stream from a list of monadic actions. This is more efficient than  for serial streams.streamly fromFoldableM =    Construct a stream from a  containing monadic actions. 
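A cleaned-up, self-contained version of the generation examples above (unfolding from a seed and bounding an infinite stream with take):

import qualified Streamly.Prelude as S

main :: IO ()
main = do
    -- Pure unfold: stop when the step function returns Nothing.
    S.toList (S.unfoldr (\b -> if b > 3 then Nothing else Just (b, b + 1)) (0 :: Int))
        >>= print   -- [0,1,2,3]
    -- Infinite generation, bounded by take.
    S.toList (S.take 5 (S.iterate (+ 1) (1 :: Int))) >>= print   -- [1,2,3,4,5]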
drain $ serially $ S.fromFoldableM $ replicateM 10 (threadDelay 1000000 >> print 1) drain $ asyncly $ S.fromFoldableM $ replicateM 10 (threadDelay 1000000 >> print 1) Concurrent (do not use with  parallely on infinite containers)streamlySame as  fromFoldable.streamly6Read lines from an IO Handle into a stream of Strings.streamly Construct a stream by reading a   repeatedly.Internalstreamly currentTime gG returns a stream of absolute timestamps using a clock of granularity gX specified in seconds. A low granularity clock is more expensive in terms of CPU usage..Note: This API is not safe on 32-bit machines.Internalstreamly"Right associative/lazy pull fold. foldrM build final stream9 constructs an output structure using the step function build. build is invoked with the next input element and the remaining (lazy) tail of the output structure. It builds a lazy output expression using the two. When the "tail structure" in the output expression is evaluated it calls build( again thus lazily consuming the input stream. until either the output expression built by build@ is free of the "tail" or the input is exhausted in which case finaly is used as the terminating case for the output structure. For more details see the description in the previous section.%Example, determine if any element is % in a stream:cS.foldrM (\x xs -> if odd x then return True else xs) (return False) $ S.fromList (2:4:5:undefined)> True Since: 0.7.0 (signature changed) Since: 0.2.0 (signature changed) Since: 0.1.0streamly Right fold to a streaming monad. foldrS S.cons S.nil === id can be used to perform stateless stream to stream transformations like map and filter in general. It can be coupled with a scan to perform stateful transformations. However, note that the custom map and filter routines can be much more efficient than this due to better stream fusion.4S.toList $ S.foldrS S.cons S.nil $ S.fromList [1..5] > [1,2,3,4,5]%Find if any element in the stream is :S.toList $ S.foldrS (\x xs -> if odd x then return True else xs) (return False) $ (S.fromList (2:4:5:undefined) :: SerialT IO Int)> [True]:Map (+2) on odd elements and filter out the even elements:vS.toList $ S.foldrS (\x xs -> if odd x then (x + 2) `S.cons` xs else xs) S.nil $ (S.fromList [1..5] :: SerialT IO Int) > [3,5,7]% can also be represented in terms of ., however, the former is much more efficient: WfoldrM f z s = runIdentityT $ foldrS (\x xs -> lift $ f x (runIdentityT xs)) (lift z) sstreamlySRight fold to a transformer monad. This is the most general right fold function.  is a special case of  , however ' implementation can be more efficient: gfoldrS = foldrT foldrM f z s = runIdentityT $ foldrT (\x xs -> lift $ f x (runIdentityT xs)) (lift z) sl can be used to translate streamly streams to other transformer monads e.g. to a different streaming type. streamlyQRight fold, lazy for lazy monads and pure streams, and strict for strict monads. Please avoid using this routine in strict monads like IO unless you need a strict right fold. This is provided only for use in lazy monads (e.g. Identity) or pure streams. Note that with this signature it is not possible to implement a lazy foldr when the monad mw is strict. In that case it would be strict in its accumulator and therefore would necessarily consume all its input. streamly[Lazy right fold for non-empty streams, using first element as the starting value. Returns   if the stream is empty. streamlyStrict left fold with an extraction function. 
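The lazy right fold described above, as a runnable example; it assumes the streamly 0.7 signature of foldrM, i.e. a step of type a -> m b -> m b:

import Streamly (SerialT)
import qualified Streamly.Prelude as S

main :: IO ()
main = do
    -- The fold short-circuits on the first odd element, so the undefined
    -- tail of the input list is never evaluated.
    r <- S.foldrM (\x xs -> if odd x then return True else xs)
                  (return False)
                  (S.fromList (2 : 4 : 5 : undefined) :: SerialT IO Int)
    print r   -- True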
Like the standard strict left fold, but applies a user supplied extraction function (the third argument) to the folded value at the end. This is designed to work with the foldl library. The suffix x is a mnemonic for extraction. streamly#Left associative/strict push fold. foldl' reduce initial stream invokes reduceE with the accumulator and the next input in the input stream, using initial; as the initial value of the current value of the accumulator. When the input is exhausted the current value of the accumulator is returned. Make sure to use a strict data structure for accumulator to not build unnecessary lazy expressions unless that's what you want. See the previous section for more details. streamly]Strict left fold, for non-empty streams, using first element as the starting value. Returns   if the stream is empty.streamlyLike  #, but with a monadic step function.streamlyLike  " but with a monadic step function.streamly+Fold a stream using the supplied left fold.'S.fold FL.sum (S.enumerateFromTo 1 100)5050streamly drain = mapM_ (\_ -> return ())NRun a stream, discarding the results. By default it interprets the stream as ?O, to run other types of streams use the type adapting combinators for example drain . asyncly.streamlyNRun a stream, discarding the results. By default it interprets the stream as ?O, to run other types of streams use the type adapting combinators for example  runStream . asyncly.streamly drainN n = drain . take nRun maximum up to n iterations of a stream.streamly runN n = runStream . take nRun maximum up to n iterations of a stream.streamly "drainWhile p = drain . takeWhile p1Run a stream as long as the predicate holds true.streamly $runWhile p = runStream . takeWhile p1Run a stream as long as the predicate holds true.streamly&Determine whether the stream is empty.streamly0Extract the first element of the stream, if any.  head = (!! 0)streamlyExtract the first element of the stream, if any, otherwise use the supplied default value. It can help avoid one branch in high performance code.Internalstreamly tail = fmap (fmap snd) . uncons8Extract all but the first element of the stream, if any.streamly7Extract all but the last element of the stream, if any.streamly/Extract the last element of the stream, if any. last xs = xs !! (length xs - 1)streamly6Determine whether an element is present in the stream.streamly:Determine whether an element is not present in the stream.streamly#Determine the length of the stream. streamly?Determine whether all elements of a stream satisfy a predicate.!streamlyFDetermine whether any of the elements of a stream satisfy a predicate."streamly8Determines if all elements of a boolean stream are True.#streamlyDDetermines whether at least one element of a boolean stream is True.$streamlyBDetermine the sum of all elements of a stream of numbers. Returns 0a when the stream is empty. Note that this is not numerically stable for floating point numbers.%streamlyFDetermine the product of all elements of a stream of numbers. 
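The strict left folds above, both as a raw foldl' and via the composable Fold type (assuming Streamly.Data.Fold imported as FL):

import qualified Streamly.Prelude as S
import qualified Streamly.Data.Fold as FL

main :: IO ()
main = do
    S.foldl' (+) 0 (S.enumerateFromTo 1 (100 :: Int)) >>= print   -- 5050
    S.fold FL.sum (S.enumerateFromTo 1 (100 :: Int))  >>= print   -- 5050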
Returns 1 when the stream is empty.&streamly  minimum = ' compare *Determine the minimum element in a stream.'streamlyRDetermine the minimum element in a stream using the supplied comparison function.(streamly  maximum = ) compare *Determine the maximum element in a stream.)streamlyRDetermine the maximum element in a stream using the supplied comparison function.*streamly&Lookup the element at the given index.+streamly!In a stream of (key-value) pairs (a, b), return the value b9 of the first pair where the key equals the given value a. "lookup = snd <$> find ((==) . fst),streamlyLike -" but with a non-monadic predicate. find p = findM (return . p)-streamly=Returns the first element that satisfies the given predicate..streamlyTFind all the indices where the element in the stream satisfies the given predicate./streamly;Returns the first index that satisfies the given predicate.0streamly_Find all the indices where the value of the element in the stream is equal to the given value.1streamlyCReturns the first index where a given value is found in the stream. elemIndex a = findIndex (== a)2streamlyReturns _ if the first stream is the same as or a prefix of the second. A stream is a prefix of itself. Q> S.isPrefixOf (S.fromList "hello") (S.fromList "hello" :: SerialT IO Char) True 3streamlyReturns  if all the elements of the first stream occur, in order, in the second stream. The elements do not have to occur consecutively. A stream is a subsequence of itself. T> S.isSubsequenceOf (S.fromList "hlo") (S.fromList "hello" :: SerialT IO Char) True 4streamly.Drops the given prefix from a stream. Returns  > if the stream does not start with the given prefix. Returns Just nil, when the prefix is the same as the stream.5streamly mapM_ = drain . mapMApply a monadic action to each element of the stream and discard the output of the action. This is not really a pure transformation operation but a transformation followed by fold.6streamly toList = S.foldr (:) [] mConvert a stream into a list in the underlying monad. The list can be consumed lazily in a lazy monad (e.g. c). In a strict monad (e.g. IO) the whole list is generated and buffered before it can be consumed.Warning!d working on large lists accumulated as buffers in memory could be very inefficient, consider using Streamly.Array instead.7streamly #toListRev = S.foldl' (flip (:)) [] FConvert a stream into a list in reverse order in the underlying monad.Warning!d working on large lists accumulated as buffers in memory could be very inefficient, consider using Streamly.Array instead.Internal8streamly #toHandle h = S.mapM_ $ hPutStrLn h *Write a stream of Strings to an IO Handle.9streamly/A fold that buffers its input to a pure stream.Warning!f working on large streams accumulated as buffers in memory could be very inefficient, consider using Streamly.Array instead.Internal:streamlyMBuffers the input stream to a pure stream in the reverse order of the input.Warning!f working on large streams accumulated as buffers in memory could be very inefficient, consider using Streamly.Array instead.Internal;streamly"Convert a stream to a pure stream. toPure = foldr cons nil Internal<streamly3Convert a stream to a pure stream in reverse order. 
#toPureRev = foldl' (flip cons) nil Internal=streamlySParallel transform application operator; applies a stream transformation function t m a -> t m b to a stream t m a concurrently; the input stream is evaluated asynchronously in an independent thread yielding elements to a buffer and the transformation function runs in another thread consuming the input from the buffer. =5 is just like regular function application operator & except that it is concurrent.If you read the signature as $(t m a -> t m b) -> (t m a -> t m b)y you can look at it as a transformation that converts a transform function to a buffered concurrent transform function.]The following code prints a value every second even though each stage adds a 1 second delay. mdrain $ S.mapM (\x -> threadDelay 1000000 >> print x) |$ S.repeatM (threadDelay 1000000 >> return 1)  Concurrent>streamlySame as =.Internal?streamlyyParallel reverse function application operator for streams; just like the regular reverse function application operator & except that it is concurrent. ndrain $ S.repeatM (threadDelay 1000000 >> return 1) |& S.mapM (\x -> threadDelay 1000000 >> print x)  Concurrent@streamly<Parallel fold application operator; applies a fold function  t m a -> m b to a stream t m a concurrently; The the input stream is evaluated asynchronously in an independent thread yielding elements to a buffer and the folding action runs in another thread consuming the input from the buffer.If you read the signature as  (t m a -> m b) -> (t m a -> m b)o you can look at it as a transformation that converts a fold function to a buffered concurrent fold function.The .I at the end of the operator is a mnemonic for termination of the stream. o S.foldlM' (\_ a -> threadDelay 1000000 >> print a) () |$. S.repeatM (threadDelay 1000000 >> return 1)  ConcurrentAstreamlySame as @.InternalBstreamlylParallel reverse function application operator for applying a run or fold functions to a stream. Just like @' except that the operands are reversed. p S.repeatM (threadDelay 1000000 >> return 1) |&. S.foldlM' (\_ a -> threadDelay 1000000 >> print a) ()  ConcurrentCstreamlyUse a  to transform a stream.Dstreamly3Strict left scan with an extraction function. Like Fy, but applies a user supplied extraction function (the third argument) at each step. This is designed to work with the foldl library. The suffix x is a mnemonic for extraction.!Since: 0.7.0 (Monad m constraint) Since 0.2.0EstreamlyLike F" but with a monadic fold function.FstreamlyStrict left scan. Like map, FG too is a one to one transformation, however it adds an extra element. >> S.toList $ S.scanl' (+) 0 $ fromList [1,2,3,4] [0,1,3,6,10]  \> S.toList $ S.scanl' (flip (:)) [] $ S.fromList [1,2,3,4] [[],[1],[2,1],[3,2,1],[4,3,2,1]] The output of Fi is the initial value of the accumulator followed by all the intermediate steps and the final result of  .LBy streaming the accumulated state after each fold step, we can share the state across multiple stages of stream composition. Each stage can modify or extend the state, do some processing with it and emit it for the next stage, thus modularizing the stream processing. 
This can be useful in stateful or event-driven programming.|Consider the following monolithic example, computing the sum and the product of the elements in a stream in one go using a foldl': N> S.foldl' (\(s, p) x -> (s + x, p * x)) (0,1) $ S.fromList [1,2,3,4] (10,24) Using scanl' we can make it modular by computing the sum in the first stage and passing it down to the next stage for computing the product: > S.foldl' (\(_, p) (s, x) -> (s, p * x)) (0,1) $ S.scanl' (\(s, _) x -> (s + x, x)) (0,1) $ S.fromList [1,2,3,4] (10,24)  IMPORTANT: F evaluates the accumulator to WHNF. To avoid building lazy expressions inside the accumulator, it is recommended that a strict data structure is used for accumulator.GstreamlyLike F: but does not stream the initial value of the accumulator. .postscanl' f z xs = S.drop 1 $ S.scanl' f z xsHstreamlyLike G" but with a monadic step function.IstreamlyCLike scanl' but does not stream the final value of the accumulator.Jstreamly1Like postscanl' but with a monadic step function.KstreamlyLike L" but with a monadic step function.LstreamlyLike F but for a non-empty stream. The first element of the stream is used as the initial value of the accumulator. Does nothing if the stream is empty. :> S.toList $ S.scanl1 (+) $ fromList [1,2,3,4] [1,3,6,10] Mstreamly+Scan a stream using the given monadic fold.Nstreamly/Postscan a stream using the given monadic fold.OstreamlyApply a function on every two successive elements of a stream. If the stream consists of a single element the output is an empty stream.InternalPstreamlyLike O$ but with an effectful map function.InternalQstreamly2Include only those elements that pass a predicate.RstreamlySame as Q but with a monadic predicate.Sstreamly7Drop repeated elements that are adjacent to each other.Tstreamly`Ensures that all the elements of the stream are identical and then returns that unique element.Ustreamly Take first n/ elements from the stream and discard the rest.Vstreamly<End the stream as soon as the predicate fails on an element.WstreamlySame as V but with a monadic predicate.XstreamlytakeByTime duration- yields stream elements upto specified time duration. The duration starts when the stream is evaluated for the first time, before the first element is yielded. The time duration is checked before generating each element, if the duration has expired the stream stops.AThe total time taken in executing the stream is guaranteed to be at least duration, however, because the duration is checked before generating an element, the upper bound is indeterminate and depends on the time taken in generating and processing the last element.lNo element is yielded if the duration is zero. At least one element is yielded if the duration is non-zero.InternalYstreamlyDiscard first n, elements from the stream and take the rest.ZstreamlydDrop elements in the stream as long as the predicate succeeds and then take the rest of the stream.[streamlySame as Z but with a monadic predicate.\streamlydropByTime duration' drops stream elements until specified durationr has passed. The duration begins when the stream is evaluated for the first time. 
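The modular scan described above, as one runnable example: the scan streams every intermediate accumulator, so a later stage can reuse the running state (a running sum feeding a running product):

import qualified Streamly.Prelude as S

main :: IO ()
main = do
    S.toList (S.scanl' (+) 0 (S.fromList [1,2,3,4 :: Int])) >>= print   -- [0,1,3,6,10]
    S.foldl' (\(_, p) (s, x) -> (s, p * x)) (0, 1)
             (S.scanl' (\(s, _) x -> (s + x, x)) (0, 1) (S.fromList [1,2,3,4 :: Int]))
        >>= print   -- (10,24)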
The time duration is checked afterj generating a stream element, the element is yielded if the duration has expired otherwise it is dropped.BThe time elapsed before starting to generate the first element is at most duration, however, because the duration expiry is checked after the element is generated, the lower bound is indeterminate and depends on the time taken in generating an element.1All elements are yielded if the duration is zero.Internal]streamly mapM f = sequence . map f oApply a monadic function to each element of the stream and replace it with the output of the resulting action. > drain $ S.mapM putStr $ S.fromList ["a", "b", "c"] abc drain $ S.replicateM 10 (return 1) & (serially . S.mapM (\x -> threadDelay 1000000 >> print x)) drain $ S.replicateM 10 (return 1) & (asyncly . S.mapM (\x -> threadDelay 1000000 >> print x)) Concurrent (do not use with  parallely on infinite streams)^streamly sequence = mapM id WReplace the elements of a stream of monadic actions with the outputs of those actions. > drain $ S.sequence $ S.fromList [putStr "a", putStr "b", putStrLn "c"] abc drain $ S.replicateM 10 (return $ threadDelay 1000000 >> print 1) & (serially . S.sequence) drain $ S.replicateM 10 (return $ threadDelay 1000000 >> print 1) & (asyncly . S.sequence) Concurrent (do not use with  parallely on infinite streams)_streamlyMap a 0 returning function to a stream, filter out the  9 elements, and return a stream of values extracted from .Equivalent to: mapMaybe f = S.map ' . S.filter ( . S.map f `streamlyLike _ but maps a monadic function.Equivalent to: mapMaybeM f = S.map ' . S.filter ( . S.mapM f Concurrent (do not use with  parallely on infinite streams)astreamlyReturns the elements of the stream in reverse order. The stream must be finite. Note that this necessarily buffers the entire stream in memory. Since 0.7.0 (Monad m constraint) Since: 0.1.1bstreamlyLike a& but several times faster, requires a ) instance.cstreamlyGenerate a stream by inserting the result of a monadic action between consecutive elements of the given stream. Note that the monadic action is performed after the stream action before which its result is inserted. J> S.toList $ S.intersperseM (return ',') $ S.fromList "hello" "h,e,l,l,o" dstreamlyaGenerate a stream by inserting a given element between consecutive elements of the given stream. @> S.toList $ S.intersperse ',' $ S.fromList "hello" "h,e,l,l,o" estreamly9Insert a monadic action after each element in the stream.*streamlyPerform a side effect after each element of a stream. The output of the effectful action is discarded, therefore, the input stream remains unchanged. T> S.mapM_ putChar $ S.intersperseSuffix_ (threadDelay 1000000) $ S.fromList "hello" InternalfstreamlyGIntroduces a delay of specified seconds after each element of a stream.InternalgstreamlyLike eF but intersperses a monadic action into the input stream after every n% elements and after the last element. V> S.toList $ S.intersperseSuffixBySpan 2 (return ',') $ S.fromList "hello" "he,ll,o," Internalhstreamly?Intersperse a monadic action into the input stream after every n seconds. > S.drain $ S.interjectSuffix 1 (putChar ',') $ S.mapM (\x -> threadDelay 1000000 >> putChar x) $ S.fromList "hello" "h,e,l,l,o" istreamlyinsertBy cmp elem stream inserts elem before the first element in stream that is less than elem when compared using cmp. 
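A small runnable example of the per-element effects and interspersing described above (assuming the intersperse combinator added in streamly 0.7):

import qualified Streamly.Prelude as S

main :: IO ()
main = do
    -- Apply a monadic action to every element.
    S.drain $ S.mapM print (S.fromList [1, 2, 3 :: Int])
    -- Insert a pure separator between consecutive elements.
    S.toList (S.intersperse ',' (S.fromList "hello")) >>= putStrLn   -- h,e,l,l,o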
insertBy cmp x = o cmp ( x) A> S.toList $ S.insertBy compare 2 $ S.fromList [1,3,5] [1,2,3,5] jstreamlygDeletes the first occurrence of the element in the stream that satisfies the given equality predicate. >> S.toList $ S.deleteBy (==) 3 $ S.fromList [1,3,3,5] [1,3,5] kstreamly kindexed = S.postscanl' (\(i, _) x -> (i + 1, x)) (-1,undefined) indexed = S.zipWith (,) (S.enumerateFrom 0)DPair each element in a stream with its index, starting from index 0. 0> S.toList $ S.indexed $ S.fromList "hello" [(0,h),(1,e),(2,l),(3,l),(4,o)] lstreamly indexedR n = S.postscanl' (\(i, _) x -> (i - 1, x)) (n + 1,undefined) indexedR n = S.zipWith (,) (S.enumerateFromThen n (n - 1))MPair each element in a stream with its index, starting from the given index n and counting down. 5> S.toList $ S.indexedR 10 $ S.fromList "hello" [(10,h),(9,e),(8,l),(7,l),(6,o)] mstreamly<Compare two streams for equality using an equality function.nstreamlyBCompare two streams lexicographically using a comparison function.ostreamlyMerge two streams using a comparison function. The head elements of both the streams are compared and the smaller of the two elements is emitted, if both elements are equal then the element from the first stream is used first.pIf the streams are sorted in ascending order, the resulting stream would also remain sorted in ascending order. [> S.toList $ S.mergeBy compare (S.fromList [1,3,5]) (S.fromList [2,4,6,8]) [1,2,3,4,5,6,8] pstreamlyLike o( but with a monadic comparison function.Merge two streams randomly: > randomly _ _ = randomIO >>= x -> return $ if x then LT else GT > S.toList $ S.mergeByM randomly (S.fromList [1,1,1,1]) (S.fromList [2,2,2,2]) [2,1,2,2,2,1,1,1] )Merge two streams in a proportion of 2:1: 9proportionately m n = do ref <- newIORef $ cycle $ concat [replicate m LT, replicate n GT] return $ \_ _ -> do r <- readIORef ref writeIORef ref $ tail r return $ head r main = do f <- proportionately 2 1 xs <- S.toList $ S.mergeByM f (S.fromList [1,1,1,1,1,1]) (S.fromList [2,2,2]) print xs [1,1,2,1,1,2,1,1,2] qstreamlyLike o[ but merges concurrently (i.e. both the elements being merged are generated concurrently).rstreamlyLike p[ but merges concurrently (i.e. both the elements being merged are generated concurrently).sstreamlyconcatMapWith merge map stream is a two dimensional looping combinator. The first argument specifies a merge or concat function that is used to merge the streams generated by applying the second argument i.e. the mapN function to each element of the input stream. The concat function could be serial, parallel, async, aheads or any other zip or merge function and the second argument could be any stream generation function using a seed.Compare tstreamlyqMap a stream producing function on each element of the stream and then flatten the results into a single stream.  concatMap = s  concatMap f = { (return . f) ustreamlyAppend the outputs of two streams, yielding all the elements from the first stream and then yielding all the elements from the second stream./IMPORTANT NOTE: This could be 100x faster than  serial/<> for appending a few (say 100) streams because it can fuse via stream fusion. However, it does not scale for a large number of streams (say 1000s) and becomes qudartically slow. Therefore use this for custom appending of a few streams but use t) or 'concatMapWith serial' for appending n, streams or infinite containers of streams.vstreamlyInterleaves the outputs of two streams, yielding elements from each stream alternately, starting from the first stream. 
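The ordered merge described above, as a runnable example:

import qualified Streamly.Prelude as S

main :: IO ()
main =
    -- Merging two sorted streams keeps the result sorted.
    S.toList (S.mergeBy compare (S.fromList [1,3,5]) (S.fromList [2,4,6,8 :: Int]))
        >>= print   -- [1,2,3,4,5,6,8]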
If any of the streams finishes early the other stream continues alone until it too finishes.:set -XOverloadedStrings/interleave "ab" ",,,," :: SerialT Identity CharfromList "a,b,,,"/interleave "abcd" ",," :: SerialT Identity CharfromList "a,b,cd"v is dual to y, it can be called  interleaveMax.%Do not use at scale in concatMapWith.wstreamlyInterleaves the outputs of two streams, yielding elements from each stream alternately, starting from the first stream. As soon as the first stream finishes, the output stops, discarding the remaining part of the second stream. In this case, the last element in the resulting stream would be from the second stream. If the second stream finishes early then the first stream still continues to yield elements until it finishes.:set -XOverloadedStrings6interleaveSuffix "abc" ",,,," :: SerialT Identity CharfromList "a,b,c,"3interleaveSuffix "abc" "," :: SerialT Identity CharfromList "a,bc"w is a dual of x.%Do not use at scale in concatMapWith.xstreamlyInterleaves the outputs of two streams, yielding elements from each stream alternately, starting from the first stream and ending at the first stream. If the second stream is longer than the first, elements from the second stream are infixed with elements from the first stream. If the first stream is longer then it continues yielding elements even after the second stream has finished.:set -XOverloadedStrings5interleaveInfix "abc" ",,,," :: SerialT Identity CharfromList "a,b,c"2interleaveInfix "abc" "," :: SerialT Identity CharfromList "a,bc"x is a dual of w.%Do not use at scale in concatMapWith.ystreamly5Interleaves the outputs of two streams, yielding elements from each stream alternately, starting from the first stream. The output stops as soon as any of the two streams finishes, discarding the remaining part of the other stream. The last element of the resulting stream would be from the longer stream.:set -XOverloadedStrings2interleaveMin "ab" ",,,," :: SerialT Identity CharfromList "a,b,"2interleaveMin "abcd" ",," :: SerialT Identity CharfromList "a,b,c"y is dual to v.%Do not use at scale in concatMapWith.zstreamlySchedule the execution of two streams in a fair round-robin manner, executing each stream once, alternately. Execution of a stream may not necessarily result in an output, a stream may chose to SkipP producing an element until later giving the other stream a chance to run. Therefore, this combinator fairly interleaves the execution of two streams rather than fairly interleaving the output of the two streams. This can be useful in co-operative multitasking without using explicit threads. This can be used as an alternative to async.%Do not use at scale in concatMapWith.{streamlyMap a stream producing monadic function on each element of the stream and then flatten the results into a single stream. Since the stream generation function is monadic, unlike tQ, it can produce an effect at the beginning of each iteration of the inner loop.|streamlyLike t but uses an  for stream generation. Unlike t this can fuse the O code with the inner loop and therefore provide many times better performance.}streamlyLike |1 but interleaves the streams in the same way as v# behaves instead of appending them.~streamlyLike |. but executes the streams in the same way as z.streamlyx followed by unfold and concat.Internalstreamlyd followed by unfold and concat. 
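concatMapWith, described above, lets the caller choose how the inner streams are combined. A minimal sketch contrasting serial append with wSerial interleaving; it assumes concatMapWith is available from Streamly.Prelude and serial/wSerial from the Streamly module, as in this documentation, and the expected outputs follow from the interleaving semantics described above.

    import Streamly (serial, wSerial)
    import qualified Streamly.Prelude as S

    main :: IO ()
    main = do
        -- combine the inner streams by appending them
        S.toList (S.concatMapWith serial (\x -> S.fromList [x, x]) (S.fromList "ab")) >>= print
        -- expected: "aabb"
        -- combine the inner streams by interleaving them
        S.toList (S.concatMapWith wSerial (\x -> S.fromList [x, x]) (S.fromList "ab")) >>= print
        -- expected: "abab"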
%unwords = intercalate " " UF.fromList1intercalate " " UF.fromList ["abc", "def", "ghi"]> "abc def ghi"streamlyUnfold the elements of a stream, intersperse the given element between the unfolded streams and then concat them into a single stream. unwords = S.interpose ' 'Internalstreamlyw followed by unfold and concat.Internalstreamlye followed by unfold and concat. ,unlines = intercalateSuffix "\n" UF.fromList2intercalate "\n" UF.fromList ["abc", "def", "ghi"]> "abc\ndef\nghi\n"streamlyUnfold the elements of a stream, append the given element after each unfolded stream and then concat them into a single stream.  unlines = S.interposeSuffix '\n'InternalstreamlyLike ' but using a stream generator function.InternalstreamlySTraverse a forest with recursive tree structures whose non-leaf nodes are of type a and leaf nodes are of type b, flattening all the trees into streams and combining the streams into a single stream consisting of both leaf and non-leaf nodes. is a generalization of ty, using a recursive feedback loop to append the non-leaf nodes back to the input stream enabling recursive traversal. t* flattens a single level nesting whereas ) flattens a recursively nested structure.DTraversing a directory tree recursively is a canonical use case of . concatMapTreeWith combine f xs = concatMapIterateWith combine g xs where g (Left tree) = f tree g (Right leaf) = nil Internalstreamly:Flatten a stream with a feedback loop back into the input.For example, exceptions generated by the output stream can be fed back to the input to take any corrective action. The corrective action may be to retry the action or do nothing or log the errors. For the retry case we need a feedback loop.Internalstreamly1Concat a stream of trees, generating only leaves. Compare with T. While the latter returns all nodes in the tree, this one returns only the leaves._Traversing a directory tree recursively and yielding on the files is a canonical use case of . KconcatMapTreeYieldLeavesWith combine f = concatMapLoopWith combine f yield InternalstreamlysplitAt n f1 f2 composes folds f1 and f2 such that first n- elements of its input are consumed by fold f11 and the rest of the stream is consumed by fold f2. Mlet splitAt_ n xs = S.fold (FL.splitAt n FL.toList FL.toList) $ S.fromList xssplitAt_ 6 "Hello World!"> ("Hello ","World!")splitAt_ (-1) [1,2,3]> ([],[1,2,3])splitAt_ 0 [1,2,3]> ([],[1,2,3])splitAt_ 1 [1,2,3] > ([1],[2,3])splitAt_ 3 [1,2,3]> ([1,2,3],[])splitAt_ 4 [1,2,3]> ([1,2,3],[])streamly&Group the input stream into groups of nJ elements each and then fold each group using the provided fold function. I> S.toList $ S.chunksOf 2 FL.sum (S.enumerateFromTo 1 10) [3,7,11,15,19]/This can be considered as an n-fold version of ltake where we apply ltake= repeatedly on the leftover stream until the stream exhausts.streamlyarraysOf n stream9 groups the elements in the input stream into arrays of n elements each.0Same as the following but may be more efficient: &arraysOf n = S.chunksOf n (A.writeN n)streamly'Group the input stream into windows of nH second each and then fold each group using the provided fold function.streamlyBreak the input stream into two groups, the first group takes the input as long as the predicate applied to the first element of the stream and next input element holds /, the second group takes the rest of the input.streamly span p f1 f2 composes folds f1 and f2 such that f1. consumes the input as long as the predicate p is . f2! consumes the rest of the input. 
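The chunksOf combinator above folds the stream in fixed-size groups; here is a complete program version of the documentation example, assuming Streamly.Prelude (S) and Streamly.Data.Fold (FL).

    import qualified Streamly.Prelude as S
    import qualified Streamly.Data.Fold as FL

    main :: IO ()
    main = do
        -- sum every group of 2 consecutive elements
        S.toList (S.chunksOf 2 FL.sum (S.enumerateFromTo 1 (10 :: Int))) >>= print
        -- expected: [3,7,11,15,19]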
Flet span_ p xs = S.fold (S.span p FL.toList FL.toList) $ S.fromList xsspan_ (< 1) [1,2,3]> ([],[1,2,3])span_ (< 2) [1,2,3] > ([1],[2,3])span_ (< 4) [1,2,3]> ([1,2,3],[])streamly break p = span (not . p)'Break as soon as the predicate becomes .  break p f1 f2 composes folds f1 and f2 such that f11 stops consuming input as soon as the predicate p becomes $. The rest of the input is consumed f2.This is the binary version of splitBy. Hlet break_ p xs = S.fold (S.break p FL.toList FL.toList) $ S.fromList xsbreak_ (< 1) [3,2,1]> ([3,2,1],[])break_ (< 2) [3,2,1] > ([3,2],[1])break_ (< 4) [3,2,1]> ([],[3,2,1])streamlyLike w but applies the predicate in a rolling fashion i.e. predicate is applied to the previous and the next input elements.streamly'groupsBy cmp f $ S.fromList [a,b,c,...] assigns the element a to the first group, if  a `cmp` b is  then b* is also assigned to the same group. If  a `cmp` c is  then c is also assigned to the same group and so on. When the comparison fails a new group is started. Each group is folded using the fold f= and the result of the fold is emitted in the output stream.>S.toList $ S.groupsBy (>) FL.toList $ S.fromList [1,3,7,0,2,5]> [[1,3,7],[0,2,5]]streamlyUnlike groupsBy^ this function performs a rolling comparison of two successive elements in the input stream. /groupsByRolling cmp f $ S.fromList [a,b,c,...] assigns the element a to the first group, if  a `cmp` b is  then b) is also assigned to the same group. If  b `cmp` c is  then c is also assigned to the same group and so on. When the comparison fails a new group is started. Each group is folded using the fold f.VS.toList $ S.groupsByRolling (\a b -> a + 1 == b) FL.toList $ S.fromList [1,2,3,7,8,9]> [[1,2,3],[7,8,9]]streamly 4groups = groupsBy (==) groups = groupsByRolling (==)HGroups contiguous spans of equal elements together in individual groups.4S.toList $ S.groups FL.toList $ S.fromList [1,1,2,2]> [[1,1],[2,2]]streamly%Split on an infixed separator element, dropping the separator. Splits the stream on separator elements determined by the supplied predicate, separator is considered as infixed between two segments, if one side of the separator is missing then it is parsed as an empty stream. The supplied Q) is applied on the split segments. With +* representing non-separator elements and , as separator,  splits as follows: ="--.--" => "--" "--" "--." => "--" "" ".--" => "" "--" splitOn (== x) is an inverse of intercalate (S.yield x)4Let's use the following definition for illustration: BsplitOn' p xs = S.toList $ S.splitOn p (FL.toList) (S.fromList xs)splitOn' (== '.') ""[""]splitOn' (== '.') "."["",""]splitOn' (== '.') ".a" > ["","a"]splitOn' (== '.') "a." > ["a",""]splitOn' (== '.') "a.b" > ["a","b"]splitOn' (== '.') "a..b"> ["a","","b"]streamlyLike  but the separator is considered as suffixed to the segments in the stream. A missing suffix at the end is allowed. A separator at the beginning is parsed as empty segment. With + representing elements and , as separator,  splits as follows: C "--.--." => "--" "--" "--.--" => "--" "--" ".--." => "" "--"  NsplitOnSuffix' p xs = S.toList $ S.splitSuffixBy p (FL.toList) (S.fromList xs)splitOnSuffix' (== '.') ""[]splitOnSuffix' (== '.') "."[""]splitOnSuffix' (== '.') "a"["a"]splitOnSuffix' (== '.') ".a" > ["","a"]splitOnSuffix' (== '.') "a."> ["a"]splitOnSuffix' (== '.') "a.b" > ["a","b"]splitOnSuffix' (== '.') "a.b." 
> ["a","b"] splitOnSuffix' (== '.') "a..b.."> ["a","","b",""] lines = splitOnSuffix (== '\n')streamlyLike I after stripping leading, trailing, and repeated separators. Therefore, ".a..b." with ,& as the separator would be parsed as  ["a","b"]J. In other words, its like parsing words from whitespace separated text. BwordsBy' p xs = S.toList $ S.wordsBy p (FL.toList) (S.fromList xs)wordsBy' (== ',') ""> []wordsBy' (== ',') ","> []wordsBy' (== ',') ",a,,b," > ["a","b"] words = wordsBy isSpacestreamlyLike 8 but keeps the suffix attached to the resulting splits. RsplitWithSuffix' p xs = S.toList $ S.splitWithSuffix p (FL.toList) (S.fromList xs)splitWithSuffix' (== '.') ""[]splitWithSuffix' (== '.') "."["."]splitWithSuffix' (== '.') "a"["a"]splitWithSuffix' (== '.') ".a" > [".","a"]splitWithSuffix' (== '.') "a."> ["a."]splitWithSuffix' (== '.') "a.b" > ["a.","b"] splitWithSuffix' (== '.') "a.b." > ["a.","b."]"splitWithSuffix' (== '.') "a..b.."> ["a.",".","b.","."]streamlyLike J but the separator is a sequence of elements instead of a single element.FFor illustration, let's define a function that operates on pure lists: ZsplitOnSeq' pat xs = S.toList $ S.splitOnSeq (A.fromList pat) (FL.toList) (S.fromList xs) splitOnSeq' "" "hello"> ["h","e","l","l","o"]splitOnSeq' "hello" ""> [""]splitOnSeq' "hello" "hello" > ["",""]splitOnSeq' "x" "hello" > ["hello"]splitOnSeq' "h" "hello" > ["","ello"]splitOnSeq' "o" "hello" > ["hell",""]splitOnSeq' "e" "hello" > ["h","llo"]splitOnSeq' "l" "hello"> ["he","","o"]splitOnSeq' "ll" "hello" > ["he","o"] is an inverse of !. The following law always holds: intercalate . splitOn == idvThe following law holds when the separator is non-empty and contains none of the elements present in the input lists: splitOn . intercalate == idstreamlyLike  splitSuffixBy[ but the separator is a sequence of elements, instead of a predicate for a single element. _splitSuffixOn_ pat xs = S.toList $ S.splitSuffixOn (A.fromList pat) (FL.toList) (S.fromList xs)splitSuffixOn_ "." ""[""]splitSuffixOn_ "." "."[""]splitSuffixOn_ "." "a"["a"]splitSuffixOn_ "." ".a" > ["","a"]splitSuffixOn_ "." "a."> ["a"]splitSuffixOn_ "." "a.b" > ["a","b"]splitSuffixOn_ "." "a.b." > ["a","b"]splitSuffixOn_ "." "a..b.."> ["a","","b",""] lines = splitSuffixOn "\n"streamlyLike 5 but splits the separator as well, as an infix token. UsplitOn'_ pat xs = S.toList $ S.splitOn' (A.fromList pat) (FL.toList) (S.fromList xs)splitOn'_ "" "hello"#> ["h","","e","","l","","l","","o"]splitOn'_ "hello" ""> [""]splitOn'_ "hello" "hello"> ["","hello",""]splitOn'_ "x" "hello" > ["hello"]splitOn'_ "h" "hello"> ["","h","ello"]splitOn'_ "o" "hello"> ["hell","o",""]splitOn'_ "e" "hello"> ["h","e","llo"]splitOn'_ "l" "hello"> ["he","l","","l","o"]splitOn'_ "ll" "hello"> ["he","ll","o"]streamlyLike  splitSuffixOn+ but keeps the suffix intact in the splits. bsplitSuffixOn'_ pat xs = S.toList $ FL.splitSuffixOn' (A.fromList pat) (FL.toList) (S.fromList xs)splitSuffixOn'_ "." ""[""]splitSuffixOn'_ "." "."["."]splitSuffixOn'_ "." "a"["a"]splitSuffixOn'_ "." ".a" > [".","a"]splitSuffixOn'_ "." "a."> ["a."]splitSuffixOn'_ "." "a.b" > ["a.","b"]splitSuffixOn'_ "." "a.b." > ["a.","b."]splitSuffixOn'_ "." "a..b.."> ["a.",".","b.","."]streamlyAConsider a chunked stream of container elements e.g. a stream of Word8# chunked as a stream of arrays of Word8. 
$splitInnerBy splitter joiner stream splits the inner containers f a using the splitterg function and joins back the resulting fragments from splitting across multiple containers using the joinerm function such that the transformed output stream is consolidated as one container per segment of the split.sCAUTION! This is not a true streaming function as the container size after the split and merge may not be bounded.streamlyLike H but splits assuming the separator joins the segment in a suffix style.streamly-Tap the data flowing through a stream into a Q. For example, you may add a tap to log the contents flowing through the stream. The fold is used only for effects, its result is discarded. e Fold m a b | -----stream m a ---------------stream m a-----  A> S.drain $ S.tap (FL.drainBy print) (S.enumerateFromTo 1 2) 1 2  Compare with .streamlytapOffsetEvery offset n taps every n&th element in the stream starting at offset. offset can be between 0 and n - 1f. Offset 0 means start at the first element in the stream. If the offset is outside this range then offset - n is used as offset. g>>> S.drain $ S.tapOffsetEvery 0 2 (FL.mapM print FL.toList) $ S.enumerateFromTo 0 10 > [0,2,4,6,8,10] InternalstreamlyRedirect a copy of the stream to a supplied fold and run it concurrently in an independent thread. The fold may buffer some elements. The buffer size is determined by the prevailing  maxBuffer setting. h Stream m a -> m b | -----stream m a ---------------stream m a-----  C> S.drain $ S.tapAsync (S.mapM_ print) (S.enumerateFromTo 1 2) 1 2 fExceptions from the concurrently running fold are propagated to the current computation. Note that, because of buffering in the fold, exceptions may be delayed and may not correspond to the current element being processed in the parent stream, but we guarantee that before the parent stream stops the tap finishes and all exceptions from it are drained. Compare with .Internalstreamly*pollCounts predicate transform fold stream4 counts those elements in the stream that pass the  predicateR. The resulting count stream is sent to another thread which transforms it using  transform and then folds it using foldZ. The thread is automatically cleaned up if the stream stops or aborts due to exception.CFor example, to print the count of elements processed every second: z> S.drain $ S.pollCounts (const True) (S.rollingMap (-) . S.delayPost 1) (FL.drainBy print) $ S.enumerateFrom 0 5Note: This may not work correctly on 32-bit machines. /InternalstreamlyHCalls the supplied function with the number of elements consumed every n seconds. The given function is run in a separate thread until the end of the stream. In case there is an exception in the stream the thread is killed during the next major GC.CNote: The action is not guaranteed to run if the main thread exits. > delay n = threadDelay (round $ n * 1000000) >> return n > S.drain $ S.tapRate 2 (\n -> print $ show n ++ " elements processed") (delay 1 S.|: delay 0.5 S.|: delay 0.5 S.|: S.nil) 2 elements processed 1 elements processed 5Note: This may not work correctly on 32-bit machines. /Internalstreamly]Apply a monadic function to each element flowing through the stream and discard the results. 6> S.drain $ S.trace print (S.enumerateFromTo 1 2) 1 2  Compare with .streamly2classifySessionsBy tick timeout idle pred f stream groups timestamped events in an input event stream into sessions based on a session key. Each element in the stream is an event consisting of a triple ((session key, sesssion data, timestamp).  
The session key uniquely identifies the session. All the events belonging to a session are folded using the fold f until the fold terminates or a timeout occurs. The session key and the result of the fold are emitted in the output stream when the session is purged.

When idle is False, timeout is the maximum lifetime of a session in seconds, measured from the timestamp of the first event in that session. When idle is True, the timeout is an idle timeout: it is reset after every event received in the session.

The timestamp in an event characterizes the time at which the input event was generated; it is an absolute time measured from some epoch. The notion of current time is maintained by a monotonic event-time clock based on the timestamps seen in the input stream; the latest timestamp seen so far is used as the base for the current time. When no new events arrive, a timer with the tick duration specified by tick is used to detect session timeouts in the absence of new events.

The predicate pred is invoked with the current session count; if it returns True, a session is ejected from the session cache before a new session is inserted. This can be used to alert on, or bound, the number of sessions when the count becomes too high.

Internal

streamly Like classifySessionsBy, but the session is kept alive as long as events keep arriving within the session window; the session times out and is closed only if no event is received within the specified session window size. If the ejection predicate returns True, the session that has been idle the longest is ejected before a new session is inserted.

classifyKeepAliveSessions timeout pred = classifySessionsBy 1 timeout True pred

Internal

streamly Split the stream into fixed-size time windows of the specified interval in seconds. Within each such window, fold the elements in sessions identified by their session keys. The fold result is emitted in the output stream when the fold terminates or when the time window ends.

The session timestamp in the input stream is an absolute time from some epoch, characterizing the time when the input element was generated. To detect the end of a session window, a monotonic event-time clock is maintained, synced with the timestamps, with a clock resolution of 1 second. If the ejection predicate returns True, the session with the longest lifetime is ejected before a new session is inserted.
LclassifySessionsOf interval pred = classifySessionsBy 1 interval False pred Internalstreamly=Run a side effect before the stream yields its first element.streamly5Run a side effect whenever the stream stops normally.Prefer  over this as the aftert action in this combinator is not executed if the unfold is partially evaluated lazily and then garbage collected.streamlynRun a side effect whenever the stream stops normally or is garbage collected after a partial lazy evaluation.InternalstreamlyARun a side effect whenever the stream aborts due to an exception.streamlyTRun a side effect whenever the stream stops normally or aborts due to an exception.Prefer  over this as the aftert action in this combinator is not executed if the unfold is partially evaluated lazily and then garbage collected.streamlyRun a side effect whenever the stream stops normally, aborts due to an exception or if it is garbage collected after a partial lazy evaluation.InternalstreamlyRun the first action before the stream starts and remember its output, generate a stream using the output, run the second action using the remembered value as an argument whenever the stream ends normally or due to an exception.Prefer  over this as the aftert action in this combinator is not executed if the unfold is partially evaluated lazily and then garbage collected.streamly#Run the first action before the stream starts and remember its output, generate a stream using the output, run the second action using the remembered value as an argument whenever the stream ends normally, due to an exception or if it is garbage collected after a partial lazy evaluation.InternalstreamlyWhen evaluating a stream if an exception occurs, stream evaluation aborts and the specified exception handler is run with the exception as argument.streamlyETransform the inner monad of a stream using a natural transformation. Internalstreamly.Generalize the inner monad of the stream from  to any monad. Internalstreamly;Lift the inner monad of a stream using a monad transformer. Internalstreamly(Evaluate the inner monad of a stream as .. Internalstreamly(Evaluate the inner monad of a stream as /.This is supported only for ?/ as concurrent state updation may not be safe. InternalstreamlyBRun a stateful (StateT) stream transformation using a given state.This is supported only for ?/ as concurrent state updation may not be safe. Internalstreamly(Evaluate the inner monad of a stream as /> and emit the resulting state and value pair after each step.This is supported only for ?/ as concurrent state updation may not be safe. Internalstreamlyfeedback function to feed b back into inputstreamlytimer tick in secondsstreamlysession timeout in secondsstreamly+reset the timeout when an event is receivedstreamly2predicate to eject sessions based on session countstreamly$Fold to be applied to session eventsstreamlysession key, data, timestampstreamlysession key, fold resultstreamlysession inactive timeoutstreamly,predicate to eject sessions on session countstreamly*Fold to be applied to session payload datastreamlysession key, data, timestampstreamlytime window sizestreamly,predicate to eject sessions on session countstreamly$Fold to be applied to session eventsstreamlysession key, data, timestamp! !BFG      !"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~   A@B$%)('&T*-,+/1 !"#67;<9:CB^]5FEGHIJLKMN>=?klQRUXVWY\Z[jS_`PO.0icdeghfabuvywxFGzopqr! 
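The tap, before, finally and bracket combinators above run effects alongside or around a stream without changing its elements. A minimal sketch, assuming the Streamly.Prelude and Streamly.Data.Fold APIs shown in the examples; the resource string and messages are placeholders.

    import qualified Streamly.Prelude as S
    import qualified Streamly.Data.Fold as FL

    main :: IO ()
    main = do
        -- tap: run a fold on a copy of the stream purely for its effects
        S.drain $ S.tap (FL.drainBy print) $ S.enumerateFromTo 1 (3 :: Int)
        -- bracket: acquire a resource, build a stream from it, release it
        -- on normal termination or exception
        S.drain $ S.bracket (putStrLn "acquire" >> return "resource")
                            (\_ -> putStrLn "release")
                            (\r -> S.before (putStrLn ("using " ++ r))
                                 $ S.fromList [1 :: Int, 2, 3])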
{|}~tsmn234!D  8=0?1@0B1(!(c) 2019 Composewell TechnologiesBSD3streamly@composewell.com experimentalGHCNone"#FX streamly;Convert a stream of arrays into a stream of their elements.)Same as the following but more efficient: concat = S.concatMap A.readstreamlysConvert a stream of arrays into a stream of their elements reversing the contents of each array before flattening.streamlyMFlatten a stream of arrays after inserting the given element between arrays.InternalstreamlyIFlatten a stream of arrays appending the given element after each array.streamlySplit a stream of arrays on a given separator byte, dropping the separator and coalescing all the arrays between two separators into a single array.streamlyhCoalesce adjacent arrays in incoming stream to form bigger arrays of a maximum specified size in bytes.streamlyarraysOf n stream9 groups the elements in the input stream into arrays of n elements each.)Same as the following but more efficient: &arraysOf n = S.chunksOf n (A.writeN n)streamlycGiven a stream of arrays, splice them all together to generate a single array. The stream must be finite.  )!(c) 2018 Composewell TechnologiesBSD3streamly@composewell.com experimentalGHCNone"#F astreamlyNRead directories as Left and files as Right. Filter out "." and ".." entries.InternalstreamlyRead files only.Internalstreamly7Read directories only. Filter out "." and ".." entries.InternalstreamlyRaw read of a directory.InternalstreamlyNRead directories as Left and files as Right. Filter out "." and ".." entries.InternalstreamlyRead files only.InternalstreamlyRead directories only.Internal*(c) 2017 Harendra KumarBSD3streamly@composewell.com experimentalGHCNone ,/=>?@AMX_ 1!streamly@A round robin parallely composing IO stream of elements of type a. See  documentation for more details.streamly is similar to WSerialT% but with concurrent execution. The  operation (<>) for V merges two streams concurrently interleaving the actions from both the streams. In s1 <> s2 <> s3 ...&, the individual actions from streams s1, s2 and s3 are scheduled for execution in a round-robin fashion. Multiple scheduled actions may be executed concurrently, the results from concurrent executions are consumed in the order in which they become available.The W in the name stands for wideR or breadth wise scheduling in contrast to the depth wise scheduling behavior of . import Streamly import qualified Streamly.Prelude4 as S import Control.Concurrent main = (S.toList . F . maxThreads 1 $ (S.fromList [1,2]) <> (S.fromList [3,4])) >>= print   [1,3,2,4] For this example, we are using  maxThreads 1 so that concurrent thread scheduling does not affect the results and make them unpredictable. Let's now take a more general example: main = (S.toList . b . maxThreads 1 $ (S.fromList [1,2,3]) <> (S.fromList [4,5,6]) <> (S.fromList [7,8,9])) >>= print  [1,4,2,7,5,3,8,6,9] 7This is how the execution of the above stream proceeds: (The scheduler queue is initialized with C[S.fromList [1,2,3], (S.fromList [4,5,6]) <> (S.fromList [7,8,9])]G assuming the head of the queue is represented by the rightmost item.S.fromList [1,2,3]# is executed, yielding the element 1 and putting [2,3]I at the back of the scheduler queue. 
The scheduler queue now looks like @[(S.fromList [4,5,6]) <> (S.fromList [7,8,9]), S.fromList [2,3]].Now ,(S.fromList [4,5,6]) <> (S.fromList [7,8,9]) is picked up for execution, S.fromList [7,8,9]( is added at the back of the queue and S.fromList [4,5,6]# is executed, yielding the element 4 and adding S.fromList [5,6]5 at the back of the queue. The queue now looks like 8[S.fromList [2,3], S.fromList [7,8,9], S.fromList [5,6]].cNote that the scheduler queue expands by one more stream component in every pass because one more <>F is broken down into two components. At this point there are no more <> operations to be broken down further and the queue has reached its maximum size. Now these streams are scheduled in round-robin fashion yielding [2,7,5,3,8,8,9].@As we see above, in a right associated expression composed with <> , only one <>W operation is broken down into two components in one execution, therefore, if we have n streams composed using <> it will take n@ scheduler passes to expand the whole expression. By the time n-thV component is added to the scheduler queue, the first component would have received n scheduler passes.Since all streams get interleaved, this operation is not suitable for folding an infinite lazy container of infinite size streams. However, if the streams are small, the streams on the left may get finished before more streams are added to the scheduler queue from the right side of the expression, so it may be possible to fold an infinite lazy container of streams. For example, if the streams are of size n then at most n4 streams would be in the scheduler queue at a time. Note that WSerialT and ? differ in their scheduling behavior, therefore the output of D even with a single thread of execution is not the same as that of WSerialT See notes in WSerialT, for details about its scheduling behavior.Any exceptions generated by a constituent stream are propagated to the output stream. The output and exceptions from a single stream are guaranteed to arrive in the same order in the resulting stream as they were generated in the input stream. However, the relative ordering of elements from different streams in the resulting stream can vary depending on scheduling and generation delays.Similarly, the  instance of  runs all@ iterations fairly concurrently using a round robin scheduling. main = drain .  $ do n <- return 3 <> return 2 <> return 1 S.yieldM $ do threadDelay (n * 1000000) myThreadId >>= \tid -> putStrLn (show tid ++ ": Delay " ++ show n) ?ThreadId 40: Delay 1 ThreadId 39: Delay 2 ThreadId 38: Delay 3 streamlyOA demand driven left biased parallely composing IO stream of elements of type a. See  documentation for more details.streamlyThe  operation (<>) for N merges two streams concurrently with priority given to the first stream. In s1 <> s2 <> s3 ...I the streams s1, s2 and s3 are scheduled for execution in that order. Multiple scheduled streams may be executed concurrently and the elements generated by them are served to the consumer as and when they become available. This behavior is similar to the scheduling and execution behavior of actions in a single async stream.Since only a finite number of streams are executed concurrently, this operation can be used to fold an infinite lazy container of streams. import Streamly import qualified Streamly.Prelude4 as S import Control.Concurrent main = (S.toList . 7 $ (S.fromList [1,2]) <> (S.fromList [3,4])) >>= print   [1,2,3,4] Any exceptions generated by a constituent stream are propagated to the output stream. 
The output and exceptions from a single stream are guaranteed to arrive in the same order in the resulting stream as they were generated in the input stream. However, the relative ordering of elements from different streams in the resulting stream can vary depending on scheduling and generation delays.!Similarly, the monad instance of  may run each iteration concurrently based on demand. More concurrent iterations are started only if the previous iterations are not able to produce enough output for the consumer. main = drain .  $ do n <- return 3 <> return 2 <> return 1 S.yieldM $ do threadDelay (n * 1000000) myThreadId >>= \tid -> putStrLn (show tid ++ ": Delay " ++ show n) ?ThreadId 40: Delay 1 ThreadId 39: Delay 2 ThreadId 38: Delay 3 streamlyVGenerate a stream asynchronously to keep it buffered, lazily consume from the buffer.InternalstreamlyqMake the stream producer and consumer run concurrently by introducing a buffer between them. The producer thread evaluates the input stream until the buffer fills, it terminates if the buffer is full and a worker thread is kicked off again to evaluate the remaining stream when there is space in the buffer. The consumer consumes the stream lazily from the buffer.Internal0streamly;Create a new SVar and enqueue one stream computation on it.1streamly/Join two computations on the currently running % queue for concurrent execution. When we are using parallel composition, an SVar is passed around as a state variable. We try to schedule a new parallel computation on the SVar passed to us. The first time, when no SVar exists, a new SVar is created. Subsequently, 2n may get called when a computation already scheduled on the SVar is further evaluated. For example, when (a parallel b) is evaluated it calls a 2 to put a and b! on the current scheduler queue.The p\ required by the current composition context is passed as one of the parameters. If the scheduling and composition style of the new computation being scheduled is different than the style of the current SVar, then we create a new SVar and schedule it on that. The newly created SVar joins as one of the computations on the current SVar queue.+Cases when we need to switch to a new SVar:(x parallel y) parallel (t parallel1 u) -- all of them get scheduled on the same SVar(x parallel y) parallel (t  u) -- t and uN get scheduled on a new child SVar because of the scheduling policy change.if we  a stream of type  to a stream of type Parallel1, we create a new SVar at the transitioning bind.When the stream is switching from disjunctive composition to conjunctive composition and vice-versa we create a new SVar to isolate the scheduling of the two.streamlyPolymorphic version of the  operation  of g. Merges two streams possibly concurrently, preferring the elements from the left one when available.streamlySame as .3streamlykXXX we can implement it more efficienty by directly implementing instead of combining streams using async.streamly(Fix the type of a polymorphic stream as .4streamlylXXX we can implement it more efficienty by directly implementing instead of combining streams using wAsync.streamlyPolymorphic version of the  operation  of F. Merges two streams concurrently choosing elements from both fairly.streamly(Fix the type of a polymorphic stream as .  +(c) 2017 Harendra KumarBSD3streamly@composewell.com experimentalGHCNone ,/=>?@AM K]streamly'A serial IO stream of elements of type a" with concurrent lookahead. See  documentation for more details.streamlyThe  operation for  appends two streams. 
The combined stream behaves like a single stream with the actions from the second stream appended to the first stream. The combined stream is evaluated in the speculative style. This operation can be used to fold an infinite lazy container of streams. import Streamly import qualified Streamly.Prelude4 as S import Control.Concurrent main = do xs <- S.toList . e $ (p 1 |: p 2 |: nil) <> (p 3 |: p 4 |: nil) print xs where p n = threadDelay 1000000 >> return n   [1,2,3,4] VAny exceptions generated by a constituent stream are propagated to the output stream.The monad instance of  may run each monadic continuation (bind) concurrently in a speculative manner, performing side effects in a partially ordered manner but producing the outputs in an ordered manner like SerialT. main = S.drain .  $ do n <- return 3 <> return 2 <> return 1 S.yieldM $ do threadDelay (n * 1000000) myThreadId >>= \tid -> putStrLn (show tid ++ ": Delay " ++ show n) ?ThreadId 40: Delay 1 ThreadId 39: Delay 2 ThreadId 38: Delay 3 streamlyPolymorphic version of the  operation  of A. Merges two streams sequentially but with concurrent lookahead.5streamlykXXX we can implement it more efficienty by directly implementing instead of combining streams using ahead.streamly(Fix the type of a polymorphic stream as .,!(c) 2019 Composewell Technologies BSD-3-Clausestreamly@composewell.com experimentalGHCNoneF YstreamlywriteN n folds a maximum of n' elements from the input stream to an T.Since we are folding to a T n9 should be <= 128, for larger number of elements use an Array from either Streamly.Data.Array or Streamly.Memory.Array.streamly Create a T from the first n3 elements of a list. The array may hold less than n( elements if the length of the list <= n.$It is recommended to use a value of n* <= 128. For larger sized arrays, use an Array from Streamly.Data.Array or Streamly.Memory.Arraystreamly Create a T from the first n7 elements of a stream. The array is allocated to size n", if the stream terminates before n- elements then the array may hold less than n elements.&For optimal performance use this with n <= 128.TU  TU  ?!(c) 2019 Composewell Technologies BSD-3-Clausestreamly@composewell.com experimentalGHCNone [ T  !(c) 2019 Composewell Technologies BSD-3-Clausestreamly@composewell.com experimentalGHCNoneF \          @!(c) 2019 Composewell Technologies BSD-3-Clausestreamly@composewell.com experimentalGHCNone _t !(c) 2019 Composewell Technologies BSD-3-Clausestreamly@composewell.com experimentalGHCNoneF a8 !"#$%&'()*+,-./01#$!%& "-.1)*'(+,0/A!(c) 2019 Composewell Technologies BSD-3-Clausestreamly@composewell.com experimentalGHCNone ck !%&)*+,-./01B(c) 2017 Harendra KumarBSD3streamly@composewell.com experimentalGHCNone>SX e !B      !"#$%&'()*+,-./01234568DEFGHKLMNQRSTUVWYZ[]^_`acdijklmnopqrst{|   $%)('&T6*-,+/1 !"#mn234B^]5FEGHLKMNQR_`jSicdklaUVWYZ[.0opqr! st{|D  8C!(c) 2019 Composewell TechnologiesBSD3streamly@composewell.com experimentalGHCNone a6streamlyA 6 is returned by 7J and is subsequently used to perform read and write operations on a file.8streamlyFile handle for standard input9streamlyFile handle for standard output:streamlyFile handle for standard error7streamly?Open a file that is not a directory and return a file handle. 7L enforces a multiple-reader single-writer locking on files. That is, there may either be many handles on the same file which manage input, or just one handle on the file which manages output. If any open handle is managing a file for output, no new handle can be allocated for that file. 
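To contrast the scheduling of the concurrent stream types described above (WAsyncT, AsyncT, AheadT), here is a small sketch; it assumes the asyncly, wAsyncly, aheadly and maxThreads combinators from the Streamly module and (|:)/nil from Streamly.Prelude, and the expected outputs follow the scheduling behaviour documented above.

    import Control.Concurrent (threadDelay)
    import Streamly (aheadly, asyncly, wAsyncly, maxThreads)
    import Streamly.Prelude (nil, (|:))
    import qualified Streamly.Prelude as S

    main :: IO ()
    main = do
        -- depth-first (Async) vs breadth-first round-robin (WAsync);
        -- maxThreads 1 keeps the output order deterministic
        (S.toList . asyncly . maxThreads 1 $ S.fromList [1, 2 :: Int] <> S.fromList [3, 4]) >>= print
        -- expected: [1,2,3,4]
        (S.toList . wAsyncly . maxThreads 1 $ S.fromList [1, 2 :: Int] <> S.fromList [3, 4]) >>= print
        -- expected: [1,3,2,4]
        -- Ahead: actions may run concurrently but results stay in order
        (S.toList . aheadly $ (p 1 |: p 2 |: nil) <> (p 3 |: p 4 |: nil)) >>= print
        -- expected: [1,2,3,4]
      where
        p n = threadDelay 100000 >> return (n :: Int)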
If any open handle is managing a file for input, new handles can only be allocated if they do not manage output. Whether two files are the same is implementation-dependent, but they should normally be the same if they have the same absolute path name and neither has been renamed, for example.;streamlyRead a  ByteArray from a file handle. If no data is available on the handle it blocks until some data becomes available. If data is available then it immediately returns that data without blocking. It reads a maximum of up to the size requested.<streamly Write an  to a file handle.=streamlyWrite an array of IOVec to a file handle.>streamlyreadArraysOfUpto size h+ reads a stream of arrays from file handle h6. The maximum size of a single array is specified by size5. The actual size read may be less than or equal to size.?streamly readArrays h+ reads a stream of arrays from file handle h4. The maximum size of a single array is limited to defaultChunkSize. ? ignores the prevailing  TextEncoding and  NewlineMode on the 6. .readArrays = readArraysOfUpto defaultChunkSize@streamlyreadInChunksOf chunkSize handleQ reads a byte stream from a file handle, reads are performed in chunks of up to  chunkSize2. The stream ends as soon as EOF is encountered.Astreamly<Generate a stream of elements of the given type from a file 6+. The stream ends when EOF is encountered.Bstreamly%Write a stream of arrays to a handle.CstreamlyWrite a stream of arrays to a handle after coalescing them in chunks of specified size. The chunk size is only a maximum and the actual writes could be smaller than that as we do not split the arrays to fit them to the specified size.DstreamlyWrite a stream of IOVec arrays to a handle.Estreamly<Write a stream of arrays to a handle after grouping them in IOVecR arrays of up to a maximum total size. Writes are performed using gather IO via writev4 system call. The maximum number of entries in each IOVec group limited to 512.FstreamlyLike G but provides control over the write buffer. Output will be written to the IO device as soon as we collect the specified number of input elements.GstreamlyRWrite a byte stream to a file handle. Combines the bytes in chunks of size up to DE? before writing. Note that the write behavior depends on the H- and the current seek position of the handle. 689:7I?@ABCFG(c) 2017 Harendra KumarBSD3streamly@composewell.com experimentalGHCNone>@A 3streamlySame as 54streamlySame as  runStream.5streamly%Same as "Streamly.Prelude.runStream".6streamlySame as runStream . wSerially.7streamlySame as runStream . parallely.8streamlySame as runStream . asyncly.9streamlySame as runStream . zipping.:streamlySame as runStream . zippingAsync.;streamlyEMake a stream asynchronous, triggers the computation and returns a stream in the underlying monad representing the output generated by the original computation. The returned action is exhaustible and must be drained once. If not drained fully we may have a thread blocked forever and once exhausted it will always return empty.K"efghij"#$%:;<=>?@CDEH=?@B3456789:;K"?<=?@B;Eefghij@C"$>;5346879:=:D#%H-!(c) 2018 Composewell TechnologiesBSD3streamly@composewell.com experimentalGHCNone"#> <streamly$Specify the socket protocol details.BstreamlyB socket act runs the monadic computation actK passing the socket handle to it. The handle will be closed on exit from B, whether by normal termination or by raising an exception. 
If closing the handle raises an exception, then this exception will be raised by B% rather than any exception raised by act.CstreamlyLike BD but runs a streaming computation instead of a monadic computation.DstreamlyUnfold a three tuple (listenQLen, spec, addr)U into a stream of connected protocol sockets corresponding to incoming connections.  listenQLen? is the maximum number of pending connections in the backlog. spec7 is the socket protocol and options specification and addrL is the protocol address where the server listens for incoming connections.Estreamly"Start a TCP stream server that listens for connections on the supplied server address specification (address family, local interface IP address and port). The server generates a stream of connected sockets. The first argument is the maximum number of pending connections in the backlog.InternalJstreamlyRead a  ByteArray from a file handle. If no data is available on the handle it blocks until some data becomes available. If data is available then it immediately returns that data without blocking. It reads a maximum of up to the size requested.Fstreamly Write an Array to a file handle.GstreamlytoChunksWithBufferOf size h+ reads a stream of arrays from file handle h4. The maximum size of a single array is limited to size. fromHandleArraysUpto ignores the prevailing  TextEncoding and  NewlineMode on the Handle.Hstreamly toChunks h- reads a stream of arrays from socket handle h4. The maximum size of a single array is limited to defaultChunkSize.IstreamlyUnfold the tuple (bufsize, socket) into a stream of KK arrays. Read requests to the socket are performed using a buffer of size bufsizeQ. The size of an array in the resulting stream is always less than or equal to bufsize.Jstreamly"Unfolds a socket into a stream of KG arrays. Requests to the socket are performed using a buffer of size ES. The size of arrays in the resulting stream are therefore less than or equal to E.KstreamlyhGenerate a stream of elements of the given type from a socket. The stream ends when EOF is encountered.LstreamlyUnfolds the tuple (bufsize, socket)Q into a byte stream, read requests to the socket are performed using buffers of bufsize.Mstreamly Unfolds a LL into a byte stream. IO requests to the socket are performed in sizes of E.Nstreamly%Write a stream of arrays to a handle.OstreamlysWrite a stream of arrays to a socket. Each array in the stream is written to the socket as a separate IO request.Pstreamly&writeChunksWithBufferOf bufsize socket writes a stream of arrays to socket3 after coalescing the adjacent arrays in chunks of bufsize. We never split an array, if a single array is bigger than the specified size it emitted as it is. Multiple arrays are coalesed as long as the total size remains below the specified size.QstreamlylWrite a stream of strings to a socket in Latin1 encoding. Output is flushed to the socket for each string.InternalRstreamlyLike U but provides control over the write buffer. Output will be written to the IO device as soon as we collect the specified number of input elements.SstreamlynWrite a byte stream to a socket. Accumulates the input in chunks of specified number of bytes before writing.TstreamlyRWrite a byte stream to a file handle. Combines the bytes in chunks of size up to ? before writing. Note that the write behavior depends on the IOMode- and the current seek position of the handle.UstreamlyKWrite a byte stream to a socket. Accumulates the input in chunks of up to  bytes before writing. 
write = S  <=>?@ABCDEFGHIJKLMNOPQRSTU<=>?@ABCDEMLIJGHKUSNRTFOPQF!(c) 2018 Composewell TechnologiesBSD3streamly@composewell.com experimentalGHCNone O<=>?@ADIJLMOSU<=>?@ADMLJIUSO.!(c) 2019 Composewell TechnologiesBSD3streamly@composewell.com experimentalGHCNone> #WstreamlyUnfold a tuple (ipAddr, port)* into a stream of connected TCP sockets. ipAddr is the local IP address and port6 is the local port on which connections are accepted.YstreamlyLike W but binds on the IPv4 address 0.0.0.0o i.e. on all IPv4 addresses/interfaces of the machine and listens for TCP connections on the specified port. 4acceptOnPort = UF.supplyFirst acceptOnAddr (0,0,0,0)ZstreamlyLike W) but binds on the localhost IPv4 address  127.0.0.1o. The server can only be accessed from the local host, it cannot be accessed from other hosts on the network. ;acceptOnPortLocal = UF.supplyFirst acceptOnAddr (127,0,0,1)\streamlyLike Eo but binds on the specified IPv4 address of the machine and listens for TCP connections on the specified port.Internal]streamlyLike E but binds on the IPv4 address 0.0.0.0o i.e. on all IPv4 addresses/interfaces of the machine and listens for TCP connections on the specified port. /connectionsOnPort = connectionsOnAddr (0,0,0,0)Internal^streamlyLike E) but binds on the localhost IPv4 address  127.0.0.1o. The server can only be accessed from the local host, it cannot be accessed from other hosts on the network. 6connectionsOnLocalHost = connectionsOnAddr (127,0,0,1)Internal_streamly4Connect to the specified IP address and port number.`streamlyjConnect to a remote host using IP address and port and run the supplied action on the resulting socket. ` makes sure that the socket is closed on normal termination or in case of an exception. If closing the socket raises an exception, then this exception will be raised by `.Internalastreamly Transform an  from a L. to an unfold from a remote IP address and port. The resulting unfold opens a socket, uses it using the supplied unfold and then makes sure that the socket is closed on normal termination or in case of an exception. If closing the socket raises an exception, then this exception will be raised by a.Internalbstreamlyb addr port act| opens a connection to the specified IPv4 host address and port and passes the resulting socket handle to the computation act*. The handle will be closed on exit from b, whether by normal termination or by raising an exception. If closing the handle raises an exception, then this exception will be raised by b% rather than any exception raised by act.InternalcstreamlyBRead a stream from the supplied IPv4 host address and port number.dstreamlyBRead a stream from the supplied IPv4 host address and port number.estreamlyLWrite a stream of arrays to the supplied IPv4 host address and port number.fstreamlyLWrite a stream of arrays to the supplied IPv4 host address and port number.gstreamlyLike j but provides control over the write buffer. Output will be written to the IO device as soon as we collect the specified number of input elements.hstreamlyLike j but provides control over the write buffer. Output will be written to the IO device as soon as we collect the specified number of input elements.istreamlyAWrite a stream to the supplied IPv4 host address and port number.jstreamlyAWrite a stream to the supplied IPv4 host address and port number.kstreamlySend an input stream to a remote host and produce the output stream from the host. The server host just acts as a transformation function on the input stream. 
Both sending and receiving happen asynchronously.InternalVWXYZ[\]^_`abcdefghijkWVYXZ\[]^_`acbdjhigfekG!(c) 2019 Composewell TechnologiesBSD3streamly@composewell.com experimentalGHCNone %WYZ_WYZ_/!(c) 2018 Composewell TechnologiesBSD3streamly@composewell.com experimentalGHCNone"#>F kMstreamlyRead a  ByteArray from a file handle. If no data is available on the handle it blocks until some data becomes available. If data is available then it immediately returns that data without blocking. It reads a maximum of up to the size requested.NstreamlytoChunksWithBufferOf size h+ reads a stream of arrays from file handle h6. The maximum size of a single array is specified by size5. The actual size read may be less than or equal to size.lstreamly toChunksWithBufferOf size handle0 reads a stream of arrays from the file handle handle4. The maximum size of a single array is limited to size5. The actual size read may be less than or equal to size.mstreamlyUnfold the tuple (bufsize, handle) into a stream of KO arrays. Read requests to the IO device are performed using a buffer of size bufsizeQ. The size of an array in the resulting stream is always less than or equal to bufsize.nstreamlytoChunks handlen reads a stream of arrays from the specified file handle. The maximum size of a single array is limited to defaultChunkSize5. The actual size read may be less than or equal to defaultChunkSize. 0toChunks = toChunksWithBufferOf defaultChunkSizeostreamly`Read a stream of chunks from standard input. The maximum size of a single chunk is limited to defaultChunkSize). The actual size read may be less than defaultChunkSize. getChunks = toChunks stdinInternalpstreamly+Read a stream of bytes from standard input. getBytes = toBytes stdinInternalqstreamly"Unfolds a handle into a stream of KJ arrays. Requests to the IO device are performed using a buffer of size S. The size of arrays in the resulting stream are therefore less than or equal to .rstreamlyUnfolds the tuple (bufsize, handle)T into a byte stream, read requests to the IO device are performed using buffers of bufsize.sstreamly"toBytesWithBufferOf bufsize handleQ reads a byte stream from a file handle, reads are performed in chunks of up to bufsize.Internaltstreamly`Unfolds a file handle into a byte stream. IO requests to the device are performed in sizes of .ustreamly#Generate a byte stream from a file O.Internalvstreamly Write an  to a file handle.wstreamly%Write a stream of arrays to a handle.xstreamly,Write a stream of chunks to standard output.Internalystreamly{Write a stream of strings to standard output using the supplied encoding. Output is flushed to the device for each string.InternalzstreamlyWrite a stream of strings as separate lines to standard output using the supplied encoding. Output is line buffered i.e. the output is written to the device as soon as a newline is encountered.Internal{streamly-Write a stream of bytes from standard output. putBytes = fromBytes stdoutInternal|streamly,fromChunksWithBufferOf bufsize handle stream writes a stream of arrays to handle3 after coalescing the adjacent arrays in chunks of bufsize. The chunk size is only a maximum and the actual writes could be smaller as we do not split the arrays to fit exactly to the specified size.}streamly+fromBytesWithBufferOf bufsize handle stream writes stream to handle in chunks of bufsizeX. A write is performed to the IO device as soon as we collect the required input size.~streamlyPWrite a byte stream to a file handle. 
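The byte-stream read unfolds documented above pair naturally with ordinary GHC handles via the public Streamly.FileSystem.Handle module; for example, counting the bytes available on standard input (a sketch, assuming the read unfold exported by that module):

    import System.IO (stdin)
    import qualified Streamly.Prelude as S
    import qualified Streamly.FileSystem.Handle as FH

    -- Count the bytes read from standard input.
    main :: IO ()
    main = S.length (S.unfold FH.read stdin) >>= print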
Accumulates the input in chunks of up to  before writing.'NOTE: This may perform better than the ; fold, you can try this if you need some extra perf boost.streamlyrWrite a stream of arrays to a handle. Each array in the stream is written to the device as a separate IO request.streamly&writeChunksWithBufferOf bufsize handle writes a stream of arrays to handle3 after coalescing the adjacent arrays in chunks of bufsize. We never split an array, if a single array is bigger than the specified size it emitted as it is. Multiple arrays are coalesed as long as the total size remains below the specified size.streamly writeWithBufferOf reqSize handle writes the input stream to handleS. Bytes in the input stream are collected into a buffer until we have a chunk of reqSize# and then written to the IO device.streamlyPWrite a byte stream to a file handle. Accumulates the input in chunks of up to " before writing to the IO device.lmnopqrstuvwxyz{|}~truspqmlno~}v|wxy{z0(c) 2019 Harendra KumarBSD3streamly@composewell.com experimentalGHCNone"#>F streamly name mode act opens a file using P5 and passes the resulting handle to the computation act+. The handle will be closed on exit from , whether by normal termination or by raising an exception. If closing the handle raises an exception, then this exception will be raised by & rather than any exception raised by act.InternalQstreamly Transform an  from a O to an unfold from a R+. The resulting unfold opens a handle in S, uses it using the supplied unfold and then makes sure that the handle is closed on normal termination or in case of an exception. If closing the handle raises an exception, then this exception will be raised by Q.Internalstreamly;Write an array to a file. Overwrites the file if it exists.streamlyappend an array to a file.streamlytoChunksWithBufferOf size file$ reads a stream of arrays from file file6. The maximum size of a single array is specified by size5. The actual size read may be less than or equal to size.streamly toChunks file$ reads a stream of arrays from file file4. The maximum size of a single array is limited to defaultChunkSize). The actual size read may be less than defaultChunkSize. 0toChunks = toChunksWithBufferOf defaultChunkSizestreamly^Unfolds a file path into a byte stream. IO requests to the device are performed in sizes of .streamlyGenerate a stream of bytes from a file specified by path. The stream ends when EOF is encountered. File is locked using multiple reader and single writer locking mode.InternalstreamlyEWrite a stream of arrays to a file. Overwrites the file if it exists.streamlyLike  but provides control over the write buffer. Output will be written to the IO device as soon as we collect the specified number of input elements.streamlyKWrite a byte stream to a file. Combines the bytes in chunks of size up to DE before writing. If the file exists it is truncated to zero size before writing. If the file does not exist it is created. File is locked using single writer locking mode.InternalstreamlyrWrite a stream of chunks to a handle. Each chunk in the stream is written to the device as a separate IO request.Internalstreamly"writeWithBufferOf chunkSize handle writes the input stream to handleX. Bytes in the input stream are collected into a buffer until we have a chunk of size  chunkSize# and then written to the IO device.InternalstreamlyIWrite a byte stream to a file. 
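Putting the read unfold and the write fold together gives a streaming file copy; a minimal sketch assuming Streamly.FileSystem.Handle as above, with hypothetical file names in.txt and out.txt (in.txt must exist).

    import System.IO (IOMode(ReadMode, WriteMode), openFile, hClose)
    import qualified Streamly.Prelude as S
    import qualified Streamly.FileSystem.Handle as FH

    -- Copy a file as a byte stream: unfold the source handle,
    -- fold the bytes into the destination handle.
    main :: IO ()
    main = do
        src <- openFile "in.txt"  ReadMode
        dst <- openFile "out.txt" WriteMode
        S.fold (FH.write dst) (S.unfold FH.read src)
        hClose src
        hClose dst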
Accumulates the input in chunks of up to " before writing to the IO device.Internalstreamly$Append a stream of arrays to a file.streamlyLike  but provides control over the write buffer. Output will be written to the IO device as soon as we collect the specified number of input elements.streamlyLAppend a byte stream to a file. Combines the bytes in chunks of size up to DE before writing. If the file exists then the new data is appended to the file. If the file does not exist it is created. File is locked using single writer locking mode.H!(c) 2018 Composewell TechnologiesBSD3streamly@composewell.com experimentalGHCNone mqrttrqm1O(c) 2018 Composewell Technologies (c) Bjoern Hoehrmann 2008-2009BSD3streamly@composewell.com experimentalGHCNone"#>SXg ¶TstreamlyiReturn element at the specified index without checking the bounds. and without touching the foreign ptr.streamly`Decode a stream of bytes to Unicode characters by mapping each byte to a corresponding Unicode  in 0-255 range. Since: 0.7.0streamlyEncode a stream of Unicode characters to bytes by mapping each character to a byte in 0-255 range. Throws an error if the input stream contains characters beyond 255. Since: 0.7.0streamlyLike  but silently truncates and maps input characters beyond 255 to (incorrect) chars in 0-255 range. No error or exception is thrown when such truncation occurs. Since: 0.7.0streamlyDecode a UTF-8 encoded bytestream to a stream of Unicode characters. The incoming stream is truncated if an invalid codepoint is encountered. Since: 0.7.0streamlyInternalstreamlyDecode a UTF-8 encoded bytestream to a stream of Unicode characters. Any invalid codepoint encountered is replaced with the unicode replacement character. Since: 0.7.0streamlyInternalstreamlyInternalstreamlyInternalstreamlyDEncode a stream of Unicode characters to a UTF-8 encoded bytestream. Since: 0.7.0streamly(Remove leading whitespace from a string.  stripStart = S.dropWhile isSpaceInternalstreamly0Fold each line of the stream using the supplied Q and stream the result.CS.toList $ lines FL.toList (S.fromList "lines\nthis\nstring\n\n\n")#["lines", "this", "string", "", ""] !lines = S.splitOnSuffix (== '\n')InternalUstreamly,Code copied from base/Data.Char to INLINE itstreamly0Fold each word of the stream using the supplied Q and stream the result.>S.toList $ words FL.toList (S.fromList "fold these words")["fold", "these", "words"] words = S.wordsBy isSpaceInternalstreamly8Unfold a stream to character streams using the supplied 7 and concat the results suffixing a newline character \n to each stream.InternalstreamlyIUnfold the elements of a stream to character streams using the supplied Q and concat the results with a whitespace character infixed between the streams.Internal2!(c) 2018 Composewell TechnologiesBSD3streamly@composewell.com experimentalGHCNone> streamlyqBreak a string up into a stream of strings at newline characters. The resulting strings do not contain newlines. lines = S.lines A.write9S.toList $ lines $ S.fromList "lines\nthis\nstring\n\n\n"["lines","this","string","",""]streamlyiBreak a string up into a stream of strings, which were delimited by characters representing white space. words = S.words A.writeFS.toList $ words $ S.fromList "A newline\nis considered white space?"7["A", "newline", "is", "considered", "white", "space?"]streamlyFlattens the stream of  Array Char8, after appending a terminating newline to each string. 
stripStart: Remove leading whitespace from a string. stripStart = S.dropWhile isSpace. Internal.

lines: Fold each line of the stream using the supplied Fold and stream the result.
>>> S.toList $ lines FL.toList (S.fromList "lines\nthis\nstring\n\n\n")
["lines", "this", "string", "", ""]
lines = S.splitOnSuffix (== '\n')
Internal.

isSpace: Code copied from base's Data.Char so that it can be INLINEd.

words: Fold each word of the stream using the supplied Fold and stream the result.
>>> S.toList $ words FL.toList (S.fromList "fold these words")
["fold", "these", "words"]
words = S.wordsBy isSpace
Internal.

unlines: Unfold a stream to character streams using the supplied Unfold and concatenate the results, suffixing a newline character \n to each stream. Internal.

unwords: Unfold the elements of a stream to character streams using the supplied Unfold and concatenate the results with a whitespace character infixed between the streams. Internal.

Streamly.Internal.Memory.Unicode.Array ((c) 2018 Composewell Technologies, BSD3, streamly@composewell.com, experimental, GHC):

lines: Break a string up into a stream of strings at newline characters. The resulting strings do not contain newlines. lines = S.lines A.write
>>> S.toList $ lines $ S.fromList "lines\nthis\nstring\n\n\n"
["lines","this","string","",""]

words: Break a string up into a stream of strings delimited by whitespace characters. words = S.words A.write
>>> S.toList $ words $ S.fromList "A newline\nis considered white space?"
["A", "newline", "is", "considered", "white", "space?"]

unlines: Flatten a stream of Array Char, appending a terminating newline to each string. unlines is an inverse operation to lines.
>>> S.toList $ unlines $ S.fromList ["lines", "this", "string"]
"lines\nthis\nstring\n"
unlines = S.unlines A.read
Note that, in general, unlines . lines /= id.

unwords: Flatten a stream of Array Char, appending a separating space to each string. unwords is an inverse operation to words.
>>> S.toList $ unwords $ S.fromList ["unwords", "this", "string"]
"unwords this string"
unwords = S.unwords A.read
Note that, in general, unwords . words /= id.
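The array-based lines/unlines pair can be exercised end to end as below. This is a minimal sketch using the Internal modules named in this index (Streamly.Internal.Memory.Unicode.Array, Streamly.Memory.Array); the exact signatures and class constraints are assumptions based on the documentation above, not verified against the package.

    import qualified Streamly.Internal.Memory.Unicode.Array as UA
    import qualified Streamly.Memory.Array as A
    import qualified Streamly.Prelude as S

    main :: IO ()
    main = do
        -- Split a character stream into a stream of Array Char lines, then
        -- convert each array back to a String for printing.
        ls <- S.toList $ S.map A.toList
                $ UA.lines $ S.fromList "lines\nthis\nstring\n\n\n"
        print ls -- expected: ["lines","this","string","",""]
        -- unlines re-appends a newline to every line, so a trailing line
        -- without a newline gains one: unlines . lines /= id in general.
        str <- S.toList $ UA.unlines $ UA.lines $ S.fromList "ab\ncd"
        print str -- expected: "ab\ncd\n"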
Module index for package streamly-0.7.1 (streamly-0.7.1-EhdBfTWf5IOJEpyXmHdVnn): Streamly, Streamly.Internal.BaseCompat, Streamly.Internal.Data.Prim.Array, Streamly.Internal.Mutable.Prim.Var, Streamly.Internal.Data.Array, Streamly.Internal.Control.Monad, Streamly.Internal.Data.Atomics, Streamly.Internal.Data.Prim.Array.Types, Streamly.Internal.Data.Sink.Types, Streamly.Internal.Data.SmallArray.Types, Streamly.Internal.Data.Stream.StreamDK.Type, Streamly.Internal.Data.Stream.StreamDK, Streamly.Internal.Data.Strict, Streamly.Internal.Data.Pipe.Types, Streamly.Internal.Data.Pipe, Streamly.Internal.Data.Time, Streamly.Internal.Data.Time.Units, Streamly.Internal.Data.Time.Clock, Streamly.Internal.Data.SVar, Streamly.Internal.Data.Stream.StreamK.Type, Streamly.Internal.Data.Stream.StreamK, Streamly.Internal.Data.Stream.SVar, Streamly.Internal.Data.Fold.Types, Streamly.Internal.Data.Stream.StreamD.Type, Streamly.Internal.Data.Sink, Streamly.Internal.Data.Fold, Streamly.Internal.Data.Unfold.Types, Streamly.Internal.Memory.Array.Types, Streamly.Internal.Data.Stream.StreamD, Streamly.Internal.Data.Unfold, Streamly.Internal.Data.Stream.Prelude, Streamly.Internal.Data.Stream.Zip, Streamly.Internal.Data.Stream.Serial, Streamly.Internal.Memory.Array, Streamly.Internal.Data.Stream.Combinators, Streamly.Internal.Data.List, Streamly.Internal.Data.Stream.Parallel, Streamly.Internal.Data.Stream.Enumeration, Streamly.Internal.Prelude, Streamly.Internal.Memory.ArrayStream, Streamly.Internal.FileSystem.Dir, Streamly.Internal.Data.Stream.Async, Streamly.Internal.Data.Stream.Ahead, Streamly.Internal.Data.SmallArray, Streamly.Internal.Network.Socket, Streamly.Internal.Network.Inet.TCP, Streamly.Internal.FileSystem.Handle, Streamly.Internal.FileSystem.File, Streamly.Internal.Data.Unicode.Stream, Streamly.Internal.Memory.Unicode.Array, Streamly.FileSystem.IOVec, Streamly.FileSystem.FDIO, Streamly.Data.Fold, Streamly.Internal.Data.Unicode.Char, Streamly.Memory.Malloc, Streamly.Memory.Ring, Streamly.Data.Unfold, Streamly.Memory.Array, Streamly.Data.SmallArray, Streamly.Data.Prim.Array, Streamly.Data.Array, Streamly.Prelude, Streamly.FileSystem.FD, Streamly.Network.Socket, Streamly.Network.Inet.TCP, Streamly.FileSystem.Handle, Streamly.Data.Unicode.Stream, Streamly.Tutorial.