(c) 2019 Composewell Technologies, BSD3, streamly@composewell.com, experimental, GHC.
write FD buffer offset length tries to write data on the given filesystem fd (it cannot be a socket) up to the specified length, starting from the given offset in the buffer. The write will not block the OS thread; it may suspend the Haskell thread until the write can proceed. Returns the actual amount of data written.
Keep writing in a loop until all data in the buffer has been written.
write FD iovec count tries to write data on the given filesystem fd (it cannot be a socket) from an iovec with the specified number of entries. The write will not block the OS thread; it may suspend the Haskell thread until the write can proceed. Returns the actual amount of data written.
Keep writing an iovec in a loop until all the iovec entries are written.
A Sink is a special type of Fold that does not accumulate any value but runs only effects. A Sink has no state to maintain and can therefore be a bit more efficient than a Fold with '()' as the state, especially when Sinks are composed with other operations. A Sink can be upgraded to a Fold, but a Fold cannot be converted into a Sink.
Strict variants of standard lazy types ((c) 2013 Gabriel Gonzalez), including a conversion from the strict Maybe' to the lazy Maybe.
A Pipe represents a stateful transformation over an input stream of values of type a to outputs of type b in a monad m. A pipe composed with the zipping combinator distributes the input to both the constituent pipes and zips their outputs using a supplied zipping function. A pipe composed with the merging combinator distributes the input to both the constituent pipes and merges their outputs. A pure function can be lifted to a Pipe, and so can a monadic function. Two pipes can be composed such that the output of the second pipe is attached to the input of the first pipe.
Run an action forever, periodically, at the given frequency specified in per second (Hz).
Run a computation on every clock tick; the clock runs at the specified frequency. It allows running a computation at a high frequency efficiently by maintaining a local clock and adjusting it against the provided base clock at longer intervals. The first argument is a base clock returning some notion of time in microseconds, the second argument is the frequency in per second (Hz), and the third argument is the action to run; the action is given the local time as an argument.
Relative times are relative to some arbitrary point of time; unlike absolute times they are not relative to a predefined epoch.
Absolute times are relative to a predefined epoch in time.
V represents times using WP which can represent times up to ~292 billion years at a nanosecond resolution.Xstreamly8A type class for converting between units of time using Y as the intermediate representation with a nanosecond resolution. This system of units can represent up to ~292 years at nanosecond resolution with fast arithmetic operations.NOTE: Converting to and from units may truncate the value depending on the original value and the size and resolution of the destination unit.Zstreamly5A type class for converting between time units using [ as the intermediate and the widest representation with a nanosecond resolution. This system of units can represent arbitrarily large times but provides least efficient arithmetic operations due to [ arithmetic.NOTE: Converting to and from units may truncate the value depending on the original value and the size and resolution of the destination unit.8A type class for converting between units of time using W as the intermediate representation. This system of units can represent up to ~292 billion years at nanosecond resolution with reasonably efficient arithmetic operations.NOTE: Converting to and from units may truncate the value depending on the original value and the size and resolution of the destination unit.WstreamlyData type to represent practically large quantities of time efficiently. It can represent time up to ~292 billion years at nanosecond resolution.\streamlyseconds]streamly nanoseconds^streamlyAn Yd time representation with a millisecond resolution. It can represent time up to ~292 million years._streamlyAn Y` time representation with a microsecond resolution. It can represent time up to ~292,000 years.`streamlyAn Y[ time representation with a nanosecond resolution. It can represent time up to ~292 years.astreamly Convert a Z to an absolute time.bstreamlyConvert absolute time to a Z.cstreamly Convert a Z to a relative time.dstreamlyConvert relative time to a Z.estreamly/Difference between two absolute points of time.fstreamlyDConvert nanoseconds to a string showing time in an appropriate unit.gUVhXZWi\]^j_k`labcdemnopqfrj(c) 2019 Harendra Kumar (c) 2009-2012, Cetin Sert (c) 2010, Eugene KirpichovBSD3streamly@composewell.com experimentalGHCNone7MXf sstreamlyClock types. A clock may be system-wide (that is, visible to all processes) or per-process (measuring time that is meaningful only within a process). All implementations shall support CLOCK_REALTIME. (The only suspend-aware monotonic is CLOCK_BOOTTIME on Linux.)tstreamlyThe identifier for the system-wide monotonic clock, which is defined as a clock measuring real time, whose value cannot be set via  clock_settime and which cannot have negative clock jumps. The maximum possible clock jump shall be implementation defined. For this clock, the value returned by u represents the amount of time (in seconds and nanoseconds) since an unspecified point in the past (for example, system start-up time, or the Epoch). This point does not change after system start-up time. Note that the absolute value of the monotonic clock is meaningless (because its origin is arbitrary), and thus there is no need to set it. Furthermore, realtime applications can rely on the fact that the value of this clock is never set.vstreamlyfThe identifier of the system-wide clock measuring real time. For this clock, the value returned by uO represents the amount of time (in seconds and nanoseconds) since the Epoch.wstreamlysThe identifier of the CPU-time clock associated with the calling process. 
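To make the time representations above concrete, here is a small self-contained sketch of the absolute/relative distinction and the nanosecond intermediate representation. All names in it (Nanos, Abs, Rel, diffAbs, addToAbs) are illustrative only and are not the streamly API:

-- A toy model of absolute vs relative times over an Int64 nanosecond
-- intermediate (~292 years of range), as described above.
import Data.Int (Int64)

newtype Nanos = Nanos Int64 deriving (Show, Eq, Ord)

newtype Abs = Abs Nanos deriving (Show, Eq, Ord)  -- relative to a fixed epoch
newtype Rel = Rel Nanos deriving (Show, Eq, Ord)  -- relative to an arbitrary point

-- The difference between two absolute points of time is a relative time.
diffAbs :: Abs -> Abs -> Rel
diffAbs (Abs (Nanos a)) (Abs (Nanos b)) = Rel (Nanos (a - b))

-- Shifting an absolute time by a relative time gives an absolute time.
addToAbs :: Abs -> Rel -> Abs
addToAbs (Abs (Nanos a)) (Rel (Nanos r)) = Abs (Nanos (a + r))

main :: IO ()
main = do
    let t0 = Abs (Nanos 1000000000)      -- 1 s after the epoch
        t1 = Abs (Nanos 2500000000)      -- 2.5 s after the epoch
    print (diffAbs t1 t0)                -- Rel (Nanos 1500000000)
    print (addToAbs t0 (Rel (Nanos 500)))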
For this clock, the value returned by uC represents the amount of execution time of the current process.xstreamlyuThe identifier of the CPU-time clock associated with the calling OS thread. For this clock, the value returned by uE represents the amount of execution time of the current OS thread.ystreamly(since Linux 2.6.28; Linux and Mac OSX) Similar to CLOCK_MONOTONIC, but provides access to a raw hardware-based time that is not subject to NTP adjustments or the incremental adjustments performed by adjtime(3).zstreamly(since Linux 2.6.32; Linux and Mac OSX) A faster but less precise version of CLOCK_MONOTONIC. Use when you need very fast, but not fine-grained timestamps.{streamlye(since Linux 2.6.39; Linux and Mac OSX) Identical to CLOCK_MONOTONIC, except it also includes any time that the system is suspended. This allows applications to get a suspend-aware monotonic clock without having to deal with the complications of CLOCK_REALTIME, which may have discontinuities if the time is changed using settimeofday(2).|streamly(since Linux 2.6.32; Linux-specific) A faster but less precise version of CLOCK_REALTIME. Use when you need very fast, but not fine-grained timestamps. stvwxyz{|u(c) 2017 Harendra KumarBSD3streamly@composewell.com experimentalGHCNone,=>?@AEFHMSX_^streamlyA monad that can perform concurrent or parallel IO operations. Streams that can be composed concurrently require the underlying monad to be .}streamlyBuffering policy for persistent push workers (in ParallelT). In a pull style SVar (in AsyncT, AheadT etc.), the consumer side dispatches workers on demand, workers terminate if the buffer is full or if the consumer is not cosuming fast enough. In a push style SVar, a worker is dispatched only once, workers are persistent and keep pushing work to the consumer via a bounded buffer. If the buffer becomes full the worker either blocks, or it can drop an item from the buffer to make space.pPull style SVars are useful in lazy stream evaluation whereas push style SVars are useful in strict left Folds.iXXX Maybe we can separate the implementation in two different types instead of using a common SVar type.streamly6Specifies the stream yield rate in yields per second (Hertz*). We keep accumulating yield credits at S. At any point of time we allow only as many yields as we have accumulated as per { since the start of time. If the consumer or the producer is slower or faster, the actual rate may fall behind or exceed . We try to recover the gap between the two by increasing or decreasing the pull rate from the producer. However, if the gap becomes more than  $ we try to recover only as much as  . puts a bound on how low the instantaneous rate can go when recovering the rate gap. In other words, it determines the maximum yield latency. Similarly,   puts a bound on how high the instantaneous rate can go when recovering the rate gap. In other words, it determines the minimum yield latency. We reduce the latency by increasing concurrency, therefore we can say that it puts an upper bound on concurrency.If the ; is 0 or negative the stream never yields a value. If the  / is 0 or negative we do not attempt to recover.streamlyThe lower rate limitstreamly"The target rate we want to achieve streamlyThe upper rate limit streamlyMaximum slack from the goal~streamlyVAn SVar or a Stream Var is a conduit to the output from multiple streams running concurrently and asynchronously. An SVar can be thought of as an asynchronous IO handle. 
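As a small illustration of why the monotonic clock described above is the right choice for measuring intervals, the following sketch uses getMonotonicTimeNSec from GHC.Clock (base >= 4.11, not part of streamly); only the difference of two readings is meaningful, since the clock's origin is arbitrary:

-- Measure an elapsed interval with the monotonic clock.
import GHC.Clock (getMonotonicTimeNSec)
import Control.Concurrent (threadDelay)

main :: IO ()
main = do
    t0 <- getMonotonicTimeNSec          -- nanoseconds since an arbitrary origin
    threadDelay 100000                  -- do some work (100 ms here)
    t1 <- getMonotonicTimeNSec
    putStrLn $ "elapsed ns: " ++ show (t1 - t0)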
We can write any number of streams to an SVar in a non-blocking manner and then read them back at any time at any pace. The SVar would run the streams asynchronously and accumulate results. An SVar may not really execute the stream completely and accumulate all the results. However, it ensures that the reader can read the results at whatever paces it wants to read. The SVar monitors and adapts to the consumer's pace.An SVar is a mini scheduler, it has an associated workLoop that holds the stream tasks to be picked and run by a pool of worker threads. It has an associated output queue where the output stream elements are placed by the worker threads. A outputDoorBell is used by the worker threads to intimate the consumer thread about availability of new results in the output queue. More workers are added to the SVar by  fromStreamVar" on demand if the output produced is not keeping pace with the consumer. On bounded SVars, workers block on the output queue to provide throttling of the producer when the consumer is not pulling fast enough. The number of workers may even get reduced depending on the consuming pace.{New work is enqueued either at the time of creation of the SVar or as a result of executing the parallel combinators i.e. <| and <|>< when the already enqueued computations get evaluated. See joinStreamVarAsync.streamlyhIdentify the type of the SVar. Two computations using the same style can be scheduled on the same SVar.streamly=Sorting out-of-turn outputs in a heap for Ahead style streamsstreamly7Events that a child thread may send to a parent thread.streamly0Adapt the stream state from one type to another.streamlyThis function is used by the producer threads to queue output for the consumer thread to consume. Returns whether the queue has more space.streamly}This is safe even if we are adding more threads concurrently because if a child thread is adding another thread then anyway  will not be empty.streamly<In contrast to pushWorker which always happens only from the consumer thread, a pushWorkerPar can happen concurrently from multiple threads on the producer side. So we need to use a thread safe modification of workerThreads. Alternatively, we can use a CreateThread event to avoid using a CAS based modification.streamly]This is a magic number and it is overloaded, and used at several places to achieve batching: If we have to sleep to slowdown this is the minimum period that we accumulate before we sleep. Also, workers do not stop until this much sleep time is accumulated.hCollected latencies are computed and transferred to measured latency after a minimum of this period.streamly`Another magic number! When we have to start more workers to cover up a number of yields that we are lagging by then we cannot start one worker for each yield because that may be a very big number and if the latency of the workers is low these number of yields could be very high. We assume that we run each extra worker for at least this much time.streamlyGet the worker latency without resetting workerPendingLatency Returns (total yield count, base time, measured latency) CAUTION! keep it in sync with collectLatencystreamlyWrite a stream to an Q in a non-blocking manner. The stream can then be read back from the SVar using fromSVar. 
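The following is only a conceptual sketch of the SVar idea described above, not the actual implementation: a couple of producer threads push their results into a bounded queue and a consumer pulls them at its own pace. It uses TBQueue from the stm package purely for illustration; the real SVar adds scheduling, rate control and exception propagation on top of this basic shape:

-- A toy producer/consumer conduit with a bounded output buffer.
import Control.Concurrent (forkIO, threadDelay)
import Control.Concurrent.STM
import Control.Monad (forM_, replicateM_)

main :: IO ()
main = do
    buf <- atomically (newTBQueue 16)          -- bounded output queue
    -- two "worker" producers writing concurrently
    forM_ [1, 2 :: Int] $ \w -> forkIO $
        forM_ [1 .. 5 :: Int] $ \i -> do
            threadDelay (w * 10000)            -- simulate work
            atomically $ writeTBQueue buf (w, i)
    -- the consumer reads the merged output at its own pace
    replicateM_ 10 $ do
        x <- atomically (readTBQueue buf)
        print x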
~     J(c) 2019 Composewell Technologies (c) 2013 Gabriel GonzalezBSD3streamly@composewell.com experimentalGHCNone>EX$streamlyFold   step   inject   extract streamly>Represents a left fold over an input stream of values of type a to a single value of type b in F m.$The fold uses an intermediate state s as accumulator. The step function updates the state and returns the new updated state. When the fold is done the final result of the fold is extracted from the intermediate state representation using the extract function.streamlyFold   step   initial   extractstreamlyConvert more general type  into a simpler type  streamlyEBuffers the input stream to a list in the reverse order of the input.Warning!d working on large lists accumulated as buffers in memory could be very inefficient, consider using Streamly.Array instead.streamly (lmap f fold) maps the function f on the input of the fold.?S.fold (FL.lmap (\x -> x * x) FL.sum) (S.enumerateFromTo 1 100)338350streamly(lmapM f fold) maps the monadic function f on the input of the fold.streamly2Include only those elements that pass a predicate.%S.fold (lfilter (> 5) FL.sum) [1..10]40streamlyLike  but with a monadic predicate.streamly(Transform a fold from a pure input to a 9 input, consuming only  values.streamly Take first n/ elements from the stream and discard the rest. streamly@Takes elements from the input as long as the predicate succeeds.!streamly#Modify the fold such that when the fold is done, instead of returning the accumulator, it returns a fold. The returned fold starts from where we left i.e. it uses the last accumulator value as the initial value of the accumulator. Thus we can resume the fold later and feed it more input. > do more <- S.fold (FL.duplicate FL.sum) (S.enumerateFromTo 1 10) evenMore <- S.fold (FL.duplicate more) (S.enumerateFromTo 11 20) S.fold evenMore (S.enumerateFromTo 21 30) 465"streamly}Run the initialization effect of a fold. The returned fold would use the value returned by this effect as its initial value.#streamly[Run one step of a fold and store the accumulator as an initial value in the returned fold.$streamlyVFor every n input items, apply the first fold and supply the result to the next fold.%streamlypGroup the input stream into windows of n second each and then fold each group using the provided fold function.For example, we can copy and distribute a stream to multiple folds where each fold can group the input differently e.g. by one second, one minute and one hour windows respectively and fold each resulting stream of folds. E -----Fold m a b----|-Fold n a c-|-Fold n a c-|-...-|----Fold m a c &streamly&Combines the fold outputs using their ' instances.(streamly Combines the fold outputs (type b) using their ) instances.*streamly Combines the fold outputs (type b) using their + instances.,streamly,Combines the outputs of the folds (the type b) using their - instances..streamly,Combines the outputs of the folds (the type b) using their  instances./streamlyThe fold resulting from 0i distributes its input to both the argument folds and combines their output using the supplied function.1streamly4Maps a function on the output of the fold (the type b).  !"#$2%!(c) 2019 Composewell TechnologiesBSD3streamly@composewell.com experimentalGHCNoneJ 3streamly Convert a 4 to a  `. When you want to compose sinks and folds together, upgrade a sink to a fold before composing.4streamly8Distribute one copy each of the input to both the sinks. 
T |-------Sink m a ---stream m a---| |-------Sink m a > let pr x = Sink.drainM (putStrLn . ((x ++ " ") ++) . show) > sink (Sink.tee (pr "L") (pr "R")) (S.enumerateFromTo 1 2) L 1 R 1 L 2 R 2 5streamly?Distribute copies of the input to all the sinks in a container.  |-------Sink m a ---stream m a---| |-------Sink m a | ...  > let pr x = Sink.drainM (putStrLn . ((x ++ " ") ++) . show) > sink (Sink.distribute [(pr "L"), (pr "R")]) (S.enumerateFromTo 1 2) L 1 R 1 L 2 R 2 4This is the consumer side dual of the producer side 6 operation.7streamlyDemultiplex to multiple consumers without collecting the results. Useful to run different effectful computations depending on the value of the stream elements, for example handling network packets of different types using different handlers.  |-------Sink m a -----stream m a-----Map-----| |-------Sink m a | ... > let pr x = Sink.drainM (putStrLn . ((x ++ " ") ++) . show) > let table = Data.Map.fromList [(1, pr "One"), (2, pr "Two")] in Sink.sink (Sink.demux id table) (S.enumerateFromTo 1 100) One 1 Two 2 8streamlyxSplit elements in the input stream into two parts using a monadic unzip function, direct each part to a different sink. s |-------Sink m b -----Stream m a----(b,c)--| |-------Sink m c > let pr x = Sink.drainM (putStrLn . ((x ++ " ") ++) . show) in Sink.sink (Sink.unzip return (pr "L") (pr "R")) (S.yield (1,2)) L 1 R 2 9streamlySame as 8 but with a pure unzip function.:streamly&Map a pure function on the input of a 4.;streamly)Map a monadic function on the input of a 4.<streamlyFilter the input of a 4! using a pure predicate function.=streamlyFilter the input of a 4$ using a monadic predicate function.>streamly@Drain all input, running the effects and discarding the results.?streamly drainM f = lmapM f drain<Drain all input after passing it through a monadic function.45345789:;<=>?J(c) 2019 Composewell Technologies (c) 2013 Gabriel GonzalezBSD3streamly@composewell.com experimentalGHCNone "#>ESX<@streamlycMake a fold using a pure step function, a pure initial state and a pure state extraction function.InternalAstreamlyMake a fold using a pure step function and a pure initial state. The final state extracted is identical to the intermediate state.InternalBstreamly`Make a fold with an effectful step function and initial state, and a state extraction function.  mkFold = FoldWe can just use  % but it is provided for completeness.InternalCstreamlyMake a fold with an effectful step function and initial state. The final state extracted is identical to the intermediate state.InternalDstreamly%Change the underlying monad of a foldInternalEstreamlyAdapt a pure fold to any monad (generally = hoist (return . runIdentity)Internal streamly4Flatten the monadic output of a fold to pure output. streamly/Map a monadic function on the output of a fold.FstreamlyApply a transformation on a   using a J.Gstreamly _Fold1 step returns a new   using just a step function that has the same type for the accumulator and the element. The result type is the accumulator type wrapped in 91. The initial accumulator is retrieved from the H, the result is None for empty containers.streamlyRA fold that drains all its input, running the effects and discarding the results.streamly drainBy f = lmapM f drainlDrain all input after passing it through a monadic function. 
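A simplified, pure model of the fold shape described earlier in this section (a step function, an initial accumulator and a final extract step) may help; the FoldSketch type below is a toy stand-in for the real monadic Fold and is not the library's type:

{-# LANGUAGE ExistentialQuantification #-}
-- step / initial / extract, with the state hidden from the caller.
data FoldSketch a b = forall s. FoldSketch (s -> a -> s) s (s -> b)

-- Drive a fold over a list (standing in for a stream here).
runFold :: FoldSketch a b -> [a] -> b
runFold (FoldSketch step initial extract) = extract . foldl step initial

sumF :: Num a => FoldSketch a a
sumF = FoldSketch (+) 0 id

-- extract lets us keep auxiliary state (the count) without exposing it.
avgF :: FoldSketch Double Double
avgF = FoldSketch (\(s, n) x -> (s + x, n + 1)) (0, 0 :: Int)
                  (\(s, n) -> s / fromIntegral n)

main :: IO ()
main = do
    print (runFold sumF [1 .. 10 :: Int])   -- 55
    print (runFold avgF [1, 2, 3, 4])       -- 2.5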
This is the dual of mapM_ on stream producers.streamly5Extract the last element of the input stream, if any.IstreamlyLike , except with a more general + return valuestreamly)Determine the length of the input stream.streamlyVDetermine the sum of all elements of a stream of numbers. Returns additive identity (0a) when the stream is empty. Note that this is not numerically stable for floating point numbers.streamly`Determine the product of all elements of a stream of numbers. Returns multiplicative identity (1) when the stream is empty.streamlyRDetermine the maximum element in a stream using the supplied comparison function.streamly  maximum =  compare *Determine the maximum element in a stream.streamlyJComputes the minimum element with respect to the given comparison functionstreamlyRDetermine the minimum element in a stream using the supplied comparison function.streamlyRCompute a numerically stable arithmetic mean of all elements in the input stream.streamlyZCompute a numerically stable (population) variance over all elements in the input stream.streamlydCompute a numerically stable (population) standard deviation over all elements in the input stream.Jstreamly Compute an K sized polynomial rolling hash IH = salt * k ^ n + c1 * k ^ (n - 1) + c2 * k ^ (n - 2) + ... + cn * k ^ 0Where c1, c2, cn* are the elements in the input stream and k is a constant.>This hash is often used in Rabin-Karp string search algorithm.See *https://en.wikipedia.org/wiki/Rolling_hashLstreamly-A default salt used in the implementation of M.Mstreamly Compute an K+ sized polynomial rolling hash of a stream. -rollingHash = rollingHashWithSalt defaultSaltstreamly;Fold an input stream consisting of monoidal elements using N and O. 6S.fold FL.mconcat (S.map Sum $ S.enumerateFromTo 1 10)streamly foldMap f = map f mconcatNMake a fold from a pure function that folds the output of the function using N and O. 0S.fold (FL.foldMap Sum) $ S.enumerateFromTo 1 10streamly foldMapM f = mapM f mconcatQMake a fold from a monadic function that folds the output of the function using N and O. <S.fold (FL.foldMapM (return . Sum)) $ S.enumerateFromTo 1 10streamly!Folds the input stream to a list.Warning!d working on large lists accumulated as buffers in memory could be very inefficient, consider using Streamly.Array instead.PstreamlyLike , except with a more general Q argumentstreamly&Lookup the element at the given index. streamly0Extract the first element of the stream, if any.!streamly=Returns the first element that satisfies the given predicate."streamly!In a stream of (key-value) pairs (a, b), return the value b9 of the first pair where the key equals the given value a.RstreamlyConvert strict 6 to lazy 9#streamly;Returns the first index that satisfies the given predicate.$streamlyCReturns the first index where a given value is found in the stream.%streamlyReturn S if the input stream is empty.&streamly any p = lmap p or | Returns S: if any of the elements of a stream satisfies a predicate.'streamlyReturn S/ if the given element is present in the stream.(streamly all p = lmap p and | Returns S1 if all elements of a stream satisfy a predicate.)streamlyReturns S3 if the given element is not present in the stream.*streamlyReturns S if all elements are S, T otherwise+streamlyReturns S if any element is S, T otherwise,streamlyCDistribute one copy of the stream to each fold and zip the results.  
|-------Fold m a b--------| ---stream m a---| |---m (b,c) |-------Fold m a c--------| >S.fold (FL.tee FL.sum FL.length) (S.enumerateFromTo 1.0 100.0) (5050.0,100)-streamlyWDistribute one copy of the stream to each fold and collect the results in a container.  |-------Fold m a b--------| ---stream m a---| |---m [b] |-------Fold m a b--------| | | ... BS.fold (FL.distribute [FL.sum, FL.length]) (S.enumerateFromTo 1 5)[15,5]4This is the consumer side dual of the producer side   operation.Ustreamly,Partition the input over two folds using an 7 partitioning predicate.  |-------Fold b x--------| -----stream m a --> (Either b c)----| |----(x,y) |-------Fold c y--------| #Send input to either fold randomly:import System.Random (randomIO)Frandomly a = randomIO >>= \x -> return $ if x then Left a else Right aOS.fold (FL.partitionByM randomly FL.length FL.length) (S.enumerateFromTo 1 100)(59,41)3Send input to the two folds in a proportion of 2:1: zimport Data.IORef (newIORef, readIORef, writeIORef) proportionately m n = do ref <- newIORef $ cycle $ concat [replicate m Left, replicate n Right] return $ \a -> do r <- readIORef ref writeIORef ref $ tail r return $ head r a main = do f <- proportionately 2 1 r <- S.fold (FL.partitionByM f FL.length FL.length) (S.enumerateFromTo (1 :: Int) 100) print r  (67,33) 4This is the consumer side dual of the producer side mergeBy operation.VstreamlySame as U$ but with a pure partition function.'Count even and odd numbers in a stream: >>> let f = FL.partitionBy (\n -> if even n then Left n else Right n) (fmap (("Even " ++) . show) FL.length) (fmap (("Odd " ++) . show) FL.length) in S.fold f (S.enumerateFromTo 1 100) ("Even 50","Odd 50") .streamlyBCompose two folds such that the combined fold accepts a stream of 7 and routes the W values to the first fold and X values to the second fold. partition = partitionBy idYstreamlySplit the input stream based on a key field and fold each split using a specific fold collecting the results in a map from the keys to the results. Useful for cases like protocol handlers to handle different type of packets using different handlers.  |-------Fold m a b -----stream m a-----Map-----| |-------Fold m a b | ... ZstreamlyFold a stream of key value pairs using a map of specific folds for each key into a map from keys to the results of fold outputs of the corresponding values. > let table = Data.Map.fromList [("SUM", FL.sum), ("PRODUCT", FL.product)] input = S.fromList [("SUM",1),("PRODUCT",2),("SUM",3),("PRODUCT",4)] in S.fold (FL.demux table) input One 1 Two 2 [streamlySplit the input stream based on a key field and fold each split using a specific fold without collecting the results. Useful for cases like protocol handlers to handle different type of packets.  |-------Fold m a () -----stream m a-----Map-----| |-------Fold m a () | ... \streamlyGiven a stream of key value pairs and a map from keys to folds, fold the values for each key using the corresponding folds, discarding the outputs. > let prn = FL.drainBy print > let table = Data.Map.fromList [("ONE", prn), ("TWO", prn)] input = S.fromList [("ONE",1),("TWO",2)] in S.fold (FL.demux_ table) input One 1 Two 2 ]streamlySplit the input stream based on a key field and fold each split using the given fold. Useful for map/reduce, bucketizing the input in different bins or for generating histograms. 
> let input = S.fromList [("ONE",1),("ONE",1.1),("TWO",2), ("TWO",2.2)] in S.fold (FL.classify FL.toList) input fromList [("ONE",[1.1,1.0]),("TWO",[2.2,2.0])] ^streamlyGiven an input stream of key value pairs and a fold for values, fold all the values belonging to each key. Useful for map/reduce, bucketizing the input in different bins or for generating histograms. > let input = S.fromList [("ONE",1),("ONE",1.1),("TWO",2), ("TWO",2.2)] in S.fold (FL.classify FL.toList) input fromList [("ONE",[1.1,1.0]),("TWO",[2.2,2.0])] _streamlyLike `& but with a monadic splitter function.`streamlySplit elements in the input stream into two parts using a pure splitter function, direct each part to a different fold and zip the results./streamlyOSend the elements of tuples in a stream of tuples through two different folds.  |-------Fold m a x--------| ---------stream of (a,b)--| |----m (x,y) |-------Fold m b y--------| 4This is the consumer side dual of the producer side a operation.@  !"#$%@ABCDE FbJM !"#$%&'()*+,-.Z\^/!(c) 2019 Composewell TechnologiesBSD3streamly@composewell.com experimentalGHCNone%  !"#$%&'()*+,-./%  !"#$%')(&*+ ,-./!(c) 2018 Composewell TechnologiesBSD3streamly@composewell.com experimentalGHCSafe>!(c) 2019 Composewell TechnologiesBSD3streamly@composewell.com experimentalGHCNone"#>EFHVXcd!(c) 2019 Composewell TechnologiesBSD3streamly@composewell.com experimentalGHCSafe>EXefghi!(c) 2019 Composewell TechnologiesBSD3streamly@composewell.com experimentalGHCNone "#>ESXg!jstreamly(Lazy right associative fold to a stream. eghiklmnopqjr(c) 2017 Harendra KumarBSD3streamly@composewell.com experimentalGHCNone%,/=>?@AHSXgq<;sstreamly&A monadic continuation, it is a function that yields a value of type "a" and calls the argument (a -> m r) as a continuation with that value. We can also think of it as a callback with a handler (a -> m r). Category theorists call it a codensity type, a special type of right kan extension.tstreamly7A terminal function that has no continuation to follow.0streamlySame as 1.1streamlyDClass of types that can represent a stream of elements of some type a in some monad m.2streamly_Constructs a stream by adding a monadic action at the head of an existing stream. For example: M> toList $ getLine `consM` getLine `consM` nil hello world ["hello","world"] Concurrent (do not use  parallely to construct infinite streams)3streamlyOperator equivalent of 2. We can read it as "parallel colon" to remember that | comes before :. C> toList $ getLine |: getLine |: nil hello world ["hello","world"]  let delay = threadDelay 1000000 >> print 1 drain $ serially $ delay |: delay |: delay |: nil drain $ parallely $ delay |: delay |: delay |: nil Concurrent (do not use  parallely to construct infinite streams)ustreamly The type  Stream m a/ represents a monadic stream of values of type a% constructed using actions in monad ma. It uses stop, singleton and yield continuations equivalent to the following direct style type: <data Stream m a = Stop | Singleton a | Yield a (Stream m a) CTo facilitate parallel composition we maintain a local state in an W that is shared across and is used for synchronization of the streams being composed.The singleton case can be expressed in terms of stop and yield but we have it as a separate case to optimize composition operations for streams with single element. We build singleton streams in the implementation of v$ for Applicative and Monad, and in w for MonadTrans.mXXX remove the Stream type parameter from State as it is always constant. 
We can remove it from SVar as well4streamlyAAdapt any specific stream type to any other specific stream type.xstreamlyBuild a stream from an Q, a stop continuation, a singleton stream continuation and a yield continuation.ystreamly*Make an empty stream from a stop function.zstreamly.Make a singleton stream from a yield function.{streamly/Add a yield function at the head of the stream.5streamlyuConstruct a stream by adding a pure value at the head of an existing stream. For serial streams this is the same as (return a) `consM` rM but more efficient. For concurrent streams this is not concurrent whereas 2 is concurrent. For example: 2> toList $ 1 `cons` 2 `cons` 3 `cons` nil [1,2,3] 6streamlyOperator equivalent of 5. &> toList $ 1 .: 2 .: 3 .: nil [1,2,3] 7streamlyAn empty stream. > toList nil [] |streamly(An empty stream producing a side effect. '> toList (nilM (print "nil")) "nil" [] Internal}streamlyFold a stream by providing an SVar, a stop continuation, a singleton continuation and a yield continuation. The stream would share the current SVar passed via the State.~streamlyFold a stream by providing a State, stop continuation, a singleton continuation and a yield continuation. The stream will not use the SVar passed via State.streamly The function fw decides how to reconstruct the stream. We could reconstruct using a shared state (SVar) or without sharing the state.streamly;Fold sharing the SVar state within the reconstructed streamstreamly(Lazy right associative fold to a stream.streamlyLike / but shares the SVar state across computations.streamly-Lazy right fold with a monadic step function.8streamlyPolymorphic version of the  operation  of SerialT. Appends two streams sequentially, yielding all elements from the first stream, and then all elements from the second stream.streamlyDetach a stream from an SVarstreamly Perform a  using a specified concat strategy. The first argument specifies a merge or concat function that is used to merge the streams generated by the map function. For example, the concat function could be 8, parallel, async, ahead% or any other zip or merge function.*0123u4xyz{567|}~825355565(c) 2017 Harendra KumarBSD3streamly@composewell.com experimentalGHCNone ,/=>?@ASXS9streamlySame as yieldMstreamly .repeatM = fix . cons repeatM = cycle1 . yield 9Generate an infinite stream by repeating a monadic value.Internal:streamly fromFoldable =  5 7 Construct a stream from a H containing pure values:streamlyLazy right associative fold.streamly9Right associative fold to an arbitrary transformer monad.streamlyStrict left fold with an extraction function. Like the standard strict left fold, but applies a user supplied extraction function (the third argument) to the folded value at the end. This is designed to work with the foldl library. 
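As a concrete illustration of the fold-with-extraction idea described above, the sketch below accumulates a strict (sum, count) pair and extracts the mean at the end. It assumes the usual qualified import of Streamly.Prelude as S used in the other examples here and the foldl' signature from streamly 0.7:

import qualified Streamly.Prelude as S

-- Accumulate (sum, count), then extract the mean.
mean :: IO Double
mean = extract <$> S.foldl' step (0, 0 :: Int) (S.enumerateFromTo 1 (100 :: Double))
  where
    step (s, n) x  = (s + x, n + 1)
    extract (s, n) = s / fromIntegral n

main :: IO ()
main = mean >>= print   -- 50.5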
The suffix x is a mnemonic for extraction.JNote that the accumulator is always evaluated including the initial value.streamlyStrict left associative fold.streamlyLike foldx#, but with a monadic step function.streamlyLike " but with a monadic step function.streamlyLazy left fold to a stream.streamly1Lazy left fold to an arbitrary transformer monad.streamly >drain = foldl' (\_ _ -> ()) () drain = mapM_ (\_ -> return ())streamly/Extract the last element of the stream, if any.streamly[Apply a monadic action to each element of the stream and discard the output of the action.streamly7Zip two streams serially using a pure zipping function.streamly:Zip two streams serially using a monadic zipping function.c0123u4x567|}~89:(c) 2017 Harendra KumarBSD3streamly@composewell.com experimentalGHCNone>Y,streamlyPull a stream from an SVar.streamlyWrite a stream to an Q in a non-blocking manner. The stream can then be read back from the SVar using .(c) 2018 Harendra KumarNone %,=>?@AESXgbkstreamlypA stream consists of a step function that generates the next step given a current state, and the current state.streamlyA stream is a succession of s. A < produces a single value and the next state of the stream. 3 indicates there are no more values in the stream.streamlyMap a monadic function over a streamlyCreate a singleton  from a pure value.streamlyCreate a singleton  from a monadic action.streamly#Convert a list of pure values to a streamly%Compare two streams lexicographically% !(c) 2019 Composewell TechnologiesBSD3streamly@composewell.com experimentalGHCNone"#>EFHVXstreamly;allocate a new array using the provided allocator function.streamlyAllocate a new array aligned to the specified alignmend and using unmanaged pinned memory. The memory will not be automatically freed by GHC. This could be useful in allocate once global data structures. Use carefully as incorrect use can lead to memory leak.streamly Allocate an array that can hold count3 items. The memory of the array is uninitialized.Note that this is internal routine, the reference to this array cannot be given out until the array has been written to and frozen.streamlyZAllocate an Array of the given size and run an IO action passing the array start pointer.streamly{Reallocate the array to the specified size in bytes. If the size is less than the original array the array gets truncated.streamly$Remove the free space from an Array.streamlyBReturn element at the specified index without checking the bounds.9Unsafe because it does not check the bounds of the array. streamlyBReturn element at the specified index without checking the bounds. streamlyO(1)" Get the byte length of the array.<streamlyO(1)G Get the length of the array i.e. the number of elements in the array.=streamlywriteN n folds a maximum of n' elements from the input stream to an ;. streamlywriteNAligned n folds a maximum of n' elements from the input stream to an ; aligned to the given size.Internal streamlywriteNAlignedUnmanaged n folds a maximum of n' elements from the input stream to an ; aligned to the given size and using unmanaged memory. This could be useful to allocate memory that we need to allocate only once in the lifetime of the program.Internal streamlyLike =n but does not check the array bounds when writing. The fold driver must not call the step function more than n times otherwise it will corrupt the memory and crash. 
This function exists mainly because any conditional in the step function blocks fusion causing 10x performance slowdown.>streamly'Fold the whole input to a single array.-Caution! Do not use this on infinite streams.streamlyLike > but the array memory is aligned according to the specified alignment size. This could be useful when we have specific alignment, for example, cache aligned arrays for lookup table etc.-Caution! Do not use this on infinite streams.streamlyfromStreamArraysOf n stream< groups the input stream into a stream of arrays of size n.?streamly Convert an ; into a list.@streamly Create an ; from the first N elements of a list. The array is allocated to size N, if the list terminates before N elements then the array may hold less than N elements.Astreamly Create an ;. from a list. The list must be of finite size.streamly0GHC memory management allocation header overheadstreamlyDefault maximum buffer size in bytes, for reading from and writing to IO devices, the value is 32KB minus GHC allocation overhead, which is a few bytes, so that the actual allocation is 32KB.streamlyfCoalesce adjacent arrays in incoming stream to form bigger arrays of a maximum specified size. Note that if a single array is bigger than the specified size we do not split it to fit. When we coalesce multiple arrays if the size would exceed the specified size we do not coalesce therefore the actual array size may be less than the specified chunk size.streamly!groupIOVecsOf maxBytes maxEntries= groups arrays in the incoming stream to create a stream of & arrays with a maximum of maxBytes' bytes in each array and a maximum of  maxEntries entries in each array.streamlyWCreate two slices of an array without copying the original array. The specified index i( is the first index of the second slice.streamlySplit a stream of arrays on a given separator byte, dropping the separator and coalescing all the arrays between two separators into a single array.7; !"#  <$%&'()*=   +>,-.?@A/012345!!(c) 2019 Composewell TechnologiesBSD3streamly@composewell.com experimentalGHCNone"#EXɝ 6streamly A ring buffer is a mutable array of fixed size. Initially the array is empty, with ringStart pointing at the start of allocated memory. We call the next location to be written in the ring as ringHead. Initially ringHead == ringStart. When the first item is added, ringHead points to ringStart + sizeof item. When the buffer becomes full ringHead would wrap around to ringStart. When the buffer is full, ringHead always points at the oldest item in the ring and the newest item added always overwrites the oldest item.When using it we should keep in mind that a ringBuffer is a mutable data structure. We should not leak out references to it for immutable use.7streamlyCreate a new ringbuffer and return the ring buffer and the ringHead. Returns the ring and the ringHead, the ringHead is same as ringStart.8streamlyLAdvance the ringHead by 1 item, wrap around if we hit the end of the array.9streamlyInsert an item at the head of the ring, when the ring is full this replaces the oldest item in the ring with the new item. This is unsafe beause ringHead supplied is not verified to be within the Ring. Also, the ringStart foreignPtr must be guaranteed to be alive by the caller.:streamlyLike ; but compares only N bytes instead of entire length of the ring buffer. This is unsafe because the ringHead Ptr is not checked to be in range.;streamlyByte compare the entire length of ringBuffer with the given array, starting at the supplied ringHead pointer. 
Returns true if the Array and the ringBuffer have identical contents.This is unsafe because the ringHead Ptr is not checked to be in range. The supplied array must be equal to or bigger than the ringBuffer, ARRAY BOUNDS ARE NOT CHECKED.<streamly8Fold the buffer starting from ringStart up to the given = using a pure step function. This is useful to fold the items in the ring when the ring is not full. The supplied pointer is usually the end of the ring.>Unsafe because the supplied Ptr is not checked to be in range.>streamly5Like unsafeFoldRing but with a monadic step function.?streamlyFold the entire length of a ring buffer starting at the supplied ringHead pointer. Assuming the supplied ringHead pointer points to the oldest item, this would fold the ring starting from the oldest item to the newest item in the ring. 6@AB79:;<>?"!(c) 2019 Composewell TechnologiesBSD3streamly@composewell.com experimentalGHCNone>EX͟BstreamlyAn  Unfold m a b. is a generator of a stream of values of type b from a seed of type a in F m.Cstreamly Unfold step injectBC#(c) 2018 Harendra KumarNone"#%,=>?@AEFSXgkDstreamlyInterposeFirstYield s1 i1EstreamlyInterposeFirstBuf s1 i1FstreamlyICALFirstYield s1 s2 i1GstreamlyICALFirstBuf s1 s2 i1 i2HstreamlyInterposeSuffixFirstYield s1 i1Istreamly An empty .Jstreamly An empty  with a side effect.Kstreamly#Can fuse but has O(n^2) complexity.Lstreamly Convert an B into a  by supplying it a seed.MstreamlysCan be used to enumerate unbounded integrals. This does not check for overflow or underflow for bounded integrals.Nstreamly'Convert a list of monadic actions to a Ostreamly1Run a streaming composition, discard the results.Pstreamly)Performs infix separator style splitting.Qstreamly)Performs infix separator style splitting.Rstreamly1Execute a monadic action for each element of the SstreamlyconcatMapU unfold stream uses unfolds to map the input stream elements to streams and then flattens the generated streams into a single output stream.TstreamlyInterleave streams (full streams, not the elements) unfolded from two input streams and concat. Stop when the first stream stops. If the second stream ends before the first one then first stream still keeps running alone without any interleaving with the second stream. a1, a2, ... an[b1, b2 ...] => [streamA1, streamA2, ... streamAn] [streamB1, streamB2, ...] => [streamA1, streamB1, streamA2...StreamAn, streamBn] => [a11, a12, ...a1j, b11, b12, ...b1k, a21, a22, ...]UstreamlyInterleave streams (full streams, not the elements) unfolded from two input streams and concat. Stop when the first stream stops. If the second stream ends before the first one then first stream still keeps running alone without any interleaving with the second stream. a1, a2, ... an[b1, b2 ...] => [streamA1, streamA2, ... streamAn] [streamB1, streamB2, ...] => [streamA1, streamB1, streamA2...StreamAn, streamBn] => [a11, a12, ...a1j, b11, b12, ...b1k, a21, a22, ...]VstreamlyThe most general bracketing and exception combinator. All other combinators can be expressed in terms of this combinator. This can also be used for cases which are not covered by the standard combinators.InternalWstreamly=Run a side effect before the stream yields its first element.Xstreamly5Run a side effect whenever the stream stops normally.YstreamlypRun a side effect whenever the stream aborts due to an exception. 
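A sketch of the bracketing combinators described above, assuming bracket is exported from Streamly.Prelude as in streamly 0.7: the resource is acquired before the stream starts and released whether the stream stops normally or aborts with an exception.

import qualified Streamly.Prelude as S
import Streamly (SerialT)

-- Acquire a resource before the stream starts, release it afterwards.
withResource :: SerialT IO Int
withResource =
    S.bracket (putStrLn "acquire" >> return (42 :: Int))
              (\r -> putStrLn ("release " ++ show r))
              (\r -> S.fromList [r, r + 1, r + 2])

main :: IO ()
main = S.drain $ S.mapM print withResource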
The exception is not caught, simply rethrown.ZstreamlyRun the first action before the stream starts and remember its output, generate a stream using the output, run the second action providing the remembered value as an argument whenever the stream ends normally or due to an exception.[streamlyTRun a side effect whenever the stream stops normally or aborts due to an exception.\streamlyWhen evaluating a stream if an exception occurs, stream evaluation aborts and the specified exception handler is run with the exception as argument.]streamlyiReturn element at the specified index without checking the bounds. and without touching the foreign ptr.Vstreamlybeforestreamlytry (exception handling)streamlyafter, on normal stopstreamly on exceptionstreamlystream generator^_`abcdefghijklmnopqIJKrstLuvwxMyz{|}~NOPQRSTUVWXYZ[\$(c) 2017 Harendra KumarBSD3streamly@composewell.com experimentalGHCNone Cstreamly  fromList =  5 7 LConstruct a stream from a list of pure values. This is more efficient than : for serial streams.streamly5Convert a stream into a list in the underlying monad.streamlyLike #, but with a monadic step function.streamlyStrict left fold with an extraction function. Like the standard strict left fold, but applies a user supplied extraction function (the third argument) to the folded value at the end. This is designed to work with the foldl library. The suffix x is a mnemonic for extraction.streamlyStrict left associative fold.streamly&Lazy left fold to a transformer monad.!For example, to reverse a stream: OS.toList $ S.foldlT (flip S.cons) S.nil $ (S.fromList [1..5] :: SerialT IO Int)streamly3Strict left scan with an extraction function. Like scanl'y, but applies a user supplied extraction function (the third argument) at each step. This is designed to work with the foldl library. The suffix x is a mnemonic for extraction.streamly Compare two streams for equalitystreamlyCompare two streamsDstreamly A variant of %& that allows you to fold a H@ container of streams using the specified stream sum operation.  foldWith async $ map return [1..3]Equivalent to:  foldWith f = S.foldMapWith f id Since: 0.1.0 (Streamly)Estreamly A variant of 9 that allows you to map a monadic streaming action on a HH container and then fold it using the specified stream merge operation.  foldMapWith async return [1..3]Equivalent to: =foldMapWith f g xs = S.concatMapWith f g (S.fromFoldable xs) Since: 0.1.0 (Streamly)FstreamlyLike Ed but with the last two arguments reversed i.e. the monadic streaming function is the last argument.Equivalent to: !forEachWith = flip S.foldMapWith Since: 0.1.0 (Streamly)CDEF'(c) 2017 Harendra KumarBSD3streamly@composewell.com experimentalGHCNone,/=>?@AHMV[GstreamlyHstreamly5An interleaving serial IO stream of elements of type a. See I! documentation for more details.IstreamlyThe  operation for IB interleaves the elements from the two streams. Therefore, when a <> b is evaluated, stream aZ is evaluated first to produce the first element of the combined stream and then stream bl is evaluated to produce the next element of the combined stream, and then we go back to evaluating stream a4 and so on. In other words, the elements of stream a- are interleaved with the elements of stream b.2Note that when multiple actions are combined like a <> b <> c ... <> z. we interleave them in a binary fashion i.e. a and bE are interleaved with each other and the result is interleaved with c and so on. This will not act as a true round-robin scheduling across all the streams. 
Note that this operation cannot be used to fold a container of infinite streams as the state that it needs to maintain is proportional to the number of streams. !import Streamly import qualified Streamly.Prelude as S main = (S.toList . O7 $ (S.fromList [1,2]) <> (S.fromList [3,4])) >>= print   [1,3,2,4] Similarly, the Fo instance interleaves the iterations of the inner and the outer loop, nesting loops in a breadth first manner. main = S.drain . O^ $ do x <- return 1 <> return 2 y <- return 3 <> return 4 S.yieldM $ print (x, y) (1,3) (2,3) (1,4) (2,4) JstreamlyKstreamly'A serial IO stream of elements of type a. See L! documentation for more details.LstreamlyThe  operation for L< behaves like a regular append operation. Therefore, when a <> b is evaluated, stream a7 is evaluated first until it exhausts and then stream b7 is evaluated. In other words, the elements of stream b( are appended to the elements of stream aL. This operation can be used to fold an infinite lazy container of streams. !import Streamly import qualified Streamly.Prelude as S main = (S.toList . M7 $ (S.fromList [1,2]) <> (S.fromList [3,4])) >>= print   [1,2,3,4] The F instance runs the monadic continuation+ for each element of the stream, serially. main = S.drain . M; $ do x <- return 1 <> return 2 S.yieldM $ print x  1 2 L0 nests streams serially in a depth first manner. main = S.drain . M^ $ do x <- return 1 <> return 2 y <- return 3 <> return 4 S.yieldM $ print (x, y)  (1,3) (1,4) (2,3) (2,4) We call the monadic code being run for each element of the stream a monadic continuation. In imperative paradigm we can think of this composition as nested foro loops and the monadic continuation is the body of the loop. The loop iterates for all elements of the stream.)Note that the behavior and semantics of L , including  and F6 instances are exactly like Haskell lists except that L5 can contain effectful actions while lists are pure.In the code above, the M: combinator can be omitted as the default stream type is L.Mstreamly(Fix the type of a polymorphic stream as L.Nstreamly  map = fmap Same as . 5> S.toList $ S.map (+1) $ S.fromList [1,2,3] [2,3,4] Ostreamly(Fix the type of a polymorphic stream as I.PstreamlySame as O.QstreamlyPolymorphic version of the  operation  of I. Interleaves two streams, yielding one element from each stream alternately. When one stream stops the rest of the other stream is used in the output stream.streamlyLike Q: but stops interleaving as soon as the first stream stops.streamlyLike QA but stops interleaving as soon as any of the two streams stops.RstreamlySame as Q.8GHIJKLMNOPQRR5((c) 2017 Harendra KumarBSD3streamly@composewell.com experimentalGHCNone ,/=>?@AM& Sstreamly4A parallely composing IO stream of elements of type a. See T documentation for more details.TstreamlyBAsync composition with strict concurrent execution of all streams.The  instance of Tk executes both the streams concurrently without any delay or without waiting for the consumer demand and merges the results as they arrive. If the consumer does not consume the results, they are buffered upto a configured maximum, controlled by the  maxBufferk primitive. If the buffer becomes full the concurrent tasks will block until there is space in the buffer.Both WAsyncT and T`, evaluate the constituent streams fairly in a round robin fashion. The key difference is that WAsyncTJ might wait for the consumer demand before it executes the tasks whereas T[ starts executing all the tasks immediately without waiting for the consumer demand. 
For WAsyncT the  maxThreads limit applies whereas for T% it does not apply. In other words, WAsyncT can be lazy whereas T is strict.Tt is useful for cases when the streams are required to be evaluated simultaneously irrespective of how the consumer consumes them e.g. when we want to race two tasks and want to start both strictly at the same time or if we have timers in the parallel tasks and our results depend on the timers being started at the same time. If we do not have such requirements then AsyncT or AheadT5 are recommended as they can be more efficient than T. main = (toList . Z; $ (fromFoldable [1,2]) <> (fromFoldable [3,4])) >>= print   [1,3,2,4] zWhen streams with more than one element are merged, it yields whichever stream yields first without any bias, unlike the Async style streams.Any exceptions generated by a constituent stream are propagated to the output stream. The output and exceptions from a single stream are guaranteed to arrive in the same order in the resulting stream as they were generated in the input stream. However, the relative ordering of elements from different streams in the resulting stream can vary depending on scheduling and generation delays.Similarly, the F instance of T runs all& iterations of the loop concurrently. import Streamly import qualified Streamly.Prelude( as S import Control.Concurrent main = drain . Z $ do n <- return 3 <> return 2 <> return 1 S.yieldM $ do threadDelay (n * 1000000) myThreadId >>= \tid -> putStrLn (show tid ++ ": Delay " ++ show n)  ?ThreadId 40: Delay 1 ThreadId 39: Delay 2 ThreadId 38: Delay 3 Note that parallel composition can only combine a finite number of streams as it needs to retain state for each unfinished stream.5Since: 0.7.0 (maxBuffer applies to ParallelT streams) Since: 0.1.0streamlynXXX we can implement it more efficienty by directly implementing instead of combining streams using parallel.UstreamlyPolymorphic version of the  operation  of T" Merges two streams concurrently.streamlyLike U8 but stops the output as soon as the first stream stops.streamlyLike U? but stops the output as soon as any of the two streams stops.streamlyRedirect a copy of the stream to a supplied fold and run it concurrently in an independent thread. The fold may buffer some elements. The buffer size is determined by the prevailing  maxBuffer setting. h Stream m a -> m b | -----stream m a ---------------stream m a-----  C> S.drain $ S.tapAsync (S.mapM_ print) (S.enumerateFromTo 1 2) 1 2 fExceptions from the concurrently running fold are propagated to the current computation. Note that, because of buffering in the fold, exceptions may be delayed and may not correspond to the current element being processed in the parent stream, but we guarantee that the tap finishes and all exceptions from it are drained before the parent stream stops. Compare with tap.VstreamlyiParallel function application operator for streams; just like the regular function application operator } except that it is concurrent. The following code prints a value every second even though each stage adds a 1 second delay. mdrain $ S.mapM (\x -> threadDelay 1000000 >> print x) |$ S.repeatM (threadDelay 1000000 >> return 1)  ConcurrentWstreamlyyParallel reverse function application operator for streams; just like the regular reverse function application operator & except that it is concurrent. 
ndrain $ S.repeatM (threadDelay 1000000 >> return 1) |& S.mapM (\x -> threadDelay 1000000 >> print x)  ConcurrentXstreamly2Parallel function application operator; applies a run or fold_ function to a stream such that the fold consumer and the stream producer run in parallel. A run or foldF function reduces the stream to a value in the underlying monad. The .I at the end of the operator is a mnemonic for termination of the stream. o S.foldlM' (\_ a -> threadDelay 1000000 >> print a) () |$. S.repeatM (threadDelay 1000000 >> return 1)  ConcurrentYstreamlylParallel reverse function application operator for applying a run or fold functions to a stream. Just like X' except that the operands are reversed. p S.repeatM (threadDelay 1000000 >> return 1) |&. S.foldlM' (\_ a -> threadDelay 1000000 >> print a) ()  ConcurrentZstreamly(Fix the type of a polymorphic stream as T. STUVWXYZV0W1X0Y1)(c) 2017 Harendra KumarBSD3streamly@composewell.com experimentalGHCNone [streamlySpecify the maximum number of threads that can be spawned concurrently for any concurrent combinator in a stream. A value of 0 resets the thread limit to default, a negative value means there is no limit. The default value is 1500. [ does not affect  ParallelT5 streams as they can use unbounded number of threads.When the actions in a stream are IO bound, having blocking IO calls, this option can be used to control the maximum number of in-flight IO requests. When the actions are CPU bound this option can be used to control the amount of CPU used by the stream.\streamly;Specify the maximum size of the buffer for storing the results from concurrent computations. If the buffer becomes full we stop spawning more concurrent tasks until there is space in the buffer. A value of 0 resets the buffer size to default, a negative value means there is no limit. The default value is 1500.CAUTION! using an unbounded \: value (i.e. a negative value) coupled with an unbounded [ value is a recipe for disaster in presence of infinite streams, or very large streams. Especially, it must not be used when v is used in  ZipAsyncM streams as v in applicative zip streams generates an infinite stream causing unbounded concurrent generation with no limit on the buffer or threads.]streamly&Specify the pull rate of a stream. A   value resets the rate to default which is unlimited. When the rate is specified, concurrent production may be ramped up or down automatically to achieve the specified yield rate. The specific behavior for different styles of $ specifications is documented under N. The effective maximum production rate achieved by a stream is governed by:The [ limitThe \ limit5The maximum rate that the stream producer can achieve5The maximum rate that the stream consumer can achieve^streamlySame as )rate (Just $ Rate (r/2) r (2*r) maxBound)YSpecifies the average production rate of a stream in number of yields per second (i.e. Hertz). Concurrent production is ramped up or down automatically to achieve the specified average yield rate. The rate can go down to half of the specified rate on the lower side and double of the specified rate on the higher side._streamlySame as %rate (Just $ Rate r r (2*r) maxBound)Specifies the minimum rate at which the stream should yield values. 
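A sketch of the rate control described above, assuming avgRate and asyncly are exported from the Streamly module as in streamly 0.7: the producer is throttled so that, on average, roughly two elements are yielded per second, with concurrency ramped up or down as needed.

import Streamly
import qualified Streamly.Prelude as S

main :: IO ()
main = S.drain
     $ asyncly
     $ avgRate 2                         -- target ~2 yields per second
     $ S.replicateM 10 (pure ())         -- ten cheap actions, now paced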
minRate: Same as rate (Just $ Rate r r (2*r) maxBound). Specifies the minimum rate at which the stream should yield values. As far as possible the yield rate is never allowed to go below the specified rate, even though it may go above it at times; the upper limit is double of the specified rate.

maxRate: Same as rate (Just $ Rate (r/2) r r maxBound). Specifies the maximum rate at which the stream should yield values. As far as possible the yield rate is never allowed to go above the specified rate, even though it may go below it at times; the lower limit is half of the specified rate. This can be useful in applications where certain resource usage must not be allowed to go beyond certain limits.

constRate: Same as rate (Just $ Rate r r r 0). Specifies a constant yield rate. If for some reason the actual rate goes above or below the specified rate, we do not try to recover it by increasing or decreasing the rate in the future. This can be useful in applications like graphics frame refresh, where we need to maintain a constant refresh rate.

setStreamLatency: Specify the average latency, in nanoseconds, of a single threaded action in a concurrent composition. Streamly can measure latencies, but only after at least one task has completed. This combinator provides a latency hint so that rate control using rate can take it into account right from the beginning. When not specified, a default behavior is chosen which could be too slow or too fast, and would be restricted by any other control parameters configured. A value of 0 indicates default behavior, a negative value means there is no limit i.e. zero latency. This is normally useful only in high latency and high throughput cases.

inspectMode: Print debug information about the SVar when the stream ends.
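A small sketch of the constant-rate use case, assuming constRate and asyncly as documented above; the 30 Hz figure and the frameClock name are illustrative:

    import Streamly
    import qualified Streamly.Prelude as S

    -- A hypothetical 30 Hz ticker; constRate keeps the yield rate constant,
    -- which suits use cases like a frame-refresh clock.
    frameClock :: SerialT IO ()
    frameClock = asyncly $ constRate 30 $ S.repeatM (return ())

    main :: IO ()
    main = S.drain $ S.take 90 $ S.mapM (\_ -> putStrLn "frame") frameClock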
WAsync: A round-robin parallelly composing IO stream of elements of type a. See WAsyncT documentation for more details.

WAsyncT: The Semigroup operation <> for WAsyncT interleaves the elements from the two streams. Therefore, when a <> b is evaluated, one action is picked from stream a for evaluation, then the next action is picked from stream b, and then the next action is again picked from stream a, going around in a round-robin fashion. Many such actions are executed concurrently depending on the maxThreads and maxBuffer limits. Results are served to the consumer in the order of completion of the actions. Note that when multiple actions are combined like a <> b <> c ... <> z, we go round-robin across all of them, picking one action from each up to z, and then come back to a. This operation cannot be used to fold a container of infinite streams, as the state that it needs to maintain is proportional to the number of streams.

    import Streamly
    import qualified Streamly.Prelude as S
    import Control.Concurrent

    main = (S.toList . wAsyncly $ (S.fromList [1,2]) <> (S.fromList [3,4])) >>= print
    -- [1,3,2,4]

Any exceptions generated by a constituent stream are propagated to the output stream. The output and exceptions from a single stream are guaranteed to arrive in the same order in the resulting stream as they were generated in the input stream; however, the relative ordering of elements from different streams can vary depending on scheduling and generation delays.

Similarly, the Monad instance of WAsyncT runs all iterations fairly concurrently, using a round-robin scheduling.

    main = drain . wAsyncly $ do
        n <- return 3 <> return 2 <> return 1
        S.yieldM $ do
            threadDelay (n * 1000000)
            myThreadId >>= \tid -> putStrLn (show tid ++ ": Delay " ++ show n)

    -- ThreadId 40: Delay 1
    -- ThreadId 39: Delay 2
    -- ThreadId 38: Delay 3

Async: A demand driven, left biased, parallelly composing IO stream of elements of type a. See AsyncT documentation for more details.

AsyncT: The Semigroup operation <> for AsyncT appends two streams. The combined stream behaves like a single stream with the actions from the second stream appended to the first stream, and is evaluated in the asynchronous style. This operation can be used to fold an infinite lazy container of streams.

    import Streamly
    import qualified Streamly.Prelude as S
    import Control.Concurrent

    main = (S.toList . asyncly $ (S.fromList [1,2]) <> (S.fromList [3,4])) >>= print
    -- [1,2,3,4]

Any exceptions generated by a constituent stream are propagated to the output stream. The output and exceptions from a single stream are guaranteed to arrive in the same order as they were generated in the input stream; the relative ordering of elements from different streams can vary depending on scheduling and generation delays.

Similarly, the Monad instance of AsyncT may run each iteration concurrently based on demand; more concurrent iterations are started only if the previous iterations are not able to produce enough output for the consumer.

    main = drain . asyncly $ do
        n <- return 3 <> return 2 <> return 1
        S.yieldM $ do
            threadDelay (n * 1000000)
            myThreadId >>= \tid -> putStrLn (show tid ++ ": Delay " ++ show n)

    -- ThreadId 40: Delay 1
    -- ThreadId 39: Delay 2
    -- ThreadId 38: Delay 3

mkAsync: Make a stream asynchronous; this triggers the computation and returns a stream in the underlying monad representing the output generated by the original computation. The returned action is exhaustible and must be drained once; if not drained fully, a thread may stay blocked forever, and once exhausted it will always return empty.

(Internal) Create a new SVar and enqueue one stream computation on it.

(Internal) Join two computations on the currently running SVar queue for concurrent execution. When we are using parallel composition, an SVar is passed around as a state variable; we try to schedule a new parallel computation on the SVar passed to us. The first time, when no SVar exists, a new SVar is created. Subsequently, this may get called when a computation already scheduled on the SVar is further evaluated; for example, when (a `parallel` b) is evaluated, it puts a and b on the current scheduler queue. The scheduling style required by the current composition context is passed as one of the parameters. If the scheduling and composition style of the new computation differs from the style of the current SVar, we create a new SVar and schedule the computation on it; the newly created SVar joins as one of the computations on the current SVar queue. Cases when we need to switch to a new SVar: (x `parallel` y) `parallel` (t `parallel` u), where all of them get scheduled on the same SVar; (x `parallel` y) `parallel` (t `wAsync` u), where t and u get scheduled on a new child SVar because of the scheduling policy change; if we bind a stream of one scheduling type to a stream of the Parallel type, we create a new SVar at the transitioning bind; and when the stream is switching from disjunctive composition to conjunctive composition and vice versa, we create a new SVar to isolate the scheduling of the two.
async: Polymorphic version of the Semigroup operation <> of AsyncT. Merges two streams, possibly concurrently, preferring elements from the left one when available. (This could be implemented more efficiently directly instead of by combining streams using async.)

(<|): Same as async (deprecated).

asyncly: Fix the type of a polymorphic stream as AsyncT.

wAsync: Polymorphic version of the Semigroup operation <> of WAsyncT. Merges two streams concurrently, choosing elements from both fairly. (This could be implemented more efficiently directly instead of by combining streams using wAsync.)

wAsyncly: Fix the type of a polymorphic stream as WAsyncT.

Ahead: A serial IO stream of elements of type a with concurrent lookahead. See AheadT documentation for more details.

AheadT: The Semigroup operation <> for AheadT appends two streams. The combined stream behaves like a single stream with the actions from the second stream appended to the first stream, and is evaluated in the speculative style. This operation can be used to fold an infinite lazy container of streams.

    import Streamly
    import qualified Streamly.Prelude as S
    import Control.Concurrent

    main = do
        xs <- S.toList . aheadly $ (p 1 |: p 2 |: nil) <> (p 3 |: p 4 |: nil)
        print xs
      where p n = threadDelay 1000000 >> return n
    -- [1,2,3,4]

Any exceptions generated by a constituent stream are propagated to the output stream. The Monad instance of AheadT may run each monadic continuation (bind) concurrently in a speculative manner, performing side effects in a partially ordered manner but producing the outputs in an ordered manner like SerialT.

    main = S.drain . aheadly $ do
        n <- return 3 <> return 2 <> return 1
        S.yieldM $ do
            threadDelay (n * 1000000)
            myThreadId >>= \tid -> putStrLn (show tid ++ ": Delay " ++ show n)

    -- ThreadId 40: Delay 1
    -- ThreadId 39: Delay 2
    -- ThreadId 38: Delay 3

ahead: Polymorphic version of the Semigroup operation <> of AheadT. Merges two streams sequentially but with concurrent lookahead. (This could be implemented more efficiently directly instead of by combining streams using ahead.)

aheadly: Fix the type of a polymorphic stream as AheadT.
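A minimal sketch, assuming asyncly, aheadly and S.mapM as documented here, contrasting completion-order output (AsyncT) with source-order output under speculative execution (AheadT):

    import Streamly
    import qualified Streamly.Prelude as S
    import Control.Concurrent (threadDelay)

    -- Sleep for n seconds, then yield n.
    delay :: Int -> IO Int
    delay n = threadDelay (n * 1000000) >> return n

    main :: IO ()
    main = do
        -- AsyncT: results arrive roughly in completion order
        -- (often [1,2,3] here, though the exact order is not guaranteed).
        S.toList (asyncly $ S.mapM delay $ S.fromList [3,2,1]) >>= print
        -- AheadT: effects may run speculatively ahead, but results keep
        -- the source order: [3,2,1].
        S.toList (aheadly $ S.mapM delay $ S.fromList [3,2,1]) >>= print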
Enumerable: Types that can be enumerated as a stream. The operations in this type class are equivalent to those in the Enum type class, except that these generate a stream instead of a list. Use the functions in the Streamly.Streams.Enumeration module to define new instances.

enumerateFrom: enumerateFrom from generates a stream starting with the element from, enumerating up to maxBound when the type is Bounded, or generating an infinite stream when it is not.

    > S.toList $ S.take 4 $ S.enumerateFrom (0 :: Int)
    [0,1,2,3]

For Fractional types, enumeration is numerically stable; however, no overflow or underflow checks are performed.

    > S.toList $ S.take 4 $ S.enumerateFrom 1.1
    [1.1,2.1,3.1,4.1]

enumerateFromTo: Generate a finite stream starting with the element from, enumerating the type up to the value to. If to is smaller than from then an empty stream is returned.

    > S.toList $ S.enumerateFromTo 0 4
    [0,1,2,3,4]

For Fractional types, the last element is equal to the specified to value after rounding to the nearest integral value.

    > S.toList $ S.enumerateFromTo 1.1 4
    [1.1,2.1,3.1,4.1]
    > S.toList $ S.enumerateFromTo 1.1 4.6
    [1.1,2.1,3.1,4.1,5.1]

enumerateFromThen: enumerateFromThen from then generates a stream whose first element is from, whose second element is then, and whose successive elements are in increments of (then - from). Enumeration can occur downwards or upwards depending on whether then comes before or after from. For Bounded types the stream ends when the bound is reached; for unbounded types it keeps enumerating infinitely.

    > S.toList $ S.take 4 $ S.enumerateFromThen 0 2
    [0,2,4,6]
    > S.toList $ S.take 4 $ S.enumerateFromThen 0 (-2)
    [0,-2,-4,-6]

enumerateFromThenTo: enumerateFromThenTo from then to generates a finite stream whose first element is from, whose second element is then, and whose successive elements are in increments of (then - from), up to to. Enumeration can occur downwards or upwards depending on whether then comes before or after from.

    > S.toList $ S.enumerateFromThenTo 0 2 6
    [0,2,4,6]
    > S.toList $ S.enumerateFromThenTo 0 (-2) (-6)
    [0,-2,-4,-6]

enumerateFromStepIntegral: enumerateFromStepIntegral from step generates an infinite stream whose first element is from and whose successive elements are in increments of step. CAUTION: this function is not safe for finite integral types, as it does not check for overflow, underflow or bounds.

    > S.toList $ S.take 4 $ S.enumerateFromStepIntegral 0 2
    [0,2,4,6]
    > S.toList $ S.take 3 $ S.enumerateFromStepIntegral 0 (-2)
    [0,-2,-4]

enumerateFromIntegral: Enumerate an Integral type. enumerateFromIntegral from generates a stream whose first element is from and whose successive elements are in increments of 1. The stream is bounded by the size of the Integral type.

    > S.toList $ S.take 4 $ S.enumerateFromIntegral (0 :: Int)
    [0,1,2,3]

enumerateFromThenIntegral: Enumerate an Integral type in steps. enumerateFromThenIntegral from then generates a stream whose first element is from, whose second element is then, and whose successive elements are in increments of (then - from). The stream is bounded by the size of the Integral type.

    > S.toList $ S.take 4 $ S.enumerateFromThenIntegral (0 :: Int) 2
    [0,2,4,6]
    > S.toList $ S.take 4 $ S.enumerateFromThenIntegral (0 :: Int) (-2)
    [0,-2,-4,-6]

enumerateFromToIntegral: Enumerate an Integral type up to a given limit. enumerateFromToIntegral from to generates a finite stream whose first element is from and whose successive elements are in increments of 1, up to to.

    > S.toList $ S.enumerateFromToIntegral 0 4
    [0,1,2,3,4]

enumerateFromThenToIntegral: Enumerate an Integral type in steps up to a given limit. enumerateFromThenToIntegral from then to generates a finite stream whose first element is from, whose second element is then, and whose successive elements are in increments of (then - from), up to to.

    > S.toList $ S.enumerateFromThenToIntegral 0 2 6
    [0,2,4,6]
    > S.toList $ S.enumerateFromThenToIntegral 0 (-2) (-6)
    [0,-2,-4,-6]

enumerateFromFractional: Numerically stable enumeration from a Fractional number in steps of size 1. enumerateFromFractional from generates a stream whose first element is from and whose successive elements are in increments of 1. No overflow or underflow checks are performed. This is the equivalent of enumerateFromIntegral for Fractional types. For example:

    > S.toList $ S.take 4 $ S.enumerateFromFractional 1.1
    [1.1,2.1,3.1,4.1]

enumerateFromThenFractional: Numerically stable enumeration from a Fractional number in steps. enumerateFromThenFractional from then generates a stream whose first element is from, whose second element is then, and whose successive elements are in increments of (then - from). No overflow or underflow checks are performed. This is the equivalent of enumerateFromThenIntegral for Fractional types. For example:

    > S.toList $ S.take 4 $ S.enumerateFromThenFractional 1.1 2.1
    [1.1,2.1,3.1,4.1]
    > S.toList $ S.take 4 $ S.enumerateFromThenFractional 1.1 (-2.1)
    [1.1,-2.1,-5.300000000000001,-8.500000000000002]
enumerateFromToFractional: Numerically stable enumeration from a Fractional number to a given limit. enumerateFromToFractional from to generates a finite stream whose first element is from and whose successive elements are in increments of 1, up to to. This is the equivalent of enumerateFromToIntegral for Fractional types. For example:

    > S.toList $ S.enumerateFromToFractional 1.1 4
    [1.1,2.1,3.1,4.1]
    > S.toList $ S.enumerateFromToFractional 1.1 4.6
    [1.1,2.1,3.1,4.1,5.1]

Notice that the last element is equal to the specified to value after rounding to the nearest integer.

enumerateFromThenToFractional: Numerically stable enumeration from a Fractional number in steps up to a given limit. enumerateFromThenToFractional from then to generates a finite stream whose first element is from, whose second element is then, and whose successive elements are in increments of (then - from), up to to. This is the equivalent of enumerateFromThenToIntegral for Fractional types. For example:

    > S.toList $ S.enumerateFromThenToFractional 0.1 2 6
    [0.1,2.0,3.9,5.799999999999999]
    > S.toList $ S.enumerateFromThenToFractional 0.1 (-2) (-6)
    [0.1,-2.0,-4.1000000000000005,-6.200000000000001]

Small Bounded Enum helpers: variants of enumerateFromTo, enumerateFromThenTo and enumerateFromThen for Bounded Enum types not larger than Int. Note: these convert the Enum to Int and enumerate the Int. If a type is bounded but does not have a Bounded instance, then we could go on enumerating it beyond the legal values of the type, resulting in a failure of toEnum when converting back to the original type; therefore a Bounded instance is required for these functions to be used safely.

enumerate: enumerate = enumerateFrom minBound. Enumerate a Bounded type from its minBound to maxBound.

enumerateTo: enumerateTo = enumerateFromTo minBound. Enumerate a Bounded type from its minBound to the specified value.

enumerateFromBounded: enumerateFromBounded from = enumerateFromTo from maxBound. enumerateFrom for Bounded Enum types.

Unfold combinators (Internal):

lmap: Map a function on the input argument of the Unfold.

lmapM: Map an action on the input argument of the Unfold.

supply: Supply the seed to an unfold, closing the input end of the unfold.

supplyFirst: Supply the first component of the tuple to an unfold that accepts a tuple as a seed, resulting in an unfold that accepts the second component of the tuple as a seed.

supplySecond: Supply the second component of the tuple to an unfold that accepts a tuple as a seed, resulting in an unfold that accepts the first component of the tuple as a seed.

discardFirst: Convert an Unfold into an unfold accepting a tuple as an argument, using the argument of the original unfold as the second element of the tuple and discarding the first element.

discardSecond: Convert an Unfold into an unfold accepting a tuple as an argument, using the argument of the original unfold as the first element of the tuple and discarding the second element.

swap: Convert an Unfold that accepts a tuple as an argument into an unfold that accepts a tuple with the elements swapped.

fold: Compose an Unfold and a Fold. Given an Unfold m a b and a Fold m b c, returns a monadic action a -> m c representing the application of the fold on the unfolded stream.

fromStream: Convert a stream into an Unfold. Note that a stream converted to an Unfold may not be as efficient as a native Unfold in some situations.

fromStream1: Convert a single argument stream generator function into an Unfold. Note that a stream converted to an Unfold may not be as efficient as a native Unfold in some situations.
fromStream2 (Internal): Convert a two argument stream generator function into an Unfold. Note that a stream converted to an Unfold may not be as efficient as a native Unfold in some situations.

nilM: Lift a monadic function into an unfold generating a nil stream with a side effect.

consM (Internal): Prepend a monadic single element generator function to an Unfold.

effect: Lift a monadic effect into an unfold generating a singleton stream.

singletonM: Lift a monadic function into an unfold generating a singleton stream.

identity: Identity unfold; generates a singleton stream with the seed as the only element in the stream. identity = singleton return

replicateM: Generates a stream replicating the seed n times.

fromList: Convert a list of pure values to a stream.

fromListM: Convert a list of monadic values to a stream.

enumerateFromStepIntegral: Can be used to enumerate unbounded integrals. This does not check for overflow or underflow for bounded integrals.

gbracket (Internal): The most general bracketing and exception combinator. All other combinators can be expressed in terms of this one, and it can also be used for cases not covered by the standard combinators. Its arguments are: the "before" action, a "try" (exception handling) action, the "after" action run on normal stop, the unfold to run on exception, and the unfold to run.

before (Internal): Run a side effect before the unfold yields its first element.

after (Internal): Run a side effect whenever the unfold stops normally.

onException (Internal): Run a side effect whenever the unfold aborts due to an exception.

finally (Internal): Run a side effect whenever the unfold stops normally or aborts due to an exception.

bracket (Internal): bracket before after between runs the before action and then unfolds its output using the between unfold. When the between unfold is done, or if an exception occurs, the after action is run with the output of before as argument.

handle (Internal): When unfolding, if an exception occurs, unfold the exception using the exception unfold supplied as the first argument to handle.
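A small usage sketch for the Unfold combinators above, assuming Streamly.Internal.Data.Unfold (an internal module, so names and signatures may differ across versions) together with S.unfold from the Prelude; countTo is a hypothetical helper:

    import Streamly
    import qualified Streamly.Prelude as S
    import qualified Streamly.Internal.Data.Unfold as UF

    -- UF.fromList unfolds a list; UF.lmap adapts the seed so the unfold
    -- can be fed an Int n and produce the list [1..n].
    countTo :: Monad m => UF.Unfold m Int Int
    countTo = UF.lmap (\n -> [1 .. n]) UF.fromList

    main :: IO ()
    main = S.toList (S.unfold countTo 5) >>= print   -- [1,2,3,4,5]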
Memory array operations (Internal unless noted):

writeN: Create an Array from the first N elements of a stream. The array is allocated to size N; if the stream terminates before N elements, the array may hold fewer than N elements.

write: Create an Array from a stream. This is useful when we want to create a single array from a stream of unknown size. writeN is at least twice as efficient when the size is already known. Note that if the input stream is too large, memory allocation for the array may fail. When the stream size is not known, arraysOf followed by processing of individual arrays in the resulting stream should be preferred.

read: Convert an Array into a stream.

readRev: Convert an Array into a stream in reverse order.

read (unfold): Unfold an array into a stream.

null: null arr = length arr == 0

last: last arr = readIndex arr (length arr - 1)

readIndex: O(1) Lookup the element at the given index, starting from 0.

writeIndex: O(1) Write the given element at the given index in the array. Performs in-place mutation of the array.

streamTransform: Transform an array into another array using a stream transformation operation.

fold: Fold an array using a Fold.

streamFold: Fold an array using a stream fold operation.

ZipAsync: An IO stream whose Applicative instance zips streams wAsyncly.

ZipAsyncM: Like ZipSerialM but zips in parallel; it generates all the elements to be zipped concurrently.

    main = (toList . zipAsyncly $ (,,) <$> s1 <*> s2 <*> s3) >>= print
      where
        s1 = fromFoldable [1, 2]
        s2 = fromFoldable [3, 4]
        s3 = fromFoldable [5, 6]
    -- [(1,3,5),(2,4,6)]

The Semigroup instance of this type works the same way as that of SerialT.

ZipSerial: An IO stream whose Applicative instance zips streams serially.

ZipSerialM: The Applicative instance of ZipSerialM zips a number of streams serially, i.e. it produces one element from each stream serially and then zips all those elements.

    main = (toList . zipSerially $ (,,) <$> s1 <*> s2 <*> s3) >>= print
      where
        s1 = fromFoldable [1, 2]
        s2 = fromFoldable [3, 4]
        s3 = fromFoldable [5, 6]
    -- [(1,3,5),(2,4,6)]

The Semigroup instance of this type works the same way as that of SerialT.

zipSerially: Fix the type of a polymorphic stream as ZipSerialM. (zipping is a deprecated synonym.)

zipAsyncWith: Like zipWith but zips concurrently, i.e. both the streams being zipped are generated concurrently.

zipAsyncWithM: Like zipWithM but zips concurrently, i.e. both the streams being zipped are generated concurrently.

zipAsyncly: Fix the type of a polymorphic stream as ZipAsyncM. (zippingAsync is a deprecated synonym.)

uncons: Decompose a stream into its head and tail. If the stream is empty, returns Nothing. If the stream is non-empty, returns Just (a, ma), where a is the head of the stream and ma its tail. This is a brute force primitive; avoid using it as long as possible and use it only when no other combinator can do the job. It can be used to do pretty much anything in an imperative manner, as it breaks down the stream into individual elements that we can loop over as we see fit. For example, it can be used to convert a streamly stream into other stream types.
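A minimal sketch of the imperative style mentioned for uncons above (assuming S.uncons and SerialT as documented here):

    import Streamly (SerialT)
    import qualified Streamly.Prelude as S

    -- Walk a stream one element at a time using uncons; this is the brute
    -- force escape hatch, useful only when no combinator fits.
    printAll :: Show a => SerialT IO a -> IO ()
    printAll s = do
        r <- S.uncons s
        case r of
            Nothing        -> return ()
            Just (x, rest) -> print x >> printAll rest

    main :: IO ()
    main = printAll (S.fromList [1 :: Int, 2, 3])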
unfoldr: unfoldr step s = case step s of Nothing -> nil; Just (a, b) -> a `cons` unfoldr step b. Build a stream by unfolding a pure step function step starting from a seed s. The step function returns the next element in the stream and the next seed value; when it is done it returns Nothing and the stream ends. For example:

    let f b = if b > 3 then Nothing else Just (b, b + 1)
    in toList $ unfoldr f 0
    -- [0,1,2,3]

unfoldrM: Build a stream by unfolding a monadic step function starting from a seed. The step function returns the next element in the stream and the next seed value; when it is done it returns Nothing and the stream ends. For example:

    let f b = if b > 3 then return Nothing else print b >> return (Just (b, b + 1))
    in drain $ unfoldrM f 0
    -- 0 1 2 3

When run concurrently, the next unfold step can run concurrently with the processing of the output of the previous step. Note that more than one step cannot run concurrently, as the next step depends on the output of the previous step.

    (asyncly $ S.unfoldrM (\n -> liftIO (threadDelay 1000000) >> return (Just (n, n + 1))) 0)
        & S.foldlM' (\_ a -> threadDelay 1000000 >> print a) ()

Concurrent. Since: 0.1.0

unfold: Convert an Unfold into a stream by supplying it an input seed.

    unfold (UF.replicateM 10) (putStrLn "hello")

Since: 0.7.0

yield: yield a = a `cons` nil. Create a singleton stream from a pure value. The following holds in monadic streams, but not in Zip streams: yield = pure; yield = yieldM . pure. In Zip applicative streams yield is not the same as pure, because there pure is equivalent to repeat instead. yield and pure are equally efficient; in other cases yield may be slightly more efficient than the other equivalent definitions.

yieldM: yieldM m = m `consM` nil. Create a singleton stream from a monadic action.

    > toList $ yieldM getLine
    hello
    ["hello"]

fromIndices: fromIndices f = let g i = f i `cons` g (i + 1) in g 0. Generate an infinite stream whose values are the output of a function f applied on the corresponding index. Index starts at 0.

    > S.toList $ S.take 5 $ S.fromIndices id
    [0,1,2,3,4]

fromIndicesM: fromIndicesM f = let g i = f i `consM` g (i + 1) in g 0. Generate an infinite stream whose values are the output of a monadic function f applied on the corresponding index. Index starts at 0. Concurrent.

replicateM: replicateM n = take n . repeatM. Generate a stream by performing a monadic action n times. Same as:

    drain $ serially $ S.replicateM 10 $ (threadDelay 1000000 >> print 1)
    drain $ asyncly  $ S.replicateM 10 $ (threadDelay 1000000 >> print 1)

Concurrent.

replicate: replicate n = take n . repeat. Generate a stream of length n by repeating a value n times.

repeat: Generate an infinite stream by repeating a pure value.

repeatM: repeatM = fix . consM; repeatM = cycle1 . yieldM. Generate a stream by repeatedly executing a monadic action forever.

    drain $ serially $ S.take 10 $ S.repeatM $ (threadDelay 1000000 >> print 1)
    drain $ asyncly  $ S.take 10 $ S.repeatM $ (threadDelay 1000000 >> print 1)

Concurrent, infinite (do not use with parallely).

iterate: iterate f x = x `cons` iterate f x. Generate an infinite stream with x as the first element and each successive element derived by applying the function f on the previous element.

    > S.toList $ S.take 5 $ S.iterate (+1) 1
    [1,2,3,4,5]

iterateM: iterateM f m = m >>= \a -> return a `consM` iterateM f (f a). Generate an infinite stream with the first element generated by the action m and each successive element derived by applying the monadic function f on the previous element. When run concurrently, the next iteration can run concurrently with the processing of the previous iteration; more than one iteration cannot run concurrently, as the next iteration depends on the output of the previous one.

    drain $ serially $ S.take 10 $ S.iterateM (\x -> threadDelay 1000000 >> print x >> return (x + 1)) (return 0)
    drain $ asyncly  $ S.take 10 $ S.iterateM (\x -> threadDelay 1000000 >> print x >> return (x + 1)) (return 0)

Concurrent. Since: 0.7.0 (signature change); Since: 0.1.2

fromListM: fromListM = Prelude.foldr consM nil. Construct a stream from a list of monadic actions. This is more efficient than fromFoldableM for serial streams.
fromFoldableM: fromFoldableM = Prelude.foldr consM nil. Construct a stream from a Foldable containing monadic actions.

    drain $ serially $ S.fromFoldableM $ replicateM 10 (threadDelay 1000000 >> print 1)
    drain $ asyncly  $ S.fromFoldableM $ replicateM 10 (threadDelay 1000000 >> print 1)

Concurrent (do not use with parallely on infinite containers).

each (deprecated): Same as fromFoldable.

fromHandle: Read lines from an IO Handle into a stream of Strings.

foldrM: Right associative/lazy pull fold. foldrM build final stream constructs an output structure using the step function build. build is invoked with the next input element and the remaining (lazy) tail of the output structure, and builds a lazy output expression using the two. When the "tail structure" in the output expression is evaluated, it calls build again, thus lazily consuming the input stream, until either the output expression built by build is free of the "tail" or the input is exhausted, in which case final is used as the terminating case for the output structure. For example, to determine if any element in a stream is odd:

    S.foldrM (\x xs -> if odd x then return True else xs) (return False) $ S.fromList (2:4:5:undefined)
    > True

Since: 0.7.0 (signature changed); Since: 0.2.0 (signature changed); Since: 0.1.0

foldrS: Right fold to a streaming monad. foldrS S.cons S.nil === id. foldrS can be used to perform stateless stream to stream transformations like map and filter in general; it can be coupled with a scan to perform stateful transformations. Note, however, that custom map and filter routines can be much more efficient than this due to better stream fusion.

    S.toList $ S.foldrS S.cons S.nil $ S.fromList [1..5]
    > [1,2,3,4,5]

Find if any element in the stream is odd:

    S.toList $ S.foldrS (\x xs -> if odd x then return True else xs) (return False) $ (S.fromList (2:4:5:undefined) :: SerialT IO Int)
    > [True]

Map (+2) on odd elements and filter out the even elements:

    S.toList $ S.foldrS (\x xs -> if odd x then (x + 2) `S.cons` xs else xs) S.nil $ (S.fromList [1..5] :: SerialT IO Int)
    > [3,5,7]

foldrM can also be represented in terms of foldrS, however the former is much more efficient:

    foldrM f z s = runIdentityT $ foldrS (\x xs -> lift $ f x (runIdentityT xs)) (lift z) s

foldrT: Right fold to a transformer monad. This is the most general right fold function. foldrS is a special case of foldrT, however a foldrS implementation can be more efficient:

    foldrS = foldrT
    foldrM f z s = runIdentityT $ foldrT (\x xs -> lift $ f x (runIdentityT xs)) (lift z) s

foldrT can be used to translate streamly streams to other transformer monads, e.g. to a different streaming type.

foldr: Right fold, lazy for lazy monads and pure streams, and strict for strict monads. Please avoid using this routine in strict monads like IO unless you need a strict right fold. This is provided only for use in lazy monads (e.g. Identity) or pure streams. Note that with this signature it is not possible to implement a lazy foldr when the monad m is strict; in that case it would be strict in its accumulator and would necessarily consume all its input.

foldr1: Lazy right fold for non-empty streams, using the first element as the starting value. Returns Nothing if the stream is empty.

foldlx': Strict left fold with an extraction function. Like the standard strict left fold, but applies a user supplied extraction function (the third argument) to the folded value at the end. This is designed to work with the foldl library. The suffix x is a mnemonic for extraction.
foldl': Left associative/strict push fold. foldl' reduce initial stream invokes reduce with the accumulator and the next input in the input stream, using initial as the initial value of the accumulator. When the input is exhausted, the current value of the accumulator is returned. Make sure to use a strict data structure for the accumulator so as not to build unnecessary lazy expressions, unless that is what you want.

foldl1': Strict left fold for non-empty streams, using the first element as the starting value. Returns Nothing if the stream is empty.

foldlMx': Like foldlx' but with a monadic step function.

foldlM': Like foldl' but with a monadic step function.

fold: Fold a stream using the supplied left Fold.

    S.fold FL.sum (S.enumerateFromTo 1 100)
    5050

drain: drain = mapM_ (\_ -> return ()). Run a stream, discarding the results. By default it interprets the stream as SerialT; to run other types of streams use the type adapting combinators, for example drain . asyncly.

runStream: Run a stream, discarding the results. By default it interprets the stream as SerialT; to run other types of streams use the type adapting combinators, for example runStream . asyncly.

drainN: drainN n = drain . take n. Run at most n iterations of a stream.

runN: runN n = runStream . take n. Run at most n iterations of a stream.

drainWhile: drainWhile p = drain . takeWhile p. Run a stream as long as the predicate holds true.

runWhile: runWhile p = runStream . takeWhile p. Run a stream as long as the predicate holds true.

null: Determine whether the stream is empty.

head: Extract the first element of the stream, if any. head = (!! 0)

tail: tail = fmap (fmap snd) . uncons. Extract all but the first element of the stream, if any.

init: Extract all but the last element of the stream, if any.

last: Extract the last element of the stream, if any. last xs = xs !! (length xs - 1)

elem: Determine whether an element is present in the stream.

notElem: Determine whether an element is not present in the stream.

length: Determine the length of the stream.

all: Determine whether all elements of a stream satisfy a predicate.

any: Determine whether any of the elements of a stream satisfy a predicate.

and: Determine if all elements of a boolean stream are True.

or: Determine whether at least one element of a boolean stream is True.

sum: Determine the sum of all elements of a stream of numbers. Returns 0 when the stream is empty. Note that this is not numerically stable for floating point numbers.

product: Determine the product of all elements of a stream of numbers. Returns 1 when the stream is empty.

minimum: minimum = minimumBy compare. Determine the minimum element in a stream.

minimumBy: Determine the minimum element in a stream using the supplied comparison function.

maximum: maximum = maximumBy compare. Determine the maximum element in a stream.

maximumBy: Determine the maximum element in a stream using the supplied comparison function.

(!!): Lookup the element at the given index.

lookup: In a stream of (key-value) pairs (a, b), return the value b of the first pair where the key equals the given value a. lookup = snd <$> find ((==) . fst)
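As a complement to the fold entry above, a small sketch (assuming Streamly.Data.Fold as FL and its Applicative instance) that computes several results in a single pass over the stream:

    import qualified Streamly.Prelude as S
    import qualified Streamly.Data.Fold as FL

    -- Combining folds applicatively distributes the input to both folds,
    -- so the sum and the length are computed in one pass.
    main :: IO ()
    main = do
        (s, n) <- S.fold ((,) <$> FL.sum <*> FL.length)
                         (S.fromList [3, 1, 4, 1, 5 :: Int])
        print (s, n)   -- (14,5)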
find: Like findM but with a non-monadic predicate. find p = findM (return . p)

findM: Returns the first element that satisfies the given predicate.

findIndices: Find all the indices where the element in the stream satisfies the given predicate.

findIndex: Returns the first index that satisfies the given predicate.

elemIndices: Find all the indices where the value of the element in the stream is equal to the given value.

elemIndex: Returns the first index where a given value is found in the stream. elemIndex a = findIndex (== a)

isPrefixOf: Returns True if the first stream is the same as or a prefix of the second. A stream is a prefix of itself.

    > S.isPrefixOf (S.fromList "hello") (S.fromList "hello" :: SerialT IO Char)
    True

isSubsequenceOf: Returns True if all the elements of the first stream occur, in order, in the second stream. The elements do not have to occur consecutively. A stream is a subsequence of itself.

    > S.isSubsequenceOf (S.fromList "hlo") (S.fromList "hello" :: SerialT IO Char)
    True

stripPrefix: Drops the given prefix from a stream. Returns Nothing if the stream does not start with the given prefix. Returns Just nil when the prefix is the same as the stream.

mapM_: mapM_ = drain . mapM. Apply a monadic action to each element of the stream and discard the output of the action. This is not really a pure transformation operation but a transformation followed by a fold.

toList: toList = S.foldr (:) []. Convert a stream into a list in the underlying monad. The list can be consumed lazily in a lazy monad (e.g. Identity); in a strict monad (e.g. IO) the whole list is generated and buffered before it can be consumed. Warning! Working on large lists accumulated as buffers in memory could be very inefficient; consider using Streamly.Array instead.

toListRev (Internal): toListRev = S.foldl' (flip (:)) []. Convert a stream into a list in reverse order in the underlying monad. Warning! Working on large lists accumulated as buffers in memory could be very inefficient; consider using Streamly.Array instead.

toHandle: toHandle h = S.mapM_ $ hPutStrLn h. Write a stream of Strings to an IO Handle.

toStream (Internal): A fold that buffers its input to a pure stream. Warning! Working on large streams accumulated as buffers in memory could be very inefficient; consider using Streamly.Array instead.

toStreamRev (Internal): Buffers the input stream to a pure stream in the reverse order of the input. The same warning as for toStream applies.

toPure (Internal): Convert a stream to a pure stream. toPure = foldr cons nil

toPureRev (Internal): Convert a stream to a pure stream in reverse order. toPureRev = foldl' (flip cons) nil

transform (Internal): Use a Pipe to transform a stream.

scanx: Strict left scan with an extraction function. Like scanl', but applies a user supplied extraction function (the third argument) at each step. This is designed to work with the foldl library. The suffix x is a mnemonic for extraction. (A variant with a monadic fold function is also provided.) Since: 0.7.0 (Monad m constraint); Since 0.2.0
scanl': Strict left scan. Like map, scanl' too is a one to one transformation, however it adds an extra element.

    > S.toList $ S.scanl' (+) 0 $ S.fromList [1,2,3,4]
    [0,1,3,6,10]
    > S.toList $ S.scanl' (flip (:)) [] $ S.fromList [1,2,3,4]
    [[],[1],[2,1],[3,2,1],[4,3,2,1]]

The output of scanl' is the initial value of the accumulator followed by all the intermediate steps and the final result of foldl'. By streaming the accumulated state after each fold step, we can share the state across multiple stages of stream composition. Each stage can modify or extend the state, do some processing with it and emit it for the next stage, thus modularizing the stream processing. This can be useful in stateful or event-driven programming. Consider the following monolithic example, computing the sum and the product of the elements in a stream in one go using a foldl':

    > S.foldl' (\(s, p) x -> (s + x, p * x)) (0,1) $ S.fromList [1,2,3,4]
    (10,24)

Using scanl' we can make it modular by computing the sum in the first stage and passing it down to the next stage for computing the product:

    > S.foldl' (\(_, p) (s, x) -> (s, p * x)) (0,1)
        $ S.scanl' (\(s, _) x -> (s + x, x)) (0,1)
        $ S.fromList [1,2,3,4]
    (10,24)

IMPORTANT: scanl' evaluates the accumulator to WHNF. To avoid building lazy expressions inside the accumulator, it is recommended that a strict data structure be used for the accumulator.

postscanl': Like scanl' but does not stream the initial value of the accumulator. postscanl' f z xs = S.drop 1 $ S.scanl' f z xs

postscanlM': Like postscanl' but with a monadic step function.

prescanl' (Internal): Like scanl' but does not stream the final value of the accumulator.

prescanlM' (Internal): Like prescanl' but with a monadic step function.

scanl1M': Like scanl1 but with a monadic step function.

scanl1: Like scanl' but for a non-empty stream; the first element of the stream is used as the initial value of the accumulator. Does nothing if the stream is empty.

    > S.toList $ S.scanl1 (+) $ S.fromList [1,2,3,4]
    [1,3,6,10]

scan: Scan a stream using the given monadic fold.

postscan: Postscan a stream using the given monadic fold.

filter: Include only those elements that pass a predicate.

filterM: Same as filter but with a monadic predicate.

uniq: Drop repeated elements that are adjacent to each other.

the: Ensures that all the elements of the stream are identical and then returns that unique element.

take: Take the first n elements from the stream and discard the rest.

takeWhile: End the stream as soon as the predicate fails on an element.

takeWhileM: Same as takeWhile but with a monadic predicate.

drop: Discard the first n elements from the stream and take the rest.

dropWhile: Drop elements in the stream as long as the predicate succeeds and then take the rest of the stream.

dropWhileM: Same as dropWhile but with a monadic predicate.

mapM: mapM f = sequence . map f. Apply a monadic function to each element of the stream and replace it with the output of the resulting action.

    > drain $ S.mapM putStr $ S.fromList ["a", "b", "c"]
    abc

    drain $ S.replicateM 10 (return 1) & (serially . S.mapM (\x -> threadDelay 1000000 >> print x))
    drain $ S.replicateM 10 (return 1) & (asyncly  . S.mapM (\x -> threadDelay 1000000 >> print x))

Concurrent (do not use with parallely on infinite streams).

sequence: sequence = mapM id. Replace the elements of a stream of monadic actions with the outputs of those actions.

    > drain $ S.sequence $ S.fromList [putStr "a", putStr "b", putStrLn "c"]
    abc

    drain $ S.replicateM 10 (return $ threadDelay 1000000 >> print 1) & (serially . S.sequence)
    drain $ S.replicateM 10 (return $ threadDelay 1000000 >> print 1) & (asyncly  . S.sequence)

Concurrent (do not use with parallely on infinite streams).

mapMaybe: Map a Maybe returning function to a stream, filter out the Nothing elements, and return a stream of values extracted from Just. Equivalent to: mapMaybe f = S.map fromJust . S.filter isJust . S.map f

mapMaybeM: Like mapMaybe but maps a monadic function. Equivalent to: mapMaybeM f = S.map fromJust . S.filter isJust . S.mapM f. Concurrent (do not use with parallely on infinite streams).
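A small sketch of mapMaybe in action, assuming S.mapMaybe as documented above and readMaybe from Text.Read:

    import qualified Streamly.Prelude as S
    import Text.Read (readMaybe)

    -- Parse a stream of strings, silently dropping anything that is not an Int.
    main :: IO ()
    main = S.toList (S.mapMaybe (readMaybe :: String -> Maybe Int)
                                (S.fromList ["1", "two", "3"]))
           >>= print   -- [1,3]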
reverse: Returns the elements of the stream in reverse order. The stream must be finite. Note that this necessarily buffers the entire stream in memory. Since 0.7.0 (Monad m constraint); Since: 0.1.1

reverse' (Internal): Like reverse but several times faster; requires a Storable instance.

intersperseM: Generate a stream by performing a monadic action between consecutive elements of the given stream. Concurrent (do not use with parallely on infinite streams).

    > S.toList $ S.intersperseM (return ',') $ S.fromList "hello"
    "h,e,l,l,o"

intersperse: Generate a stream by inserting a given element between consecutive elements of the given stream.

    > S.toList $ S.intersperse ',' $ S.fromList "hello"
    "h,e,l,l,o"

intersperseSuffix (Internal): Insert a monadic action after each element in the stream.

interjectSuffix (Internal): Intersperse a monadic action into the input stream after every n seconds.

    > S.drain $ S.interjectSuffix 1 (putChar ',') $ S.mapM (\x -> threadDelay 1000000 >> putChar x) $ S.fromList "hello"
    h,e,l,l,o

insertBy: insertBy cmp elem stream inserts elem before the first element in stream that is less than elem when compared using cmp. insertBy cmp x = mergeBy cmp (yield x)

    > S.toList $ S.insertBy compare 2 $ S.fromList [1,3,5]
    [1,2,3,5]

deleteBy: Deletes the first occurrence of the element in the stream that satisfies the given equality predicate.

    > S.toList $ S.deleteBy (==) 3 $ S.fromList [1,3,3,5]
    [1,3,5]

indexed: indexed = S.postscanl' (\(i, _) x -> (i + 1, x)) (-1, undefined); indexed = S.zipWith (,) (S.enumerateFrom 0). Pair each element in a stream with its index, starting from index 0.

    > S.toList $ S.indexed $ S.fromList "hello"
    [(0,'h'),(1,'e'),(2,'l'),(3,'l'),(4,'o')]

indexedR: indexedR n = S.postscanl' (\(i, _) x -> (i - 1, x)) (n + 1, undefined); indexedR n = S.zipWith (,) (S.enumerateFromThen n (n - 1)). Pair each element in a stream with its index, starting from the given index n and counting down.

    > S.toList $ S.indexedR 10 $ S.fromList "hello"
    [(10,'h'),(9,'e'),(8,'l'),(7,'l'),(6,'o')]

zipWithM: Like zipWith but using a monadic zipping function.

zipWith: Zip two streams serially using a pure zipping function.

    > S.toList $ S.zipWith (+) (S.fromList [1,2,3]) (S.fromList [4,5,6])
    [5,7,9]

eqBy: Compare two streams for equality using an equality function.

cmpBy: Compare two streams lexicographically using a comparison function.

mergeBy: Merge two streams using a comparison function. The head elements of both streams are compared and the smaller of the two is emitted; if both elements are equal then the element from the first stream is used first. If the streams are sorted in ascending order, the resulting stream also remains sorted in ascending order.

    > S.toList $ S.mergeBy compare (S.fromList [1,3,5]) (S.fromList [2,4,6,8])
    [1,2,3,4,5,6,8]

mergeByM: Like mergeBy but with a monadic comparison function. Merge two streams randomly:

    > randomly _ _ = randomIO >>= \x -> return $ if x then LT else GT
    > S.toList $ S.mergeByM randomly (S.fromList [1,1,1,1]) (S.fromList [2,2,2,2])
    [2,1,2,2,2,1,1,1]

Merge two streams in a proportion of 2:1:

    proportionately m n = do
        ref <- newIORef $ cycle $ concat [replicate m LT, replicate n GT]
        return $ \_ _ -> do
            r <- readIORef ref
            writeIORef ref $ tail r
            return $ head r

    main = do
        f <- proportionately 2 1
        xs <- S.toList $ S.mergeByM f (S.fromList [1,1,1,1,1,1]) (S.fromList [2,2,2])
        print xs
    -- [1,1,2,1,1,2,1,1,2]

mergeAsyncBy: Like mergeBy but merges concurrently, i.e. both the elements being merged are generated concurrently.

mergeAsyncByM: Like mergeByM but merges concurrently, i.e. both the elements being merged are generated concurrently.
concatMapWith: concatMapWith merge map stream is a two dimensional looping combinator. The first argument specifies a merge or concat function that is used to merge the streams generated by applying the second argument, i.e. the map function, to each element of the input stream. The concat function could be serial, parallel, async, ahead or any other zip or merge function, and the second argument could be any stream generation function using a seed. Compare concatMap; a usage sketch follows the interleave combinators below.

concatMap: Map a stream producing function on each element of the stream and then flatten the results into a single stream. concatMap = concatMapWith serial; concatMap f = concatMapM (return . f)

append (Internal): Append the outputs of two streams, yielding all the elements from the first stream and then all the elements from the second stream. IMPORTANT NOTE: This could be 100x faster than serial/<> for appending a few (say 100) streams, because it can fuse via stream fusion. However, it does not scale for a large number of streams (say 1000s) and becomes quadratically slow. Therefore use this for custom appending of a few streams, but use <> or 'concatMapWith serial' for appending n streams or infinite containers of streams.

interleave (Internal): Interleaves the outputs of two streams, yielding elements from each stream alternately, starting from the first stream. If either of the streams finishes early the other stream continues alone until it too finishes.

    :set -XOverloadedStrings
    interleave "ab" ",,,," :: SerialT Identity Char
    fromList "a,b,,,"
    interleave "abcd" ",," :: SerialT Identity Char
    fromList "a,b,cd"

interleave is dual to interleaveMin; it could be called interleaveMax. Do not use at scale in concatMapWith.

interleaveSuffix (Internal): Interleaves the outputs of two streams, yielding elements from each stream alternately, starting from the first stream. As soon as the first stream finishes, the output stops, discarding the remaining part of the second stream; in this case, the last element in the resulting stream would be from the second stream. If the second stream finishes early then the first stream still continues to yield elements until it finishes.

    :set -XOverloadedStrings
    interleaveSuffix "abc" ",,,," :: SerialT Identity Char
    fromList "a,b,c,"
    interleaveSuffix "abc" "," :: SerialT Identity Char
    fromList "a,bc"

interleaveSuffix is a dual of interleaveInfix. Do not use at scale in concatMapWith.

interleaveInfix (Internal): Interleaves the outputs of two streams, yielding elements from each stream alternately, starting from the first stream and ending at the first stream. If the second stream is longer than the first, elements from the second stream are infixed with elements from the first stream. If the first stream is longer then it continues yielding elements even after the second stream has finished.

    :set -XOverloadedStrings
    interleaveInfix "abc" ",,,," :: SerialT Identity Char
    fromList "a,b,c"
    interleaveInfix "abc" "," :: SerialT Identity Char
    fromList "a,bc"

interleaveInfix is a dual of interleaveSuffix. Do not use at scale in concatMapWith.

interleaveMin (Internal): Interleaves the outputs of two streams, yielding elements from each stream alternately, starting from the first stream. The output stops as soon as either of the two streams finishes, discarding the remaining part of the other stream. The last element of the resulting stream would be from the longer stream.

    :set -XOverloadedStrings
    interleaveMin "ab" ",,,," :: SerialT Identity Char
    fromList "a,b,"
    interleaveMin "abcd" ",," :: SerialT Identity Char
    fromList "a,b,c"

interleaveMin is dual to interleave. Do not use at scale in concatMapWith.
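Returning to concatMapWith, a minimal two-dimensional-loop sketch (assuming S.concatMapWith from the Prelude and the async combinator from the Streamly module, as documented above; items is a hypothetical generator):

    import Streamly
    import qualified Streamly.Prelude as S
    import Control.Concurrent (threadDelay)

    -- For every seed, generate an inner stream of (seed, item) pairs;
    -- all inner streams are merged concurrently with async.
    items :: Int -> SerialT IO (Int, Int)
    items n = S.mapM (\i -> threadDelay 100000 >> return (n, i)) (S.fromList [1 .. 3])

    main :: IO ()
    main = S.drain
         $ S.mapM print
         $ S.concatMapWith async items (S.fromList [10, 20, 30])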
roundrobin (Internal): Schedule the execution of two streams in a fair round-robin manner, executing each stream once, alternately. Execution of a stream may not necessarily result in an output; a stream may choose to Skip producing an element until later, giving the other stream a chance to run. Therefore, this combinator fairly interleaves the execution of two streams rather than fairly interleaving their output. This can be useful in co-operative multitasking without using explicit threads, and can be used as an alternative to async. Do not use at scale in concatMapWith.

concatMapM: Map a stream producing monadic function on each element of the stream and then flatten the results into a single stream. Since the stream generation function is monadic, unlike concatMap, it can produce an effect at the beginning of each iteration of the inner loop.

concatUnfold: Like concatMap but uses an Unfold for stream generation. Unlike concatMap, this can fuse the Unfold code with the inner loop and therefore provide many times better performance.

concatUnfoldInterleave (Internal): Like concatUnfold but interleaves the streams in the same way as interleave behaves, instead of appending them.

concatUnfoldRoundrobin (Internal): Like concatUnfoldInterleave but executes the streams in the same way as roundrobin.

gintercalate (Internal): interleaveInfix followed by unfold and concat.

intercalate: intersperse followed by unfold and concat.

    unwords = intercalate " " UF.fromList
    intercalate " " UF.fromList ["abc", "def", "ghi"]
    > "abc def ghi"

interpose (Internal): Unfold the elements of a stream, intersperse the given element between the unfolded streams and then concat them into a single stream. unwords = S.interpose ' '

gintercalateSuffix (Internal): interleaveSuffix followed by unfold and concat.

intercalateSuffix: intersperseSuffix followed by unfold and concat.

    unlines = intercalateSuffix "\n" UF.fromList
    intercalateSuffix "\n" UF.fromList ["abc", "def", "ghi"]
    > "abc\ndef\nghi\n"

interposeSuffix (Internal): Unfold the elements of a stream, append the given element after each unfolded stream and then concat them into a single stream. unlines = S.interposeSuffix '\n'

splitAt: splitAt n f1 f2 composes folds f1 and f2 such that the first n elements of its input are consumed by fold f1 and the rest of the stream is consumed by fold f2.

    let splitAt_ n xs = S.fold (FL.splitAt n FL.toList FL.toList) $ S.fromList xs
    splitAt_ 6 "Hello World!"   => ("Hello ","World!")
    splitAt_ (-1) [1,2,3]       => ([],[1,2,3])
    splitAt_ 0 [1,2,3]          => ([],[1,2,3])
    splitAt_ 1 [1,2,3]          => ([1],[2,3])
    splitAt_ 3 [1,2,3]          => ([1,2,3],[])
    splitAt_ 4 [1,2,3]          => ([1,2,3],[])

chunksOf: Group the input stream into groups of n elements each and then fold each group using the provided fold function.

    > S.toList $ S.chunksOf 2 FL.sum (S.enumerateFromTo 1 10)
    [3,7,11,15,19]

This can be considered as an n-fold version of ltake, where we apply ltake repeatedly on the leftover stream until the stream exhausts.

arraysOf: arraysOf n stream groups the elements in the input stream into arrays of n elements each. Same as the following but may be more efficient: arraysOf n = S.chunksOf n (A.writeN n)

intervalsOf: Group the input stream into windows of n seconds each and then fold each group using the provided fold function.

spanBy (Internal): Break the input stream into two groups; the first group takes the input as long as the predicate applied to the first element of the stream and the next input element holds True, the second group takes the rest of the input.

span: span p f1 f2 composes folds f1 and f2 such that f1 consumes the input as long as the predicate p is True, and f2 consumes the rest of the input.

    let span_ p xs = S.fold (S.span p FL.toList FL.toList) $ S.fromList xs
    span_ (< 1) [1,2,3]   => ([],[1,2,3])
    span_ (< 2) [1,2,3]   => ([1],[2,3])
    span_ (< 4) [1,2,3]   => ([1,2,3],[])
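A small time-window sketch for intervalsOf above, assuming Streamly.Data.Fold as FL; the timings are illustrative:

    import qualified Streamly.Prelude as S
    import qualified Streamly.Data.Fold as FL
    import Control.Concurrent (threadDelay)

    -- Count how many events arrive in each 1 second window; with one event
    -- every ~0.2s we expect counts of about 5 per window.
    main :: IO ()
    main = S.drain
         $ S.mapM print
         $ S.intervalsOf 1 FL.length
         $ S.replicateM 20 (threadDelay 200000)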
break: break p = span (not . p). Break as soon as the predicate becomes True. break p f1 f2 composes folds f1 and f2 such that f1 stops consuming input as soon as the predicate p becomes True; the rest of the input is consumed by f2. This is the binary version of splitBy.

    let break_ p xs = S.fold (S.break p FL.toList FL.toList) $ S.fromList xs
    break_ (< 1) [3,2,1]   => ([3,2,1],[])
    break_ (< 2) [3,2,1]   => ([3,2],[1])
    break_ (< 4) [3,2,1]   => ([],[3,2,1])

spanByRolling (Internal): Like spanBy but applies the predicate in a rolling fashion, i.e. the predicate is applied to the previous and the next input elements.

groupsBy: groupsBy cmp f $ S.fromList [a,b,c,...] assigns the element a to the first group; if a `cmp` b is True then b is also assigned to the same group, if a `cmp` c is True then c is also assigned to the same group, and so on. When the comparison fails a new group is started. Each group is folded using the fold f and the result of the fold is emitted in the output stream.

    S.toList $ S.groupsBy (>) FL.toList $ S.fromList [1,3,7,0,2,5]
    > [[1,3,7],[0,2,5]]

groupsByRolling: Unlike groupsBy, this function performs a rolling comparison of two successive elements in the input stream. groupsByRolling cmp f $ S.fromList [a,b,c,...] assigns the element a to the first group; if a `cmp` b is True then b is also assigned to the same group, if b `cmp` c is True then c is also assigned to the same group, and so on. When the comparison fails a new group is started. Each group is folded using the fold f.

    S.toList $ S.groupsByRolling (\a b -> a + 1 == b) FL.toList $ S.fromList [1,2,3,7,8,9]
    > [[1,2,3],[7,8,9]]

groups: groups = groupsBy (==); groups = groupsByRolling (==). Groups contiguous spans of equal elements together in individual groups.

    S.toList $ S.groups FL.toList $ S.fromList [1,1,2,2]
    > [[1,1],[2,2]]

splitOn: Split on an infixed separator element, dropping the separator. Splits the stream on separator elements determined by the supplied predicate; the separator is considered infixed between two segments, and if one side of the separator is missing then it is parsed as an empty stream. The supplied Fold is applied on the split segments. With "-" representing non-separator elements and "." as separator, splitOn splits as follows:

    "--.--" => "--" "--"
    "--."   => "--" ""
    ".--"   => ""   "--"

splitOn (== x) is an inverse of intercalate (S.yield x). Let's use the following definition for illustration:

    splitOn' p xs = S.toList $ S.splitOn p (FL.toList) (S.fromList xs)
    splitOn' (== '.') ""       => [""]
    splitOn' (== '.') "."      => ["",""]
    splitOn' (== '.') ".a"     => ["","a"]
    splitOn' (== '.') "a."     => ["a",""]
    splitOn' (== '.') "a.b"    => ["a","b"]
    splitOn' (== '.') "a..b"   => ["a","","b"]

splitOnSuffix: Like splitOn but the separator is considered as suffixed to the segments in the stream. A missing suffix at the end is allowed; a separator at the beginning is parsed as an empty segment. With "-" representing elements and "." as separator, splitOnSuffix splits as follows:

    "--.--." => "--" "--"
    "--.--"  => "--" "--"
    ".--."   => ""   "--"

    splitOnSuffix' p xs = S.toList $ S.splitSuffixBy p (FL.toList) (S.fromList xs)
    splitOnSuffix' (== '.') ""        => []
    splitOnSuffix' (== '.') "."       => [""]
    splitOnSuffix' (== '.') "a"       => ["a"]
    splitOnSuffix' (== '.') ".a"      => ["","a"]
    splitOnSuffix' (== '.') "a."      => ["a"]
    splitOnSuffix' (== '.') "a.b"     => ["a","b"]
    splitOnSuffix' (== '.') "a.b."    => ["a","b"]
    splitOnSuffix' (== '.') "a..b.."  => ["a","","b",""]

    lines = splitOnSuffix (== '\n')
wordsBy: Like splitOn after stripping leading, trailing, and repeated separators. Therefore, ".a..b." with "." as the separator would be parsed as ["a","b"]. In other words, it is like parsing words from whitespace separated text.

    wordsBy' p xs = S.toList $ S.wordsBy p (FL.toList) (S.fromList xs)
    wordsBy' (== ',') ""        => []
    wordsBy' (== ',') ","       => []
    wordsBy' (== ',') ",a,,b,"  => ["a","b"]

    words = wordsBy isSpace

splitWithSuffix: Like splitOnSuffix but keeps the suffix attached to the resulting splits.

    splitWithSuffix' p xs = S.toList $ S.splitWithSuffix p (FL.toList) (S.fromList xs)
    splitWithSuffix' (== '.') ""        => []
    splitWithSuffix' (== '.') "."       => ["."]
    splitWithSuffix' (== '.') "a"       => ["a"]
    splitWithSuffix' (== '.') ".a"      => [".","a"]
    splitWithSuffix' (== '.') "a."      => ["a."]
    splitWithSuffix' (== '.') "a.b"     => ["a.","b"]
    splitWithSuffix' (== '.') "a.b."    => ["a.","b."]
    splitWithSuffix' (== '.') "a..b.."  => ["a.",".","b.","."]

splitOnSeq: Like splitOn but the separator is a sequence of elements instead of a single element. For illustration, let's define a function that operates on pure lists:

    splitOnSeq' pat xs = S.toList $ S.splitOnSeq (A.fromList pat) (FL.toList) (S.fromList xs)
    splitOnSeq' "" "hello"       => ["h","e","l","l","o"]
    splitOnSeq' "hello" ""       => [""]
    splitOnSeq' "hello" "hello"  => ["",""]
    splitOnSeq' "x" "hello"      => ["hello"]
    splitOnSeq' "h" "hello"      => ["","ello"]
    splitOnSeq' "o" "hello"      => ["hell",""]
    splitOnSeq' "e" "hello"      => ["h","llo"]
    splitOnSeq' "l" "hello"      => ["he","","o"]
    splitOnSeq' "ll" "hello"     => ["he","o"]

splitOnSeq is an inverse of intercalate. The following law always holds: intercalate . splitOn == id. The following law holds when the separator is non-empty and contains none of the elements present in the input lists: splitOn . intercalate == id

splitSuffixOn: Like splitSuffixBy but the separator is a sequence of elements, instead of a predicate for a single element.

    splitSuffixOn_ pat xs = S.toList $ S.splitSuffixOn (A.fromList pat) (FL.toList) (S.fromList xs)
    splitSuffixOn_ "." ""        => [""]
    splitSuffixOn_ "." "."       => [""]
    splitSuffixOn_ "." "a"       => ["a"]
    splitSuffixOn_ "." ".a"      => ["","a"]
    splitSuffixOn_ "." "a."      => ["a"]
    splitSuffixOn_ "." "a.b"     => ["a","b"]
    splitSuffixOn_ "." "a.b."    => ["a","b"]
    splitSuffixOn_ "." "a..b.."  => ["a","","b",""]

    lines = splitSuffixOn "\n"

splitOn': Like splitOnSeq but splits the separator as well, as an infix token.

    splitOn'_ pat xs = S.toList $ S.splitOn' (A.fromList pat) (FL.toList) (S.fromList xs)
    splitOn'_ "" "hello"       => ["h","","e","","l","","l","","o"]
    splitOn'_ "hello" ""       => [""]
    splitOn'_ "hello" "hello"  => ["","hello",""]
    splitOn'_ "x" "hello"      => ["hello"]
    splitOn'_ "h" "hello"      => ["","h","ello"]
    splitOn'_ "o" "hello"      => ["hell","o",""]
    splitOn'_ "e" "hello"      => ["h","e","llo"]
    splitOn'_ "l" "hello"      => ["he","l","","l","o"]
    splitOn'_ "ll" "hello"     => ["he","ll","o"]

splitSuffixOn': Like splitSuffixOn but keeps the suffix intact in the splits.

    splitSuffixOn'_ pat xs = S.toList $ FL.splitSuffixOn' (A.fromList pat) (FL.toList) (S.fromList xs)
    splitSuffixOn'_ "." ""        => [""]
    splitSuffixOn'_ "." "."       => ["."]
    splitSuffixOn'_ "." "a"       => ["a"]
    splitSuffixOn'_ "." ".a"      => [".","a"]
    splitSuffixOn'_ "." "a."      => ["a."]
    splitSuffixOn'_ "." "a.b"     => ["a.","b"]
    splitSuffixOn'_ "." "a.b."    => ["a.","b."]
    splitSuffixOn'_ "." "a..b.."  => ["a.",".","b.","."]
splitInnerBy: consider a chunked stream of container elements, e.g. a stream of Word8 chunked as a stream of arrays of Word8. splitInnerBy splitter joiner stream splits the inner containers f a using the splitter function and joins back the resulting fragments from splitting across multiple containers using the joiner function, such that the transformed output stream is consolidated as one container per segment of the split.

CAUTION! This is not a true streaming function, as the container size after the split and merge may not be bounded.

splitInnerBySuffix: like splitInnerBy but splits assuming the separator joins the segments in a suffix style.

tap: tap the data flowing through a stream into a Fold. For example, you may add a tap to log the contents flowing through the stream. The fold is used only for effects; its result is discarded.

                  Fold m a b
                      |
-----stream m a ----------------stream m a-----

> S.drain $ S.tap (FL.drainBy print) (S.enumerateFromTo 1 2)
1
2

Compare with trace.

trace: apply a monadic function to each element flowing through the stream and discard the results.

> S.drain $ S.trace print (S.enumerateFromTo 1 2)
1
2

Compare with tap.

classifySessionsBy tick timeout reset f stream groups together all input stream elements that belong to the same session. timeout is the maximum lifetime of a session in seconds; all elements belonging to a session are purged after this duration. If reset is True then the timeout is reset after every event received in the session. Session duration is measured using the timestamp of the first element seen for that session. To detect session timeouts, a monotonic event time clock is maintained using the timestamps seen in the inputs and a timer with a tick duration specified by tick.

session key is a key that uniquely identifies the session for the given element, timestamp characterizes the time when the input element was generated (an absolute time measured from some Epoch), and session close is a boolean indicating whether this element marks the closing of the session. When an input element with session close set to True is seen, the session is purged immediately.

All the input elements belonging to a session are collected using the fold f. The session key and the fold result are emitted in the output stream when the session is purged, either via the session close event or via the session lifetime timeout.

classifyKeepAliveSessions: like classifySessionsBy but the session is kept alive if an event is received within the session window. The session times out and gets closed only if no event is received within the specified session window size.
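The tap and trace combinators combine naturally with ordinary folds; a small sketch, assuming the public Streamly.Prelude and Streamly.Data.Fold API from streamly-0.7.0:

    import qualified Streamly.Prelude as S
    import qualified Streamly.Data.Fold as FL

    main :: IO ()
    main = do
        -- Log each element as it flows past, then sum the stream.
        total <- S.sum
               $ S.tap (FL.drainBy (\x -> putStrLn ("saw " ++ show x)))
               $ S.enumerateFromTo 1 (3 :: Int)
        print total    -- prints "saw 1" .. "saw 3", then 6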
classifySessionsOf: split the stream into fixed size time windows of a specified interval in seconds. Within each such window, fold the elements in buckets identified by the keys. A particular bucket fold can be terminated early if a closing flag is encountered in an element for that key. Once a fold is terminated, the key and value for that bucket are emitted in the output stream. The session timestamp in the input stream is an absolute time from some epoch, characterizing the time when the input element was generated. To detect the session window end, a monotonic event time clock is maintained, synced with the timestamps, with a clock resolution of 1 second.

before: run a side effect before the stream yields its first element.

after: run a side effect whenever the stream stops normally.

onException: run a side effect whenever the stream aborts due to an exception.

finally: run a side effect whenever the stream stops normally or aborts due to an exception.

bracket: run the first action before the stream starts and remember its output, generate a stream using the output, and run the second action using the remembered value as an argument whenever the stream ends normally or due to an exception.

handle: when evaluating a stream, if an exception occurs, stream evaluation aborts and the specified exception handler is run with the exception as argument.

hoist: transform the inner monad of a stream using a natural transformation. Internal.

generally: generalize the inner monad of the stream from Identity to any monad. Internal.

liftInner: lift the inner monad of a stream using a monad transformer. Internal.

runReaderT: evaluate the inner monad of a stream as ReaderT. Internal.

evalStateT: evaluate the inner monad of a stream as StateT. This is supported only for SerialT, as concurrent state updates may not be safe. Internal.

usingStateT: run a stateful (StateT) stream transformation using a given state. This is supported only for SerialT, as concurrent state updates may not be safe. Internal.

runStateT: evaluate the inner monad of a stream as StateT and emit the resulting state and value pair after each step. This is supported only for SerialT, as concurrent state updates may not be safe. Internal.

Argument documentation for the session classifiers:
- timer tick in seconds
- session timeout
- reset the timeout when an event is received
- Fold to be applied to session events
- session key, timestamp, close event, data
- session inactive timeout
- Fold to be applied to session payload data
- session key, data, close flag, timestamp
- time window size
- Fold to be applied to window events
- window key, data, close flag, timestamp

Streams of arrays (Streamly.Internal.Memory.ArrayStream):

concat: convert a stream of arrays into a stream of their elements. Same as the following but more efficient:

concat = S.concatMap A.read

Convert a stream of arrays into a stream of their elements, reversing the contents of each array before flattening.

Flatten a stream of arrays after inserting the given element between arrays. Internal.

Flatten a stream of arrays appending the given element after each array.

Split a stream of arrays on a given separator byte, dropping the separator and coalescing all the arrays between two separators into a single array.

Coalesce adjacent arrays in the incoming stream to form bigger arrays of a maximum specified size in bytes.

arraysOf n stream groups the elements in the input stream into arrays of n elements each. Same as the following but more efficient:

arraysOf n = S.chunksOf n (A.writeN n)

Given a stream of arrays, splice them all together to generate a single array. The stream must be finite.
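The exception combinators above compose naturally with resource acquisition. A sketch using bracket to count the bytes in a file, assuming streamly-0.7.0's public Streamly.FileSystem.Handle module and a hypothetical input.txt:

    import System.IO (IOMode(ReadMode), openFile, hClose)
    import qualified Streamly.Prelude as S
    import qualified Streamly.FileSystem.Handle as FH

    -- Count the bytes in a file; the handle is closed even if an
    -- exception interrupts the stream.
    countBytes :: FilePath -> IO Int
    countBytes path =
        S.length
            $ S.bracket (openFile path ReadMode) hClose
            $ \h -> S.unfold FH.read h

    main :: IO ()
    main = countBytes "input.txt" >>= print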
File I/O using low-level file-descriptor based handles:

A Handle is returned by openFile and is subsequently used to perform read and write operations on a file.

stdin, stdout, stderr: file handles for standard input, standard output and standard error.

openFile: open a file that is not a directory and return a file handle. openFile enforces a multiple-reader single-writer locking on files. That is, there may either be many handles on the same file which manage input, or just one handle on the file which manages output. If any open handle is managing a file for output, no new handle can be allocated for that file. If any open handle is managing a file for input, new handles can only be allocated if they do not manage output. Whether two files are the same is implementation-dependent, but they should normally be the same if they have the same absolute path name and neither has been renamed, for example.

Read a ByteArray from a file handle. If no data is available on the handle it blocks until some data becomes available. If data is available then it immediately returns that data without blocking. It reads a maximum of up to the size requested.

Write an Array to a file handle.

Write an array of IOVec to a file handle.

readArraysOfUpto size h reads a stream of arrays from file handle h. The maximum size of a single array is specified by size. The actual size read may be less than or equal to size.

readArrays h reads a stream of arrays from file handle h. The maximum size of a single array is limited to defaultChunkSize. readArrays ignores the prevailing TextEncoding and NewlineMode on the Handle.

readArrays = readArraysOfUpto defaultChunkSize

readInChunksOf chunkSize handle reads a byte stream from a file handle; reads are performed in chunks of up to chunkSize. The stream ends as soon as EOF is encountered.

Generate a stream of elements of the given type from a file Handle. The stream ends when EOF is encountered.

Write a stream of arrays to a handle.

Write a stream of arrays to a handle after coalescing them in chunks of a specified size. The chunk size is only a maximum and the actual writes could be smaller than that, as we do not split the arrays to fit them to the specified size.

Write a stream of IOVec arrays to a handle.

Write a stream of arrays to a handle after grouping them in IOVec arrays of up to a maximum total size. Writes are performed using gather IO via the writev system call. The maximum number of entries in each IOVec group is limited to 512.

Like the above but provides control over the write buffer. Output will be written to the IO device as soon as we collect the specified number of input elements.

Write a byte stream to a file handle. Combines the bytes in chunks of size up to defaultChunkSize before writing. Note that the write behavior depends on the IOMode and the current seek position of the handle.

Directory streaming (Streamly.Internal.FileSystem.Dir):

Read directories as Left and files as Right. Filter out "." and ".." entries. Internal.

Read files only. Internal.

Read directories only. Filter out "." and ".." entries. Internal.

Raw read of a directory. Internal.
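The public Streamly.FileSystem.Handle module exposes analogous chunked read and write combinators; a hedged sketch of a chunk-by-chunk file copy using them (the file names are made up):

    import System.IO (IOMode(ReadMode, WriteMode), withFile)
    import qualified Streamly.Prelude as S
    import qualified Streamly.FileSystem.Handle as FH

    -- Copy a file as a stream of byte arrays, writing each chunk out
    -- unmodified.
    copyFileChunked :: FilePath -> FilePath -> IO ()
    copyFileChunked src dst =
        withFile src ReadMode $ \inh ->
            withFile dst WriteMode $ \outh ->
                S.fold (FH.writeChunks outh) $ S.unfold FH.readChunks inh

    main :: IO ()
    main = copyFileChunked "in.dat" "out.dat"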
Read directories as Left and files as Right. Filter out "." and ".." entries. Internal.

Read files only. Internal.

Read directories only. Internal.

Lists (Streamly.Internal.Data.List):

ZipList: just like List, except that it has a zipping Applicative instance and no Monad instance.

List a is a replacement for [a].

Cons: a list constructor and pattern that deconstructs a List into its head and tail. Corresponds to (:) for Haskell lists.

Nil: an empty list constructor and pattern that matches an empty List. Corresponds to [] for Haskell lists.

fromZipList: convert a ZipList to a regular List.

toZipList: convert a regular List to a ZipList.

Aliases for running streams:

Same as drain.
Same as runStream.
Same as "Streamly.Prelude.runStream".
Same as runStream . wSerially.
Same as runStream . parallely.
Same as runStream . asyncly.
Same as runStream . zipping.
Same as runStream . zippingAsync.

Unicode streams (Streamly.Internal.Data.Unicode.Stream):

decodeLatin1: decode a stream of bytes to Unicode characters by mapping each byte to a corresponding Unicode Char in the 0-255 range. Since: 0.7.0

encodeLatin1: encode a stream of Unicode characters to bytes by mapping each character to a byte in the 0-255 range. Throws an error if the input stream contains characters beyond 255. Since: 0.7.0

encodeLatin1Lax: like encodeLatin1 but silently truncates and maps input characters beyond 255 to (incorrect) chars in the 0-255 range. No error or exception is thrown when such truncation occurs. Since: 0.7.0

decodeUtf8: decode a UTF-8 encoded bytestream to a stream of Unicode characters. The incoming stream is truncated if an invalid codepoint is encountered. Since: 0.7.0

decodeUtf8Lax: decode a UTF-8 encoded bytestream to a stream of Unicode characters. Any invalid codepoint encountered is replaced with the Unicode replacement character. Since: 0.7.0

encodeUtf8: encode a stream of Unicode characters to a UTF-8 encoded bytestream. Since: 0.7.0

stripStart: remove leading whitespace from a string. Internal.

stripStart = S.dropWhile isSpace

lines: fold each line of the stream using the supplied Fold and stream the result. Internal.

S.toList $ lines FL.toList (S.fromList "lines\nthis\nstring\n\n\n") > ["lines", "this", "string", "", ""]

lines = S.splitOnSuffix (== '\n')

isSpace: code copied from base/Data.Char in order to INLINE it.

words: fold each word of the stream using the supplied Fold and stream the result. Internal.

S.toList $ words FL.toList (S.fromList "fold these words") > ["fold", "these", "words"]

words = S.wordsBy isSpace

unlines: unfold a stream to character streams using the supplied Unfold and concatenate the results, suffixing a newline character '\n' to each stream. Internal.

unwords: unfold the elements of a stream to character streams using the supplied Unfold and concatenate the results with a whitespace character infixed between the streams. Internal.

Streams of character arrays (Streamly.Internal.Memory.Unicode.Array):

lines: break a string up into a stream of strings at newline characters. The resulting strings do not contain newlines.

lines = S.lines A.write

S.toList $ lines $ S.fromList "lines\nthis\nstring\n\n\n" > ["lines","this","string","",""]
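A small round trip through the public UTF-8 codecs described above (streamly-0.7.0's Streamly.Data.Unicode.Stream):

    import qualified Streamly.Prelude as S
    import qualified Streamly.Data.Unicode.Stream as U

    main :: IO ()
    main = do
        -- Encode a string to UTF-8 bytes and decode it back.
        bytes <- S.toList $ U.encodeUtf8 $ S.fromList "café"
        print bytes                         -- [99,97,102,195,169]
        str <- S.toList $ U.decodeUtf8 $ S.fromList bytes
        putStrLn str                        -- café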
words: break a string up into a stream of strings, which were delimited by characters representing white space.

words = S.words A.write

S.toList $ words $ S.fromList "A newline\nis considered white space?" > ["A", "newline", "is", "considered", "white", "space?"]

unlines: flattens the stream of Array Char, after appending a terminating newline to each string. unlines is an inverse operation to lines.

S.toList $ unlines $ S.fromList ["lines", "this", "string"] > "lines\nthis\nstring\n"

unlines = S.unlines A.read

Note that, in general, unlines . lines /= id.

unwords: flattens the stream of Array Char, after appending a separating space to each string. unwords is an inverse operation to words.

S.toList $ unwords $ S.fromList ["unwords", "this", "string"] > "unwords this string"

unwords = S.unwords A.read

Note that, in general, unwords . words /= id.

File handles (Streamly.Internal.FileSystem.Handle):

Read a ByteArray from a file handle. If no data is available on the handle it blocks until some data becomes available. If data is available then it immediately returns that data without blocking. It reads a maximum of up to the size requested.

toChunksWithBufferOf size h reads a stream of arrays from file handle h. The maximum size of a single array is specified by size. The actual size read may be less than or equal to size.

toChunksWithBufferOf size handle reads a stream of arrays from the file handle handle. The maximum size of a single array is limited to size. The actual size read may be less than or equal to size.

readChunksWithBufferOf: unfold the tuple (bufsize, handle) into a stream of Word8 arrays. Read requests to the IO device are performed using a buffer of size bufsize. The size of an array in the resulting stream is always less than or equal to bufsize.

toChunks handle reads a stream of arrays from the specified file handle. The maximum size of a single array is limited to defaultChunkSize. The actual size read may be less than or equal to defaultChunkSize.

toChunks = toChunksWithBufferOf defaultChunkSize

getChunks: read a stream of chunks from standard input. The maximum size of a single chunk is limited to defaultChunkSize. The actual size read may be less than defaultChunkSize. Internal.

getChunks = toChunks stdin

getBytes: read a stream of bytes from standard input. Internal.

getBytes = toBytes stdin

readChunks: unfolds a handle into a stream of Word8 arrays. Requests to the IO device are performed using a buffer of size defaultChunkSize. The sizes of the arrays in the resulting stream are therefore less than or equal to defaultChunkSize.

readWithBufferOf: unfolds the tuple (bufsize, handle) into a byte stream; read requests to the IO device are performed using buffers of bufsize.

toBytesWithBufferOf bufsize handle reads a byte stream from a file handle; reads are performed in chunks of up to bufsize. Internal.

read: unfolds a file handle into a byte stream. IO requests to the device are performed in sizes of defaultChunkSize.

toBytes: generate a byte stream from a file Handle. Internal.

Write an Array to a file handle.

fromChunks: write a stream of arrays to a handle.

putChunks: write a stream of chunks to standard output. Internal.

putStrings: write a stream of strings to standard output using Latin1 encoding. Internal.

fromChunksWithBufferOf bufsize handle stream writes a stream of arrays to handle after coalescing the adjacent arrays in chunks of bufsize. The chunk size is only a maximum and the actual writes could be smaller, as we do not split the arrays to fit exactly to the specified size.
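Combining the byte-stream unfolds above with the decoding and splitting combinators gives simple text utilities. A hedged sketch that counts the lines in a hypothetical input.txt using only the public streamly-0.7.0 API:

    import System.IO (IOMode(ReadMode), withFile)
    import qualified Streamly.Prelude as S
    import qualified Streamly.Data.Fold as FL
    import qualified Streamly.Data.Unicode.Stream as U
    import qualified Streamly.FileSystem.Handle as FH

    -- Stream the file's bytes, decode them as UTF-8 and split on the
    -- newline suffix, counting the resulting segments.
    countLines :: FilePath -> IO Int
    countLines path =
        withFile path ReadMode $ \h ->
            S.length
                $ S.splitOnSuffix (== '\n') FL.drain
                $ U.decodeUtf8
                $ S.unfold FH.read h

    main :: IO ()
    main = countLines "input.txt" >>= print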
fromBytesWithBufferOf bufsize handle stream writes stream to handle in chunks of bufsize. A write is performed to the IO device as soon as we collect the required input size.

fromBytes: write a byte stream to a file handle. Accumulates the input in chunks of up to defaultChunkSize before writing. NOTE: this may perform better than the write fold; you can try this if you need some extra performance boost.

writeChunks: write a stream of arrays to a handle. Each array in the stream is written to the device as a separate IO request.

writeChunksWithBufferOf bufsize handle writes a stream of arrays to handle after coalescing the adjacent arrays in chunks of bufsize. We never split an array; if a single array is bigger than the specified size it is emitted as it is. Multiple arrays are coalesced as long as the total size remains below the specified size.

writeWithBufferOf reqSize handle writes the input stream to handle. Bytes in the input stream are collected into a buffer until we have a chunk of reqSize and then written to the IO device.

write: write a byte stream to a file handle. Accumulates the input in chunks of up to defaultChunkSize before writing to the IO device.

Files by path (Streamly.Internal.FileSystem.File):

withFile name mode act opens a file using openFile and passes the resulting handle to the computation act. The handle will be closed on exit from withFile, whether by normal termination or by raising an exception. If closing the handle raises an exception, then this exception will be raised by withFile rather than any exception raised by act. Internal.

usingFile: transform an Unfold from a Handle to an unfold from a FilePath. The resulting unfold opens a handle in ReadMode, uses it with the supplied unfold and then makes sure that the handle is closed on normal termination or in case of an exception. If closing the handle raises an exception, then this exception will be raised by usingFile. Internal.

writeArray: write an array to a file. Overwrites the file if it exists.

appendArray: append an array to a file.

toChunksWithBufferOf size file reads a stream of arrays from file file. The maximum size of a single array is specified by size. The actual size read may be less than or equal to size.

toChunks file reads a stream of arrays from file file. The maximum size of a single array is limited to defaultChunkSize. The actual size read may be less than defaultChunkSize.

toChunks = toChunksWithBufferOf defaultChunkSize

read: unfolds a file path into a byte stream. IO requests to the device are performed in sizes of defaultChunkSize.

toBytes: generate a stream of bytes from a file specified by path. The stream ends when EOF is encountered. The file is locked using multiple-reader single-writer locking mode. Internal.

fromChunks: write a stream of arrays to a file. Overwrites the file if it exists.

Like fromChunks but provides control over the write buffer. Output will be written to the IO device as soon as we collect the specified number of input elements.

fromBytes: write a byte stream to a file. Combines the bytes in chunks of size up to defaultChunkSize before writing. If the file exists it is truncated to zero size before writing. If the file does not exist it is created. The file is locked using single-writer locking mode. Internal.

writeChunks: write a stream of chunks to a handle. Each chunk in the stream is written to the device as a separate IO request. Internal.

writeWithBufferOf chunkSize handle writes the input stream to handle. Bytes in the input stream are collected into a buffer until we have a chunk of size chunkSize and then written to the IO device. Internal.
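The buffered write folds are used like any other fold. A sketch that writes a megabyte of zero bytes while flushing in 64 KiB chunks, assuming the public FH.writeWithBufferOf fold (the file name is made up):

    import System.IO (IOMode(WriteMode), withFile)
    import Data.Word (Word8)
    import qualified Streamly.Prelude as S
    import qualified Streamly.FileSystem.Handle as FH

    main :: IO ()
    main =
        withFile "zeros.bin" WriteMode $ \h ->
            S.fold (FH.writeWithBufferOf 65536 h)
                $ S.replicate (1024 * 1024) (0 :: Word8)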
write: write a byte stream to a file. Accumulates the input in chunks of up to defaultChunkSize before writing to the IO device. Internal.

appendChunks: append a stream of arrays to a file.

appendWithBufferOf: like append but provides control over the write buffer. Output will be written to the IO device as soon as we collect the specified number of input elements.

append: append a byte stream to a file. Combines the bytes in chunks of size up to defaultChunkSize before writing. If the file exists then the new data is appended to the file. If the file does not exist it is created. The file is locked using single-writer locking mode.

Sockets (Streamly.Internal.Network.Socket):

SockSpec: specify the socket protocol details.

Run a monadic computation act, passing the socket handle to it. The handle will be closed on exit, whether by normal termination or by raising an exception. If closing the handle raises an exception, then this exception will be raised rather than any exception raised by act.

Like the above but runs a streaming computation instead of a monadic computation.

Unfold a three tuple (listenQLen, spec, addr) into a stream of connected protocol sockets corresponding to incoming connections. listenQLen is the maximum number of pending connections in the backlog, spec is the socket protocol and options specification, and addr is the protocol address where the server listens for incoming connections.

Start a TCP stream server that listens for connections on the supplied server address specification (address family, local interface IP address and port). The server generates a stream of connected sockets. The first argument is the maximum number of pending connections in the backlog. Internal.

Read a ByteArray from a socket. If no data is available it blocks until some data becomes available. If data is available then it immediately returns that data without blocking. It reads a maximum of up to the size requested.

Write an Array to a socket.

toChunksWithBufferOf size h reads a stream of arrays from socket handle h. The maximum size of a single array is limited to size. It ignores the prevailing TextEncoding and NewlineMode on the Handle.

toChunks h reads a stream of arrays from socket handle h. The maximum size of a single array is limited to defaultChunkSize.

readChunksWithBufferOf: unfold the tuple (bufsize, socket) into a stream of Word8 arrays. Read requests to the socket are performed using a buffer of size bufsize. The size of an array in the resulting stream is always less than or equal to bufsize.

readChunks: unfolds a socket into a stream of Word8 arrays. Requests to the socket are performed using a buffer of size defaultChunkSize. The sizes of the arrays in the resulting stream are therefore less than or equal to defaultChunkSize.

Generate a stream of elements of the given type from a socket. The stream ends when EOF is encountered.

readWithBufferOf: unfolds the tuple (bufsize, socket) into a byte stream; read requests to the socket are performed using buffers of bufsize.

read: unfolds a Socket into a byte stream. IO requests to the socket are performed in sizes of defaultChunkSize.

Write a stream of arrays to a handle.

writeChunks: write a stream of arrays to a socket. Each array in the stream is written to the socket as a separate IO request.
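The append combinators above live in the internal File module; the same effect can be sketched with the public handle API by opening the file in AppendMode (log.txt and the message are made up):

    import System.IO (IOMode(AppendMode), withFile)
    import qualified Streamly.Prelude as S
    import qualified Streamly.Data.Unicode.Stream as U
    import qualified Streamly.FileSystem.Handle as FH

    -- Append one line to a log file, buffering the bytes before writing.
    appendLine :: FilePath -> String -> IO ()
    appendLine path msg =
        withFile path AppendMode $ \h ->
            S.fold (FH.write h) $ U.encodeUtf8 $ S.fromList (msg ++ "\n")

    main :: IO ()
    main = appendLine "log.txt" "stream processed"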
Write a stream of strings to a socket in Latin1 encoding. Internal.

Like the above but provides control over the write buffer. Output will be written to the IO device as soon as we collect the specified number of input elements.

writeWithBufferOf: write a byte stream to a socket. Accumulates the input in chunks of the specified number of bytes before writing.

Write a byte stream to a file handle. Combines the bytes in chunks of size up to defaultChunkSize before writing. Note that the write behavior depends on the IOMode and the current seek position of the handle.

write: write a byte stream to a socket. Accumulates the input in chunks of up to defaultChunkSize bytes before writing.

write = writeWithBufferOf defaultChunkSize

TCP sockets over IPv4 (Streamly.Internal.Network.Inet.TCP):

acceptOnAddr: unfold a tuple (ipAddr, port) into a stream of connected TCP sockets. ipAddr is the local IP address and port is the local port on which connections are accepted.

acceptOnPort: like acceptOnAddr but binds on the IPv4 address 0.0.0.0, i.e. on all IPv4 addresses/interfaces of the machine, and listens for TCP connections on the specified port.

acceptOnPort = UF.supplyFirst acceptOnAddr (0,0,0,0)

acceptOnPortLocal: like acceptOnAddr but binds on the localhost IPv4 address 127.0.0.1. The server can only be accessed from the local host; it cannot be accessed from other hosts on the network.

acceptOnPortLocal = UF.supplyFirst acceptOnAddr (127,0,0,1)

connectionsOnAddr: binds on the specified IPv4 address of the machine and listens for TCP connections on the specified port. Internal.

connectionsOnPort: like connectionsOnAddr but binds on the IPv4 address 0.0.0.0, i.e. on all IPv4 addresses/interfaces of the machine, and listens for TCP connections on the specified port. Internal.

connectionsOnPort = connectionsOnAddr (0,0,0,0)

connectionsOnLocalHost: like connectionsOnAddr but binds on the localhost IPv4 address 127.0.0.1. The server can only be accessed from the local host; it cannot be accessed from other hosts on the network. Internal.

connectionsOnLocalHost = connectionsOnAddr (127,0,0,1)

connect: connect to the specified IP address and port number.

Connect to a remote host using IP address and port and run the supplied action on the resulting socket. The socket is closed on normal termination or in case of an exception; if closing the socket raises an exception, then that exception is raised instead. Internal.

usingConnection: transform an Unfold from a Socket to an unfold from a remote IP address and port. The resulting unfold opens a socket, uses it with the supplied unfold and then makes sure that the socket is closed on normal termination or in case of an exception. If closing the socket raises an exception, then this exception will be raised by usingConnection. Internal.

withConnection addr port act opens a connection to the specified IPv4 host address and port and passes the resulting socket handle to the computation act. The handle will be closed on exit from withConnection, whether by normal termination or by raising an exception. If closing the handle raises an exception, then this exception will be raised by withConnection rather than any exception raised by act. Internal.

Read a stream from the supplied IPv4 host address and port number.

Write a stream of arrays to the supplied IPv4 host address and port number.
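Putting the accept unfold together with the socket read unfold and write fold described earlier gives a very small echo server. This is only a sketch: it assumes TCP.acceptOnPort, NS.read and NS.write as documented above, handles connections one at a time, and uses an arbitrary port 8091:

    import Network.Socket (Socket, close)
    import qualified Streamly.Prelude as S
    import qualified Streamly.Network.Inet.TCP as TCP
    import qualified Streamly.Network.Socket as NS

    -- Echo every byte received on a connection back to the sender,
    -- then close the socket.
    echo :: Socket -> IO ()
    echo sk = S.fold (NS.write sk) (S.unfold NS.read sk) >> close sk

    main :: IO ()
    main = S.drain $ S.mapM echo $ S.unfold TCP.acceptOnPort 8091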
Like the above but provides control over the write buffer. Output will be written to the IO device as soon as we collect the specified number of input elements.

Write a stream to the supplied IPv4 host address and port number.

Module index (streamly-0.7.0):

Streamly, Streamly.Data.Fold, Streamly.Prelude, Streamly.Memory.Array, Streamly.Data.Unfold, Streamly.Data.Unicode.Stream, Streamly.FileSystem.Handle, Streamly.Network.Socket, Streamly.Network.Inet.TCP, Streamly.FileSystem.IOVec, Streamly.FileSystem.FDIO, Streamly.Internal.Data.Atomics, Streamly.Internal.Data.Sink.Types, Streamly.Internal.Data.Strict, Streamly.Internal.Data.Pipe.Types, Streamly.Internal.Data.Pipe, Streamly.Internal.Data.Time, Streamly.Internal.Data.Time.Units, Streamly.Internal.Data.Time.Clock, Streamly.Internal.Data.SVar, Streamly.Internal.Data.Fold.Types, Streamly.Internal.Data.Sink, Streamly.Internal.Data.Fold, Streamly.Internal.Data.Unicode.Char, Streamly.Memory.Malloc, Streamly.Streams.StreamDK.Type, Streamly.Streams.StreamDK, Streamly.Streams.StreamK.Type, Streamly.Streams.StreamK, Streamly.Streams.SVar, Streamly.Internal.Data.Stream.StreamD.Type, Streamly.Internal.Memory.Array.Types, Streamly.Memory.Ring, Streamly.Internal.Data.Unfold.Types, Streamly.Streams.StreamD, Streamly.Streams.Prelude, Streamly.Streams.Serial, Streamly.Streams.Parallel, Streamly.Streams.Combinators, Streamly.Streams.Async, Streamly.Streams.Ahead, Streamly.Streams.Enumeration, Streamly.Internal.Data.Unfold, Streamly.Internal.Memory.Array, Streamly.Streams.Zip, Streamly.Internal.Prelude, Streamly.Internal.Memory.ArrayStream, Streamly.FileSystem.FD, Streamly.Internal.FileSystem.Dir, Streamly.Internal.Data.List, Streamly.Internal.Data.Unicode.Stream, Streamly.Internal.Memory.Unicode.Array, Streamly.Internal.FileSystem.Handle, Streamly.Internal.FileSystem.File, Streamly.Internal.Network.Socket, Streamly.Internal.Network.Inet.TCP, Streamly.Tutorial
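As a closing usage example, the client side of the TCP combinators documented above might look like the following sketch. It assumes TCP.connect and the NS.write fold, a server running on localhost, and an arbitrary port 8091:

    import Network.Socket (close)
    import qualified Streamly.Prelude as S
    import qualified Streamly.Data.Unicode.Stream as U
    import qualified Streamly.Network.Inet.TCP as TCP
    import qualified Streamly.Network.Socket as NS

    -- Open a connection, send one UTF-8 encoded line and close the socket.
    main :: IO ()
    main = do
        sk <- TCP.connect (127, 0, 0, 1) 8091
        S.fold (NS.write sk) $ U.encodeUtf8 $ S.fromList "hello streamly\n"
        close sk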