(c) 2017 Composewell Technologies | BSD-3-Clause | streamly@composewell.com | experimental | GHC

- A monad that can perform concurrent or parallel IO operations. Streams that can be composed concurrently require the underlying monad to be an instance of this class. Since: 0.1.0 (Streamly)
- Fork a thread to run the given computation, installing the provided exception handler. Lifted to any monad with 'MonadBaseControl IO m' capability. TODO: the RunInIO argument can be removed; we can pass the action directly as "mrun action" instead.
- forkIO lifted to any monad with 'MonadBaseControl IO m' capability.
- Fork a thread that is automatically killed as soon as the reference to the returned thread id is garbage collected.

(c) 2019 Composewell Technologies | BSD3 | streamly@composewell.com | experimental | GHC

- Like its standard counterpart, but is not removed by the compiler; it is always present in production code. Pre-release.
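As an illustration of the fork-with-handler idea above, here is a minimal IO-only sketch. The name forkWithHandler is hypothetical; the library's version is lifted to any monad with MonadBaseControl IO capability and additionally offers GC-managed threads.

    import Control.Concurrent (ThreadId, forkIO, threadDelay)
    import Control.Exception (SomeException, catch)

    -- Hypothetical helper: fork a thread to run an action, installing the
    -- given exception handler in the child thread.
    forkWithHandler :: IO () -> (SomeException -> IO ()) -> IO ThreadId
    forkWithHandler action handler = forkIO (action `catch` handler)

    main :: IO ()
    main = do
        _ <- forkWithHandler (error "boom")
                             (\e -> putStrLn ("child failed: " ++ show e))
        threadDelay 100000  -- give the child thread a chance to run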
(c) 2019 Composewell Technologies | BSD3 | streamly@composewell.com | experimental | GHC

- Discard any exceptions or value returned by an effectful action. Pre-release.

(c) 2018-2019 Composewell Technologies | BSD3 | streamly@composewell.com | experimental | GHC

(c) 2020 Composewell Technologies | BSD-3-Clause | streamly@composewell.com | experimental | GHC

- Given a continuation based transformation from a to b and a continuation based transformation from [b] to c, make a continuation based transformation from [a] to c. Pre-release.

(c) 2019 Composewell Technologies, (c) 2013 Gabriel Gonzalez | BSD3 | streamly@composewell.com | experimental | GHC

A strict Either type:
- Return True if the given value is a Left, False otherwise.
- Return True if the given value is a Right, False otherwise.
- Return the contents of a Left value or error out.
- Return the contents of a Right value or error out.

A strict Maybe type:
- Convert a strict Maybe to a lazy Maybe.
- Extract the element out of a Just'; throws an error if the argument is Nothing'.
- Returns True iff its argument is of the form "Just' _".

(c) 2019 Composewell Technologies | BSD3 | streamly@composewell.com | experimental | GHC

- A Sink is a special type of Fold that does not accumulate any value but runs only effects. A Sink has no state to maintain and can therefore be a bit more efficient than a Fold with () as its state, especially when Sinks are composed with other operations. A Sink can be upgraded to a Fold, but a Fold cannot be converted into a Sink.

(c) 2015 Dan Doel | BSD3 | streamly@composewell.com | non-portable

Small arrays:
- Create a new small mutable array.
- Read the element at a given index in a mutable array.
- Write an element at the given index in a mutable array.
- Look up an element in an immutable array. The purpose of returning the result in a monad is to allow the caller to avoid retaining references to the array. Evaluating the return value will cause the array lookup to be performed, even though it may not require the element of the array to be evaluated (which could throw an exception). For instance:

      data Box a = Box a
      ...
      f sa = case indexSmallArrayM sa 0 of Box x -> ...

  Here x is not a closure that references sa, as it would be if we instead wrote:

      let x = indexSmallArray sa 0

  and it does not prevent sa from being garbage collected. Note that a plain wrapper is not adequate for this use, as it is a newtype and cannot be evaluated without evaluating the element.
- Look up an element in an immutable array.
- Read a value from the immutable array at the given index, returning the result in an unboxed unary tuple. This is currently used to implement folds.
- Create a copy of a slice of an immutable array.
- Create a copy of a slice of a mutable array.
- Create an immutable array corresponding to a slice of a mutable array.
- Create a mutable array corresponding to a slice of an immutable array.
- Strict map over the elements of the array.
- Create a small array from a list of a known length. If the length of the list does not match the given length, this throws an exception.
- Create a small array from a list.
- Lexicographic ordering. Subject to change between major versions.
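A minimal sketch of the strict Maybe described earlier in this section. The names here (Maybe', Just', Nothing', toMaybe, fromJust', isJust') are illustrative rather than a statement of the library's exact API.

    data Maybe' a = Just' !a | Nothing'
        deriving Show

    -- Convert the strict Maybe to the standard lazy Maybe.
    toMaybe :: Maybe' a -> Maybe a
    toMaybe (Just' a) = Just a
    toMaybe Nothing'  = Nothing

    -- Extract the element out of a Just'; errors out on Nothing'.
    fromJust' :: Maybe' a -> a
    fromJust' (Just' a) = a
    fromJust' Nothing'  = error "fromJust': Nothing'"

    -- True iff the argument is of the form Just' _.
    isJust' :: Maybe' a -> Bool
    isJust' (Just' _) = True
    isJust' Nothing'  = False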
- Argument notes for the array primitives: creation takes a size and the initial contents; read and write take an array, an index and (for write) a new element; slicing takes a source, an offset and a length; copying takes a destination, a destination offset, a source, a source offset and a length.

(c) 2018 Composewell Technologies | BSD-3-Clause | streamly@composewell.com | experimental | GHC

- A stream is a succession of steps. A Yield produces a single value and the next state of the stream. Stop indicates there are no more values in the stream.

(c) 2021 Composewell Technologies | BSD-3-Clause | streamly@composewell.com | experimental | GHC

- State representing a nested loop.
- A Producer m a b is a generator of a stream of values of type b from a seed of type a in monad m. Pre-release.
- Producer step inject extract.
- Convert a list of pure values to a stream. Pre-release.
- Interconvert the producer between two interconvertible input types. Pre-release.
- Map the producer input to another value of the same type. Pre-release.
- Apply the second unfold to each output element of the first unfold and flatten the output into a single stream. Pre-release.
- Map a function on the output of the producer (the type b).

(c) 2019 Composewell Technologies | BSD3 | streamly@composewell.com | experimental | GHC

- Lazy right associative fold to a stream.

(c) 2017 Composewell Technologies | BSD3 | streamly@composewell.com | experimental | GHC

- Run an action forever, periodically, at the given frequency specified in ticks per second (Hz).
- Run a computation on every clock tick; the clock runs at the specified frequency. It allows running a computation at high frequency efficiently by maintaining a local clock and adjusting it against the provided base clock at longer intervals. The first argument is a base clock returning some notion of time in microseconds. The second argument is the frequency in ticks per second (Hz). The third argument is the action to run; the action is provided the local time as an argument.

(c) 2019 Composewell Technologies | BSD3 | streamly@composewell.com | experimental | GHC

- A data type to represent practically large quantities of time efficiently. It can represent time up to ~292 billion years at nanosecond resolution. Fields: seconds, nanoseconds.

(c) 2019 Composewell Technologies | BSD3 | streamly@composewell.com | pre-release | GHC

- Relative times are relative to some arbitrary point of time. Unlike absolute times they are not relative to a predefined epoch.
- Absolute times are relative to a predefined epoch in time. They are represented using a type that can represent times up to ~292 billion years at a nanosecond resolution.
- A type class for converting between units of time using an intermediate representation with a nanosecond resolution. This system of units can represent up to ~292 years at nanosecond resolution with fast arithmetic operations. NOTE: converting to and from units may truncate the value depending on the original value and the size and resolution of the destination unit.
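A minimal sketch of the step-based stream shape described above (Yield/Stop), assuming our own Stream and Step types; streamly's real representation carries more (for example a Skip constructor and fusion-friendly state).

    {-# LANGUAGE ExistentialQuantification #-}

    -- A stream is a step function plus an initial state.
    data Step s a = Yield a s | Stop

    data Stream m a = forall s. Stream (s -> m (Step s a)) s

    -- Convert a list of pure values to such a stream.
    fromList :: Applicative m => [a] -> Stream m a
    fromList xs = Stream step xs
      where
        step []       = pure Stop
        step (y : ys) = pure (Yield y ys)

    -- A driver: repeatedly run the step until Stop, collecting the values.
    toList :: Monad m => Stream m a -> m [a]
    toList (Stream step s0) = go s0
      where
        go s = do
            r <- step s
            case r of
                Yield x s' -> (x :) <$> go s'
                Stop       -> return []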
- A type class for converting between time units using the widest intermediate representation with a nanosecond resolution. This system of units can represent arbitrarily large times but provides the least efficient arithmetic operations due to arbitrary-precision arithmetic. NOTE: converting to and from units may truncate the value depending on the original value and the size and resolution of the destination unit.
- A type class for converting between units of time using an intermediate representation that can represent up to ~292 billion years at nanosecond resolution with reasonably efficient arithmetic operations. NOTE: converting to and from units may truncate the value depending on the original value and the size and resolution of the destination unit.
- An integer time representation with a millisecond resolution. It can represent time up to ~292 million years.
- An integer time representation with a microsecond resolution. It can represent time up to ~292,000 years.
- An integer time representation with a nanosecond resolution. It can represent time up to ~292 years.
- Convert a time value to an absolute time.
- Convert an absolute time to a time value.
- Convert a time value to a relative time.
- Convert a relative time to a time value.
- Difference between two absolute points of time.
- Convert nanoseconds to a string showing time in an appropriate unit.

(c) 2019 Composewell Technologies, (c) 2009-2012 Cetin Sert, (c) 2010 Eugene Kirpichov | BSD3 | streamly@composewell.com | experimental | GHC

- Clock types. A clock may be system-wide (that is, visible to all processes) or per-process (measuring time that is meaningful only within a process). All implementations shall support CLOCK_REALTIME. (The only suspend-aware monotonic clock is CLOCK_BOOTTIME on Linux.)
- The identifier for the system-wide monotonic clock, which is defined as a clock measuring real time, whose value cannot be set via clock_settime and which cannot have negative clock jumps. The maximum possible clock jump shall be implementation defined. For this clock, the value returned represents the amount of time (in seconds and nanoseconds) since an unspecified point in the past (for example, system start-up time, or the Epoch). This point does not change after system start-up time. Note that the absolute value of the monotonic clock is meaningless (because its origin is arbitrary), and thus there is no need to set it. Furthermore, realtime applications can rely on the fact that the value of this clock is never set.
- The identifier of the system-wide clock measuring real time. For this clock, the value returned represents the amount of time (in seconds and nanoseconds) since the Epoch.
- The identifier of the CPU-time clock associated with the calling process. For this clock, the value returned represents the amount of execution time of the current process.
- The identifier of the CPU-time clock associated with the calling OS thread. For this clock, the value returned represents the amount of execution time of the current OS thread.
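A small sketch of the fixed-width unit conversions described above, using hypothetical newtypes; it also shows the truncation that the NOTE warns about when converting back to a coarser unit.

    import Data.Int (Int64)

    newtype NanoSecond64  = NanoSecond64  Int64 deriving (Eq, Ord, Show)
    newtype MicroSecond64 = MicroSecond64 Int64 deriving (Eq, Ord, Show)

    -- Going to the finer intermediate representation multiplies...
    microToNano :: MicroSecond64 -> NanoSecond64
    microToNano (MicroSecond64 us) = NanoSecond64 (us * 1000)

    -- ...and going back truncates.
    nanoToMicro :: NanoSecond64 -> MicroSecond64
    nanoToMicro (NanoSecond64 ns) = MicroSecond64 (ns `div` 1000)

    main :: IO ()
    main = do
        print (microToNano (MicroSecond64 1500))   -- NanoSecond64 1500000
        print (nanoToMicro (NanoSecond64 1500999)) -- MicroSecond64 1500 (truncated)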
- (since Linux 2.6.28; Linux and Mac OS X) Similar to CLOCK_MONOTONIC, but provides access to a raw hardware-based time that is not subject to NTP adjustments or the incremental adjustments performed by adjtime(3).
- (since Linux 2.6.32; Linux and Mac OS X) A faster but less precise version of CLOCK_MONOTONIC. Use when you need very fast, but not fine-grained timestamps.
- (since Linux 2.6.39; Linux and Mac OS X) Identical to CLOCK_MONOTONIC, except that it also includes any time that the system is suspended. This allows applications to get a suspend-aware monotonic clock without having to deal with the complications of CLOCK_REALTIME, which may have discontinuities if the time is changed using settimeofday(2).
- (since Linux 2.6.32; Linux-specific) A faster but less precise version of CLOCK_REALTIME. Use when you need very fast, but not fine-grained timestamps.

(c) 2017 Composewell Technologies | BSD3 | streamly@composewell.com | experimental | GHC

- Specifies the stream yield rate in yields per second (Hertz). We keep accumulating yield credits at the target rate. At any point of time we allow only as many yields as we have accumulated as per the target rate since the start of time. If the consumer or the producer is slower or faster, the actual rate may fall behind or exceed the target. We try to recover the gap between the two by increasing or decreasing the pull rate from the producer. However, if the gap becomes more than the buffer we try to recover only as much as the buffer allows. The lower rate limit puts a bound on how low the instantaneous rate can go when recovering the rate gap; in other words, it determines the maximum yield latency. Similarly, the upper rate limit puts a bound on how high the instantaneous rate can go when recovering the rate gap; in other words, it determines the minimum yield latency. We reduce latency by increasing concurrency, so the upper limit also puts an upper bound on concurrency. If the specified rate is 0 or negative the stream never yields a value. If the buffer is 0 or negative we do not attempt to recover. Since: 0.5.0 (Streamly). Fields: the lower rate limit, the target rate we want to achieve, the upper rate limit, and the maximum slack from the goal.
- An SVar or a Stream Var is a conduit to the output from multiple streams running concurrently and asynchronously. An SVar can be thought of as an asynchronous IO handle. We can write any number of streams to an SVar in a non-blocking manner and then read them back at any time at any pace. The SVar runs the streams asynchronously and accumulates results. An SVar may not really execute the stream completely and accumulate all the results. However, it ensures that the reader can read the results at whatever pace it wants. The SVar monitors and adapts to the consumer's pace. An SVar is a mini scheduler: it has an associated workLoop that holds the stream tasks to be picked and run by a pool of worker threads. It has an associated output queue where the output stream elements are placed by the worker threads. An outputDoorBell is used by the worker threads to intimate the consumer thread about availability of new results in the output queue. More workers are added to the SVar by fromStreamVar on demand if the output produced is not keeping pace with the consumer. On bounded SVars, workers block on the output queue to provide throttling of the producer when the consumer is not pulling fast enough. The number of workers may even get reduced depending on the consuming pace. New work is enqueued either at the time of creation of the SVar or as a result of executing the parallel combinators, i.e. <| and <|>, when the already enqueued computations get evaluated. See joinStreamVarAsync.
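To make the rate-control parameters described above concrete, here is an illustrative record with the four fields; the field names are ours, not necessarily the library's exact constructor.

    data Rate = Rate
        { rateLow    :: Double  -- lower rate limit; bounds the maximum yield latency
        , rateGoal   :: Double  -- target rate in yields per second
        , rateHigh   :: Double  -- upper rate limit; bounds the minimum yield latency
        , rateBuffer :: Int     -- maximum slack (in yields) from the goal
        } deriving Show

    -- Aim for ~100 yields/sec, allow the instantaneous rate to swing between
    -- 50 and 200 while recovering, and tolerate up to 25 yields of slack.
    sampleRate :: Rate
    sampleRate = Rate { rateLow = 50, rateGoal = 100, rateHigh = 200, rateBuffer = 25 }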
- Identify the type of the SVar. Two computations using the same style can be scheduled on the same SVar.
- Sorting out-of-turn outputs in a heap for Ahead style streams.
- Events that a child thread may send to a parent thread.
- Adapt the stream state from one type to another.
- When we run computations concurrently, we completely isolate the state of the concurrent computations from the parent computation. The invariant is that we should never be running two concurrent computations in the same thread without using the runInIO function. Also, we should never be running a concurrent computation in the parent thread, otherwise it may affect the state of the parent, which is against the defined semantics of concurrent execution.
- This function is used by the producer threads to queue output for the consumer thread to consume. Returns whether the queue has more space.
- In contrast to pushWorker, which always happens only from the consumer thread, a pushWorkerPar can happen concurrently from multiple threads on the producer side. So we need to use a thread safe modification of workerThreads. Alternatively, we can use a CreateThread event to avoid using a CAS based modification.
- Write a stream to an SVar in a non-blocking manner. The stream can then be read back from the SVar using fromSVar.

(c) 2017 Composewell Technologies | BSD3 | streamly@composewell.com | experimental | GHC

- Class of types that can represent a stream of elements of some type a in some monad m. Since: 0.2.0 (Streamly)
- Constructs a stream by adding a monadic action at the head of an existing stream. For example:

      > toList $ getLine `consM` getLine `consM` nil
      hello
      world
      ["hello","world"]

  Concurrent (do not use fromParallel to construct infinite streams).
- Operator equivalent of consM. We can read it as "parallel colon" to remember that | comes before :.

      > toList $ getLine |: getLine |: nil
      hello
      world
      ["hello","world"]

      let delay = threadDelay 1000000 >> print 1
      drain $ fromSerial   $ delay |: delay |: delay |: nil
      drain $ fromParallel $ delay |: delay |: delay |: nil

  Concurrent (do not use fromParallel to construct infinite streams).
- The type Stream m a represents a monadic stream of values of type a constructed using actions in monad m. It uses stop, singleton and yield continuations equivalent to a direct style stream type. For example:

      toList $ 1 `cons` 2 `cons` 3 `cons` nil
      [1,2,3]

- Operator equivalent of cons.

      > toList $ 1 .: 2 .: 3 .: nil
      [1,2,3]

- An empty stream.

      > toList nil
      []

- An empty stream producing a side effect.

      > toList (nilM (print "nil"))
      "nil"
      []

  Pre-release.
- Fold a stream by providing an SVar, a stop continuation, a singleton continuation and a yield continuation. The stream would share the current SVar passed via the State.
- Fold a stream by providing a State, a stop continuation, a singleton continuation and a yield continuation. The stream will not use the SVar passed via State.
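A minimal sketch of the continuation-passing stream described above: running a stream means supplying a stop continuation, a singleton continuation and a yield continuation. The real type also threads scheduler state and is wrapped behind the stream type class, so this is an assumption-level illustration only.

    {-# LANGUAGE RankNTypes #-}

    newtype Stream m a = Stream
        { runStream :: forall r.
               m r                        -- stop continuation
            -> (a -> m r)                 -- singleton continuation (last element)
            -> (a -> Stream m a -> m r)   -- yield continuation (element and rest)
            -> m r
        }

    nil :: Stream m a
    nil = Stream $ \stp _ _ -> stp

    cons :: a -> Stream m a -> Stream m a
    cons x xs = Stream $ \_ _ yld -> yld x xs

    toList :: Monad m => Stream m a -> m [a]
    toList (Stream run) =
        run (return []) (\x -> return [x]) (\x xs -> (x :) <$> toList xs)

    -- toList (1 `cons` 2 `cons` 3 `cons` nil) yields [1,2,3]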
- Fold sharing the SVar state within the reconstructed stream.
- Lazy right associative fold to a stream.
- Like the above but shares the SVar state across computations.
- Lazy right fold with a monadic step function.
- Strict left fold with an extraction function. Like the standard strict left fold, but applies a user supplied extraction function (the third argument) to the folded value at the end. This is designed to work with the foldl library. The suffix x is a mnemonic for extraction. Note that the accumulator is always evaluated, including the initial value.
- Strict left associative fold.
- Appends two streams sequentially, yielding all elements from the first stream, and then all elements from the second stream.

      import Streamly.Prelude (serial)
      stream1 = Stream.fromList [1,2]
      stream2 = Stream.fromList [3,4]
      Stream.toList $ stream1 `serial` stream2
      [1,2,3,4]

  This operation can be used to fold an infinite lazy container of streams. Since: 0.2.0 (Streamly)
- Detach a stream from an SVar.
- Perform a concatMap using a specified concat strategy. The first argument specifies a merge or concat function that is used to merge the streams generated by the map function. For example, the concat function could be serial, parallel, async, ahead or any other zip or merge function.

(c) 2020 Composewell Technologies and Contributors | BSD-3-Clause | streamly@composewell.com | experimental | GHC

- A finalizer has an associated IO action that is automatically called whenever the finalizer is garbage collected. The action can be run and cleared prematurely. You can hold a reference to the finalizer in your data structure; if the data structure gets garbage collected the finalizer will be called. It is implemented using weak references. Pre-release.
- Create a finalizer that calls the supplied function automatically when it is garbage collected. The finalizer is always run using the state of the monad that is captured at the time of calling newFinalizer. Note: to run it on garbage collection we have no option but to use the monad state captured at some earlier point of time. For the case when the finalizer is run manually before GC we could run it with the current state of the monad, but we want to keep both cases consistent. Pre-release.
- Run the action associated with the finalizer and deactivate it so that it never runs again. Note that the finalizing action runs with async exceptions masked. Pre-release.
- Run an action, clearing the finalizer atomically with respect to async exceptions. The action is run with async exceptions masked. Pre-release.

(c) 2019 Composewell Technologies, (c) 2013 Gabriel Gonzalez | BSD3 | streamly@composewell.com | experimental | GHC

- A strict (,,,).
- A strict (,,).
- A strict (,).

(c) 2019 Composewell Technologies | BSD3 | streamly@composewell.com | experimental | GHC

- A Pipe represents a stateful transformation over an input stream of values of type a to outputs of type b in monad m.
- The composed pipe distributes the input to both the constituent pipes and zips the outputs of the two using a supplied zipping function.
- The composed pipe distributes the input to both the constituent pipes and merges the outputs of the two.
- Lift a pure function to a Pipe.
- Compose two pipes such that the output of the second pipe is attached to the input of the first pipe.
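A simplified sketch of the run-once finalizer described above, using an IORef so that the action runs at most once with async exceptions masked. The GC-triggered path (via a weak reference) and the thread-safety (atomic modification) of the real implementation are deliberately omitted here.

    import Control.Exception (mask_)
    import Data.IORef (IORef, newIORef, readIORef, writeIORef)

    newtype Finalizer = Finalizer (IORef (Maybe (IO ())))

    newFinalizer :: IO () -> IO Finalizer
    newFinalizer action = Finalizer <$> newIORef (Just action)

    -- Run the action and deactivate the finalizer so it never runs again.
    runFinalizer :: Finalizer -> IO ()
    runFinalizer (Finalizer ref) = mask_ $ do
        m <- readIORef ref
        case m of
            Just action -> writeIORef ref Nothing >> action
            Nothing     -> return ()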
(c) 2019 Composewell Technologies | BSD3 | streamly@composewell.com | experimental | GHC

- Lift a monadic function to a Pipe.

(c) 2019 Composewell Technologies, (c) 2013 Gabriel Gonzalez | BSD3 | streamly@composewell.com | experimental | GHC

- An experimental type to provide a side input to the fold for generating the initial state. For example, if we have to fold chunks of a stream and write each chunk to a different file, then we can generate the file name using a monadic action. This is a generalized version of the Fold type. Internal.
- Fold step inject extract.
- The type Fold m a b, having constructor "Fold step initial extract", represents a fold over an input stream of values of type a to a final value of type b in monad m. The fold uses an intermediate state s as the accumulator; the type s is internal to the specific fold definition. The initial value of the fold state s is returned by initial. The step function consumes an input and either returns the final result b, if the fold is done, or the next intermediate state. At any point the fold driver can extract the result from the intermediate state using the extract function. NOTE: the constructor is not yet exposed via exposed modules; smart constructors are provided to create folds. If you think you need the constructor of this type please consider using the smart constructors in Streamly.Internal.Data.Fold instead. Since 0.8.0 (type changed)
- Fold step initial extract.
- Represents the result of the step of a Fold. Partial returns an intermediate state of the fold; the fold step can be called again with the state, or the driver can use extract on the state to get the result out. Done returns the final result and the fold cannot be driven further. Pre-release.
- Map a monadic function on the output of a fold.
- Make a fold from a left fold style pure step function and an initial value of the accumulator. If your step returns only Partial (i.e. never returns a Done) then you can use the foldl' constructors.

      (s -> a -> s) -> s -> (s -> b) -> Fold m a b
      mkfoldlx step initial extract = fmap extract (foldl' step initial)

  See also: Streamly.Prelude.foldl'
- Make a fold from a left fold style monadic step function and an initial value of the accumulator. A fold with an extract function can be expressed using rmapM:

      mkFoldlxM :: Functor m => (s -> a -> m s) -> m s -> (s -> m b) -> Fold m a b
      mkFoldlxM step initial extract = rmapM extract (foldlM' step initial)

  See also: Streamly.Prelude.foldlM'
- Make a strict left fold, for non-empty streams, using the first element as the starting value. Returns Nothing if the stream is empty. See also: Streamly.Prelude.foldl1'. Pre-release.
- Make a fold using a right fold style step function and a terminal value. It performs a strict right fold via a left fold using function composition. Note that this is a strict fold; it can only be useful for constructing strict structures in memory. For reductions this will be very inefficient. For example:

      toList = foldr (:) []

- Like the above but with a monadic step function. For example:

      toList = foldrM (\a xs -> return $ a : xs) (return [])

  Pre-release.
- Make a terminating fold using a pure step function, a pure initial state and a pure state extraction function. Pre-release.
- Similar to the above but the final state extracted is identical to the intermediate state.

      mkFold_ step initial = mkFold step initial id

  Pre-release.
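A self-contained sketch of the fold shape described above ("Fold step initial extract" with a Partial/Done step result), together with a tiny driver. It mirrors the description rather than the library's exact constructor, so treat the types and names as assumptions.

    {-# LANGUAGE ExistentialQuantification #-}

    data Step s b = Partial s | Done b

    data Fold m a b = forall s. Fold
        (s -> a -> m (Step s b))   -- step
        (m (Step s b))             -- initial
        (s -> m b)                 -- extract

    -- An accumulating fold that never terminates early: sum.
    sumF :: (Monad m, Num a) => Fold m a a
    sumF = Fold (\s a -> return (Partial (s + a))) (return (Partial 0)) return

    -- A terminating fold: the first element, if any.
    headF :: Monad m => Fold m a (Maybe a)
    headF = Fold (\() a -> return (Done (Just a)))
                 (return (Partial ()))
                 (\() -> return Nothing)

    -- A driver that feeds a list to a fold, honoring early termination.
    foldList :: Monad m => Fold m a b -> [a] -> m b
    foldList (Fold step initial extract) xs0 = initial >>= go xs0
      where
        go _        (Done b)    = return b
        go []       (Partial s) = extract s
        go (x : xs) (Partial s) = step s x >>= go xs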
- Make a terminating fold with an effectful step function and initial state, and a state extraction function. We can just use the constructor directly, but this is provided for completeness. Pre-release.
- Similar to the above but the final state extracted is identical to the intermediate state.

      mkFoldM_ step initial = mkFoldM step initial return

  Pre-release.
- Convert the more general side-input fold type into the simpler Fold type. Internal.
- A fold that drains all its input, running the effects and discarding the results.

      drain = drainBy (const (return ()))

- Folds the input stream to a list. Warning: working on large lists accumulated as buffers in memory could be very inefficient; consider using Streamly.Data.Array.Foreign instead.

      toList = foldr (:) []

- A fold that always yields a pure value without consuming any input. Pre-release.
- A fold that always yields the result of an effectful action without consuming any input. Pre-release.
- Sequential fold application. Apply two folds sequentially to an input stream. The input is provided to the first fold; when it is done, the remaining input is provided to the second fold. When the second fold is done or if the input stream is over, the outputs of the two folds are combined using the supplied function.

      f = Fold.serialWith (,) (Fold.take 8 Fold.toList) (Fold.takeEndBy (== '\n') Fold.toList)
      Stream.fold f $ Stream.fromList "header: hello\n"
      ("header: ","hello\n")

  Note: this is dual to appending streams. Note: this implementation allows for stream fusion but has quadratic time complexity, because each composition adds a new branch that each subsequent fold's input element has to traverse; therefore, it cannot scale to a large number of compositions. After around 100 compositions the performance starts dipping rapidly compared to a CPS style implementation. Time: O(n^2) where n is the number of compositions.
- Same as the applicative *>. Run two folds serially one after the other, discarding the result of the first. Unimplemented.
- teeWith k f1 f2 distributes its input to both f1 and f2 until both of them terminate, and combines their output using k.

      avg = Fold.teeWith (/) Fold.sum (fmap fromIntegral Fold.length)
      Stream.fold avg $ Stream.fromList [1.0..100.0]
      50.5

      teeWith k f1 f2 = fmap (uncurry k) (Fold.tee f1 f2)

  For applicative composition using this combinator see Streamly.Internal.Data.Fold.Tee.
- Like teeWith but terminates as soon as the first fold terminates. Unimplemented.
- Like teeWith but terminates as soon as any one of the two folds terminates. Unimplemented.
- Shortest alternative. Apply both folds in parallel but choose the result from the one which consumed the least input, i.e. take the shortest succeeding fold. Unimplemented.
- Longest alternative. Apply both folds in parallel but choose the result from the one which consumed more input, i.e. take the longest succeeding fold. Unimplemented.
- Map a fold-returning function on the result of a fold and run the returned fold. This operation can be used to express data dependencies between fold operations. Let's say the first element in the stream is a count of the following elements that we have to add; then:

      import Data.Maybe (fromJust)
      count = fmap fromJust Fold.head
      total n = Fold.take n Fold.sum
      Stream.fold (Fold.concatMap total count) $ Stream.fromList [10,9..1]
      45

  Time: O(n^2) where n is the number of compositions.
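To show how teeWith distributes one input stream to two folds, here is a sketch on a simplified, non-terminating fold representation. SimpleFold, teeWith and foldList here are our own illustrative definitions; the library's version also handles early termination and monadic effects.

    {-# LANGUAGE ExistentialQuantification #-}

    import Data.List (foldl')

    data SimpleFold a b = forall s. SimpleFold (s -> a -> s) s (s -> b)

    teeWith :: (b -> c -> d) -> SimpleFold a b -> SimpleFold a c -> SimpleFold a d
    teeWith k (SimpleFold stepL s0L outL) (SimpleFold stepR s0R outR) =
        SimpleFold step (s0L, s0R) out
      where
        step (sl, sr) a = (stepL sl a, stepR sr a)  -- each input goes to both folds
        out  (sl, sr)   = k (outL sl) (outR sr)     -- combine the two results

    foldList :: SimpleFold a b -> [a] -> b
    foldList (SimpleFold step s0 out) = out . foldl' step s0

    -- avg = sum combined with length, divided at the end.
    avg :: SimpleFold Double Double
    avg = teeWith (/) (SimpleFold (+) 0 id) (SimpleFold (\n _ -> n + 1) 0 id)

    -- foldList avg [1..100] == 50.5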
- lmap f fold maps the function f on the input of the fold.

      Stream.fold (Fold.lmap (\x -> x * x) Fold.sum) (Stream.enumerateFromTo 1 100)
      338350

      lmap = Fold.lmapM return

- lmapM f fold maps the monadic function f on the input of the fold.
- Include only those elements that pass a predicate.

      Stream.fold (Fold.filter (> 5) Fold.sum) $ Stream.fromList [1..10]
      40

      filter f = Fold.filterM (return . f)

- Like filter but with a monadic predicate.
- Modify a fold to receive an optional input; the wrapped values are unwrapped and sent to the original fold, and the empty values are discarded.
- Take at most n input elements and fold them using the supplied fold. A negative count is treated as 0.

      Stream.fold (Fold.take 2 Fold.toList) $ Stream.fromList [1..10]
      [1,2]

- Modify the fold such that it returns a new fold instead of the output. If the fold was already done the returned fold would always yield the result. If the fold was partial, the returned fold starts from where we left off, i.e. it uses the last accumulator value as the initial value of the accumulator. Thus we can resume the fold later and feed it more input.

      :{
      do
        more <- Stream.fold (Fold.duplicate Fold.sum) (Stream.enumerateFromTo 1 10)
        evenMore <- Stream.fold (Fold.duplicate more) (Stream.enumerateFromTo 11 20)
        Stream.fold evenMore (Stream.enumerateFromTo 21 30)
      :}
      465

  Pre-release.
- Run the initialization effect of a fold. The returned fold would use the value returned by this effect as its initial value. Pre-release.
- Run one step of a fold and store the accumulator as an initial value in the returned fold. Pre-release.
- Collect zero or more applications of a fold. "many split collect" applies the split fold repeatedly on the input stream and accumulates zero or more fold results using collect.

      two = Fold.take 2 Fold.toList
      twos = Fold.many two Fold.toList
      Stream.fold twos $ Stream.fromList [1..10]
      [[1,2],[3,4],[5,6],[7,8],[9,10]]

  Stops when collect stops.
- Like many, but the inner fold emits an output at the end even if no input is received. Internal.
- "chunksOf n split collect" repeatedly applies the split fold to chunks of n items in the input stream and supplies the result to the collect fold.

      twos = Fold.chunksOf 2 Fold.toList Fold.toList
      Stream.fold twos $ Stream.fromList [1..10]
      [[1,2],[3,4],[5,6],[7,8],[9,10]]

      chunksOf n split = many (take n split)

  Stops when collect stops.
- "takeInterval n fold" uses fold to fold the input items arriving within a window of the first n seconds.

      Stream.fold (Fold.takeInterval 1.0 Fold.toList) $ Stream.delay 0.1 $ Stream.fromList [1..]
      [1,2,3,4,5,6,7,8,9,10,11]

  Stops when fold stops or when the timeout occurs. Note that the fold needs an input after the timeout in order to stop. For example, if no input is pushed to the fold until one hour after the timeout had occurred, then the fold will be done only after consuming that input. Pre-release.
- Group the input stream into windows of n seconds each using the first fold and then fold the resulting groups using the second fold.

      intervals = Fold.intervalsOf 0.5 Fold.toList Fold.toList
      Stream.fold intervals $ Stream.delay 0.2 $ Stream.fromList [1..10]
      [[1,2,3,4],[5,6,7],[8,9,10]]

      intervalsOf n split = many (takeInterval n split)

  Pre-release.
- The Functor instance maps a function on the output of the fold (the type b).
(c) 2017 Composewell Technologies | BSD3 | streamly@composewell.com | experimental | GHC

- Same as fromEffect.
- Generate an infinite stream by repeating a monadic value.

      repeatM = fix . cons
      repeatM = cycle1 . fromPure

  Pre-release.
- Construct a stream from a Foldable containing pure values.
- Lazy right associative fold.
- Right associative fold to an arbitrary transformer monad.
- Like foldx, but with a monadic step function.
- Like the above but with a monadic step function.
- Lazy left fold to a stream.
- Lazy left fold to an arbitrary transformer monad.
- Drain a stream:

      drain = foldl' (\_ _ -> ()) ()
      drain = mapM_ (\_ -> return ())

- We can define cyclic structures using let:

      let (a, b) = ([1, b], head a) in (a, b)
      ([1,1],1)

  The function fix, defined as

      fix f = let x = f x in x

  ensures that the argument of a function and its output refer to the same lazy value x, i.e. the same location in memory. Thus x can be defined in terms of itself, creating structures with cyclic references.

      import Data.Function (fix)
      f ~(a, b) = ([1, b], head a)
      fix f
      ([1,1],1)

  Monadic fixpoint is essentially the same as fix but for monadic values. Using it for streams we can construct a stream in which each element of the stream is defined in a cyclic fashion. The argument of the function being fixed represents the current element of the stream which is being returned by the stream monad. Thus, we can use the argument to construct itself. In the following example, the argument "action" of the function f represents the tuple (x,y) returned by it in a given iteration. We define the first element of the tuple in terms of the second.

      import Streamly.Internal.Data.Stream.IsStream as Stream
      import System.IO.Unsafe (unsafeInterleaveIO)

      main = do
          Stream.mapM_ print $ Stream.mfix f
          where
          f action = do
              let incr n act = fmap ((+n) . snd) $ unsafeInterleaveIO act
              x <- Stream.fromListM [incr 1 action, incr 2 action]
              y <- Stream.fromList [4,5]
              return (x, y)

  Note: you cannot achieve this by just changing the order of the monad statements, because that would change the order in which the stream elements are generated. Note that the function f must be lazy in its argument; that is why we use unsafeInterleaveIO on action, because the IO monad is strict. Pre-release.
- Extract the last element of the stream, if any.
- Apply a monadic action to each element of the stream and discard the output of the action.
- Zip two streams serially using a pure zipping function.
- Zip two streams serially using a monadic zipping function.

(c) 2019 Composewell Technologies | BSD3 | streamly@composewell.com | experimental | GHC

- Convert a Sink to a Fold. When you want to compose sinks and folds together, upgrade a sink to a fold before composing.
- Distribute one copy each of the input to both the sinks.

                      |-------Sink m a
      ---stream m a---|
                      |-------Sink m a

      > let pr x = Sink.drainM (putStrLn . ((x ++ " ") ++) . show)
      > sink (Sink.tee (pr "L") (pr "R")) (S.enumerateFromTo 1 2)
      L 1
      R 1
      L 2
      R 2

- Distribute copies of the input to all the sinks in a container.

                      |-------Sink m a
      ---stream m a---|
                      |-------Sink m a
                      |
                      ...

      > let pr x = Sink.drainM (putStrLn . ((x ++ " ") ++) . show)
      > sink (Sink.distribute [(pr "L"), (pr "R")]) (S.enumerateFromTo 1 2)
      L 1
      R 1
      L 2
      R 2

  This is the consumer side dual of the producer side sequencing operation.
- Demultiplex to multiple consumers without collecting the results. Useful to run different effectful computations depending on the value of the stream elements, for example handling network packets of different types using different handlers.

                        |-------Sink m a
      -----stream m a-----Map-----|
                        |-------Sink m a
                        |
                        ...

      > let pr x = Sink.drainM (putStrLn . ((x ++ " ") ++) . show)
      > let table = Data.Map.fromList [(1, pr "One"), (2, pr "Two")]
        in Sink.sink (Sink.demux id table) (S.enumerateFromTo 1 100)
      One 1
      Two 2
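A sketch of the effects-only Sink idea from this section, modeled as a plain function from input to effect. The names are illustrative and the library defines its Sink in terms of Fold, so take this as an assumption-level analogy.

    newtype Sink m a = Sink (a -> m ())

    -- Run an effect for every input.
    drainM :: (a -> m ()) -> Sink m a
    drainM = Sink

    -- Distribute one copy of each input to both sinks.
    tee :: Monad m => Sink m a -> Sink m a -> Sink m a
    tee (Sink f) (Sink g) = Sink (\a -> f a >> g a)

    -- Map a pure function on the input of a sink.
    lmap :: (b -> a) -> Sink m a -> Sink m b
    lmap f (Sink g) = Sink (g . f)

    -- Feed a list to a sink.
    sinkList :: Monad m => Sink m a -> [a] -> m ()
    sinkList (Sink f) = mapM_ f

    -- sinkList (tee (drainM print) (lmap negate (drainM print))) [1, 2 :: Int]
    -- prints 1, -1, 2, -2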
- Split elements in the input stream into two parts using a monadic unzip function, and direct each part to a different sink.

                             |-------Sink m b
      -----Stream m a----(b,c)--|
                             |-------Sink m c

      > let pr x = Sink.drainM (putStrLn . ((x ++ " ") ++) . show)
        in Sink.sink (Sink.unzip return (pr "L") (pr "R")) (S.fromPure (1,2))
      L 1
      R 2

- Same as the above but with a pure unzip function.
- Map a pure function on the input of a Sink.
- Map a monadic function on the input of a Sink.
- Filter the input of a Sink using a pure predicate function.
- Filter the input of a Sink using a monadic predicate function.
- Drain all input, running the effects and discarding the results.
- drainM f = lmapM f drain

- This exception is used for two purposes: when a parser ultimately fails, the user of the parser is intimated via this exception; and when the "extract" function of a parser needs to throw an error. Pre-release.
- A parser is a fold that can fail and is represented as "Parser step initial extract". Before we drive a parser we call the initial action to retrieve the initial state of the fold. The parser driver invokes step with the state returned by the previous step and the next input element. It results in a new state and a command to the driver represented by the step result type. The driver keeps invoking the step function until it stops or fails. At any point of time the driver can call extract to inspect the result of the fold. It may result in an error or an output value. Pre-release.
- The return type of a parser step. The parse operation feeds the input stream to the parser one element at a time, representing a parse step. The parser may or may not consume the item and returns a result. If the result is Partial we can either extract the result or feed more input to the parser. If the result is Continue, we must feed more input in order to get a result. If the parser returns Done then the parser can no longer take any more input. If the result is Partial, the parse operation retains the input in a backtracking buffer, in case the parser may ask to backtrack in the future. Whenever a "Partial n" result is returned we first backtrack by n elements in the input and then release any remaining backtracking buffer. Similarly, "Continue n" backtracks to n elements before the current position and starts feeding the input from that point for future invocations of the parser. If the parser is not yet done, we can use the extract operation on the state of the parser to extract a result. If the parser has not yet yielded a result, the operation fails with the parse error exception described above. If the parser yielded a Partial result in the past, the last partial result is returned. Therefore, if a parser yields a partial result once it cannot fail later on. The parser can never backtrack beyond the position where the last partial result left it; the parser must ensure that the backtrack position is always after that. Pre-release.
- Partial result with an optional backtrack request. "Partial count state" means a partial result is available which can be extracted successfully; state is the opaque state of the parser to be supplied to the next invocation of the step operation. The current input position is reset to count elements back and any input before that is dropped from the backtrack buffer.
- Need more input, with an optional backtrack request. "Continue count state" means the parser has consumed the current input but no new result is generated; state is the next state of the parser. The current input is retained in the backtrack buffer and the input position is reset to count elements back.
- Done, with leftover input count and result. "Done count result" means the parser has finished; it will accept no more input, the last count elements from the input are unused, and the result of the parser is in result.
- Parser failed without generating any output. The parsing operation may backtrack to the beginning and try another alternative.
- The type of a parser's initial action. Internal. Its cases: wait for the step function to be called with state s; return a result right away without an input; or return an error right away without an input.
- Map a monadic function on the output of a parser. Pre-release.
- Note: this implementation of serialWith is fast because of stream fusion but has quadratic time complexity, because each composition adds a new branch that each subsequent parse's input element has to go through; therefore, it cannot scale to a large number of compositions. After around 100 compositions the performance starts dipping rapidly beyond a CPS style unfused implementation. Pre-release.
- Works correctly only if the first parser is guaranteed to never fail.
- Like splitMany, but the inner fold emits an output at the end even if no input is received. Internal.
- The Functor instance maps a function over the result held by the step. Internal.
- The sequencing instance is not lazy in the second argument; code that passes an undefined second parser after a succeeding first parser will fail.

(c) 2020 Composewell Technologies | BSD-3-Clause | streamly@composewell.com | experimental | GHC

- A continuation passing style parser representation.
- Convert a direct style parser to a CPS style parser. Pre-release.
- Convert a CPS style parser to a direct style parser. "initial" returns a continuation which can be called one input at a time using the "step" function. Pre-release.
- A parser that always yields a pure value without consuming any input. Pre-release.
- A parser that always fails with an error message without consuming any input. Pre-release.
- empty aborts the parser; <|> selects the first succeeding parser. Pre-release.
- The Alternative instance backtracks and runs the second parser if the first one fails. The "some" and "many" operations of Alternative accumulate results in a pure list, which is not scalable and streaming. Instead use the fusible many/some combinators for composable accumulation of results. This Alternative instance does not fuse; use the direct style alternative combinator when you need fusion.
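A sketch of the direct-style parser step result described earlier in this section, with the backtrack counts carried by each constructor. The concrete representation is an assumption based on the descriptions above, not a quote of the library's definition.

    -- n is the number of input elements to backtrack before continuing.
    data Step s b
        = Partial Int s    -- a partial result is available; older backtrack buffer is dropped
        | Continue Int s   -- no result yet; the input stays in the backtrack buffer
        | Done Int b       -- finished; the last n input elements are unused
        | Error String     -- failed without producing a result

    -- A driver could dispatch on the step like this:
    describe :: Step s b -> String
    describe (Partial n _)  = "partial result, backtrack " ++ show n
    describe (Continue n _) = "need more input, backtrack " ++ show n
    describe (Done n _)     = "done, " ++ show n ++ " elements unused"
    describe (Error e)      = "failed: " ++ e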
- Monad composition can be used for lookbehind parsers: we can make future parses depend on the previously parsed values. If we have to parse "a9" or "9a" but not "99" or "aa" we can use the following parser:

      backtracking :: MonadCatch m => PR.Parser m Char String
      backtracking =
          sequence [PR.satisfy isDigit, PR.satisfy isAlpha]
          <|>
          sequence [PR.satisfy isAlpha, PR.satisfy isDigit]

  We know that if the first parse resulted in a digit at the first place then the second parse is going to fail. However, we waste that information and parse the first character again in the second parse, only to learn that it is not an alphabetic char. By using lookbehind in a monadic composition we can avoid the redundant work:

      data DigitOrAlpha = Digit Char | Alpha Char

      lookbehind :: MonadCatch m => PR.Parser m Char String
      lookbehind = do
          x1 <- Digit <$> PR.satisfy isDigit
                <|>
                Alpha <$> PR.satisfy isAlpha
          -- Note: the parse depends on what we parsed already
          x2 <- case x1 of
              Digit _ -> PR.satisfy isAlpha
              Alpha _ -> PR.satisfy isDigit
          return $ case x1 of
              Digit x -> [x, x2]
              Alpha x -> [x, x2]

  This Monad instance does not fuse; use the direct style combinators when you need fusion.
- This sequencing operation does not fuse; use the direct style combinator when fusion is important.
- Maps a function over the output of the parser.

(c) 2020 Composewell Technologies | BSD-3-Clause | streamly@composewell.com | experimental | GHC

- The tee-style parser combinators in this module are currently broken or unimplemented.

(c) 2020 Composewell Technologies | BSD-3-Clause | streamly@composewell.com | experimental | GHC

- "span p f1 f2" composes folds f1 and f2 such that f1 consumes the input as long as the predicate p is True, and f2 consumes the rest of the input.

      > let span_ p xs = Stream.parse (Parser.span p Fold.toList Fold.toList) $ Stream.fromList xs
      > span_ (< 1) [1,2,3]
      ([],[1,2,3])
      > span_ (< 2) [1,2,3]
      ([1],[2,3])
      > span_ (< 4) [1,2,3]
      ([1,2,3],[])

  Pre-release.
- Break the input stream into two groups: the first group takes the input as long as the predicate applied to the first element of the stream and the next input element holds True; the second group takes the rest of the input. Pre-release.
- Like the above but applies the predicate in a rolling fashion, i.e. the predicate is applied to the previous and the next input elements. Pre-release.

(c) 2021 Composewell Technologies | BSD-3-Clause | streamly@composewell.com | experimental | GHC

- A seed with a buffer. It allows us to return some data back after reading it. Useful in backtracked parsing.
- Make a source from a seed value. The buffer starts out empty. Pre-release.
- Return some unused data back to the source. The data is prepended (or consed) to the source. Pre-release.
- Determine if the source is empty.
- Convert a producer to a producer from a buffered source. Any buffered data is read first and then the seed is unfolded. Pre-release.
- Parse a buffered source using a parser, returning the parsed value and the remaining source. Pre-release.
- Apply a parser repeatedly on a buffered source producer to generate a producer of parsed values. Pre-release.

(c) 2020 Composewell Technologies | BSD-3-Clause | streamly@composewell.com | pre-release | GHC

- Make a parser from a fold. Pre-release.
- A parser that always yields a pure value without consuming any input. Pre-release.
- A parser that always yields the result of an effectful action without consuming any input. Pre-release.
- A parser that always fails with an error message without consuming any input. Pre-release.
- A parser that always fails with an effectful error message and without consuming any input. Pre-release.
- Peek the head element of a stream without consuming it. Fails if it encounters end of input.

      Stream.parse ((,) <$> Parser.peek <*> Parser.satisfy (> 0)) $ Stream.fromList [1]
      (1,1)

      peek = lookAhead (satisfy True)

  Pre-release.
- Succeeds if we are at the end of input, fails otherwise.

      Stream.parse ((,) <$> Parser.satisfy (> 0) <*> Parser.eof) $ Stream.fromList [1]
      (1,())

  Pre-release.
- Returns the next element if it passes the predicate, fails otherwise.

      Stream.parse (Parser.satisfy (== 1)) $ Stream.fromList [1,0,1]
      1

  Pre-release.
- Map a Maybe-returning function on the next element in the stream. The parser fails if the function returns Nothing, otherwise it returns the Just value. Pre-release.
- Map an Either-returning function on the next element in the stream. If the function returns "Left err" the parser fails with the error message err, otherwise it returns the Right value. Pre-release.
- "takeBetween m n" takes a minimum of m and a maximum of n input elements and folds them using the supplied fold. Stops after n elements. Fails if the stream ends before m elements could be taken. Examples:

      :{
      takeBetween' low high ls = Stream.parse prsr (Stream.fromList ls)
          where prsr = Parser.takeBetween low high Fold.toList
      :}

      takeBetween' 2 4 [1, 2, 3, 4, 5]
      [1,2,3,4]
      takeBetween' 2 4 [1, 2]
      [1,2]
      takeBetween' 2 4 [1]
      *** Exception: ParseError "takeBetween: Expecting alteast 2 elements, got 1"
      takeBetween' 0 0 [1, 2]
      []
      takeBetween' 0 1 []
      []

  takeBetween is the most general take operation; other take operations can be defined in terms of takeBetween. For example:

      take   = takeBetween 0 n  -- equivalent of take
      take1  = takeBetween 1 n  -- equivalent of takeLE1
      takeEQ = takeBetween n n
      takeGE = takeBetween n maxBound

  Pre-release.
- Stops after taking exactly n input elements. Fails if the stream or the collecting fold ends before it can collect exactly n elements.

      Stream.parse (Parser.takeEQ 4 Fold.toList) $ Stream.fromList [1,0,1]
      *** Exception: ParseError "takeEQ: Expecting exactly 4 elements, input terminated on 3"

  Pre-release.
- Take at least n input elements, but can collect more. Stops when the collecting fold stops. Fails if the stream or the collecting fold ends before producing n elements.

      Stream.parse (Parser.takeGE 4 Fold.toList) $ Stream.fromList [1,0,1]
      *** Exception: ParseError "takeGE: Expecting at least 4 elements, input terminated on 3"
      Stream.parse (Parser.takeGE 4 Fold.toList) $ Stream.fromList [1,0,1,0,1]
      [1,0,1,0,1]

  Pre-release.
- Like takeWhile but uses a parser instead of a fold to collect the input.
  The combinator stops when the condition fails or if the collecting parser stops. This is a generalized version of takeWhile; for example takeWhile1 can be implemented in terms of this:

      takeWhile1 cond p = takeWhile cond (takeBetween 1 maxBound p)

  Stops when the condition fails or the collecting parser stops. Fails when the collecting parser fails. Unimplemented.
- Collect stream elements until an element fails the predicate. The element on which the predicate fails is returned back to the input stream. Stops when the predicate fails or the collecting fold stops. Fails never.

      Stream.parse (Parser.takeWhile (== 0) Fold.toList) $ Stream.fromList [0,0,1,0,1]
      [0,0]

  We can implement a breakOn using takeWhile:

      breakOn p = takeWhile (not p)

  Pre-release.
- Like takeWhile but takes at least one element, otherwise fails. Pre-release.
- Drain the input as long as the predicate succeeds, running the effects and discarding the results. This is also called skipWhile in some parsing libraries. Pre-release.
- "sliceSepByP cond parser" parses a slice of the input using parser until cond succeeds or the parser stops. This is a generalized slicing parser which can be used to implement other parsers, e.g.:

      sliceSepByMax cond n p = sliceSepByP cond (take n p)
      sliceSepByBetween cond m n p = sliceSepByP cond (takeBetween m n p)

  Pre-release.
- Like sliceSepBy but does not drop the separator element; instead the separator is emitted as a separate element in the output. Unimplemented.
- Collect stream elements until an element passes the predicate; return the last element on which the predicate succeeded back to the input stream. If the predicate succeeds on the first element itself then the parser does not terminate there. The succeeding element in the leading position is treated as a prefix separator which is kept in the output segment.

      :{
      sliceBeginWithOdd ls = Stream.parse prsr (Stream.fromList ls)
          where prsr = Parser.sliceBeginWith odd Fold.toList
      :}

      sliceBeginWithOdd [2, 4, 6, 3]
      *** Exception: sliceBeginWith : slice begins with an element which fails the predicate
      ...
      sliceBeginWithOdd [3, 5, 7, 4]
      [3]
      sliceBeginWithOdd [3, 4, 6, 8, 5]
      [3,4,6,8]
      sliceBeginWithOdd []
      []

  Pre-release.
- Like sliceSepBy but the separator elements can be escaped using an escape char determined by the second predicate. Unimplemented.
- "escapedFrameBy begin end escape" parses a string framed using begin and end as the frame begin and end marker elements, and escape as an escaping element to escape the occurrence of the framing elements within the frame. Nested frames are allowed, but nesting is removed when parsing. For example:

      > Stream.parse (Parser.escapedFrameBy (== '{') (== '}') (== '\\') Fold.toList) $ Stream.fromList "{hello}"
      "hello"
      > Stream.parse (Parser.escapedFrameBy (== '{') (== '}') (== '\\') Fold.toList) $ Stream.fromList "{hello {world}}"
      "hello world"
      > Stream.parse (Parser.escapedFrameBy (== '{') (== '}') (== '\\') Fold.toList) $ Stream.fromList "{hello \\{world\\}}"
      "hello {world}"
      > Stream.parse (Parser.escapedFrameBy (== '{') (== '}') (== '\\') Fold.toList) $ Stream.fromList "{hello {world}"
      ParseError "Unterminated '{'"

  Unimplemented.
- Like splitOn but strips leading, trailing, and repeated separators. Therefore, ".a..b." having '.' as the separator would be parsed as ["a","b"]. In other words, it is like parsing words from whitespace separated text. Stops when it finds a word separator after a non-word element. Fails never.

      S.wordsBy pred f = S.parseMany (PR.wordBy pred f)

- Given an input stream [a,b,c,...]
  and a comparison function cmp, the parser assigns the element a to the first group; then, if "a `cmp` b" is True, b is also assigned to the same group. If "a `cmp` c" is True then c is also assigned to the same group, and so on. When the comparison fails the parser is terminated. Each group is folded using the supplied fold and the result of the fold is the result of the parser. Stops when the comparison fails. Fails never.

      :{
      runGroupsBy eq =
          Stream.toList
              . Stream.parseMany (Parser.groupBy eq Fold.toList)
              . Stream.fromList
      :}

      runGroupsBy (<) []
      []
      runGroupsBy (<) [1]
      [[1]]
      runGroupsBy (<) [3, 5, 4, 1, 2, 0]
      [[3,5,4],[1,2],[0]]

  Pre-release.
- Unlike the above, this combinator performs a rolling comparison of two successive elements in the input stream. Assuming the input stream to the parser is [a,b,c,...] and the comparison function is cmp, the parser first assigns the element a to the first group; then, if "a `cmp` b" is True, b is also assigned to the same group. If "b `cmp` c" is True then c is also assigned to the same group, and so on. When the comparison fails the parser is terminated. Each group is folded using the supplied fold and the result of the fold is the result of the parser. Stops when the comparison fails. Fails never.

      :{
      runGroupsByRolling eq =
          Stream.toList
              . Stream.parseMany (Parser.groupByRolling eq Fold.toList)
              . Stream.fromList
      :}

      runGroupsByRolling (<) []
      []
      runGroupsByRolling (<) [1]
      [[1]]
      runGroupsByRolling (<) [3, 5, 4, 1, 2, 0]
      [[3,5],[4],[1,2],[0]]

  Pre-release.
- Match the given sequence of elements using the given comparison function.

      Stream.parse (Parser.eqBy (==) "string") $ Stream.fromList "string"
      Stream.parse (Parser.eqBy (==) "mismatch") $ Stream.fromList "match"
      *** Exception: ParseError "eqBy: failed, yet to match 7 elements"

  Pre-release.
- Sequential parser application. Apply two parsers sequentially to an input stream. The input is provided to the first parser; when it is done, the remaining input is provided to the second parser. If both the parsers succeed their outputs are combined using the supplied function. The operation fails if any of the parsers fail. Note: this is a parsing dual of appending streams; it splits the stream using two parsers and zips the results. This implementation is strict in the second argument; therefore, the following will fail:

      Stream.parse (Parser.serialWith const (Parser.satisfy (> 0)) undefined) $ Stream.fromList [1]
      *** Exception: Prelude.undefined
      ...

  Compare with the Applicative instance method <*>. This implementation allows stream fusion but has quadratic complexity; it can fuse with other operations and can be faster than the Applicative instance for a small number (less than 8) of compositions. Many combinators can be expressed using serialWith and other parser primitives. Some common idioms are described below (the fold-to-parser and takeWhile/groupBy names are reconstructed from the surrounding entries):

      span :: (a -> Bool) -> Fold m a b -> Fold m a b -> Parser m a b
      span pred f1 f2 = serialWith (,) (takeWhile pred f1) (fromFold f2)

      spanBy :: (a -> a -> Bool) -> Fold m a b -> Fold m a b -> Parser m a b
      spanBy eq f1 f2 = serialWith (,) (groupBy eq f1) (fromFold f2)

      spanByRolling :: (a -> a -> Bool) -> Fold m a b -> Fold m a b -> Parser m a b
      spanByRolling eq f1 f2 = serialWith (,) (groupByRolling eq f1) (fromFold f2)

  Pre-release.
- Sequential parser application ignoring the output of the first parser. Apply two parsers sequentially to an input stream. The input is provided to the first parser; when it is done, the remaining input is provided to the second parser. The output of the parser is the output of the second parser.
The operation fails if any of the parsers fail.This implementation is strict in the second argument, therefore, the following will fail:Stream.parse (Parser.split_ (Parser.satisfy (> 0)) undefined) $ Stream.fromList [1] *** Exception: Prelude.undefined... Compare with  instance method . This implementation allows stream fusion but has quadratic complexity. This can fuse with other operations, and can be faster than : instance for small number (less than 8) of compositions. Pre-releasestreamlyteeWith f p1 p2 distributes its input to both p1 and p2 until both of them succeed or anyone of them fails and combines their output using f3. The parser succeeds if both the parsers succeed. Pre-releasestreamlyLike  but ends parsing and zips the results, if available, whenever the first parser ends. Pre-releasestreamlyLike  but ends parsing and zips the results, if available, whenever any of the parsers ends or fails. UnimplementedstreamlySequential alternative. Apply the input to the first parser and return the result if the parser succeeds. If the first parser fails then backtrack and apply the same input to the second parser and return the result.Note: This implementation is not lazy in the second argument. The following will fail:Stream.parse (Parser.satisfy (> 0) `Parser.alt` undefined) $ Stream.fromList [1..10]1 Compare with  Alternative instance method <|>. This implementation allows stream fusion but has quadratic complexity. This can fuse with other operations and can be much faster than  Alternative: instance for small number (less than 8) of alternatives. Pre-releasestreamlyShortest alternative. Apply both parsers in parallel but choose the result from the one which consumed least input i.e. take the shortest succeeding parse. Pre-releasestreamlyLongest alternative. Apply both parsers in parallel but choose the result from the one which consumed more input i.e. take the longest succeeding parse. Pre-releasestreamly)Run a parser without consuming the input. Pre-releasestreamlyApply two parsers alternately to an input stream. The input stream is considered an interleaving of two patterns. The two parsers represent the two patterns.,This undoes a "gintercalate" of two streams. UnimplementedstreamlyconcatSequence f t9 collects sequential parses of parsers in the container t using the fold f6. Fails if the input ends or any of the parsers fail.This is same as  but more efficient. UnimplementedstreamlyMap a ' returning function on the result of a . Compare with  instance method . This implementation allows stream fusion but has quadratic complexity. This can fuse with other operations and can be much faster than : instance for small number (less than 8) of compositions. Pre-releasestreamlychoice parsers applies the parsers2 in order and returns the first successful parse.This is same as asum but more efficient. UnimplementedstreamlyLike  but uses a  instead of a  to collect the results. Parsing stops or fails if the collecting parser stops or fails. UnimplementedstreamlyCollect zero or more parses. Apply the supplied parser repeatedly on the input stream and push the parse results to a downstream fold.Stops: when the downstream fold stops or the parser fails. Fails: never, produces zero or more results. Compare with u. Pre-releasestreamlyCollect one or more parses. Apply the supplied parser repeatedly on the input stream and push the parse results to a downstream fold.Stops: when the downstream fold stops or the parser fails. Fails: if it stops without producing a single result. 
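A hedged reconstruction of the span idiom described for serialWith above: the combinator names elided in the text are assumed to be Parser.takeWhile and Parser.fromFold, and the MonadCatch constraint is assumed to cover serialWith's requirements.

import Control.Monad.Catch (MonadCatch)
import Streamly.Data.Fold (Fold)
import qualified Streamly.Data.Fold as Fold
import Streamly.Internal.Data.Parser (Parser)
import qualified Streamly.Internal.Data.Parser as Parser
import qualified Streamly.Internal.Data.Stream.IsStream as Stream

-- Split the input at the first element failing the predicate, folding the
-- two segments independently and pairing the results.
spanP :: MonadCatch m => (a -> Bool) -> Fold m a b -> Fold m a c -> Parser m a (b, c)
spanP p f1 f2 =
    Parser.serialWith (,) (Parser.takeWhile p f1) (Parser.fromFold f2)

main :: IO ()
main = do
    r <- Stream.parse (spanP (< 3) Fold.toList Fold.toList)
                      (Stream.fromList [1, 2, 3, 4, 5 :: Int])
    print r  -- ([1,2],[3,4,5])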
-some fld parser = manyP (takeGE 1 fld) parser Compare with v. Pre-releasestreamlycountBetween m n f p collects between m and n sequential parses of parser p using the fold f. Stop after collecting n> results. Fails if the input ends or the parser fails before m results are collected. Unimplementedstreamly count n f p collects exactly n sequential parses of parser p using the fold f6. Fails if the input ends or the parser fails before n results are collected. UnimplementedstreamlyLike  but uses a & to collect the results instead of a . Parsing stops or fails if the collecting parser stops or fails.3We can implemnent parsers like the following using : ;countBetweenTill m n f p = manyTillP (takeBetween m n f) p  UnimplementedstreamlymanyTill f collect test tries the parser test on the input, if test fails it backtracks and tries collect, after collect succeeds test2 is tried again and so on. The parser stops when test succeeds. The output of test is discarded and the output of collect; is accumulated by the supplied fold. The parser fails if collect fails.Stops when the fold f stops. Pre-releasestreamlymanyThen f collect recover repeats the parser collect on the input and collects the output in the supplied fold. If the the parser collect fails, parser recover? is run until it stops and then we start repeating the parser collect6 again. The parser fails if the recovery parser fails.For example, this can be used to find a key frame in a video stream after an error. UnimplementedstreamlyApply a collection of parsers to an input stream in a round robin fashion. Each parser is applied until it stops and then we repeat starting with the the first parser again. Unimplementedstreamly(Keep trying a parser up to a maximum of n failures. When the parser fails the input consumed till now is dropped and the new instance is tried on the fresh input. UnimplementedstreamlyLike  but aborts after n successive failures. UnimplementedstreamlyKeep trying a parser until it succeeds. When the parser fails the input consumed till now is dropped and the new instance is tried on the fresh input. Unimplemented==&!(c) 2020 Composewell Technologies BSD-3-Clausestreamly@composewell.com experimentalGHCNone #$&-035678>?eYstreamlyTee is a newtype wrapper over the  type providing distributing , , , ,  and  instances.streamlyBinary 7 operations distribute the input to both the argument &s and combine their outputs using the  instance of the output type.streamlyBinary 7 operations distribute the input to both the argument &s and combine their outputs using the  instance of the output type.streamlyBinary 6 operations distribute the input to both the argument 's and combine their outputs using the  instance of the output type.streamly, distributes the input to both the argument (s and combines their outputs using the  instance of the output type.streamly, distributes the input to both the argument (s and combines their outputs using the  instance of the output type.streamly, distributes the input to both the argument 9s and combines their outputs using function application.!(c) 2021 Composewell Technologies BSD-3-Clausestreamly@composewell.comreleasedGHCNone #$&-035678>?f'!(c) 2019 Composewell TechnologiesBSD3streamly@composewell.com experimentalGHCNone #$&-035678>?tstreamlyAn  Unfold m a b. is a generator of a stream of values of type b from a seed of type a in  m.streamly Unfold step injectstreamlyMake an unfold from step and inject functions. Pre-releasestreamly$Make an unfold from a step function. 
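The manyTill description above ("manyTill f collect test") implies a fold-first argument order. A small sketch under that assumption, parsing characters up to (and consuming) a terminating newline:

import Control.Monad.Catch (MonadCatch)
import qualified Streamly.Data.Fold as Fold
import qualified Streamly.Internal.Data.Parser as Parser
import qualified Streamly.Internal.Data.Stream.IsStream as Stream

-- Collect with `satisfy (/= '\n')` until the `satisfy (== '\n')` test
-- succeeds; the test's output is discarded as described above.
line :: MonadCatch m => Parser.Parser m Char String
line = Parser.manyTill Fold.toList
                       (Parser.satisfy (/= '\n'))   -- collect
                       (Parser.satisfy (== '\n'))   -- test

main :: IO ()
main = do
    s <- Stream.parse line (Stream.fromList "hello\nworld\n")
    putStrLn s  -- hello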
See also:  Pre-releasestreamlyBuild a stream by unfolding a monadic step function starting from a seed. The step function returns the next element in the stream and the next seed value. When it is done it returns  and the stream ends. Since: 0.8.0streamlyLike  but uses a pure step function.:{ f [] = Nothing f (x:xs) = Just (x, xs):}2Unfold.fold Fold.toList (Unfold.unfoldr f) [1,2,3][1,2,3] Since: 0.8.0streamly,Map a function on the input argument of the .+u = Unfold.lmap (fmap (+1)) Unfold.fromList Unfold.fold Fold.toList u [1..5] [2,3,4,5,6] )lmap f = Unfold.many (Unfold.function f)  Since: 0.8.0streamly5Map a function on the output of the unfold (the type b). Pre-releasestreamlyThe unfold discards its input and generates a function stream using the supplied monadic action. Pre-releasestreamly=Discards the unfold input and always returns the argument of . fromPure = fromEffect . pure Pre-releasestreamly+Outer product discarding the first element. Unimplementedstreamly,Outer product discarding the second element. UnimplementedstreamlyCreate a cross product (vector product or cartesian product) of the output streams of two unfolds using a monadic combining function. Pre-releasestreamlyLike $ but uses a pure combining function. 1crossWith f = crossWithM (\b c -> return $ f b c)$u1 = Unfold.lmap fst Unfold.fromList$u2 = Unfold.lmap snd Unfold.fromListu = Unfold.crossWith (,) u1 u2,Unfold.fold Fold.toList u ([1,2,3], [4,5,6])7[(1,4),(1,5),(1,6),(2,4),(2,5),(2,6),(3,4),(3,5),(3,6)] Since: 0.8.0streamlySee . cross = crossWith (,)/To cross the streams from a tuple we can write: crossProduct :: Monad m => Unfold m a b -> Unfold m c d -> Unfold m (a, c) (b, d) crossProduct u1 u2 = cross (lmap fst u1) (lmap snd u2)  Pre-releasestreamlyMap an unfold generating action to each element of an unfold and flatten the results into a single stream.streamlyLift a monadic function into an unfold. The unfold generates a singleton stream. Since: 0.8.0streamlyLift a pure function into an unfold. The unfold generates a singleton stream. #function f = functionM $ return . f Since: 0.8.0streamlyIdentity unfold. The unfold generates a singleton stream having the input as the only element. identity = function Prelude.id Pre-releasestreamlyApply the second unfold to each output element of the first unfold and flatten the output in a single stream. Since: 0.8.0streamlyDistribute the input to two unfolds and then zip the outputs to a single stream using a monadic zip function.*Stops as soon as any of the unfolds stops. 
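Putting the lmap and crossWith doctests above together into one runnable sketch (Unfold.fold, Unfold.lmap, Unfold.crossWith and Unfold.fromList are used exactly as in those doctests; the internal Unfold module is an assumed import):

import qualified Streamly.Data.Fold as Fold
import qualified Streamly.Internal.Data.Unfold as Unfold

main :: IO ()
main = do
    -- lmap maps over the *input* (seed) of an unfold.
    let u1 = Unfold.lmap fst Unfold.fromList
        u2 = Unfold.lmap snd Unfold.fromList
        u  = Unfold.crossWith (,) u1 u2
    -- Cross product of the two unfolded streams, driven from a tuple seed.
    xs <- Unfold.fold Fold.toList u ([1, 2, 3 :: Int], [4, 5, 6 :: Int])
    print xs
    -- [(1,4),(1,5),(1,6),(2,4),(2,5),(2,6),(3,4),(3,5),(3,6)]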
Pre-releasestreamlyLike  but with a pure zip function.+square = fmap (\x -> x * x) Unfold.fromList-cube = fmap (\x -> x * x * x) Unfold.fromList"u = Unfold.zipWith (,) square cube Unfold.fold Fold.toList u [1..5]%[(1,1),(4,8),(9,27),(16,64),(25,125)] -zipWith f = zipWithM (\a b -> return $ f a b) Since: 0.8.0streamly6Maps a function on the output of the unfold (the type b).1((c) 2018 Composewell Technologies (c) Roman Leshchinskiy 2008-2010 BSD-3-Clausestreamly@composewell.com experimentalGHCNone #$&-035678>?|streamlyA stream consists of a step function that generates the next step given a current state, and the current state.streamly An empty  with a side effect.streamly?Does not fuse, has the same performance as the StreamK version.streamly Convert an  into a  by supplying it a seed.streamlyCreate a singleton  from a pure value.streamlyCreate a singleton  from a monadic action.streamly#Convert a list of pure values to a streamlyConvert a CPS encoded StreamK to direct style step encoded StreamDstreamlyConvert a direct style step encoded StreamD to a CPS encoded StreamKstreamly%Compare two streams lexicographicallystreamlyMap a monadic function over a streamlyunfoldMany unfold stream uses unfold to map the input stream elements to streams and then flattens the generated streams into a single output stream.streamly1Like foldMany but with the following differences:If the stream is empty the default value of the fold would still be emitted in the output.At the end of the stream if the last application of the fold did not receive any input it would still yield the default fold accumulator as the last value.streamlyApply a fold multiple times until the stream ends. If the stream is empty the output would be empty. #foldMany f = parseMany (fromFold f)A terminating fold may terminate even without accepting a single input. So we run the fold's initial action before evaluating the stream. However, this means that if later the stream does not yield anything we have to discard the fold's initial result which could have generated an effect.:adbc;adbc)!(c) 2018 Composewell Technologies BSD-3-Clausestreamly@composewell.com experimentalGHCNone #$&-035678>?~z*2(c) 2020 Composewell Technologies and Contributors BSD-3-Clausestreamly@composewell.com experimentalGHCNone #$&-035678>?f streamlyLike  but with following differences: alloc action m c# runs with async exceptions enabledcleanup action c -> m d won't run if the stream is garbage collected after partial evaluation.does not require a  constraint.Inhibits stream fusion Pre-releasestreamlyRun the alloc action m c with async exceptions disabled but keeping blocking operations interruptible (see ). Use the output c as input to c -> Stream m b to generate an output stream. When generating the stream use the supplied try operation  forall s. m s -> m (Either e s) to catch synchronous exceptions. If an exception occurs run the exception handler &c -> e -> Stream m b -> m (Stream m b).The cleanup action c -> m d, runs whenever the stream ends normally, due to a sync or async exception or if it gets garbage collected after a partial lazy evaluation. 
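A small sketch of the foldMany behaviour described above (apply a terminating fold repeatedly until the stream ends), assuming Stream.foldMany and Fold.take are available as in streamly 0.8:

import qualified Streamly.Data.Fold as Fold
import qualified Streamly.Prelude as Stream

main :: IO ()
main = do
    -- Sum the input in chunks of three; the last, partial chunk is folded
    -- on its own.
    xs <- Stream.toList
        $ Stream.foldMany (Fold.take 3 Fold.sum)
        $ Stream.fromList [1 .. 10 :: Int]
    print xs  -- [6,15,24,10]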
See ) for the semantics of the cleanup action.6 can express all other exception handling combinators.Inhibits stream fusion Pre-releasestreamlySee e.streamlySee e.streamlySee e.streamlySee e.streamlySee e.streamlySee e.streamlySee e.streamlySee e.8finally action xs = after action $ onException action xsstreamlySee e.streamlySee e.streamlybeforestreamlytry (exception handling)streamlyafter, on normal stopstreamly on exceptionstreamlystream generatorstreamlybeforestreamlytry (exception handling)streamlyafter, on normal stop or GCstreamly on exceptionstreamlystream generator  !(c) 2019 Composewell TechnologiesBSD3streamly@composewell.com experimentalGHCNone! #$&-035678>?7streamlyAn  holds a single  value.streamly Create a new . Pre-releasestreamlyWrite a value to an . Pre-releasestreamlyRead a value from an . Pre-releasestreamlyModify the value of an * using a function with strict application. Pre-releasestreamly4Generate a stream by continuously reading the IORef. Pre-releasestreamly Construct a stream by reading a   repeatedly. Pre-release+!(c) 2021 Composewell Technologies BSD-3-Clausestreamly@composewell.com pre-releaseGHCNone #$&-035678>?streamly asyncClock g starts a clock thread that updates an IORef with current time as a 64-bit value in microseconds, every g seconds. The IORef can be read asynchronously. The thread exits automatically when the reference to the returned  is lost.Minimum granularity of clock update is 1 ms. Higher is better for performance.CAUTION! This is safe only on a 64-bit machine. On a 32-bit machine a 64-bit Var cannot be read consistently without a lock while another thread is writing to it.  ,!(c) 2019 Composewell TechnologiesBSD3streamly@composewell.com experimentalGHCNone #$&-035678>?+streamly+Map an action on the input argument of the . +lmapM f = Unfold.many (Unfold.functionM f)  Since: 0.8.0streamlySupply the seed to an unfold closing the input end of the unfold. )supply a = Unfold.lmap (Prelude.const a)  Pre-releasestreamlySupply the first component of the tuple to an unfold that accepts a tuple as a seed resulting in a fold that accepts the second component of the tuple as a seed. "supplyFirst a = Unfold.lmap (a, )  Pre-releasestreamlySupply the second component of the tuple to an unfold that accepts a tuple as a seed resulting in a fold that accepts the first component of the tuple as a seed. #supplySecond b = Unfold.lmap (, b)  Pre-releasestreamly Convert an  into an unfold accepting a tuple as an argument, using the argument of the original fold as the second element of tuple and discarding the first element of the tuple. discardFirst = Unfold.lmap snd  Pre-releasestreamly Convert an  into an unfold accepting a tuple as an argument, using the argument of the original fold as the first element of tuple and discarding the second element of the tuple.  discardSecond = Unfold.lmap fst  Pre-releasestreamly Convert an  that accepts a tuple as an argument into an unfold that accepts a tuple with elements swapped. swap = Unfold.lmap Tuple.swap  Pre-releasestreamly Compose an  and a  . Given an  Unfold m a b and a  Fold m b c, returns a monadic action a -> m c representing the application of the fold on the unfolded stream.-Unfold.fold Fold.sum Unfold.fromList [1..100]5050 Pre-releasestreamlyApply a monadic function to each element of the stream and replace it with the output of the resulting action. Since: 0.8.0streamlyConvert a stream into an &. Note that a stream converted to an  may not be as efficient as an  in some situations. 
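A minimal sketch of the bracket semantics described above, assuming Stream.bracket as exported by Streamly.Prelude: the alloc action runs first, the stream is generated from its result, and the cleanup action runs when the stream ends (normally, on exception, or after GC of a partially evaluated stream).

import qualified Streamly.Prelude as Stream

main :: IO ()
main =
    Stream.drain
        $ Stream.bracket
            (putStrLn "acquire" >> return (1 :: Int))   -- alloc action
            (\_ -> putStrLn "release")                   -- cleanup action
            (\n -> Stream.mapM print (Stream.fromList [n .. n + 2]))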
Since: 0.8.0streamlyLift a monadic function into an unfold generating a nil stream with a side effect.streamly:Prepend a monadic single element generator function to an =. The same seed is used in the action as well as the unfold. Pre-releasestreamly#Convert a list of pure values to a  Since: 0.8.0streamly&Convert a list of monadic values to a  Since: 0.8.0streamly(Generates a stream replicating the seed n times. Since: 0.8.0streamly0Generates an infinite stream repeating the seed. Since: 0.8.0streamlyGenerates an infinite stream starting with the given seed and applying the given function repeatedly. Since: 0.8.0streamlyfromIndicesM gen. generates an infinite stream of values using gen starting from the seed. 8fromIndicesM f = Unfold.mapM f $ Unfold.enumerateFrom 0  Pre-releasestreamly!u = Unfold.take 2 Unfold.fromList"Unfold.fold Fold.toList u [1..100][1,2] Since: 0.8.0streamlySame as  but with a monadic predicate. Since: 0.8.0streamly End the stream generated by the / as soon as the predicate fails on an element. Since: 0.8.0streamlySame as  but with a monadic predicate. Since: 0.8.0streamly2Include only those elements that pass a predicate. Since: 0.8.0streamly drop n unf drops n' elements from the stream generated by unf. Since: 0.8.0streamlydropWhileM f unf- drops elements from the stream generated by unf9 while the condition holds true. The condition function f is monadic in nature. Since: 0.8.0streamly Similar to $ but with a pure condition function. Since: 0.8.0streamlyGenerate an infinite stream starting from a starting value with increments of the given stride. The implementation is numerically stable for floating point values.Note  is faster for integrals. Pre-releasestreamly  numFrom = enumerateFromStepNum 1 Pre-releasestreamlyCan be used to enumerate unbounded integrals. This does not check for overflow or underflow for bounded integrals.streamlyInternal enumerateFromToFractional to = takeWhile (<= to + 1 / 2) $ enumerateFromStepNum 1streamlyInternalstreamlyInternalstreamlyLike  but with following differences: alloc action a -> m c# runs with async exceptions enabledcleanup action c -> m d won't run if the stream is garbage collected after partial evaluation.does not require a  constraint.Inhibits stream fusion Pre-releasestreamlyRun the alloc action a -> m c with async exceptions disabled but keeping blocking operations interruptible (see ). Use the output c as input to  Unfold m c b to generate an output stream. When unfolding use the supplied try operation forall s. m s -> m (Either e s) to catch synchronous exceptions. If an exception occurs run the exception handling unfold Unfold m (c, e) b.The cleanup action c -> m d, runs whenever the stream ends normally, due to a sync or async exception or if it gets garbage collected after a partial lazy evaluation. See ) for the semantics of the cleanup action.6 can express all other exception handling combinators.Inhibits stream fusion Pre-releasestreamlyRun a side effect a -> m c on the input a before unfolding it using  Unfold m a b. (before f = lmapM (\a -> f a >> return a) Pre-releasestreamlyLike  with following differences:action a -> m c won't run if the stream is garbage collected after partial evaluation.Monad m( does not require any other constraints. Pre-releasestreamlyUnfold the input a using  Unfold m a b, run an action on a whenever the unfold stops normally, or if it is garbage collected after a partial lazy evaluation.The semantics of the action a -> m c1 are similar to the cleanup action semantics in . 
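A sketch combining the unfold transformations described above (take, filter); the argument order of Unfold.filter is assumed to mirror Unfold.take, which is grounded in the take doctest above:

import qualified Streamly.Data.Fold as Fold
import qualified Streamly.Internal.Data.Unfold as Unfold

main :: IO ()
main = do
    -- Keep only even elements of the unfolded stream, then stop after three.
    let u = Unfold.take 3 (Unfold.filter even Unfold.fromList)
    xs <- Unfold.fold Fold.toList u [1 .. 100 :: Int]
    print xs  -- [2,4,6]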
See also  Pre-releasestreamlyUnfold the input a using  Unfold m a b, run the action a -> m c on a* if the unfold aborts due to an exception.Inhibits stream fusion Pre-releasestreamlyLike  with following differences:action a -> m c won't run if the stream is garbage collected after partial evaluation.does not require a  constraint.Inhibits stream fusion Pre-releasestreamlyUnfold the input a using  Unfold m a b, run an action on a whenever the unfold stops normally, aborts due to an exception or if it is garbage collected after a partial lazy evaluation.The semantics of the action a -> m c1 are similar to the cleanup action semantics in . )finally release = bracket return release  See also Inhibits stream fusion Pre-releasestreamlyLike  but with following differences: alloc action a -> m c# runs with async exceptions enabledcleanup action c -> m d won't run if the stream is garbage collected after partial evaluation.does not require a  constraint.Inhibits stream fusion Pre-releasestreamlyRun the alloc action a -> m c with async exceptions disabled but keeping blocking operations interruptible (see ). Use the output c as input to  Unfold m c b to generate an output stream.c0 is usually a resource under the state of monad m, e.g. a file handle, that requires a cleanup after use. The cleanup action c -> m d, runs whenever the stream ends normally, due to a sync or async exception or if it gets garbage collected after a partial lazy evaluation. only guarantees that the cleanup action runs, and it runs with async exceptions enabled. The action must ensure that it can successfully cleanup the resource in the face of sync or async exceptions.When the stream ends normally or on a sync exception, cleanup action runs immediately in the current thread context, whereas in other cases it runs in the GC context, therefore, cleanup may be delayed until the GC gets to run. See also: , Inhibits stream fusion Pre-releasestreamlyWhen unfolding  Unfold m a b if an exception e occurs, unfold e using  Unfold m e b.Inhibits stream fusion Pre-releasestreamlybeforestreamlytry (exception handling)streamlyafter, on normal stopstreamly on exceptionstreamly unfold to runstreamlybeforestreamlytry (exception handling)streamlyafter, on normal stop, or GCstreamly on exceptionstreamly unfold to runadbcadbc!(c) 2019 Composewell TechnologiesBSD3streamly@composewell.comreleasedGHCNone #$&-035678>?-(c) 2020 Composewell Technologies and Contributors (c) Roman Leshchinskiy 2008-2010 BSD-3-Clausestreamly@composewell.com experimentalGHCNone #$&-035678>?streamly An empty .streamly#Can fuse but has O(n^2) complexity.streamlyFor floating point numbers if the increment is less than the precision then it just gets lost. Therefore we cannot always increment it correctly by just repeated addition. 9007199254740992 + 1 + 1 :: Double => 9.007199254740992e15 9007199254740992 + 2 :: Double => 9.007199254740994e15Instead we accumulate the increment counter and compute the increment every time before adding it to the starting number.This works for Integrals as well as floating point numbers, but enumerateFromStepIntegral is faster for integrals.streamlyCan be used to enumerate unbounded integrals. This does not check for overflow or underflow for bounded integrals.streamlyEnumerate upwards from from to to. We are assuming that "to" is constrained by the type to be within max/min bounds.streamlyWe cannot write a general function for Num. 
The only way to write code portable between the two is to use a  constraint and convert between Fractional and Integral using fromRational which is horribly slow.streamly'Convert a list of monadic actions to a $$.!(c) 2017 Composewell TechnologiesBSD3streamly@composewell.com experimentalGHCNone #$&-035678>? streamlyGenerate a stream from an SVar. An unevaluated stream can be pushed to an SVar using . As we pull a stream from the SVar the input stream gets evaluated concurrently. The evaluation depends on the SVar style and the configuration parameters e.g. using the maxBuffer/maxThreads combinators.streamlyLike 5 but generates a StreamD style stream instead of CPS.streamlyWrite a stream to an  in a non-blocking manner. The stream is evaluated concurrently as it is read back from the SVar using .streamlyFold the supplied stream to the SVar asynchronously using Parallel concurrency style. {- INLINE [1] toSVarParallel -}streamlyCreate a Fold style SVar that runs a supplied fold function as the consumer. Any elements sent to the SVar are consumed by the supplied fold function.streamlyLike  except that it uses a  instead of a fold function.streamlyPoll for events sent by the fold consumer to the stream pusher. The fold consumer can send a Stop event or an exception. When a Stop$ is received this function returns <. If an exception is recieved then it throws the exception.streamlyPush values from a stream to a fold worker via an SVar. Before pushing a value to the SVar it polls for events received from the fold consumer. If a stop event is received then it returns  otherwise false. Propagates exceptions received from the fold consumer.streamlyTap a stream and send the elements to the specified SVar in addition to yielding them again. The SVar runs a fold consumer. Elements are tapped and sent to the SVar until the fold finishes. Any exceptions from the fold evaluation are propagated in the current thread.  ------input stream---------output stream-----> /|\ | exceptions | | input | \|/ ----SVar | Fold  /!(c) 2017 Composewell TechnologiesBSD3streamly@composewell.com experimentalGHCNone! #$&-035678>?Б streamly4A parallely composing IO stream of elements of type a. See  documentation for more details.Since: 0.2.0 (Streamly)streamlyFor  streams: (<>) = g (>>=) = flip . g g See g,  is similar except that all iterations are strictly concurrent while in AsyncT? it depends on the consumer demand and available threads. See  for more details.Since: 0.1.0 (Streamly)5Since: 0.7.0 (maxBuffer applies to ParallelT streams)streamlyLike g except that the execution is much more strict. There is no limit on the number of threads. While g may not schedule a stream if there is no demand from the consumer,  always evaluates both the streams immediately. The only limit that applies to  is g:. Evaluation may block if the output buffer becomes full."import Streamly.Prelude (parallel)stream = Stream.fromEffect (delay 2) `parallel` Stream.fromEffect (delay 1) Stream.toList stream -- IO [Int]1 sec2 sec[1,2] guarantees that all the streams are scheduled for execution immediately, therefore, we could use things like starting timers inside the streams and relying on the fact that all timers were started at the same time.Unlike async this operation cannot be used to fold an infinite lazy container of streams, because it schedules all the streams strictly concurrently.Since: 0.2.0 (Streamly)streamlyLike 8 but stops the output as soon as the first stream stops. Pre-releasestreamlyLike ? 
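The parallel doctests above rely on a delay helper that is not shown here; a hypothetical reconstruction of it, together with the doctest, makes the example self-contained (the helper's exact definition is an assumption):

import Control.Concurrent (threadDelay)
import qualified Streamly.Prelude as Stream
import Streamly.Prelude (parallel)

-- Assumed helper: sleep n seconds, print a marker, return n.
delay :: Int -> IO Int
delay n = do
    threadDelay (n * 1000000)
    putStrLn (show n ++ " sec")
    return n

main :: IO ()
main = do
    -- Both effects are scheduled immediately; outputs arrive shortest first.
    xs <- Stream.toList
        $ Stream.fromEffect (delay 2) `parallel` Stream.fromEffect (delay 1)
    print xs  -- prints "1 sec", "2 sec", then [1,2]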
but stops the output as soon as any of the two streams stops. Pre-releasestreamlyLike  but uses StreamK internally. Pre-releasestreamlySame as  but for StreamD stream.streamlyMake the stream producer and consumer run concurrently by introducing a buffer between them. The producer thread evaluates the input stream until the buffer fills, it blocks if the buffer is full until there is space in the buffer. The consumer consumes the stream lazily from the buffer. 6mkParallel = D.fromStreamD . mkParallelD . D.toStreamD Pre-releasestreamlyRedirect a copy of the stream to a supplied fold and run it concurrently in an independent thread. The fold may buffer some elements. The buffer size is determined by the prevailing g setting.  Stream m a -> m b | -----stream m a ---------------stream m a-----  > S.drain $ S.tapAsync (S.mapM_ print) (S.enumerateFromTo 1 2) 1 2 Exceptions from the concurrently running fold are propagated to the current computation. Note that, because of buffering in the fold, exceptions may be delayed and may not correspond to the current element being processed in the parent stream, but we guarantee that before the parent stream stops the tap finishes and all exceptions from it are drained. Compare with tap. Pre-releasestreamlyLike  but uses a  instead of a fold function.streamlyConcurrently distribute a stream to a collection of fold functions, discarding the outputs of the folds. > Stream.drain $ Stream.distributeAsync_ [Stream.mapM_ print, Stream.mapM_ print] (Stream.enumerateFromTo 1 2) 1 2 1 2  )distributeAsync_ = flip (foldr tapAsync)  Pre-releasestreamly(Fix the type of a polymorphic stream as .Since: 0.1.0 (Streamly)streamlyGenerates a callback and a stream pair. The callback returned is used to queue values to the stream. The stream is infinite, there is no way for the callback to indicate that it is done now. Pre-release  60!(c) 2017 Composewell TechnologiesBSD3streamly@composewell.com experimentalGHCNone! #$&-035678>?, streamlyA round robin parallely composing IO stream of elements of type a. See  documentation for more details.Since: 0.2.0 (Streamly)streamlyFor  streams: (<>) = g (>>=) = flip . g g  A single  bind behaves like a for loop with iterations of the loop executed concurrently a la the  combinator, producing results and side effects of iterations out of order::{&Stream.toList $ Stream.fromWAsync $ do6 x <- Stream.fromList [2,1] -- foreach x in stream Stream.fromEffect $ delay x:}1 sec2 sec[1,2]&Nested monad binds behave like nested for? loops with nested iterations executed concurrently, a la the  combinator::{ &Stream.toList $ Stream.fromWAsync $ do5 x <- Stream.fromList [1,2] -- foreach x in stream5 y <- Stream.fromList [2,4] -- foreach y in stream% Stream.fromEffect $ delay (x + y):}3 sec4 sec5 sec6 sec [3,4,5,6]The behavior can be explained as follows. All the iterations corresponding to the element 1$ in the first stream constitute one 8 output stream and all the iterations corresponding to 2 constitute another > output stream and these two output streams are merged using .The W in the name stands for wide or breadth wise scheduling in contrast to the depth wise scheduling behavior of .Since: 0.2.0 (Streamly)streamlyA demand driven left biased parallely composing IO stream of elements of type a. See  documentation for more details.Since: 0.2.0 (Streamly)streamlyFor  streams: (<>) = g (>>=) = flip . 
g g  A single  bind behaves like a for loop with iterations of the loop executed concurrently a la the  combinator, producing results and side effects of iterations out of order::{%Stream.toList $ Stream.fromAsync $ do6 x <- Stream.fromList [2,1] -- foreach x in stream Stream.fromEffect $ delay x:}1 sec2 sec[1,2]&Nested monad binds behave like nested for? loops with nested iterations executed concurrently, a la the  combinator::{ %Stream.toList $ Stream.fromAsync $ do5 x <- Stream.fromList [1,2] -- foreach x in stream5 y <- Stream.fromList [2,4] -- foreach y in stream% Stream.fromEffect $ delay (x + y):}3 sec4 sec5 sec6 sec [3,4,5,6]The behavior can be explained as follows. All the iterations corresponding to the element 1 in the first stream constitute one output stream and all the iterations corresponding to 2 constitute another output stream and these two output streams are merged using .Since: 0.1.0 (Streamly)streamlyGenerate a stream asynchronously to keep it buffered, lazily consume from the buffer. Pre-releasestreamlyMake the stream producer and consumer run concurrently by introducing a buffer between them. The producer thread evaluates the input stream until the buffer fills, it terminates if the buffer is full and a worker thread is kicked off again to evaluate the remaining stream when there is space in the buffer. The consumer consumes the stream lazily from the buffer.Since: 0.2.0 (Streamly)streamlyMerges two streams, both the streams may be evaluated concurrently, outputs from both are used as they arrive:import Streamly.Prelude (async)%stream1 = Stream.fromEffect (delay 4)%stream2 = Stream.fromEffect (delay 2)'Stream.toList $ stream1 `async` stream22 sec4 sec[2,4]Multiple streams can be combined. With enough threads, all of them can be scheduled simultaneously:%stream3 = Stream.fromEffect (delay 1)7Stream.toList $ stream1 `async` stream2 `async` stream3...[1,2,4]With 2 threads, only two can be scheduled at a time, when one of those finishes, the third one gets scheduled:Stream.toList $ Stream.maxThreads 2 $ stream1 `async` stream2 `async` stream3...[2,1,4](With a single thread, it becomes serial:Stream.toList $ Stream.maxThreads 1 $ stream1 `async` stream2 `async` stream3...[4,2,1]Only streams are scheduled for async evaluation, how actions within a stream are evaluated depends on the stream type. If it is a concurrent stream they will be evaluated concurrently.In the following example, both the streams are scheduled for concurrent evaluation but each individual stream is evaluated serially:stream1 = Stream.fromListM $ Prelude.map delay [3,3] -- SerialT IO Intstream2 = Stream.fromListM $ Prelude.map delay [1,1] -- SerialT IO Int3Stream.toList $ stream1 `async` stream2 -- IO [Int]... [1,1,3,3]If total threads are 2, the third stream is scheduled only after one of the first two has finished:stream3 = Stream.fromListM $ Prelude.map delay [2,2] -- SerialT IO IntStream.toList $ Stream.maxThreads 2 $ stream1 `async` stream2 `async` stream3 -- IO [Int]... [1,1,3,2,3,2]Thus  goes deep in first few streams rather than going wide in all streams. It prefers to evaluate the leftmost streams as much as possible. Because of this behavior,  can be safely used to fold an infinite lazy container of streams.Since: 0.2.0 (Streamly)streamlySame as .streamly(Fix the type of a polymorphic stream as .Since: 0.1.0 (Streamly)streamlyFor singleton streams,  is the same as . See  for singleton stream behavior. For multi-element streams, while  is left biased i.e. 
it tries to evaluate the left side stream as much as possible, 5 tries to schedule them both fairly. In other words,  goes deep while = goes wide. However, outputs are always used as they arrive.With a single thread,  starts behaving like serial while  starts behaving like wSerial. import Streamly.Prelude (wAsync)!stream1 = Stream.fromList [1,2,3]!stream2 = Stream.fromList [4,5,6]Stream.toList $ Stream.fromAsync $ Stream.maxThreads 1 $ stream1 `async` stream2 [1,2,3,4,5,6]Stream.toList $ Stream.fromWAsync $ Stream.maxThreads 1 $ stream1 `wAsync` stream2 [1,4,2,5,3,6]8With two threads available, and combining three streams:!stream3 = Stream.fromList [7,8,9]Stream.toList $ Stream.fromAsync $ Stream.maxThreads 2 $ stream1 `async` stream2 `async` stream3[1,2,3,4,5,6,7,8,9]Stream.toList $ Stream.fromWAsync $ Stream.maxThreads 2 $ stream1 `wAsync` stream2 `wAsync` stream3[1,4,2,7,5,3,8,6,9]This operation cannot be used to fold an infinite lazy container of streams, because it schedules all the streams in a round robin manner. Note that WSerialT and single threaded  both interleave streams but the exact scheduling is slightly different in both cases.Since: 0.2.0 (Streamly)streamly(Fix the type of a polymorphic stream as .Since: 0.2.0 (Streamly)  661!(c) 2017 Composewell TechnologiesBSD3streamly@composewell.com experimentalGHCNone! #$&-035678>?streamly'A serial IO stream of elements of type a" with concurrent lookahead. See  documentation for more details.Since: 0.3.0 (Streamly)streamlyFor  streams: (<>) = g (>>=) = flip . g g  A single  bind behaves like a for loop with iterations executed concurrently, ahead of time, producing side effects of iterations out of order, but results in order::{%Stream.toList $ Stream.fromAhead $ do6 x <- Stream.fromList [2,1] -- foreach x in stream Stream.fromEffect $ delay x:}1 sec2 sec[2,1]&Nested monad binds behave like nested for loops with nested iterations executed concurrently, ahead of time::{ %Stream.toList $ Stream.fromAhead $ do5 x <- Stream.fromList [1,2] -- foreach x in stream5 y <- Stream.fromList [2,4] -- foreach y in stream% Stream.fromEffect $ delay (x + y):}3 sec4 sec5 sec6 sec [3,5,4,6]The behavior can be explained as follows. All the iterations corresponding to the element 1 in the first stream constitute one output stream and all the iterations corresponding to 2 constitute another output stream and these two output streams are merged using .Since: 0.3.0 (Streamly)streamlyAppends two streams, both the streams may be evaluated concurrently but the outputs are used in the same order as the corresponding actions in the original streams, side effects will happen in the order in which the streams are evaluated:(import Streamly.Prelude (ahead, SerialT)7stream1 = Stream.fromEffect (delay 4) :: SerialT IO Int7stream2 = Stream.fromEffect (delay 2) :: SerialT IO Int3Stream.toList $ stream1 `ahead` stream2 :: IO [Int]2 sec4 sec[4,2]Multiple streams can be combined. With enough threads, all of them can be scheduled simultaneously:%stream3 = Stream.fromEffect (delay 1)7Stream.toList $ stream1 `ahead` stream2 `ahead` stream31 sec2 sec4 sec[4,2,1]With 2 threads, only two can be scheduled at a time, when one of those finishes, the third one gets scheduled:Stream.toList $ Stream.maxThreads 2 $ stream1 `ahead` stream2 `ahead` stream32 sec1 sec4 sec[4,2,1]Only streams are scheduled for ahead evaluation, how actions within a stream are evaluated depends on the stream type. If it is a concurrent stream they will be evaluated concurrently. 
It may not make much sense combining serial streams using . can be safely used to fold an infinite lazy container of streams.Since: 0.3.0 (Streamly)streamly(Fix the type of a polymorphic stream as .Since: 0.3.0 (Streamly)62!(c) 2021 Composewell Technologies BSD-3-Clausestreamly@composewell.com experimentalGHCNone #$&-035678>?streamly!Simplify a producer to an unfold. Pre-releasestreamly)Convert a StreamD stream into a producer. Pre-release fghijklmnq ijlkmnfghq3!(c) 2019 Composewell TechnologiesBSD3streamly@composewell.com experimentalGHCNone #$&-035678>?4!(c) 2019 Composewell TechnologiesBSD3streamly@composewell.com experimentalGHCNone #$&-035678>?streamlywrite FD buffer offset length tries to write data on the given filesystem fd (cannot be a socket) up to sepcified length starting from the given offset in the buffer. The write will not block the OS thread, it may suspend the Haskell thread until write can proceed. Returns the actual amount of data written.streamlyKeep writing in a loop until all data in the buffer has been written.streamlywrite FD iovec count tries to write data on the given filesystem fd (cannot be a socket) from an iovec with specified number of entries. The write will not block the OS thread, it may suspend the Haskell thread until write can proceed. Returns the actual amount of data written.streamlyKeep writing an iovec in a loop until all the iovec entries are written.5!(c) 2019 Composewell Technologies BSD-3-Clausestreamly@composewell.com experimentalGHCNone #$&-035678>?D6!(c) 2020 Composewell Technologies BSD3-3-Clausestreamly@composewell.com experimentalGHCNone! #$&-035678>?+streamly first addressstreamlyfirst unused addressstreamly%first address beyond allocated memorystreamlyDefault maximum buffer size in bytes, for reading from and writing to IO devices, the value is 32KB minus GHC allocation overhead, which is a few bytes, so that the actual allocation is 32KB.streamly$Remove the free space from an Array.streamly;allocate a new array using the provided allocator function.streamlyAllocate a new array aligned to the specified alignmend and using unmanaged pinned memory. The memory will not be automatically freed by GHC. This could be useful in allocate once global data structures. Use carefully as incorrect use can lead to memory leak.streamly Allocate an array that can hold count3 items. The memory of the array is uninitialized.Note that this is internal routine, the reference to this array cannot be given out until the array has been written to and frozen.streamlyAllocate an Array of the given size and run an IO action passing the array start pointer.streamlyReturn element at the specified index without checking the bounds.9Unsafe because it does not check the bounds of the array.streamlyReturn element at the specified index without checking the bounds.streamlyO(1)" Get the byte length of the array.streamlyO(1) Get the length of the array i.e. the number of elements in the array.streamlyGet the total capacity of an array. An array may have space reserved beyond the current used length of the array. Pre-releasestreamlyarraysOf n stream< groups the input stream into a stream of arrays of size n. .arraysOf n = StreamD.foldMany (Array.writeN n) Pre-releasestreamly(Buffer the stream into arrays in memory.streamlyResumable unfold of an array.streamlyUnfold an array into a stream.streamly/Unfold an array into a stream in reverse order. Pre-releasestreamlyUse the "read" unfold instead. 
flattenArrays = unfoldMany read=We can try this if there are any fusion issues in the unfold.streamly!Use the "readRev" unfold instead. "flattenArrays = unfoldMany readRev=We can try this if there are any fusion issues in the unfold.streamly Convert an  into a list.streamlyUse the  unfold instead. toStreamD = D.unfold read9We can try this if the unfold has any performance issues.streamlyUse the  unfold instead. toStreamDRev = D.unfold readRev2We can try this if the unfold has any perf issues.streamlyStrict left fold of an array.streamlyRight fold of an array.streamlywriteN n folds a maximum of n' elements from the input stream to an . #writeN n = Fold.take n writeNUnsafestreamlywriteNAligned alignment n folds a maximum of n' elements from the input stream to an  aligned to the given size. Pre-releasestreamlywriteNAlignedUnmanaged n folds a maximum of n' elements from the input stream to an  aligned to the given size and using unmanaged memory. This could be useful to allocate memory that we need to allocate only once in the lifetime of the program. Pre-releasestreamlyLike  but does not check the array bounds when writing. The fold driver must not call the step function more than n times otherwise it will corrupt the memory and crash. This function exists mainly because any conditional in the step function blocks fusion causing 10x performance slowdown.streamly(Buffer a stream into a stream of arrays. 6writeChunks = Fold.many Fold.toStream (Array.writeN n)See . Unimplementedstreamly'Fold the whole input to a single array.-Caution! Do not use this on infinite streams.streamlyLike  but the array memory is aligned according to the specified alignment size. This could be useful when we have specific alignment, for example, cache aligned arrays for lookup table etc.-Caution! Do not use this on infinite streams.streamlyUse the  fold instead. "fromStreamDN n = D.fold (writeN n)streamly Create an  from the first N elements of a list. The array is allocated to size N, if the list terminates before N elements then the array may hold less than N elements.streamlyWe could take the approach of doubling the memory allocation on each overflow. This would result in more or less the same amount of copying as in the chunking approach. However, if we have to shrink in the end then it may result in an extra copy of the entire data. 'fromStreamD = StreamD.fold Array.write streamly Create an . from a list. The list must be of finite size.streamly-Copy two arrays into a newly allocated array.streamlySplice an array into a pre-reserved mutable array. The user must ensure that there is enough space in the mutable array, otherwise the splicing fails.streamlySplice a new array into a preallocated mutable array, doubling the space if there is no space in the target array.streamlyDrops the separator bytestreamlyCreate two slices of an array without copying the original array. The specified index i( is the first index of the second slice.streamly3Copies the two arrays into a newly allocated array.??7!(c) 2020 Composewell Technologies BSD3-3-Clausestreamly@composewell.com experimentalGHCNone #$&-035678>?- streamlyReturns an immutable array using the same underlying pointers of the mutable array. If the underlying array is mutated, the immutable promise is lost. Please make sure that the mutable array is never used after freezing it using  unsafeFreeze. streamly Similar to   but uses  on the mutable array first. streamlyReturns a mutable array using the same underlying pointers of the immutable array. 
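A round-trip sketch for the array fold and unfold described above, assuming the Streamly.Data.Array.Foreign module of streamly 0.8 (writeN fold, read unfold):

import qualified Streamly.Data.Array.Foreign as Array
import qualified Streamly.Prelude as Stream

main :: IO ()
main = do
    -- Fold at most 5 elements of the stream into an array...
    arr <- Stream.fold (Array.writeN 5) (Stream.fromList [1 .. 10 :: Int])
    -- ...and unfold the array back into a stream.
    xs <- Stream.toList (Stream.unfold Array.read arr)
    print xs  -- [1,2,3,4,5]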
If the resulting array is mutated, the older immutable array is mutated as well. Please make sure that the immutable array is never used after thawing it using  unsafeThaw. streamly Create an  ) of the given number of elements of type a from a read only pointer Ptr a. The pointer is not freed when the array is garbage collected. This API is unsafe for the following reasons: The pointer must point to static pinned memory or foreign memory that does not require freeing..=The pointer must be legally accessible upto the given length.To guarantee that the array is immutable, the contents of the address must be guaranteed to not change.Unsafe Pre-release streamly Create an  Array Word8? of the given length from a static, read only machine address . See   for safety caveats.A common use case for this API is to create an array from a static unboxed string literal. GHC string literals are of type , and must contain characters that can be encoded in a byte i.e. characters or literal bytes in the range from 0-255.import Data.Word (Word8)0Array.fromAddr# 5 "hello world!"# :: Array Word8[104,101,108,108,111]0Array.fromAddr# 3 "\255\NUL\255"# :: Array Word8 [255,0,255] See also:  fromString#UnsafeTime complexity: O(1) Pre-release streamlyGenerate a byte array from an # that contains a sequence of NUL (0) terminated bytes. The array would not include the NUL byte. The address must be in static read-only memory and must be legally accessible up to and including the first NUL byte. An unboxed string literal (e.g. "hello"#) is a common example of an  in static read only memory. It represents the UTF8 encoded sequence of bytes terminated by a NUL byte (a -) corresponding to the given unicode string."Array.fromCString# "hello world!"#/[104,101,108,108,111,32,119,111,114,108,100,33]"Array.fromCString# "\255\NUL\255"#[255] See also:  Unsafe9Time complexity: O(n) (computes the length of the string) Pre-release streamly Create an   from the first N elements of a list. The array is allocated to size N, if the list terminates before N elements then the array may hold less than N elements.#Since 0.7.0 (Streamly.Memory.Array) streamly Create an  . from a list. The list must be of finite size.#Since 0.7.0 (Streamly.Memory.Array) streamlyarraysOf n stream< groups the input stream into a stream of arrays of size n. streamlyUse the "read" unfold instead. flattenArrays = unfoldMany read=We can try this if there are any fusion issues in the unfold. streamly!Use the "readRev" unfold instead. "flattenArrays = unfoldMany readRev=We can try this if there are any fusion issues in the unfold. streamlyReturn element at the specified index without checking the bounds.9Unsafe because it does not check the bounds of the array. streamlyReturn element at the specified index without checking the bounds. streamlyO(1)" Get the byte length of the array. streamlyO(1) Get the length of the array i.e. the number of elements in the array.#Since 0.7.0 (Streamly.Memory.Array) streamly/Unfold an array into a stream in reverse order. streamly Convert an   into a stream. Pre-release streamly Convert an   into a stream in reverse order. Pre-release streamlyCreate two slices of an array without copying the original array. The specified index i( is the first index of the second slice. 
streamly Convert an   into a list.#Since 0.7.0 (Streamly.Memory.Array) streamlywriteN n folds a maximum of n' elements from the input stream to an  .#Since 0.7.0 (Streamly.Memory.Array) streamlywriteNAligned alignment n folds a maximum of n' elements from the input stream to an   aligned to the given size. Pre-release streamlywriteNAlignedUnmanaged n folds a maximum of n' elements from the input stream to an   aligned to the given size and using unmanaged memory. This could be useful to allocate memory that we need to allocate only once in the lifetime of the program. Pre-release streamlyLike   but does not check the array bounds when writing. The fold driver must not call the step function more than n times otherwise it will corrupt the memory and crash. This function exists mainly because any conditional in the step function blocks fusion causing 10x performance slowdown. streamly'Fold the whole input to a single array.-Caution! Do not use this on infinite streams.#Since 0.7.0 (Streamly.Memory.Array) streamlyLike   but the array memory is aligned according to the specified alignment size. This could be useful when we have specific alignment, for example, cache aligned arrays for lookup table etc.-Caution! Do not use this on infinite streams.3 3  8(c) 2018 Composewell Technologies (c) Roman Leshchinskiy 2008-2010 BSD-3-Clausestreamly@composewell.com experimentalGHCNone #$&-035678>?. streamlyintersperse after every n items> >     9!(c) 2019 Composewell TechnologiesBSD3streamly@composewell.com experimentalGHCNone #$&-035678>?; streamlyA ring buffer is a mutable array of fixed size. Initially the array is empty, with ringStart pointing at the start of allocated memory. We call the next location to be written in the ring as ringHead. Initially ringHead == ringStart. When the first item is added, ringHead points to ringStart + sizeof item. When the buffer becomes full ringHead would wrap around to ringStart. When the buffer is full, ringHead always points at the oldest item in the ring and the newest item added always overwrites the oldest item.When using it we should keep in mind that a ringBuffer is a mutable data structure. We should not leak out references to it for immutable use. streamly/Get the first address of the ring as a pointer. streamlyCreate a new ringbuffer and return the ring buffer and the ringHead. Returns the ring and the ringHead, the ringHead is same as ringStart. streamlyAdvance the ringHead by 1 item, wrap around if we hit the end of the array. streamlyMove the ringHead by n items. The direction depends on the sign on whether n is positive or negative. Wrap around if we hit the beginning or end of the array. streamlyInsert an item at the head of the ring, when the ring is full this replaces the oldest item in the ring with the new item. This is unsafe beause ringHead supplied is not verified to be within the Ring. Also, the ringStart foreignPtr must be guaranteed to be alive by the caller. streamlyLike   but compares only N bytes instead of entire length of the ring buffer. This is unsafe because the ringHead Ptr is not checked to be in range. streamlyByte compare the entire length of ringBuffer with the given array, starting at the supplied ringHead pointer. Returns true if the Array and the ringBuffer have identical contents.This is unsafe because the ringHead Ptr is not checked to be in range. The supplied array must be equal to or bigger than the ringBuffer, ARRAY BOUNDS ARE NOT CHECKED. 
streamly8Fold the buffer starting from ringStart up to the given  using a pure step function. This is useful to fold the items in the ring when the ring is not full. The supplied pointer is usually the end of the ring.>Unsafe because the supplied Ptr is not checked to be in range. streamly5Like unsafeFoldRing but with a monadic step function. streamlyFold the entire length of a ring buffer starting at the supplied ringHead pointer. Assuming the supplied ringHead pointer points to the oldest item, this would fold the ring starting from the oldest item to the newest item in the ring.(Note, this will crash on ring of 0 size. streamlyFold Int items in the ring starting at Ptr a0. Won't fold more than the length of the ring.(Note, this will crash on ring of 0 size.  :(c) 2018 Composewell Technologies (c) Roman Leshchinskiy 2008-2010 BSD-3-Clausestreamly@composewell.com experimentalGHCNone #$&-035678>?A streamlyInterleave streams (full streams, not the elements) unfolded from two input streams and concat. Stop when the first stream stops. If the second stream ends before the first one then first stream still keeps running alone without any interleaving with the second stream. a1, a2, ... an[b1, b2 ...] => [streamA1, streamA2, ... streamAn] [streamB1, streamB2, ...] => [streamA1, streamB1, streamA2...StreamAn, streamBn] => [a11, a12, ...a1j, b11, b12, ...b1k, a21, a22, ...] streamlyInterleave streams (full streams, not the elements) unfolded from two input streams and concat. Stop when the first stream stops. If the second stream ends before the first one then first stream still keeps running alone without any interleaving with the second stream. a1, a2, ... an[b1, b2 ...] => [streamA1, streamA2, ... streamAn] [streamB1, streamB2, ...] => [streamA1, streamB1, streamA2...StreamAn, streamBn] => [a11, a12, ...a1j, b11, b12, ...b1k, a21, a22, ...] streamly)Performs infix separator style splitting. streamly)Performs infix separator style splitting.- -    ;(c) 2018 Composewell Technologies (c) Roman Leshchinskiy 2008-2010 BSD-3-Clausestreamly@composewell.com experimentalGHCNone #$&-035678>?C streamlyRun a Parse over a stream. streamlyRun a Parse- over a stream and return rest of the Stream. streamly1Run a streaming composition, discard the results. streamly1Execute a monadic action for each element of the ' '     !(c) 2018 Composewell Technologies BSD-3-Clausestreamly@composewell.com experimentalGHCNone #$&-035678>?Dacdb <!(c) 2017 Composewell TechnologiesBSD3streamly@composewell.com experimentalGHCNone #$&-035678>?P streamly  fromList =    Construct a stream from a list of pure values. This is more efficient than  for serial streams. streamly5Convert a stream into a list in the underlying monad. streamlyLike  #, but with a monadic step function. streamlyStrict left fold with an extraction function. Like the standard strict left fold, but applies a user supplied extraction function (the third argument) to the folded value at the end. This is designed to work with the foldl library. The suffix x is a mnemonic for extraction. streamlyStrict left associative fold. streamly&Lazy left fold to a transformer monad.!For example, to reverse a stream: S.toList $ S.foldlT (flip S.cons) S.nil $ (S.fromList [1..5] :: SerialT IO Int) streamly3Strict left scan with an extraction function. Like scanl', but applies a user supplied extraction function (the third argument) at each step. This is designed to work with the foldl library. The suffix x is a mnemonic for extraction. 
streamly Compare two streams for equality streamlyCompare two streams streamly A variant of  that allows you to fold a  container of streams using the specified stream sum operation. concatFoldableWith async $ map return [1..3]Equivalent to: concatFoldableWith f = Prelude.foldr f S.nil concatFoldableWith f = S.concatMapFoldableWith f id 5Since: 0.8.0 (Renamed foldWith to concatFoldableWith)Since: 0.1.0 (Streamly) streamly A variant of 9 that allows you to map a monadic streaming action on a  container and then fold it using the specified stream merge operation. concatMapFoldableWith async return [1..3]Equivalent to: concatMapFoldableWith f g = Prelude.foldr (f . g) S.nil concatMapFoldableWith f g xs = S.concatMapWith f g (S.fromFoldable xs) ;Since: 0.8.0 (Renamed foldMapWith to concatMapFoldableWith)Since: 0.1.0 (Streamly) streamlyLike   but with the last two arguments reversed i.e. the monadic streaming function is the last argument.Equivalent to: concatForFoldableWith f xs g = Prelude.foldr (f . g) S.nil xs concatForFoldableWith = flip S.concatMapFoldableWith ;Since: 0.8.0 (Renamed forEachWith to concatForFoldableWith)Since: 0.1.0 (Streamly)   =!(c) 2017 Composewell TechnologiesBSD3streamly@composewell.com experimentalGHCNone! #$&-035678>?Y streamly>An IO stream whose applicative instance zips streams wAsyncly.Since: 0.2.0 (Streamly) streamlyFor   streams: (<>) = gj ( *) = &'Streamly.Prelude.serial.zipAsyncWith' id Applicative evaluates the streams being zipped concurrently, the following would take half the time that it would take in serial zipping:6s = Stream.fromFoldableM $ Prelude.map delay [1, 1, 1]5Stream.toList $ Stream.fromZipAsync $ (,) <$> s <*> s...[(1,1),(1,1),(1,1)]Since: 0.2.0 (Streamly) streamly>An IO stream whose applicative instance zips streams serially.Since: 0.2.0 (Streamly) streamly streamlyFor   streams: (<>) = gj ( *) = !'Streamly.Prelude.serial.zipWith' id 8Applicative evaluates the streams being zipped serially:s1 = Stream.fromFoldable [1, 2]s2 = Stream.fromFoldable [3, 4]s3 = Stream.fromFoldable [5, 6]Stream.toList $ Stream.fromZipSerial $ (,,) <$> s1 <*> s2 <*> s3[(1,3,5),(2,4,6)]Since: 0.2.0 (Streamly) streamlyLike  & but using a monadic zipping function. streamly7Zip two streams serially using a pure zipping function. > S.toList $ S.zipWith (+) (S.fromList [1,2,3]) (S.fromList [4,5,6]) [5,7,9] streamlyLike   but zips concurrently i.e. both the streams being zipped are generated concurrently. streamlyLike   but zips concurrently i.e. both the streams being zipped are generated concurrently. streamly(Fix the type of a polymorphic stream as  .Since: 0.2.0 (Streamly) streamlySame as  . streamly(Fix the type of a polymorphic stream as  .Since: 0.2.0 (Streamly) streamlySame as  . >!(c) 2017 Composewell TechnologiesBSD3streamly@composewell.com experimentalGHCNone! #$&-035678>?h streamly streamly5An interleaving serial IO stream of elements of type a. See  ! documentation for more details.Since: 0.2.0 (Streamly) streamlyFor   streams: (<>) = g --  (>>=) = flip . g g --   Note that  is associative only if we disregard the ordering of elements in the resulting stream. 
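A sketch of concatMapFoldableWith as described above: fold a container of seeds into one stream using a chosen merge operation (here wSerial, so the generated streams are interleaved). It assumes concatMapFoldableWith and wSerial are exported from Streamly.Prelude, as the 0.8.0 notes above suggest.

import qualified Streamly.Prelude as Stream
import Streamly.Prelude (wSerial)

main :: IO ()
main = do
    -- Equivalent to: Prelude.foldr (wSerial . Stream.fromList) Stream.nil xs
    xs <- Stream.toList
        $ Stream.concatMapFoldableWith wSerial Stream.fromList
            [[1, 2, 3], [10, 20, 30 :: Int]]
    print xs  -- [1,10,2,20,3,30]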
A single  bind behaves like a for loop::{'Stream.toList $ Stream.fromWSerial $ do6 x <- Stream.fromList [1,2] -- foreach x in stream return x:}[1,2]2Nested monad binds behave like interleaved nested for loops::{'Stream.toList $ Stream.fromWSerial $ do5 x <- Stream.fromList [1,2] -- foreach x in stream5 y <- Stream.fromList [3,4] -- foreach y in stream return (x, y):}[(1,3),(2,3),(1,4),(2,4)]It is a result of interleaving all the nested iterations corresponding to element 1 in the first stream with all the nested iterations of element 2:!import Streamly.Prelude (wSerial)Stream.toList $ Stream.fromList [(1,3),(1,4)] `wSerial` Stream.fromList [(2,3),(2,4)][(1,3),(2,3),(1,4),(2,4)]The W in the name stands for wide or breadth wise scheduling in contrast to the depth wise scheduling behavior of  .Since: 0.2.0 (Streamly) streamly streamly'A serial IO stream of elements of type a. See  ! documentation for more details.Since: 0.2.0 (Streamly) streamlyFor   streams: (<>) = gj --  (>>=) = flip . g gj --   A single  bind behaves like a for loop::{Stream.toList $ do6 x <- Stream.fromList [1,2] -- foreach x in stream return x:}[1,2]&Nested monad binds behave like nested for loops::{Stream.toList $ do5 x <- Stream.fromList [1,2] -- foreach x in stream5 y <- Stream.fromList [3,4] -- foreach y in stream return (x, y):}[(1,3),(1,4),(2,3),(2,4)]Since: 0.2.0 (Streamly) streamly(Fix the type of a polymorphic stream as  .Since: 0.1.0 (Streamly) streamly  map = fmap Same as . 5> S.toList $ S.map (+1) $ S.fromList [1,2,3] [2,3,4] streamly(Fix the type of a polymorphic stream as  .Since: 0.2.0 (Streamly) streamlySame as  . streamlyInterleaves two streams, yielding one element from each stream alternately. When one stream stops the rest of the other stream is used in the output stream.!import Streamly.Prelude (wSerial)stream1 = Stream.fromList [1,2]stream2 = Stream.fromList [3,4]>Stream.toList $ Stream.fromWSerial $ stream1 `wSerial` stream2 [1,3,2,4]Note, for singleton streams   and serial are identical.Note that this operation cannot be used to fold a container of infinite streams but it can be used for very large streams as the state that it needs to maintain is proportional to the logarithm of the number of streams.Since: 0.2.0 (Streamly) streamlyLike  : but stops interleaving as soon as the first stream stops. streamlyLike   but stops interleaving as soon as any of the two streams stops. streamlySame as  . streamlyBuild a stream by unfolding a monadic step function starting from a seed. The step function returns the next element in the stream and the next seed value. When it is done it returns # and the stream ends. For example, let f b = if b > 3 then return Nothing else print b >> return (Just (b, b + 1)) in drain $ unfoldrM f 0   0 1 2 3  Pre-release    6 5!(c) 2017 Composewell Technologies BSD-3-Clausestreamly@composewell.com experimentalGHCNone #$&-035678>?i     ?!(c) 2018 Composewell TechnologiesBSD3streamly@composewell.com experimentalGHCNone #$&-035678>? streamlyTypes that can be enumerated as a stream. The operations in this type class are equivalent to those in the  type class, except that these generate a stream instead of a list. Use the functions in )Streamly.Internal.Data.Stream.Enumeration module to define new instances. streamlyenumerateFrom from/ generates a stream starting with the element from, enumerating up to  when the type is 8 or generating an infinite stream when the type is not . 
>>> Stream.toList $ Stream.take 4 $ Stream.enumerateFrom (0 :: Int) [0,1,2,3] For  types, enumeration is numerically stable. However, no overflow or underflow checks are performed. >>> Stream.toList $ Stream.take 4 $ Stream.enumerateFrom 1.1 [1.1,2.1,3.1,4.1] streamly3Generate a finite stream starting with the element from(, enumerating the type up to the value to. If to is smaller than from# then an empty stream is returned. <>>> Stream.toList $ Stream.enumerateFromTo 0 4 [0,1,2,3,4] For 3 types, the last element is equal to the specified to5 value after rounding to the nearest integral value. >>> Stream.toList $ Stream.enumerateFromTo 1.1 4 [1.1,2.1,3.1,4.1] >>> Stream.toList $ Stream.enumerateFromTo 1.1 4.6 [1.1,2.1,3.1,4.1,5.1] streamlyenumerateFromThen from then, generates a stream whose first element is from, the second element is then3 and the successive elements are in increments of  then - from. Enumeration can occur downwards or upwards depending on whether then comes before or after from. For  types the stream ends when  is reached, for unbounded types it keeps enumerating infinitely. >>> Stream.toList $ Stream.take 4 $ Stream.enumerateFromThen 0 2 [0,2,4,6] >>> Stream.toList $ Stream.take 4 $ Stream.enumerateFromThen 0 (-2) [0,-2,-4,-6] streamly enumerateFromThenTo from then to3 generates a finite stream whose first element is from, the second element is then3 and the successive elements are in increments of  then - from up to to. Enumeration can occur downwards or upwards depending on whether then comes before or after from. >>> Stream.toList $ Stream.enumerateFromThenTo 0 2 6 [0,2,4,6] >>> Stream.toList $ Stream.enumerateFromThenTo 0 (-2) (-6) [0,-2,-4,-6] streamly#enumerateFromStepIntegral from step6 generates an infinite stream whose first element is from3 and the successive elements are in increments of step.CAUTION: This function is not safe for finite integral types. It does not check for overflow, underflow or bounds. >>> Stream.toList $ Stream.take 4 $ Stream.enumerateFromStepIntegral 0 2 [0,2,4,6] >>> Stream.toList $ Stream.take 3 $ Stream.enumerateFromStepIntegral 0 (-2) [0,-2,-4] streamly Enumerate an  type. enumerateFromIntegral from, generates a stream whose first element is from3 and the successive elements are in increments of 1+. The stream is bounded by the size of the  type. >>> Stream.toList $ Stream.take 4 $ Stream.enumerateFromIntegral (0 :: Int) [0,1,2,3] streamly Enumerate an  type in steps. $enumerateFromThenIntegral from then+ generates a stream whose first element is from, the second element is then2 and the successive elements are in increments of  then - from,. The stream is bounded by the size of the  type. >>> Stream.toList $ Stream.take 4 $ Stream.enumerateFromThenIntegral (0 :: Int) 2 [0,2,4,6] >>> Stream.toList $ Stream.take 4 $ Stream.enumerateFromThenIntegral (0 :: Int) (-2) [0,-2,-4,-6] streamly Enumerate an  type up to a given limit. enumerateFromToIntegral from to3 generates a finite stream whose first element is from. and successive elements are in increments of 1 up to to. >>> Stream.toList $ Stream.enumerateFromToIntegral 0 4 [0,1,2,3,4] streamly Enumerate an % type in steps up to a given limit. (enumerateFromThenToIntegral from then to3 generates a finite stream whose first element is from, the second element is then3 and the successive elements are in increments of  then - from up to to. 
>>> Stream.toList $ Stream.enumerateFromThenToIntegral 0 2 6 [0,2,4,6] >>> Stream.toList $ Stream.enumerateFromThenToIntegral 0 (-2) (-6) [0,-2,-4,-6] streamly&Numerically stable enumeration from a  number in steps of size 1. enumerateFromFractional from, generates a stream whose first element is from2 and the successive elements are in increments of 12. No overflow or underflow checks are performed.This is the equivalent to  for  types. For example: >>> Stream.toList $ Stream.take 4 $ Stream.enumerateFromFractional 1.1 [1.1,2.1,3.1,4.1] streamly&Numerically stable enumeration from a  number in steps. %enumerateFromThenFractional from then, generates a stream whose first element is from, the second element is then3 and the successive elements are in increments of  then - from2. No overflow or underflow checks are performed.This is the equivalent of  for  types. For example: >>> Stream.toList $ Stream.take 4 $ Stream.enumerateFromThenFractional 1.1 2.1 [1.1,2.1,3.1,4.1] >>> Stream.toList $ Stream.take 4 $ Stream.enumerateFromThenFractional 1.1 (-2.1) [1.1,-2.1,-5.300000000000001,-8.500000000000002] streamly&Numerically stable enumeration from a  number to a given limit. !enumerateFromToFractional from to3 generates a finite stream whose first element is from. and successive elements are in increments of 1 up to to.This is the equivalent of  for  types. For example: >>> Stream.toList $ Stream.enumerateFromToFractional 1.1 4 [1.1,2.1,3.1,4.1] >>> Stream.toList $ Stream.enumerateFromToFractional 1.1 4.6 [1.1,2.1,3.1,4.1,5.1] 7Notice that the last element is equal to the specified to. value after rounding to the nearest integer. streamly&Numerically stable enumeration from a ( number in steps up to a given limit. *enumerateFromThenToFractional from then to3 generates a finite stream whose first element is from, the second element is then3 and the successive elements are in increments of  then - from up to to.This is the equivalent of  for  types. For example: >>> Stream.toList $ Stream.enumerateFromThenToFractional 0.1 2 6 [0.1,2.0,3.9,5.799999999999999] >>> Stream.toList $ Stream.enumerateFromThenToFractional 0.1 (-2) (-6) [0.1,-2.0,-4.1000000000000005,-6.200000000000001] streamly  for  types not larger than . streamly  for  types not larger than . streamly  for  types not larger than .Note: We convert the  to  and enumerate the ,. If a type is bounded but does not have a  instance then we can go on enumerating it beyond the legal values of the type, resulting in the failure of  when converting back to . Therefore we require a / instance for this function to be safely used. streamly "enumerate = enumerateFrom minBound Enumerate a  type from its  to  streamly &enumerateTo = enumerateFromTo minBound Enumerate a  type from its  to specified value. streamly 4enumerateFromBounded = enumerateFromTo from maxBound  for   types.  @!(c) 2017 Composewell Technologies BSD-3-Clausestreamly@composewell.com experimentalGHCNone #$&-035678>?u streamlySpecify the maximum number of threads that can be spawned concurrently for any concurrent combinator in a stream. A value of 0 resets the thread limit to default, a negative value means there is no limit. The default value is 1500.   does not affect  ParallelT5 streams as they can use unbounded number of threads.When the actions in a stream are IO bound, having blocking IO calls, this option can be used to control the maximum number of in-flight IO requests. 
When the actions are CPU bound this option can be used to control the amount of CPU used by the stream.Since: 0.4.0 (Streamly) streamlySpecify the maximum size of the buffer for storing the results from concurrent computations. If the buffer becomes full we stop spawning more concurrent tasks until there is space in the buffer. A value of 0 resets the buffer size to default, a negative value means there is no limit. The default value is 1500.CAUTION! using an unbounded  : value (i.e. a negative value) coupled with an unbounded   value is a recipe for disaster in presence of infinite streams, or very large streams. Especially, it must not be used when  is used in  ZipAsyncM streams as  in applicative zip streams generates an infinite stream causing unbounded concurrent generation with no limit on the buffer or threads.Since: 0.4.0 (Streamly) streamly&Specify the pull rate of a stream. A  value resets the rate to default which is unlimited. When the rate is specified, concurrent production may be ramped up or down automatically to achieve the specified yield rate. The specific behavior for different styles of $ specifications is documented under . The effective maximum production rate achieved by a stream is governed by:The   limitThe   limit5The maximum rate that the stream producer can achieve5The maximum rate that the stream consumer can achieveSince: 0.5.0 (Streamly) streamlySame as )rate (Just $ Rate (r/2) r (2*r) maxBound)Specifies the average production rate of a stream in number of yields per second (i.e. Hertz). Concurrent production is ramped up or down automatically to achieve the specified average yield rate. The rate can go down to half of the specified rate on the lower side and double of the specified rate on the higher side.Since: 0.5.0 (Streamly) streamlySame as %rate (Just $ Rate r r (2*r) maxBound)Specifies the minimum rate at which the stream should yield values. As far as possible the yield rate would never be allowed to go below the specified rate, even though it may possibly go above it at times, the upper limit is double of the specified rate.Since: 0.5.0 (Streamly) streamlySame as %rate (Just $ Rate (r/2) r r maxBound)Specifies the maximum rate at which the stream should yield values. As far as possible the yield rate would never be allowed to go above the specified rate, even though it may possibly go below it at times, the lower limit is half of the specified rate. This can be useful in applications where certain resource usage must not be allowed to go beyond certain limits.Since: 0.5.0 (Streamly) streamlySame as rate (Just $ Rate r r r 0)Specifies a constant yield rate. If for some reason the actual rate goes above or below the specified rate we do not try to recover it by increasing or decreasing the rate in future. This can be useful in applications like graphics frame refresh where we need to maintain a constant refresh rate.Since: 0.5.0 (Streamly) streamly:Print debug information about an SVar when the stream ends Pre-release A(c) 2019 Composewell Technologies (c) 2013 Gabriel GonzalezBSD3streamly@composewell.com experimentalGHCNone #$&-035678>?R streamly%Change the underlying monad of a fold Pre-release streamlyAdapt a pure fold to any monad -generally = Fold.hoist (return . runIdentity) Pre-release streamly4Flatten the monadic output of a fold to pure output. streamly/Map a monadic function on the output of a fold. 
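A minimal sketch of the concurrency control combinators documented above (maxThreads, avgRate); the delays, counts and limits are arbitrary illustrative values, not recommendations:

    import Control.Concurrent (threadDelay)
    import qualified Streamly.Prelude as Stream

    -- Run 10 delayed actions on the async style stream, capping the worker
    -- threads at 4 and the average production rate at 2 yields per second.
    main :: IO ()
    main =
        Stream.drain
            $ Stream.fromAsync
            $ Stream.maxThreads 4
            $ Stream.avgRate 2
            $ Stream.replicateM 10 (threadDelay 100000)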
streamlymapMaybe f fold maps a  returning function f( on the input of the fold, filters out 1 elements, and return the values extracted from .(f x = if even x then Just x else Nothing!fld = Fold.mapMaybe f Fold.toList-Stream.fold fld (Stream.enumerateFromTo 1 10) [2,4,6,8,10] streamlyApply a transformation on a  using a . Pre-release streamly This hash is often used in Rabin-Karp string search algorithm.See *https://en.wikipedia.org/wiki/Rolling_hash streamly Compute an + sized polynomial rolling hash of a stream. 2rollingHash = Fold.rollingHashWithSalt defaultSalt streamly Compute an  sized polynomial rolling hash of the first n elements of a stream. 0rollingHashFirstN = Fold.take n Fold.rollingHash Pre-release streamlyAppend the elements of an input stream to a provided starting value.Stream.fold (Fold.sconcat 10) (Stream.map Data.Monoid.Sum $ Stream.enumerateFromTo 1 10)Sum {getSum = 65} sconcat = Fold.foldl' (<>) streamly;Fold an input stream consisting of monoidal elements using  and .Stream.fold Fold.mconcat (Stream.map Data.Monoid.Sum $ Stream.enumerateFromTo 1 10)Sum {getSum = 55} mconcat = Fold.sconcat mempty streamly $foldMap f = Fold.lmap f Fold.mconcatMake a fold from a pure function that folds the output of the function using  and .Stream.fold (Fold.foldMap Data.Monoid.Sum) $ Stream.enumerateFromTo 1 10Sum {getSum = 55} streamly &foldMapM f = Fold.lmapM f Fold.mconcatMake a fold from a monadic function that folds the output of the function using  and .Stream.fold (Fold.foldMapM (return . Data.Monoid.Sum)) $ Stream.enumerateFromTo 1 10Sum {getSum = 55} streamlyBuffers the input stream to a list in the reverse order of the input. %toListRev = Fold.foldl' (flip (:)) []Warning! working on large lists accumulated as buffers in memory could be very inefficient, consider using Streamly.Array instead. streamlyA fold that drains the first n elements of its input, running the effects and discarding the results. !drainN n = Fold.take n Fold.drain Pre-release streamlyLike  , except with a more general  argument Pre-release streamly&Lookup the element at the given index. See also: g streamly0Extract the first element of the stream, if any. streamly=Returns the first element that satisfies the given predicate. streamly!In a stream of (key-value) pairs (a, b), return the value b9 of the first pair where the key equals the given value a. 'lookup = snd <$> Fold.find ((==) . fst) streamly;Returns the first index that satisfies the given predicate. streamlyReturns the first index where a given value is found in the stream. #elemIndex a = Fold.findIndex (== a) streamlyReturn  if the input stream is empty. null = fmap isJust Fold.head streamlyReturns : if any of the elements of a stream satisfies a predicate.7Stream.fold (Fold.any (== 0)) $ Stream.fromList [1,0,1]True any p = Fold.lmap p Fold.or streamlyReturn / if the given element is present in the stream. elem a = Fold.any (== a) streamlyReturns 1 if all elements of a stream satisfy a predicate.7Stream.fold (Fold.all (== 0)) $ Stream.fromList [1,0,1]False all p = Fold.lmap p Fold.and streamlyReturns 3 if the given element is not present in the stream. 
notElem a = Fold.all (/= a) streamlyReturns  if all elements are ,  otherwise and = Fold.all (== True) streamlyReturns  if any element is ,  otherwise or = Fold.any (== True) streamlysplitAt n f1 f2 composes folds f1 and f2 such that first n- elements of its input are consumed by fold f11 and the rest of the stream is consumed by fold f2.let splitAt_ n xs = Stream.fold (Fold.splitAt n Fold.toList Fold.toList) $ Stream.fromList xssplitAt_ 6 "Hello World!"("Hello ","World!")splitAt_ (-1) [1,2,3] ([],[1,2,3])splitAt_ 0 [1,2,3] ([],[1,2,3])splitAt_ 1 [1,2,3] ([1],[2,3])splitAt_ 3 [1,2,3] ([1,2,3],[])splitAt_ 4 [1,2,3] ([1,2,3],[]) 9splitAt n f1 f2 = Fold.serialWith (,) (Fold.take n f1) f2Internal streamlyLike  7 but drops the element on which the predicate succeeds.Stream.fold (Fold.takeEndBy_ (== '\n') Fold.toList) $ Stream.fromList "hello\nthere\n""hello"Stream.toList $ Stream.foldMany (Fold.takeEndBy_ (== '\n') Fold.toList) $ Stream.fromList "hello\nthere\n"["hello","there"] Stream.splitOnSuffix p f = Stream.foldMany (Fold.takeEndBy_ p f)See g/ for more details on splitting a stream using  . streamlyTake the input, stop when the predicate succeeds taking the succeeding element as well.Stream.fold (Fold.takeEndBy (== '\n') Fold.toList) $ Stream.fromList "hello\nthere\n" "hello\n"Stream.toList $ Stream.foldMany (Fold.takeEndBy (== '\n') Fold.toList) $ Stream.fromList "hello\nthere\n"["hello\n","there\n"] Stream.splitWithSuffix p f = Stream.foldMany (Fold.takeEndBy p f)See g/ for more details on splitting a stream using  . streamlyDistribute one copy of the stream to each fold and zip the results.  |-------Fold m a b--------| ---stream m a---| |---m (b,c) |-------Fold m a c--------| Stream.fold (Fold.tee Fold.sum Fold.length) (Stream.enumerateFromTo 1.0 100.0) (5050.0,100) tee = teeWith (,) streamlyDistribute one copy of the stream to each fold and collect the results in a container.  |-------Fold m a b--------| ---stream m a---| |---m [b] |-------Fold m a b--------| | | ... Stream.fold (Fold.distribute [Fold.sum, Fold.length]) (Stream.enumerateFromTo 1 5)[15,5] distribute = Prelude.foldr (Fold.teeWith (:)) (Fold.fromPure [])4This is the consumer side dual of the producer side   operation.Stops when all the folds stop. streamly,Partition the input over two folds using an  partitioning predicate.  |-------Fold b x--------| -----stream m a --> (Either b c)----| |----(x,y) |-------Fold c y--------| #Send input to either fold randomly: > import System.Random (randomIO) > randomly a = randomIO >>= \x -> return $ if x then Left a else Right a > Stream.fold (Fold.partitionByM randomly Fold.length Fold.length) (Stream.enumerateFromTo 1 100) (59,41) 3Send input to the two folds in a proportion of 2:1: import Data.IORef (newIORef, readIORef, writeIORef) proportionately m n = do ref <- newIORef $ cycle $ concat [replicate m Left, replicate n Right] return $ \a -> do r <- readIORef ref writeIORef ref $ tail r return $ head r a main = do f <- proportionately 2 1 r <- S.fold (FL.partitionByM f FL.length FL.length) (S.enumerateFromTo (1 :: Int) 100) print r  (67,33) 4This is the consumer side dual of the producer side mergeBy operation.When one fold is done, any input meant for it is ignored until the other fold is also done.Stops when both the folds stop. See also:   and  . Pre-release streamly Similar to  / but terminates when the first fold terminates. Unimplemented streamly Similar to  ) but terminates when any fold terminates. 
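Stepping back to the fold composition combinators described above (tee, teeWith, distribute), the following sketch computes the sum and the mean of a stream in a single pass; the qualified import names and the helper name sumAndMean are assumptions made for this example:

    import qualified Streamly.Data.Fold as Fold
    import qualified Streamly.Prelude as Stream

    -- Distribute one copy of the input to each of the two folds and pair up
    -- their results: (sum, mean) computed in one pass over the stream.
    sumAndMean :: IO (Double, Double)
    sumAndMean =
        let mean = Fold.teeWith (/) Fold.sum (fromIntegral <$> Fold.length)
        in Stream.fold (Fold.tee Fold.sum mean) (Stream.enumerateFromTo 1.0 10.0)

    -- >>> sumAndMean
    -- (55.0,5.5)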
Unimplemented streamlySame as  $ but with a pure partition function.'Count even and odd numbers in a stream::{ let f = Fold.partitionBy (\n -> if even n then Left n else Right n)= (fmap (("Even " ++) . show) Fold.length)= (fmap (("Odd " ++) . show) Fold.length)1 in Stream.fold f (Stream.enumerateFromTo 1 100):}("Even 50","Odd 50") Pre-release streamlyCompose two folds such that the combined fold accepts a stream of  and routes the  values to the first fold and  values to the second fold. partition = partitionBy id streamlySplit the input stream based on a key field and fold each split using a specific fold collecting the results in a map from the keys to the results. Useful for cases like protocol handlers to handle different type of packets using different handlers.  |-------Fold m a b -----stream m a-----Map-----| |-------Fold m a b | ... Any input that does not map to a fold in the input Map is silently ignored. 7demuxWith f kv = fmap fst $ demuxDefaultWith f kv drain Pre-release streamlyFold a stream of key value pairs using a map of specific folds for each key into a map from keys to the results of fold outputs of the corresponding values.import qualified Data.Map:{ let table = Data.Map.fromList [("SUM", Fold.sum), ("PRODUCT", Fold.product)] input = Stream.fromList [("SUM",1),("PRODUCT",2),("SUM",3),("PRODUCT",4)]) in Stream.fold (Fold.demux table) input:}"fromList [("PRODUCT",8),("SUM",4)] demux = demuxWith id Pre-release streamlyLike   but uses a default catchall fold to handle inputs which do not have a specific fold in the map to handle them.If any fold in the map stops, inputs meant for that fold are sent to the catchall fold. If the catchall fold stops then inputs that do not match any fold are ignored. b& to accept an additional state input  (s, a) -> b?. Convenient to filter with an addiitonal index or time input. filterWithIndex = with indexed filter filterWithAbsTime = with timestamped filter filterWithRelTime = with timeIndexed filter  Pre-release streamlysampleFromthen offset stride samples the element at offset- index and then every element at strides of stride. Unimplemented streamlyconcatSequence f t applies folds from stream t7 sequentially and collects the results using the fold f. Unimplemented streamly7Group the input stream into groups of elements between low and high". Collection starts in chunks of low) and then keeps doubling until we reach high8. Each chunk is folded using the provided fold function.This could be useful, for example, when we are folding a stream of unknown size to a stream of arrays and we want to minimize the number of allocations.NOTE: this would be an application of "many" using a terminating fold. Unimplemented streamly/A fold that buffers its input to a pure stream.Warning! working on large streams accumulated as buffers in memory could be very inefficient, consider using Streamly.Data.Array instead. toStream = foldr K.cons K.nil Pre-release streamlyBuffers the input stream to a pure stream in the reverse order of the input. (toStreamRev = foldl' (flip K.cons) K.nilWarning! working on large streams accumulated as buffers in memory could be very inefficient, consider using Streamly.Data.Array instead. Pre-release               !(c) 2019 Composewell Technologies BSD-3-Clausestreamly@composewell.comreleasedGHCNone #$&-035678>?ݶ; ;       B!(c) 2019 Composewell Technologies BSD-3-Clausestreamly@composewell.com experimentalGHCNone #$&-035678>? streamlyTransform the inner monad of a stream using a natural transformation. 
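The key-based demux folds described above are marked Pre-release; as a rough illustration of the same idea using only released APIs, one can classify a keyed stream into a Map with an ordinary strict left fold (the helper name sumByKey is hypothetical, not part of the library):

    import qualified Data.Map.Strict as Map
    import qualified Streamly.Prelude as Stream

    -- Hypothetical helper: accumulate values per key, a simple stand-in for
    -- the demux/classify style folds.
    sumByKey :: (Ord k, Num v, Monad m) => Stream.SerialT m (k, v) -> m (Map.Map k v)
    sumByKey = Stream.foldl' (\acc (k, v) -> Map.insertWith (+) k v acc) Map.empty

    -- >>> sumByKey $ Stream.fromList [("SUM",1),("PRODUCT",2),("SUM",3)]
    -- fromList [("PRODUCT",2),("SUM",4)]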
Internal streamly.Generalize the inner monad of the stream from  to any monad. Internal streamlyLift the inner monad m of a stream t m a to tr m using the monad transformer tr. streamly(Evaluate the inner monad of a stream as . streamly6Run a stream transformation using a given environment. See also:  Internal streamly(Evaluate the inner monad of a stream as .This is supported only for  / as concurrent state updation may not be safe. 2evalStateT s = Stream.map snd . Stream.runStateT s Internal streamlyRun a stateful (StateT) stream transformation using a given state.This is supported only for  / as concurrent state updation may not be safe. .usingStateT s f = evalStateT s . f . liftInner See also: scanl' Internal streamly(Evaluate the inner monad of a stream as > and emit the resulting state and value pair after each step.This is supported only for  / as concurrent state updation may not be safe.  C!(c) 2019 Composewell Technologies BSD-3-Clausestreamly@composewell.com experimentalGHCNone #$&-035678>?9 streamlyRun the action m b, before the stream yields its first element. &before action xs = 'nilM' action <> xs streamlyLike  , with following differences:action m b won't run if the stream is garbage collected after partial evaluation.Monad m( does not require any other constraints.%has slightly better performance than  ..Same as the following, but with stream fusion: &after_ action xs = xs <> 'nilM' action Pre-release streamlyRun the action m b whenever the stream t m a stops normally, or if it is garbage collected after a partial lazy evaluation.The semantics of the action m b4 are similar to the semantics of cleanup action in  . See also  streamlyRun the action m b if the stream aborts due to an exception. The exception is not caught, simply rethrown.Inhibits stream fusion streamlyLike   with following differences:action m b won't run if the stream is garbage collected after partial evaluation.does not require a  constraint.%has slightly better performance than  .Inhibits stream fusion Pre-release streamlyRun the action m b whenever the stream t m a stops normally, aborts due to an exception or if it is garbage collected after a partial lazy evaluation.$The semantics of running the action m b; are similar to the cleanup action semantics described in  . 6finally release = bracket (return ()) (const release)  See also  Inhibits stream fusion streamlyLike   but with following differences: alloc action m b# runs with async exceptions enabledcleanup action b -> m c won't run if the stream is garbage collected after partial evaluation.does not require a  constraint.%has slightly better performance than  .Inhibits stream fusion Pre-release streamlyRun the alloc action m b with async exceptions disabled but keeping blocking operations interruptible (see ). Use the output b as input to  b -> t m a to generate an output stream.b0 is usually a resource under the state of monad m, e.g. a file handle, that requires a cleanup after use. The cleanup action b -> m c, runs whenever the stream ends normally, due to a sync or async exception or if it gets garbage collected after a partial lazy evaluation.  only guarantees that the cleanup action runs, and it runs with async exceptions enabled. 
The action must ensure that it can successfully cleanup the resource in the face of sync or async exceptions.When the stream ends normally or on a sync exception, cleanup action runs immediately in the current thread context, whereas in other cases it runs in the GC context, therefore, cleanup may be delayed until the GC gets to run. See also:  Inhibits stream fusion streamlyLike   but the exception handler is also provided with the stream that generated the exception as input. The exception handler can thus re-evaluate the stream to retry the action that failed. The exception handler can again call  * on it to retry the action multiple times.This is highly experimental. In a stream of actions we can map the stream with a retry combinator to retry each action on failure.Inhibits stream fusion Pre-release streamlyWhen evaluating a stream if an exception occurs, stream evaluation aborts and the specified exception handler is run with the exception as argument.Inhibits stream fusion D!(c) 2017 Composewell Technologies BSD-3-Clausestreamly@composewell.com experimentalGHCNone #$&-035678>? streamly fromPure a = a `cons` nil ,Create a singleton stream from a pure value.?The following holds in monadic streams, but not in Zip streams: -fromPure = pure fromPure = fromEffect . pure In Zip applicative streams   is not the same as  because in that case  is equivalent to  instead.   and ( are equally efficient, in other cases   may be slightly more efficient than the other equivalent definitions.(Since: 0.8.0 (Renamed yield to fromPure) streamlySame as  streamly fromEffect m = m `consM` nil 0Create a singleton stream from a monadic action. <> Stream.toList $ Stream.fromEffect getLine hello ["hello"] +Since: 0.8.0 (Renamed yieldM to fromEffect) streamlySame as  streamly 4repeatM = fix . consM repeatM = cycle1 . fromEffect Generate a stream by repeatedly executing a monadic action forever. drain $ fromSerial $ S.take 10 $ S.repeatM $ (threadDelay 1000000 >> print 1) drain $ fromAsync $ S.take 10 $ S.repeatM $ (threadDelay 1000000 >> print 1) &Concurrent, infinite (do not use with  fromParallel) streamly timesWith g returns a stream of time value tuples. The first component of the tuple is an absolute time reference (epoch) denoting the start of the stream and the second component is a time relative to the reference. The argument g specifies the granularity of the relative time in seconds. A lower granularity clock gives higher precision but is more expensive in terms of CPU usage. Any granularity lower than 1 ms is treated as 1 ms.'import Control.Concurrent (threadDelay)import Streamly.Internal.Data.Stream.IsStream.Common as Stream (timesWith)Stream.mapM_ (\x -> print x >> threadDelay 1000000) $ Stream.take 3 $ Stream.timesWith 0.01(AbsTime (TimeSpec {sec = ..., nsec = ...}),RelTime64 (NanoSecond64 ...))(AbsTime (TimeSpec {sec = ..., nsec = ...}),RelTime64 (NanoSecond64 ...))(AbsTime (TimeSpec {sec = ..., nsec = ...}),RelTime64 (NanoSecond64 ...)).Note: This API is not safe on 32-bit machines. Pre-release streamlyabsTimesWith g returns a stream of absolute timestamps using a clock of granularity g specified in seconds. A low granularity clock is more expensive in terms of CPU usage. Any granularity lower than 1 ms is treated as 1 ms.Stream.mapM_ print $ Stream.delayPre 1 $ Stream.take 3 $ absTimesWith 0.01*AbsTime (TimeSpec {sec = ..., nsec = ...})*AbsTime (TimeSpec {sec = ..., nsec = ...})*AbsTime (TimeSpec {sec = ..., nsec = ...}).Note: This API is not safe on 32-bit machines. 
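Returning to the resource handling combinators described earlier (bracket, finally): a minimal sketch, assuming Streamly.Prelude is imported qualified as Stream and using a plain list as a stand-in for a real resource such as a file handle:

    import qualified Streamly.Prelude as Stream

    -- Acquire a pseudo resource, stream from it, and guarantee the release
    -- action runs when the stream ends normally or due to an exception.
    main :: IO ()
    main =
        Stream.drain
            $ Stream.bracket
                (putStrLn "acquire" >> return [1 .. 3 :: Int])
                (\_ -> putStrLn "release")
                (\xs -> Stream.mapM print (Stream.fromList xs))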
Pre-release streamlyrelTimesWith g returns a stream of relative time values starting from 0, using a clock of granularity g specified in seconds. A low granularity clock is more expensive in terms of CPU usage. Any granularity lower than 1 ms is treated as 1 ms.Stream.mapM_ print $ Stream.delayPre 1 $ Stream.take 3 $ Stream.relTimesWith 0.01RelTime64 (NanoSecond64 ...)RelTime64 (NanoSecond64 ...)RelTime64 (NanoSecond64 ...).Note: This API is not safe on 32-bit machines. Pre-release streamly&Fold a stream using the supplied left  and reducing the resulting expression strictly at each step. The behavior is similar to foldl'. A  can terminate early without consuming the full stream. See the documentation of individual s for termination behavior.3Stream.fold Fold.sum (Stream.enumerateFromTo 1 100)5050Folds never fail, therefore, they produce a default value even when no input is provided. It means we can always fold an empty stream and get a valid result. For example:Stream.fold Fold.sum Stream.nil0 However, foldMany< on an empty stream results in an empty stream. Therefore,  Stream.fold f is not the same as  Stream.head . Stream.foldMany f. )fold f = Stream.parse (Parser.fromFold f) streamly+scanlMAfter' accumulate initial done stream is like scanlM'( except that it provides an additional done function to be applied on the accumulator when the stream stops. The result of done is also emitted in the stream.This function can be used to allocate a resource in the beginning of the scan and release it when the stream ends or to flush the internal state of the scan at the end. Pre-release streamlyLike  postscanl'5 but with a monadic step function and a monadic seed. Since: 0.7.0Since: 0.8.0 (signature change) streamly A stateful , equivalent to a left scan, more like mapAccumL. Hopefully, this is a better alternative to scan. Separation of state from the output makes it easier to think in terms of a shared state, and also makes it easier to keep the state fully strict and the output lazy. See also: scanlM' Pre-release streamly Take first n/ elements from the stream and discard the rest. streamly> return ',') $ Stream.fromList "hello"h.,e.,l.,l.,o"h,e,l,l,o" streamly?Intersperse a monadic action into the input stream after every n seconds. > import Control.Concurrent (threadDelay) > Stream.drain $ Stream.interjectSuffix 1 (putChar ',') $ Stream.mapM (x -> threadDelay 1000000 >> putChar x) $ Stream.fromList "hello" h,e,l,l,o  Pre-release streamlyReturns the elements of the stream in reverse order. The stream must be finite. Note that this necessarily buffers the entire stream in memory. Since 0.7.0 (Monad m constraint) Since: 0.1.1 streamlyLike  & but several times faster, requires a  instance. Pre-release streamlyMap a stream producing monadic function on each element of the stream and then flatten the results into a single stream. Since the stream generation function is monadic, unlike  , it can produce an effect at the beginning of each iteration of the inner loop. streamlyMap a stream producing function on each element of the stream and then flatten the results into a single stream. concatMap f =   (return . f) concatMap =  concatMapWith . concatMap f = 'concat . map f' concatMap f =  unfoldMany (UF.lmap f UF.fromStream) streamlyGiven a stream value in the underlying monad, lift and join the underlying monad with the stream monad. concatM = concat . fromEffect concatM = concat . lift -- requires (MonadTrans t)( concatM = join . 
lift -- requires  (MonadTrans t,  Monad (t m))  See also: , Internal streamlyLike splitOn but the separator is a sequence of elements instead of a single element.For illustration, let's define a function that operates on pure lists:splitOnSeq' pat xs = Stream.toList $ Stream.splitOnSeq (Array.fromList pat) Fold.toList (Stream.fromList xs)splitOnSeq' "" "hello"["h","e","l","l","o"]splitOnSeq' "hello" ""[""]splitOnSeq' "hello" "hello"["",""]splitOnSeq' "x" "hello" ["hello"]splitOnSeq' "h" "hello" ["","ello"]splitOnSeq' "o" "hello" ["hell",""]splitOnSeq' "e" "hello" ["h","llo"]splitOnSeq' "l" "hello" ["he","","o"]splitOnSeq' "ll" "hello" ["he","o"]  is an inverse of  intercalate!. The following law always holds: intercalate . splitOn == idThe following law holds when the separator is non-empty and contains none of the elements present in the input lists: splitOn . intercalate == id Pre-release  E!(c) 2017 Composewell Technologies BSD-3-Clausestreamly@composewell.com experimentalGHCNone #$&-035678>?m streamlyUse a  to transform a stream. Pre-release streamly Right fold to a streaming monad. $foldrS Stream.cons Stream.nil === id  can be used to perform stateless stream to stream transformations like map and filter in general. It can be coupled with a scan to perform stateful transformations. However, note that the custom map and filter routines can be much more efficient than this due to better stream fusion.Stream.toList $ Stream.foldrS Stream.cons Stream.nil $ Stream.fromList [1..5] [1,2,3,4,5]%Find if any element in the stream is :Stream.toList $ Stream.foldrS (\x xs -> if odd x then return True else xs) (return False) $ (Stream.fromList (2:4:5:undefined) :: Stream.SerialT IO Int)[True]:Map (+2) on odd elements and filter out the even elements:Stream.toList $ Stream.foldrS (\x xs -> if odd x then (x + 2) `Stream.cons` xs else xs) Stream.nil $ (Stream.fromList [1..5] :: Stream.SerialT IO Int)[3,5,7]foldrM% can also be represented in terms of  ., however, the former is much more efficient: foldrM f z s = runIdentityT $ foldrS (\x xs -> lift $ f x (runIdentityT xs)) (lift z) s Pre-release streamlyRight fold to a transformer monad. This is the most general right fold function.   is a special case of   , however  ' implementation can be more efficient: foldrS = foldrT foldrM f z s = runIdentityT $ foldrT (\x xs -> lift $ f x (runIdentityT xs)) (lift z) s  can be used to translate streamly streams to other transformer monads e.g. to a different streaming type. Pre-release streamly mapM f = sequence . map f Apply a monadic function to each element of the stream and replace it with the output of the resulting action. >>> drain $ Stream.mapM putStr $ Stream.fromList ["a", "b", "c"] abc >>> :{ drain $ Stream.replicateM 10 (return 1) & (fromSerial . Stream.mapM (x -> threadDelay 1000000 >> print x)) :} 1 ... 1 > drain $ Stream.replicateM 10 (return 1) & (fromAsync . Stream.mapM (x -> threadDelay 1000000 >> print x)) Concurrent (do not use with  fromParallel on infinite streams) streamly sequence = mapM id Replace the elements of a stream of monadic actions with the outputs of those actions. >>> drain $ Stream.sequence $ Stream.fromList [putStr "a", putStr "b", putStrLn "c"] abc >>> :{ drain $ Stream.replicateM 10 (return $ threadDelay 1000000 >> print 1) & (fromSerial . Stream.sequence) :} 1 ... 1 >>> :{ drain $ Stream.replicateM 10 (return $ threadDelay 1000000 >> print 1) & (fromAsync . Stream.sequence) :} 1 ... 
1 Concurrent (do not use with  fromParallel on infinite streams) streamly-Tap the data flowing through a stream into a . For example, you may add a tap to log the contents flowing through the stream. The fold is used only for effects, its result is discarded.  Fold m a b | -----stream m a ---------------stream m a----- Stream.drain $ Stream.tap (Fold.drainBy print) (Stream.enumerateFromTo 1 2)12 Compare with  . streamlytapOffsetEvery offset n taps every n&th element in the stream starting at offset. offset can be between 0 and n - 1. Offset 0 means start at the first element in the stream. If the offset is outside this range then offset  n is used as offset.Stream.drain $ Stream.tapOffsetEvery 0 2 (Fold.rmapM print Fold.toList) $ Stream.enumerateFromTo 0 10[0,2,4,6,8,10] streamlyRedirect a copy of the stream to a supplied fold and run it concurrently in an independent thread. The fold may buffer some elements. The buffer size is determined by the prevailing   setting.  Stream m a -> m b | -----stream m a ---------------stream m a-----  >>> Stream.drain $ Stream.tapAsync (Fold.drainBy print) (Stream.enumerateFromTo 1 2) 1 2 Exceptions from the concurrently running fold are propagated to the current computation. Note that, because of buffering in the fold, exceptions may be delayed and may not correspond to the current element being processed in the parent stream, but we guarantee that before the parent stream stops the tap finishes and all exceptions from it are drained. Compare with  . Pre-release streamly*pollCounts predicate transform fold stream4 counts those elements in the stream that pass the  predicate. The resulting count stream is sent to another thread which transforms it using  transform and then folds it using fold. The thread is automatically cleaned up if the stream stops or aborts due to exception.For example, to print the count of elements processed every second: > Stream.drain $ Stream.pollCounts (const True) (Stream.rollingMap (-) . Stream.delayPost 1) (FLold.drainBy print) $ Stream.enumerateFrom 0 5Note: This may not work correctly on 32-bit machines. Pre-release streamlyCalls the supplied function with the number of elements consumed every n seconds. The given function is run in a separate thread until the end of the stream. In case there is an exception in the stream the thread is killed during the next major GC.Note: The action is not guaranteed to run if the main thread exits. > delay n = threadDelay (round $ n * 1000000) >> return n > Stream.toList $ Stream.tapRate 2 (n -> print $ show n ++ " elements processed") (delay 1 Stream.|: delay 0.5 Stream.|: delay 0.5 Stream.|: Stream.nil) "2 elements processed" [1.0,0.5,0.5] "1 elements processed" 5Note: This may not work correctly on 32-bit machines. Pre-release streamlyApply a monadic function to each element flowing through the stream and discard the results. >>> Stream.drain $ Stream.trace print (Stream.enumerateFromTo 1 2) 1 2  Compare with  . streamlyPerform a side effect before yielding each element of the stream and discard the results. >>> Stream.drain $ Stream.trace_ (print "got here") (Stream.enumerateFromTo 1 2) "got here" "got here" Same as   but always serial. 
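A small end-to-end sketch of the tap combinator described above, logging each element with a side-effecting fold while the main pipeline computes a sum (the qualified import names are assumptions made for this example):

    import qualified Streamly.Data.Fold as Fold
    import qualified Streamly.Prelude as Stream

    -- The tap runs Fold.drainBy print purely for its effects; the main
    -- pipeline still sees every element and sums it.
    main :: IO ()
    main = do
        total <- Stream.sum
                    $ Stream.tap (Fold.drainBy print)
                    $ Stream.fromList [1, 2, 3 :: Int]
        print total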
See also:   Pre-release streamly+Scan a stream using the given monadic fold.Stream.toList $ Stream.takeWhile (< 10) $ Stream.scan Fold.sum (Stream.fromList [1..10]) [0,1,3,6] streamly/Postscan a stream using the given monadic fold.The following example extracts the input stream up to a point where the running average of elements is no more than 10:import Data.Maybe (fromJust)let avg = Fold.teeWith (/) Fold.sum (fmap fromIntegral Fold.length):{ Stream.toList $ Stream.map (fromJust . fst)( $ Stream.takeWhile (\(_,x) -> x <= 10) $ Stream.postscan (Fold.tee Fold.last avg) (Stream.enumerateFromTo 1.0 100.0):}[1.0,2.0,3.0,4.0,5.0,6.0,7.0,8.0,9.0,10.0,11.0,12.0,13.0,14.0,15.0,16.0,17.0,18.0,19.0] streamly3Strict left scan with an extraction function. Like  , but applies a user supplied extraction function (the third argument) at each step. This is designed to work with the foldl library. The suffix x is a mnemonic for extraction. Since 0.2.0!Since: 0.7.0 (Monad m constraint) streamlyLike  5 but with a monadic step function and a monadic seed. Since: 0.4.0Since: 0.8.0 (signature change) streamlyStrict left scan. Like map,   too is a one to one transformation, however it adds an extra element. >>> Stream.toList $ Stream.scanl' (+) 0 $ fromList [1,2,3,4] [0,1,3,6,10]  >>> Stream.toList $ Stream.scanl' (flip (:)) [] $ Stream.fromList [1,2,3,4] [[],[1],[2,1],[3,2,1],[4,3,2,1]] The output of   is the initial value of the accumulator followed by all the intermediate steps and the final result of foldl'.By streaming the accumulated state after each fold step, we can share the state across multiple stages of stream composition. Each stage can modify or extend the state, do some processing with it and emit it for the next stage, thus modularizing the stream processing. This can be useful in stateful or event-driven programming.Consider the following monolithic example, computing the sum and the product of the elements in a stream in one go using a foldl': >>> Stream.foldl' ((s, p) x -> (s + x, p * x)) (0,1) $ Stream.fromList  10,241,2,3,4 Using scanl' we can make it modular by computing the sum in the first stage and passing it down to the next stage for computing the product: >>> :{ Stream.foldl' ((_, p) (s, x) -> (s, p * x)) (0,1) $ Stream.scanl' ((s, _) x -> (s + x, x)) (0,1) $ Stream.fromList [1,2,3,4] :} (10,24)  IMPORTANT:   evaluates the accumulator to WHNF. To avoid building lazy expressions inside the accumulator, it is recommended that a strict data structure is used for accumulator. See also:  usingStateT streamlyLike  : but does not stream the initial value of the accumulator. 8postscanl' f z xs = Stream.drop 1 $ Stream.scanl' f z xs streamlyLike scanl' but does not stream the final value of the accumulator. Pre-release streamlyLike prescanl' but with a monadic step function and a monadic seed. Pre-release streamlyLike  " but with a monadic step function. streamlyLike   but for a non-empty stream. The first element of the stream is used as the initial value of the accumulator. Does nothing if the stream is empty. >>> Stream.toList $ Stream.scanl1' (+) $ fromList [1,2,3,4] [1,3,6,10] streamly Modify a t m a -> t m a1 stream transformation that accepts a predicate (a -> b) to accept  ((s, a) -> b)$ instead, provided a transformation t m a -> t m (s, a)*. Convenient to filter with index or time. filterWithIndex = with indexed filter filterWithAbsTime = with timestamped filter filterWithRelTime = with timeIndexed filter  Pre-release streamly2Include only those elements that pass a predicate. 
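For completeness, a doctest-style illustration of the filter just described (assuming Streamly.Prelude is imported qualified as Stream):

    >>> Stream.toList $ Stream.filter even $ Stream.fromList [1 .. 10 :: Int]
    [2,4,6,8,10]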
streamlySame as   but with a monadic predicate. streamlyDrop repeated elements that are adjacent to each other using the supplied comparison function.@uniq = uniqBy (==)#To strip duplicate path separators:  f x y = x == ? && x == y Stream.toList $ Stream.uniqBy f $ Stream.fromList "/a/b" "ab" Space: O(1) See also:  . Pre-release streamly7Drop repeated elements that are adjacent to each other. streamlyStrip all leading and trailing occurrences of an element passing a predicate and make all other consecutive occurrences uniq. 9prune p = dropWhileAround p $ uniqBy (x y -> p x && p y)  > Stream.prune isSpace (Stream.fromList " hello world! ") "hello world!" Space: O(1) Unimplemented streamly"Emit only repeated elements, once. Unimplemented streamly.Drop repeated elements anywhere in the stream.*Caution: not scalable for infinite streamsSee also: nubWindowBy Unimplemented streamlyDrop repeated elements within the specified tumbling window in the stream. nubBy = nubWindowBy maxBound Unimplemented streamlyDeletes the first occurrence of the element in the stream that satisfies the given equality predicate. >>> Stream.toList $ Stream.deleteBy (==) 3 $ Stream.fromList [1,3,3,5] [1,3,5] streamlyEvaluate the input stream continuously and keep only the oldest n elements in the buffer, discard the new ones when the buffer is full. When the output stream is evaluated it consumes the values from the buffer in a FIFO manner. Unimplemented streamlyEvaluate the input stream continuously and keep only the latest n elements in a ring buffer, keep discarding the older ones to make space for the new ones. When the output stream is evaluated it consumes the values from the buffer in a FIFO manner. Unimplemented streamlyLike   but samples at uniform intervals to match the consumer rate. Note that   leads to non-uniform sampling depending on the consumer pattern. Unimplemented streamlySame as   but with a monadic predicate. streamlyTake n# elements at the end of the stream.1O(n) space, where n is the number elements taken. Unimplemented streamlyTake time interval i" seconds at the end of the stream.1O(n) space, where n is the number elements taken. Unimplemented streamlyTake all consecutive elements at the end of the stream for which the predicate is true.1O(n) space, where n is the number elements taken. Unimplemented streamlyLike   and   combined.>O(n) space, where n is the number elements taken from the end. Unimplemented streamlytakeInterval duration- yields stream elements upto specified time duration. The duration starts when the stream is evaluated for the first time, before the first element is yielded. The time duration is checked before generating each element, if the duration has expired the stream stops.The total time taken in executing the stream is guaranteed to be at least duration, however, because the duration is checked before generating an element, the upper bound is indeterminate and depends on the time taken in generating and processing the last element.No element is yielded if the duration is zero. At least one element is yielded if the duration is non-zero. Pre-release streamlyDrop elements in the stream as long as the predicate succeeds and then take the rest of the stream. streamlySame as   but with a monadic predicate. streamlydropInterval duration' drops stream elements until specified duration has passed. The duration begins when the stream is evaluated for the first time. 
The time duration is checked after generating a stream element, the element is yielded if the duration has expired otherwise it is dropped.The time elapsed before starting to generate the first element is at most duration, however, because the duration expiry is checked after the element is generated, the lower bound is indeterminate and depends on the time taken in generating an element.1All elements are yielded if the duration is zero. Pre-release streamlyDrop n# elements at the end of the stream.3O(n) space, where n is the number elements dropped. Unimplemented streamlyDrop time interval i" seconds at the end of the stream.3O(n) space, where n is the number elements dropped. Unimplemented streamlyDrop all consecutive elements at the end of the stream for which the predicate is true.3O(n) space, where n is the number elements dropped. Unimplemented streamlyLike   and   combined.O(n) space, where n is the number elements dropped from the end. Unimplemented streamlyinsertBy cmp elem stream inserts elem before the first element in stream that is less than elem when compared using cmp. insertBy cmp x = mergeBy cmp (fromPure x) >>> Stream.toList $ Stream.insertBy compare 2 $ Stream.fromList [1,3,5] [1,2,3,5] streamly Stream.toList $ Stream.intersperseBySpan 2 (return ',') $ Stream.fromList "hello" "he,ll,o"  Unimplemented streamlyInsert an effect and its output after consuming an element of a stream.Stream.toList $ Stream.trace putChar $ intersperseSuffix (putChar '.' >> return ',') $ Stream.fromList "hello"h.,e.,l.,l.,o.,"h,e,l,l,o," Pre-release streamly>> Stream.mapM_ putChar $ Stream.intersperseSuffix_ (threadDelay 1000000) $ Stream.fromList "hello" hello  Pre-release streamlyLike   but intersperses an effectful action into the input stream after every n% elements and after the last element.Stream.toList $ Stream.intersperseSuffixBySpan 2 (return ',') $ Stream.fromList "hello" "he,ll,o," Pre-release streamly=Insert a side effect before consuming an element of a stream.Stream.toList $ Stream.trace putChar $ Stream.interspersePrefix_ (putChar '.' >> return ',') $ Stream.fromList "hello".h.e.l.l.o"hello"Same as   but may be concurrent. Concurrent Pre-release streamlyIntroduce a delay of specified seconds before consuming an element of the stream except the first one.Stream.mapM_ print $ Stream.timestamped $ Stream.delay 1 $ Stream.enumerateFromTo 1 3.(AbsTime (TimeSpec {sec = ..., nsec = ...}),1).(AbsTime (TimeSpec {sec = ..., nsec = ...}),2).(AbsTime (TimeSpec {sec = ..., nsec = ...}),3) streamlyIntroduce a delay of specified seconds after consuming an element of a stream.Stream.mapM_ print $ Stream.timestamped $ Stream.delayPost 1 $ Stream.enumerateFromTo 1 3.(AbsTime (TimeSpec {sec = ..., nsec = ...}),1).(AbsTime (TimeSpec {sec = ..., nsec = ...}),2).(AbsTime (TimeSpec {sec = ..., nsec = ...}),3) Pre-release streamlyIntroduce a delay of specified seconds before consuming an element of a stream.Stream.mapM_ print $ Stream.timestamped $ Stream.delayPre 1 $ Stream.enumerateFromTo 1 3.(AbsTime (TimeSpec {sec = ..., nsec = ...}),1).(AbsTime (TimeSpec {sec = ..., nsec = ...}),2).(AbsTime (TimeSpec {sec = ..., nsec = ...}),3) Pre-release streamlyBuffer until the next element in sequence arrives. The function argument determines the difference in sequence numbers. This could be useful in implementing sequenced streams, for example, TCP reassembly. 
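As a concrete example of the interspersing combinators described above, the pure-value variant can be sketched as follows (assuming the qualified import Stream for Streamly.Prelude and that the pure intersperse combinator is in scope):

    >>> Stream.toList $ Stream.intersperse ',' $ Stream.fromList "hello"
    "h,e,l,l,o"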
Unimplemented streamly indexed = Stream.postscanl' (\(i, _) x -> (i + 1, x)) (-1,undefined) indexed = Stream.zipWith (,) (Stream.enumerateFrom 0)Pair each element in a stream with its index, starting from index 0.8Stream.toList $ Stream.indexed $ Stream.fromList "hello")[(0,'h'),(1,'e'),(2,'l'),(3,'l'),(4,'o')] streamly indexedR n = Stream.postscanl' (\(i, _) x -> (i - 1, x)) (n + 1,undefined) indexedR n = Stream.zipWith (,) (Stream.enumerateFromThen n (n - 1))Pair each element in a stream with its index, starting from the given index n and counting down. t m b to a stream t m a concurrently; the input stream is evaluated asynchronously in an independent thread yielding elements to a buffer and the transformation function runs in another thread consuming the input from the buffer.  5 is just like regular function application operator  except that it is concurrent.If you read the signature as $(t m a -> t m b) -> (t m a -> t m b) you can look at it as a transformation that converts a transform function to a buffered concurrent transform function.The following code prints a value every second even though each stage adds a 1 second delay.:{Stream.drain $5 Stream.mapM (\x -> threadDelay 1000000 >> print x)= |$ Stream.replicateM 3 (threadDelay 1000000 >> return 1):}111 ConcurrentSince: 0.3.0 (Streamly) streamlySame as  .Internal streamlySame as   but with arguments reversed.(|&) = flip (|$) ConcurrentSince: 0.3.0 (Streamly)    0 1F!(c) 2017 Composewell Technologies BSD-3-Clausestreamly@composewell.com experimentalGHCNone #$&-035678>? streamly-Drop prefix from the input stream if present.Space: O(1) Unimplemented - Help wanted. streamlyDrop all matching infix from the input stream if present. Infix stream may be consumed multiple times.Space: O(n)$ where n is the length of the infix. Unimplemented - Help wanted. streamlyDrop suffix from the input stream if present. Suffix stream may be consumed multiple times.Space: O(n)% where n is the length of the suffix. Unimplemented - Help wanted. streamlyLike   but appends empty fold output if the fold and stream termination aligns:f = Fold.take 2 Fold.sum:Stream.toList $ Stream.foldManyPost f $ Stream.fromList [][0]>Stream.toList $ Stream.foldManyPost f $ Stream.fromList [1..9] [3,7,11,15,9]?Stream.toList $ Stream.foldManyPost f $ Stream.fromList [1..10][3,7,11,15,19,0] Pre-release streamlyApply a  repeatedly on a stream and emit the fold outputs in the output stream.1To sum every two contiguous elements in a stream:f = Fold.take 2 Fold.sum;Stream.toList $ Stream.foldMany f $ Stream.fromList [1..10][3,7,11,15,19]'On an empty stream the output is empty:6Stream.toList $ Stream.foldMany f $ Stream.fromList [][]Note Stream.foldMany (Fold.take 0)9 would result in an infinite loop in a non-empty stream. streamlyApply a stream of folds to an input stream and emit the results in the output stream. Pre-release streamly8Iterate a fold generator on a stream. The initial value b is used to generate the first fold, the fold is applied on the stream and the result of the fold is used to generate the next fold and so on. >>> import Data.Monoid (Sum(..)) >>> f x = return (Fold.take 2 (Fold.sconcat x)) >>> s = Stream.map Sum $ Stream.fromList [1..10] >>> Stream.toList $ Stream.map getSum $ Stream.foldIterateM f 0 s [3,10,21,36,55,55] This is the streaming equivalent of monad like sequenced application of folds where next fold is dependent on the previous fold. 
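Complementing the foldMany examples above, groups can also be collected as lists rather than sums; a minimal sketch (the import names and the helper name chunks are assumptions made for this example):

    import qualified Streamly.Data.Fold as Fold
    import qualified Streamly.Prelude as Stream

    -- Chunk a stream into groups of at most 3 elements, collecting each
    -- group into a list with Fold.take applied to Fold.toList.
    chunks :: IO [[Int]]
    chunks =
        Stream.toList
            $ Stream.foldMany (Fold.take 3 Fold.toList)
            $ Stream.fromList [1 .. 10]

    -- >>> chunks
    -- [[1,2,3],[4,5,6],[7,8,9],[10]]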
Pre-release streamlyApply a  repeatedly on a stream and emit the parsed values in the output stream.(This is the streaming equivalent of the %u parse combinator.Stream.toList $ Stream.parseMany (Parser.takeBetween 0 2 Fold.sum) $ Stream.fromList [1..10][3,7,11,15,19] > Stream.toList $ Stream.parseMany (Parser.line Fold.toList) $ Stream.fromList "hello\nworld" ["hello\n","world"]  $foldMany f = parseMany (fromFold f) Known Issues: When the parser fails there is no way to get the remaining stream. Pre-release streamlyApply a stream of parsers to an input stream and emit the results in the output stream. Pre-release streamly!parseManyTill collect test stream tries the parser test on the input, if test fails it backtracks and tries collect, after collect succeeds test1 is tried again and so on. The parser stops when test succeeds. The output of test is discarded and the output of collect7 is emitted in the output stream. The parser fails if collect fails. Unimplemented streamlyIterate a parser generating function on a stream. The initial value b is used to generate the first parser, the parser is applied on the stream and the result is used to generate the next parser and so on.import Data.Monoid (Sum(..))Stream.toList $ Stream.map getSum $ Stream.parseIterate (\b -> Parser.takeBetween 0 2 (Fold.sconcat b)) 0 $ Stream.map Sum $ Stream.fromList [1..10][3,10,21,36,55,55]This is the streaming equivalent of monad like sequenced application of parsers where next parser is dependent on the previous parser. Pre-release streamly'groupsBy cmp f $ S.fromList [a,b,c,...] assigns the element a to the first group, if  b `cmp` a is  then b* is also assigned to the same group. If  c `cmp` a is  then c is also assigned to the same group and so on. When the comparison fails a new group is started. Each group is folded using the fold f= and the result of the fold is emitted in the output stream.Stream.toList $ Stream.groupsBy (>) Fold.toList $ Stream.fromList [1,3,7,0,2,5][[1,3,7],[0,2,5]] streamlyUnlike groupsBy this function performs a rolling comparison of two successive elements in the input stream. /groupsByRolling cmp f $ S.fromList [a,b,c,...] assigns the element a to the first group, if  a `cmp` b is  then b) is also assigned to the same group. If  b `cmp` c is  then c is also assigned to the same group and so on. When the comparison fails a new group is started. Each group is folded using the fold f.Stream.toList $ Stream.groupsByRolling (\a b -> a + 1 == b) Fold.toList $ Stream.fromList [1,2,3,7,8,9][[1,2,3],[7,8,9]] streamly 4groups = groupsBy (==) groups = groupsByRolling (==)Groups contiguous spans of equal elements together in individual groups.Stream.toList $ Stream.groups Fold.toList $ Stream.fromList [1,1,2,2] [[1,1],[2,2]] streamlySplit on an infixed separator element, dropping the separator. The supplied  is applied on the split segments. 
Splits the stream on separator elements determined by the supplied predicate, separator is considered as infixed between two segments:splitOn' p xs = Stream.toList $ Stream.splitOn p Fold.toList (Stream.fromList xs)splitOn' (== '.') "a.b" ["a","b"];An empty stream is folded to the default value of the fold:splitOn' (== '.') ""[""]If one or both sides of the separator are missing then the empty segment on that side is folded to the default output of the fold:splitOn' (== '.') "."["",""]splitOn' (== '.') ".a"["","a"]splitOn' (== '.') "a."["a",""]splitOn' (== '.') "a..b" ["a","","b"]6splitOn is an inverse of intercalating single element: Stream.intercalate (Stream.fromPure '.') Unfold.fromList . Stream.splitOn (== '.') Fold.toList === id9Assuming the input stream does not contain the separator: Stream.splitOn (== '.') Fold.toList . Stream.intercalate (Stream.fromPure '.') Unfold.fromList === id streamlySplit on a suffixed separator element, dropping the separator. The supplied " is applied on the split segments.splitOnSuffix' p xs = Stream.toList $ Stream.splitOnSuffix p Fold.toList (Stream.fromList xs)splitOnSuffix' (== '.') "a.b." ["a","b"]splitOnSuffix' (== '.') "a."["a"]2An empty stream results in an empty output stream:splitOnSuffix' (== '.') ""[]An empty segment consisting of only a suffix is folded to the default output of the fold:splitOnSuffix' (== '.') "."[""] splitOnSuffix' (== '.') "a..b.."["a","","b",""].A suffix is optional at the end of the stream:splitOnSuffix' (== '.') "a"["a"]splitOnSuffix' (== '.') ".a"["","a"]splitOnSuffix' (== '.') "a.b" ["a","b"] lines = splitOnSuffix (== '\n')  is an inverse of intercalateSuffix with a single element: Stream.intercalateSuffix (Stream.fromPure '.') Unfold.fromList . Stream.splitOnSuffix (== '.') Fold.toList === id9Assuming the input stream does not contain the separator: Stream.splitOnSuffix (== '.') Fold.toList . Stream.intercalateSuffix (Stream.fromPure '.') Unfold.fromList === id streamlySplit on a prefixed separator element, dropping the separator. The supplied " is applied on the split segments. > splitOnPrefix' p xs = Stream.toList $ Stream.splitOnPrefix p (Fold.toList) (Stream.fromList xs) > splitOnPrefix' (== ) ".a.b" ["a","b"] 4An empty stream results in an empty output stream:  > splitOnPrefix' (==  ) "" [] An empty segment consisting of only a prefix is folded to the default output of the fold: > splitOnPrefix' (== !) "." [""] > splitOnPrefix' (== -) ".a.b." ["a","b",""] > splitOnPrefix' (== ) ".a..b" ["a","","b"] 4A prefix is optional at the beginning of the stream: > splitOnPrefix' (== ") "a" ["a"] > splitOnPrefix' (== ) "a.b" ["a","b"]   is an inverse of intercalatePrefix with a single element: Stream.intercalatePrefix (Stream.fromPure '.') Unfold.fromList . Stream.splitOnPrefix (== '.') Fold.toList === id9Assuming the input stream does not contain the separator: Stream.splitOnPrefix (== '.') Fold.toList . Stream.intercalatePrefix (Stream.fromPure '.') Unfold.fromList === id Unimplemented streamlyLike   after stripping leading, trailing, and repeated separators. Therefore, ".a..b." with & as the separator would be parsed as  ["a","b"]. 
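The boundary behaviour of infix versus suffix splitting is easiest to see side by side; a minimal sketch assuming Streamly.Prelude and Streamly.Data.Fold as before:

import qualified Streamly.Prelude as Stream
import qualified Streamly.Data.Fold as Fold

-- splitOn treats the separator as infix, splitOnSuffix treats it as a
-- terminator; the difference shows up at the end of the stream.
main :: IO ()
main = do
    infixSplits  <- Stream.toList
        $ Stream.splitOn (== '.') Fold.toList (Stream.fromList "a.b.")
    suffixSplits <- Stream.toList
        $ Stream.splitOnSuffix (== '.') Fold.toList (Stream.fromList "a.b.")
    print infixSplits   -- ["a","b",""]
    print suffixSplits  -- ["a","b"]

With splitOnSuffix a trailing separator does not produce a trailing empty segment, which is what gives it the lines-like behaviour described above.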
In other words, its like parsing words from whitespace separated text.wordsBy' p xs = Stream.toList $ Stream.wordsBy p Fold.toList (Stream.fromList xs)wordsBy' (== ',') ""[]wordsBy' (== ',') ","[]wordsBy' (== ',') ",a,,b," ["a","b"] words = wordsBy isSpace streamlyLike  8 but keeps the suffix attached to the resulting splits.splitWithSuffix' p xs = Stream.toList $ splitWithSuffix p Fold.toList (Stream.fromList xs)splitWithSuffix' (== '.') ""[]splitWithSuffix' (== '.') "."["."]splitWithSuffix' (== '.') "a"["a"]splitWithSuffix' (== '.') ".a" [".","a"]splitWithSuffix' (== '.') "a."["a."]splitWithSuffix' (== '.') "a.b" ["a.","b"] splitWithSuffix' (== '.') "a.b." ["a.","b."]"splitWithSuffix' (== '.') "a..b.."["a.",".","b.","."] streamlyLike  5 but splits the separator as well, as an infix token.splitOn'_ pat xs = Stream.toList $ Stream.splitBySeq (Array.fromList pat) Fold.toList (Stream.fromList xs)splitOn'_ "" "hello"!["h","","e","","l","","l","","o"]splitOn'_ "hello" ""[""]splitOn'_ "hello" "hello"["","hello",""]splitOn'_ "x" "hello" ["hello"]splitOn'_ "h" "hello"["","h","ello"]splitOn'_ "o" "hello"["hell","o",""]splitOn'_ "e" "hello"["h","e","llo"]splitOn'_ "l" "hello"["he","l","","l","o"]splitOn'_ "ll" "hello"["he","ll","o"] Pre-release streamlyLike  splitSuffixBy but the separator is a sequence of elements, instead of a predicate for a single element.splitOnSuffixSeq_ pat xs = Stream.toList $ Stream.splitOnSuffixSeq (Array.fromList pat) Fold.toList (Stream.fromList xs)splitOnSuffixSeq_ "." ""[]splitOnSuffixSeq_ "." "."[""]splitOnSuffixSeq_ "." "a"["a"]splitOnSuffixSeq_ "." ".a"["","a"]splitOnSuffixSeq_ "." "a."["a"]splitOnSuffixSeq_ "." "a.b" ["a","b"]splitOnSuffixSeq_ "." "a.b." ["a","b"]splitOnSuffixSeq_ "." "a..b.."["a","","b",""] lines = splitOnSuffixSeq "\n"  is an inverse of intercalateSuffix". The following law always holds: *intercalateSuffix . splitOnSuffixSeq == idThe following law holds when the separator is non-empty and contains none of the elements present in the input lists: 'splitSuffixOn . intercalateSuffix == id Pre-release streamlyLike  + but keeps the suffix intact in the splits.splitWithSuffixSeq' pat xs = Stream.toList $ Stream.splitWithSuffixSeq (Array.fromList pat) Fold.toList (Stream.fromList xs)splitWithSuffixSeq' "." ""[]splitWithSuffixSeq' "." "."["."]splitWithSuffixSeq' "." "a"["a"]splitWithSuffixSeq' "." ".a" [".","a"]splitWithSuffixSeq' "." "a."["a."]splitWithSuffixSeq' "." "a.b" ["a.","b"]splitWithSuffixSeq' "." "a.b." ["a.","b."] splitWithSuffixSeq' "." "a..b.."["a.",".","b.","."] Pre-release streamly&Group the input stream into groups of n elements each and then fold each group using the provided fold function.Stream.toList $ Stream.chunksOf 2 Fold.sum (Stream.enumerateFromTo 1 10)[3,7,11,15,19]/This can be considered as an n-fold version of  where we apply = repeatedly on the leftover stream until the stream exhausts. %chunksOf n f = foldMany (FL.take n f) streamly Pre-release streamlyarraysOf n stream9 groups the elements in the input stream into arrays of n elements each.0Same as the following but may be more efficient: )arraysOf n = Stream.foldMany (A.writeN n) Pre-release streamly'Group the input stream into windows of n second each and then fold each group using the provided fold function.Stream.toList $ Stream.take 5 $ Stream.intervalsOf 1 Fold.sum $ Stream.constRate 2 $ Stream.enumerateFrom 1[...,...,...,...,...] 
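A small sketch combining wordsBy and chunksOf from the entries above (module names as in the previous examples):

import qualified Streamly.Prelude as Stream
import qualified Streamly.Data.Fold as Fold

-- wordsBy drops leading, trailing and repeated separators; chunksOf folds
-- fixed-size groups of the input with the supplied fold.
main :: IO ()
main = do
    ws <- Stream.toList
            $ Stream.wordsBy (== ',') Fold.toList (Stream.fromList ",a,,b,")
    print ws     -- ["a","b"]
    sums <- Stream.toList
              $ Stream.chunksOf 2 Fold.sum
              $ Stream.enumerateFromTo 1 (10 :: Int)
    print sums   -- [3,7,11,15,19]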
streamly?classifySessionsBy tick keepalive predicate timeout fold stream classifies an input event stream consisting of (timestamp, (key, value)) into sessions based on the key, folding all the values corresponding to the same key into a session using the supplied fold.When the fold terminates or a timeout occurs, a tuple consisting of the session key and the folded value is emitted in the output stream. The timeout is measured from the first event in the session. If the  keepalive option is set to : the timeout is reset to 0 whenever an event is received.The  timestamp in the input stream is an absolute time from some epoch, characterizing the time when the input event was generated. The notion of current time is maintained by a monotonic event time clock using the timestamps seen in the input stream. The latest timestamp seen till now is used as the base for the current time. When no new events are seen, a timer is started with a clock resolution of tick seconds. This timer is used to detect session timeouts in the absence of new events.To ensure an upper bound on the memory used the number of sessions can be limited to an upper bound. If the ejection  predicate returns , the oldest session is ejected before inserting a new session.:{ Stream.mapM_ print $ Stream.classifySessionsBy 1 False (const (return False)) 3 (Fold.take 3 Fold.toList) $ Stream.timestamped $ Stream.delay 0.1 $ (,) <$> Stream.fromList [1,2,3] <*> Stream.fromList ['a','b','c']:} (1,"abc") (2,"abc") (3,"abc") Pre-release streamlySame as  < with a timer tick of 1 second and keepalive option set to . 6classifyKeepAliveSessions = classifySessionsBy 1 True  Pre-release streamlySame as  < with a timer tick of 1 second and keepalive option set to . 0classifySessionsOf = classifySessionsBy 1 False  Pre-release streamly#splitInnerBy splitter joiner stream splits the inner containers f a of an input stream  t m (f a) using the splitter function. Container elements f a are collected until a split occurs, then all the elements before the split are joined using the joiner function.$For example, if we have a stream of  Array Word8, we may want to split the stream into arrays representing lines separated by 'n' byte such that the resulting stream after a split would be one array for each line.CAUTION! This is not a true streaming function as the container size after the split and merge may not be bounded. Pre-release streamlyLike   but splits assuming the separator joins the segment in a suffix style. Pre-release streamlytimer tick in secondsstreamly)reset the timer when an event is receivedstreamly2predicate to eject sessions based on session countstreamlysession timeout in secondsstreamly"Fold to be applied to session datastreamly×tamp, (session key, session data)streamlysession key, fold result streamly,predicate to eject sessions on session countstreamlysession inactive timeoutstreamly*Fold to be applied to session payload datastreamly×tamp, (session key, session data) streamly,predicate to eject sessions on session countstreamlytime window sizestreamly"Fold to be applied to session datastreamly×tamp, (session key, session data)! ! G!(c) 2017 Composewell Technologies BSD-3-Clausestreamly@composewell.com experimentalGHCNone #$&-035678>?͟ streamly Convert an - into a stream by supplying it an input seed.Stream.drain $ Stream.unfold (Unfold.replicateM 3) (putStrLn "hello")hellohellohello Since: 0.7.0 streamly Convert an ' with a closed input end into a stream. 
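The unfold-driving example above can be written as a complete program. This is a minimal sketch that assumes Unfold.replicateM is exported from Streamly.Data.Unfold; in some releases it may only be available from the internal Unfold module:

import qualified Streamly.Prelude as Stream
import qualified Streamly.Data.Unfold as Unfold

-- Run an Unfold by supplying it a seed; here the seed is an effectful
-- action that the unfold repeats three times.
main :: IO ()
main = Stream.drain $ Stream.unfold (Unfold.replicateM 3) (putStrLn "hello")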
Pre-release streamly 7unfoldr step s = case step s of Nothing -> 0 Just (a, b) -> a `cons` unfoldr step b Build a stream by unfolding a pure step function step starting from a seed s. The step function returns the next element in the stream and the next seed value. When it is done it returns # and the stream ends. For example, >>> :{ let f b = if b > 3 then Nothing else Just (b, b + 1) in Stream.toList $ Stream.unfoldr f 0 :} [0,1,2,3] streamlyBuild a stream by unfolding a monadic step function starting from a seed. The step function returns the next element in the stream and the next seed value. When it is done it returns # and the stream ends. For example, >>> :{ let f b = if b > 3 then return Nothing else return (Just (b, b + 1)) in Stream.toList $ Stream.unfoldrM f 0 :} [0,1,2,3] When run concurrently, the next unfold step can run concurrently with the processing of the output of the previous step. Note that more than one step cannot run concurrently as the next step depends on the output of the previous step. (fromAsync $ S.unfoldrM (\n -> liftIO (threadDelay 1000000) >> return (Just (n, n + 1))) 0) & S.foldlM' (\_ a -> threadDelay 1000000 >> print a) ()  Concurrent Since: 0.1.0 streamly6Generate an infinite stream by repeating a pure value. streamly replicate = take n . repeat Generate a stream of length n by repeating a value n times. streamly replicateM = take n . repeatM 1Generate a stream by performing a monadic action n times. Same as: drain $ fromSerial $ S.replicateM 10 $ (threadDelay 1000000 >> print 1) drain $ fromAsync $ S.replicateM 10 $ (threadDelay 1000000 >> print 1)  Concurrent streamlytimes returns a stream of time value tuples with clock of 10 ms granularity. The first component of the tuple is an absolute time reference (epoch) denoting the start of the stream and the second component is a time relative to the reference.Stream.mapM_ (\x -> print x >> threadDelay 1000000) $ Stream.take 3 $ Stream.times(AbsTime (TimeSpec {sec = ..., nsec = ...}),RelTime64 (NanoSecond64 ...))(AbsTime (TimeSpec {sec = ..., nsec = ...}),RelTime64 (NanoSecond64 ...))(AbsTime (TimeSpec {sec = ..., nsec = ...}),RelTime64 (NanoSecond64 ...)).Note: This API is not safe on 32-bit machines. Pre-release streamlyabsTimes returns a stream of absolute timestamps using a clock of 10 ms granularity.Stream.mapM_ print $ Stream.delayPre 1 $ Stream.take 3 $ Stream.absTimes*AbsTime (TimeSpec {sec = ..., nsec = ...})*AbsTime (TimeSpec {sec = ..., nsec = ...})*AbsTime (TimeSpec {sec = ..., nsec = ...}).Note: This API is not safe on 32-bit machines. Pre-release streamlyrelTimes returns a stream of relative time values starting from 0, using a clock of granularity 10 ms.Stream.mapM_ print $ Stream.delayPre 1 $ Stream.take 3 $ Stream.relTimesRelTime64 (NanoSecond64 ...)RelTime64 (NanoSecond64 ...)RelTime64 (NanoSecond64 ...).Note: This API is not safe on 32-bit machines. Pre-release streamly durations g returns a stream of relative time values measuring the time elapsed since the immediate predecessor element of the stream was generated. The first element of the stream is always 0.  durations uses a clock of granularity g specified in seconds. A low granularity clock is more expensive in terms of CPU usage. The minimum granularity is 1 millisecond. Durations lower than 1 ms will be 0..Note: This API is not safe on 32-bit machines. Unimplemented streamlyGenerate ticks at the specified rate. 
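The unfoldrM description above as a runnable sketch (Streamly.Prelude assumed, as before):

import qualified Streamly.Prelude as Stream

-- Build a stream from a monadic step function: Nothing ends the stream,
-- Just (element, nextSeed) emits an element and continues.
main :: IO ()
main = do
    let step b = return $ if b > 3 then Nothing else Just (b, b + 1)
    xs <- Stream.toList $ Stream.unfoldrM step (0 :: Int)
    print xs  -- [0,1,2,3]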
The rate is adaptive, the tick generation speed can be increased or decreased at different times to achieve the specified rate. The specific behavior for different styles of % specifications is documented under . The effective maximum rate achieved by a stream is governed by the processor speed. Unimplemented streamlyGenerate a singleton event at or after the specified absolute time. Note that this is different from a threadDelay, a threadDelay starts from the time when the action is evaluated, whereas if we use AbsTime based timeout it will immediately expire if the action is evaluated too late. Unimplemented streamly fromIndices f = fmap f $ Stream.enumerateFrom 0 fromIndices f = let g i = f i `cons` g (i + 1) in g 0 Generate an infinite stream, whose values are the output of a function f9 applied on the corresponding index. Index starts at 0.5Stream.toList $ Stream.take 5 $ Stream.fromIndices id [0,1,2,3,4] streamly fromIndicesM f = Stream.mapM f $ Stream.enumerateFrom 0 fromIndicesM f = let g i = f i `consM` g (i + 1) in g 0 Generate an infinite stream, whose values are the output of a monadic function f7 applied on the corresponding index. Index starts at 0. Concurrent streamly #iterate f x = x `cons` iterate f x !Generate an infinite stream with x as the first element and each successive element derived by applying the function f on the previous element. >>> Stream.toList $ Stream.take 5 $ Stream.iterate (+1) 1 [1,2,3,4,5] streamly >= a -> return a `consM` iterateM f (f a) Generate an infinite stream with the first element generated by the action m and each successive element derived by applying the monadic function f on the previous element.When run concurrently, the next iteration can run concurrently with the processing of the previous iteration. Note that more than one iteration cannot run concurrently as the next iteration depends on the output of the previous iteration. drain $ fromSerial $ S.take 10 $ S.iterateM (\x -> threadDelay 1000000 >> print x >> return (x + 1)) (return 0) drain $ fromAsync $ S.take 10 $ S.iterateM (\x -> threadDelay 1000000 >> print x >> return (x + 1)) (return 0)  Concurrent Since: 0.1.2Since: 0.7.0 (signature change) streamly fromFoldableM =    Construct a stream from a  containing monadic actions. drain $ fromSerial $ S.fromFoldableM $ replicateM 10 (threadDelay 1000000 >> print 1) drain $ fromAsync $ S.fromFoldableM $ replicateM 10 (threadDelay 1000000 >> print 1) Concurrent (do not use with  fromParallel on infinite containers) streamly  fromListM =    Construct a stream from a list of monadic actions. This is more efficient than   for serial streams. streamlySame as  fromFoldable. streamly6Read lines from an IO Handle into a stream of Strings. streamlyTakes a callback setter function and provides it with a callback. The callback when invoked adds a value at the tail of the stream. Returns a stream of values generated by the callback. Pre-release/ /    H!(c) 2017 Composewell Technologies BSD-3-Clausestreamly@composewell.com experimentalGHCNone #$&-035678>? streamlyAppend the outputs of two streams, yielding all the elements from the first stream and then yielding all the elements from the second stream./IMPORTANT NOTE: This could be 100x faster than  serial/<> for appending a few (say 100) streams because it can fuse via stream fusion. However, it does not scale for a large number of streams (say 1000s) and becomes qudartically slow. 
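A compact sketch of iterate and iterateM from the entries above; the delay value is illustrative only:

import Control.Concurrent (threadDelay)
import qualified Streamly.Prelude as Stream

-- iterate builds an infinite stream from a pure step function; iterateM
-- does the same with a monadic step, so each iteration may perform effects.
main :: IO ()
main = do
    xs <- Stream.toList $ Stream.take 5 $ Stream.iterate (+ 1) (1 :: Int)
    print xs  -- [1,2,3,4,5]
    ys <- Stream.toList $ Stream.take 4
            $ Stream.iterateM (\x -> threadDelay 100000 >> return (x + 1))
                              (return (0 :: Int))
    print ys  -- [0,1,2,3]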
Therefore use this for custom appending of a few streams but use  ) or 'concatMapWith serial' for appending n, streams or infinite containers of streams. Pre-release streamlyInterleaves the outputs of two streams, yielding elements from each stream alternately, starting from the first stream. If any of the streams finishes early the other stream continues alone until it too finishes.:set -XOverloadedStrings'import Data.Functor.Identity (Identity)=Stream.interleave "ab" ",,,," :: Stream.SerialT Identity CharfromList "a,b,,,"=Stream.interleave "abcd" ",," :: Stream.SerialT Identity CharfromList "a,b,cd"  is dual to  , it can be called  interleaveMax.%Do not use at scale in concatMapWith. Pre-release streamlyInterleaves the outputs of two streams, yielding elements from each stream alternately, starting from the first stream. As soon as the first stream finishes, the output stops, discarding the remaining part of the second stream. In this case, the last element in the resulting stream would be from the second stream. If the second stream finishes early then the first stream still continues to yield elements until it finishes.:set -XOverloadedStrings'import Data.Functor.Identity (Identity)Stream.interleaveSuffix "abc" ",,,," :: Stream.SerialT Identity CharfromList "a,b,c,"Stream.interleaveSuffix "abc" "," :: Stream.SerialT Identity CharfromList "a,bc"  is a dual of  .%Do not use at scale in concatMapWith. Pre-release streamlyInterleaves the outputs of two streams, yielding elements from each stream alternately, starting from the first stream and ending at the first stream. If the second stream is longer than the first, elements from the second stream are infixed with elements from the first stream. If the first stream is longer then it continues yielding elements even after the second stream has finished.:set -XOverloadedStrings'import Data.Functor.Identity (Identity)Stream.interleaveInfix "abc" ",,,," :: Stream.SerialT Identity CharfromList "a,b,c"Stream.interleaveInfix "abc" "," :: Stream.SerialT Identity CharfromList "a,bc"  is a dual of  .%Do not use at scale in concatMapWith. Pre-release streamlyInterleaves the outputs of two streams, yielding elements from each stream alternately, starting from the first stream. The output stops as soon as any of the two streams finishes, discarding the remaining part of the other stream. The last element of the resulting stream would be from the longer stream.:set -XOverloadedStrings'import Data.Functor.Identity (Identity)Stream.interleaveMin "ab" ",,,," :: Stream.SerialT Identity CharfromList "a,b,"Stream.interleaveMin "abcd" ",," :: Stream.SerialT Identity CharfromList "a,b,c"  is dual to  .%Do not use at scale in concatMapWith. Pre-release streamlySchedule the execution of two streams in a fair round-robin manner, executing each stream once, alternately. Execution of a stream may not necessarily result in an output, a stream may chose to Skip producing an element until later giving the other stream a chance to run. Therefore, this combinator fairly interleaves the execution of two streams rather than fairly interleaving the output of the two streams. This can be useful in co-operative multitasking without using explicit threads. This can be used as an alternative to .%Do not use at scale in concatMapWith. Pre-release streamlyMerge two streams using a comparison function. 
The head elements of both the streams are compared and the smaller of the two elements is emitted, if both elements are equal then the element from the first stream is used first.If the streams are sorted in ascending order, the resulting stream would also remain sorted in ascending order. >>> Stream.toList $ Stream.mergeBy compare (Stream.fromList [1,3,5]) (Stream.fromList [2,4,6,8]) [1,2,3,4,5,6,8] streamlyLike  ( but with a monadic comparison function.Merge two streams randomly: > randomly _ _ = randomIO >>= x -> return $ if x then LT else GT > Stream.toList $ Stream.mergeByM randomly (Stream.fromList [1,1,1,1]) (Stream.fromList [2,2,2,2]) [2,1,2,2,2,1,1,1] )Merge two streams in a proportion of 2:1: >>> :{ do let proportionately m n = do ref <- newIORef $ cycle $ Prelude.concat [Prelude.replicate m LT, Prelude.replicate n GT] return $ _ _ -> do r <- readIORef ref writeIORef ref $ Prelude.tail r return $ Prelude.head r f <- proportionately 2 1 xs <- Stream.toList $ Stream.mergeByM f (Stream.fromList [1,1,1,1,1,1]) (Stream.fromList [2,2,2]) print xs :} [1,1,2,1,1,2,1,1,2] streamlyLike   but merges concurrently (i.e. both the elements being merged are generated concurrently). streamlyLike   but merges concurrently (i.e. both the elements being merged are generated concurrently). streamlyLike   but uses an  for stream generation. Unlike   this can fuse the  code with the inner loop and therefore provide many times better performance. streamlyLike  1 but interleaves the streams in the same way as  # behaves instead of appending them. Pre-release streamlyLike  . but executes the streams in the same way as  . Pre-release streamlyUnfold the elements of a stream, intersperse the given element between the unfolded streams and then concat them into a single stream. unwords = S.interpose ' ' Pre-release streamlyUnfold the elements of a stream, append the given element after each unfolded stream and then concat them into a single stream.  unlines = S.interposeSuffix '\n' Pre-release streamly  followed by unfold and concat. Pre-release streamly intersperse followed by unfold and concat. intercalate unf a str = unfoldMany unf $ intersperse a str intersperse = intercalate (Unfold.function id) unwords = intercalate Unfold.fromList " "Stream.toList $ Stream.intercalate Unfold.fromList " " $ Stream.fromList ["abc", "def", "ghi"] "abc def ghi" streamly  followed by unfold and concat. Pre-release streamlyintersperseSuffix followed by unfold and concat. intercalateSuffix unf a str = unfoldMany unf $ intersperseSuffix a str intersperseSuffix = intercalateSuffix (Unfold.function id) unlines = intercalateSuffix Unfold.fromList "\n"Stream.toList $ Stream.intercalateSuffix Unfold.fromList "\n" $ Stream.fromList ["abc", "def", "ghi"]"abc\ndef\nghi\n" streamly/Flatten a stream of streams to a single stream. concat = concatMap id  Pre-release streamly$concatMapWith mixer generator stream0 is a two dimensional looping combinator. The  generator function is used to generate streams from the elements in the input stream and the mixer* function is used to merge those streams.Note we can merge streams concurrently by using a concurrent merge function. Since: 0.7.0Since: 0.8.0 (signature change) streamlyLike   but carries a state which can be used to share information across multiple steps of concat. concatSmapMWith combine f initial = concatMapWith combine id . 
smapM f initial  Pre-release streamlyCombine streams in pairs using a binary stream combinator, then combine the resulting streams in pairs recursively until we get to a single combined stream.>For example, you can sort a stream using merge sort like this:Stream.toList $ Stream.concatPairsWith (Stream.mergeBy compare) Stream.fromPure $ Stream.fromList [5,1,7,9,2] [1,2,5,7,9]-Caution: the stream of streams must be finite Pre-release streamlyLike iterateM> but iterates after mapping a stream generator on the output.Yield an input element in the output stream, map a stream generator on it and then do the same on the resulting stream. This can be used for a depth first traversal of a tree like structure. Note that iterateM is a special case of  : iterateM f = iterateMapWith serial (fromEffect . f) . fromEffect It can be used to traverse a tree structure. For example, to list a directory tree: Stream.iterateMapWith Stream.serial (either Dir.toEither (const nil)) (fromPure (Left "tmp"))  Pre-release streamlyLike  iterateMap but carries a state in the stream generation function. This can be used to traverse graph like structures, we can remember the visited nodes in the state to avoid cycles.Note that a combination of  iterateMap and  usingState can also be used to traverse graphs. However, this function provides a more localized state instead of using a global state. See also: mfix Pre-release streamlyIn an  stream iterate on s. This is a special case of  : iterateMapLeftsWith combine f = iterateMapWith combine (either f (const nil)) To traverse a directory tree: iterateMapLeftsWith serial Dir.toEither (fromPure (Left "tmp"))  Pre-release0 0  I!(c) 2017 Composewell Technologies BSD-3-Clausestreamly@composewell.com experimentalGHCNone #$&-035678>?*? streamlyDecompose a stream into its head and tail. If the stream is empty, returns &. If the stream is non-empty, returns  Just (a, ma), where a is the head of the stream and ma its tail.This is a brute force primitive. Avoid using it as long as possible, use it when no other combinator can do the job. This can be used to do pretty much anything in an imperative manner, as it just breaks down the stream into individual elements and we can loop over them as we deem fit. For example, this can be used to convert a streamly stream into other stream types.:All the folds in this module can be expressed in terms of  , however the specific implementations are generally more efficient. streamly"Right associative/lazy pull fold. foldrM build final stream9 constructs an output structure using the step function build. build is invoked with the next input element and the remaining (lazy) tail of the output structure. It builds a lazy output expression using the two. When the "tail structure" in the output expression is evaluated it calls build( again thus lazily consuming the input stream. until either the output expression built by build is free of the "tail" or the input is exhausted in which case final is used as the terminating case for the output structure. For more details see the description in the previous section.%Example, determine if any element is  in a stream:Stream.foldrM (\x xs -> if odd x then return True else xs) (return False) $ Stream.fromList (2:4:5:undefined)True Since: 0.7.0 (signature changed) Since: 0.2.0 (signature changed) Since: 0.1.0 streamlyRight fold, lazy for lazy monads and pure streams, and strict for strict monads.Please avoid using this routine in strict monads like IO unless you need a strict right fold. 
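A sketch of driving a stream manually with uncons, together with the lazy short-circuiting foldrM example shown above, written as a complete program (Streamly.Prelude assumed):

import qualified Streamly.Prelude as Stream
import Streamly.Prelude (SerialT)

-- Consume a stream element by element using uncons; this is the brute-force
-- primitive described above, useful when no ready-made fold fits.
printAll :: SerialT IO Int -> IO ()
printAll s = do
    r <- Stream.uncons s
    case r of
        Nothing        -> return ()
        Just (x, rest) -> print x >> printAll rest

main :: IO ()
main = do
    printAll (Stream.fromList [1, 2, 3])
    -- foldrM can short-circuit: the undefined tail is never demanded.
    ok <- Stream.foldrM
            (\x xs -> if odd x then return True else xs)
            (return False)
            (Stream.fromList (2 : 4 : 5 : undefined))
    print ok  -- True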
This is provided only for use in lazy monads (e.g. Identity) or pure streams. Note that with this signature it is not possible to implement a lazy foldr when the monad m is strict. In that case it would be strict in its accumulator and therefore would necessarily consume all its input. streamlyLazy right fold for non-empty streams, using first element as the starting value. Returns  if the stream is empty. streamlyStrict left fold with an extraction function. Like the standard strict left fold, but applies a user supplied extraction function (the third argument) to the folded value at the end. This is designed to work with the foldl library. The suffix x is a mnemonic for extraction. streamly#Left associative/strict push fold. foldl' reduce initial stream invokes reduce with the accumulator and the next input in the input stream, using initial as the initial value of the current value of the accumulator. When the input is exhausted the current value of the accumulator is returned. Make sure to use a strict data structure for accumulator to not build unnecessary lazy expressions unless that's what you want. See the previous section for more details.streamlyStrict left fold, for non-empty streams, using first element as the starting value. Returns  if the stream is empty.streamlyLike  #, but with a monadic step function.streamlyLike  " but with a monadic step function. Since: 0.2.0Since: 0.8.0 (signature change)streamly*Parse a stream using the supplied ParserD .Internalstreamly*Parse a stream using the supplied ParserK .Internalstreamly"Parse a stream using the supplied .Unlike folds, parsers may not always result in a valid output, they may result in an error. For example:4Stream.parse (Parser.takeEQ 1 Fold.drain) Stream.nil*** Exception: ParseError "takeEQ: Expecting exactly 1 elements, input terminated on 0"Note: *fold f = Stream.parse (Parser.fromFold f) parse p is not the same as head . parseMany p on an empty stream. Pre-releasestreamly"Parse a stream using the supplied .Internalstreamly "mapM_ = Stream.drain . Stream.mapMApply a monadic action to each element of the stream and discard the output of the action. This is not really a pure transformation operation but a transformation followed by fold.streamly >drain = mapM_ (\_ -> return ()) drain = Stream.fold Fold.drainRun a stream, discarding the results. By default it interprets the stream as  , to run other types of streams use the type adapting combinators for example Stream.drain .  fromAsync.streamly drainN n = Stream.drain . Stream.take n drainN n = Stream.fold (Fold.take n Fold.drain)Run maximum up to n iterations of a stream.streamly runN n = runStream . take nRun maximum up to n iterations of a stream.streamly 0drainWhile p = Stream.drain . Stream.takeWhile p1Run a stream as long as the predicate holds true.streamly $runWhile p = runStream . takeWhile p1Run a stream as long as the predicate holds true.streamlyRun a stream, discarding the results. By default it interprets the stream as  , to run other types of streams use the type adapting combinators for example  runStream .  fromAsync.streamly&Determine whether the stream is empty. null = Stream.fold Fold.nullstreamly0Extract the first element of the stream, if any. *head = (!! 0) head = Stream.fold Fold.headstreamlyExtract the first element of the stream, if any, otherwise use the supplied default value. It can help avoid one branch in high performance code. Pre-releasestreamly &tail = fmap (fmap snd) . 
Stream.uncons8Extract all but the first element of the stream, if any.streamly7Extract all but the last element of the stream, if any.streamly/Extract the last element of the stream, if any. last xs = xs !! (Stream.length xs - 1) last = Stream.fold Fold.laststreamly6Determine whether an element is present in the stream. elem = Stream.fold Fold.elemstreamly:Determine whether an element is not present in the stream. !notElem = Stream.fold Fold.lengthstreamly#Determine the length of the stream.streamly?Determine whether all elements of a stream satisfy a predicate. all = Stream.fold Fold.allstreamlyDetermine whether any of the elements of a stream satisfy a predicate. any = Stream.fold Fold.anystreamly8Determines if all elements of a boolean stream are True. and = Stream.fold Fold.andstreamlyDetermines whether at least one element of a boolean stream is True. or = Stream.fold Fold.orstreamlyDetermine the sum of all elements of a stream of numbers. Returns 0 when the stream is empty. Note that this is not numerically stable for floating point numbers. sum = Stream.fold Fold.sumstreamlyDetermine the product of all elements of a stream of numbers. Returns 1 when the stream is empty. "product = Stream.fold Fold.productstreamly3Fold a stream of monoid elements by appending them. "mconcat = Stream.fold Fold.mconcat Pre-releasestreamly  minimum = , compare minimum = Stream.fold Fold.minimum *Determine the minimum element in a stream.streamlyDetermine the minimum element in a stream using the supplied comparison function. &minimumBy = Stream.fold Fold.minimumBystreamly  maximum = , compare maximum = Stream.fold Fold.maximum *Determine the maximum element in a stream.streamlyDetermine the maximum element in a stream using the supplied comparison function. &maximumBy = Stream.fold Fold.maximumBystreamlyEnsures that all the elements of the stream are identical and then returns that unique element.streamly&Lookup the element at the given index.streamly!In a stream of (key-value) pairs (a, b), return the value b9 of the first pair where the key equals the given value a. lookup = snd <$> Stream.find ((==) . fst) lookup = Stream.fold Fold.lookupstreamlyLike " but with a non-monadic predicate. 8find p = findM (return . p) find = Stream.fold Fold.findstreamly=Returns the first element that satisfies the given predicate. findM = Stream.fold Fold.findMstreamly;Returns the first index that satisfies the given predicate. &findIndex = Stream.fold Fold.findIndexstreamlyReturns the first index where a given value is found in the stream. %elemIndex a = Stream.findIndex (== a)streamly toList = Stream.foldr (:) [] Convert a stream into a list in the underlying monad. The list can be consumed lazily in a lazy monad (e.g. ). In a strict monad (e.g. IO) the whole list is generated and buffered before it can be consumed.Warning! working on large lists accumulated as buffers in memory could be very inefficient, consider using Streamly.Array instead.streamly (toListRev = Stream.foldl' (flip (:)) [] Convert a stream into a list in reverse order in the underlying monad.Warning! working on large lists accumulated as buffers in memory could be very inefficient, consider using Streamly.Array instead. Pre-releasestreamly #toHandle h = S.mapM_ $ hPutStrLn h *Write a stream of Strings to an IO Handle.streamly"Convert a stream to a pure stream. /toStream = Stream.foldr Stream.cons Stream.nil  Pre-releasestreamly3Convert a stream to a pure stream in reverse order. 
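Several of the reductions above in one runnable sketch (Streamly.Prelude assumed; maximum is assumed to return a Maybe value in this release):

import qualified Streamly.Prelude as Stream

-- Each of these is equivalent to running the corresponding fold from
-- Streamly.Data.Fold with Stream.fold, as noted in the entries above.
main :: IO ()
main = do
    let s = Stream.fromList [1 .. 5 :: Int]
    print =<< Stream.sum s        -- 15
    print =<< Stream.length s     -- 5
    print =<< Stream.maximum s    -- Just 5
    print =<< Stream.toList s     -- [1,2,3,4,5]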
:toStreamRev = Stream.foldl' (flip Stream.cons) Stream.nil  Pre-releasestreamly m b to a stream t m a concurrently; The the input stream is evaluated asynchronously in an independent thread yielding elements to a buffer and the folding action runs in another thread consuming the input from the buffer.If you read the signature as  (t m a -> m b) -> (t m a -> m b) you can look at it as a transformation that converts a fold function to a buffered concurrent fold function.The . at the end of the operator is a mnemonic for termination of the stream.In the example below, each stage introduces a delay of 1 sec but output is printed every second because both stages are concurrent.'import Control.Concurrent (threadDelay)import Streamly.Prelude ((|$.)):{ Stream.foldlM' (\_ a -> threadDelay 1000000 >> print a) (return ())> |$. Stream.replicateM 3 (threadDelay 1000000 >> return 1):}111 ConcurrentSince: 0.3.0 (Streamly)streamlySame as .InternalstreamlySame as  but with arguments reversed. (|&.) = flip (|$.) ConcurrentSince: 0.3.0 (Streamly)streamlyReturns  if the first stream is the same as or a prefix of the second. A stream is a prefix of itself.Stream.isPrefixOf (Stream.fromList "hello") (Stream.fromList "hello" :: SerialT IO Char)TruestreamlyReturns  if the first stream is an infix of the second. A stream is considered an infix of itself. Stream.isInfixOf (Stream.fromList "hello") (Stream.fromList "hello" :: SerialT IO Char)TrueSpace: O(n) worst case where n is the length of the infix. Pre-release Requires  constraintstreamlyReturns  if the first stream is a suffix of the second. A stream is considered a suffix of itself.Stream.isSuffixOf (Stream.fromList "hello") (Stream.fromList "hello" :: SerialT IO Char)TrueSpace: O(n)-, buffers entire input stream and the suffix. Pre-release Suboptimal - Help wanted.streamlyReturns  if all the elements of the first stream occur, in order, in the second stream. The elements do not have to occur consecutively. A stream is a subsequence of itself.Stream.isSubsequenceOf (Stream.fromList "hlo") (Stream.fromList "hello" :: SerialT IO Char)TruestreamlystripPrefix prefix stream strips prefix from stream' if it is a prefix of stream. Returns  if the stream does not start with the given prefix, stripped stream otherwise. Returns Just nil, when the prefix is the same as the stream.See also "Streamly.Internal.Data.Stream.IsStream.Nesting.dropPrefix".Space: O(1)streamly.Drops the given suffix from a stream. Returns < if the stream does not end with the given suffix. Returns Just nil, when the suffix is the same as the stream.It may be more efficient to convert the stream to an Array and use stripSuffix on that especially if the elements have a Storable or Prim instance.See also "Streamly.Internal.Data.Stream.IsStream.Nesting.dropSuffix".Space: O(n)7, buffers the entire input stream as well as the suffix Pre-releasestreamly?0streamlywriteN n folds a maximum of n' elements from the input stream to an 2.Since we are folding to a 2 n9 should be <= 128, for larger number of elements use an Array from either Streamly.Data.Array or Streamly.Data.Array.Foreign.streamly Create a 2 from the first n3 elements of a list. The array may hold less than n( elements if the length of the list <= n.$It is recommended to use a value of n* <= 128. For larger sized arrays, use an Array from Streamly.Data.Array or Streamly.Data.Array.Foreignstreamly Create a 2 from the first n7 elements of a stream. 
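A sketch of the prefix operations above, assuming isPrefixOf and stripPrefix are both exported from Streamly.Prelude; stripPrefix may only be available from the internal IsStream module in some releases:

import qualified Streamly.Prelude as Stream
import Streamly.Prelude (SerialT)

-- Test for a prefix and strip it, streaming the remainder if it matched.
main :: IO ()
main = do
    let small = Stream.fromList "he"    :: SerialT IO Char
        big   = Stream.fromList "hello" :: SerialT IO Char
    print =<< Stream.isPrefixOf small big          -- True
    r <- Stream.stripPrefix small big
    case r of
        Nothing   -> putStrLn "not a prefix"
        Just rest -> print =<< Stream.toList rest  -- "llo"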
The array is allocated to size n", if the stream terminates before n- elements then the array may hold less than n elements.&For optimal performance use this with n <= 128.2323!(c) 2019 Composewell Technologies BSD-3-Clausestreamly@composewell.com experimentalGHCNone #$&-035678>?0 2K!(c) 2019 Composewell Technologies BSD3-3-Clausestreamly@composewell.com experimentalGHCNone #$&-035678>?7streamlyarraysOf n stream9 groups the elements in the input stream into arrays of n elements each.0Same as the following but may be more efficient: .arraysOf n = Stream.foldMany (MArray.writeN n) Pre-releasestreamlyThis mutates the first array (if it has space) to append values from the second one. This would work for immutable arrays as well because an immutable array never has space so a new array is allocated instead of mutating it.| Coalesce adjacent arrays in incoming stream to form bigger arrays of a maximum specified size. Note that if a single array is bigger than the specified size we do not split it to fit. When we coalesce multiple arrays if the size would exceed the specified size we do not coalesce therefore the actual array size may be less than the specified chunk size.streamlyCoalesce adjacent arrays in incoming stream to form bigger arrays of a maximum specified size in bytes.streamlyLike  but generates arrays of exactly equal to the size specified except for the last array in the stream which could be shorter. UnimplementedstreamlyLike  but generates arrays of size greater than or equal to the specified except for the last array in the stream which could be shorter. Unimplementedstreamly!groupIOVecsOf maxBytes maxEntries= groups arrays in the incoming stream to create a stream of  arrays with a maximum of maxBytes' bytes in each array and a maximum of  maxEntries entries in each array.  L!(c) 2020 Composewell Technologies BSD-3-Clausestreamly@composewell.com experimentalGHCNone! #$&-035678>?A streamlyCopy a range of the first array to the specified region in the second array. Both arrays must fully contain the specified ranges, but this is not checked. The regions are allowed to overlap, although this is only possible when the same array is provided as both the source and the destination.streamly'Fold the whole input to a single array.-Caution! Do not use this on infinite streams. Pre-releasestreamlywriteN n folds a maximum of n' elements from the input stream to an . Pre-releasestreamlyLike  but does not check the array bounds when writing. The fold driver must not call the step function more than n times otherwise it will corrupt the memory and crash. This function exists mainly because any conditional in the step function blocks fusion causing 10x performance slowdown. Pre-releasestreamlyfromStreamArraysOf n stream< groups the input stream into a stream of arrays of size n.streamlyCoalesce adjacent arrays in incoming stream to form bigger arrays of a maximum specified size in bytes. Note that if a single array is bigger than the specified size we do not split it to fit. When we coalesce multiple arrays if the size would exceed the specified size we do not coalesce therefore the actual array size may be less than the specified chunk size. Pre-releasestreamly.Allocate an array that is pinned and can hold count3 items. 
The memory of the array is uninitialized.Note that this is internal routine, the reference to this array cannot be given out until the array has been written to and frozen.streamlyAllocate a new array aligned to the specified alignment and using pinned memory.streamlyResize (pinned) mutable byte array to new specified size (in elem count). The returned array is either the original array resized in-place or, if not possible, a newly allocated (pinned) array (with the original content copied over).streamlydestination arraystreamlyoffset into destination arraystreamly source arraystreamlyoffset into source arraystreamlynumber of elements to copystreamlyarraystreamlyindexstreamlyelementstreamlynew sizestreamlynew size!(c) 2020 Composewell Technologies BSD-3-Clausestreamly@composewell.com experimentalGHCNone! #$&-035678>?J^ streamlyDefault maximum buffer size in bytes, for reading from and writing to IO devices, the value is 32KB minus GHC allocation overhead, which is a few bytes, so that the actual allocation is 32KB.streamly'Fold the whole input to a single array.-Caution! Do not use this on infinite streams. Pre-releasestreamlywriteN n folds a maximum of n' elements from the input stream to an . Pre-releasestreamlyLike  but does not check the array bounds when writing. The fold driver must not call the step function more than n times otherwise it will corrupt the memory and crash. This function exists mainly because any conditional in the step function blocks fusion causing 10x performance slowdown. Pre-releasestreamlyfromStreamArraysOf n stream< groups the input stream into a stream of arrays of size n.streamly;Splice two immutable arrays creating a new immutable array.streamly Convert an  into a list. Pre-releasestreamly5Strict right-associated fold over the elements of an .streamly4Strict left-associated fold over the elements of an .streamly4Strict left-associated fold over the elements of an .streamlyCoalesce adjacent arrays in incoming stream to form bigger arrays of a maximum specified size in bytes. Note that if a single array is bigger than the specified size we do not split it to fit. When we coalesce multiple arrays if the size would exceed the specified size we do not coalesce therefore the actual array size may be less than the specified chunk size. Pre-releasestreamlySplit a stream of arrays on a given separator byte, dropping the separator and coalescing all the arrays between two separators into a single array. Pre-releasestreamlyLexicographic ordering. Subject to change between major versions.00M!(c) 2020 Composewell Technologies BSD-3-Clausestreamly@composewell.com experimentalGHCNone #$&-035678>?S streamly Create an  from the first N elements of a stream. The array is allocated to size N, if the stream terminates before N elements then the array may hold less than N elements. Pre-releasestreamly Create an  from a stream. This is useful when we want to create a single array from a stream of unknown size. writeN is at least twice as efficient when the size is already known.Note that if the input stream is too large memory allocation for the array may fail. When the stream size is not known, arraysOf followed by processing of indvidual arrays in the resulting stream should be preferred. Pre-releasestreamly Convert an  into a stream. Pre-releasestreamly Convert an  into a stream in reverse order. 
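A minimal sketch of folding a stream into an array and streaming it back out, assuming the public Streamly.Data.Array.Foreign module provides writeN, toList and read as described above:

import qualified Streamly.Prelude as Stream
import qualified Streamly.Data.Array.Foreign as Array

-- writeN folds at most N elements into an array; Array.read unfolds the
-- array back into a stream of its elements.
main :: IO ()
main = do
    arr <- Stream.fold (Array.writeN 5) $ Stream.fromList [1 .. 10 :: Int]
    print (Array.toList arr)                        -- [1,2,3,4,5]
    xs <- Stream.toList $ Stream.unfold Array.read arr
    print xs                                        -- [1,2,3,4,5]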
Pre-releasestreamlyUnfold an array into a stream.streamlyUnfold an array into a stream, does not check the end of the array, the user is responsible for terminating the stream within the array bounds. For high performance application where the end condition can be determined by a terminating fold.The following might not be true, not that the representation changed. Written in the hope that it may be faster than "read", however, in the case for which this was written, "read" proves to be faster even though the core generated with unsafeRead looks simpler. Pre-releasestreamly null arr = length arr == 0 Pre-releasestreamlyFold an array using a . Pre-releasestreamly,Fold an array using a stream fold operation. Pre-releasestreamlyO(1)8 Lookup the element at the given index, starting from 0. Pre-releasestreamly )last arr = readIndex arr (length arr - 1) Pre-releasestreamly;Convert a stream of arrays into a stream of their elements.)Same as the following but more efficient: concat = S.concatMap A.read Pre-releasestreamlyCoalesce adjacent arrays in incoming stream to form bigger arrays of a maximum specified size in bytes. Pre-releaseN!(c) 2020 Composewell Technologies BSD-3-Clausestreamly@composewell.com experimentalGHCNone! #$&-035678>?]streamlyCopy a range of the first array to the specified region in the second array. Both arrays must fully contain the specified ranges, but this is not checked. The regions are allowed to overlap, although this is only possible when the same array is provided as both the source and the destination.streamly'Fold the whole input to a single array.-Caution! Do not use this on infinite streams. Pre-releasestreamlywriteN n folds a maximum of n' elements from the input stream to an . Pre-releasestreamlyLike  but does not check the array bounds when writing. The fold driver must not call the step function more than n times otherwise it will corrupt the memory and crash. This function exists mainly because any conditional in the step function blocks fusion causing 10x performance slowdown. Pre-releasestreamlyfromStreamArraysOf n stream< groups the input stream into a stream of arrays of size n.streamlyCoalesce adjacent arrays in incoming stream to form bigger arrays of a maximum specified size in bytes. Note that if a single array is bigger than the specified size we do not split it to fit. When we coalesce multiple arrays if the size would exceed the specified size we do not coalesce therefore the actual array size may be less than the specified chunk size. Pre-releasestreamly0Allocate an array that is unpinned and can hold count3 items. The memory of the array is uninitialized.Note that this is internal routine, the reference to this array cannot be given out until the array has been written to and frozen.streamlyResize (unpinned) mutable byte array to new specified size (in elem count). The returned array is either the original array resized in-place or, if not possible, a newly allocated (unpinned) array (with the original content copied over).streamlydestination arraystreamlyoffset into destination arraystreamly source arraystreamlyoffset into source arraystreamlynumber of elements to copystreamlyarraystreamlyindexstreamlyelementstreamlynew sizestreamlynew size in elem countO!(c) 2020 Composewell Technologies BSD-3-Clausestreamly@composewell.com experimentalGHCNone! 
#$&-035678>?f< streamlyDefault maximum buffer size in bytes, for reading from and writing to IO devices, the value is 32KB minus GHC allocation overhead, which is a few bytes, so that the actual allocation is 32KB.streamly'Fold the whole input to a single array.-Caution! Do not use this on infinite streams. Pre-releasestreamlywriteN n folds a maximum of n' elements from the input stream to an . Pre-releasestreamlyLike  but does not check the array bounds when writing. The fold driver must not call the step function more than n times otherwise it will corrupt the memory and crash. This function exists mainly because any conditional in the step function blocks fusion causing 10x performance slowdown. Pre-releasestreamlyfromStreamArraysOf n stream< groups the input stream into a stream of arrays of size n.streamly;Splice two immutable arrays creating a new immutable array.streamly Convert an  into a list. Pre-releasestreamly5Strict right-associated fold over the elements of an .streamly4Strict left-associated fold over the elements of an .streamly4Strict left-associated fold over the elements of an .streamlyCoalesce adjacent arrays in incoming stream to form bigger arrays of a maximum specified size in bytes. Note that if a single array is bigger than the specified size we do not split it to fit. When we coalesce multiple arrays if the size would exceed the specified size we do not coalesce therefore the actual array size may be less than the specified chunk size. Pre-releasestreamlySplit a stream of arrays on a given separator byte, dropping the separator and coalescing all the arrays between two separators into a single array. Pre-releasestreamlyLexicographic ordering. Subject to change between major versions.--P!(c) 2020 Composewell Technologies BSD-3-Clausestreamly@composewell.com experimentalGHCNone #$&-035678>?o streamly Create an  from the first N elements of a stream. The array is allocated to size N, if the stream terminates before N elements then the array may hold less than N elements. Pre-releasestreamly Create an  from a stream. This is useful when we want to create a single array from a stream of unknown size. writeN is at least twice as efficient when the size is already known.Note that if the input stream is too large memory allocation for the array may fail. When the stream size is not known, arraysOf followed by processing of indvidual arrays in the resulting stream should be preferred. Pre-releasestreamly Convert an  into a stream. Pre-releasestreamly Convert an  into a stream in reverse order. Pre-releasestreamlyUnfold an array into a stream.streamlyUnfold an array into a stream, does not check the end of the array, the user is responsible for terminating the stream within the array bounds. For high performance application where the end condition can be determined by a terminating fold.The following might not be true, not that the representation changed. Written in the hope that it may be faster than "read", however, in the case for which this was written, "read" proves to be faster even though the core generated with unsafeRead looks simpler. Pre-releasestreamly null arr = length arr == 0 Pre-releasestreamlyFold an array using a . Pre-releasestreamly,Fold an array using a stream fold operation. Pre-releasestreamlyO(1)8 Lookup the element at the given index, starting from 0. 
Pre-releasestreamly )last arr = readIndex arr (length arr - 1) Pre-releasestreamly;Convert a stream of arrays into a stream of their elements.)Same as the following but more efficient: concat = S.concatMap A.read Pre-releasestreamlyCoalesce adjacent arrays in incoming stream to form bigger arrays of a maximum specified size in bytes. Pre-release!(c) 2019 Composewell Technologies BSD-3-Clausestreamly@composewell.com experimentalGHCNone #$&-035678>?p!(c) 2019 Composewell Technologies BSD-3-Clausestreamly@composewell.com pre-releaseGHCNone! #$&-035678>?qn  Q!(c) 2020 Composewell Technologies BSD-3-Clausestreamly@composewell.com experimentalGHCNone #$&-035678>?streamlysampleFromthen offset stride samples the element at offset- index and then every element at strides of stride.Stream.toList $ Stream.sampleFromThen 2 3 $ Stream.enumerateFromTo 0 10[2,5,8] Pre-releasestreamlyContinuously evaluate the input stream and sample the last event in time window of n seconds.This is also known as throttle in some libraries. sampleIntervalEnd n = Stream.catMaybes . Stream.intervalsOf n Fold.last  Pre-releasestreamlyLike sampleInterval1 but samples at the beginning of the time window. sampleIntervalStart n = Stream.catMaybes . Stream.intervalsOf n Fold.head  Pre-releasestreamlySample one event at the end of each burst of events. A burst is a group of events close together in time, it ends when an event is spaced by more than the specified time interval from the previous event.This is known as debounce in some libraries.The clock granularity is 10 ms. Pre-releasestreamlyLike  but samples the event at the beginning of the burst instead of at the end of it. Pre-releasestreamly;Sort the input stream using a supplied comparison function. O(n) spaceNote: this is not the fastest possible implementation as of now. Pre-releasestreamlyThis is the same as , but less efficient.The second stream is evaluated multiple times. If the second stream is consume-once stream then it can be cached in an  before calling this function. Caching may also improve performance if the stream is expensive to evaluate.Time: O(m x n) Pre-releasestreamlyFor all elements in t m a, for all elements in t m b if a and b are equal by the given equality pedicate then return the tuple (a, b).The second stream is evaluated multiple times. If the stream is a consume-once stream then the caller should cache it in an  before calling this function. Caching may also improve performance if the stream is expensive to evaluate.For space efficiency use the smaller stream as the second stream.;Space: O(n) assuming the second stream is cached in memory.Time: O(m x n) Pre-releasestreamlyLike # but uses a hashmap for efficiency.For space efficiency use the smaller stream as the second stream. Space: O(n)Time: O(m + n) UnimplementedstreamlyLike " but works only on sorted streams. Space: O(1)Time: O(m + n) UnimplementedstreamlyFor all elements in t m a, for all elements in t m b if a and b" are equal then return the tuple  (a, Just b). If a is not present in t m b then return  (a, Nothing).The second stream is evaluated multiple times. If the stream is a consume-once stream then the caller should cache it in an  before calling this function. Caching may also improve performance if the stream is expensive to evaluate. rightJoin = flip leftJoin ;Space: O(n) assuming the second stream is cached in memory.Time: O(m x n) UnimplementedstreamlyLike # but uses a hashmap for efficiency. 
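The sampling combinators documented here are pre-release. A minimal sketch, assuming sampleFromThen is importable from Streamly.Internal.Data.Stream.IsStream and that the module re-exports the usual stream operations:

import qualified Streamly.Internal.Data.Stream.IsStream as Stream

-- Sample the element at the given offset and then every element at the
-- given stride (here: indices 2, 5, 8).
main :: IO ()
main = do
    xs <- Stream.toList
            $ Stream.sampleFromThen 2 3
            $ Stream.enumerateFromTo 0 (10 :: Int)
    print xs  -- [2,5,8]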
Space: O(n)Time: O(m + n) UnimplementedstreamlyLike " but works only on sorted streams. Space: O(1)Time: O(m + n) UnimplementedstreamlyFor all elements in t m a, for all elements in t m b if a and b are equal by the given equality pedicate then return the tuple (Just a, Just b). If a is not found in t m b? then return (a, Nothing), return (Nothing, b) for vice-versa.For space efficiency use the smaller stream as the second stream. Space: O(n)Time: O(m x n) UnimplementedstreamlyLike # but uses a hashmap for efficiency.For space efficiency use the smaller stream as the second stream. Space: O(n)Time: O(m + n) UnimplementedstreamlyLike " but works only on sorted streams. Space: O(1)Time: O(m + n) Unimplementedstreamly is essentially a filtering operation that retains only those elements in the first stream that are present in the second stream.Stream.toList $ Stream.intersectBy (==) (Stream.fromList [1,2,2,4]) (Stream.fromList [2,1,1,3])[1,2,2]Stream.toList $ Stream.intersectBy (==) (Stream.fromList [2,1,1,3]) (Stream.fromList [1,2,2,4])[2,1,1]# is similar to but not the same as :Stream.toList $ fmap fst $ Stream.innerJoin (==) (Stream.fromList [1,2,2,4]) (Stream.fromList [2,1,1,3]) [1,1,2,2]Space: O(n) where n0 is the number of elements in the second stream.Time: O(m x n) where m4 is the number of elements in the first stream and n0 is the number of elements in the second stream. Pre-releasestreamlyLike " but works only on sorted streams. Space: O(1) Time: O(m+n) UnimplementedstreamlyDelete first occurrences of those elements from the first stream that are present in the second stream. If an element occurs multiple times in the second stream as many occurrences of it are deleted from the first stream.Stream.toList $ Stream.differenceBy (==) (Stream.fromList [1,2,2]) (Stream.fromList [1,2,3])[2]The following laws hold: (s1 serial% s2) `differenceBy eq` s1 === s2 (s1 wSerial! s2) `differenceBy eq` s1 === s2 Same as the list  operation.Space: O(m) where m/ is the number of elements in the first stream.Time: O(m x n) where m4 is the number of elements in the first stream and n0 is the number of elements in the second stream. Pre-releasestreamlyLike " but works only on sorted streams. Space: O(1) UnimplementedstreamlyThis is essentially an append operation that appends all the extra occurrences of elements from the second stream that are not already present in the first stream.Stream.toList $ Stream.unionBy (==) (Stream.fromList [1,2,2,4]) (Stream.fromList [1,1,2,3]) [1,2,2,4,3](Equivalent to the following except that s1 is evaluated only once: 9unionBy eq s1 s2 = s1 `serial` (s2 `differenceBy eq` s1)  Similar to  but not the same. Space: O(n)Time: O(m x n) Pre-releasestreamlyLike " but works only on sorted streams. Space: O(1) Unimplementede!(c) 2017 Composewell Technologies BSD-3-Clausestreamly@composewell.com pre-releaseGHCNone #$&-035678>? R!(c) 2018 Composewell TechnologiesBSD3streamly@composewell.com pre-releaseGHCNone #$&-035678>?'streamlyRead directories as Left and files as Right. Filter out "." and ".." entries.InternalstreamlyRead files only.Internalstreamly7Read directories only. Filter out "." and ".." entries.InternalstreamlyRaw read of a directory. Pre-releasestreamlyRead directories as Left and files as Right. Filter out "." and ".." entries. Pre-releasestreamlyRead files only.InternalstreamlyRead directories only.InternalS!(c) 2018 Composewell TechnologiesBSD3streamly@composewell.com pre-releaseGHCNone! 
Streamly.Internal.Data.List (BSD3, pre-release):

List a is a replacement for [a]. A companion type is just like List except that it has a zipping Applicative instance and no Monad instance. A list constructor and pattern (Cons) deconstructs a List into its head and tail; it corresponds to (:) for Haskell lists.

Streamly.Internal.Data.Array.Foreign (BSD3, experimental):

writeN: Create an Array from the first N elements of a stream. The array is allocated to size N; if the stream terminates before N elements, the array may hold less than N elements.  (Pre-release)

write: Create an Array from a stream. This is useful when we want to create a single array from a stream of unknown size. writeN is at least twice as efficient when the size is already known. Note that if the input stream is too large, memory allocation for the array may fail. When the stream size is not known, arraysOf followed by processing of individual arrays in the resulting stream should be preferred.  (Pre-release)

read: Unfold an array into a stream. Since 0.7.0 (Streamly.Memory.Array).

unsafe read: Unfold an array into a stream without checking the end of the array; the user is responsible for terminating the stream within the array bounds. Intended for high performance applications where the end condition can be determined by a terminating fold. Written in the hope that it may be faster than read; however, in the case for which this was written, read proves to be faster even though the core generated with the unsafe variant looks simpler.  (Pre-release)

null: null arr = length arr == 0  (Pre-release)

last: last arr = getIndex arr (length arr - 1)  (Pre-release)

writeLastN n: folds a maximum of n elements from the end of the input stream to an Array.

unchecked slice: O(1), slice an array in constant time, taking a starting index and the length of the slice; the bounds of the slice are not checked (Unsafe, Pre-release).

getIndex: O(1), look up the element at the given index, starting from 0.

index write: O(1), write the given element at the given index in the array; performs in-place mutation of the array.  (Pre-release)

stream transform: Transform an array into another array using a stream transformation operation.  (Pre-release)

unchecked cast: Cast an array having elements of type a into an array having elements of type b. The array size must be a multiple of the size of type b, otherwise accessing the last element of the array may result in a crash or a random value.  (Pre-release)

byte view: Cast an Array a into an Array Word8.

checked cast: The length of the array should be a multiple of the size of the target element, otherwise Nothing is returned.

unsafe pointer views: use an Array a as a Ptr b (Unsafe, Pre-release); convert an array of any type into a null-terminated CString Ptr (Unsafe, O(n) time, creates a copy of the array, Pre-release).

fold: Fold an array using a Fold.  (Pre-release)

stream fold: Fold an array using a stream fold operation.  (Pre-release)

(A usage sketch for the write and writeN folds follows the byte-decoder entries below.)

Streamly.Internal.Data.Binary.Decode (BSD-3-Clause, pre-release):

() decoder: A value of type () is encoded as 0 in binary encoding: 0 ==> ().  (Pre-release)

Bool decoder: 0 ==> False, 1 ==> True.  (Pre-release)

Ordering decoder: 0 ==> LT, 1 ==> EQ, 2 ==> GT.  (Pre-release)

byte match: Accept the input byte only if it is equal to the specified value.  (Pre-release)

any byte: Accept any byte.  (Pre-release)
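As promised above, a small sketch of the array construction folds (write for unknown size, writeN when the size is known). It assumes the public Streamly.Data.Array.Foreign module of streamly-0.8.0; the Char arrays are used only for illustration.

  import qualified Streamly.Prelude as Stream
  import qualified Streamly.Data.Array.Foreign as Array

  main :: IO ()
  main = do
      -- unknown size: write grows the array as needed
      a <- Stream.fold Array.write (Stream.fromList "hello world")
      -- known size: writeN 5 allocates once and stops after 5 elements
      b <- Stream.fold (Array.writeN 5) (Stream.fromList "hello world")
      putStrLn (Array.toList a)  -- hello world
      putStrLn (Array.toList b)  -- hello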
Word16 decoders: Parse two bytes as a Word16. In the big endian variant the first byte is the MSB of the Word16 and the second byte is the LSB; in the little endian variant the first byte is the LSB and the second byte is the MSB.  (Pre-release)

Word32 decoders: Parse four bytes as a Word32; the first byte is the MSB and the last byte the LSB for big endian, and the reverse for little endian.  (Pre-release)

Word64 decoders: Parse eight bytes as a Word64, in big endian, little endian, and host byte order variants.  (Pre-release)

Streamly.Internal.Data.Array.Stream.Fold.Foreign (BSD-3-Clause, experimental):

Array stream fold: an array stream fold is basically an array stream Parser that does not fail. For array stream folds, the counts returned by the fold steps include the leftover element count in the array that is currently being processed by the parser: if none of the elements is consumed by the parser the count is at least the whole array length, and if the whole array is consumed the count is 0.  (Pre-release)

fromFold: Convert an element Fold into an array stream fold.  (Pre-release)

fromParser: Convert an element Parser into an array stream fold. If the parser fails, the fold throws an exception.  (Pre-release)

adapting and mapping: adapt an array stream fold; map a monadic function on the output of a fold; map a pure function over the result of a fold (Functor instance).  (Pre-release)

fromPure / fromEffect: Folds that always yield a pure value, or the result of an effectful action, without consuming any input.  (Pre-release)

serialWith: Applies two folds sequentially on the input stream and combines their results using the supplied function. The Applicative instance uses it: (<*>) = serialWith id.  (Pre-release)

concatMap: Applies a fold on the input stream, generates the next fold from the output of the previously applied fold and then applies that fold. The Monad instance applies folds sequentially, so the next fold can depend on the output of the previous fold: (>>=) = flip concatMap.  (Pre-release)

Streamly.Internal.Data.Array.Stream.Foreign (BSD-3-Clause, pre-release):

arraysOf n stream: groups the elements in the input stream into arrays of n elements each.  (Pre-release)
  arraysOf n = Stream.chunksOf n (Array.writeN n)

concat: Convert a stream of arrays into a stream of their elements. Same as the following but more efficient:
  concat = Stream.unfoldMany Array.read

concatRev: Convert a stream of arrays into a stream of their elements, reversing the contents of each array before flattening.
  concatRev = Stream.unfoldMany Array.readRev

interpose: Flatten a stream of arrays after inserting the given element between arrays.  (Pre-release)
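The two equations just given (arraysOf and concat) use only public streamly-0.8.0 combinators, so they can be demonstrated directly; a minimal sketch:

  import Data.Word (Word8)
  import qualified Streamly.Prelude as Stream
  import qualified Streamly.Data.Array.Foreign as Array

  main :: IO ()
  main = do
      -- arraysOf n = Stream.chunksOf n (Array.writeN n)
      arrays <- Stream.toList
          $ Stream.chunksOf 4 (Array.writeN 4)
          $ Stream.fromList [1 .. 10 :: Word8]
      print (map Array.length arrays)  -- [4,4,2]
      -- concat = Stream.unfoldMany Array.read
      Stream.toList (Stream.unfoldMany Array.read (Stream.fromList arrays)) >>= print
      -- [1,2,3,4,5,6,7,8,9,10]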
interposeSuffix: Flatten a stream of arrays appending the given element after each array.

groupIOVecsOf maxBytes maxEntries: groups arrays in the incoming stream to create a stream of IOVec arrays with a maximum of maxBytes bytes in each array and a maximum of maxEntries entries in each array.

compact: Coalesce adjacent arrays in the incoming stream to form bigger arrays of a maximum specified size in bytes.

splitOn: Split a stream of arrays on a given separator byte, dropping the separator and coalescing all the arrays between two separators into a single array.

toArray: Given a stream of arrays, splice them all together to generate a single array. The stream must be finite.

array stream folds: fold an array stream using the supplied array stream Fold (Pre-release); a variant that also returns the remaining stream (Pre-release); and a combinator that applies an array stream Fold repeatedly on an array stream and emits the fold outputs in the output stream, see "Streamly.Prelude.foldMany" for more details (Pre-release).

Streamly.Internal.Network.Socket (BSD3, experimental):

SockSpec: Specify the socket protocol details.

bracketed use: a helper that runs a monadic computation, passing the socket handle to it; the handle is closed on exit, whether by normal termination or by raising an exception, and if closing the handle raises an exception then that exception is raised rather than any exception raised by the computation. A variant runs a streaming computation instead of a monadic computation; it inhibits stream fusion (Internal).

accept (unfold): Unfold a three-tuple (listenQLen, spec, addr) into a stream of connected protocol sockets corresponding to incoming connections. listenQLen is the maximum number of pending connections in the backlog, spec is the socket protocol and options specification, and addr is the protocol address where the server listens for incoming connections.

connect: Connect to a remote host using the given socket specification and remote address. Returns a connected socket or throws an exception.  (Pre-release)
A variant additionally takes a local address to bind to before connecting to the remote address.  (Pre-release)

accept (stream): Start a TCP stream server that listens for connections on the supplied server address specification (address family, local interface IP address and port). The server generates a stream of connected sockets. The first argument is the maximum number of pending connections in the backlog.  (Pre-release)

readChunk: Read a byte array from the handle up to a maximum of the requested size. If no data is available it blocks until some data becomes available; if data is available it returns that data immediately without blocking.

writeChunk: Write an Array to the handle.

toChunksWithBufferOf size h: reads a stream of arrays from the handle h; the maximum size of a single array is limited to size. It ignores the prevailing TextEncoding and NewlineMode on the Handle.

toChunks h: reads a stream of arrays from the socket handle h; the maximum size of a single array is limited to defaultChunkSize.

readChunksWithBufferOf: Unfold the tuple (bufsize, socket) into a stream of Word8 arrays. Read requests to the socket are performed using a buffer of size bufsize.
The size of an array in the resulting stream is always less than or equal to bufsize.

readChunks: Unfolds a socket into a stream of Word8 arrays. Requests to the socket are performed using a buffer of size defaultChunkSize; the sizes of the arrays in the resulting stream are therefore less than or equal to defaultChunkSize.

toBytes: Generate a stream of elements of the given type from a socket. The stream ends when EOF is encountered.

readWithBufferOf: Unfolds the tuple (bufsize, socket) into a byte stream; read requests to the socket are performed using buffers of bufsize.

read: Unfolds a Socket into a byte stream. IO requests to the socket are performed in sizes of defaultChunkSize.

Write a stream of arrays to a handle.

writeChunks: Write a stream of arrays to a socket. Each array in the stream is written to the socket as a separate IO request.

writeChunksWithBufferOf bufsize socket: writes a stream of arrays to socket after coalescing the adjacent arrays in chunks of bufsize. Multiple arrays are coalesced as long as the total size remains below the specified size; an array is never split, and if a single array is bigger than the specified size it is emitted as is.

writeWithBufferOf: Like write but provides control over the write buffer. Output is written to the IO device as soon as we collect the specified number of input elements; in other words, it writes a byte stream to the socket, accumulating the input in chunks of the specified number of bytes before writing.

write (handle): Write a byte stream to a file handle. Combines the bytes in chunks of size up to defaultChunkSize before writing. Note that the write behavior depends on the IOMode and the current seek position of the handle.

write (socket): Write a byte stream to a socket. Accumulates the input in chunks of up to defaultChunkSize bytes before writing.

Streamly.Internal.FileSystem.Handle (BSD3, experimental):

toChunksWithBufferOf size handle: reads a stream of arrays from the file handle; the maximum size of a single array is limited to size, and the actual size read may be less than or equal to size.

readChunksWithBufferOf: Unfold the tuple (bufsize, handle) into a stream of Word8 arrays. Read requests to the IO device are performed using a buffer of size bufsize; the size of an array in the resulting stream is always less than or equal to bufsize.

toChunks handle: reads a stream of arrays from the specified file handle; the maximum size of a single array is limited to defaultChunkSize, and the actual size read may be less than or equal to defaultChunkSize.
  toChunks = toChunksWithBufferOf defaultChunkSize

readChunks: Unfolds a handle into a stream of Word8 arrays. Requests to the IO device are performed using a buffer of size defaultChunkSize; the sizes of the arrays in the resulting stream are therefore less than or equal to defaultChunkSize.

readWithBufferOf: Unfolds the tuple (bufsize, handle) into a byte stream; read requests to the IO device are performed using buffers of bufsize.

toBytesWithBufferOf bufsize handle: reads a byte stream from a file handle; reads are performed in chunks of up to bufsize.  (Pre-release)

read: Unfolds a file handle into a byte stream. IO requests to the device are performed in sizes of defaultChunkSize.

toBytes: Generate a byte stream from a file Handle.  (Pre-release)

putChunk: Write an Array to a file handle.

putChunks: Write a stream of arrays to a handle.

putChunksWithBufferOf bufsize handle stream: writes a stream of arrays to handle after coalescing the adjacent arrays in chunks of bufsize. The chunk size is only a maximum; the actual writes could be smaller, as the arrays are not split to fit exactly into the specified size.
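A minimal sketch of chunked, handle-based copying built on the read unfold above and the write fold from the same module. It assumes the public Streamly.FileSystem.Handle module of streamly-0.8.0; the file names are placeholders. The *WithBufferOf variants documented here follow the same shape but let you choose the buffer size.

  import System.IO (IOMode (ReadMode, WriteMode), withFile)
  import qualified Streamly.Prelude as Stream
  import qualified Streamly.FileSystem.Handle as Handle

  -- copy a file by streaming bytes; buffering uses defaultChunkSize internally
  copyFileStreaming :: FilePath -> FilePath -> IO ()
  copyFileStreaming src dst =
      withFile src ReadMode $ \inH ->
          withFile dst WriteMode $ \outH ->
              Stream.fold (Handle.write outH) (Stream.unfold Handle.read inH)

  main :: IO ()
  main = copyFileStreaming "in.txt" "out.txt"  -- placeholder paths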
putBytesWithBufferOf bufsize handle stream: writes stream to handle in chunks of bufsize; a write is performed to the IO device as soon as we collect the required input size.

putBytes: Write a byte stream to a file handle, accumulating the input in chunks of up to defaultChunkSize before writing. NOTE: this may perform better than the write fold; you can try it if you need some extra performance.

writeChunks: Write a stream of arrays to a handle. Each array in the stream is written to the device as a separate IO request.

writeChunksWithBufferOf bufsize handle: writes a stream of arrays to handle after coalescing the adjacent arrays in chunks of bufsize. An array is never split; if a single array is bigger than the specified size it is emitted as is. Multiple arrays are coalesced as long as the total size remains below the specified size.

writeWithBufferOf reqSize handle: writes the input stream to handle. Bytes in the input stream are collected into a buffer until we have a chunk of reqSize and then written to the IO device.

write: Write a byte stream to a file handle, accumulating the input in chunks of up to defaultChunkSize before writing to the IO device.

Streamly.Internal.FileSystem.File (BSD3, pre-release):

withFile name mode act: opens a file using the given mode and passes the resulting handle to the computation act. The handle will be closed on exit from withFile, whether by normal termination or by raising an exception; if closing the handle raises an exception, then that exception will be raised by withFile rather than any exception raised by act.  (Pre-release)

single-array writes: write an array to a file, overwriting the file if it exists; or append an array to a file.

toChunksWithBufferOf size file: reads a stream of arrays from file; the maximum size of a single array is specified by size, and the actual size read may be less than or equal to size.

toChunks file: reads a stream of arrays from file; the maximum size of a single array is limited to defaultChunkSize, and the actual size read may be less than defaultChunkSize.
  toChunks = toChunksWithBufferOf defaultChunkSize

readChunksWithBufferOf: Unfold the tuple (bufsize, filepath) into a stream of Word8 arrays; read requests to the IO device are performed using a buffer of size bufsize, so the size of an array in the resulting stream is always less than or equal to bufsize.  (Pre-release)

readChunks: Unfolds a FilePath into a stream of Word8 arrays using buffers of defaultChunkSize.  (Pre-release)

readWithBufferOf: Unfolds the tuple (bufsize, filepath) into a byte stream using buffers of bufsize.  (Pre-release)

read: Unfolds a file path into a byte stream; IO requests to the device are performed in sizes of defaultChunkSize.

toBytes: Generate a stream of bytes from a file specified by path; the stream ends when EOF is encountered. The file is locked using multiple-reader single-writer locking mode.  (Pre-release)

fromChunks: Write a stream of arrays to a file, overwriting the file if it exists.

fromBytesWithBufferOf: Like fromBytes but provides control over the write buffer; output is written to the IO device as soon as we collect the specified number of input elements.

fromBytes: Write a byte stream to a file.
Combines the bytes in chunks of size up to defaultChunkSize before writing. If the file exists it is truncated to zero size before writing; if it does not exist it is created. The file is locked using single-writer locking mode.  (Pre-release)

writeChunks: Write a stream of chunks to a handle. Each chunk in the stream is written to the device as a separate IO request.  (Pre-release)

writeWithBufferOf chunkSize handle: writes the input stream to handle; bytes in the input stream are collected into a buffer until we have a chunk of size chunkSize and then written to the IO device.  (Pre-release)

write: Write a byte stream to a file, accumulating the input in chunks of up to defaultChunkSize before writing to the IO device.  (Pre-release)

appending arrays: Append a stream of arrays to a file. A buffered variant provides control over the write buffer: output is written to the IO device as soon as we collect the specified number of input elements.

append: Append a byte stream to a file, combining the bytes in chunks of size up to defaultChunkSize before writing. If the file exists then the new data is appended to the file; if it does not exist it is created. The file is locked using single-writer locking mode.

Streamly.Internal.Unicode.Char (BSD-3-Clause, experimental): a predicate selecting alphabetic characters in the ASCII character set.  (Pre-release)

Streamly.Internal.Unicode.Stream (BSD-3-Clause, experimental):

decodeLatin1: Decode a stream of bytes to Unicode characters by mapping each byte to a corresponding Unicode Char in the 0-255 range. Since 0.7.0 (Streamly.Data.Unicode.Stream).

encodeLatin1': Encode a stream of Unicode characters to bytes by mapping each character to a byte in the 0-255 range. Throws an error if the input stream contains characters beyond 255.

encodeLatin1: Like encodeLatin1' but silently maps input codepoints beyond 255 to arbitrary Latin1 chars in the 0-255 range; no error or exception is thrown when such mapping occurs. Since 0.7.0 (Streamly.Data.Unicode.Stream); since 0.8.0 (lenient behaviour).

encodeLatin1_: Like encodeLatin1 but drops the input characters beyond 255.

decodeUtf8: Decode a UTF-8 encoded bytestream to a stream of Unicode characters. Any invalid codepoint encountered is replaced with the Unicode replacement character. Since 0.7.0 (Streamly.Data.Unicode.Stream); since 0.8.0 (lenient behaviour).

decodeUtf8': Decode a UTF-8 encoded bytestream to a stream of Unicode characters; throws an error if an invalid codepoint is encountered.

decodeUtf8_: Decode a UTF-8 encoded bytestream to a stream of Unicode characters; any invalid codepoint encountered is dropped.

(Deprecated "same as" aliases and Pre-release low-level variants of the above also exist.)

encodeUtf8': Encode a stream of Unicode characters to a UTF-8 encoded bytestream. When any invalid character (a surrogate in the range U+D800-U+DFFF) is encountered in the input stream the function errors out. See section "3.9 Unicode Encoding Forms" in https://www.unicode.org/versions/Unicode13.0.0/UnicodeStandard-13.0.pdf.

encodeUtf8: Encode a stream of Unicode characters to a UTF-8 encoded bytestream. Any invalid characters (surrogates in the range U+D800-U+DFFF) in the input stream are replaced by the Unicode replacement character U+FFFD. Since 0.7.0 (Streamly.Data.Unicode.Stream); since 0.8.0 (lenient behaviour).

encodeUtf8_: Encode a stream of Unicode characters to a UTF-8 encoded bytestream; any invalid characters in the input stream are dropped.

encoding a stream of strings: Encode a stream of strings using the supplied encoding scheme; each string is encoded as an Array Word8.
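A minimal sketch of the lenient versus strict UTF-8 behaviour described above, using the public Streamly.Unicode.Stream module of streamly-0.8.0; the byte values are arbitrary.

  import Data.Word (Word8)
  import qualified Streamly.Prelude as Stream
  import qualified Streamly.Unicode.Stream as Unicode

  main :: IO ()
  main = do
      let valid   = [0x68, 0x69] :: [Word8]  -- "hi"
          invalid = [0x68, 0xC3] :: [Word8]  -- 0xC3 starts an incomplete sequence
      -- lenient decoding substitutes U+FFFD for the bad byte instead of failing
      Stream.toList (Unicode.decodeUtf8 (Stream.fromList invalid)) >>= print
      -- "h\65533"
      -- encode . decode over valid input reproduces the original bytes
      Stream.toList (Unicode.encodeUtf8 (Unicode.decodeUtf8 (Stream.fromList valid)))
          >>= print  -- [104,105]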
stripping leading whitespace: stripHead = S.dropWhile isSpace  (Pre-release)

lines: Fold each line of the stream using the supplied Fold and stream the result.  (Pre-release)
  >>> Stream.toList $ lines Fold.toList (Stream.fromList "lines\nthis\nstring\n\n\n")
  ["lines","this","string","",""]
  lines = S.splitOnSuffix (== '\n')

words: Fold each word of the stream using the supplied Fold and stream the result.  (Pre-release)
  >>> Stream.toList $ words Fold.toList (Stream.fromList "fold these words")
  ["fold","these","words"]
  words = S.wordsBy isSpace

unlines: Unfold a stream to character streams using the supplied Unfold and concat the results, suffixing a newline character '\n' to each stream.  (Pre-release)
  unlines = Stream.interposeSuffix '\n'
  unlines = Stream.intercalateSuffix Unfold.fromList "\n"

unwords: Unfold the elements of a stream to character streams using the supplied Unfold and concat the results with a whitespace character infixed between the streams.  (Pre-release)
  unwords = Stream.interpose ' '
  unwords = Stream.intercalate Unfold.fromList " "

Streamly.Internal.Console.Stdio (BSD-3-Clause, experimental):

read: Unfold standard input into a stream of Word8.

getBytes: Read a byte stream from standard input.  (Pre-release)
  getBytes = Handle.toBytes stdin
  getBytes = Stream.unfold Stdio.read ()

getChars: Read a character stream from UTF-8 encoded standard input.  (Pre-release)
  getChars = Unicode.decodeUtf8 Stdio.getBytes

readChunks: Unfolds standard input into a stream of Word8 arrays.

getChunks: Read a stream of chunks from standard input. The maximum size of a single chunk is limited to defaultChunkSize; the actual size read may be less.  (Pre-release)
  getChunks = Handle.toChunks stdin
  getChunks = Stream.unfold Stdio.readChunks ()

write, and an error variant: Folds writing a stream of Word8 to standard output and to standard error.

putBytes: Write a stream of bytes to standard output.  (Pre-release)
  putBytes = Handle.putBytes stdout
  putBytes = Stream.fold Stdio.write

putChars: Encode a character stream to UTF-8 and write it to standard output.  (Pre-release)
  putChars = Stdio.putBytes . Unicode.encodeUtf8

writeChunks, and an error variant: Folds writing a stream of Array Word8 to standard output and to standard error.

putChunks: Write a stream of chunks to standard output.  (Pre-release)
  putChunks = Handle.putChunks stdout
  putChunks = Stream.fold Stdio.writeChunks

writing strings: Write a stream of strings to standard output using the supplied encoding, or using UTF-8; output is flushed to the device for each string. A further variant adds a newline at the end of each string. XXX The newline variant is not portable; on Windows we need to use "\r\n" instead.  (Pre-release)
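A minimal sketch tying the console and Unicode APIs together: read stdin as bytes, decode as UTF-8, upper-case, re-encode and write to stdout. It assumes the public Streamly.Console.Stdio and Streamly.Unicode.Stream modules of streamly-0.8.0.

  import Data.Char (toUpper)
  import qualified Streamly.Prelude as Stream
  import qualified Streamly.Console.Stdio as Stdio
  import qualified Streamly.Unicode.Stream as Unicode

  -- an upper-casing echo; getChars/putChars above are the internal shorthands
  -- for the decode/encode steps spelled out here
  main :: IO ()
  main =
      Stream.fold Stdio.write
          $ Unicode.encodeUtf8
          $ Stream.map toUpper
          $ Unicode.decodeUtf8
          $ Stream.unfold Stdio.read ()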
Streamly.Internal.Network.Inet.TCP (BSD3, experimental):

acceptOnAddr: Unfold a tuple (ipAddr, port) into a stream of connected TCP sockets. ipAddr is the local IP address and port is the local port on which connections are accepted.

acceptOnPort: Like acceptOnAddr but binds on the IPv4 address 0.0.0.0, i.e. on all IPv4 addresses/interfaces of the machine, and listens for TCP connections on the specified port.
  acceptOnPort = UF.supplyFirst acceptOnAddr (0,0,0,0)

acceptOnPortLocal: Like acceptOnPort but binds on the localhost IPv4 address 127.0.0.1. The server can only be accessed from the local host; it cannot be accessed from other hosts on the network.
  acceptOnPortLocal = UF.supplyFirst acceptOnAddr (127,0,0,1)

connectionsOnAddr: Like acceptOnAddr but returns a stream; binds on the specified IPv4 address of the machine and listens for TCP connections on the specified port.  (Pre-release)

connectionsOnPort: Like connectionsOnAddr but binds on the IPv4 address 0.0.0.0, i.e. on all IPv4 addresses/interfaces of the machine.  (Pre-release)
  connectionsOnPort = connectionsOnAddr (0,0,0,0)

connectionsOnLocalHost: Like connectionsOnAddr but binds on the localhost IPv4 address 127.0.0.1; the server can only be accessed from the local host.  (Pre-release)
  connectionsOnLocalHost = connectionsOnAddr (127,0,0,1)

connect: Connect to the specified IP address and port number. Returns a connected socket or throws an exception.

monadic bracket: Connect to a remote host using IP address and port and run the supplied action on the resulting socket. The socket is closed on normal termination or in case of an exception; if closing the socket raises an exception, then that exception is raised instead.  (Pre-release)

unfold transformer: Transform an Unfold from a Socket to an unfold from a remote IP address and port. The resulting unfold opens a socket, uses it with the supplied unfold and then makes sure that the socket is closed on normal termination or in case of an exception; if closing the socket raises an exception, then that exception is raised.  (Pre-release)

bracketed connection (addr port act): opens a connection to the specified IPv4 host address and port and passes the resulting socket handle to the computation act. The handle is closed on exit, whether by normal termination or by raising an exception; if closing the handle raises an exception then that exception is raised rather than any exception raised by act.  (Pre-release)

reading: an unfold and a stream version that read a stream from the supplied IPv4 host address and port number.

writing arrays: fold and stream versions that write a stream of arrays to the supplied IPv4 host address and port number, plus buffered variants that provide control over the write buffer (output is written to the IO device as soon as the specified number of input elements is collected).

writing bytes: fold and stream versions that write a byte stream to the supplied IPv4 host address and port number.

transformation: Send an input stream to a remote host and produce the output stream from the host. The server host just acts as a transformation function on the input stream; both sending and receiving happen asynchronously.  (Pre-release)
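A minimal client-side sketch of the connect-and-write path described above. It assumes the public streamly-0.8.0 modules Streamly.Network.Inet.TCP (connect) and Streamly.Network.Socket (write fold); the address, port and payload are placeholders.

  import Control.Exception (bracket)
  import Data.Word (Word8)
  import Network.Socket (close)  -- from the "network" package
  import qualified Streamly.Prelude as Stream
  import qualified Streamly.Network.Inet.TCP as TCP
  import qualified Streamly.Network.Socket as Socket

  main :: IO ()
  main =
      -- connect returns a connected socket; close it even on exceptions
      bracket (TCP.connect (127, 0, 0, 1) 8090) close $ \sk ->
          -- the write fold accumulates bytes into chunks before sending
          Stream.fold (Socket.write sk) (Stream.fromList ([104, 105, 10] :: [Word8]))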
Streamly.Internal.Unicode.Array.Prim.Pinned and Streamly.Internal.Unicode.Array.Char (BSD-3-Clause, experimental) provide the same four operations over streams of character arrays (pinned primitive arrays and foreign arrays respectively; in the pinned-array module the doctest results print as fromListN values):

lines: Break a string up into a stream of strings at newline characters. The resulting strings do not contain newlines.
  lines = S.lines A.write
  >>> Stream.toList $ Unicode.lines $ Stream.fromList "lines\nthis\nstring\n\n\n"
  ["lines","this","string","",""]

words: Break a string up into a stream of strings delimited by characters representing white space.
  words = S.words A.write
  >>> Stream.toList $ Unicode.words $ Stream.fromList "A newline\nis considered white space?"
  ["A","newline","is","considered","white","space?"]

unlines: Flattens the stream of Array Char, after appending a terminating newline to each string. unlines is an inverse operation to lines; note that, in general, unlines . lines /= id.
  unlines = S.unlines A.read
  >>> Stream.toList $ Unicode.unlines $ Stream.fromList ["lines", "this", "string"]
  "lines\nthis\nstring\n"

unwords: Flattens the stream of Array Char, after appending a separating space to each string. unwords is an inverse operation to words; note that, in general, unwords . words /= id.
  unwords = S.unwords A.read
  >>> Stream.toList $ Unicode.unwords $ Stream.fromList ["unwords", "this", "string"]
  "unwords this string"

Streamly.Internal.FileSystem.FD (BSD3, experimental):

Handle: A Handle is returned by openFile and is subsequently used to perform read and write operations on a file.

stdin, stdout, stderr: File handles for standard input, standard output and standard error.

openFile: Open a file that is not a directory and return a file handle. openFile enforces a multiple-reader single-writer locking on files: there may either be many handles on the same file which manage input, or just one handle on the file which manages output. If any open handle is managing a file for output, no new handle can be allocated for that file.
If any open handle is managing a file for input, new handles can only be allocated if they do not manage output. Whether two files are the same is implementation-dependent, but they should normally be the same if they have the same absolute path name and neither has been renamed, for example.

readArrays h: reads a stream of arrays from file handle h; the maximum size of a single array is limited to defaultChunkSize. readArrays ignores the prevailing TextEncoding and NewlineMode on the Handle.
  readArrays = readArraysOfUpto defaultChunkSize

readInChunksOf chunkSize handle: reads a byte stream from a file handle; reads are performed in chunks of up to chunkSize. The stream ends as soon as EOF is encountered.

Streamly.Internal.FileSystem.Event.Linux (Pre-release):

Event: An event generated by the file system. Use the accessor functions to examine the event.

WhenExists: What to do if a watch already exists when a path is added to the watch: fail, do not set an existing setting to Off but only set settings to On, or replace the existing settings with the new settings.

Toggle: Whether a setting is On or Off.

Config: Watch configuration, used to specify the events of interest and the behavior of the watch.

Configuration setters (each takes a Toggle and transforms a Config):
  follow symbolic links -- if the pathname to be watched is a symbolic link then watch the target of the symbolic link instead of the symbolic link itself (default: On)
  unwatch moved -- if an object moves out of the directory being watched then stop watching it (default: On)
  when exists -- when adding a new path to the watch, specify what to do if a watch already exists on that path (default: FailIfExists)
  one shot -- watch the object only for one event and then remove it from the watch (default: Off)
  only directory -- watch the object only if it is a directory; this provides a race-free way to ensure that the watched object is a directory (default: Off)
  root deleted -- report when the watched path itself gets deleted (default: On)
  root moved -- report when the watched root path itself gets renamed (default: On)
  metadata changed -- report when the metadata, e.g. owner, permission modes, modification times of an object changes (default: On)
  accessed -- report when a file is accessed (default: On)
  opened -- report when a file is opened (default: On)
  write-closed -- report when a file that was opened for writes is closed (default: On)
  non-write-closed -- report when a file that was opened for not writing is closed (default: On)
  created -- report when a file is created (default: On)
  deleted -- report when a file is deleted (default: On)
  moved from -- report the source of a move (default: On)
  moved to -- report the target of a move (default: On)
  modified -- report when a file is modified (default: On)
  all events -- set all events On or Off (default: On)

defaultConfig: the composition of the defaults listed above.

addToWatch cfg watch root subpath: adds subpath to the list of paths being monitored under root via the watch handle watch. root must be an absolute path and subpath must be relative to root.

removing a watch root: remove an absolute root path from a Watch; if a path was moved after adding, you need to provide the original path which was used to add the watch.

watchPathsWith: Start monitoring a list of file system paths for file system events with the supplied configuration operation over the defaultConfig. The paths could be files or directories.
When the path is a directory, only the files and directories directly under the watched directory are monitored; the contents of subdirectories are not monitored. Monitoring starts from the current time onwards. The paths are specified as "/" separated Arrays of Word8, for example:
  watchPathsWith (<set one event class> On . <set another> Off) [Array.fromCString# "dir"#]

watchPaths: Like watchPathsWith but uses the defaultConfig options.
  watchPaths = watchPathsWith id

watchTreesWith: Start monitoring a list of file system paths for file system events with the supplied configuration operation over the defaultConfig. The paths could be files or directories; when a path is a directory, the whole directory tree under it is watched recursively. Monitoring starts from the current time onwards.
Note that a recursive watch on a large directory tree could be expensive: when starting the watch, the whole tree must be read and watches are started on each directory in the tree, so the initial time to start the watch as well as the memory required is proportional to the number of directories in the tree. When new directories are created under the tree they are added to the watch on receiving the directory-create event; however, the creation of a directory and adding a watch for it is not atomic. The implementation takes care of this and makes sure that watches are added for all directories, but in the meantime the directory may have received more events which may get lost. Handling of any such lost events is yet to be implemented. See the Linux inotify man page for more details.

watchTrees: Like watchTreesWith but uses the defaultConfig options.
  watchTrees = watchTreesWith id

getRoot: Get the watch root corresponding to the Event. Note that if a path was moved after adding to the watch, this will give the original path and not the new path after moving. TBD: we can possibly update the watch root on a move-self event.

getRelPath: Get the file system object path for which the event is generated, relative to the watched root. The path is a "/" separated array of bytes.

getAbsPath: Get the absolute file system object path for which the event is generated.

getCookie: A cookie is set when a rename occurs. The cookie value can be used to connect the "moved from" and "moved to" events: if both events belong to the same move operation then they will have the same cookie value.

overflow: The event queue overflowed (the watch descriptor is invalid for this event) and we may have lost some events. The user application must scan everything under the watched paths to know the current state.

root unwatched: A path was removed from the watch, explicitly or automatically (the file was deleted, or the filesystem was unmounted). Occurs only for a watched path.

root deleted: The watched file/directory was itself deleted. (This event also occurs if an object is moved to another filesystem, since mv(1) in effect copies the file to the other filesystem and then deletes it from the original filesystem.) In addition, a root-unwatched event will subsequently be generated for the watch descriptor. Occurs only for a watched path.

root moved: The watched file/directory was itself moved within the file system. Occurs only for a watched path.

root unmounted: The filesystem containing the watched object was unmounted. In addition, a root-unwatched event will subsequently be generated for the watch descriptor. Occurs only for a watched path.
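A heavily hedged sketch of the watch API described above: watchPaths is taken from the equations in the text, showEvent is documented just below, and the way the path argument is constructed (a plain list of "/" separated Array Word8 values, built here from ASCII) is an assumption that has not been verified against the actual signature, which may use a non-empty list instead.

  import Data.Char (ord)
  import Data.Word (Word8)
  import qualified Streamly.Prelude as Stream
  import qualified Streamly.Data.Array.Foreign as Array
  import qualified Streamly.Internal.FileSystem.Event.Linux as Event  -- internal, Pre-release

  -- encode an ASCII path as the Array Word8 the watch API expects (assumption)
  pathArr :: String -> Array.Array Word8
  pathArr = Array.fromList . map (fromIntegral . ord)

  -- print one line per file system event under /tmp/watch-dir (placeholder path);
  -- the container for the paths may need to be a NonEmpty list in some versions
  main :: IO ()
  main =
      Stream.mapM_ (putStrLn . Event.showEvent)
          $ Event.watchPaths [pathArr "/tmp/watch-dir"]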
metadata changed: Determine whether the event indicates an inode metadata change for an object contained within the monitored path. Metadata change may include permissions (e.g., chmod(2)), timestamps (e.g., utimensat(2)), extended attributes (setxattr(2)), link count (since Linux 2.6.25; e.g., for the target of link(2) and for unlink(2)), and user/group ID (e.g., chown(2)). Can occur for the watched path or a file inside it.

accessed: The file was accessed (e.g. read, execve). Occurs only for a file inside the watched directory.

opened: A file or directory was opened. Occurs only for a file inside the watched directory.

write-closed: A file opened for writing was closed. Occurs only for a file inside the watched directory.

non-write-closed: A file or directory opened for read but not write was closed. Can occur for the watched path or a file inside it.

created: A file/directory was created in the watched directory (e.g., open(2) O_CREAT, mkdir(2), link(2), symlink(2), bind(2) on a UNIX domain socket). Occurs only for an object inside the watched directory.

deleted: A file/directory was deleted from the watched directory. Occurs only for an object inside the watched directory.

moved from: Generated for the original path when an object is moved from under a monitored directory. Occurs only for an object inside the watched directory.

moved to: Generated for the new path when an object is moved under a monitored directory. Occurs only for an object inside the watched directory.

modified: Determine whether the event indicates modification of an object within the monitored path. This event is generated only for files and not directories. Occurs only for an object inside the watched directory.

isDir: Determine whether the event is for a directory path.

showEvent: Convert an Event record to a String representation.
(All of the above are Pre-release.)

Streamly (top-level module, deprecated re-exports, BSD3, experimental):

Aliases of the form "Same as drain", "Same as runStream", "Same as Streamly.Prelude.runStream", "Same as drain . fromWSerial", "Same as drain . fromParallel", "Same as drain . fromAsync", "Same as drain . zipping" and "Same as drain . zippingAsync" are provided for the older run* combinators.

mkAsync: Make a stream asynchronous: triggers the computation and returns a stream in the underlying monad representing the output generated by the original computation. The returned action is exhaustible and must be drained once.
If not drained fully we may have a thread blocked forever, and once exhausted it will always return empty.

(The remaining entries in this module are "Same as" aliases for combinators documented elsewhere.)
Streamly.Internal.Data.Sink.Type&Streamly.Internal.Data.SmallArray.Type*Streamly.Internal.Data.Stream.StreamD.Step$Streamly.Internal.Data.Producer.Type+Streamly.Internal.Data.Stream.StreamDK.Type&Streamly.Internal.Data.Stream.StreamDKStreamly.Internal.Data.Time$Streamly.Internal.Data.Time.TimeSpec!Streamly.Internal.Data.Time.Units&Streamly.Internal.Data.Time.Clock.TypeStreamly.Internal.Data.SVar*Streamly.Internal.Data.Stream.StreamK.Type"Streamly.Internal.Data.IOFinalizer#Streamly.Internal.Data.Tuple.Strict Streamly.Internal.Data.Pipe.TypeStreamly.Internal.Data.Pipe Streamly.Internal.Data.Fold.Type%Streamly.Internal.Data.Stream.StreamKStreamly.Internal.Data.Sink*Streamly.Internal.Data.Parser.ParserD.Type*Streamly.Internal.Data.Parser.ParserK.Type)Streamly.Internal.Data.Parser.ParserD.Tee%Streamly.Internal.Data.Parser.ParserD&Streamly.Internal.Data.Producer.SourceStreamly.Internal.Data.ParserStreamly.Internal.Data.Fold.Tee"Streamly.Internal.Data.Unfold.Type*Streamly.Internal.Data.Stream.StreamD.Type*Streamly.Internal.Data.Stream.StreamD.Lift/Streamly.Internal.Data.Stream.StreamD.Exception!Streamly.Internal.Data.Time.ClockStreamly.Internal.Data.Unfold.Streamly.Internal.Data.Stream.StreamD.Generate"Streamly.Internal.Data.Stream.SVar&Streamly.Internal.Data.Stream.Parallel#Streamly.Internal.Data.Stream.Async#Streamly.Internal.Data.Stream.AheadStreamly.Internal.Data.Producer"Streamly.Internal.FileSystem.IOVec!Streamly.Internal.FileSystem.FDIO Streamly.Internal.Foreign.Malloc-Streamly.Internal.Data.Array.Foreign.Mut.Type)Streamly.Internal.Data.Array.Foreign.Type/Streamly.Internal.Data.Stream.StreamD.TransformStreamly.Internal.Ring.Foreign-Streamly.Internal.Data.Stream.StreamD.Nesting/Streamly.Internal.Data.Stream.StreamD.Eliminate%Streamly.Internal.Data.Stream.Prelude!Streamly.Internal.Data.Stream.Zip$Streamly.Internal.Data.Stream.Serial2Streamly.Internal.Data.Stream.IsStream.Enumeration2Streamly.Internal.Data.Stream.IsStream.CombinatorsStreamly.Internal.Data.Fold+Streamly.Internal.Data.Stream.IsStream.Lift0Streamly.Internal.Data.Stream.IsStream.Exception-Streamly.Internal.Data.Stream.IsStream.Common0Streamly.Internal.Data.Stream.IsStream.Transform-Streamly.Internal.Data.Stream.IsStream.Reduce/Streamly.Internal.Data.Stream.IsStream.Generate-Streamly.Internal.Data.Stream.IsStream.Expand0Streamly.Internal.Data.Stream.IsStream.Eliminate!Streamly.Internal.Data.SmallArray/Streamly.Internal.Data.Array.Stream.Mut.Foreign1Streamly.Internal.Data.Array.Prim.Pinned.Mut.Type(Streamly.Internal.Data.Array.Prim.Pinned*Streamly.Internal.Data.Array.Prim.Mut.Type&Streamly.Internal.Data.Array.Prim.Type!Streamly.Internal.Data.Array.Prim*Streamly.Internal.Data.Stream.IsStream.Top Streamly.Internal.FileSystem.DirStreamly.Internal.Data.List$Streamly.Internal.Data.Array.Foreign$Streamly.Internal.Data.Binary.Decode0Streamly.Internal.Data.Array.Stream.Fold.Foreign+Streamly.Internal.Data.Array.Stream.Foreign Streamly.Internal.Network.Socket#Streamly.Internal.FileSystem.Handle!Streamly.Internal.FileSystem.FileStreamly.Internal.Unicode.Char Streamly.Internal.Unicode.StreamStreamly.Internal.Console.Stdio"Streamly.Internal.Network.Inet.TCP+Streamly.Internal.Unicode.Array.Prim.Pinned$Streamly.Internal.Unicode.Array.CharStreamly.Internal.FileSystem.FD(Streamly.Internal.FileSystem.Event.LinuxControl.Monad.Trans.ListListT&Streamly.Internal.Data.Stream.IsStreamconcatPairsWithStreamly.PreludefoldrfoldrMserial foldIterateM concatMapfoldMany Control.MonadmfixfromPure fromEffect serialWithsplit_altmanysomediedieMParserteeWith 
teeWithFst teeWithMinshortestlongestfromFoldpeekeofsatisfymaybeeither takeBetweentakeEQtakeGE takeWhile takeWhile1 sliceSepByPwordBygroupBygroupByRollingeqBy lookahead deintercalatesequencechoice countBetweencountmanyTillData.TraversableControl.ApplicativeStreamly.Data.Fold.TeeControl.Exceptionmaskbeforeafter_after onExceptionbracket_bracketfinally_finallyghandlehandleStreamly.Data.Unfoldparallel concatMapWithAsyncTasync maxBufferwAsyncahead%Streamly.Internal.Data.Stream.StreamD Data.FoldablefoldwSerial,Streamly.Internal.Data.Stream.IsStream.TypesmapM_!! splitOnSuffixsplitWithSuffixStreamly.Data.FoldSerialmapStreamly.Data.SmallArrayStreamly.Data.Prim.Array outerProduct Data.ArrayArray Data.List//Streamly.Data.ArrayStreamly.Data.Array.ForeigndefaultChunkSizeStreamly.FileSystem.HandleAStreamly.Console.StdioStreamly.Data.Unicode.StreamStreamly.Memory.ArrayStreamly.Network.SocketStreamly.Network.Inet.TCPconcatFoldableWithconcatMapFoldableWithconcatForFoldableWithStreamly.Unicode.StreambaseGHC.Base<> Semigroup Data.Either fromRightfromLeftGHC.ListsplitAtstimessconcatGHC.ErrerrorWithoutStackTrace'primitive-0.7.1.0-Jxsyd70oUttYiCXCa0HqVData.Primitive.TypesPrimData.Primitive.Arrayarray##.unsafeWithForeignPtroneShotRunInIOrunInIO MonadAsyncdoForkfork forkManagedassertMverifyverifyMdiscardatomicModifyIORefCASatomicModifyIORefCAS_ writeBarrierstoreLoadBarrier contListMapEither'Left'Right'isLeft'isRight' fromLeft' fromRight' $fShowEither'Maybe'Just'Nothing'toMaybe fromJust'isJust' $fShowMaybe'SinkSmallMutableArray SmallArray newSmallArrayreadSmallArraywriteSmallArrayindexSmallArrayMindexSmallArrayindexSmallArray##cloneSmallArraycloneSmallMutableArrayfreezeSmallArrayunsafeFreezeSmallArraythawSmallArrayunsafeThawSmallArraycopySmallArraycopySmallMutableArraysizeofSmallArraysizeofSmallMutableArraytraverseSmallArrayPmapSmallArray' runSmallArraysmallArrayFromListNsmallArrayFromList$fDataSmallArray$fRead1SmallArray$fReadSmallArray$fShow1SmallArray$fShowSmallArray$fIsListSmallArray$fMonoidSmallArray$fSemigroupSmallArray$fMonadFixSmallArray$fMonadZipSmallArray$fMonadPlusSmallArray$fMonadFailSmallArray$fAlternativeSmallArray$fApplicativeSmallArray$fFunctorSmallArray$fTraversableSmallArray$fFoldableSmallArray$fOrdSmallArray$fOrd1SmallArray$fEqSmallArray$fEq1SmallArray$fDataSmallMutableArray$fEqSmallMutableArray$fMonadSmallArrayStepYieldSkipStop $fFunctorStep NestedLoop OuterLoop InnerLoopProducernilMnilunfoldrMfromList translatelmapconcat$fFunctorProducerStreamconsconsMunfoldr replicateMunconsfoldrSdrainperiodic withClockTimeSpecsecnsec$fStorableTimeSpec $fNumTimeSpec $fOrdTimeSpec $fEqTimeSpec$fReadTimeSpec$fShowTimeSpecRelTime RelTime64AbsTime TimeUnit64TimeUnit MilliSecond64 MicroSecond64 NanoSecond64 toAbsTime fromAbsTime toRelTime64 fromRelTime64 diffAbsTime64addToAbsTime64 toRelTime fromRelTime diffAbsTime addToAbsTimeshowNanoSecond64 showRelTime64$fTimeUnitMilliSecond64$fTimeUnitMicroSecond64$fTimeUnitNanoSecond64$fTimeUnitTimeSpec$fTimeUnit64MilliSecond64$fTimeUnit64MicroSecond64$fTimeUnit64NanoSecond64 $fEqRelTime $fReadRelTime $fShowRelTime $fNumRelTime $fOrdRelTime $fEqRelTime64$fReadRelTime64$fShowRelTime64$fEnumRelTime64$fBoundedRelTime64$fNumRelTime64$fRealRelTime64$fIntegralRelTime64$fOrdRelTime64 $fEqAbsTime $fOrdAbsTime 
$fShowAbsTime$fEqMilliSecond64$fReadMilliSecond64$fShowMilliSecond64$fEnumMilliSecond64$fBoundedMilliSecond64$fNumMilliSecond64$fRealMilliSecond64$fIntegralMilliSecond64$fOrdMilliSecond64$fPrimMilliSecond64$fEqMicroSecond64$fReadMicroSecond64$fShowMicroSecond64$fEnumMicroSecond64$fBoundedMicroSecond64$fNumMicroSecond64$fRealMicroSecond64$fIntegralMicroSecond64$fOrdMicroSecond64$fPrimMicroSecond64$fEqNanoSecond64$fReadNanoSecond64$fShowNanoSecond64$fEnumNanoSecond64$fBoundedNanoSecond64$fNumNanoSecond64$fRealNanoSecond64$fIntegralNanoSecond64$fOrdNanoSecond64$fPrimNanoSecond64Clock MonotonicRealtimeProcessCPUTime ThreadCPUTime MonotonicRawMonotonicCoarseUptimeRealtimeCoarsegetTime $fEqClock $fEnumClock$fGenericClock $fReadClock $fShowClockHeapDequeueResultClearingWaitingReadyState streamVarSVar svarStylesvarMrun svarStopStyle svarStopBy outputQueueoutputDoorBell readOutputQ postProcessoutputQueueFromConsumeroutputDoorBellFromConsumermaxWorkerLimitmaxBufferLimitpushBufferSpacepushBufferPolicypushBufferMVar remainingWork yieldRateInfoenqueue isWorkDone isQueueDone needDoorBellworkLoop workerThreads workerCount accountThreadworkerStopMVar svarStatssvarRefsvarInspectMode svarCreator outputHeapaheadWorkQueue SVarStopStyleStopNoneStopAnyStopByLimit UnlimitedLimited SVarStatstotalDispatches maxWorkers maxOutQSize maxHeapSize maxWorkQSizeavgWorkerLatencyminWorkerLatencymaxWorkerLatency svarStopTime YieldRateInfosvarLatencyTargetsvarLatencyRangesvarRateBuffersvarGainedLostYieldssvarAllTimeLatencyworkerBootstrapLatencyworkerPollingIntervalworkerPendingLatencyworkerCollectedLatencyworkerMeasuredLatencyRaterateLowrateGoalrateHigh rateBuffer WorkerInfoworkerYieldMaxworkerYieldCountworkerLatencyStart SVarStyleAsyncVar WAsyncVar ParallelVarAheadVarAheadHeapEntryAheadEntryNullAheadEntryPureAheadEntryStream ChildEvent ChildYield ChildStop ThreadAbortdefState adaptState setYieldLimit getYieldLimit setMaxThreads getMaxThreads setMaxBuffer getMaxBuffer setStreamRate getStreamRatesetStreamLatencysetInspectModegetInspectMode cleanupSVarcleanupSVarFromWorkercollectLatencydumpSVar printSVar withDiagMVarcaptureMonadStatedecrementYieldLimitincrementYieldLimitdecrementBufferLimitincrementBufferLimitsendsendToProducersendStopToProducerworkerUpdateLatencyupdateYieldCountworkerRateControl sendYieldsendStop enqueueLIFO enqueueFIFO enqueueAheadreEnqueueAheadqueueEmptyAhead dequeueAhead withIORefdequeueFromHeapdequeueFromHeapSeq heapIsSanerequeueOnHeapTop updateHeapSeq delThread modifyThreadhandleChildExceptionhandleFoldExceptionrecordMaxWorkers pushWorkerParisBeyondMaxRatedispatchWorkerPacedreadOutputQBasicreadOutputQBoundedreadOutputQPacedpostProcessBoundedpostProcessPacedgetYieldRateInfo newSVarStatssendFirstWorker newAheadVarnewParallelVar toStreamVar$fExceptionThreadAbort $fOrdLimit $fEqLimit $fShowWork$fEqSVarStopStyle$fShowSVarStopStyle $fShowLimit$fShowLatencyRange $fEqSVarStyle$fShowSVarStyle$fShowThreadAbort $fEqCount $fReadCount $fShowCount $fEnumCount$fBoundedCount $fNumCount $fRealCount$fIntegralCount $fOrdCount StreamingIsStreamtoStream fromStream|:MkStreamadaptmkStream fromStopK fromYieldKconsK.:consMByfoldStreamShared foldStream consMStream foldrSSharedfoldrSMbuildbuildSbuildSMbuildMsharedMaugmentS augmentSMfoldlx'foldl'conjoinmapM mapMSerialunShareapWithapSerialapSerialDiscardFstapSerialDiscardSndbindWith concatMapBy $fMonadStream$fApplicativeStream$fMonadTransStream$fFunctorStream$fMonoidStream$fSemigroupStream$fIsStreamStream IOFinalizernewIOFinalizerrunIOFinalizerclearingIOFinalizerTuple4'Tuple3'Tuple' 
$fShowTuple4' $fShowTuple3' $fShowTuple'Pipe PipeStateConsumeProduceContinuezipWithteecompose $fArrowPipe$fCategoryTYPEPipe$fSemigroupPipe$fApplicativePipe $fFunctorPipe ManyState GenericRunnerRunBothRunLeftRunRightFold2FoldPartialDonermapMfoldlM'foldl1'mkFoldmkFold_mkFoldMmkFoldM_simplifytoListserial_lmapMfilterfilterM catMaybestake duplicate initializerunStepmanyPostchunksOf chunksOf2 takeInterval intervalsOf$fBifunctorStep $fFunctorFold$fShowTuple'FusedoncerepeatMrepeat replicate fromIndicesM fromIndicesiterateiterateM fromFoldable fromStreamKfoldrTfoldr1foldlMx'foldlSfoldlTnullheadtailinitelemnotElemallanylastminimum minimumBymaximum maximumBylookupfindMfind findIndices toStreamKhoistscanlx'scanl'drop dropWhile intersperseM intersperseinsertBydeleteByreversemapMaybezipWithMmergeByMmergeBythe withLocaltoFold distributedemuxunzipMunziplfilterlfilterMdrainM ParseErrorErrorInitialIPartialIDoneIErrornoErrorUnsafeSplitWithnoErrorUnsafeSplit_ splitMany splitManyPost splitSomenoErrorUnsafeConcatMap$fFunctorInitial$fBifunctorInitial$fMonadPlusParser $fMonadParser$fAlternativeParser$fApplicativeParser$fFunctorParser$fExceptionParseError$fShowParseErrorMkParser runParser toParserK fromParserK$fFunctorDriver$fFunctorParse$fMonadFailParsersliceBeginWithspanspanBy spanByRolling lookAheadSourcesourceunreadisEmptyproducerparse parseManyD parseMany takeWhileP drainWhile sliceSepWithescapedSliceSepByescapedFrameByconcatSequencemanyP manyTillPmanyThen roundRobin retryMaxTotalretryMaxSuccessiveretryTee $fFloatingTee$fFractionalTee$fNumTee $fMonoidTee$fSemigroupTee$fApplicativeTee $fFunctorTee ConcatState ConcatOuter ConcatInnerUnfold mkUnfoldM mkUnfoldrM apSequence apDiscardSnd crossWithM crossWithcrossapply concatMapMbind functionMfunctionidentity$fFunctorUnfoldFoldMany FoldManyStart FoldManyFirst FoldManyLoop FoldManyYield FoldManyDone FoldManyPostFoldManyPostStartFoldManyPostLoopFoldManyPostYieldFoldManyPostDoneConcatMapUStateConcatMapUOuterConcatMapUInnerUnStreamunfold fromStreamD toStreamDfold_foldrMxcmpBy takeWhileM unfoldMany foldManyPost groupsOf2$fMonadThrowStream generally liftInner runReaderT evalStateT runStateT gbracket_gbracketIORefnewIORef writeIORef readIORef modifyIORef' asyncClock readClocksupply supplyFirst supplySecond discardFirst discardSecondswap mapMWithInput fromListM dropWhileMenumerateFromStepNumnumFromenumerateFromStepIntegralenumerateFromToIntegralenumerateFromIntegralenumerateFromToFractionalfromSVar fromProducer numFromThenenumerateFromThenToIntegralenumerateFromThenIntegralenumerateFromThenToFractionaltimes generateMgenerate fromSVarDtoSVartoSVarParallel newFoldSVar newFoldSVarF fromConsumer pushToFold teeToSVarParallel ParallelT parallelFst parallelMin mkParallelK mkParallelD mkParalleltapAsync tapAsyncFdistributeAsync_ fromParallelnewCallbackStream$fMonadStatesParallelT$fMonadReaderrParallelT$fMonadThrowParallelT$fMonadIOParallelT$fMonadBasebParallelT$fFunctorParallelT$fMonadParallelT$fApplicativeParallelT$fMonoidParallelT$fSemigroupParallelT$fIsStreamParallelT$fMonadTransParallelTWAsyncWAsyncTAsyncmkAsyncKmkAsync<| fromAsync fromWAsync$fMonadStatesAsyncT$fMonadReaderrAsyncT$fMonadThrowAsyncT$fMonadIOAsyncT$fMonadBasebAsyncT$fFunctorAsyncT $fMonadAsyncT$fApplicativeAsyncT$fMonoidAsyncT$fSemigroupAsyncT$fIsStreamAsyncT$fMonadStatesWAsyncT$fMonadReaderrWAsyncT$fMonadThrowWAsyncT$fMonadIOWAsyncT$fMonadBasebWAsyncT$fFunctorWAsyncT$fMonadWAsyncT$fApplicativeWAsyncT$fMonoidWAsyncT$fSemigroupWAsyncT$fIsStreamWAsyncT$fMonadTransWAsyncT$fMonadTransAsyncTAheadAheadT 
[Index continues with: the AheadT adapter and instances; IOVec and the writev bindings; the pinned foreign Array implementation (mutableArray, memcpy, memcmp, newArray and its aligned/unmanaged variants, snoc, unsafeIndex, byteLength, length, arraysOf, read, readRev, flattenArrays, writeN, writeNUnsafe, write, fromStreamD, fromListN, spliceTwo, breakOn, unsafeFreeze, unsafeThaw, fromPtr, fromCString#, and the Array class instances); fold and scan transformations (tap, tapRate, postscan, prescanl', postscanlM', scanlMAfter', scanl1', uniq, takeByTime, dropByTime, the intersperse variants, reverse', indexed, rollingMapM, mapMaybeM); the mutable Ring buffer (ringStart, advance, unsafeInsert, unsafeEqArray, unsafeFoldRing); nested-stream fusion states and operations (append, interleave, interleaveSuffix, interleaveInfix, unfoldManyInterleave, unfoldManyRoundRobin, interposeSuffix, interpose, gintercalate, parseIterate, groupsBy, wordsBy, splitOnSeq, splitOnSuffixSeq, splitInnerBy); sequence predicates (isPrefixOf, isSubsequenceOf, stripPrefix); the zipping stream types ZipSerialM and ZipAsyncM with their instances and adapters (zipAsyncWithM, fromZipSerial, fromZipAsync); and the serial stream types SerialT and WSerialT with their adapters (fromSerial, fromWSerial, wSerialFst, wSerialMin, <=>) and instances.]
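The pinned-array operations indexed above back streamly's Array type. A minimal sketch of building an array with the write fold and reading it back; the Streamly.Data.Array.Foreign module name is the one used around streamly 0.8 and has been renamed in other releases, so treat it as an assumption:

    import qualified Streamly.Data.Array.Foreign as Array
    import qualified Streamly.Prelude as Stream

    main :: IO ()
    main = do
        -- Fold a stream of Ints into a contiguous (pinned) array.
        arr <- Stream.fold Array.write $ Stream.fromList [1 .. 10 :: Int]
        print (Array.length arr)  -- 10
        print (Array.toList arr)  -- [1,2,3,4,5,6,7,8,9,10]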
[Index continues with: the remaining WSerialT and SerialT instances; the Enumerable class (enumerateFrom, enumerateFromTo, enumerateFromThen, enumerateFromThenTo) and its instances for the standard numeric, Char, Bool and Ordering types; concurrency-control combinators (maxThreads, maxBuffer, rate, avgRate, minRate, maxRate, constRate, maxYields, inspectMode); folds (drainBy, sum, product, mean, variance, stdDev, rollingHash, mconcat, foldMap, drainN, index, findIndex, elemIndex, and, or, takeEndBy, partitionBy, demux, classify, unzipWith, zip); stream utilities (usingReaderT, usingStateT, yield, yieldM, timesWith, absTimesWith, relTimesWith, smapM, interjectSuffix, trace, scan, postscan, uniqBy, nubBy, sampleOld, sampleNew, sampleRate, takeLast, takeWhileLast, dropLast, dropWhileLast, delay, timestamped, timeIndexed, elemIndices, lefts, rights); the apply operators |$, |&, |$., |&. and applyAsync; splitting and session classification (dropPrefix, dropInfix, dropSuffix, foldSequence, parseSequence, parseManyTill, groupsByRolling, splitOn, splitOnPrefix, splitBySeq, splitWithSuffixSeq, classifySessionsBy, classifyKeepAliveSessions, classifySessionsOf); time streams (absTimes, relTimes, durations, ticks, timeout); generation (fromFoldableM, each, fromHandle, fromCallback); concurrent merging and nesting (roundrobin, mergeAsyncBy, mergeAsyncByM, concatUnfold, intercalate, intercalateSuffix, concatSmapMWith, iterateMapWith, iterateSmapMWith, iterateMapLeftsWith); and older fold/parse/run entry points (foldx, foldxM, parseD, parseK, runN, runWhile, runStream, toHandle).]
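The maxThreads/rate combinators and the async adapters indexed above control streamly's concurrent evaluation. A minimal sketch, assuming streamly ~0.8's Streamly.Prelude (fromAsync, maxThreads, mapM); the delay is only there to make the concurrency observable:

    import Control.Concurrent (threadDelay)
    import qualified Streamly.Prelude as Stream

    main :: IO ()
    main =
        Stream.drain
            $ Stream.fromAsync          -- evaluate the stream concurrently
            $ Stream.maxThreads 4       -- cap the number of worker threads at 4
            $ Stream.mapM (\i -> threadDelay 100000 >> print i)
            $ Stream.fromList [1 .. 8 :: Int]

Because the effects run asynchronously, the printed order is not guaranteed to match the input order.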
[Index continues with: sequence predicates (isInfixOf, isSuffixOf, stripSuffix); array-stream compaction (packArraysChunksOf, compact, compactLE, compactEQ, compactGE, groupIOVecsOf) and low-level array utilities (unsafeCopy, unsafeReadIndex, shrinkArray, resizeArray, touchArray, withArrayAsPtr); sampling and sorting (sampleFromThen, sampleIntervalEnd, sampleIntervalStart, sampleBurstEnd, sampleBurstStart, sortBy); relational operations (crossJoin, innerJoin, leftJoin, outerJoin, intersectBy, differenceBy, unionBy and their hash/merge variants); directory and Either streams (readEither, readFiles, readDirs, toEither, toFiles, toDirs); the List and ZipList wrappers with their instances; array slicing and casting (writeLastN, unsafeSlice, getIndex, writeIndex, asBytes, cast, unsafeAsPtr, unsafeAsCString); binary element decoders (unit, bool, ordering, eqWord8, word8, word16be/le, word32be/le, word64be/le, word64host); fold and parser conversions (fromParser, fromArrayFold); socket I/O (SockSpec, forSocketM, withSocket, accept, connect, connections, readChunks, writeChunks, toBytes, putBytes and the *WithBufferOf variants); handle and file I/O (withFile, fromChunks, fromBytes, appendChunks, appendWithBufferOf); Unicode stream coding (DecodeError, decodeLatin1, encodeLatin1 with its strict and lax variants, the decodeUtf8/decodeUtf8'/decodeUtf8_ family including array and resumable forms, the encodeUtf8 family, encodeStrings, stripHead, lines, words, unwords); console I/O (getBytes, getChars, getChunks, putChars, putStrings, putStringsLn, writeErr); network servers (acceptOnAddr, acceptOnPort, connectionsOnAddr, withConnection); the standard handles and handle-based array I/O (stdin, stdout, stderr, openFile, readArrays, writeArrays, writeInChunksOf); and file-system event watching (the Event accessors, the watch Config with setters such as setFollowSymLinks, setWhenExists, setAllEvents, defaultConfig, addToWatch, removeFromWatch, watchPaths, watchTrees, getRoot, getRelPath, getAbsPath, isOverflow, isRootDeleted, isMetadataChanged, isCreated, isDeleted, isMovedFrom, isMovedTo).]
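The decodeUtf8/encodeUtf8 family indexed above performs streaming Unicode conversion over Word8 streams. A minimal round-trip sketch, assuming the Streamly.Unicode.Stream module of streamly ~0.8 (the lenient decodeUtf8 replaces invalid byte sequences rather than failing):

    import qualified Streamly.Prelude as Stream
    import qualified Streamly.Unicode.Stream as Unicode

    main :: IO ()
    main = do
        -- Encode a String to a UTF-8 byte stream, then decode it back.
        bytes <- Stream.toList $ Unicode.encodeUtf8 $ Stream.fromList "café"
        text  <- Stream.toList $ Unicode.decodeUtf8 $ Stream.fromList bytes
        print bytes  -- the UTF-8 code units as Word8 values
        print text   -- "café"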
[Index ends with: the remaining event predicates (isModified, isDir, showEvent) and the Event/Cookie/WD instances; deprecated run and adapter combinators (runStreaming, runStreamT, runInterleavedT, runParallelT, runAsyncT, runZipStream, runZipAsync, foldWith, foldMapWith, forEachWith, serially, wSerially, asyncly, aheadly, wAsyncly, parallely, zipSerially, zipAsyncly); and the table of names re-exported from ghc-prim, base, transformers and network that the interface references (Bool, Maybe, Identity, PrimMonad, Int64, Integer, lift, IORef and mkWeakIORef, the Functor/Applicative/Alternative/Monoid methods, ThreadId, Addr#, CString, Ptr, Enum, Bounded, Integral, Eq, ReaderT, StateT, Storable, the Word types, Socket, Handle, FilePath, Char, String, IOMode, openWatch, Watch).]