!"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~ $None &*23468;=?FHKMA bucket represents portion of a whole data-set on disk, and contains a file handle that points to the next piece of data to be read or written.The bucket could be created from a portion of a single flat file, or be one file of a pre-split data set. The main advantage over a plain  is that a 9 can represent a small portion of a single large file.(Physical location of the file, if known.6Starting position of the bucket in the file, in bytes.'Maximum length of the bucket, in bytes.If J then the length is indeterminate, which is used when writing to files.File handle for the bucket.If several buckets have been created from a single file, then all buckets will have handles bound to that file, but they will be at different positions.Open a file as a single bucket.)Wrap an existing file handle as a bucket. +Open some files as buckets and use them as Sources. Like   , but take a list of file paths. 6Open all the files in a directory as separate buckets.4This operation may fail with the same exceptions as . eOpen a file containing atomic records and split it into the given number of evenly sized buckets. The records are separated by a special terminating charater, which the given predicate detects. The file is split cleanly on record boundaries, so we get a whole number of records in each bucket. As the records can be of varying size the buckets are not guaranteed to have exactly the same length, in either records or buckets, though we try to give them the approximatly the same number of bytes. Like   but start at the given offset.wAdvance a file handle until we reach a byte that, matches the given predicate, then return the final file position.YOpen some files for writing as individual buckets and pass them to the given consumer.Like  , but take a list of file paths.UCreate a new directory of the given name, containing the given number of buckets. If the directory is named somedir then the files are named somedir/000000, somedir/000001, somedir/000002 and so on.rGiven a list of directories, create those directories and open the given number of output files per directory.In the resulting array of buckets, the outer dimension indexes each directory, and the inner one indexes each file in its directory.For each directory somedir the files are named somedir/000000, somedir/000001, somedir/000002 and so on.4Close a bucket, releasing the contained file handle.&Check if the bucket is currently open.?Check if the contained file handle is at the end of the bucket.!Seek to a position with a bucket.Get some data from a bucket.Put some data in a bucket. Files to open. Consumer. Number of buckets.Detect the end of a record. File to open. Consumer. Number of buckets.Detect the end of a record. File to open.Starting offset. Consumer. File paths. Consumer.Number of buckets to create.Path to directory. Consumer.*Number of buckets to create per directory.Paths to directories. Consumer.   None &*23468;=?FHKM #A collection of mutable references.!Get the extent of the collection.kAllocate a new state of the given arity, also returning an index to the first element of the collection.Write an element of the state.Read an element of the state.!Get the zero for this index type. BGiven an index an arity, get the next index after this one, or  if there aren't any more.!*Check if an index is valid for this arity.".Fold all the elements in a collection of refs.Tuple indices.Integer indices. Unit indices. 
!"# !"#  !"#  !"# None &*23468;=?FHKM $5A bundle of stream sinks, indexed by a value of type i, in some monad m, returning elements of type e.AElements can be pushed to each stream in the bundle individually.& Number of sources in the bundle.'4Push an element to one of the streams in the bundle.(wSignal that no more elements will ever be available for this sink. It is ok to eject the same stream multiple times.)7A bundle of stream sources, indexed by a value of type i, in some monad m, returning elements of type e.CElements can be pulled from each stream in the bundle individually.+!Number of sources in this bundle.,Function to pull data from a bundle. Give it the index of the desired stream, a continuation that accepts an element, and a continuation to invoke when no more elements will ever be available.-4Transform the stream indexes of a bundle of sources.fThe given transform functions should be inverses of each other, else you'll get a confusing result..2Transform the stream indexes of a bundle of sinks.fThe given transform functions should be inverses of each other, else you'll get a confusing result./VFor a bundle of sources with a 2-d stream index, flip the components of the index.0TFor a bundle of sinks with a 2-d stream index, flip the components of the index.1(Attach a finalizer to bundle of sources.For each stream in the bundle, the finalizer will be called the first time a consumer of that stream tries to pull an element when no more are available.ZThe provided finalizer will be run after any finalizers already attached to the source.2(Attach a finalizer to a bundle of sinks.fFor each stream in the bundle, the finalizer will be called the first time that stream is ejected. XThe provided finalizer will be run after any finalizers already attached to the sink.$%&'()*+,-./012 !"#$%&'()*+,-./012$%&'()*+,-./012None &*23468;=?FHKM3$Send the same data to two consumers.Given two argument sinks, yield a result sink. Pushing to the result sink causes the same element to be pushed to both argument sinks. 4$Send the same data to two consumers.Given an argument source and argument sink, yield a result source. Pulling an element from the result source pulls from the argument source, and pushes that element to the sink, as well as returning it via the result source.5$Send the same data to two consumers.Like 4 but with the arguments flipped.61Connect an argument source to two result sources.Pulling from either result source pulls from the argument source. Each result source only gets the elements pulled at the time, so if one side pulls all the elements the other side won't get any.7Given a bundle of sources containing several streams, produce a new bundle containing a single stream that gets data from the former.Streams from the source are consumed in their natural order, and a complete stream is consumed before moving onto the next one. |> import Data.Repa.Flow.Generic > toList1 () =<< funnel_i =<< fromList (3 :: Int) [11, 22, 33] [11,22,33,11,22,33,11,22,33] 8Given a bundle of sinks consisting of a single stream, produce a new bundle of the given arity that sends all data to the former, ignoring the stream index.gThe argument stream is ejected only when all of the streams in the result bundle have been ejected.EUsing this function in conjunction with parallel operators like drainP introduces non-determinism. Elements pushed to different streams in the result bundle could enter the single stream in the argument bundle in any order.  
funnel_o
  Given a bundle of sinks consisting of a single stream, produce a new bundle of the given arity that sends all data to the former, ignoring the stream index. The argument stream is ejected only when all of the streams in the result bundle have been ejected. Using this function in conjunction with parallel operators like drainP introduces non-determinism: elements pushed to different streams in the result bundle could enter the single stream in the argument bundle in any order.

  > import Data.Repa.Flow.Generic
  > import Data.Repa.Array.Material
  > import Data.Repa.Nice
  > let things = [(0 :: Int, "foo"), (1, "bar"), (2, "baz")]
  > result <- capture_o B () (\k -> funnel_o 4 k >>= pushList things)
  > nice result
  [((),"foo"),((),"bar"),((),"baz")]

fromList
  Given an arity and a list of elements, yield sources that each produce all the elements.

toList1
  Drain a single source into a list.

takeList1
  Drain the given number of elements from a single source into a list.

pushList
  Push elements into the associated streams of a bundle of sinks.

pushList1
  Push the elements of a list into the given stream of a bundle of sinks.

map_i
  Apply a function to every element pulled from some sources, producing some new sources.

smap_i
  Like map_i, but the worker function is also given the stream index.

map_o
  Apply a function to every element pushed into some sinks, producing some new sinks.

smap_o
  Like map_o, but the worker function is also given the stream index.

szipWith_ii
  Combine the elements of two flows with the given function. The worker function is also given the stream index.

szipWith_io
  Like szipWith_ii, but take a bundle of Sinks for the result elements, and yield a bundle of Sinks to accept the b elements.

szipWith_oi
  Like szipWith_ii, but take a bundle of Sinks for the result elements, and yield a bundle of Sinks to accept the a elements.

drainS
  Pull all available values from the sources and push them to the sinks. Streams in the bundle are processed sequentially, from first to last. If the Sources and Sinks have different numbers of streams then we only evaluate the common subset.

drainP
  Pull all available values from the sources and push them to the sinks, in parallel. We fork a thread for each of the streams and evaluate them all in parallel. If the Sources and Sinks have different numbers of streams then we only evaluate the common subset.

project_i
  Project out a single stream source from a bundle.

project_o
  Project out a single stream sink from a bundle.

repeat_i
  Yield sources that always produce the same value.

replicate_i
  Yield sources of the given length that always produce the same value.

prepend_i
  Prepend some more elements to the front of some sources.

prependOn_i
  Like prepend_i, but only prepend the elements to the streams that match the given predicate.

head_i
  Split the given number of elements from the head of a source, returning those elements in a list, and yielding a new source for the rest.

groups_i
  From a stream of values which has consecutive runs of identical values, produce a stream of the lengths of these runs.
  Example: groups [4, 4, 4, 3, 3, 1, 1, 1, 4] = [3, 2, 3, 1]

pack_ii
  Given a stream of flags and a stream of values, produce a new stream of values where the corresponding flag was True. The length of the result is the length of the shorter of the two inputs.

folds_ii
  Segmented fold.

watch_i
  Apply a monadic function to every element pulled from some sources, producing some new sources.

watch_o
  Pass elements to the provided action as they are pushed into the sink.

capture_o
  Create a bundle of sinks of the given arity and capture any data pushed to it.
  Arguments: name of destination layout; arity of result bundle; function to push data into the sinks.

  > import Data.Repa.Flow.Generic
  > import Data.Repa.Array.Material
  > import Data.Repa.Nice
  > import Control.Monad
  > liftM nice $ capture_o B 4 (\k -> pushList [(0 :: Int, "foo"), (1, "bar"), (0, "baz")] k)
  [(0,"foo"),(1,"bar"),(0,"baz")]

rcapture_o
  Like capture_o, but also return the result of the push function.
  Arguments: name of destination layout; arity of result bundle; function to push data into the sinks.

trigger_o
  Like watch_o, but doesn't pass elements to another sink.

discard_o
  A sink that drops all data on the floor. This sink is strict in the elements, so they are demanded before being discarded. Haskell debugging thunks attached to the elements will be demanded.
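A sketch tying the list, map and drain operators together: build a two-stream bundle, double every element, and drain each stream back to a list. Untested, but it follows the signatures implied by the funnel_i example above.

  import Data.Repa.Flow.Generic

  doubled :: IO ([Int], [Int])
  doubled
   = do ss  <- fromList (2 :: Int) [1, 2, 3 :: Int]
        ss' <- map_i (* 2) ss
        xs0 <- toList1 0 ss'      -- each stream yields all elements,
        xs1 <- toList1 1 ss'      -- so both lists are [2,4,6]
        return (xs0, xs1)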
ignore_o
  A sink that ignores all incoming data. This sink is non-strict in the elements. Haskell tracing thunks attached to the elements will *not* be demanded.

trace_o
  Use the trace function from Debug.Trace to print each element that is pushed to a sink. This function is intended for debugging only, and is not intended for production code.

distribute_o
  Given a bundle of sinks indexed by Int, produce a result sink for arrays. Each time an array is pushed to the result sink, its elements are pushed to the corresponding streams of the argument sink. If there are more elements than sinks then give them to the spill action.

   | .. |
   | [w0, x0, y0, z0]     |      :: Sinks () IO (Array l a)
   | [w1, x1, y1, z1, u1] |      (sink for a single stream of arrays)
   | .. |
      |    |    |    |    |
      v    v    v    v    .----> spilled
   | .. | .. | .. | .. |
   | w0 | x0 | y0 | z0 |         :: Sinks Int IO a
   | w1 | x1 | y1 | z1 |         (sink for several streams of elements)
   | .. | .. | .. | .. |

  Arguments: spill action, given the spilled element along with its index in the array; sinks to push elements into.

ddistribute_o
  Like distribute_o, but drop spilled elements on the floor.

distribute2_o
  Like distribute_o, but with 2-d stream indexes. Given the argument and result sinks, when pushing to the result the stream index is used as the first component for the argument sink, and the index of the element in its array is used as the second component. If you want the components of the stream index the other way around then apply flipIndex2_o to the argument sinks.
  Arguments: spill action, given the spilled element along with its index in the array; sinks to push elements into.

ddistribute2_o
  Like distribute2_o, but drop spilled elements on the floor.

shuffle_o
  Given a bundle of argument sinks, produce a result sink. Arrays of indices and elements are pushed to the result sink. On doing so, the elements are pushed into the corresponding streams of the argument sinks. If the index associated with an element does not have a corresponding stream in the argument sinks, then pass it to the provided spill function.

   | .. |
   | [(0, v0), (1, v1), (0, v2), (0, v3), (2, v4)] |  :: Sources Int IO (Array l (Int, a))
   | .. |
        |   \
        |    .----------.
        v                v       .----> spilled
   | ..           |  | ..   |
   | [v0, v2, v3] |  | [v1] |    :: Sinks Int IO (Array l a)
   | ..           |  | ..   |

  The following example uses capture_o to demonstrate how the shuffle_o operator can be used as one step of a bucket-sort. We start with two arrays of key-value pairs. In the result, the values from each block that had the same key are packed into the same tuple (bucket).

  > import Data.Repa.Flow.Generic as G
  > import Data.Repa.Array as A
  > import Data.Repa.Array.Material as A
  > import Data.Repa.Nice
  > let arr1 = A.fromList B [(0, 'a'), (1, 'b'), (2, 'c'), (0, 'd'), (0, 'c')]
  > let arr2 = A.fromList B [(0, 'A'), (3, 'B'), (3, 'C')]
  > result :: Array B (Int, Array U Char)
  >        <- capture_o B 4 (\k -> shuffle_o B (error "spilled") k
  >                                 >>= pushList1 () [arr1, arr2])
  > nice result
  [(0,"adc"),(1,"b"),(2,"c"),(0,"A"),(3,"BC")]

  Arguments: name of source layout; handle spilled elements; sinks to push results to.

dshuffle_o
  Like shuffle_o, but drop spilled elements on the floor.
  Arguments: name of source layout; sinks to push results to.

dshuffleBy_o
  Like dshuffle_o, but use the given function to decide which stream of the argument bundle each element should be pushed into.

  > import Data.Repa.Flow.Generic as G
  > import Data.Repa.Array as A
  > import Data.Repa.Array.Material as A
  > import Data.Repa.Nice
  > import Data.Char
  > let arr1 = A.fromList B "FooBAr"
  > let arr2 = A.fromList B "BazLIKE"
  > result :: Array B (Int, Array U Char)
  >        <- capture_o B 2 (\k -> dshuffleBy_o B (\x -> if isUpper x then 0 else 1) k
  >                                 >>= pushList1 () [arr1, arr2])
  > nice result
  [(0,"FBA"),(1,"oor"),(0,"BLIKE"),(1,"az")]

  Arguments: name of source layout; get the stream number for an element; sinks to push results to.

chunk_i
  Take elements from a flow and pack them into chunks of the given maximum length.
  Arguments: layout for result chunks; maximum chunk length; element sources. Yields chunk sources.

unchunk_i
  Take a flow of chunks and flatten it into a flow of the individual elements.
  Arguments: chunk sources. Yields element sources.
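A round-trip sketch for chunk_i and unchunk_i, assuming the argument order given in the parameter docs above and the U layout name used by the other examples. Untested.

  import Data.Repa.Flow.Generic
  import Data.Repa.Array.Material

  rechunked :: IO [Int]
  rechunked
   = do ss     <- fromList () [1 .. 10 :: Int]
        chunks <- chunk_i U 4 ss      -- chunks of at most 4 elements
        elems  <- unchunk_i chunks    -- flatten back to elements
        toList1 () elems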
Data.Repa.Flow.Chunked

Flow
  Shorthand for common type classes.

Sinks
  A bundle of stream sinks, where the elements are chunked into arrays.

Sources
  A bundle of stream sources, where the elements are chunked into arrays.

fromList
  Given an arity and a list of elements, yield sources that each produce all the elements. All elements are stuffed into a single chunk, and each stream is given the same chunk.

fromLists
  Like fromList, but take a list of lists, where each of the inner lists is packed into a single chunk.

toList1
  Drain a single source into a list of elements.

toLists1
  Drain a single source into a list of chunks.

takeList1
  Split the given number of elements from the head of a source, returning those elements in a list, and yielding a new source for the rest. We pull whole chunks from the source stream until we have at least the desired number of elements. The leftover elements in the final chunk are visible in the result Sources.

finalize_i
  Attach a finalizer to a bundle of sources. For each stream in the bundle, the finalizer will be called the first time a consumer of that stream tries to pull an element when no more are available. The provided finalizer will be run after any finalizers already attached to the source.

finalize_o
  Attach a finalizer to a bundle of sinks. The finalizer will be called the first time the stream is ejected. The provided finalizer will be run after any finalizers already attached to the sink.

map_i
  Map a function over elements pulled from a source.

map_o
  Map a function over elements pushed into a sink.

A zipWith-style operator is also provided, which combines the elements of two flows with the given function.

foldlS
  Fold all elements of all streams in a bundle individually, returning an array of per-stream results.
  Arguments: destination layout; combining function; starting value for fold; input elements to fold.

foldlAllS
  Fold all elements of all streams in a bundle together, one stream after the other, returning the single final value.
  Arguments: combining function; starting value for fold; input elements to fold.

FoldsDict
  Dictionaries needed to perform a segmented fold.

folds_i
  Segmented fold over vectors of segment lengths and input values.
  Arguments: layout for group names; layout for fold results; worker function; initial state when folding each segment; segment lengths; input elements to fold. Yields result elements.

GroupsDict
  Dictionaries needed to perform a grouping.

groupsBy_i
  From a stream of values which has consecutive runs of identical values, produce a stream of the lengths of these runs.

    groupsBy (==) [4, 4, 4, 3, 3, 1, 1, 1, 4] => [3, 2, 3, 1]

  Arguments: layout for group names; layout for group lengths; whether successive elements should be grouped; source values.

watch_i
  Hook a monadic function to some sources, which will be passed every chunk that is pulled from the result.

watch_o
  Hook a monadic function to some sinks, which will be passed every chunk that is pushed to the result.

trigger_o
  Like watch_o, but discard the incoming chunks after they are passed to the function.

ignore_o
  A sink that ignores all incoming data. This sink is non-strict in the chunks. Haskell tracing thunks attached to the chunks will *not* be demanded.

discard_o
  Yield a bundle of sinks of the given arity that drops all data on the floor. The sinks are strict in the *chunks*, so they are demanded before being discarded. Haskell debugging thunks attached to the chunks will be demanded, but thunks attached to elements may not be, depending on whether the chunk representation is strict in the elements.

Data.Repa.Flow.Generic.Debug

more
  Given a source index and a length, pull enough chunks from the source to build a list of the requested length, and discard the remaining elements in the final chunk. This function is intended for interactive debugging. If you want to retain the rest of the final chunk then use takeList1.
  Arguments: index of source in bundle; bundle of sources.

more'
  Like more, but also specify how many elements you want.
  Arguments: index of source in bundle; bundle of sources.

moret / moret'
  Like more and more', but print results in a tabular form to the console.

morer / morer'
  Like more and more', but show elements in their raw format.

Data.Repa.Flow.Generic.IO.Sieve

sieve_o
  Create an output sieve that writes data to an indeterminate number of output files. Each new element is appended to its associated file. This function keeps a maximum of 8 files open at once, closing and re-opening them in a least-recently-used order. Due to this behaviour it's fine to create thousands of separate output files without risking overflowing the process limit on the maximum number of usable file handles.
  Arguments: a function that produces the desired file path and output record for each element, or Nothing if it should be discarded.

Data.Repa.Flow.Generic.IO

sourceRecords
  Read complete records of data from a bucket, into chunks of the given length. We read as many complete records as will fit into each chunk. The records are separated by a special terminating character, which the given predicate detects. After reading a chunk of data we seek the bucket to just after the last complete record that was read, so we can continue to read more complete records next time. If we cannot fit at least one complete record in the chunk then perform the given failure action. Limiting the chunk length guards against the case where a large input file is malformed, as we won't try to read the whole file into memory. Data is read into foreign memory without copying it through the GHC heap. The provided file handle must support seeking, else you'll get an exception.
  Arguments: chunk length in bytes; detect the end of a record; action to perform if we can't get a whole record; source buckets.

  A variant with the same arguments produces all records in a single vector.

sourceChars
  Read 8-bit ASCII characters from some files, using the given chunk length. Data is read into foreign memory without copying it through the GHC heap. All chunks have the same size, except possibly the last one.
  Arguments: chunk length in bytes; buckets.

sourceBytes
  Read data from some files, using the given chunk length. Data is read into foreign memory without copying it through the GHC heap. All chunks have the same size, except possibly the last one.
  Arguments: chunk length in bytes; buckets.

sinkLines
  Write vectors of text lines to the given file handles. Data is copied into a new buffer to insert newlines before being written out.
  Arguments: layout of chunks of lines; layout of lines; buckets.

sinkChars
  Write chunks of 8-bit ASCII characters to the given file handles. Data is copied into a foreign buffer to truncate the characters to 8 bits each before being written out.
  Arguments: buckets.

sinkBytes
  Write chunks of bytes to the given file handles. Data is written out directly from the provided buffer.
  Arguments: buckets.

A parallel set of operators works over raw file handles rather than buckets: one is like fileSourceRecords but takes an existing file handle, and there are handle-based versions that read 8-bit ASCII characters or raw data with a given chunk length, and that write 8-bit ASCII characters or chunks of data to the given file handles.
  Arguments: size of chunk to read in bytes; detect the end of a record; action to perform if we can't get a whole record; file handles.
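A sketch that combines the bucket layer with the generic IO operators above: copy one file to another in 64k chunks. The argument order follows the parameter docs; treat it as untested.

  import Data.Repa.Flow.Generic    (drainS)
  import Data.Repa.Flow.Generic.IO (sourceBytes, sinkBytes)
  import Data.Repa.Flow.IO.Bucket  (fromFiles, toFiles)

  -- Stream bytes from one file to another without building the
  -- whole file in memory.
  copyChunked :: FilePath -> FilePath -> IO ()
  copyChunked from to
   = do ss <- fromFiles [from] (sourceBytes (64 * 1024))
        sk <- toFiles   [to]   sinkBytes
        drainS ss sk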
Data.Repa.Flow.Simple

Sink
  Sink consisting of a single stream.

Source
  Source consisting of a single stream.

finalize_i
  Attach a finalizer to a source. The finalizer will be called the first time a consumer of that stream tries to pull an element when no more are available. The provided finalizer will be run after any finalizers already attached to the source.

finalize_o
  Attach a finalizer to a sink. The finalizer will be called the first time the stream is ejected. The provided finalizer will be run after any finalizers already attached to the sink.

sourceRecords
  Read complete records of data from a file, using the given chunk length. The records are separated by a special terminating character, which the given predicate detects. After reading a chunk of data we seek to just after the last complete record that was read, so we can continue to read more complete records next time. If we cannot find an end-of-record terminator in the chunk then apply the given failure action. The records can be no longer than the chunk length: this fact guards against the case where a large input file is malformed and contains no end-of-record terminators, as we won't try to read the whole file into memory. Data is read into foreign memory without copying it through the GHC heap. All chunks have the same size, except possibly the last one. The provided file handle must support seeking, else you'll get an exception. The file will be closed the first time the consumer tries to pull an element from the associated stream when no more are available.
  Arguments: size of chunk to read in bytes; detect the end of a record; action to perform if we can't get a whole record; file handle.

sourceBytes
  Read data from a file, using the given chunk length. Data is read into foreign memory without copying it through the GHC heap. All chunks have the same size, except possibly the last one. The file will be closed the first time the consumer tries to pull an element from the associated stream when no more are available.

sinkBytes
  Write chunks of data to the given files. The file will be closed when the associated stream is ejected.

fromList
  Given a list of elements, yield a source that produces all the elements.

toList
  Drain a source into a list.

takeList
  Drain the given number of elements from a single source into a list.

repeat_i
  Yield a source that always produces the same value.

replicate_i
  Yield a source of the given length that always produces the same value.

prepend_i
  Prepend some more elements to the front of a source.

map_i
  Apply a function to every element pulled from some source, producing a new source.

map_o
  Apply a function to every element pushed to some sink, producing a new sink.

dup_oo
  Send the same data to two consumers. Given two argument sinks, yield a result sink. Pushing to the result sink causes the same element to be pushed to both argument sinks.

dup_io
  Send the same data to two consumers. Given an argument source and argument sink, yield a result source. Pulling an element from the result source pulls from the argument source, and pushes that element to the sink, as well as returning it via the result source.

dup_oi
  Send the same data to two consumers. Like dup_io but with the arguments flipped.

connect_i
  Connect an argument source to two result sources. Pulling from either result source pulls from the argument source. Each result source only gets the elements pulled at the time, so if one side pulls all the elements the other side won't get any.

head_i
  Split the given number of elements from the head of a source, returning those elements in a list, and yielding a new source for the rest.

peek_i
  Peek at the given number of elements in the stream, returning a result stream that still produces them all.

groups_i
  From a stream of values which has consecutive runs of identical values, produce a stream of the lengths of these runs.
  Example: groups [4, 4, 4, 3, 3, 1, 1, 1, 4] = [3, 2, 3, 1]

pack_i
  Given a stream of flags and a stream of values, produce a new stream of values where the corresponding flag was True. The length of the result is the length of the shorter of the two inputs.

folds_i
  Segmented fold.

watch_i
  Apply a monadic function to every element pulled from a source, producing a new source.

watch_o
  Pass elements to the provided action as they are pushed to the sink.

trigger_o
  Like watch_o, but doesn't pass elements to another sink.

discard_o
  A sink that drops all data on the floor. This sink is strict in the elements, so they are demanded before being discarded. Haskell debugging thunks attached to the elements will be demanded.

ignore_o
  A sink that ignores all incoming elements. This sink is non-strict in the elements. Haskell tracing thunks attached to the elements will *not* be demanded.

drainS
  Pull all available values from the source and push them to the sink.
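A sketch of the single-stream interface end to end. Untested, and it assumes the simple fromList needs only the element list:

  import Data.Repa.Flow.Simple

  successors :: IO [Int]
  successors
   = do src  <- fromList [1, 2, 3 :: Int]
        src' <- map_i (+ 1) src
        toList src'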
Data.Repa.Flow

Flow
  Shorthand for common type classes.

Sources
  A bundle of stream sources, where the elements of the stream are chunked into arrays. The chunks have some layout l and contain elements of type a. See Data.Repa.Array for the available layouts.

Sinks
  A bundle of stream sinks, where the elements of the stream are chunked into arrays.

sourcesArity
  Yield the number of streams in the bundle.

sinksArity
  Yield the number of streams in the bundle.

drainS
  Pull all available values from the sources and push them to the sinks. Streams in the bundle are processed sequentially, from first to last. If the Sources and Sinks have different numbers of streams then we only evaluate the common subset.

drainP
  Pull all available values from the sources and push them to the sinks, in parallel. We fork a thread for each of the streams and evaluate them all in parallel. If the Sources and Sinks have different numbers of streams then we only evaluate the common subset.

fromList
  Given an arity and a list of elements, yield sources that each produce all the elements. All elements are stuffed into a single chunk, and each stream is given the same chunk.

fromLists
  Like fromList, but take a list of lists. Each of the inner lists is packed into a single chunk.

toList1
  Drain a single source from a bundle into a list of elements.

toLists1
  Drain a single source from a bundle into a list of chunks.

finalize_i
  Attach a finalizer to some sources. For a given source, the finalizer will be called the first time a consumer of that source tries to pull an element when no more are available. The finalizer is given the index of the source that ended. The finalizer will be run after any finalizers already attached to the source.

finalize_o
  Attach a finalizer to some sinks. For a given sink, the finalizer will be called the first time that sink is ejected. The finalizer is given the index of the sink that was ejected. The finalizer will be run after any finalizers already attached to the sink.

map_i
  Apply a function to all elements pulled from some sources.

map_o
  Apply a function to all elements pushed to some sinks.

dup_oo, dup_io, dup_oi
  Send the same data to two consumers; these behave as their namesakes in Data.Repa.Flow.Generic, but over chunked flows.

connect_i
  Connect an argument source to two result sources. Pulling from either result source pulls from the argument source. Each result source only gets the elements pulled at the time, so if one side pulls all the elements the other side won't get any.

watch_i
  Hook a worker function to some sources, which will be passed every chunk that is pulled from each source. The worker is also passed the source index of the chunk that was pulled.

watch_o
  Hook a worker function to some sinks, which will be passed every chunk that is pushed to each sink. The worker is also passed the source index of the chunk that was pushed.

trigger_o
  Create a bundle of sinks of the given arity that pass incoming chunks to a worker function. This is like watch_o, except that the incoming chunks are discarded after they are passed to the worker function.

discard_o
  Create a bundle of sinks of the given arity that drop all data on the floor. The sinks are strict in the *chunks*, so they are demanded before being discarded. Haskell debugging thunks attached to the chunks will be demanded, but thunks attached to elements may not be, depending on whether the chunk representation is strict in the elements.

ignore_o
  Create a bundle of sinks of the given arity that drop all data on the floor. As opposed to discard_o, the sinks are non-strict in the chunks. Haskell debugging thunks attached to the chunks will *not* be demanded.

head_i
  Given a source index and a length, split a list of that length from the front of the source. Yields a new source for the remaining elements. We pull whole chunks from the source stream until we have at least the desired number of elements. The leftover elements in the final chunk are visible in the result Sources.

groups_i
  Scan through some sources to find runs of matching elements, and count the lengths of those runs.

  > import Data.Repa.Flow
  > toList1 0 =<< groups_i U U =<< fromList U 1 "waabbbblle"
  Just [('w',1),('a',2),('b',4),('l',2),('e',1)]

  Arguments: layout of result groups; layout of result lengths; input elements. Produces the starting element and length of each group.

groupsBy_i
  Like groups_i, but take a function to determine whether two consecutive values should be in the same group.
  Arguments: layout of result groups; layout of result lengths; function to check if consecutive elements are in the same group; input elements. Produces the starting element and length of each group.

foldlS
  Fold all the elements of each stream in a bundle, one stream after the other, returning an array of fold results.
  Arguments: layout for result; combining function; starting value; input elements to fold.

foldlAllS
  Fold all the elements of all streams in a bundle together, one stream after the other, returning the single final value.
  Arguments: combining function; starting value; input elements to fold.

FoldsDict
  Dictionaries needed to perform a segmented fold.

folds_i
  Given streams of lengths and values, perform a segmented fold where segments of values of the corresponding lengths are folded together.

  > import Data.Repa.Flow
  > sSegs <- fromList U 1 [('a', 1), ('b', 2), ('c', 4), ('d', 0), ('e', 1), ('f', 5 :: Int)]
  > sVals <- fromList U 1 [10, 20, 30, 40, 50, 60, 70, 80, 90 :: Int]
  > toList1 0 =<< folds_i U U (+) 0 sSegs sVals
  Just [('a',10),('b',50),('c',220),('d',0),('e',80)]

  If not enough input elements are available to fold a complete segment then no output is produced for that segment. However, trailing zero length segments still produce the initial value for the fold.

  > import Data.Repa.Flow
  > sSegs <- fromList U 1 [('a', 1), ('b', 2), ('c', 0), ('d', 0), ('e', 0 :: Int)]
  > sVals <- fromList U 1 [10, 20, 30 :: Int]
  > toList1 0 =<< folds_i U U (*) 1 sSegs sVals
  Just [('a',10),('b',600),('c',1),('d',1),('e',1)]

  Arguments: layout for group names; layout for fold results; worker function; initial state when folding each segment; segment lengths; input elements to fold. Yields result elements.

GroupsDict
  Dictionaries needed to perform a grouping.

foldGroupsBy_i
  Combination of groupsBy_i and folds_i: we determine the segment lengths while performing the folds. Note that SQL-like group-by aggregations can be performed using this function, provided the data is pre-sorted on the group key. For example, we can take the average of some groups of values:

  > import Data.Repa.Flow
  > sKeys <- fromList U 1 "waaaabllle"
  > sVals <- fromList U 1 [10, 20, 30, 40, 50, 60, 70, 80, 90, 100 :: Double]
  > sResult <- map_i U (\(key, (acc, n)) -> (key, acc / n))
  >        =<< foldGroupsBy_i U U (==) (\x (acc, n) -> (acc + x, n + 1)) (0, 0) sKeys sVals
  > toList1 0 sResult
  Just [10.0,35.0,60.0,80.0,100.0]

  Arguments: layout for group names; layout for fold results; function to check if consecutive elements are in the same group; worker function for the fold; initial state when folding each segment; names that determine groups; values to fold.

Data.Repa.Flow.Default.Debug

more
  Given a source index and a length, pull enough chunks from the source to build a list of the requested length, and discard the remaining elements in the final chunk. This function is intended for interactive debugging. If you want to retain the rest of the final chunk then use head_i.
  Arguments: index of source in bundle; bundle of sources.

more'
  Like more, but also specify how many elements you want.
  Arguments: index of source in bundle; bundle of sources.

moret / moret'
  Like more and more', but print results in a tabular form to the console.

morer / morer'
  Like more and more', but show elements in their raw format.

Data.Repa.Flow.Default.IO.TSV

sourceTSV
  Read a file containing Tab-Separated-Values.

Data.Repa.Flow.Default.IO.CSV

sourceCSV
  Read a file containing Comma-Separated-Values. TODO: handle escaped commas. TODO: check CSV file standard.
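Stepping back to the fold operators above, a minimal sketch of a whole-flow fold with foldlAllS, whose arguments (combining function, starting value, sources) follow the parameter docs. Untested.

  import Data.Repa.Flow
  import Data.Repa.Array.Material

  -- Sum every element of a one-stream flow.
  sumAll :: IO Int
  sumAll
   = do ss <- fromList U 1 [1 .. 100 :: Int]
        foldlAllS (+) 0 ss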
Data.Repa.Flow.Default.SizedIO

Versions of the default IO operators where the chunk length is given explicitly.

sourceCSV
  Read a file containing Comma-Separated-Values.

sourceTSV
  Read a file containing Tab-Separated-Values.

sourceRecords
  Read complete records of data from a file, into chunks of the given length. We read as many complete records as will fit into each chunk. The records are separated by a special terminating character, which the given predicate detects. After reading a chunk of data we seek the file to just after the last complete record that was read, so we can continue to read more complete records next time. If we cannot fit at least one complete record in the chunk then perform the given failure action. Limiting the chunk length guards against the case where a large input file is malformed, as we won't try to read the whole file into memory. Data is read into foreign memory without copying it through the GHC heap. The provided file handle must support seeking, else you'll get an exception. Each file is closed the first time the consumer tries to pull a record from the associated stream when no more are available.
  Arguments: size of chunk to read in bytes; detect the end of a record; action to perform if we can't get a whole record; buckets.

sourceLines
  Read complete lines of data from a text file, using the given chunk length. We read as many complete lines as will fit into each chunk. The trailing new-line characters are discarded. Data is read into foreign memory without copying it through the GHC heap. The provided file handle must support seeking, else you'll get an exception. Each file is closed the first time the consumer tries to pull a line from the associated stream when no more are available.

sourceChars
  Read 8-bit ASCII characters from some files, using the given chunk length.

sourceBytes
  Read data from some files, using the given chunk length.

sinkLines
  Write vectors of text lines to the given file handles. Data is copied into a new buffer to insert newlines before being written out.
  Arguments: layout of chunks; layout of lines in chunks; buckets.

sinkChars
  Write 8-bit ASCII characters to some files.

sinkBytes
  Write bytes to some file.

Data.Repa.Flow.Default.IO

defaultChunkSize
  The default chunk size of 64kB.

sourceChars, sourceBytes
  Like the same-named functions in Data.Repa.Flow.Default.SizedIO, but with the default chunk size.

sourceRecords, sourceLines
  Like the same-named functions in Data.Repa.Flow.Default.SizedIO, but with the default chunk size and error action.

sinkLines, sinkChars, sinkBytes
  Aliases for the same-named functions in Data.Repa.Flow.Default.SizedIO.
  Arguments to sinkLines: layout for chunks of lines; layout for lines; buckets.
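To finish, a sketch of the default interface end to end, adapted from the package's getting-started example: read the lines of a dictionary file, upper-case them, and write them back out. The array-level mapS call and the U layout chosen for the output lines are assumptions; treat this as untested.

  import Data.Repa.Flow
  import qualified Data.Repa.Array as A
  import Data.Repa.Array.Material
  import Data.Char (toUpper)

  main :: IO ()
  main
   = do ws  <- fromFiles ["/usr/share/dict/words"] sourceLines
        up  <- map_i B (A.mapS U toUpper) ws      -- upper-case each line
        out <- toFiles ["words-upper.txt"] (sinkLines B U)
        drainS up out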