Data.Repa.Flow.States
---------------------

Refs
  A collection of mutable references.

extentRefs
  Get the extent of the collection.

newRefs
  Allocate a new state of the given arity, also returning an index to the
  first element of the collection.

writeRefs
  Write an element of the state.

readRefs
  Read an element of the state.

Next
  first: Get the zero for this index type.
  next:  Given an index and an arity, get the next index after this one,
         or Nothing if there aren't any more.
  check: Check if an index is valid for this arity.

foldRefsM
  Fold all the elements in a collection of refs.

Instances are provided for tuple indices, integer indices and unit indices.


Data.Repa.Flow.Generic.Base
---------------------------

Sinks
  A bundle of stream sinks, indexed by a value of type i, in some monad m,
  accepting elements of type e. Elements can be pushed to each stream in
  the bundle individually.

  sinksArity: Number of streams in the bundle.
  sinksPush:  Push an element to one of the streams in the bundle.
  sinksEject: Signal that no more elements will ever be available for this
              sink. It is ok to eject the same stream multiple times.

Sources
  A bundle of stream sources, indexed by a value of type i, in some monad
  m, producing elements of type e. Elements can be pulled from each stream
  in the bundle individually.

  sourcesArity: Number of sources in this bundle.
  sourcesPull:  Function to pull data from a bundle. Give it the index of
                the desired stream, a continuation that accepts an element,
                and a continuation to invoke when no more elements will
                ever be available.

mapIndex_i
  Transform the stream indexes of a bundle of sources. The given transform
  functions should be inverses of each other, else you'll get a confusing
  result.

mapIndex_o
  Transform the stream indexes of a bundle of sinks. The given transform
  functions should be inverses of each other, else you'll get a confusing
  result.

flipIndex2_i
  For a bundle of sources with a 2-d stream index, flip the components of
  the index.

flipIndex2_o
  For a bundle of sinks with a 2-d stream index, flip the components of
  the index.

finalize_i
  Attach a finalizer to a bundle of sources. For each stream in the bundle,
  the finalizer will be called the first time a consumer of that stream
  tries to pull an element when no more are available. The provided
  finalizer will be run after any finalizers already attached to the
  source.

finalize_o
  Attach a finalizer to a bundle of sinks. For each stream in the bundle,
  the finalizer will be called the first time that stream is ejected. The
  provided finalizer will be run after any finalizers already attached to
  the sink.


Data.Repa.Flow.Generic.Connect
------------------------------

dup_oo
  Send the same data to two consumers. Given two argument sinks, yield a
  result sink. Pushing to the result sink causes the same element to be
  pushed to both argument sinks.

dup_io
  Send the same data to two consumers. Given an argument source and an
  argument sink, yield a result source. Pulling an element from the result
  source pulls from the argument source and pushes that element to the
  sink, as well as returning it via the result source.

dup_oi
  Send the same data to two consumers. Like dup_io, but with the arguments
  flipped.

connect_i
  Connect an argument source to two result sources. Pulling from either
  result source pulls from the argument source. Each result source only
  gets the elements pulled at the time, so if one side pulls all the
  elements the other side won't get any.
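As a minimal sketch of how dup_io composes with the capture_o sink
documented later in this module set (the argument order and the layout
name B are assumptions based on the signatures described here):

> import Data.Repa.Flow.Generic
> import Data.Repa.Array.Material
> -- Pull a single-stream source through dup_io; every element we pull is
> -- also echoed into the capture sink, so it appears in 'result' as well.
> result <- capture_o B (1 :: Int) $ \k -> do
>       ss  <- fromList (1 :: Int) [10, 20, 30 :: Int]
>       ss' <- dup_io ss k
>       _   <- toList1 0 ss'
>       return ()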
funnel_i
  Given a bundle of sources containing several streams, produce a new
  bundle containing a single stream that gets data from the former.
  Streams from the source are consumed in their natural order, and a
  complete stream is consumed before moving onto the next one.

> import Data.Repa.Flow.Generic
> toList1 () =<< funnel_i =<< fromList (3 :: Int) [11, 22, 33]
[11,22,33,11,22,33,11,22,33]

funnel_o
  Given a bundle of sinks consisting of a single stream, produce a new
  bundle of the given arity that sends all data to the former, ignoring
  the stream index.

  The argument stream is ejected only when all of the streams in the
  result bundle have been ejected.

  Using this function in conjunction with parallel operators like drainP
  introduces non-determinism. Elements pushed to different streams in the
  result bundle could enter the single stream in the argument bundle in
  any order.

> import Data.Repa.Flow.Generic
> import Data.Repa.Array.Material
> import Data.Repa.Nice
> let things = [(0 :: Int, "foo"), (1, "bar"), (2, "baz")]
> result <- capture_o B () (\k -> funnel_o 4 k >>= pushList things)
> nice result
[((),"foo"),((),"bar"),((),"baz")]


Data.Repa.Flow.Generic.List
---------------------------

fromList
  Given an arity and a list of elements, yield sources that each produce
  all the elements.

toList1
  Drain a single source into a list.

takeList1
  Drain the given number of elements from a single source into a list.

pushList
  Push elements into the associated streams of a bundle of sinks.

pushList1
  Push the elements of a list into the given stream of a bundle of sinks.


Data.Repa.Flow.Generic.Map
--------------------------

map_i
  Apply a function to every element pulled from some sources, producing
  some new sources.

smap_i
  Like map_i, but the worker function is also given the stream index.

map_o
  Apply a function to every element pushed to some sinks, producing some
  new sinks.

smap_o
  Like map_o, but the worker function is also given the stream index.

szipWith_ii
  Combine the elements of two flows with the given function. The worker
  function is also given the stream index.

szipWith_io
  Like szipWith_ii, but take a bundle of sinks for the result elements,
  and yield a bundle of sinks to accept the b elements.

szipWith_oi
  Like szipWith_ii, but take a bundle of sinks for the result elements,
  and yield a bundle of sinks to accept the a elements.


Data.Repa.Flow.Generic.Process
------------------------------

compact_i
  Combination of fold and filter. We walk over the stream start-to-end,
  maintaining an accumulator. At each point we can choose to emit an
  element, or not.

scan_i
  Start-to-end scan over each stream in a bundle.

indexed_i
  For each stream in a bundle of sources, associate each element with its
  corresponding position in the stream.


Data.Repa.Flow.Generic.Eval
---------------------------

drainS
  Pull all available values from the sources and push them to the sinks.
  Streams in the bundle are processed sequentially, from first to last.
  If the Sources and Sinks have different numbers of streams then we only
  evaluate the common subset.
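A minimal end-to-end sketch tying the operators above together: build
sources from a list, map over them, and drain into a sink that prints each
element. The argument order of trigger_o and the shape of its worker
function are assumptions based on the documentation in this module set.

> import Data.Repa.Flow.Generic
> ss  <- fromList (2 :: Int) [1, 2, 3 :: Int]
> ss' <- map_i (* 10) ss
> -- trigger_o: like watch_o, but does not pass elements to another sink
> out <- trigger_o (2 :: Int) (\x -> putStrLn ("element: " ++ show x))
> drainS ss' out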
drainP
  Pull all available values from the sources and push them to the sinks,
  in parallel. We fork a thread for each of the streams and evaluate them
  all in parallel. If the Sources and Sinks have different numbers of
  streams then we only evaluate the common subset.

consumeS
  Pull all available values from the sources and pass them to the given
  action.


Data.Repa.Flow.Generic.Operator
-------------------------------

project_i
  Project out a single stream source from a bundle.

project_o
  Project out a single stream sink from a bundle.

repeat_i
  Yield sources that always produce the same value.

replicate_i
  Yield sources of the given length that always produce the same value.

prepend_i
  Prepend some more elements onto the front of some sources.

prependOn_i
  Like prepend_i, but only prepend the elements to the streams that match
  the given predicate.

head_i
  Split the given number of elements from the head of a source, returning
  those elements in a list, and yielding a new source for the rest.

groups_i
  From a stream of values which has consecutive runs of identical values,
  produce a stream of the lengths of these runs.

  Example: groups [4, 4, 4, 3, 3, 1, 1, 1, 4] = [3, 2, 3, 1]

pack_ii
  Given a stream of flags and a stream of values, produce a new stream of
  values where the corresponding flag was True. The length of the result
  is the length of the shorter of the two inputs.

folds_ii
  Segmented fold.

watch_i
  Apply a monadic function to every element pulled from some sources,
  producing some new sources.

watch_o
  Pass elements to the provided action as they are pushed into the sink.

capture_o
  Create a bundle of sinks of the given arity and capture any data pushed
  to it. Takes the name of the destination layout, the arity of the
  result bundle, and a function to push data into the sinks.

> import Data.Repa.Flow.Generic
> import Data.Repa.Array.Material
> import Data.Repa.Nice
> import Control.Monad
> liftM nice $ capture_o B 4 (\k -> pushList [(0 :: Int, "foo"), (1, "bar"), (0, "baz")] k)
[(0,"foo"),(1,"bar"),(0,"baz")]

rcapture_o
  Like capture_o, but also return the result of the push function.

trigger_o
  Like watch_o, but doesn't pass elements to another sink.

ignore_o
  A sink that ignores all incoming data. This sink is strict in the
  elements, so they are demanded before being discarded. Haskell
  debugging thunks attached to the elements will be demanded.

abandon_o
  A sink that drops all data on the floor. This sink is non-strict in the
  elements. Haskell tracing thunks attached to the elements will *not* be
  demanded.

trace_o
  Use the trace function from Debug.Trace to print each element that is
  pushed to a sink. This function is intended for debugging only, and is
  not intended for production code.


Data.Repa.Flow.Generic.Array.Distribute
---------------------------------------

distribute_o
  Given a bundle of sinks indexed by an Int, produce a result sink for
  arrays. Each time an array is pushed to the result sink, its elements
  are pushed to the corresponding streams of the argument sink. If there
  are more elements than sinks then give them to the spill action, which
  receives the spilled element along with its index in the array.

    | ..                   |
    | [w0, x0, y0, z0]     |   :: Sinks () IO (Array l a)
    | [w1, x1, y1, z1, u1] |   (sink for a single stream of arrays)
    | ..                   |
        |    |    |    |    |
        v    v    v    v    '-----> spilled

    | .. | .. | .. | .. |
    | w0 | x0 | y0 | z0 |        :: Sinks Int IO a
    | w1 | x1 | y1 | z1 |        (sink for several streams of elements)
    | .. | .. | .. | .. |
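A hedged sketch of wiring distribute_o to a capture sink, in the style of
the shuffle_o example later in this section; the spill handler's argument
order is an assumption based on the parameter docs above.

> import Data.Repa.Flow.Generic as G
> import Data.Repa.Array as A
> import Data.Repa.Array.Material
> import Data.Repa.Nice
> let arr = A.fromList B [100, 200, 300, 400, 500 :: Int]
> -- four element sinks; the fifth element of the array has no sink, so it spills
> result <- capture_o B (4 :: Int)
>             (\k -> distribute_o (\ix x -> putStrLn ("spilled: " ++ show x)) k
>                      >>= pushList1 () [arr])
> nice result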
ddistribute_o
  Like distribute_o, but drop spilled elements on the floor.

distribute2_o
  Like distribute_o, but with 2-d stream indexes. Given the argument and
  result sinks, when pushing to the result the stream index is used as
  the first component for the argument sink, and the index of the element
  in its array is used as the second component. If you want the
  components of the stream index the other way around then apply
  flipIndex2_o to the argument sinks. Takes a spill action, which is
  given the spilled element along with its index in the array, and the
  sinks to push elements into.

ddistribute2_o
  Like distribute2_o, but drop spilled elements on the floor.


Data.Repa.Flow.Generic.Array.Shuffle
------------------------------------

shuffle_o
  Given a bundle of argument sinks, produce a result sink. Arrays of
  indices and elements are pushed to the result sink. On doing so, the
  elements are pushed into the corresponding streams of the argument
  sinks. If the index associated with an element does not have a
  corresponding stream in the argument sinks, then pass it to the
  provided spill function.

    | ..                                            |
    | [(0, v0), (1, v1), (0, v2), (0, v3), (2, v4)] |  :: Sinks Int IO (Array l (Int, a))
    | ..                                            |
         |          |                      |
         v          v                      '------> spilled

    | ..           | ..   |
    | [v0, v2, v3] | [v1] |   :: Sinks Int IO (Array l a)
    | ..           | ..   |

  The following example uses capture_o to demonstrate how the shuffle_o
  operator can be used as one step of a bucket-sort. We start with two
  arrays of key-value pairs. In the result, the values from each block
  that had the same key are packed into the same tuple (bucket).

> import Data.Repa.Flow.Generic as G
> import Data.Repa.Array as A
> import Data.Repa.Array.Material as A
> import Data.Repa.Nice
> let arr1 = A.fromList B [(0, 'a'), (1, 'b'), (2, 'c'), (0, 'd'), (0, 'c')]
> let arr2 = A.fromList B [(0, 'A'), (3, 'B'), (3, 'C')]
> result :: Array B (Int, Array U Char)
>        <- capture_o B 4 (\k -> shuffle_o B (error "spilled") k
>                                  >>= pushList1 () [arr1, arr2])
> nice result
[(0,"adc"),(1,"b"),(2,"c"),(0,"A"),(3,"BC")]

dshuffle_o
  Like shuffle_o, but drop spilled elements on the floor.

dshuffleBy_o
  Like dshuffle_o, but use the given function to decide which stream of
  the argument bundle each element should be pushed into.

> import Data.Repa.Flow.Generic as G
> import Data.Repa.Array as A
> import Data.Repa.Array.Material as A
> import Data.Repa.Nice
> import Data.Char
> let arr1 = A.fromList B "FooBAr"
> let arr2 = A.fromList B "BazLIKE"
> result :: Array B (Int, Array U Char)
>        <- capture_o B 2 (\k -> dshuffleBy_o B (\x -> if isUpper x then 0 else 1) k
>                                  >>= pushList1 () [arr1, arr2])
> nice result
[(0,"FBA"),(1,"oor"),(0,"BLIKE"),(1,"az")]

  Parameters: the name of the source layout, a handler for spilled
  elements (shuffle_o) or a function giving the stream number for an
  element (dshuffleBy_o), and the sinks to push results to.


Data.Repa.Flow.Generic.Array.Chunk
----------------------------------

chunkOn_i
  Take elements from a flow and pack them into chunks. The chunks are
  limited to the given maximum length. A predicate can also be supplied
  to detect the last element in a chunk. Parameters: layout for result
  chunks, maximum chunk length, predicate to detect the last element in a
  chunk, and the element sources; yields the chunk sources.


Data.Repa.Flow.Generic.Array.Unchunk
------------------------------------

unchunk_i
  Take a flow of chunks and flatten it into a flow of the individual
  elements.
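A small sketch of chunking and flattening, assuming the argument order
documented above for chunkOn_i (layout, maximum length, end-of-chunk
predicate, sources):

> import Data.Repa.Flow.Generic
> import Data.Repa.Array.Material
> ss  <- fromList (1 :: Int) "line one\nline two\n"
> -- pack characters into chunks of at most 16, ending each chunk at '\n'
> cs  <- chunkOn_i U 16 (== '\n') ss
> -- flattening the chunks should recover the original character stream
> ss' <- unchunk_i cs
> toList1 0 ss'
"line one\nline two\n"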
Data.Repa.Flow.Generic.IO.Sieve
-------------------------------

sieve_o
  Create an output sieve that writes data to an indeterminate number of
  output files. Each new element is appended to its associated file.
  Parameters: the maximum payload size of in-memory data, the maximum
  number of in-memory chunks, and a function that produces the desired
  file path and output record for each element, or Nothing if it should
  be discarded.


Data.Repa.Flow.IO.Bucket
------------------------

Bucket
  A bucket represents a portion of a whole data-set on disk, and contains
  a file handle that points to the next piece of data to be read or
  written. The bucket could be created from a portion of a single flat
  file, or be one file of a pre-split data set. The main advantage over a
  plain Handle is that a Bucket can represent a small portion of a single
  large file.

  bucketFilePath: Physical location of the file, if known.
  bucketStartPos: Starting position of the bucket in the file, in bytes.
  bucketLength:   Maximum length of the bucket, in bytes. If Nothing then
                  the length is indeterminate, which is used when writing
                  to files.
  bucketHandle:   File handle for the bucket. If several buckets have
                  been created from a single file, then all buckets will
                  have handles bound to that file, but they will be at
                  different positions.

openBucket
  Open a file as a single bucket.

hBucket
  Wrap an existing file handle as a bucket. The starting position is set
  to 0.

fromFiles
  Open some files as buckets and use them as Sources. Takes the files to
  open and a consumer.

fromFiles'
  Like fromFiles, but take a list of file paths.

fromDir
  Open all the files in a directory as separate buckets. This operation
  may fail with the same exceptions as getDirectoryContents.

fromSplitFile
  Open a file containing atomic records and split it into the given
  number of evenly sized buckets. The records are separated by a special
  terminating character, which the given predicate detects. The file is
  split cleanly on record boundaries, so we get a whole number of records
  in each bucket. As the records can be of varying size the buckets are
  not guaranteed to have exactly the same length, in either records or
  bytes, though we try to give them approximately the same number of
  bytes. Parameters: number of buckets, predicate to detect the end of a
  record, file to open, and a consumer.

fromSplitFileAt
  Like fromSplitFile, but start at the given offset.

  (An internal helper advances a file handle until we reach a byte that
  matches the given predicate, then returns the final file position.)

toFiles
  Open some files for writing as individual buckets and pass them to the
  given consumer. Takes the file paths and a worker that writes data to
  the buckets.

toFiles'
  Like toFiles, but take an array of Buckets.

toDir
  Create a new directory of the given name, containing the given number
  of buckets. If the directory is named somedir then the files are named
  somedir/000000, somedir/000001, somedir/000002 and so on. Parameters:
  number of buckets to create, path to directory, and a consumer.

toDirs
  Given a list of directories, create those directories and open the
  given number of output files per directory. In the resulting array of
  buckets, the outer dimension indexes each directory, and the inner one
  indexes each file in its directory. For each directory somedir the
  files are named somedir/000000, somedir/000001, somedir/000002 and so
  on. Parameters: number of buckets to create per directory, paths to
  directories, and a consumer.

bClose
  Close a bucket, releasing the contained file handle.

bIsOpen
  Check if the bucket is currently open.

bAtEnd
  Check if the contained file handle is at the end of the bucket.

bSeek
  Seek to a position within a bucket.

bGetArray
  Get some data from a bucket.

bPutArray
  Put some data in a bucket.
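A hedged sketch combining fromSplitFile with the record sources documented
in the next module; the file name is hypothetical and the record
terminator is assumed to be a newline byte:

> import Data.Repa.Flow.Generic
> import Data.Repa.Flow.Generic.IO
> import Data.Repa.Flow.IO.Bucket
> -- split one large file into 8 buckets on record boundaries, then
> -- stream complete records from the first bucket and count the chunks
> fromSplitFile 8 (== 0x0a) "big.log" $ \buckets -> do
>       ss     <- sourceRecords (64 * 1024) (== 0x0a) (error "over-long record") buckets
>       chunks <- toList1 0 ss
>       print (length chunks)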
Data.Repa.Flow.Generic.IO.Base
------------------------------

sourceRecords
  Read complete records of data from a bucket, into chunks of the given
  length. We read as many complete records as will fit into each chunk.
  The records are separated by a special terminating character, which the
  given predicate detects. After reading a chunk of data we seek the
  bucket to just after the last complete record that was read, so we can
  continue to read more complete records next time. If we cannot fit at
  least one complete record in the chunk then perform the given failure
  action. Limiting the chunk length guards against the case where a large
  input file is malformed, as we won't try to read the whole file into
  memory.

  Data is read into foreign memory without copying it through the GHC
  heap. The provided file handle must support seeking, else you'll get an
  exception.

  Parameters: chunk length in bytes, predicate to detect the end of a
  record, action to perform if we can't get a whole record, and the
  source buckets.

sourceChunks
  Like sourceRecords, but produce all records in a single vector.

sourceChars
  Read 8-bit ASCII characters from some files, using the given chunk
  length. Data is read into foreign memory without copying it through the
  GHC heap. All chunks have the same size, except possibly the last one.

sourceBytes
  Read data from some files, using the given chunk length. Data is read
  into foreign memory without copying it through the GHC heap. All chunks
  have the same size, except possibly the last one.

sinkLines
  Write vectors of text lines to the given file handles. Data is copied
  into a new buffer to insert newlines before being written out.
  Parameters: layout of chunks of lines, layout of lines, and the
  buckets.

sinkChars
  Write chunks of 8-bit ASCII characters to the given file handles. Data
  is copied into a foreign buffer to truncate the characters to 8 bits
  each before being written out.

sinkBytes
  Write chunks of bytes to the given file handles. Data is written out
  directly from the provided buffer.


Data.Repa.Flow.Generic.IO.XSV
-----------------------------

sourceCSV
  Read a file containing Comma-Separated-Values.
  TODO: handle escaped commas; check against the CSV file standard.

sourceTSV
  Read a file containing Tab-Separated-Values.


Data.Repa.Flow.Generic.IO.Lines
-------------------------------

sourceLinesFormat
  Read lines from a named text file, in a chunk-wise manner, converting
  each line to values with the given format. Parameters: chunk length,
  action if we find a line longer than the chunk length, action if we
  can't convert a row, and the format of each line.

sourceLinesFormatFromLazyByteString
  Read lines from a lazy byte string, in a chunk-wise manner, converting
  each line to values with the given format. Parameters: number of
  streams in the result bundle, action if we can't convert a row, format
  of each line, the lazy byte string, and the number of header lines to
  skip at the start.
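A hedged sketch of a chunk-wise file copy built from the operators above;
the file names are hypothetical:

> import Data.Repa.Flow.Generic
> import Data.Repa.Flow.Generic.IO
> import Data.Repa.Flow.IO.Bucket
> fromFiles ["input.dat"] $ \bsIn ->
>   toFiles ["output.dat"] $ \bsOut -> do
>       ss <- sourceBytes (64 * 1024) bsIn   -- chunks of raw bytes
>       ok <- sinkBytes bsOut                -- write chunks straight out
>       drainS ss ok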
Data.Repa.Flow.Chunked.Base
---------------------------

Flow
  Shorthand for common type classes.

Sinks
  A bundle of sinks, where the elements are chunked into arrays.

Sources
  A bundle of sources, where the elements are chunked into arrays.

fromList
  Given an arity and a list of elements, yield sources that each produce
  all the elements. All elements are stuffed into a single chunk, and
  each stream is given the same chunk.

fromLists
  Like fromList, but take a list of lists, where each of the inner lists
  is packed into a single chunk.

toList1
  Drain a single source into a list of elements.

toLists1
  Drain a single source into a list of chunks.

takeList1
  Split the given number of elements from the head of a source, returning
  those elements in a list, and yielding a new source for the rest. We
  pull whole chunks from the source stream until we have at least the
  desired number of elements. The leftover elements in the final chunk
  are visible in the result Sources.

finalize_i
  Attach a finalizer to a bundle of sources. For each stream in the
  bundle, the finalizer will be called the first time a consumer of that
  stream tries to pull an element when no more are available. The
  provided finalizer will be run after any finalizers already attached to
  the source.

finalize_o
  Attach a finalizer to a bundle of sinks. The finalizer will be called
  the first time the stream is ejected. The provided finalizer will be
  run after any finalizers already attached to the sink.


Data.Repa.Flow.Chunked.Process
------------------------------

process_i, unfolds_i
  Apply a generic stream process to all the streams in a bundle of
  sources. Parameters: worker function, initial state, and the input
  sources; yields the result sources.


Data.Repa.Flow.Chunked.IO
-------------------------

sourceRecords
  Like fileSourceRecords, but taking an existing file handle. Parameters:
  size of chunk to read in bytes, predicate to detect the end of a
  record, action to perform if we can't get a whole record, and the file
  handles.

sourceChars
  Read 8-bit ASCII characters from some files, using the given chunk
  length.

sourceBytes
  Read data from some files, using the given chunk length.

sinkChars
  Write 8-bit ASCII characters to the given file handles.

sinkBytes
  Write chunks of data to the given file handles.


Data.Repa.Flow.Chunked.Map
--------------------------

map_i
  Map a function over elements pulled from a source.

map_o
  Map a function over elements pushed into a sink.

zipWith_i
  Combine the elements of two flows with the given function.


Data.Repa.Flow.Chunked.Fold
---------------------------

foldlS
  Fold all elements of all streams in a bundle individually, returning an
  array of per-stream results. Parameters: destination layout, combining
  function, starting value for the fold, and the input elements to fold.

foldlAllS
  Fold all elements of all streams in a bundle together, one stream after
  the other, returning the single final value. Parameters: combining
  function, starting value for the fold, and the input elements to fold.


Data.Repa.Flow.Chunked.Folds
----------------------------

FoldsDict
  Dictionaries needed to perform a segmented fold.

folds_i
  Segmented fold over vectors of segment lengths and input values.
  Parameters: layout for group names, layout for fold results, worker
  function, initial state when folding each segment, segment lengths, and
  the input elements to fold; yields the result elements.


Data.Repa.Flow.Chunked.Generic
------------------------------

watch_i
  Hook a monadic function to some sources, which will be passed every
  chunk that is pulled from the result.

watch_o
  Hook a monadic function to some sinks, which will be passed every chunk
  that is pushed to the result.

trigger_o
  Like watch_o, but discard the incoming chunks after they are passed to
  the function.

ignore_o
  A sink that ignores all incoming data. This sink is strict in the
  *chunks*, so they are demanded before being discarded. Haskell
  debugging thunks attached to the chunks will be demanded, but thunks
  attached to elements may not be -- depending on whether the chunk
  representation is strict in the elements.

abandon_o
  Yield a bundle of sinks of the given arity that drops all data on the
  floor. This sink is non-strict in the chunks. Haskell tracing thunks
  attached to the chunks will *not* be demanded.
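A hedged sketch of a per-stream chunked fold, assuming the parameter order
documented for foldlS above (layout, function, start value, sources):

> import Data.Repa.Flow.Chunked
> import Data.Repa.Array.Material
> ss   <- fromList 1 [1 .. 10 :: Int]
> -- one sum per stream; with arity 1 the result array has one element
> sums <- foldlS B (+) 0 ss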
Data.Repa.Flow.Chunked.Groups
-----------------------------

GroupsDict
  Dictionaries needed to perform a grouping.

groupsBy_i
  From a stream of values which has consecutive runs of identical values,
  produce a stream of the lengths of these runs.

    groupsBy (==) [4, 4, 4, 3, 3, 1, 1, 1, 4] => [3, 2, 3, 1]

  Parameters: layout for group names, layout for group lengths, a
  predicate saying whether successive elements should be grouped, and the
  source values.


Data.Repa.Flow.Generic.Debug
----------------------------

more
  Given a source index and a length, pull enough chunks from the source
  to build a list of the requested length, and discard the remaining
  elements in the final chunk. This function is intended for interactive
  debugging. If you want to retain the rest of the final chunk then use
  head_i instead.

more'
  Like more, but also specify how many elements you want.

moret
  Like more, but print results in a tabular form to the console.

moret'
  Like more', but print results in tabular form to the console.

morer
  Like more, but show elements in their raw format.

morer'
  Like more', but show elements in their raw format.

  Each of these takes the index of the source in the bundle, and the
  bundle of sources.


Data.Repa.Flow.Simple.Base
--------------------------

Sink
  Sink consisting of a single stream.

Source
  Source consisting of a single stream.

finalize_i
  Attach a finalizer to a source. The finalizer will be called the first
  time a consumer of that stream tries to pull an element when no more
  are available. The provided finalizer will be run after any finalizers
  already attached to the source.

finalize_o
  Attach a finalizer to a sink. The finalizer will be called the first
  time the stream is ejected. The provided finalizer will be run after
  any finalizers already attached to the sink.


Data.Repa.Flow.Simple.IO
------------------------

sourceRecords
  Read complete records of data from a file, using the given chunk
  length. The records are separated by a special terminating character,
  which the given predicate detects. After reading a chunk of data we
  seek to just after the last complete record that was read, so we can
  continue to read more complete records next time.

  If we cannot find an end-of-record terminator in the chunk then apply
  the given failure action. The records can be no longer than the chunk
  length. This fact guards against the case where a large input file is
  malformed and contains no end-of-record terminators, as we won't try to
  read the whole file into memory.

  Data is read into foreign memory without copying it through the GHC
  heap. All chunks have the same size, except possibly the last one. The
  provided file handle must support seeking, else you'll get an
  exception. The file will be closed the first time the consumer tries to
  pull an element from the associated stream when no more are available.

  Parameters: size of chunk to read in bytes, predicate to detect the end
  of a record, action to perform if we can't get a whole record, and the
  file handle.

sourceChunks
  Read data from a file, using the given chunk length. Data is read into
  foreign memory without copying it through the GHC heap. All chunks have
  the same size, except possibly the last one. The file will be closed
  the first time the consumer tries to pull an element from the
  associated stream when no more are available.

sinkChunks
  Write chunks of data to the given files. The file will be closed when
  the associated stream is ejected.
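A hedged sketch of reading newline-terminated records with the simple API,
assuming the parameter order documented above; the file name is
hypothetical:

> import Data.Repa.Flow.Simple
> import System.IO
> h  <- openFile "records.dat" ReadMode
> -- 64kB chunks; records end at the newline byte
> ss <- sourceRecords (64 * 1024) (== 0x0a) (error "record too large") h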
Data.Repa.Flow.Simple.List
--------------------------

fromList
  Given a list of elements, yield a source that produces all the
  elements.

toList
  Drain a source into a list.

takeList
  Drain the given number of elements from a single source into a list.


Data.Repa.Flow.Simple.Operator
------------------------------

repeat_i
  Yield a source that always produces the same value.

replicate_i
  Yield a source of the given length that always produces the same value.

prepend_i
  Prepend some more elements to the front of a source.

map_i
  Apply a function to every element pulled from some source, producing a
  new source.

map_o
  Apply a function to every element pushed to some sink, producing a new
  sink.

dup_oo
  Send the same data to two consumers. Given two argument sinks, yield a
  result sink. Pushing to the result sink causes the same element to be
  pushed to both argument sinks.

dup_io
  Send the same data to two consumers. Given an argument source and an
  argument sink, yield a result source. Pulling an element from the
  result source pulls from the argument source and pushes that element to
  the sink, as well as returning it via the result source.

dup_oi
  Send the same data to two consumers. Like dup_io, but with the
  arguments flipped.

connect_i
  Connect an argument source to two result sources. Pulling from either
  result source pulls from the argument source. Each result source only
  gets the elements pulled at the time, so if one side pulls all the
  elements the other side won't get any.

head_i
  Split the given number of elements from the head of a source, returning
  those elements in a list, and yielding a new source for the rest.

peek_i
  Peek at the given number of elements in the stream, returning a result
  stream that still produces them all.

groups_i
  From a stream of values which has consecutive runs of identical values,
  produce a stream of the lengths of these runs.

  Example: groups [4, 4, 4, 3, 3, 1, 1, 1, 4] = [3, 2, 3, 1]

pack_ii
  Given a stream of flags and a stream of values, produce a new stream of
  values where the corresponding flag was True. The length of the result
  is the length of the shorter of the two inputs.

folds_ii
  Segmented fold.

watch_i
  Apply a monadic function to every element pulled from a source,
  producing a new source.

watch_o
  Pass elements to the provided action as they are pushed to the sink.

trigger_o
  Like watch_o, but doesn't pass elements to another sink.

ignore_o
  A sink that ignores all incoming elements. This sink is strict in the
  elements, so they are demanded before being discarded. Haskell
  debugging thunks attached to the elements will be demanded.

abandon_o
  A sink that drops all data on the floor. This sink is non-strict in the
  elements. Haskell tracing thunks attached to the elements will *not* be
  demanded.

drainS
  Pull all available values from the source and push them to the sink.


Data.Repa.Flow.Chunked (evaluation)
-----------------------------------

drainS
  Pull all available values from the sources and push them to the sinks.

consumeS
  Pull all available values from the sources and pass them to the given
  action.


Data.Repa.Flow.Auto
-------------------

Flow
  Shorthand for common type classes.

Sinks
  A bundle of stream sinks, where the elements of the stream are chunked
  into arrays.

Sources
  A bundle of stream sources, where the elements of the stream are
  chunked into arrays.

sourcesArity
  Yield the number of streams in the bundle.

sinksArity
  Yield the number of streams in the bundle.
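A hedged sketch of a single-stream pipeline with the simple API, assuming
fromList here takes only the element list:

> import Data.Repa.Flow.Simple
> ss  <- fromList [1 .. 5 :: Int]
> ss' <- map_i (* 2) ss
> toList ss'
[2,4,6,8,10]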
fromList
  Given an arity and a list of elements, yield sources that each produce
  all the elements. All elements are stuffed into a single chunk, and
  each stream is given the same chunk.

fromLists
  Like fromList, but take a list of lists. Each of the inner lists is
  packed into a single chunk.

toList1
  Drain a single source from a bundle into a list of elements. If the
  index does not specify a valid stream then the result will be empty.

toLists1
  Drain a single source from a bundle into a list of chunks. If the index
  does not specify a valid stream then the result will be empty.

fromArray
  Given an arity and an array of elements, yield sources that each
  produce all the elements. All elements are stuffed into a single chunk,
  and each stream is given the same chunk.

fromArrays
  Like fromArray, but take an array of arrays. Each of the inner arrays
  is packed into a single chunk.

toArray1
  Drain a single source from a bundle into an array of elements. If the
  index does not specify a valid stream then the result will be empty.

toArrays1
  Drain a single source from a bundle into an array of chunks. If the
  index does not specify a valid stream then the result will be empty.


Data.Repa.Flow.Auto.Select
--------------------------

select_i
  Select a single column from a flow of rows of fields. Parameters: index
  of the column to keep, and the sources of complete rows; yields sources
  of the selected column.

select_o
  Select a single column from a flow of fields. Parameters: index of the
  column to keep, and the sinks for the selected column; yields sinks for
  complete rows.

discard_i
  Discard a single column from a flow of fields. Parameters: index of the
  column to discard, and the sources of complete rows; yields sources of
  partial rows.

discard_o
  Discard a single column from a flow of fields. Parameters: index of the
  column to discard, and the sinks for partial rows; yields sinks for
  complete rows.

mask_i
  Mask columns from a flow of fields. Parameters: column mask, and the
  sources of complete rows; yields sources of masked rows.

mask_o
  Mask columns from a flow of fields.


Data.Repa.Flow.Auto.ZipWith
---------------------------

zipWith3_i
  Combine corresponding elements of three sources with the given
  function.

zipWith4_i
  Combine corresponding elements of four sources with the given function.

zipWith5_i
  Combine corresponding elements of five sources with the given function.

zipWith6_i
  Combine corresponding elements of six sources with the given function.

zipWith7_i
  Combine corresponding elements of seven sources with the given
  function.


Data.Repa.Flow (operators)
--------------------------

FoldsDict
  Dictionaries needed to perform a segmented fold.

GroupsDict
  Dictionaries needed to perform a grouping.

drainS
  Pull all available values from the sources and push them to the sinks.
  Streams in the bundle are processed sequentially, from first to last.
  If the Sources and Sinks have different numbers of streams then we only
  evaluate the common subset.

drainP
  Pull all available values from the sources and push them to the sinks,
  in parallel. We fork a thread for each of the streams and evaluate them
  all in parallel. If the Sources and Sinks have different numbers of
  streams then we only evaluate the common subset.

consumeS
  Pull all available values from the sources and pass them to the given
  action.
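A hedged sketch of zipWith3_i, following the fromList/toList1 conventions
used in the examples below; the argument order (function first) is an
assumption:

> import Data.Repa.Flow as F
> s1 <- F.fromList 1 [1, 2, 3 :: Int]
> s2 <- F.fromList 1 [10, 20, 30 :: Int]
> s3 <- F.fromList 1 [100, 200, 300 :: Int]
> sr <- F.zipWith3_i (\a b c -> a + b + c) s1 s2 s3
> F.toList1 0 sr
[111,222,333]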
finalize_i
  Attach a finalizer to some sources. For a given source, the finalizer
  will be called the first time a consumer of that source tries to pull
  an element when no more are available. The finalizer is given the index
  of the source that ended, and will be run after any finalizers already
  attached to the source.

finalize_o
  Attach a finalizer to some sinks. For a given sink, the finalizer will
  be called the first time that sink is ejected. The finalizer is given
  the index of the sink that was ejected, and will be run after any
  finalizers already attached to the sink.

replicates_i
  Segmented replicate.

map_i
  Apply a function to all elements pulled from some sources.

map_o
  Apply a function to all elements pushed to some sinks.

zipWith_i
  Combine corresponding elements of two sources with the given function.

process_i
  Apply a generic stream process to a bundle of sources.

concat_i
  Concatenate a flow of arrays into a flow of the elements.

dup_oo, dup_io, dup_oi
  Send the same data to two consumers, as in the generic versions: dup_oo
  takes two argument sinks and yields a result sink; dup_io takes an
  argument source and an argument sink and yields a result source that
  echoes pulled elements into the sink; dup_oi is like dup_io, but with
  the arguments flipped.

connect_i
  Connect an argument source to two result sources. Pulling from either
  result source pulls from the argument source. Each result source only
  gets the elements pulled at the time, so if one side pulls all the
  elements the other side won't get any.

watch_i
  Hook a worker function to some sources, which will be passed every
  chunk that is pulled from each source. The worker is also passed the
  source index of the chunk that was pulled.

watch_o
  Hook a worker function to some sinks, which will be passed every chunk
  that is pushed to each sink. The worker is also passed the source index
  of the chunk that was pushed.

trigger_o
  Create a bundle of sinks of the given arity that pass incoming chunks
  to a worker function. This is like watch_o, except that the incoming
  chunks are discarded after they are passed to the worker function.

ignore_o
  Create a bundle of sinks of the given arity that drop all data on the
  floor. Haskell debugging thunks attached to the chunks will be
  demanded, but thunks attached to elements may not be -- depending on
  whether the chunk representation is strict in the elements.

abandon_o
  Create a bundle of sinks of the given arity that drop all data on the
  floor. As opposed to ignore_o, the sinks are non-strict in the chunks.
  Haskell debugging thunks attached to the chunks will *not* be demanded.

head_i
  Given a source index and a length, split a list of that length from the
  front of the source, yielding a new source for the remaining elements.
  We pull whole chunks from the source stream until we have at least the
  desired number of elements. The leftover elements in the final chunk
  are visible in the result Sources.

groups_i
  Scan through some sources to find runs of matching elements, and count
  the lengths of those runs.

> F.toList1 0 =<< F.groups_i =<< F.fromList 1 "waabbbblle"
[('w',1),('a',2),('b',4),('l',2),('e',1)]

groupsBy_i
  Like groups_i, but take a function to determine whether two consecutive
  values should be in the same group.

foldlS
  Fold all the elements of each stream in a bundle, one stream after the
  other, returning an array of fold results.

foldlAllS
  Fold all the elements of all streams in a bundle together, one stream
  after the other, returning the single final value.
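A hedged sketch of concat_i from above flattening a flow of arrays; the
array layouts are assumptions:

> import Data.Repa.Flow as F
> import Data.Repa.Array as A
> import Data.Repa.Array.Material
> ss  <- F.fromList 1 [A.fromList U [1, 2 :: Int], A.fromList U [3, 4]]
> ss' <- F.concat_i ss
> F.toList1 0 ss'
[1,2,3,4]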
folds_i
  Given streams of segment lengths and values, perform a segmented fold
  where segments of values of the corresponding lengths are folded
  together.

> sSegs <- F.fromList 1 [('a', 1), ('b', 2), ('c', 4), ('d', 0), ('e', 1), ('f', 5 :: Int)]
> sVals <- F.fromList 1 [10, 20, 30, 40, 50, 60, 70, 80, 90 :: Int]
> F.toList1 0 =<< F.folds_i (+) 0 sSegs sVals
[('a',10),('b',50),('c',220),('d',0),('e',80)]

  If not enough input elements are available to fold a complete segment
  then no output is produced for that segment. However, trailing zero
  length segments still produce the initial value for the fold.

> sSegs <- F.fromList 1 [('a', 1), ('b', 2), ('c', 0), ('d', 0), ('e', 0 :: Int)]
> sVals <- F.fromList 1 [10, 20, 30 :: Int]
> F.toList1 0 =<< F.folds_i (*) 1 sSegs sVals
[('a',10),('b',600),('c',1),('d',1),('e',1)]

foldGroupsBy_i
  Combination of groupsBy_i and folds_i. We determine the segment lengths
  while performing the folds. Note that SQL-like groupby aggregations can
  be performed using this function, provided the data is pre-sorted on
  the group key. For example, we can take the average of some groups of
  values:

> sKeys <- F.fromList 1 "waaaabllle"
> sVals <- F.fromList 1 [10, 20, 30, 40, 50, 60, 70, 80, 90, 100 :: Double]
> sResult <- F.map_i (\(_key, (acc, n)) -> acc / n)
>        =<< F.foldGroupsBy_i (==) (\x (acc, n) -> (acc + x, n + 1)) (0, 0) sKeys sVals
> F.toList1 0 sResult
[10.0,35.0,60.0,80.0,100.0]

  Parameters, for folds_i: the worker function, the initial state when
  folding each segment, the source of segment lengths and values, and the
  input elements to fold. For groupsBy_i: a function to check if
  consecutive elements are in the same group, and the input elements;
  yields the starting element and length of each group. For
  foldGroupsBy_i: a function to check if consecutive elements are in the
  same group, the worker function for the fold, the initial state when
  folding each segment, the names that determine groups, and the values
  to fold.


Data.Repa.Flow.Auto.Debug
-------------------------

more
  Given a source index and a length, pull enough chunks from the source
  to build a list of the requested length, and discard the remaining
  elements in the final chunk. This function is intended for interactive
  debugging. If you want to retain the rest of the final chunk then use
  head_i instead.

more'
  Like more, but also specify how many elements you want.

moret
  Like more, but print results in a tabular form to the console.

moret'
  Like more', but print results in tabular form to the console.

morer
  Like more, but show elements in their raw format.

morer'
  Like more', but show elements in their raw format.

  Each of these takes the index of the source in the bundle, and the
  bundle of sources.
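A hedged sketch of interactive inspection with more, reusing the sources
from the examples above and assuming more returns the pulled list:

> import Data.Repa.Flow as F
> ss <- F.fromList 1 "waaaabllle"
> F.more 0 ss
"waaaabllle"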
Data.Repa.Flow.Auto.SizedIO / Data.Repa.Flow.Auto.IO
----------------------------------------------------

The sized module takes explicit chunk lengths; the unsized variants are
the same, but use the default chunk size (and, for sourceRecords and
sourceLines, a default error action).

defaultChunkSize
  The default chunk size of 64kB.

sourceBytes
  Read data from some files, using the given chunk length.

sourceChars
  Read 8-bit ASCII characters from some files, using the given chunk
  length.

sourceRecords
  Read complete records of data from a file, into chunks of the given
  length. We read as many complete records as will fit into each chunk.
  The records are separated by a special terminating character, which the
  given predicate detects. After reading a chunk of data we seek the file
  to just after the last complete record that was read, so we can
  continue to read more complete records next time. If we cannot fit at
  least one complete record in the chunk then perform the given failure
  action. Limiting the chunk length guards against the case where a large
  input file is malformed, as we won't try to read the whole file into
  memory.

  Data is read into foreign memory without copying it through the GHC
  heap. The provided file handle must support seeking, else you'll get an
  exception. Each file is closed the first time the consumer tries to
  pull a record from the associated stream when no more are available.

sourceLines
  Read complete lines of data from a text file, using the given chunk
  length. We read as many complete lines as will fit into each chunk. The
  trailing newline characters are discarded. Takes the chunk length, an
  action to perform if we find a line longer than the chunk length, and
  the buckets.

  Data is read into foreign memory without copying it through the GHC
  heap. The provided file handle must support seeking, else you'll get an
  exception. Each file is closed the first time the consumer tries to
  pull a line from the associated stream when no more are available.

sourceCSV
  Read a file containing Comma-Separated-Values. Takes the chunk size in
  bytes, an action to perform if we can't get a whole record, and the
  buckets.

sourceTSV
  Read a file containing Tab-Separated-Values. Takes the chunk size in
  bytes, a predicate to detect the end of a record, an action to perform
  if we can't get a whole record, and the file handles.

sourceFormatLn
  Read the lines of a text file, converting each line to values with the
  given format. Takes the chunk length, an action when a line is too
  long, an action if we can't convert a value, the format of each line,
  and the source buckets.

sinkBytes
  Write 8-bit bytes to some files.

sinkChars
  Write 8-bit ASCII characters to some files.

sinkLines
  Write vectors of text lines to the given file handles.

sinkFormatLn
  Create sinks that convert values to some format and write them to
  buckets. Takes the binary format for each value, an action when a value
  cannot be serialized, and the output buckets.

> import Data.Repa.Flow as F
> import Data.Repa.Convert.Format as F
> :{
  do   let format = FixString ASCII 10 :*: Float64be :*: Int16be
       let vals   = listFormat format
                    [ "red"   :*: 5.3  :*: 100
                    , "green" :*: 2.8  :*: 93
                    , "blue"  :*: 0.99 :*: 42 ]
       ss  <- F.fromList 1 vals
       out <- toFiles' ["colors.bin"] $ sinkFormatLn format (error "convert failed")
       drainS ss out
  :}


Data.Repa.Flow.Auto.Format
--------------------------

packFormat_i
  Pack elements into the given storage format. Parameters: the
  destination format for the data, and the sources of values to be
  packed; yields sources of packed data.

flatPackFormat_i
  Like packFormat_i, but return sources of flat bytes.

packFormatLn_i
  Like packFormat_i, but also append a newline character after every
  packed element.

flatPackFormatLn_i
  Like packFormatLn_i, but return sources of flat bytes.

packAsciiLn_i
  Like packFormatLn_i, but use a default, human-readable format to encode
  the values.

flatPackAsciiLn_i
  Like packAsciiLn_i, but return sources of flat bytes.
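A hedged sketch of packing values with packFormatLn_i; the format and the
values are illustrative:

> import Data.Repa.Flow as F
> import Data.Repa.Convert.Format as F
> let format = FixString ASCII 5 :*: Int16be
> ss <- F.fromList 1 (listFormat format ["alpha" :*: 1, "beta " :*: 2])
> -- packed arrays of bytes, one per input value, each ending in a newline
> bs <- F.packFormatLn_i format ss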