repa-flow-4.1.0.1: API documentation

Modules:
  Data.Repa.Flow
  Data.Repa.Flow.Auto.Debug
  Data.Repa.Flow.Auto.IO
  Data.Repa.Flow.Auto.SizedIO
  Data.Repa.Flow.Chunked
  Data.Repa.Flow.Chunked.Base
  Data.Repa.Flow.Chunked.Fold
  Data.Repa.Flow.Chunked.Folds
  Data.Repa.Flow.Chunked.Groups
  Data.Repa.Flow.Chunked.IO
  Data.Repa.Flow.Chunked.Map
  Data.Repa.Flow.Chunked.Operator
  Data.Repa.Flow.Generic
  Data.Repa.Flow.Generic.Array.Chunk
  Data.Repa.Flow.Generic.Array.Distribute
  Data.Repa.Flow.Generic.Array.Shuffle
  Data.Repa.Flow.Generic.Array.Unchunk
  Data.Repa.Flow.Generic.Base
  Data.Repa.Flow.Generic.Connect
  Data.Repa.Flow.Generic.Debug
  Data.Repa.Flow.Generic.Eval
  Data.Repa.Flow.Generic.IO
  Data.Repa.Flow.Generic.IO.Base
  Data.Repa.Flow.Generic.IO.Sieve
  Data.Repa.Flow.Generic.IO.XSV
  Data.Repa.Flow.Generic.List
  Data.Repa.Flow.Generic.Map
  Data.Repa.Flow.Generic.Operator
  Data.Repa.Flow.IO.Binary
  Data.Repa.Flow.IO.Bucket
  Data.Repa.Flow.IO.Storable
  Data.Repa.Flow.Simple
  Data.Repa.Flow.Simple.Base
  Data.Repa.Flow.Simple.IO
  Data.Repa.Flow.Simple.List
  Data.Repa.Flow.Simple.Operator
  Data.Repa.Flow.States


Data.Repa.Flow.States

  Refs
    A collection of mutable references.

  extent
    Get the extent of the collection.

  newRefs
    Allocate a new state of the given arity, also returning an index to
    the first element of the collection.

  writeRefs
    Write an element of the state.

  readRefs
    Read an element of the state.

  first
    Get the zero for this index type.

  next
    Given an index and an arity, get the next index after this one, or
    Nothing if there aren't any more.

  check
    Check if an index is valid for this arity.

  foldRefsM
    Fold all the elements in a collection of refs.

  Instances are provided for tuple indices, integer indices and unit
  indices.


Data.Repa.Flow.Generic.Base

  Sinks
    A bundle of stream sinks, indexed by a value of type i, in some
    monad m, accepting elements of type e. Elements can be pushed to
    each stream in the bundle individually.

      sinksArity   Number of streams in the bundle.
      sinksPush    Push an element to one of the streams in the bundle.
      sinksEject   Signal that no more elements will ever be available
                   for this sink. It is ok to eject the same stream
                   multiple times.

  Sources
    A bundle of stream sources, indexed by a value of type i, in some
    monad m, producing elements of type e. Elements can be pulled from
    each stream in the bundle individually.

      sourcesArity  Number of sources in this bundle.
      sourcesPull   Function to pull data from a bundle. Give it the
                    index of the desired stream, a continuation that
                    accepts an element, and a continuation to invoke
                    when no more elements will ever be available.

  mapIndex_i
    Transform the stream indexes of a bundle of sources. The given
    transform functions should be inverses of each other, else you'll
    get a confusing result.

  mapIndex_o
    Transform the stream indexes of a bundle of sinks. The given
    transform functions should be inverses of each other, else you'll
    get a confusing result.

  flipIndex2_i
    For a bundle of sources with a 2-d stream index, flip the
    components of the index.

  flipIndex2_o
    For a bundle of sinks with a 2-d stream index, flip the components
    of the index.

  finalize_i
    Attach a finalizer to a bundle of sources. For each stream in the
    bundle, the finalizer will be called the first time a consumer of
    that stream tries to pull an element when no more are available.
    The provided finalizer will be run after any finalizers already
    attached to the source.

  finalize_o
    Attach a finalizer to a bundle of sinks. For each stream in the
    bundle, the finalizer will be called the first time that stream is
    ejected. The provided finalizer will be run after any finalizers
    already attached to the sink.


Data.Repa.Flow.Generic.Connect

  dup_oo
    Send the same data to two consumers. Given two argument sinks,
    yield a result sink. Pushing to the result sink causes the same
    element to be pushed to both argument sinks.

  dup_io
    Send the same data to two consumers. Given an argument source and
    argument sink, yield a result source. Pulling an element from the
    result source pulls from the argument source, and pushes that
    element to the sink, as well as returning it via the result source.

  dup_oi
    Send the same data to two consumers. Like dup_io but with the
    arguments flipped.

  connect_i
    Connect an argument source to two result sources. Pulling from
    either result source pulls from the argument source. Each result
    source only gets the elements pulled at the time, so if one side
    pulls all the elements the other side won't get any.
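    A minimal GHCi sketch of this behaviour, assuming connect_i has the
    type Sources i m a -> m (Sources i m a, Sources i m a); the
    transcript is constructed to illustrate the semantics above, not
    captured from a real session:

    > import Data.Repa.Flow.Generic
    > (s1, s2) <- connect_i =<< fromList (1 :: Int) [1, 2, 3, 4 :: Int]
    > toList1 0 s1      -- pulling everything from one side...
    [1,2,3,4]
    > toList1 0 s2      -- ...leaves nothing for the other
    []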
  funnel_i
    Given a bundle of sources containing several streams, produce a new
    bundle containing a single stream that gets data from the former.
    Streams from the source are consumed in their natural order, and a
    complete stream is consumed before moving onto the next one.

    > import Data.Repa.Flow.Generic
    > toList1 () =<< funnel_i =<< fromList (3 :: Int) [11, 22, 33]
    [11,22,33,11,22,33,11,22,33]

  funnel_o
    Given a bundle of sinks consisting of a single stream, produce a
    new bundle of the given arity that sends all data to the former,
    ignoring the stream index. The argument stream is ejected only when
    all of the streams in the result bundle have been ejected.

    Using this function in conjunction with parallel operators like
    drainP introduces non-determinism. Elements pushed to different
    streams in the result bundle could enter the single stream in the
    argument bundle in any order.

    > import Data.Repa.Flow.Generic
    > import Data.Repa.Array.Material
    > import Data.Repa.Nice
    > let things = [(0 :: Int, "foo"), (1, "bar"), (2, "baz")]
    > result <- capture_o B () (\k -> funnel_o 4 k >>= pushList things)
    > nice result
    [((),"foo"),((),"bar"),((),"baz")]


Data.Repa.Flow.Generic.List

  fromList
    Given an arity and a list of elements, yield sources that each
    produce all the elements.

  toList1
    Drain a single source into a list.

  takeList1
    Drain the given number of elements from a single source into a
    list.

  pushList
    Push elements into the associated streams of a bundle of sinks.

  pushList1
    Push the elements of a list into the given stream of a bundle of
    sinks.


Data.Repa.Flow.Generic.Map

  map_i
    Apply a function to every element pulled from some sources,
    producing some new sources.

  smap_i
    Like map_i, but the worker function is also given the stream index.

  map_o
    Apply a function to every element pushed to some sinks, producing
    some new sinks.

  smap_o
    Like map_o, but the worker function is also given the stream index.

  szipWith_ii
    Combine the elements of two flows with the given function. The
    worker function is also given the stream index.

  szipWith_io
    Like szipWith_ii, but take a bundle of Sinks for the result
    elements, and yield a bundle of Sinks to accept the b elements.

  szipWith_oi
    Like szipWith_ii, but take a bundle of Sinks for the result
    elements, and yield a bundle of Sinks to accept the a elements.
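    A sketch of the zip operators above in the style of the other
    transcripts; it assumes szipWith_ii takes the worker before the two
    argument bundles, with the stream index as the worker's first
    argument (as the entry states), and is illustrative rather than a
    verified session:

    > import Data.Repa.Flow.Generic
    > s1 <- fromList (1 :: Int) [1, 2, 3 :: Int]
    > s2 <- fromList (1 :: Int) [10, 20, 30 :: Int]
    > toList1 0 =<< szipWith_ii (\_ix x y -> x + y) s1 s2
    [11,22,33]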
Data.Repa.Flow.Generic.Eval

  drainS
    Pull all available values from the sources and push them to the
    sinks. Streams in the bundle are processed sequentially, from first
    to last. If the Sources and Sinks have different numbers of streams
    then we only evaluate the common subset.

  drainP
    Pull all available values from the sources and push them to the
    sinks, in parallel. We fork a thread for each of the streams and
    evaluate them all in parallel. If the Sources and Sinks have
    different numbers of streams then we only evaluate the common
    subset.


Data.Repa.Flow.Generic.Operator

  project_i
    Project out a single stream source from a bundle.

  project_o
    Project out a single stream sink from a bundle.

  repeat_i
    Yield sources that always produce the same value.

  replicate_i
    Yield sources of the given length that always produce the same
    value.

  prepend_i
    Prepend some more elements onto the front of some sources.

  prependOn_i
    Like prepend_i but only prepend the elements to the streams that
    match the given predicate.

  head_i
    Split the given number of elements from the head of a source,
    returning those elements in a list and yielding a new source for
    the rest.

  groups_i
    From a stream of values which has consecutive runs of identical
    values, produce a stream of the lengths of these runs.
    Example: groups [4, 4, 4, 3, 3, 1, 1, 1, 4] = [3, 2, 3, 1]

  pack_i
    Given a stream of flags and a stream of values, produce a new
    stream of values where the corresponding flag was True. The length
    of the result is the length of the shorter of the two inputs.

  folds_i
    Segmented fold.

  watch_i
    Apply a monadic function to every element pulled from some sources,
    producing some new sources.

  watch_o
    Pass elements to the provided action as they are pushed into the
    sink.

  capture_o
    Create a bundle of sinks of the given arity and capture any data
    pushed to it.

    > import Data.Repa.Flow.Generic
    > import Data.Repa.Array.Material
    > import Data.Repa.Nice
    > import Control.Monad
    > liftM nice $ capture_o B 4 (\k -> pushList [(0 :: Int, "foo"), (1, "bar"), (0, "baz")] k)
    [(0,"foo"),(1,"bar"),(0,"baz")]

    Parameters:
      - Name of destination layout.
      - Arity of result bundle.
      - Function to push data into the sinks.

  rcapture_o
    Like capture_o but also return the result of the push function.
    (Same parameters as capture_o.)

  trigger_o
    Like watch_o but doesn't pass elements to another sink.

  discard_o
    A sink that drops all data on the floor. This sink is strict in the
    elements, so they are demanded before being discarded. Haskell
    debugging thunks attached to the elements will be demanded.

  ignore_o
    A sink that ignores all incoming data. This sink is non-strict in
    the elements. Haskell tracing thunks attached to the elements will
    *not* be demanded.

  trace_o
    Use the trace function from Debug.Trace to print each element that
    is pushed to a sink. This function is intended for debugging only,
    and is not intended for production code.


Data.Repa.Flow.Generic.Array.Distribute

  distribute_o
    Given a bundle of sinks indexed by an Int, produce a result sink
    for arrays. Each time an array is pushed to the result sink, its
    elements are pushed to the corresponding streams of the argument
    sink. If there are more elements than sinks then give them to the
    spill action.

      |          ..           |
      | [w0, x0, y0, z0]      |    :: Sinks () IO (Array l a)
      | [w1, x1, y1, z1, u1]  |    (sink for a single stream of arrays)
      |          ..           |
         |    |    |    |  \
         v    v    v    v   '-----> spilled
      | .. | .. | .. | .. |
      | w0 | x0 | y0 | z0 |        :: Sinks Int IO a
      | w1 | x1 | y1 | z1 |        (sink for several streams of elements)
      | .. | .. | .. | .. |

    Parameters:
      - Spill action, given the spilled element along with its index in
        the array.
      - Sinks to push elements into.

  ddistribute_o
    Like distribute_o, but drop spilled elements on the floor.

  distribute2_o
    Like distribute_o, but with 2-d stream indexes. Given the argument
    and result sinks, when pushing to the result the stream index is
    used as the first component for the argument sink, and the index of
    the element in its array is used as the second component. If you
    want the components of the stream index the other way around then
    apply flipIndex2_o to the argument sinks.
    (Same parameters as distribute_o.)

  ddistribute2_o
    Like distribute2_o, but drop spilled elements on the floor.
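    A hypothetical transcript for distribute_o, modelled on the
    shuffle_o bucket-sort example below; it assumes distribute_o takes
    the spill action before the element sinks and yields a
    unit-indexed sink for whole arrays, so every detail of the wiring
    here is an assumption rather than a verified session:

    > import Data.Repa.Flow.Generic    as G
    > import Data.Repa.Array           as A
    > import Data.Repa.Array.Material  as A
    > import Data.Repa.Nice
    > let arr1 = A.fromList B "abcd"
    > let arr2 = A.fromList B "ABCDE"
    > result :: Array B (Int, Char)
          <- capture_o B 4 (\k -> distribute_o (\_ix _c -> return ()) k
                               >>= pushList1 () [arr1, arr2])
    > nice result
    [(0,'a'),(1,'b'),(2,'c'),(3,'d'),(0,'A'),(1,'B'),(2,'C'),(3,'D')]

    The fifth element 'E' of the second array has no matching stream in
    the four argument sinks, so it goes to the (here, no-op) spill
    action.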
Data.Repa.Flow.Generic.Array.Shuffle

  shuffle_o
    Given a bundle of argument sinks, produce a result sink. Arrays of
    indices and elements are pushed to the result sink. On doing so,
    the elements are pushed into the corresponding streams of the
    argument sinks. If the index associated with an element does not
    have a corresponding stream in the argument sinks, then pass it to
    the provided spill function.

      |                      ..                       |
      | [(0, v0), (1, v1), (0, v2), (0, v3), (2, v4)] |  :: Sources Int IO (Array l (Int, a))
      |                      ..                       |
              |       |              \
              v       v               '----> spilled
      |      ..      |  |  ..  |
      | [v0, v2, v3] |  | [v1] |         :: Sinks Int IO (Array l a)
      |      ..      |  |  ..  |

    The following example uses capture_o to demonstrate how the
    shuffle_o operator can be used as one step of a bucket-sort. We
    start with two arrays of key-value pairs. In the result, the values
    from each block that had the same key are packed into the same
    tuple (bucket).

    > import Data.Repa.Flow.Generic    as G
    > import Data.Repa.Array           as A
    > import Data.Repa.Array.Material  as A
    > import Data.Repa.Nice
    > let arr1 = A.fromList B [(0, 'a'), (1, 'b'), (2, 'c'), (0, 'd'), (0, 'c')]
    > let arr2 = A.fromList B [(0, 'A'), (3, 'B'), (3, 'C')]
    > result :: Array B (Int, Array U Char)
          <- capture_o B 4 (\k -> shuffle_o B (error "spilled") k
                               >>= pushList1 () [arr1, arr2])
    > nice result
    [(0,"adc"),(1,"b"),(2,"c"),(0,"A"),(3,"BC")]

    Parameters:
      - Name of source layout.
      - Handle spilled elements.
      - Sinks to push results to.

  dshuffle_o
    Like shuffle_o, but drop spilled elements on the floor.

    Parameters:
      - Name of source layout.
      - Sinks to push results to.

  dshuffleBy_o
    Like dshuffle_o, but use the given function to decide which stream
    of the argument bundle each element should be pushed into.

    > import Data.Repa.Flow.Generic    as G
    > import Data.Repa.Array           as A
    > import Data.Repa.Array.Material  as A
    > import Data.Repa.Nice
    > import Data.Char
    > let arr1 = A.fromList B "FooBAr"
    > let arr2 = A.fromList B "BazLIKE"
    > result :: Array B (Int, Array U Char)
          <- capture_o B 2 (\k -> dshuffleBy_o B (\x -> if isUpper x then 0 else 1) k
                               >>= pushList1 () [arr1, arr2])
    > nice result
    [(0,"FBA"),(1,"oor"),(0,"BLIKE"),(1,"az")]

    Parameters:
      - Name of source layout.
      - Get the stream number for an element.
      - Sinks to push results to.


Data.Repa.Flow.Generic.Array.Chunk

  chunk_i
    Take elements from a flow and pack them into chunks of the given
    maximum length.

    Parameters:
      - Layout for result chunks.
      - Maximum chunk length.
      - Element sources.
    Returns: Chunk sources.


Data.Repa.Flow.Generic.Array.Unchunk

  unchunk_i
    Take a flow of chunks and flatten it into a flow of the individual
    elements.

    Parameters:
      - Chunk sources.
    Returns: Element sources.
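    A hypothetical round trip through chunk_i and unchunk_i, following
    the parameter order listed above (layout, maximum chunk length,
    element sources); illustrative rather than a verified transcript:

    > import Data.Repa.Flow.Generic
    > import Data.Repa.Array.Material
    > ss <- fromList (1 :: Int) [1 .. 10 :: Int]
    > cs <- chunk_i U 4 ss                -- chunks of at most four elements
    > toList1 0 =<< unchunk_i cs
    [1,2,3,4,5,6,7,8,9,10]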
Data.Repa.Flow.Generic.IO.Sieve

  sieve_o
    Create an output sieve that writes data to an indeterminate number
    of output files. Each new element is appended to its associated
    file.

    TODO: This function keeps a maximum of 8 files open at once,
    closing and re-opening them in a least-recently-used order. Due to
    this behaviour it's fine to create thousands of separate output
    files without risking overflowing the process limit on the maximum
    number of usable file handles.

    Parameters:
      - Produce the desired file path and output record for this
        element, or Nothing if it should be discarded.


Data.Repa.Flow.IO.Bucket

  Bucket
    A bucket represents a portion of a whole data-set on disk, and
    contains a file handle that points to the next piece of data to be
    read or written. The bucket could be created from a portion of a
    single flat file, or be one file of a pre-split data set. The main
    advantage over a plain Handle is that a Bucket can represent a
    small portion of a single large file.

      bucketFilePath  Physical location of the file, if known.
      bucketStartPos  Starting position of the bucket in the file, in
                      bytes.
      bucketLength    Maximum length of the bucket, in bytes. If
                      Nothing then the length is indeterminate, which
                      is used when writing to files.
      bucketHandle    File handle for the bucket. If several buckets
                      have been created from a single file, then all
                      buckets will have handles bound to that file, but
                      they will be at different positions.

  openBucket
    Open a file as a single bucket.

  hBucket
    Wrap an existing file handle as a bucket.

  fromFiles
    Open some files as buckets and use them as Sources.
    Parameters: Files to open. / Consumer.

  fromFiles'
    Like fromFiles, but take a list of file paths.

  fromDir
    Open all the files in a directory as separate buckets. This
    operation may fail with the same exceptions as
    getDirectoryContents.

  fromSplitFile
    Open a file containing atomic records and split it into the given
    number of evenly sized buckets. The records are separated by a
    special terminating character, which the given predicate detects.
    The file is split cleanly on record boundaries, so we get a whole
    number of records in each bucket. As the records can be of varying
    size, the buckets are not guaranteed to have exactly the same
    length, in either records or bytes, though we try to give them
    approximately the same number of bytes.
    Parameters: Number of buckets. / Detect the end of a record. /
    File to open. / Consumer.

  fromSplitFileAt
    Like fromSplitFile but start at the given offset.
    Parameters: Number of buckets. / Detect the end of a record. /
    File to open. / Starting offset. / Consumer.

  advance
    Advance a file handle until we reach a byte that matches the given
    predicate, then return the final file position.

  toFiles
    Open some files for writing as individual buckets and pass them to
    the given consumer.
    Parameters: File paths. / Consumer.

  toFiles'
    Like toFiles, but take a list of file paths.

  toDir
    Create a new directory of the given name, containing the given
    number of buckets. If the directory is named somedir then the files
    are named somedir/000000, somedir/000001, somedir/000002 and so on.
    Parameters: Number of buckets to create. / Path to directory. /
    Consumer.

  toDirs'
    Given a list of directories, create those directories and open the
    given number of output files per directory. In the resulting array
    of buckets, the outer dimension indexes each directory, and the
    inner one indexes each file in its directory. For each directory
    somedir the files are named somedir/000000, somedir/000001,
    somedir/000002 and so on.
    Parameters: Number of buckets to create per directory. / Paths to
    directories. / Consumer.

  bClose
    Close a bucket, releasing the contained file handle.

  bIsOpen
    Check if the bucket is currently open.

  bAtEnd
    Check if the contained file handle is at the end of the bucket.

  bSeek
    Seek to a position within a bucket.

  bGetArray
    Get some data from a bucket.

  bPutArray
    Put some data in a bucket.
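    The from/to functions all follow the same shape: they open buckets
    and hand them to a consumer that builds sources or sinks from them,
    as the sourcePacked example near the end of this document shows. A
    sketch using the record reader documented in the next section; the
    exact reader arguments are taken from its parameter list and the
    terminator predicate is a placeholder:

    > import Data.Repa.Flow.Generic.IO
    > import Data.Repa.Flow.IO.Bucket
    > ss <- fromFiles' ["part1.dat", "part2.dat"] $ \buckets ->
               sourceRecords (64 * 1024) (== 0x0a) (error "over long record") buckets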
Data.Repa.Flow.Generic.IO

  sourceRecords
    Read complete records of data from a bucket, into chunks of the
    given length. We read as many complete records as will fit into
    each chunk. The records are separated by a special terminating
    character, which the given predicate detects. After reading a chunk
    of data we seek the bucket to just after the last complete record
    that was read, so we can continue to read more complete records
    next time. If we cannot fit at least one complete record in the
    chunk then perform the given failure action. Limiting the chunk
    length guards against the case where a large input file is
    malformed, as we won't try to read the whole file into memory.

    Data is read into foreign memory without copying it through the
    GHC heap. The provided file handle must support seeking, else
    you'll get an exception.

    Parameters: Chunk length in bytes. / Detect the end of a record. /
    Action to perform if we can't get a whole record. / Source buckets.

  sourceChunks
    Like sourceRecords, but produce all records in a single vector.
    (Same parameters as sourceRecords.)

  sourceChars
    Read 8-bit ASCII characters from some files, using the given chunk
    length. Data is read into foreign memory without copying it through
    the GHC heap. All chunks have the same size, except possibly the
    last one.
    Parameters: Chunk length in bytes. / Buckets.

  sourceBytes
    Read data from some files, using the given chunk length. Data is
    read into foreign memory without copying it through the GHC heap.
    All chunks have the same size, except possibly the last one.
    Parameters: Chunk length in bytes. / Buckets.

  sinkLines
    Write vectors of text lines to the given file handles. Data is
    copied into a new buffer to insert newlines before being written
    out.
    Parameters: Layout of chunks of lines. / Layout of lines. /
    Buckets.

  sinkChars
    Write chunks of 8-bit ASCII characters to the given file handles.
    Data is copied into a foreign buffer to truncate the characters to
    8-bits each before being written out.
    Parameters: Buckets.

  sinkBytes
    Write chunks of bytes to the given file handles. Data is written
    out directly from the provided buffer.
    Parameters: Buckets.


Data.Repa.Flow.Generic.IO.XSV

  sourceCSV
    Read a file containing Comma-Separated-Values.
    TODO: handle escaped commas.
    TODO: check CSV file standard.

  sourceTSV
    Read a file containing Tab-Separated-Values.


Data.Repa.Flow.IO.Storable

  Storable
    Class of element types that we can load and store to the file
    system.
    TODO: change to Persistable.

      Spec      Specification of how the elements are arranged in the
                file. For atomic elements the specification is the
                storage type. For fixed-length arrays of elements, the
                specification contains the element storage type as well
                as the array length.
      Rep       Representation tag used for an array of these elements.
                For atomic elements this will be the foreign
                representation, for foreign arrays that do not require
                extra copying after the data is read. For tuples of
                elements, this will be a tuple of strided arrays, so
                the elements can also be used without copying.
      sizeElem  Get the size of a single element, in bytes.
      getArray  Read an array of the given length from a bucket. If the
                bucket does not contain a whole number of elements
                remaining then Nothing.
                Parameters: Element specification. / Number of elements
                to read. / Bucket to read from.

    Instances cover two elements stored consecutively, and native
    32-bit integers.

    Table sources built on these specifications take: Specification of
    elements. / Number of elements per chunk. / Buckets of table.


Data.Repa.Flow.Chunked.Base

  Flow
    Shorthand for common type classes.

  Sinks
    A bundle of sinks, where the elements are chunked into arrays.

  Sources
    A bundle of sources, where the elements are chunked into arrays.

  fromList
    Given an arity and a list of elements, yield sources that each
    produce all the elements. All elements are stuffed into a single
    chunk, and each stream is given the same chunk.

  fromLists
    Like fromList but take a list of lists, where each of the inner
    lists is packed into a single chunk.

  toList1
    Drain a single source into a list of elements.

  toLists1
    Drain a single source into a list of chunks.

  takeList1
    Split the given number of elements from the head of a source,
    returning those elements in a list, and yielding a new source for
    the rest. We pull whole chunks from the source stream until we have
    at least the desired number of elements. The leftover elements in
    the final chunk are visible in the result Sources.

  finalize_i
    Attach a finalizer to a bundle of sources. For each stream in the
    bundle, the finalizer will be called the first time a consumer of
    that stream tries to pull an element when no more are available.
    The provided finalizer will be run after any finalizers already
    attached to the source.

  finalize_o
    Attach a finalizer to a bundle of sinks. The finalizer will be
    called the first time the stream is ejected. The provided finalizer
    will be run after any finalizers already attached to the sink.
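    A sketch of the chunk-level list helpers, assuming the same
    arity-first convention as the generic versions and that toLists1
    returns the chunks as plain lists; the transcript is constructed,
    not captured:

    > import Data.Repa.Flow.Chunked
    > ss <- fromLists 1 [[1, 2], [3, 4, 5 :: Int]]
    > toLists1 0 ss
    [[1,2],[3,4,5]]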
Data.Repa.Flow.Chunked.IO

  sourceRecords
    Like fileSourceRecords, but taking an existing file handle.
    Parameters: Size of chunk to read in bytes. / Detect the end of a
    record. / Action to perform if we can't get a whole record. / File
    handles.

  sourceChars
    Read 8-bit ASCII characters from some files, using the given chunk
    length.

  sourceBytes
    Read data from some files, using the given chunk length.

  sinkChars
    Write 8-bit ASCII characters to the given file handles.

  sinkBytes
    Write chunks of data to the given file handles.


Data.Repa.Flow.Chunked.Map

  map_i
    Map a function over elements pulled from a source.

  map_o
    Map a function over elements pushed into a sink.

  A zipWith variant is also provided: combine the elements of two flows
  with the given function.


Data.Repa.Flow.Chunked.Fold

  foldlS
    Fold all elements of all streams in a bundle individually,
    returning an array of per-stream results.
    Parameters: Destination layout. / Combining function. / Starting
    value for fold. / Input elements to fold.

  foldlAllS
    Fold all elements of all streams in a bundle together, one stream
    after the other, returning the single final value.
    Parameters: Combining function. / Starting value for fold. / Input
    elements to fold.


Data.Repa.Flow.Chunked.Folds

  FoldsDict
    Dictionaries needed to perform a segmented fold.

  folds_i
    Segmented fold over vectors of segment lengths and input values.
    Parameters: Layout for group names. / Layout for fold results. /
    Worker function. / Initial state when folding each segment. /
    Segment lengths. / Input elements to fold.
    Returns: Result elements.


Data.Repa.Flow.Chunked.Groups

  GroupsDict
    Dictionaries needed to perform a grouping.

  groupsBy_i
    From a stream of values which has consecutive runs of identical
    values, produce a stream of the lengths of these runs.

      groupsBy (==) [4, 4, 4, 3, 3, 1, 1, 1, 4] => [3, 2, 3, 1]

    Parameters: Layout for group names. / Layout for group lengths. /
    Whether successive elements should be grouped. / Source values.


Data.Repa.Flow.Chunked.Operator

  watch_i
    Hook a monadic function to some sources, which will be passed every
    chunk that is pulled from the result.

  watch_o
    Hook a monadic function to some sinks, which will be passed every
    chunk that is pushed to the result.

  trigger_o
    Like watch_o but discard the incoming chunks after they are passed
    to the function.

  ignore_o
    A sink that ignores all incoming data. This sink is non-strict in
    the chunks. Haskell tracing thunks attached to the chunks will
    *not* be demanded.

  discard_o
    Yield a bundle of sinks of the given arity that drops all data on
    the floor. The sinks are strict in the *chunks*, so they are
    demanded before being discarded. Haskell debugging thunks attached
    to the chunks will be demanded, but thunks attached to elements may
    not be, depending on whether the chunk representation is strict in
    the elements.


Data.Repa.Flow.Generic.Debug

  more
    Given a source index and a length, pull enough chunks from the
    source to build a list of the requested length, and discard the
    remaining elements in the final chunk. This function is intended
    for interactive debugging. If you want to retain the rest of the
    final chunk then use takeList1.
    Parameters: Index of source in bundle. / Bundle of sources.

  more'
    Like more, but also specify how many elements you want.

  moret
    Like more, but print results in a tabular form to the console.
    Parameters: Index of source in bundle. / Bundle of sources.

  moret'
    Like more', but print results in tabular form to the console.

  morer
    Like more, but show elements in their raw format.
    Parameters: Index of source in bundle. / Bundle of sources.

  morer'
    Like more', but show elements in their raw format.
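    A hypothetical interactive session with the debugging helpers; the
    argument order shown (stream index, element count, then the bundle)
    is an assumption layered on the parameter list above:

    > import Data.Repa.Flow.Chunked
    > import Data.Repa.Flow.Generic.Debug
    > ss <- fromList 1 [1 .. 100 :: Int]
    > more' 0 5 ss      -- hypothetical: the first five elements of stream 0
    [1,2,3,4,5]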
Data.Repa.Flow.Simple.Base

  Sink
    Sink consisting of a single stream.

  Source
    Source consisting of a single stream.

  finalize_i
    Attach a finalizer to a source. The finalizer will be called the
    first time a consumer of that stream tries to pull an element when
    no more are available. The provided finalizer will be run after any
    finalizers already attached to the source.

  finalize_o
    Attach a finalizer to a sink. The finalizer will be called the
    first time the stream is ejected. The provided finalizer will be
    run after any finalizers already attached to the sink.


Data.Repa.Flow.Simple.IO

  sourceRecords
    Read complete records of data from a file, using the given chunk
    length. The records are separated by a special terminating
    character, which the given predicate detects. After reading a chunk
    of data we seek to just after the last complete record that was
    read, so we can continue to read more complete records next time.
    If we cannot find an end-of-record terminator in the chunk then
    apply the given failure action. The records can be no longer than
    the chunk length. This fact guards against the case where a large
    input file is malformed and contains no end-of-record terminators,
    as we won't try to read the whole file into memory.

    Data is read into foreign memory without copying it through the GHC
    heap. All chunks have the same size, except possibly the last one.
    The provided file handle must support seeking, else you'll get an
    exception. The file will be closed the first time the consumer
    tries to pull an element from the associated stream when no more
    are available.

    Parameters: Size of chunk to read in bytes. / Detect the end of a
    record. / Action to perform if we can't get a whole record. / File
    handle.

  sourceBytes
    Read data from a file, using the given chunk length. Data is read
    into foreign memory without copying it through the GHC heap. All
    chunks have the same size, except possibly the last one. The file
    will be closed the first time the consumer tries to pull an element
    from the associated stream when no more are available.

  sinkBytes
    Write chunks of data to the given files. The file will be closed
    when the associated stream is ejected.


Data.Repa.Flow.Simple.List

  fromList
    Given a list of elements, yield a source that produces all the
    elements.

  toList
    Drain a source into a list.

  takeList
    Drain the given number of elements from a single source into a
    list.
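    A sketch of the single-stream wrappers, assuming the list helpers
    here drop the stream index and arity arguments of their generic
    counterparts; illustrative rather than a verified transcript:

    > import Data.Repa.Flow.Simple
    > toList =<< map_i (+ 1) =<< fromList [1, 2, 3 :: Int]
    [2,3,4]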
Data.Repa.Flow.Simple.Operator

  repeat_i
    Yield a source that always produces the same value.

  replicate_i
    Yield a source of the given length that always produces the same
    value.

  prepend_i
    Prepend some more elements to the front of a source.

  map_i
    Apply a function to every element pulled from some source,
    producing a new source.

  map_o
    Apply a function to every element pushed to some sink, producing a
    new sink.

  dup_oo
    Send the same data to two consumers. Given two argument sinks,
    yield a result sink. Pushing to the result sink causes the same
    element to be pushed to both argument sinks.

  dup_io
    Send the same data to two consumers. Given an argument source and
    argument sink, yield a result source. Pulling an element from the
    result source pulls from the argument source, and pushes that
    element to the sink, as well as returning it via the result source.

  dup_oi
    Send the same data to two consumers. Like dup_io but with the
    arguments flipped.

  connect_i
    Connect an argument source to two result sources. Pulling from
    either result source pulls from the argument source. Each result
    source only gets the elements pulled at the time, so if one side
    pulls all the elements the other side won't get any.

  head_i
    Split the given number of elements from the head of a source,
    returning those elements in a list, and yielding a new source for
    the rest.

  peek_i
    Peek at the given number of elements in the stream, returning a
    result stream that still produces them all.

  groups_i
    From a stream of values which has consecutive runs of identical
    values, produce a stream of the lengths of these runs.
    Example: groups [4, 4, 4, 3, 3, 1, 1, 1, 4] = [3, 2, 3, 1]

  pack_i
    Given a stream of flags and a stream of values, produce a new
    stream of values where the corresponding flag was True. The length
    of the result is the length of the shorter of the two inputs.

  folds_i
    Segmented fold.

  watch_i
    Apply a monadic function to every element pulled from a source,
    producing a new source.

  watch_o
    Pass elements to the provided action as they are pushed to the
    sink.

  trigger_o
    Like watch_o but doesn't pass elements to another sink.

  discard_o
    A sink that drops all data on the floor. This sink is strict in the
    elements, so they are demanded before being discarded. Haskell
    debugging thunks attached to the elements will be demanded.

  ignore_o
    A sink that ignores all incoming elements. This sink is non-strict
    in the elements. Haskell tracing thunks attached to the elements
    will *not* be demanded.


Data.Repa.Flow.Simple (Eval)

  drainS
    Pull all available values from the source and push them to the
    sink.


Data.Repa.Flow.Chunked (Eval)

  drainS
    Pull all available values from the sources and push them to the
    sinks.


Data.Repa.Flow

  FoldsDict
    Dictionaries needed to perform a segmented fold.

  GroupsDict
    Dictionaries needed to perform a grouping.

  Flow
    Shorthand for common type classes.

  Sinks
    A bundle of stream sinks, where the elements of the stream are
    chunked into arrays.

  Sources
    A bundle of stream sources, where the elements of the stream are
    chunked into arrays.

  sourcesArity, sinksArity
    Yield the number of streams in the bundle.

  drainS
    Pull all available values from the sources and push them to the
    sinks. Streams in the bundle are processed sequentially, from first
    to last. If the Sources and Sinks have different numbers of streams
    then we only evaluate the common subset.

  drainP
    Pull all available values from the sources and push them to the
    sinks, in parallel. We fork a thread for each of the streams and
    evaluate them all in parallel. If the Sources and Sinks have
    different numbers of streams then we only evaluate the common
    subset.

  fromList
    Given an arity and a list of elements, yield sources that each
    produce all the elements. All elements are stuffed into a single
    chunk, and each stream is given the same chunk.

  fromLists
    Like fromList but take a list of lists. Each of the inner lists is
    packed into a single chunk.

  toList1
    Drain a single source from a bundle into a list of elements.

  toLists1
    Drain a single source from a bundle into a list of chunks.

  finalize_i
    Attach a finalizer to some sources. For a given source, the
    finalizer will be called the first time a consumer of that source
    tries to pull an element when no more are available. The finalizer
    is given the index of the source that ended. The finalizer will be
    run after any finalizers already attached to the source.

  finalize_o
    Attach a finalizer to some sinks. For a given sink, the finalizer
    will be called the first time that sink is ejected. The finalizer
    is given the index of the sink that was ejected. The finalizer will
    be run after any finalizers already attached to the sink.

  map_i
    Apply a function to all elements pulled from some sources.

  map_o
    Apply a function to all elements pushed to some sinks.
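    A minimal sketch combining the list helpers with map_i, in the same
    style as the groups_i and folds_i examples below (the transcript is
    constructed, not captured):

    > import Data.Repa.Flow as F
    > F.toList1 0 =<< F.map_i (* 2) =<< F.fromList 1 [1, 2, 3 :: Int]
    [2,4,6]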
  dup_oo
    Send the same data to two consumers. Given two argument sinks,
    yield a result sink. Pushing to the result sink causes the same
    element to be pushed to both argument sinks.

  dup_io
    Send the same data to two consumers. Given an argument source and
    argument sink, yield a result source. Pulling an element from the
    result source pulls from the argument source, and pushes that
    element to the sink, as well as returning it via the result source.

  dup_oi
    Send the same data to two consumers. Like dup_io but with the
    arguments flipped.

  connect_i
    Connect an argument source to two result sources. Pulling from
    either result source pulls from the argument source. Each result
    source only gets the elements pulled at the time, so if one side
    pulls all the elements the other side won't get any.

  watch_i
    Hook a worker function to some sources, which will be passed every
    chunk that is pulled from each source. The worker is also passed
    the source index of the chunk that was pulled.

  watch_o
    Hook a worker function to some sinks, which will be passed every
    chunk that is pushed to each sink. The worker is also passed the
    source index of the chunk that was pushed.

  trigger_o
    Create a bundle of sinks of the given arity that pass incoming
    chunks to a worker function. This is like watch_o, except that the
    incoming chunks are discarded after they are passed to the worker
    function.

  discard_o
    Create a bundle of sinks of the given arity that drop all data on
    the floor. The sinks are strict in the *chunks*, so they are
    demanded before being discarded. Haskell debugging thunks attached
    to the chunks will be demanded, but thunks attached to elements may
    not be, depending on whether the chunk representation is strict in
    the elements.

  ignore_o
    Create a bundle of sinks of the given arity that drop all data on
    the floor. As opposed to discard_o the sinks are non-strict in the
    chunks. Haskell debugging thunks attached to the chunks will *not*
    be demanded.

  head_i
    Given a source index and a length, split a list of that length from
    the front of the source. Yields a new source for the remaining
    elements. We pull whole chunks from the source stream until we have
    at least the desired number of elements. The leftover elements in
    the final chunk are visible in the result Sources.

  groups_i
    Scan through some sources to find runs of matching elements, and
    count the lengths of those runs.

    > F.toList1 0 =<< F.groups_i =<< F.fromList 1 "waabbbblle"
    [('w',1),('a',2),('b',4),('l',2),('e',1)]

    Parameters: Input elements.
    Returns: Starting element and length of groups.

  groupsBy_i
    Like groups_i, but take a function to determine whether two
    consecutive values should be in the same group.
    Parameters: Fn to check if consecutive elements are in the same
    group. / Input elements.
    Returns: Starting element and length of groups.

  foldlS
    Fold all the elements of each stream in a bundle, one stream after
    the other, returning an array of fold results.
    Parameters: Combining function. / Starting value. / Input elements
    to fold.

  foldlAllS
    Fold all the elements of each stream in a bundle, one stream after
    the other, returning the single final value.
    Parameters: Combining function. / Starting value. / Input elements
    to fold.

  folds_i
    Given streams of lengths and values, perform a segmented fold where
    segments of values of the corresponding lengths are folded
    together.

    > sSegs <- F.fromList 1 [('a', 1), ('b', 2), ('c', 4), ('d', 0), ('e', 1), ('f', 5 :: Int)]
    > sVals <- F.fromList 1 [10, 20, 30, 40, 50, 60, 70, 80, 90 :: Int]
    > F.toList1 0 =<< F.folds_i (+) 0 sSegs sVals
    [('a',10),('b',50),('c',220),('d',0),('e',80)]

    If not enough input elements are available to fold a complete
    segment then no output is produced for that segment. However,
    trailing zero length segments still produce the initial value for
    the fold.

    > sSegs <- F.fromList 1 [('a', 1), ('b', 2), ('c', 0), ('d', 0), ('e', 0 :: Int)]
    > sVals <- F.fromList 1 [10, 20, 30 :: Int]
    > F.toList1 0 =<< F.folds_i (*) 1 sSegs sVals
    [('a',10),('b',600),('c',1),('d',1),('e',1)]

    Parameters: Worker function. / Initial state when folding each
    segment. / Segment lengths. / Input elements to fold.
    Returns: Result elements.

  foldGroupsBy_i
    Combination of groupsBy_i and folds_i. We determine the segment
    lengths while performing the folds. Note that SQL-like group-by
    aggregations can be performed using this function, provided the
    data is pre-sorted on the group key.
    For example, we can take the average of some groups of values:

    > sKeys <- F.fromList 1 "waaaabllle"
    > sVals <- F.fromList 1 [10, 20, 30, 40, 50, 60, 70, 80, 90, 100 :: Double]
    > sResult <- F.map_i (\(_key, (acc, n)) -> acc / n)
             =<< F.foldGroupsBy_i (==) (\x (acc, n) -> (acc + x, n + 1)) (0, 0) sKeys sVals
    > F.toList1 0 sResult
    [10.0,35.0,60.0,80.0,100.0]

    Parameters: Fn to check if consecutive elements are in the same
    group. / Worker function for the fold. / Initial state when folding
    each segment. / Names that determine groups. / Values to fold.


Data.Repa.Flow.Auto.Debug

  more
    Given a source index and a length, pull enough chunks from the
    source to build a list of the requested length, and discard the
    remaining elements in the final chunk. This function is intended
    for interactive debugging. If you want to retain the rest of the
    final chunk then use takeList1.
    Parameters: Index of source in bundle. / Bundle of sources.

  more'
    Like more, but also specify how many elements you want.

  moret
    Like more, but print results in a tabular form to the console.
    Parameters: Index of source in bundle. / Bundle of sources.

  moret'
    Like more', but print results in tabular form to the console.

  morer
    Like more, but show elements in their raw format.
    Parameters: Index of source in bundle. / Bundle of sources.

  morer'
    Like more', but show elements in their raw format.


Data.Repa.Flow.Auto.SizedIO

  sourceBytes
    Read data from some files, using the given chunk length. Data is
    read into foreign memory without copying it through the GHC heap.

  sourceChars
    Read 8-bit ASCII characters from some files, using the given chunk
    length. Data is read into foreign memory without copying it through
    the GHC heap.

  sourceRecords
    Read complete records of data from a file, into chunks of the given
    length. We read as many complete records as will fit into each
    chunk. The records are separated by a special terminating
    character, which the given predicate detects. After reading a chunk
    of data we seek the file to just after the last complete record
    that was read, so we can continue to read more complete records
    next time. If we cannot fit at least one complete record in the
    chunk then perform the given failure action. Limiting the chunk
    length guards against the case where a large input file is
    malformed, as we won't try to read the whole file into memory.

    Data is read into foreign memory without copying it through the GHC
    heap. The provided file handle must support seeking, else you'll
    get an exception. Each file is closed the first time the consumer
    tries to pull a record from the associated stream when no more are
    available.

    Parameters: Size of chunk to read in bytes. / Detect the end of a
    record. / Action to perform if we can't get a whole record. /
    Buckets.

  sourceLines
    Read complete lines of data from a text file, using the given chunk
    length. We read as many complete lines as will fit into each chunk.
    The trailing new-line characters are discarded.

    Data is read into foreign memory without copying it through the GHC
    heap. The provided file handle must support seeking, else you'll
    get an exception. Each file is closed the first time the consumer
    tries to pull a line from the associated stream when no more are
    available.

    Parameters: Chunk length. / Action to perform if we find a line
    longer than the chunk length. / Buckets.

  sourceCSV
    Read a file containing Comma-Separated-Values.
    Parameters: Chunk length. / Action to perform if we find a line
    longer than the chunk length. / Buckets.

  sourceTSV
    Read a file containing Tab-Separated-Values.
    (Same parameters as sourceCSV.)
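    A sketch of wiring the sized TSV reader to some files with
    fromFiles', in the same style as the sourcePacked example below;
    the chunk size and failure action here are placeholders:

    > import Data.Repa.Flow              as F
    > import Data.Repa.Flow.Auto.SizedIO as F
    > ss <- fromFiles' ["table.tsv"] $ F.sourceTSV (64 * 1024) (error "line too long")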
Data.Repa.Flow.Auto.IO

  defaultChunkSize
    The default chunk size of 64kBytes.

  sourceBytes, sourceChars
    Like the versions in Data.Repa.Flow.Auto.SizedIO, but with the
    default chunk size.

  sourceRecords, sourceLines
    Like the versions in Data.Repa.Flow.Auto.SizedIO, but with the
    default chunk size and error action.

  sourceCSV
    Read a file containing Comma-Separated-Values.

  sourceTSV
    Read a file containing Tab-Separated-Values.

  sourcePacked
    Read packed binary data from some buckets and unpack the values to
    some flow. The following uses the colors.bin file produced by the
    sinkPacked example:

    > import Data.Repa.Flow            as F
    > import Data.Repa.Convert.Format  as F
    > :{
      do  let format = FixString ASCII 10 :*: Float64be :*: Int16be
          ss <- fromFiles' ["colors.bin"] $ sourcePacked format (error "convert failed")
          toList1 0 ss
      :}
    ["red" :*: (5.3 :*: 100), "green" :*: (2.8 :*: 93), "blue" :*: (0.99 :*: 42)]

    Parameters: Binary format for each value. / Action when a value
    cannot be converted. / Input buckets.

  sinkBytes
    Write 8-bit bytes to some files.

  sinkChars
    Write 8-bit ASCII characters to some files.

  sinkLines
    Write vectors of text lines to the given file handles.

  sinkPacked
    Create sinks that convert values to a packed binary format and
    write them to some buckets.

    > import Data.Repa.Flow            as F
    > import Data.Repa.Convert.Format  as F
    > :{
      do  let format = FixString ASCII 10 :*: Float64be :*: Int16be
          let vals   = listFormat format
                     [ "red"   :*: 5.3  :*: 100
                     , "green" :*: 2.8  :*: 93
                     , "blue"  :*: 0.99 :*: 42 ]
          ss  <- F.fromList 1 vals
          out <- toFiles' ["colors.bin"] $ sinkPacked format (error "convert failed")
          drainS ss out
      :}

    Parameters: Binary format for each value. / Action when a value
    cannot be serialized. / Output buckets.

  toTable
    Create sinks that write values to some binary Repa table.
    Parameters: Directory holding table. / Number of buckets to use. /
    Format of data. / Action when a value cannot be serialised.

  fromTable
    Create sources that read values from some binary Repa table.
    Parameters: Directory holding table. / Format of data. / Action
    when a value cannot be deserialised.
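    A hypothetical use of the table functions, following the parameter
    lists above (directory, bucket count, format, failure action) and
    reusing the format from the sinkPacked example; the argument order
    is an assumption:

    > import Data.Repa.Flow            as F
    > import Data.Repa.Convert.Format  as F
    > let format = FixString ASCII 10 :*: Float64be :*: Int16be
    > out <- toTable "colors.table" 4 format (error "convert failed")
    > ss  <- fromTable "colors.table" format (error "convert failed")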