repa-flow-4.0.0.2: Data-parallel data flows.

Safe Haskell: None
Language: Haskell98

Data.Repa.Flow

Contents

Description

Getting Started

A flow consists of a bundle of individual streams. Here we create a bundle of two streams, using different files for each. Data will be read in chunks, using the default chunk size of 64kBytes.

> import Data.Repa.Flow
> import Data.Repa.Flow.Default.Debug
> ws <- fromFiles ["/usr/share/dict/words", "/usr/share/dict/cracklib-small"] sourceLines

Show the first few elements of the first chunk of the first file.

> more 0 ws
Just ["A","A's","AA's","AB's","ABM's","AC's","ACTH's","AI's" ...]

The more function is helpful for debugging. It pulls a whole chunk from a source, displays the requested number of elements from the front of it, then discards the rest. In production code you could use head_i to split a few elements from a stream while retaining the rest.
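
For example, a quick sketch with head_i (run in a separate session, so as not to disturb the chunks pulled below; the argument order follows the signature under Splitting further down this page):

> -- Split the first four lines from stream 0, keeping a source
> -- that produces the remaining lines.
> Just (xs, rest) <- head_i 0 4 ws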

Use more' to show more elements at a time. We've already pulled the first chunk, so here are the first 100 elements from the second chunk:

> more' 0 100 ws
Just ["Jubal","Judah","Judaic","Judaism","Judaism's","Judaisms","Judas" ...]

Use moret to display elements in tabular form. Here are the first few elements of the second stream in the bundle:

> moret 1 ws
"10th"   
"1st"    
"2nd"    
"3rd"    
"4th"    
"5th"    
...

Let's convert the characters to upper-case.

> import Data.Char
> up <- map_i B (mapS U toUpper) ws
> more 0 up
Just ["UTOPIAN","UTOPIAN'S","UTOPIANS","UTOPIAS","UTRECHT" ...]

The B and U are Layout names that indicate how the chunks for the result streams should be arranged in memory. In this case the chunks are B-oxed arrays of U-nboxed arrays of characters. Other useful layouts are F which stores data in foreign memory, and N for nested arrays.

Flows are data-parallel, which means operators like map_i apply to all streams in the bundle. The second stream has been converted to upper-case as well:

> more 1 up
Just ["BROWNER","BROWNEST","BROWNIAN","BROWNIE","BROWNIE'S" ...]

Let's write out the data to some files. There are two streams in the bundle, so open a file for each stream:

> out <- toFiles ["out1.txt", "out2.txt"] $ sinkLines B U

Note that the ws and up we used before were bundles of stream Sources, whereas out is a bundle of stream Sinks. In the map_i operator we used earlier, the _i (input) suffix indicates that it is a transformer of Sources. There is a related map_o (output) operator for Sinks.
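
For example, here is a sketch of the sink-side equivalent (not run in this session; up.txt is a placeholder name). Lines pushed to upOut are upper-cased before they reach the underlying file sink:

> out2  <- toFiles ["up.txt"] $ sinkLines B U
> upOut <- map_o B (mapS U toUpper) out2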

Now that we have a bundle of Sources, and some matching Sinks, we can drainS all of the data from the former into the latter.

> drainS up out

At this point we can run an external shell command to check the output.

> :! head out1.txt
BEARSKIN'S
BEARSKINS
BEAST
BEAST'S
BEASTLIER
BEASTLIEST
BEASTLINESS
BEASTLINESS'S
BEASTLY
BEASTLY'S

Performance

Although repa-flow can be used productively in the ghci REPL, performance won't be great because you will be running unspecialised, polymorphic code. For best results you should write a complete program and compile it with ghc -fllvm -O2 Main.hs.
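
For example, a minimal complete program along the lines of the session above (a sketch; the output file name is a placeholder):

> module Main where
> import Data.Repa.Flow
> import Data.Char
>
> main :: IO ()
> main
>  = do ws  <- fromFiles ["/usr/share/dict/words"] sourceLines
>       up  <- map_i B (mapS U toUpper) ws
>       out <- toFiles ["out.txt"] $ sinkLines B U
>       drainS up out

Compile it with ghc -fllvm -O2 Main.hs and run the resulting binary.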

Synopsis

Flow types

type Sources l a = Sources Int IO l a Source

A bundle of stream sources, where the elements of the stream are chunked into arrays.

The chunks have some Layout l and contain elements of type a. See Data.Repa.Array for the available layouts.

type Sinks l a = Sinks Int IO l a Source

A bundle of stream sinks, where the elements of the stream are chunked into arrays.

type Flow l a = Flow Int IO l a Source

Shorthand for common type classes.

sourcesArity :: Sources l a -> Int Source

Yield the number of streams in the bundle.

sinksArity :: Sinks l a -> Int Source

Yield the number of streams in the bundle.
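
For example, the ws bundle from the Getting Started section was created from two files:

> sourcesArity ws
2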

States and Arrays

Evaluation

drainS :: Sources l a -> Sinks l a -> IO () Source

Pull all available values from the sources and push them to the sinks. Streams in the bundle are processed sequentially, from first to last.

  • If the Sources and Sinks have different numbers of streams then we only evaluate the common subset.

drainP :: Sources l a -> Sinks l a -> IO () Source

Pull all available values from the sources and push them to the sinks, in parallel. We fork a thread for each of the streams and evaluate them all in parallel.

  • If the Sources and Sinks have different numbers of streams then we only evaluate the common subset.

Conversion

fromList :: TargetI l a => Name l -> Int -> [a] -> IO (Sources l a) Source

Given an arity and a list of elements, yield sources that each produce all the elements.

  • All elements are stuffed into a single chunk, and each stream is given the same chunk.
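
A small sketch: with an arity of 2, both streams produce the same elements.

> s <- fromList U 2 "abc"
> toList1 0 s
Just "abc"
> toList1 1 s
Just "abc"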

fromLists :: TargetI l a => Name l -> Int -> [[a]] -> IO (Sources l a) Source

Like fromList, but take a list of lists. Each of the inner lists is packed into a single chunk.
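
For example, each inner list becomes one chunk:

> s <- fromLists U 1 ["ab", "c"]
> toLists1 0 s
Just ["ab","c"]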

toList1 :: BulkI l a => Int -> Sources l a -> IO (Maybe [a]) Source

Drain a single source from a bundle into a list of elements.

toLists1 :: BulkI l a => Int -> Sources l a -> IO (Maybe [[a]]) Source

Drain a single source from a bundle into a list of chunks.

Finalizers

finalize_i :: (Int -> IO ()) -> Sources l a -> IO (Sources l a) Source

Attach a finalizer to some sources.

  • For a given source, the finalizer will be called the first time a consumer of that source tries to pull an element when no more are available.
  • The finalizer is given the index of the source that ended.
  • The finalizer will be run after any finalizers already attached to the source.
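
A sketch: draining s' prints the message once stream 0 is exhausted.

> s  <- fromList U 1 "abc"
> s' <- finalize_i (\i -> putStrLn $ "stream " ++ show i ++ " is done") s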

finalize_o :: (Int -> IO ()) -> Sinks l a -> IO (Sinks l a) Source

Attach a finalizer to some sinks.

  • For a given sink, the finalizer will be called the first time that sink is ejected.
  • The finalizer is given the index of the sink that was ejected.
  • The finalizer will be run after any finalizers already attached to the sink.

Flow Operators

Mapping

If you want to work on a chunk at a time then use map_i and map_o from Data.Repa.Flow.Generic.

map_i :: (Flow l1 a, TargetI l2 b) => Name l2 -> (a -> b) -> Sources l1 a -> IO (Sources l2 b) Source

Apply a function to all elements pulled from some sources.

map_o :: (Flow l1 a, TargetI l2 b) => Name l1 -> (a -> b) -> Sinks l2 b -> IO (Sinks l1 a) Source

Apply a function to all elements pushed to some sinks.

Connecting

dup_oo :: Sinks l a -> Sinks l a -> IO (Sinks l a) Source

Send the same data to two consumers.

Given two argument sinks, yield a result sink. Pushing to the result sink causes the same element to be pushed to both argument sinks.
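
A sketch (file names are placeholders): draining a source into oo writes the same lines to both files.

> o1 <- toFiles ["copy1.txt"] $ sinkLines B U
> o2 <- toFiles ["copy2.txt"] $ sinkLines B U
> oo <- dup_oo o1 o2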

dup_io :: Sources l a -> Sinks l a -> IO (Sources l a) Source

Send the same data to two consumers.

Given an argument source and argument sink, yield a result source. Pulling an element from the result source pulls from the argument source, and pushes that element to the sink, as well as returning it via the result source.
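
A sketch (assuming a single-stream bundle so the source and sink arities match; file names are placeholders). Every element pulled from s1' is also logged to tap.txt:

> s1  <- fromFiles ["words.txt"] sourceLines
> tap <- toFiles   ["tap.txt"]   $ sinkLines B U
> s1' <- dup_io s1 tap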

dup_oi :: Sinks l a -> Sources l a -> IO (Sources l a) Source

Send the same data to two consumers.

Like dup_io but with the arguments flipped.

connect_i :: Sources l a -> IO (Sources l a, Sources l a) Source

Connect an argument source to two result sources.

Pulling from either result source pulls from the argument source. Each result source only receives the elements that are pulled via it, so if one side pulls all the elements the other side won't get any.
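
A sketch: fromList produces a single chunk, so whichever result source pulls first gets everything.

> s <- fromList U 1 [1 .. 10 :: Int]
> (sA, sB) <- connect_i s
> toList1 0 sA
Just [1,2,3,4,5,6,7,8,9,10]
> toList1 0 sB
Just []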

Watching

watch_i :: (Int -> Array l a -> IO ()) -> Sources l a -> IO (Sources l a) Source

Hook a worker function to some sources, which will be passed every chunk that is pulled from each source.

  • The worker is also passed the source index of the chunk that was pulled.
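
A sketch (the file name is a placeholder): log a message each time a chunk is pulled.

> s  <- fromFiles ["words.txt"] sourceLines
> s' <- watch_i (\i _chunk -> putStrLn $ "pulled a chunk from stream " ++ show i) s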

watch_o :: (Int -> Array l a -> IO ()) -> Sinks l a -> IO (Sinks l a) Source

Hook a worker function to some sinks, which will be passed every chunk that is pushed to each sink.

  • The worker is also passed the sink index of the chunk that was pushed.

trigger_o :: Int -> (Int -> Array l a -> IO ()) -> IO (Sinks l a) Source

Create a bundle of sinks of the given arity that pass incoming chunks to a worker function.

  • This is like watch_o, except that the incoming chunks are discarded after they are passed to the worker function

Ignorance

discard_o :: Int -> IO (Sinks l a) Source

Create a bundle of sinks of the given arity that drop all data on the floor.

  • The sinks are strict in the *chunks*, so they are demanded before being discarded.
  • Haskell debugging thunks attached to the chunks will be demanded, but thunks attached to elements may not be -- depending on whether the chunk representation is strict in the elements.

ignore_o :: Int -> IO (Sinks l a) Source

Create a bundle of sinks of the given arity that drop all data on the floor.

  • As opposed to discard_o the sinks are non-strict in the chunks.
  • Haskell debugging thunks attached to the chunks will *not* be demanded.
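
A common use is to force an entire flow, for example when benchmarking, without keeping the results. A sketch (the file name is a placeholder):

> s <- fromFiles ["words.txt"] sourceLines
> o <- discard_o (sourcesArity s)
> drainS s o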

Splitting

head_i :: (Windowable l a, Index l ~ Int) => Int -> Int -> Sources l a -> IO (Maybe ([a], Sources l a)) Source

Given a source index and a length, split a list of that length from the front of the corresponding source. Yields a new source for the remaining elements.

  • We pull whole chunks from the source stream until we have at least the desired number of elements. The leftover elements in the final chunk are visible in the result Sources.

Grouping

groups_i Source

Arguments

:: (GroupsDict lVal lGrp tGrp lLen tLen a, Eq a) 
=> Name lGrp

Layout of result groups.

-> Name lLen

Layout of result lengths.

-> Sources lVal a

Input elements.

-> IO (Sources (T2 lGrp lLen) (a, Int))

Starting element and length of groups.

Scan through some sources to find runs of matching elements, and count the lengths of those runs.

> import Data.Repa.Flow
> toList1 0 =<< groups_i U U =<< fromList U 1 "waabbbblle"
Just [('w',1),('a',2),('b',4),('l',2),('e',1)]

groupsBy_i Source

Arguments

:: GroupsDict lVal lGrp tGrp lLen tLen a 
=> Name lGrp

Layout of result groups.

-> Name lLen

Layout of result lengths.

-> (a -> a -> Bool)

Fn to check if consecutive elements are in the same group.

-> Sources lVal a

Input elements.

-> IO (Sources (T2 lGrp lLen) (a, Int))

Starting element and length of groups.

Like groups_i, but take a function to determine whether two consecutive values should be in the same group.
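
A sketch in the style of the groups_i example, grouping characters by whether they are digits:

> import Data.Char
> toList1 0 =<< groupsBy_i U U (\x y -> isDigit x == isDigit y) =<< fromList U 1 "ab12cd"
Just [('a',2),('1',2),('c',2)]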

type GroupsDict lVal lGrp tGrp lLen tLen a = GroupsDict Int IO lVal lGrp tGrp lLen tLen a Source

Dictionaries needed to perform a grouping.

Folding

foldlS Source

Arguments

:: (Target lDst a, Index lDst ~ Int, BulkI lSrc b) 
=> Name lDst

Layout for result.

-> (a -> b -> a)

Combining function.

-> a

Starting value.

-> Sources lSrc b

Input elements to fold.

-> IO (Array lDst a) 

Fold all the elements of each stream in a bundle, one stream after the other, returning an array of fold results.

foldlAllS Source

Arguments

:: BulkI lSrc b 
=> (a -> b -> a)

Combining function.

-> a

Starting value.

-> Sources lSrc b

Input elements to fold.

-> IO a 

Fold all the elements of every stream in the bundle, one stream after the other, returning the final result.
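
A sketch: with an arity of 2 both streams carry the same three elements, so foldlS yields one sum per stream while foldlAllS folds them all with a single accumulator.

> s1 <- fromList U 2 [1, 2, 3 :: Int]
> foldlAllS (+) 0 s1
12

> s2  <- fromList U 2 [1, 2, 3 :: Int]
> arr <- foldlS U (+) 0 s2   -- one sum per stream: 6 and 6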

folds_i Source

Arguments

:: FoldsDict lSeg tSeg lElt tElt lGrp tGrp lRes tRes n a b 
=> Name lGrp

Layout for group names.

-> Name lRes

Layout for fold results.

-> (a -> b -> b)

Worker function.

-> b

Initial state when folding each segment.

-> Sources lSeg (n, Int)

Segment lengths.

-> Sources lElt a

Input elements to fold.

-> IO (Sources (T2 lGrp lRes) (n, b))

Result elements.

Given streams of lengths and values, perform a segmented fold where segments of values of the corresponding lengths are folded together.

> import Data.Repa.Flow
> sSegs <- fromList U 1 [('a', 1), ('b', 2), ('c', 4), ('d', 0), ('e', 1), ('f', 5 :: Int)]
> sVals <- fromList U 1 [10, 20, 30, 40, 50, 60, 70, 80, 90 :: Int]
> toList1 0 =<< folds_i U U (+) 0 sSegs sVals
Just [('a',10),('b',50),('c',220),('d',0),('e',80)]

If not enough input elements are available to fold a complete segment then no output is produced for that segment. However, trailing zero length segments still produce the initial value for the fold.

> import Data.Repa.Flow
> sSegs <- fromList U 1 [('a', 1), ('b', 2), ('c', 0), ('d', 0), ('e', 0 :: Int)]
> sVals <- fromList U 1 [10, 20, 30 :: Int]
> toList1 0 =<< folds_i U U (*) 1 sSegs sVals
Just [('a',10),('b',600),('c',1),('d',1),('e',1)]

type FoldsDict lSeg tSeg lElt tElt lGrp tGrp lRes tRes n a b = FoldsDict Int IO lSeg tSeg lElt tElt lGrp tGrp lRes tRes n a b Source

Dictionaries needed to perform a segmented fold.

foldGroupsBy_i Source

Arguments

:: FoldGroupsDict lSeg tSeg lVal tVal lGrp tGrp lRes tRes n a b 
=> Name lGrp

Layout for group names.

-> Name lRes

Layout for fold results.

-> (n -> n -> Bool)

Fn to check if consecutive elements are in the same group.

-> (a -> b -> b)

Worker function for the fold.

-> b

Initial state when folding each segment.

-> Sources lSeg n

Names that determine groups.

-> Sources lVal a

Values to fold.

-> IO (Sources (T2 lGrp lRes) (n, b)) 

Combination of groupsBy_i and folds_i. We determine the segment lengths while performing the folds.

Note that SQL-like GROUP BY aggregations can be performed with this function, provided the data is pre-sorted on the group key. For example, we can take the average of some groups of values:

> import Data.Repa.Flow
> sKeys   <-  fromList U 1 "waaaabllle"
> sVals   <-  fromList U 1 [10, 20, 30, 40, 50, 60, 70, 80, 90, 100 :: Double]

> sResult <-  map_i U (\(key, (acc, n)) -> (key, acc / n))
>         =<< foldGroupsBy_i U U (==) (\x (acc, n) -> (acc + x, n + 1)) (0, 0) sKeys sVals

> toList1 0 sResult
Just [('w',10.0),('a',35.0),('b',60.0),('l',80.0),('e',100.0)]

type FoldGroupsDict lSeg tSeg lElt tElt lGrp tGrp lRes tRes n a b = (BulkI lSeg n, Material lElt a, Index lElt ~ Int, Material lGrp n, Index lGrp ~ Int, Material lRes b, Index lRes ~ Int, Unpack (IOBuffer lGrp n) tGrp, Unpack (IOBuffer lRes b) tRes) Source

Flow I/O

defaultChunkSize :: Integer Source

The default chunk size of 64kBytes.

Buckets

Sourcing

sourceCSV :: BulkI l Bucket => Array l Bucket -> IO (Sources N (Array N (Array F Char))) Source

Read a file containing Comma-Separated-Values.

sourceTSV :: BulkI l Bucket => Array l Bucket -> IO (Sources N (Array N (Array F Char))) Source

Read a file containing Tab-Separated-Values.
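
A sketch (assuming a file data.csv exists):

> csv <- fromFiles ["data.csv"] sourceCSV
> more 0 csv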

sourceRecords Source

Arguments

:: BulkI l Bucket 
=> (Word8 -> Bool)

Detect the end of a record.

-> Array l Bucket

Buckets.

-> IO (Sources N (Array F Word8)) 

Read complete records of data from a file, into chunks of the given length. We read as many complete records as will fit into each chunk.

The records are separated by a special terminating character, which the given predicate detects. After reading a chunk of data we seek the file to just after the last complete record that was read, so we can continue to read more complete records next time.

If we cannot fit at least one complete record in the chunk then perform the given failure action. Limiting the chunk length guards against the case where a large input file is malformed, as we won't try to read the whole file into memory.

  • Data is read into foreign memory without copying it through the GHC heap.
  • The provided file handle must support seeking, else you'll get an exception.
  • Each file is closed the first time the consumer tries to pull a record from the associated stream when no more are available.
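
For example, to treat the newline byte 0x0A as the record terminator (a sketch; the file name is a placeholder):

> recs <- fromFiles ["records.dat"] (sourceRecords (== 0x0a))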

sourceLines :: BulkI l Bucket => Array l Bucket -> IO (Sources N (Array F Char)) Source

Read complete lines of data from a text file, using the given chunk length. We read as many complete lines as will fit into each chunk.

  • The trailing new-line characters are discarded.
  • Data is read into foreign memory without copying it through the GHC heap.
  • The provided file handle must support seeking, else you'll get an exception.
  • Each file is closed the first time the consumer tries to pull a line from the associated stream when no more are available.

sourceChars :: BulkI l Bucket => Array l Bucket -> IO (Sources F Char) Source

Read 8-bit ASCII characters from some files, using the given chunk length.

sourceBytes :: BulkI l Bucket => Array l Bucket -> IO (Sources F Word8) Source

Read data from some files, using the given chunk length.

Sinking

sinkChars :: BulkI l Bucket => Array l Bucket -> IO (Sinks F Char) Source

Write 8-bit ASCII characters to some files.

sinkLines Source

Arguments

:: (BulkI l Bucket, BulkI l1 (Array l2 Char), BulkI l2 Char, Unpack (Array l2 Char) t2) 
=> Name l1

Layout of chunks.

-> Name l2

Layout of lines in chunks.

-> Array l Bucket

Buckets

-> IO (Sinks l1 (Array l2 Char)) 

Write vectors of text lines to the given file handles.

  • Data is copied into a new buffer to insert newlines before being written out.

sinkBytes :: BulkI l Bucket => Array l Bucket -> IO (Sinks F Word8) Source

Write bytes to some files.
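
A sketch that copies bytes between files (names are placeholders):

> i <- fromFiles ["in.bin"]  sourceBytes
> o <- toFiles   ["out.bin"] sinkBytes
> drainS i o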