repa-flow-4.0.0.2: Data-parallel data flows.

Safe Haskell: None
Language: Haskell98

Data.Repa.Flow.Generic.IO


Buckets

Sourcing

sourceBytes

Arguments

:: Bulk l Bucket 
=> Integer

Chunk length in bytes.

-> Array l Bucket

Buckets.

-> IO (Sources (Index l) IO (Array F Word8)) 

Read data from some files, using the given chunk length.

  • Data is read into foreign memory without copying it through the GHC heap.
  • All chunks have the same size, except possibly the last one.
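A minimal sketch of how sourceBytes pairs with sinkBytes (described below): copy the contents of some input buckets to matching output buckets in fixed-size chunks. The bucket arrays `bsIn` and `bsOut` are assumed to have been opened elsewhere (e.g. with the helpers in Data.Repa.Flow.IO.Bucket), and `drainS` is the sequential drain operator from Data.Repa.Flow.Generic; module paths follow repa-flow 4.0 and are part of the assumption.

```haskell
import Data.Repa.Flow.Generic           -- drainS
import Data.Repa.Flow.Generic.IO        -- sourceBytes, sinkBytes
import Data.Repa.Flow.IO.Bucket         (Bucket)
import Data.Repa.Array                  as A

-- Stream the bytes of each input bucket to the corresponding output
-- bucket, 64 KiB at a time, so whole files never sit in memory.
-- Both arrays must contain the same number of buckets.
copyBuckets :: A.Bulk l Bucket
            => A.Array l Bucket -> A.Array l Bucket -> IO ()
copyBuckets bsIn bsOut
 = do   ss <- sourceBytes (64 * 1024) bsIn
        sk <- sinkBytes bsOut
        drainS ss sk
```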

sourceChars

Arguments

:: Bulk l Bucket 
=> Integer

Chunk length in bytes.

-> Array l Bucket

Buckets.

-> IO (Sources (Index l) IO (Array F Char)) 

Read 8-bit ASCII characters from some files, using the given chunk length.

  • Data is read into foreign memory without copying it through the GHC heap.
  • All chunks have the same size, except possibly the last one.

sourceChunks

Arguments

:: BulkI l Bucket 
=> Integer

Chunk length in bytes.

-> (Word8 -> Bool)

Detect the end of a record.

-> IO ()

Action to perform if we can't get a whole record.

-> Array l Bucket

Source buckets.

-> IO (Sources (Index l) IO (Array F Word8)) 

Like sourceRecords, but produce all records in a single vector.
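A sketch of the argument order, assuming newline-terminated records and a bucket array `bs` opened elsewhere. Unlike sourceRecords, each chunk here is a single flat array of bytes holding the complete records, rather than a nested array of records:

```haskell
-- Read newline-terminated records as flat chunks of bytes, failing if
-- a single record is longer than the 64 KiB chunk length.
chunksOfRecords bs
 = sourceChunks (64 * 1024) (== 0x0a)
        (ioError (userError "record too large for chunk"))
        bs
```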

sourceRecords

Arguments

:: BulkI l Bucket 
=> Integer

Chunk length in bytes.

-> (Word8 -> Bool)

Detect the end of a record.

-> IO ()

Action to perform if we can't get a whole record.

-> Array l Bucket

Source buckets.

-> IO (Sources Int IO (Array N (Array F Word8))) 

Read complete records of data from a bucket, into chunks of the given length. We read as many complete records as will fit into each chunk.

The records are separated by a special terminating character, which the given predicate detects. After reading a chunk of data we seek the bucket to just after the last complete record that was read, so we can continue to read more complete records next time.

If we cannot fit at least one complete record in the chunk then perform the given failure action. Limiting the chunk length guards against the case where a large input file is malformed, as we won't try to read the whole file into memory.

  • Data is read into foreign memory without copying it through the GHC heap.
  • The provided file handle must support seeking, else you'll get an exception.
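For example, splitting text files into lines is a matter of choosing the newline byte as the terminator. The bucket array `bs` and the choice of failure action are assumptions of this sketch:

```haskell
-- Read complete lines from some buckets, one nested array of lines per
-- chunk. The predicate detects the terminating newline (0x0a); if a
-- single line exceeds the 64 KiB chunk length we fail rather than
-- trying to buffer an unbounded amount of data.
sourceLines bs
 = sourceRecords (64 * 1024) (== 0x0a)
        (ioError (userError "line longer than chunk length"))
        bs
```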

Sinking

sinkBytes

Arguments

:: Bulk l Bucket 
=> Array l Bucket

Buckets.

-> IO (Sinks (Index l) IO (Array F Word8)) 

Write chunks of bytes to the given file handles.

  • Data is written out directly from the provided buffer.

sinkChars

Arguments

:: (Bulk l Bucket, BulkI r Char) 
=> Array l Bucket

Buckets.

-> IO (Sinks (Index l) IO (Array r Char)) 

Write chunks of 8-bit ASCII characters to the given file handles.

  • Data is copied into a foreign buffer, truncating each character to 8 bits, before being written out.

sinkLines

Arguments

:: (Bulk l Bucket, BulkI l1 (Array l2 Char), BulkI l2 Char, Unpack (Array l2 Char) t2) 
=> Name l1

Layout of chunks of lines.

-> Name l2

Layout of lines.

-> Array l Bucket

Buckets.

-> IO (Sinks (Index l) IO (Array l1 (Array l2 Char))) 

Write vectors of text lines to the given file handles.

  • Data is copied into a new buffer to insert newlines before being written out.

Sieving

sieve_o

Arguments

:: (a -> Maybe (FilePath, Array F Word8))

Produce the desired file path and output record for this element, or Nothing if it should be discarded.

-> IO (Sinks () IO a) 

Create an output sieve that writes data to an indeterminate number of output files. Each new element is appended to its associated file.

  • TODO: This function keeps a maximum of 8 files open at once, closing and re-opening them in least-recently-used order. Due to this behaviour it's fine to create thousands of separate output files without risking overflow of the process limit on the number of usable file handles.
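As a sketch, consider routing keyed records to per-key output files. The key-to-path mapping, the `out/` directory, and the module paths (per repa-flow 4.0) are illustrative assumptions:

```haskell
import Data.Repa.Flow.Generic           (Sinks)
import Data.Repa.Flow.Generic.IO        (sieve_o)
import Data.Repa.Array                  (Array)
import Data.Repa.Array.Material         (F)
import Data.Word                        (Word8)

-- Append each (key, payload) record to a file named after its key,
-- discarding records whose key is empty. sieve_o handles opening,
-- closing and re-opening the files as needed.
sieveByKey :: IO (Sinks () IO (String, Array F Word8))
sieveByKey
 = sieve_o (\(key, payload)
     -> if null key
         then Nothing
         else Just ("out/" ++ key, payload))
```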