repa-flow-4.2.2.1: Data-parallel data flows.

Safe Haskell: None
Language: Haskell98

Data.Repa.Flow.Generic.IO


Buckets

Sourcing

sourceBytes Source

Arguments

:: Bulk l Bucket 
=> Integer

Chunk length in bytes.

-> Array l Bucket

Buckets.

-> IO (Sources (Index l) IO (Array F Word8)) 

Read data from some files, using the given chunk length.

  • Data is read into foreign memory without copying it through the GHC heap.
  • All chunks have the same size, except possibly the last one.
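As a sketch of how such a source bundle is typically driven, here is a hypothetical chunk-wise file copy. The helpers `fromFiles`, `toFiles` and `drainS` are assumed to come from Data.Repa.Flow.IO.Bucket and the generic evaluation module; their exact names and signatures may differ between repa-flow versions.

```haskell
import Data.Repa.Flow.Generic          -- drainS (assumed)
import Data.Repa.Flow.Generic.IO
import Data.Repa.Flow.IO.Bucket        -- fromFiles, toFiles (assumed)

-- Copy a file chunk-wise: read raw bytes in 64 KiB chunks and
-- push each chunk straight back out to the destination file.
copyFileChunked :: FilePath -> FilePath -> IO ()
copyFileChunked src dst
 = fromFiles [src] $ \bsIn
 -> toFiles  [dst] $ \bsOut
 -> do  sources <- sourceBytes (64 * 1024) bsIn
        sinks   <- sinkBytes   bsOut
        drainS sources sinks   -- pull every chunk, push it to the sink
```

Because the data never passes through the GHC heap, this copies large files without producing garbage-collector pressure proportional to the file size.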

sourceChars Source

Arguments

:: Bulk l Bucket 
=> Integer

Chunk length in bytes.

-> Array l Bucket

Buckets.

-> IO (Sources (Index l) IO (Array F Char)) 

Read 8-bit ASCII characters from some files, using the given chunk length.

  • Data is read into foreign memory without copying it through the GHC heap.
  • All chunks have the same size, except possibly the last one.

sourceChunks Source

Arguments

:: BulkI l Bucket 
=> Integer

Chunk length in bytes.

-> (Word8 -> Bool)

Detect the end of a record.

-> IO ()

Action to perform if we can't get a whole record.

-> Array l Bucket

Source buckets.

-> IO (Sources (Index l) IO (Array F Word8)) 

Like sourceRecords, but produce all records in a single vector.

sourceRecords Source

Arguments

:: BulkI l Bucket 
=> Integer

Chunk length in bytes.

-> (Word8 -> Bool)

Detect the end of a record.

-> IO ()

Action to perform if we can't get a whole record.

-> Array l Bucket

Source buckets.

-> IO (Sources Int IO (Array N (Array F Word8))) 

Read complete records of data from a bucket, into chunks of the given length. We read as many complete records as will fit into each chunk.

The records are separated by a special terminating character, which the given predicate detects. After reading a chunk of data we seek the bucket to just after the last complete record that was read, so we can continue to read more complete records next time.

If we cannot fit at least one complete record in the chunk then perform the given failure action. Limiting the chunk length guards against the case where a large input file is malformed, as we won't try to read the whole file into memory.

  • Data is read into foreign memory without copying it through the GHC heap.
  • The provided file handle must support seeking, else you'll get an exception.
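For example, to read newline-terminated records, the terminator predicate just tests for `'\n'`. This is a sketch: `fromFiles` is assumed from Data.Repa.Flow.IO.Bucket, and the failure action here simply raises an IO error.

```haskell
import Data.Repa.Flow.Generic.IO
import Data.Repa.Flow.IO.Bucket        -- fromFiles (assumed)
import Data.Char                       (ord)
import Data.Word                       (Word8)

nl :: Word8
nl = fromIntegral (ord '\n')

-- Produce chunks of complete newline-terminated records, reading
-- at most 64 KiB per chunk. If a single record is longer than the
-- chunk length we fail loudly rather than trying to buffer an
-- unbounded amount of the file in memory.
readRecords :: FilePath -> IO ()
readRecords path
 = fromFiles [path] $ \buckets
 -> do  ss <- sourceRecords
                (64 * 1024)             -- chunk length in bytes
                (== nl)                 -- records end at '\n'
                (ioError (userError "record longer than chunk"))
                buckets
        -- Each pull on 'ss' yields a nested array containing the
        -- complete records that fit in the current chunk.
        return ()
```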

sourceLinesFormat Source

Arguments

:: (Unpackable format, Target A (Value format)) 
=> Integer

Chunk length.

-> IO ()

Action if we find a line longer than the chunk length.

-> IO (Array A Word8 -> IO ())

Action if we can't convert a row.

-> format

Format of each line.

-> Array B Bucket 
-> IO (Sources Int IO (Array A (Value format))) 

Read lines from a named text file, in a chunk-wise manner, converting each line to values with the given format.

sourceLinesFormatFromLazyByteString Source

Arguments

:: (Unpackable format, Target A (Value format)) 
=> Int

Number of streams in the result bundle.

-> IO (Array A Word8 -> IO ())

Action if we can't convert a row.

-> format

Format of each line.

-> ByteString

Lazy byte string.

-> Int

Skip this many header lines at the start.

-> IO (Sources Int IO (Array A (Value format))) 

Read lines from a lazy byte string, in a chunk-wise manner, converting each line to values with the given format.

Sinking

sinkBytes Source

Arguments

:: Bulk l Bucket 
=> Array l Bucket

Buckets.

-> IO (Sinks (Index l) IO (Array F Word8)) 

Write chunks of bytes to the given file handles.

  • Data is written out directly from the provided buffer.
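Besides being driven by a flow operator, a sink bundle can be pushed by hand. This sketch assumes the `Sinks` record fields `sinksPush` and `sinksEject` from the generic base module, a `toFiles` bucket helper, and `fromList` with the foreign layout name `F` from repa-array; all of these names may vary by version.

```haskell
import Data.Repa.Flow.Generic          -- Sinks(..) (assumed fields)
import Data.Repa.Flow.Generic.IO
import Data.Repa.Flow.IO.Bucket        -- toFiles (assumed)
import Data.Repa.Array                 as A
import Data.Repa.Array.Material       (F (..))
import Data.Word                       (Word8)

-- Write a single chunk of bytes to stream 0 of the sink bundle,
-- then eject the stream to flush and close it.
writeChunk :: FilePath -> [Word8] -> IO ()
writeChunk path bytes
 = toFiles [path] $ \buckets
 -> do  k <- sinkBytes buckets
        sinksPush  k 0 (A.fromList F bytes)  -- push one chunk
        sinksEject k 0                       -- signal end of stream
```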

sinkChars Source

Arguments

:: (Bulk l Bucket, BulkI r Char) 
=> Array l Bucket

Buckets.

-> IO (Sinks (Index l) IO (Array r Char)) 

Write chunks of 8-bit ASCII characters to the given file handles.

  • Data is copied into a foreign buffer, truncating each character to 8 bits, before being written out.

sinkLines Source

Arguments

:: (Bulk l Bucket, BulkI l1 (Array l2 Char), BulkI l2 Char) 
=> Name l1

Layout of chunks of lines.

-> Name l2

Layout of lines.

-> Array l Bucket

Buckets.

-> IO (Sinks (Index l) IO (Array l1 (Array l2 Char))) 

Write vectors of text lines to the given file handles.

  • Data is copied into a new buffer to insert newlines before being written out.

Sieving

sieve_o Source

Arguments

:: Int

Max payload size of in-memory data.

-> Int

Max number of in-memory chunks.

-> (a -> Maybe (FilePath, Array F Word8))

Produce the desired file path and output record for this element, or Nothing if it should be discarded.

-> IO (Sinks () IO a) 

Create an output sieve that writes data to an indeterminate number of output files. Each new element is appended to its associated file.
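For instance, to fan keyed records out to one file per key: a sketch only, where the 1 MiB and 64-chunk limits are arbitrary, the element type `(String, Array F Word8)` is an assumed shape for keyed records, and the `out/` path prefix is hypothetical.

```haskell
import Data.Repa.Flow.Generic          -- Sinks (assumed re-export)
import Data.Repa.Flow.Generic.IO
import Data.Repa.Array                 (Array)
import Data.Repa.Array.Material       (F)
import Data.Word                       (Word8)

-- Route each (key, payload) element to "out/<key>.bin", buffering
-- at most 1 MiB of payload across at most 64 in-memory chunks
-- before spilling to the output files.
sieveByKey :: IO (Sinks () IO (String, Array F Word8))
sieveByKey
 = sieve_o (1024 * 1024) 64
 $ \(key, payload)
 -> if null key
     then Nothing                           -- discard keyless records
     else Just ("out/" ++ key ++ ".bin", payload)
```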

Tables

sourceCSV :: BulkI l Bucket => Integer -> IO () -> Array l Bucket -> IO (Sources Int IO (Array N (Array N (Array F Char)))) Source

Read a file containing Comma-Separated-Values.

TODO: handle escaped commas. TODO: check CSV file standard.
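sourceCSV is used like the other sources in this module. A sketch, with `fromFiles` assumed from Data.Repa.Flow.IO.Bucket; by analogy with sourceRecords, the `IO ()` argument is presumably the action to run when a whole row does not fit in a chunk.

```haskell
import Data.Repa.Flow.Generic.IO
import Data.Repa.Flow.IO.Bucket        -- fromFiles (assumed)

-- Read a CSV file as chunks of rows, where each row is an array
-- of fields and each field an array of characters.
readCSV :: FilePath -> IO ()
readCSV path
 = fromFiles [path] $ \buckets
 -> do  rows <- sourceCSV (64 * 1024)
                          (ioError (userError "row longer than chunk"))
                          buckets
        -- Each pull on 'rows' yields an Array N (Array N (Array F Char)):
        -- a chunk of rows, each row a vector of character fields.
        return ()
```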

sourceTSV :: BulkI l Bucket => Integer -> IO () -> Array l Bucket -> IO (Sources Int IO (Array N (Array N (Array F Char)))) Source

Read a file containing Tab-Separated-Values.