storablevector-0.2.9.1: Fast, packed, strict storable arrays with a list interface like ByteString

Safe Haskell: None
Language: Haskell98

Data.StorableVector.Lazy

Description

Chunky signal stream built on StorableVector.

Hints for fusion:

- Higher-order functions should always be inlined in the end, in order to turn them into machine loops instead of calling a function in an inner loop.

Documentation

newtype Vector a Source

Constructors

SV 

Fields

chunks :: [Vector a]

The chunks are strict vectors from Data.StorableVector.

Instances

(Storable a, Eq a) => Eq (Vector a) 
(Storable a, Show a) => Show (Vector a) 
(Storable a, Arbitrary a) => Arbitrary (Vector a) 
Storable a => Monoid (Vector a) 

Introducing and eliminating Vectors

pack :: Storable a => ChunkSize -> [a] -> Vector a Source

unpack :: Storable a => Vector a -> [a] Source
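
For illustration, pack splits a list into chunks of the given size and unpack recovers the original list; a GHCi session in the style of the unfoldrResult example below should behave roughly like this:

*Data.StorableVector.Lazy> pack (ChunkSize 3) "storable"
VectorLazy.fromChunks [Vector.pack "sto",Vector.pack "rab",Vector.pack "le"]
*Data.StorableVector.Lazy> unpack (pack (ChunkSize 3) "storable")
"storable"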

packWith :: Storable b => ChunkSize -> (a -> b) -> [a] -> Vector b Source

unpackWith :: Storable a => (a -> b) -> Vector a -> [b] Source

unfoldr :: Storable b => ChunkSize -> (a -> Maybe (b, a)) -> a -> Vector b Source

unfoldrResult :: Storable b => ChunkSize -> (a -> Either c (b, a)) -> a -> (Vector b, c) Source

Example:

*Data.StorableVector.Lazy> unfoldrResult (ChunkSize 5) (\c -> if c>'z' then Left (Char.ord c) else Right (c, succ c)) 'a'
(VectorLazy.fromChunks [Vector.pack "abcde",Vector.pack "fghij",Vector.pack "klmno",Vector.pack "pqrst",Vector.pack "uvwxy",Vector.pack "z"],123)

sample :: Storable a => ChunkSize -> (Int -> a) -> Vector a Source

sampleN :: Storable a => ChunkSize -> Int -> (Int -> a) -> Vector a Source

iterate :: Storable a => ChunkSize -> (a -> a) -> a -> Vector a Source
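
Since the result is lazy, iterate can build an infinite stream that is consumed chunk by chunk; a sketch of the expected behaviour:

*Data.StorableVector.Lazy> unpack (take 6 (iterate (ChunkSize 4) (2*) (1::Int)))
[1,2,4,8,16,32]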

Basic interface

equal :: (Storable a, Eq a) => Vector a -> Vector a -> Bool Source

index :: Storable a => Vector a -> Int -> a Source

cons :: Storable a => a -> Vector a -> Vector a Source

append :: Storable a => Vector a -> Vector a -> Vector a infixr 5 Source

extendL :: Storable a => ChunkSize -> Vector a -> Vector a -> Vector a Source

extendL size x y prepends the chunk x and merges it with the first chunk of y if the total size is at most size. This way you can prepend small chunks while maintaining a reasonable average chunk size.
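
A sketch of the expected behaviour: the elements are simply those of x followed by those of y, and per the description above the single-element chunk should be fused into the first three-element chunk of the second argument, since 1 + 3 is at most 4:

*Data.StorableVector.Lazy> unpack (extendL (ChunkSize 4) (pack (ChunkSize 1) "a") (pack (ChunkSize 3) "bcdefg"))
"abcdefg"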

Transformations

map :: (Storable x, Storable y) => (x -> y) -> Vector x -> Vector y Source

Reducing Vectors

foldl :: Storable b => (a -> b -> a) -> a -> Vector b -> a Source

foldl' :: Storable b => (a -> b -> a) -> a -> Vector b -> a Source

foldr :: Storable b => (b -> a -> a) -> a -> Vector b -> a Source

monoidConcatMap :: (Storable a, Monoid m) => (a -> m) -> Vector a -> m Source

any :: Storable a => (a -> Bool) -> Vector a -> Bool Source

all :: Storable a => (a -> Bool) -> Vector a -> Bool Source

maximum :: (Storable a, Ord a) => Vector a -> a Source

minimum :: (Storable a, Ord a) => Vector a -> a Source

inspecting a vector

viewL :: Storable a => Vector a -> Maybe (a, Vector a) Source

viewR :: Storable a => Vector a -> Maybe (Vector a, a) Source

switchL :: Storable a => b -> (a -> Vector a -> b) -> Vector a -> b Source

switchR :: Storable a => b -> (Vector a -> a -> b) -> Vector a -> b Source

scanl :: (Storable a, Storable b) => (a -> b -> a) -> a -> Vector b -> Vector a Source

mapAccumL :: (Storable a, Storable b) => (acc -> a -> (acc, b)) -> acc -> Vector a -> (acc, Vector b) Source

mapAccumR :: (Storable a, Storable b) => (acc -> a -> (acc, b)) -> acc -> Vector a -> (acc, Vector b) Source

crochetLChunk :: (Storable x, Storable y) => (x -> acc -> Maybe (y, acc)) -> acc -> Vector x -> (Vector y, Maybe acc) Source

crochetL :: (Storable x, Storable y) => (x -> acc -> Maybe (y, acc)) -> acc -> Vector x -> Vector y Source
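
crochetL is a stateful map that may stop the output early by returning Nothing. For instance, a counting accumulator yields a variant of take; a sketch of the expected behaviour:

*Data.StorableVector.Lazy> unpack (crochetL (\x n -> if n > (0::Int) then Just (x, n-1) else Nothing) 3 (pack (ChunkSize 2) "abcdefgh"))
"abc"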

sub-vectors

take :: Storable a => Int -> Vector a -> Vector a Source

takeEnd :: Storable a => Int -> Vector a -> Vector a Source

Take n values from the end of the vector in a memory-friendly way. takeEnd n xs should perform the same as drop (length xs - n) xs, but the latter would have to materialise xs completely. In contrast, takeEnd should hold only chunks of about n elements at any point in time.
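
A sketch of the expected behaviour:

*Data.StorableVector.Lazy> unpack (takeEnd 3 (pack (ChunkSize 4) "abcdefghij"))
"hij"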

drop :: Storable a => Int -> Vector a -> Vector a Source

splitAt :: Storable a => Int -> Vector a -> (Vector a, Vector a) Source

dropMarginRem :: Storable a => Int -> Int -> Vector a -> (Int, Vector a) Source

dropMarginRem n m xs drops at most the first m elements of xs while ensuring that the result still contains at least n elements. It additionally returns the number of elements that could not be dropped due to the margin constraint; that is, dropMarginRem n m xs == (k,ys) implies length xs - m == length ys - k. Requires length xs >= n.
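
For illustration, with six elements, a margin of n = 3 and a request to drop m = 4 elements, only three elements can actually be dropped and one drop request is left over, consistent with the law above; a sketch of the expected behaviour:

*Data.StorableVector.Lazy> let (k, ys) = dropMarginRem 3 4 (pack (ChunkSize 2) [0..5::Int])
*Data.StorableVector.Lazy> (k, unpack ys)
(1,[3,4,5])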

dropWhile :: Storable a => (a -> Bool) -> Vector a -> Vector a Source

takeWhile :: Storable a => (a -> Bool) -> Vector a -> Vector a Source

span :: Storable a => (a -> Bool) -> Vector a -> (Vector a, Vector a) Source

other functions

filter :: Storable a => (a -> Bool) -> Vector a -> Vector a Source

zipWith :: (Storable a, Storable b, Storable c) => (a -> b -> c) -> Vector a -> Vector b -> Vector c Source

Generates laziness breaks wherever one of the input signals has a chunk boundary.

zipWith3 :: (Storable a, Storable b, Storable c, Storable d) => (a -> b -> c -> d) -> Vector a -> Vector b -> Vector c -> Vector d Source

zipWith4 :: (Storable a, Storable b, Storable c, Storable d, Storable e) => (a -> b -> c -> d -> e) -> Vector a -> Vector b -> Vector c -> Vector d -> Vector e Source

zipWithLastPattern :: (Storable a, Storable b, Storable c) => (a -> b -> c) -> Vector a -> Vector b -> Vector c Source

Preserves chunk pattern of the last argument.
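
For illustration, the element-wise result equals that of zipWith, but the chunks of the result should follow the 3+1 pattern of the second operand rather than the 2+2 pattern of the first; a sketch:

*Data.StorableVector.Lazy> unpack (zipWithLastPattern (+) (pack (ChunkSize 2) [1,2,3,4::Int]) (pack (ChunkSize 3) [10,20,30,40]))
[11,22,33,44]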

zipWithLastPattern3 :: (Storable a, Storable b, Storable c, Storable d) => (a -> b -> c -> d) -> Vector a -> Vector b -> Vector c -> Vector d Source

Preserves chunk pattern of the last argument.

zipWithLastPattern4 :: (Storable a, Storable b, Storable c, Storable d, Storable e) => (a -> b -> c -> d -> e) -> Vector a -> Vector b -> Vector c -> Vector d -> Vector e Source

Preserves chunk pattern of the last argument.

zipWithSize :: (Storable a, Storable b, Storable c) => ChunkSize -> (a -> b -> c) -> Vector a -> Vector b -> Vector c Source

zipWithSize3 :: (Storable a, Storable b, Storable c, Storable d) => ChunkSize -> (a -> b -> c -> d) -> Vector a -> Vector b -> Vector c -> Vector d Source

zipWithSize4 :: (Storable a, Storable b, Storable c, Storable d, Storable e) => ChunkSize -> (a -> b -> c -> d -> e) -> Vector a -> Vector b -> Vector c -> Vector d -> Vector e Source

interleaved vectors

sieve :: Storable a => Int -> Vector a -> Vector a Source

interleaveFirstPattern :: Storable a => [Vector a] -> Vector a Source

Interleave lazy vectors while maintaining the chunk pattern of the first vector. All input vectors must have the same length.
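
A sketch of the expected behaviour, interleaving two vectors of equal length element by element:

*Data.StorableVector.Lazy> unpack (interleaveFirstPattern [pack (ChunkSize 3) "abc", pack (ChunkSize 3) "uvw"])
"aubvcw"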

pad :: Storable a => ChunkSize -> a -> Int -> Vector a -> Vector a Source

Ensure a minimum length of the vector by appending pad values.
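
A sketch of the expected behaviour (a vector that is already long enough should come back unchanged):

*Data.StorableVector.Lazy> unpack (pad (ChunkSize 4) '.' 6 (pack (ChunkSize 4) "abc"))
"abc..."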

padAlt :: Storable a => ChunkSize -> a -> Int -> Vector a -> Vector a Source

Helper functions for StorableVector

IO

hGetContentsAsync :: Storable a => ChunkSize -> Handle -> IO (IOError, Vector a) Source

Read the rest of a file lazily and provide the reason for termination as an IOError. If the IOError is EOF (check with System.IO.Error.isEOFError err), then the file was read successfully. Only access the final IOError after you have consumed the file contents, since finding out the terminating reason forces the entire file to be read. Also make sure that you read the file completely, because it is only closed when the end of the file is reached (or an exception is encountered).

TODO: In ByteString.Lazy the chunk size is reduced if data is not immediately available. Maybe we should adopt that behaviour, but when working with real-time streams this may mean that the chunks become very small.

hPut :: Storable a => Handle -> Vector a -> IO () Source

readFileAsync :: Storable a => ChunkSize -> FilePath -> IO (IOError, Vector a) Source

The file can only be closed after all values have been consumed. That is, you must always make sure that you consume all elements of the stream and that no values are missed due to lazy evaluation. This requirement makes this function useless in many applications.
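
The following sketch shows the intended usage pattern: force the complete stream before inspecting the terminating IOError, so that no elements are missed due to lazy evaluation. The helper name readBytes is made up for illustration.

import qualified Data.StorableVector.Lazy as SVL
import System.IO.Error (isEOFError)
import Data.Word (Word8)

-- Read a whole binary file as a list of bytes.
readBytes :: FilePath -> IO [Word8]
readBytes path = do
    (err, xs) <- SVL.readFileAsync (SVL.ChunkSize 4096) path
    let ys = SVL.unpack xs
    -- Force the complete stream before looking at err, so that the file
    -- is read to its end (and thereby closed) before we decide.
    length ys `seq`
        if isEOFError err
            then return ys
            else ioError err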