streaming-bytestring: Effectful sequences of bytes.
This is an implementation of effectful, monadic bytestrings, adequate for streaming IO without resort to lazy IO.
Interoperation with pipes is accomplished with this isomorphism:

    Streaming.unfoldrChunks Pipes.next
      :: Monad m => Producer ByteString m r -> ByteString m r
    Pipes.unfoldr Streaming.nextChunk
      :: Monad m => ByteString m r -> Producer ByteString m r
Interoperation with io-streams is thus:

    IOStreams.unfoldM Streaming.unconsChunk
      :: ByteString IO () -> IO (InputStream ByteString)
    Streaming.reread IOStreams.read
      :: InputStream ByteString -> ByteString IO ()
and similarly for other streaming io libraries.
The implementation follows that of Data.ByteString.Lazy as far as is possible, substituting the type

    data ByteString m r
      = Empty r
      | Chunk Strict.ByteString (ByteString m r)
      | Go (m (ByteString m r))

for the type

    data ByteString
      = Empty
      | Chunk Strict.ByteString ByteString

found in Data.ByteString.Lazy.Internal. (Constructors are necessarily hidden in internal modules in both cases.) As a lazy bytestring is implemented internally as a sort of list of strict bytestring chunks, so a streaming bytestring is implemented as a producer or generator of strict bytestring chunks.
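To make the "producer of strict chunks" idea concrete, here is a minimal, self-contained sketch of such a type using only base and bytestring. The function name `toChunks` and the instances shown are illustrative, not the library's actual internal module:

```haskell
{-# LANGUAGE OverloadedStrings #-}

import qualified Data.ByteString as Strict

-- The effectful bytestring type from the text: a chunk stream that can
-- interleave effects in m and ends with a return value r.
data ByteString m r
  = Empty r
  | Chunk Strict.ByteString (ByteString m r)
  | Go (m (ByteString m r))

instance Monad m => Functor (ByteString m) where
  fmap f (Empty r)    = Empty (f r)
  fmap f (Chunk c bs) = Chunk c (fmap f bs)
  fmap f (Go m)       = Go (fmap (fmap f) m)

instance Monad m => Applicative (ByteString m) where
  pure = Empty
  bf <*> bx = bf >>= \f -> fmap f bx

-- Sequencing substitutes a new chunk stream for the return value,
-- much as list concatenation substitutes a list for [].
instance Monad m => Monad (ByteString m) where
  return = pure
  Empty r    >>= f = f r
  Chunk c bs >>= f = Chunk c (bs >>= f)
  Go m       >>= f = Go (fmap (>>= f) m)

-- Run the effects and collect the strict chunks: the sense in which
-- this type is a producer of strict bytestring chunks.
toChunks :: Monad m => ByteString m r -> m ([Strict.ByteString], r)
toChunks (Empty r)    = return ([], r)
toChunks (Chunk c bs) = do
  (cs, r) <- toChunks bs
  return (c : cs, r)
toChunks (Go m)       = m >>= toChunks
```

Note how the `Go` constructor is the only difference in shape from the lazy type: it is the hook on which arbitrary monadic effects hang between chunks.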
Something like this alteration of the type is of course obvious and mechanical once an effectful bytestring type is contemplated and lazy IO is rejected. Indeed, it seems that this is the proper expression of what lazy bytestrings were intended to be from the beginning. The documentation, after all, reads:
"A key feature of lazy ByteStrings is the means to manipulate large or unbounded streams of data without requiring the entire sequence to be resident in memory. To take advantage of this you have to write your functions in a lazy streaming style, e.g. classic pipeline composition. The default I/O chunk size is 32k, which should be good in most circumstances."
... which is very much the idea of this library: the default chunk size for hGetContents and the like follows Data.ByteString.Lazy, and operations like append are tailored not to increase chunk size.
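The chunk-preservation point can be seen in a pure fragment of the type above (the effect constructor is omitted for brevity; `BS`, `appendBS`, and `chunkList` are toy names, not library exports):

```haskell
{-# LANGUAGE OverloadedStrings #-}

import qualified Data.ByteString as Strict

-- A pure fragment of the chunked type, without the effect constructor.
data BS r = Empty r | Chunk Strict.ByteString (BS r)

-- Append grafts the second stream onto the end of the first, leaving
-- every existing chunk exactly as it was: chunk sizes never grow.
appendBS :: BS r -> BS s -> BS s
appendBS (Empty _)    ys = ys
appendBS (Chunk c xs) ys = Chunk c (appendBS xs ys)

-- Observe the chunk structure directly.
chunkList :: BS r -> [Strict.ByteString]
chunkList (Empty _)    = []
chunkList (Chunk c bs) = c : chunkList bs
```

Since append never copies or coalesces bytes, appending small strings repeatedly leaves many small chunks rather than one large one, which is the trade-off the library's chunk-size discipline is about.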
It is natural to think that the direct, naive, monadic formulation of such a type would necessarily make things much slower. This appears to be a prejudice. For example, passing a large file of short lines through this benchmark transformation
    Lazy.unlines . map (\bs -> "!" <> Lazy.drop 5 bs) . Lazy.lines

    Streaming.unlines . S.maps (\bs -> chunk "!" >> Streaming.drop 5 bs) . Streaming.lines
gives pleasing results like these
    $ time ./benchlines lazy >> /dev/null
    real 0m2.097s
    ...
    $ time ./benchlines streaming >> /dev/null
    real 0m1.930s
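The lazy half of the benchmark can be reproduced with the bytestring package alone (the streaming half needs this library); a minimal sketch, with `transform` as an illustrative name:

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Data.Monoid ((<>))
import qualified Data.ByteString.Lazy.Char8 as Lazy

-- The lazy-bytestring side of the benchmark transformation: drop the
-- first five bytes of each line and prefix the remainder with "!".
transform :: Lazy.ByteString -> Lazy.ByteString
transform = Lazy.unlines . map (\bs -> "!" <> Lazy.drop 5 bs) . Lazy.lines
```

Applied to `"hello world\nfoobar\n"` this yields `"! world\n!r\n"`, since `Lazy.lines` splits at newlines and `Lazy.unlines` restores them.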
More typical, perhaps, are the results for the more sophisticated operation
    Lazy.intercalate "!\n" . Lazy.lines

    Streaming.intercalate "!\n" . Streaming.lines
    $ time ./benchlines lazy >> /dev/null
    real 0m1.250s
    ...
    $ time ./benchlines streaming >> /dev/null
    real 0m1.531s
The pipes environment (to which this library basically belongs) would express the latter as
    Pipes.intercalates (Pipes.yield "!\n") . view Pipes.lines
meaning almost exactly what we mean above, but with results like this
    $ time ./benchlines pipes >> /dev/null
    real 0m6.353s
The difference, I think, is mostly that this library depends on the streaming library, whose Stream type is used in place of FreeT to express the splitting and division of byte streams.
Indeed, even if I unwrap and re-wrap with the above-mentioned isomorphism

    Pipes.unfoldr Streaming.nextChunk
      . Streaming.intercalate "!\n"
      . Streaming.lines
      . Streaming.unfoldrChunks Pipes.next
I get an excellent speed-up:
    $ time ./benchlines pipes_stream >> /dev/null
    real 0m3.393s
Though we barely alter the signatures of Data.ByteString.Lazy more than is required by the types, the point of view that emerges is very much that of pipes-group. In particular, we have the correspondences
    Lazy.splitAt      :: Int -> ByteString -> (ByteString, ByteString)
    Streaming.splitAt :: Int -> ByteString m r -> ByteString m (ByteString m r)
    Pipes.splitAt     :: Int -> Producer ByteString m r -> Producer ByteString m (Producer ByteString m r)
    Lazy.lines      :: ByteString -> [ByteString]
    Streaming.lines :: ByteString m r -> Stream (ByteString m) m r
    Pipes.lines     :: Producer ByteString m r -> FreeT (Producer ByteString m) m r
Here the Stream type expresses the succession of ByteString m _ layers with the usual 'free monad' sequencing.
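The Stream type can itself be sketched in a few lines. This simplified, self-contained version uses an `Of` payload functor in the manner of the streaming library (in the library itself, Streaming.lines produces layers that are whole bytestrings, i.e. `Stream (ByteString m) m r`; the names here are a toy reconstruction):

```haskell
-- A simplified Stream type: functor layers interleaved with effects in
-- a base monad m, terminated by a return value r.
data Stream f m r
  = Return r
  | Step (f (Stream f m r))
  | Effect (m (Stream f m r))

-- The payload functor pairing an emitted element with the rest.
data Of a b = a :> b

instance Functor (Of a) where
  fmap f (a :> b) = a :> f b

-- Run the effects and collect the element carried by each layer.
toList :: Monad m => Stream (Of a) m r -> m ([a], r)
toList (Return r)         = return ([], r)
toList (Effect m)         = m >>= toList
toList (Step (a :> rest)) = do
  (as, r) <- toList rest
  return (a : as, r)
```

Replacing `Of a` with a bytestring layer gives the shape used above: each step of the free monad is itself a chunk stream, which is what lets Streaming.lines delimit lines without concatenating them.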
If you are unfamiliar with this way of structuring material, you might take a look at the tutorial for pipes-group and the examples in the documentation for the streaming library. See also the implementations of the shell-like examples from the io-streams tutorial.
Versions:      0.1.0.0, 0.1.0.1, 0.1.0.2, 0.1.0.3, 0.1.0.4, 0.1.0.5, 0.1.0.6, 0.1.0.7, 0.1.0.8, 0.1.1.0, 0.1.2.0, 0.1.2.2, 0.1.3.0, 0.1.4.0, 0.1.4.2, 0.1.4.3, 0.1.4.4, 0.1.4.5, 0.1.4.6, 0.1.5, 0.1.6
Dependencies:  attoparsec, base (==4.8.*), bytestring (==0.10.*), deepseq (==1.4.*), foldl, http-client, http-client-tls, mmorph (==1.0.*), mtl (==2.2.*), streaming, syb (==0.5.*), transformers
Uploaded:      by MichaelThompson at Wed Aug 26 15:26:10 UTC 2015
Distributions: LTSHaskell:0.1.6, NixOS:0.1.6, Stackage:0.1.6