name:                streaming-bytestring
version:             0.1.0.2
synopsis:            effectful bytestrings, or: lazy bytestring done right
description:
  This is an implementation of effectful, memory-constrained bytestrings
  (byte streams) and functions for streaming bytestring manipulation,
  adequate for non-lazy-io.
  .
  Interoperation with @pipes@ uses this isomorphism:
  .
  > Streaming.unfoldrChunks Pipes.next :: Monad m => Producer ByteString m r -> ByteString m r
  > Pipes.unfoldr Streaming.nextChunk :: Monad m => ByteString m r -> Producer ByteString m r
  .
  Interoperation with @io-streams@ is thus:
  .
  > IOStreams.unfoldM Streaming.unconsChunk :: ByteString IO () -> IO (InputStream ByteString)
  > Streaming.reread IOStreams.read :: InputStream ByteString -> ByteString IO ()
  .
  and similarly for other streaming io libraries.
  .
  A tutorial module is in the works: a sequence of simplified
  implementations of familiar shell utilities, closely following the
  examples found at the end of the standard tutorials for the streaming
  io libraries. These are generally much simpler; in some cases simpler
  than what you would write with lazy bytestrings.
  .
  The implementation follows the details of @Data.ByteString.Lazy@ and
  @Data.ByteString.Lazy.Char8@ as far as is possible, replacing the lazy
  bytestring type:
  .
  > data ByteString = Empty | Chunk Strict.ByteString ByteString
  .
  with the minimal effectful variant:
  .
  > data ByteString m r = Empty r | Chunk Strict.ByteString (ByteString m r) | Go (m (ByteString m r))
  .
  (Constructors are necessarily hidden in internal modules in both
  cases.) Just as a lazy bytestring is implemented internally as a sort
  of list of strict bytestring chunks, a streaming bytestring is simply
  implemented as a /producer/ or /generator/ of strict bytestring
  chunks. Most operations are defined by simply adding a line to what we
  find in @Data.ByteString.Lazy@.
  .
  Such an alteration of type is of course obvious and mechanical, once
  the idea of an effectful bytestring type is contemplated and lazy io
  is rejected.
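The effectful variant above can be sketched as a self-contained toy program. The names here (`ByteStream`, `mapMChunks_`, `appendBS`) are illustrative only, not the library's actual internals:

```haskell
{-# LANGUAGE LambdaCase #-}
-- Minimal illustrative sketch of the effectful bytestring type described
-- above. Names (ByteStream, mapMChunks_, appendBS) are hypothetical.
import qualified Data.ByteString.Char8 as B
import           Data.ByteString       (ByteString)

data ByteStream m r
  = Empty r                            -- stream is finished, with result r
  | Chunk ByteString (ByteStream m r)  -- a strict chunk, then more stream
  | Go (m (ByteStream m r))            -- an effect producing more stream

-- Run the stream, handing each strict chunk to a monadic action.
mapMChunks_ :: Monad m => (ByteString -> m ()) -> ByteStream m r -> m r
mapMChunks_ f = \case
  Empty r    -> pure r
  Chunk bs k -> f bs >> mapMChunks_ f k
  Go m       -> m >>= mapMChunks_ f

-- Append two streams, keeping the second stream's result; compare (++)
-- grafted onto the final Empty of the pure lazy type.
appendBS :: Monad m => ByteStream m r -> ByteStream m s -> ByteStream m s
appendBS (Empty _)    ys = ys
appendBS (Chunk c xs) ys = Chunk c (appendBS xs ys)
appendBS (Go m)       ys = Go (fmap (`appendBS` ys) m)

main :: IO ()
main = do
  -- a stream of two chunks, one hidden behind an (trivial) effect
  let s = Chunk (B.pack "hello, ") (Go (pure (Chunk (B.pack "world") (Empty ()))))
  mapMChunks_ B.putStr s
  putStrLn ""
```

Note that `mapMChunks_` interleaves the stream's own effects (`Go`) with the consumer's action, which is exactly what lazy io does covertly.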
  Indeed, it seems that this is the proper expression of what was
  intended by lazy bytestrings to begin with. The documentation, after
  all, reads:
  .
  * \"A key feature of lazy ByteStrings is the means to manipulate large
  or unbounded streams of data without requiring the entire sequence to
  be resident in memory. To take advantage of this you have to write
  your functions in a lazy streaming style, e.g. classic pipeline
  composition. The default I/O chunk size is 32k, which should be good
  in most circumstances.\"
  .
  ... which is very much the idea of this library: the default chunk
  size for 'hGetContents' and the like follows @Data.ByteString.Lazy@,
  and operations like @lines@ and @append@ and so on are tailored not to
  increase chunk size.
  .
  It is natural to think that the direct, naive, monadic formulation of
  such a type would necessarily make things much slower. This appears to
  be a prejudice. For example, passing a large file of short lines
  through this benchmark transformation
  .
  > Lazy.unlines . map (\bs -> "!" <> Lazy.drop 5 bs) . Lazy.lines
  > Streaming.unlines . S.maps (\bs -> chunk "!" >> Streaming.drop 5 bs) . Streaming.lines
  .
  gives pleasing results like these:
  .
  > $ time ./benchlines lazy >> /dev/null
  > real 0m2.097s
  > ...
  > $ time ./benchlines streaming >> /dev/null
  > real 0m1.930s
  .
  More typical, perhaps, are the results for the more sophisticated
  operation
  .
  > Lazy.intercalate "!\n" . Lazy.lines
  > Streaming.intercalate "!\n" . Streaming.lines
  .
  > $ time ./benchlines lazy >> /dev/null
  > real 0m1.250s
  > ...
  > $ time ./benchlines streaming >> /dev/null
  > real 0m1.531s
  .
  The pipes environment would express the latter as
  .
  > Pipes.intercalates (Pipes.yield "!\n") . view Pipes.lines
  .
  meaning almost exactly what we mean above, but with results like this:
  .
  > $ time ./benchlines pipes >> /dev/null
  > real 0m6.353s
  .
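For reference, the lazy-bytestring side of the first benchmark above is an ordinary pure pipeline; here it is as a self-contained program, applied to a small in-memory input rather than a large file (the input string is made up for illustration):

```haskell
{-# LANGUAGE OverloadedStrings #-}
-- The lazy-bytestring pipeline from the benchmark above: split into
-- lines, drop 5 bytes from each line and prepend "!", then rejoin.
import qualified Data.ByteString.Lazy.Char8 as Lazy
import           Data.Monoid                ((<>))

transform :: Lazy.ByteString -> Lazy.ByteString
transform = Lazy.unlines . map (\bs -> "!" <> Lazy.drop 5 bs) . Lazy.lines

main :: IO ()
main = Lazy.putStr (transform "first line\nsecond line\n")
```

The streaming version quoted above has the same shape, except that `map` over a list of lines becomes `S.maps` over a `Stream` of byte-stream layers, so no line is ever forced into memory as a whole.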
  The difference is not intrinsic to pipes; it is mostly that this
  library depends on the @streaming@ library, which is used in place of
  @free@ to express the (streaming) splitting and division of byte
  streams. Those elementary concepts are catastrophically mishandled in
  the streaming io libraries other than pipes; the @enumerator@ and
  @iteratee@ libraries were already completely defeated by them: see,
  for example, their implementations of line splitting, which will
  concatenate strict text forever, if that's what is coming in.
  .
  Though we barely alter the signatures in @Data.ByteString.Lazy@ more
  than the types require, the point of view that emerges is very much
  that of @pipes-bytestring@ and @pipes-group@. In particular we have
  the correspondences:
  .
  > Lazy.splitAt      :: Int -> ByteString -> (ByteString, ByteString)
  > Streaming.splitAt :: Int -> ByteString m r -> ByteString m (ByteString m r)
  > Pipes.splitAt     :: Int -> Producer ByteString m r -> Producer ByteString m (Producer ByteString m r)
  .
  and
  .
  > Lazy.lines      :: ByteString -> [ByteString]
  > Streaming.lines :: ByteString m r -> Stream (ByteString m) m r
  > Pipes.lines    :: Producer ByteString m r -> FreeT (Producer ByteString m) m r
  .
  where the @Stream@ type expresses the sequencing of @ByteString m _@
  layers with the usual \'free monad\' sequencing.
  .
  If you are unfamiliar with this way of structuring material, you might
  take a look at the tutorial for @pipes-group@, the examples in the
  documentation for the @streaming@ library, and the shell-like examples
  mentioned above.
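The `splitAt` correspondence above can be rendered as a self-contained toy: the leftover stream is delivered as the *result* of the prefix stream, so nothing is copied and no pair need be built eagerly. As before, the `ByteStream` type and helper names here are illustrative, not the library's internals:

```haskell
-- Toy rendering of the Streaming.splitAt signature discussed above:
-- the remainder of the stream is the final result of the prefix.
-- ByteStream, splitAtBS and toChunks are hypothetical names.
import qualified Data.ByteString.Char8 as B
import           Data.ByteString       (ByteString)
import           Data.Functor.Identity (Identity (..))

data ByteStream m r
  = Empty r
  | Chunk ByteString (ByteStream m r)
  | Go (m (ByteStream m r))

-- Take n bytes; the leftover stream is returned inside the final Empty.
splitAtBS :: Monad m => Int -> ByteStream m r -> ByteStream m (ByteStream m r)
splitAtBS n s | n <= 0 = Empty s
splitAtBS _ (Empty r)  = Empty (Empty r)
splitAtBS n (Go m)     = Go (fmap (splitAtBS n) m)
splitAtBS n (Chunk c cs)
  | B.length c <= n = Chunk c (splitAtBS (n - B.length c) cs)
  | otherwise       = Chunk (B.take n c) (Empty (Chunk (B.drop n c) cs))

-- Collect a pure (Identity) stream's chunks and result, for display.
toChunks :: ByteStream Identity r -> ([ByteString], r)
toChunks (Empty r)         = ([], r)
toChunks (Chunk c cs)      = let (xs, r) = toChunks cs in (c : xs, r)
toChunks (Go (Identity s)) = toChunks s

main :: IO ()
main = do
  let s            = Chunk (B.pack "hello") (Empty ())
      (pre, rest)  = toChunks (splitAtBS 2 s)  -- rest :: ByteStream Identity ()
      (post, ())   = toChunks rest
  print (pre, post)
```

Contrast `Lazy.splitAt`, which must return a fully formed pair; here the "second component" is only reached by consuming the first, which is what makes the operation safe for unbounded input.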
license:             BSD3
license-file:        LICENSE
author:              michaelt
maintainer:          what_is_it_to_do_anything@yahoo.com
-- copyright:
category:            Data
build-type:          Simple
extra-source-files:  ChangeLog.md
cabal-version:       >=1.10

library
  exposed-modules:     Data.ByteString.Streaming
                     , Data.ByteString.Streaming.Char8
                     , Data.ByteString.Streaming.Internal
  -- other-modules:
  other-extensions:    CPP, BangPatterns, ForeignFunctionInterface, DeriveDataTypeable, Unsafe
  build-depends:       base >=4.7 && <4.9
                     , bytestring >=0.10 && <0.11
                     , deepseq
                     , mtl >=2.1 && <2.3
                     , mmorph >=1.0 && <1.2
                     , transformers >=0.3 && <0.5
                     , streaming > 0.1.0.8 && < 0.1.1
  -- hs-source-dirs:
  default-language:    Haskell2010
  -- ghc-options: -Wall