amazonka-s3-streaming-0.1.0.3: Provides conduits to upload data to S3 using the Multipart API

Safe Haskell: None
Language: Haskell2010

Network.AWS.S3.StreamingUpload

Synopsis

Documentation

streamUpload :: (MonadResource m, MonadAWS m, AWSConstraint r m) => CreateMultipartUpload -> Sink ByteString m CompleteMultipartUploadResponse Source #

Given a CreateMultipartUpload, creates a Sink which will sequentially upload the data streamed in, in chunks of at least chunkSize, and return the CompleteMultipartUploadResponse.

If the upload of any part fails, an attempt is made to abort the multipart upload; note that whatever caused the upload to fail may cause the abort to fail as well. ListMultipartUploads can be used to list any pending uploads. It is important to abort multipart uploads, because you will be charged for storage of the uploaded parts until the upload is completed or aborted. See the AWS documentation for more details.

May throw Error.
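
For example, streaming a file from disk into S3 might look like the following sketch. The bucket, object key, and file path are placeholders, sourceFile comes from conduit-extra, and depending on your amazonka version newEnv may also require a Region argument.

{-# LANGUAGE OverloadedStrings #-}

import Control.Monad.Trans.Resource (runResourceT)
import Data.Conduit (($$))
import Data.Conduit.Binary (sourceFile)
import Network.AWS (Credentials (Discover), newEnv, runAWS)
import Network.AWS.S3 (createMultipartUpload)
import Network.AWS.S3.StreamingUpload (streamUpload)

main :: IO ()
main = do
  env <- newEnv Discover  -- may also need a Region, depending on amazonka version
  -- Stream the file and upload it in parts of at least chunkSize.
  res <- runResourceT . runAWS env $
           sourceFile "/path/to/large-file"
             $$ streamUpload (createMultipartUpload "my-bucket" "my-object-key")
  print res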

data UploadLocation Source #

Specifies whether to upload a file or a ByteString.

Constructors

FP FilePath

A file to be uploaded

BS ByteString

A strict ByteString

concurrentUpload :: (MonadAWS m, MonadBaseControl IO m) => UploadLocation -> CreateMultipartUpload -> m CompleteMultipartUploadResponse Source #

Allows a file or ByteString to be uploaded concurrently, using the async library. ByteStrings are split into chunkSize chunks and uploaded directly.

Files are mmapped into chunkSize chunks and each chunk is uploaded in parallel. This considerably reduces the memory required compared to reading the contents into memory as a strict ByteString. The usual caveats about mmapped files apply: if the file is modified during the upload, the data may become corrupted.

May throw Error, or IOError.
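
As a sketch (with the same placeholder names and the same caveat about newEnv), uploading a large file concurrently might look like:

{-# LANGUAGE OverloadedStrings #-}

import Control.Monad.Trans.Resource (runResourceT)
import Network.AWS (Credentials (Discover), newEnv, runAWS)
import Network.AWS.S3 (createMultipartUpload)
import Network.AWS.S3.StreamingUpload (UploadLocation (FP), concurrentUpload)

main :: IO ()
main = do
  env <- newEnv Discover  -- may also need a Region, depending on amazonka version
  -- The file is mmapped and its chunkSize chunks are uploaded in parallel;
  -- use (BS someByteString) instead of (FP path) to upload an in-memory value.
  res <- runResourceT . runAWS env $
           concurrentUpload (FP "/path/to/large-file")
                            (createMultipartUpload "my-bucket" "my-object-key")
  print res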

abortAllUploads :: MonadAWS m => BucketName -> m () Source #

Aborts all pending multipart uploads in the given bucket; useful for cleaning up.
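
A sketch of cleaning up a bucket (again with a placeholder bucket name and the same caveat about newEnv):

{-# LANGUAGE OverloadedStrings #-}

import Control.Monad.Trans.Resource (runResourceT)
import Network.AWS (Credentials (Discover), newEnv, runAWS)
import Network.AWS.S3.StreamingUpload (abortAllUploads)

main :: IO ()
main = do
  env <- newEnv Discover  -- may also need a Region, depending on amazonka version
  -- Abort every pending multipart upload in the bucket so that no further
  -- storage charges accrue for orphaned parts.
  runResourceT . runAWS env $ abortAllUploads "my-bucket"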

chunkSize :: Int Source #

Minimum size of data which will be sent in a single part, currently 6 MB.