Copyright | (c) 2013-2015 Brendan Hay
License | Mozilla Public License, v. 2.0.
Maintainer | Brendan Hay <brendan.g.hay@gmail.com>
Stability | auto-generated
Portability | non-portable (GHC extensions)
Safe Haskell | None
Language | Haskell2010
Writes a single data record from a producer into an Amazon Kinesis stream. Call PutRecord to send data from the producer into the Amazon Kinesis stream for real-time ingestion and subsequent processing, one record at a time. Each shard can support writes of up to 1,000 records per second, up to a maximum data write total of 1 MB per second.
You must specify the name of the stream that captures, stores, and transports the data; a partition key; and the data blob itself.
The data blob can be any type of data; for example, a segment from a log file, geographic/location data, website clickstream data, and so on.
The partition key is used by Amazon Kinesis to distribute data across shards. Amazon Kinesis segregates the data records that belong to a data stream into multiple shards, using the partition key associated with each data record to determine which shard a given data record belongs to.
Partition keys are Unicode strings, with a maximum length limit of 256 characters for each key. An MD5 hash function is used to map partition keys to 128-bit integer values and to map associated data records to shards using the hash key ranges of the shards. You can override hashing the partition key to determine the shard by explicitly specifying a hash value using the ExplicitHashKey parameter. For more information, see Adding Data to a Stream in the Amazon Kinesis Developer Guide.
PutRecord returns the shard ID where the data record was placed and the sequence number that was assigned to the data record. Sequence numbers generally increase over time. To guarantee strictly increasing ordering, use the SequenceNumberForOrdering parameter. For more information, see Adding Data to a Stream in the Amazon Kinesis Developer Guide.
If a PutRecord request cannot be processed because of insufficient provisioned throughput on the shard involved in the request, PutRecord throws ProvisionedThroughputExceededException.
By default, data records are accessible for only 24 hours from the time that they are added to an Amazon Kinesis stream. This retention period can be modified using the DecreaseStreamRetentionPeriod and IncreaseStreamRetentionPeriod operations.
See: AWS API Reference for PutRecord.
- putRecord :: Text -> ByteString -> Text -> PutRecord
- data PutRecord
- prExplicitHashKey :: Lens' PutRecord (Maybe Text)
- prSequenceNumberForOrdering :: Lens' PutRecord (Maybe Text)
- prStreamName :: Lens' PutRecord Text
- prData :: Lens' PutRecord ByteString
- prPartitionKey :: Lens' PutRecord Text
- putRecordResponse :: Int -> Text -> Text -> PutRecordResponse
- data PutRecordResponse
- prrsResponseStatus :: Lens' PutRecordResponse Int
- prrsShardId :: Lens' PutRecordResponse Text
- prrsSequenceNumber :: Lens' PutRecordResponse Text
Creating a Request
putRecord :: Text -> ByteString -> Text -> PutRecord Source

Creates a value of PutRecord with the minimum fields required to make a request. Use one of the following lenses to modify other fields as desired:
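As a brief sketch (the stream name, payload, and partition key below are hypothetical; assumes the amazonka and amazonka-kinesis packages with the lens library), a request is built with the smart constructor and customised via lenses:

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Control.Lens ((&), (?~))
import Network.AWS.Kinesis.PutRecord

-- Hypothetical request against a stream named "my-stream": the three
-- required fields come from the smart constructor, and the optional
-- explicit hash key is set afterwards through its lens.
req :: PutRecord
req =
  putRecord "my-stream" "payload-bytes" "partition-key-1"
    & prExplicitHashKey ?~ "12345"
```

The `?~` operator wraps the value in Just, which matches the optional field's Maybe Text type.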
Request Lenses
prExplicitHashKey :: Lens' PutRecord (Maybe Text) Source
The hash value used to explicitly determine the shard the data record is assigned to by overriding the partition key hash.
prSequenceNumberForOrdering :: Lens' PutRecord (Maybe Text) Source
Guarantees strictly increasing sequence numbers, for puts from the same client and to the same partition key. Usage: set the SequenceNumberForOrdering of record n to the sequence number of record n-1 (as returned in the result when putting record n-1). If this parameter is not set, records will be coarsely ordered based on arrival time.
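The chaining described above can be sketched as follows (the stream name, payload, and partition key are hypothetical; `prevSeqNo` stands for the sequence number returned when record n-1 was put):

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Control.Lens ((&), (?~))
import Data.Text (Text)
import Network.AWS.Kinesis.PutRecord

-- Chain record n to record n-1 by passing the sequence number that
-- was returned for the previous record on the same partition key.
nextRecord :: Text -> PutRecord
nextRecord prevSeqNo =
  putRecord "my-stream" "record-n-payload" "shared-partition-key"
    & prSequenceNumberForOrdering ?~ prevSeqNo
```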
prStreamName :: Lens' PutRecord Text Source
The name of the stream to put the data record into.
prData :: Lens' PutRecord ByteString Source
The data blob to put into the record, which is base64-encoded when the blob is serialized. When the data blob (the payload before base64-encoding) is added to the partition key size, the total size must not exceed the maximum record size (1 MB).
Note: This Lens automatically encodes and decodes Base64 data, despite what the AWS documentation might say. The underlying isomorphism will encode to Base64 representation during serialisation, and decode from Base64 representation during deserialisation. This Lens accepts and returns only raw unencoded data.
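To illustrate the note above (stream name and payload are hypothetical): the caller always works with raw bytes, and the Base64 encoding happens only when the request is serialised on the wire.

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Control.Lens ((^.))
import Network.AWS.Kinesis.PutRecord

-- Pass the raw payload bytes; do NOT Base64-encode them yourself,
-- or they will be double-encoded on serialisation.
req :: PutRecord
req = putRecord "my-stream" "raw bytes, not Base64" "key"

-- Reading the field back through the lens yields the raw bytes again:
--   req ^. prData  ==  "raw bytes, not Base64"
```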
prPartitionKey :: Lens' PutRecord Text Source
Determines which shard in the stream the data record is assigned to. Partition keys are Unicode strings with a maximum length limit of 256 characters for each key. Amazon Kinesis uses the partition key as input to a hash function that maps the partition key and associated data to a specific shard. Specifically, an MD5 hash function is used to map partition keys to 128-bit integer values and to map associated data records to shards. As a result of this hashing mechanism, all data records with the same partition key will map to the same shard within the stream.
Destructuring the Response
putRecordResponse :: Int -> Text -> Text -> PutRecordResponse Source

Creates a value of PutRecordResponse with the minimum fields required to make a request. Use one of the following lenses to modify other fields as desired:
data PutRecordResponse Source
Represents the output for PutRecord.

See: putRecordResponse smart constructor.
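A sketch of sending the request and destructuring the response through the lenses below (the stream name and payload are hypothetical; assumes the Network.AWS interface from amazonka, whose newEnv signature has varied across releases, and credentials discoverable from the environment):

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Control.Lens ((^.))
import Network.AWS
import Network.AWS.Kinesis.PutRecord

-- Send a PutRecord and read the assigned shard ID and sequence
-- number back out of the response.
example :: IO ()
example = do
  env <- newEnv Discover  -- newEnv's arguments differ between amazonka versions
  rs  <- runResourceT . runAWS env $
           send (putRecord "my-stream" "hello" "key")
  print (rs ^. prrsShardId, rs ^. prrsSequenceNumber)
```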
Response Lenses
prrsResponseStatus :: Lens' PutRecordResponse Int Source
The response status code.
prrsShardId :: Lens' PutRecordResponse Text Source
The shard ID of the shard where the data record was placed.
prrsSequenceNumber :: Lens' PutRecordResponse Text Source
The sequence number identifier that was assigned to the put data record. The sequence number for the record is unique across all records in the stream. A sequence number is the identifier associated with every record put into the stream.