| Copyright | (c) 2013-2023 Brendan Hay |
|---|---|
| License | Mozilla Public License, v. 2.0. |
| Maintainer | Brendan Hay |
| Stability | auto-generated |
| Portability | non-portable (GHC extensions) |
| Safe Haskell | Safe-Inferred |
| Language | Haskell2010 |
Amazonka.Kinesis.Types.PutRecordsRequestEntry
Description
Synopsis
- data PutRecordsRequestEntry = PutRecordsRequestEntry' {
    explicitHashKey :: Maybe Text,
    data' :: Base64,
    partitionKey :: Text
  }
- newPutRecordsRequestEntry :: ByteString -> Text -> PutRecordsRequestEntry
- putRecordsRequestEntry_explicitHashKey :: Lens' PutRecordsRequestEntry (Maybe Text)
- putRecordsRequestEntry_data :: Lens' PutRecordsRequestEntry ByteString
- putRecordsRequestEntry_partitionKey :: Lens' PutRecordsRequestEntry Text
Documentation
data PutRecordsRequestEntry Source #
Represents the output for PutRecords.
See: newPutRecordsRequestEntry smart constructor.
Constructors
| PutRecordsRequestEntry' | |
Fields
Instances
newPutRecordsRequestEntry Source #
Arguments
| :: ByteString | The data blob (the data' field; encoded to Base64 on serialisation) |
| -> Text | The partition key (the partitionKey field) |
| -> PutRecordsRequestEntry | |
Create a value of PutRecordsRequestEntry with all optional fields omitted.
Use generic-lens or optics to modify other optional fields.
The following record fields are available, with the corresponding lenses provided for backwards compatibility:
$sel:explicitHashKey:PutRecordsRequestEntry', putRecordsRequestEntry_explicitHashKey - The hash value used to determine explicitly the shard that the data
record is assigned to by overriding the partition key hash.
$sel:data':PutRecordsRequestEntry', putRecordsRequestEntry_data - The data blob to put into the record, which is base64-encoded when the
blob is serialized. When the data blob (the payload before
base64-encoding) is added to the partition key size, the total size must
not exceed the maximum record size (1 MiB).

Note: This Lens automatically encodes and decodes Base64 data. The underlying isomorphism will encode to Base64 representation during serialisation, and decode from Base64 representation during deserialisation. This Lens accepts and returns only raw unencoded data.
$sel:partitionKey:PutRecordsRequestEntry', putRecordsRequestEntry_partitionKey - Determines which shard in the stream the data record is assigned to.
Partition keys are Unicode strings with a maximum length limit of 256
characters for each key. Amazon Kinesis Data Streams uses the partition
key as input to a hash function that maps the partition key and
associated data to a specific shard. Specifically, an MD5 hash function
is used to map partition keys to 128-bit integer values and to map
associated data records to shards. As a result of this hashing
mechanism, all data records with the same partition key map to the same
shard within the stream.
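The 1 MiB limit above applies to the raw payload plus the UTF-8-encoded partition key. A minimal sketch of that pre-flight size check, using plain bytestring/text rather than Amazonka types (the helper name fitsInRecord is hypothetical, not part of the library):

```haskell
import qualified Data.ByteString as BS
import qualified Data.Text as T
import qualified Data.Text.Encoding as TE

-- Maximum record size: payload (before base64-encoding) plus partition key.
maxRecordSize :: Int
maxRecordSize = 1024 * 1024  -- 1 MiB

-- Hypothetical helper: does a payload/partition-key pair fit in one record?
fitsInRecord :: BS.ByteString -> T.Text -> Bool
fitsInRecord payload partitionKey =
  BS.length payload + BS.length (TE.encodeUtf8 partitionKey) <= maxRecordSize
```

For example, a full 1 MiB payload with any non-empty partition key fails the check, because the key's UTF-8 bytes count against the same limit.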
putRecordsRequestEntry_explicitHashKey :: Lens' PutRecordsRequestEntry (Maybe Text) Source #
The hash value used to determine explicitly the shard that the data record is assigned to by overriding the partition key hash.
putRecordsRequestEntry_data :: Lens' PutRecordsRequestEntry ByteString Source #
The data blob to put into the record, which is base64-encoded when the
blob is serialized. When the data blob (the payload before
base64-encoding) is added to the partition key size, the total size must
not exceed the maximum record size (1 MiB).

Note: This Lens automatically encodes and decodes Base64 data. The underlying isomorphism will encode to Base64 representation during serialisation, and decode from Base64 representation during deserialisation. This Lens accepts and returns only raw unencoded data.
putRecordsRequestEntry_partitionKey :: Lens' PutRecordsRequestEntry Text Source #
Determines which shard in the stream the data record is assigned to. Partition keys are Unicode strings with a maximum length limit of 256 characters for each key. Amazon Kinesis Data Streams uses the partition key as input to a hash function that maps the partition key and associated data to a specific shard. Specifically, an MD5 hash function is used to map partition keys to 128-bit integer values and to map associated data records to shards. As a result of this hashing mechanism, all data records with the same partition key map to the same shard within the stream.
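The scheme described above maps each partition key (or an explicitHashKey override) to a 128-bit integer; each shard then owns a contiguous range of that key space. A pure-Integer sketch of the final lookup step, assuming n shards evenly split the key space (the MD5 step is omitted, and shardFor is a hypothetical name, not a library function):

```haskell
-- The 128-bit hash key space used by Kinesis: 0 .. 2^128 - 1.
hashKeySpace :: Integer
hashKeySpace = 2 ^ (128 :: Int)

-- Hypothetical helper: given n shards that evenly split the key space,
-- return the index of the shard owning a 128-bit hash value.
-- The clamp to n - 1 absorbs the remainder when n does not divide 2^128.
shardFor :: Int -> Integer -> Int
shardFor n hashKey =
  min (n - 1) (fromInteger (hashKey `div` (hashKeySpace `div` fromIntegral n)))
```

Because the lookup is a deterministic function of the hash value, all records carrying the same partition key land on the same shard, which is the ordering guarantee the paragraph above describes.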