| Safe Haskell | Safe-Inferred |
|---|---|
| Language | Haskell2010 |
Kafka.RecordBatch.Response
Synopsis
- data RecordBatch = RecordBatch {
    - baseOffset :: !Int64
    - partitionLeaderEpoch :: !Int32
    - attributes :: !Word16
    - lastOffsetDelta :: !Int32
    - baseTimestamp :: !Int64
    - maxTimestamp :: !Int64
    - producerId :: !Int64
    - producerEpoch :: !Int16
    - baseSequence :: !Int32
    - recordsCount :: !Int32
    - recordsPayload :: !Bytes
    }
- parser :: Context -> Parser Context s RecordBatch
- parserArray :: Context -> Parser Context s (SmallArray RecordBatch)
Documentation
data RecordBatch Source #
A record batch. The following fields are not made explicit since they are only for framing and checksum:
- batchLength
- magic (always the number 2)
- crc
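For orientation, here is a small sketch of where those implicit fields sit, as byte offsets from the start of a batch frame. These `*Off` names are not part of the module; they are just derived from the field widths in the Kafka documentation quoted below.

```haskell
-- Byte offsets of the header fields within one record batch frame,
-- derived from the field widths in the Kafka documentation below.
baseOffsetOff, batchLengthOff, partitionLeaderEpochOff,
  magicOff, crcOff, attributesOff :: Int
baseOffsetOff           = 0   -- int64
batchLengthOff          = 8   -- int32
partitionLeaderEpochOff = 12  -- int32
magicOff                = 16  -- int8, always 2
crcOff                  = 17  -- int32
attributesOff           = 21  -- int16
```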
From the Kafka documentation:
baseOffset: int64
batchLength: int32
partitionLeaderEpoch: int32
magic: int8 (current magic value is 2)
crc: int32
attributes: int16
bit 0~2:
0: no compression
1: gzip
2: snappy
3: lz4
4: zstd
bit 3: timestampType
bit 4: isTransactional (0 means not transactional)
bit 5: isControlBatch (0 means not a control batch)
bit 6: hasDeleteHorizonMs (0 means baseTimestamp is not set as the delete horizon for compaction)
bit 7~15: unused
lastOffsetDelta: int32
baseTimestamp: int64
maxTimestamp: int64
producerId: int64
producerEpoch: int16
baseSequence: int32
records: [Record]

A few of my own observations:

- The docs add a note that the last field, records, is not really what it looks like. The array length is always serialized in the usual way, but the payload might be compressed.
- The field batchLength includes the size of everything after it. So, not itself and not baseOffset.
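To make the attributes bitfield and the batchLength observation concrete, here is a hedged sketch. None of these names (Compression, compressionCodec, isTransactional, isControlBatch, hasDeleteHorizonMs, totalFrameSize) exist in the module; they only illustrate the layout described above.

```haskell
-- Hypothetical helpers (not part of Kafka.RecordBatch.Response) that
-- interpret the attributes bitfield described above.
import Data.Bits (testBit, (.&.))
import Data.Int (Int32)
import Data.Word (Word16)

data Compression = None | Gzip | Snappy | Lz4 | Zstd
  deriving (Eq, Show)

-- Bits 0-2 select the compression codec; values 5-7 are reserved.
compressionCodec :: Word16 -> Maybe Compression
compressionCodec attrs = case attrs .&. 0x7 of
  0 -> Just None
  1 -> Just Gzip
  2 -> Just Snappy
  3 -> Just Lz4
  4 -> Just Zstd
  _ -> Nothing

-- Single-bit flags (bit 3 is timestampType, omitted here).
isTransactional, isControlBatch, hasDeleteHorizonMs :: Word16 -> Bool
isTransactional    = (`testBit` 4)
isControlBatch     = (`testBit` 5)
hasDeleteHorizonMs = (`testBit` 6)

-- batchLength covers everything after itself, so the whole frame is
-- 8 bytes (baseOffset) + 4 bytes (batchLength) + batchLength bytes.
totalFrameSize :: Int32 -> Int
totalFrameSize batchLength = 8 + 4 + fromIntegral batchLength
```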
Constructors

RecordBatch

Fields: as listed in the synopsis above.
Instances

Show RecordBatch Source #

Defined in Kafka.RecordBatch.Response

Methods

showsPrec :: Int -> RecordBatch -> ShowS #
show :: RecordBatch -> String #
showList :: [RecordBatch] -> ShowS #
parser :: Context -> Parser Context s RecordBatch Source #

parserArray :: Context -> Parser Context s (SmallArray RecordBatch) Source #