I'm convinced we need this unless you can show me that the perf difference is not significant.
I'm convinced we don't need this unless you can show me that the perf difference is significant.
<3
but do you have any objections to length delimited collections?
Why yes, as usual I have objections. A couple of them, so I'll try to keep each one concise (actually I'm just lazy and want to go to sleep).
For the remainder of this post, "size" refers to the number of entries in a collection, whereas "length" refers to the number of bytes the encoding takes up.
- length-delimited does not enable binary search, it just allows skipping over collections without iterating their content
- worst-case access time complexity thus stays linear, it only speeds up scanning through nested data structures
- all hsdt values will be contained in messages, so this isn't relevant for scanning through the full flume log
- size-delimited is much simpler/easier to produce
  - length-delimited needs memoization to keep serialization linear-time, and that memoization is annoying to get right
  - it either requires preallocating additional memory for caching the lengths (which is wasteful if the data is never actually serialized), or expensive dynamic memory allocation (with some sort of tree data structure just to keep track of the lengths, with bad cache locality)
- size-delimited is simpler/easier to parse
- imagine you are a parser and you encounter an array
- part of me wants to just stop here, that sentence is already beautiful as it is
- you read the size, and you allocate that many slots of memory for new values
  - but if you got the length instead, what would you do? You can't just allocate that many bytes: twenty `u32`s each require different allocations than two sets of length 50 each.
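To make the parser's perspective concrete, here is a minimal Python sketch. The encoding is entirely made up for illustration (a big-endian `u32` prefix followed by `u32` entries; this is not the actual hsdt wire format): a size prefix tells the parser exactly how many slots to allocate up front, while a byte length alone leaves it guessing what's inside.

```python
import struct

def parse_array(buf, offset=0):
    """Parse a size-prefixed array of u32 values.

    Hypothetical encoding: a big-endian u32 size, then `size` big-endian
    u32 entries. The size tells us exactly how many slots to allocate.
    """
    (size,) = struct.unpack_from(">I", buf, offset)
    offset += 4
    values = [0] * size  # allocate exactly `size` slots up front
    for i in range(size):
        (values[i],) = struct.unpack_from(">I", buf, offset)
        offset += 4
    return values, offset

def skip_array(buf, offset=0):
    """A *length* prefix only supports skipping, not allocating: 80 bytes
    of payload might be twenty u32s or two collections of 40 bytes each,
    so the length alone doesn't say how many values to expect."""
    (length,) = struct.unpack_from(">I", buf, offset)
    return offset + 4 + length

data = struct.pack(">I", 3) + struct.pack(">III", 7, 8, 9)
values, end = parse_array(data)  # values == [7, 8, 9]
```

The same 16 bytes under a length prefix would let `skip_array` jump past them in O(1), but the moment you actually want the values you're back to parsing, which is the point of the next bullet.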
- the format is intended to be used for signing and transport, it seems off to compromise on simplicity to support a use-case that is needed for neither
- the database does not need to store data in the transport format, it should be very cheap to convert into a length-based one before writing to disk, especially if you compare the conversion time to the I/O time or the signature verification cpu time
- even then I'm not convinced that pure length-delimiting is the correct choice, since parsing rather than scanning still needs to happen (in particular, once you scanned to a value, you'll likely want to parse it if it's a non-primitive)
- I guess you could actually use the encoding as an immutable in-memory representation as well, even for collections, but that seems both like a hyper-optimization and pretty fringe/limited
- so a format suitable for flume might want to have both length and size indicators
- which really is fine for the db implementation, but would be needlessly complicated for the signing/transport format
- not all db backends will be flume-based, people might just put the data into some sort of nosql database. Those people would pay the complexity cost of a length-delimited format without any gains.
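To make the producer-side cost concrete as well, a minimal Python sketch (again a hypothetical format, not hsdt: nested lists of `u32`s, each value prefixed with a one-byte tag and a `u32` count or length). The size-prefixed encoder is a single pass; the length-prefixed one must know the byte length of every subtree before it can emit the prefix, which means either an extra length-computation pass or buffering each subtree:

```python
import struct

def encode_sized(value):
    """Size-prefixed: single pass, emit the entry count, then the entries."""
    if isinstance(value, int):
        return struct.pack(">BI", 0, value)        # tag 0: u32
    out = [struct.pack(">BI", 1, len(value))]      # tag 1: array, size = entry count
    out.extend(encode_sized(v) for v in value)
    return b"".join(out)

def byte_length(value):
    """The extra pass length-prefixing needs. Calling this naively at every
    node revisits each subtree once per ancestor -- quadratic overall unless
    the results are memoized."""
    if isinstance(value, int):
        return 5
    return 5 + sum(byte_length(v) for v in value)

def encode_lengthed(value):
    """Length-prefixed without the extra pass: buffer each subtree so its
    byte length is known before the prefix is written."""
    if isinstance(value, int):
        return struct.pack(">BI", 0, value)
    body = b"".join(encode_lengthed(v) for v in value)
    return struct.pack(">BI", 1, len(body)) + body  # prefix = byte length

tree = [1, [2, 3], 4]
sized = encode_sized(tree)      # prefix of outer array: 3 (entries)
lengthed = encode_lengthed(tree)  # prefix of outer array: 25 (bytes)
```

Note that `encode_lengthed` trades the second pass for buffering every subtree in fresh allocations, and `byte_length` trades the buffering for a memo table to stay linear; either way you pay something that `encode_sized` simply doesn't.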
For those reasons, I'm strongly in favor of keeping the signing/transport format simple and - if need be - storing the data in some different format inside the db (which may or may not be flume). Having implementation details of flume leak into the signing format feels wrong.