Do you have any specific thoughts on how to deal with new transforms?
thinking out loud:
let's say we add a Transform abstraction to flume, where a Transform receives a Log (or an already-transformed Log) and returns a transformed stream of "new" messages (decrypted with a private-box key, decrypted with a group key, supplemented with off-chain content, etc). in this case, a "new" message might be an old message with a new transformation: for example, if the group-box transform received a new key, it might check all the messages it had previously seen but been unable to decrypt, and push along any newly decrypted messages as "new".
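as a rough sketch of the shape i have in mind (all names here are made up, not actual flume APIs), a Transform might look something like:

```js
// hypothetical Transform shape: consumes messages from upstream, emits
// "new" messages downstream, possibly long after they were written
function createTransform (transformFn) {
  const listeners = []
  return {
    // called for every message flowing down the pipeline
    write (msg) {
      const out = transformFn(msg)
      if (out) listeners.forEach(fn => fn(out))
    },
    // downstream stages subscribe here; a message may be emitted well
    // after it was written, e.g. once a missing key arrives
    onTransformed (fn) { listeners.push(fn) }
  }
}
```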
as you propose, we might have multiple of these transforms, which will probably need to run in series rather than in parallel (for example, if a decrypted private message contains off-chain content, it still needs to pass through the off-chain transform); together they form a single transformed log that we feed into each view.
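composing them in series could be as simple as wiring each stage's output into the next stage's input (again, just a sketch with made-up names):

```js
// chain transforms in series so that, e.g., off-chain content inside a
// freshly decrypted private message still passes through the off-chain
// transform further down the chain
function pipeline (transforms) {
  for (let i = 0; i < transforms.length - 1; i++) {
    transforms[i].onTransformed(msg => transforms[i + 1].write(msg))
  }
  return {
    write: msg => transforms[0].write(msg),
    onTransformed: fn => transforms[transforms.length - 1].onTransformed(fn)
  }
}

// e.g. log -> privateBox -> groupBox -> offChain -> views
// const transformedLog = pipeline([privateBox, groupBox, offChain])
```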
if a new transform is added, it invalidates everything after it in the chain (every later Transform and every View), which i think should be fine. but by "new transform" i mean a new way of transforming messages; i think every transform should already be able to handle new keys or new access to existing messages, by keeping track of the messages it wasn't able to transform and pushing them along as "new" messages once it can.
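for example, the group-box behaviour above might look like this (decryptWith is a stand-in for whatever the real decryption call would be):

```js
// hold on to messages we cannot decrypt yet, and re-emit them as "new"
// messages when a key arrives that unlocks them
function createGroupBoxTransform (decryptWith) {
  const keys = []
  const pending = [] // seen but not yet decryptable
  const listeners = []

  function emit (msg) { listeners.forEach(fn => fn(msg)) }

  function tryDecrypt (msg) {
    for (const key of keys) {
      const plain = decryptWith(key, msg)
      if (plain) return plain
    }
    return null
  }

  return {
    write (msg) {
      const plain = tryDecrypt(msg)
      if (plain) emit(plain)
      else pending.push(msg)
    },
    // a new key may unlock old messages, which flow out as "new" ones
    addKey (key) {
      keys.push(key)
      for (let i = pending.length - 1; i >= 0; i--) {
        const plain = tryDecrypt(pending[i])
        if (plain) {
          pending.splice(i, 1)
          emit(plain)
        }
      }
    },
    onTransformed (fn) { listeners.push(fn) }
  }
}
```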
in this way, each View can no longer assume that it receives messages in order: it must be able to process any "new" message in any order and still keep its indexes correct.
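concretely, that means a view's index has to be insertion-order independent, e.g. (hypothetical names again):

```js
// sketch of a backlinks view: because decrypted messages can surface in
// any order, the index must end up the same regardless of arrival order
function createBacklinksView () {
  const index = new Map() // target key -> Set of referring message keys
  return {
    process (msg) {
      for (const target of msg.links || []) {
        if (!index.has(target)) index.set(target, new Set())
        index.get(target).add(msg.key)
      }
    },
    get (target) { return index.get(target) || new Set() }
  }
}
```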
but is this Transform different enough from a View to warrant a new abstraction? or should Views simply be able to depend on other Views, with the guarantee that any "new" message given to a View has already been processed by the Views it depends on? i think both are valid approaches; the one benefit of the Transform abstraction is that it is a stream of messages (like a Log), whereas a View is not necessarily one.
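a sketch of that alternative, to make the guarantee concrete (hypothetical, and it assumes no dependency cycles):

```js
// each view declares the views it depends on, and the engine delivers a
// "new" message to a view only after its dependencies have processed it
function deliver (views, msg) {
  const done = new Set()
  function visit (view) {
    if (done.has(view.name)) return
    for (const depName of view.dependencies || []) {
      const dep = views.find(v => v.name === depName)
      if (dep) visit(dep)
    }
    view.process(msg)
    done.add(view.name)
  }
  views.forEach(v => visit(v))
}
```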
okay, enough rambling, hope that was at all helpful.