@aljoscha %SZzAqMDv1lqh0fo8CeWITAsp2QAozYDxz9HZA298Y8s=.sha256
Re: %uJX/grT0v

@elavoie:

[...] I am really excited by the research prospects here.

This is funny, for me this is one of the areas of CS I try to avoid as much as possible. Too many heuristics, too much dependence on real-world caching behavior, too many trade-offs between time and space complexity. As fascinating as algorithmics are, I tend to back off once too many unclear tradeoffs have to be made. Oh, and everything that has to deal with file systems usually turns out to be rather unpleasant.

A really serious implementation attempt at an ssb-specific database would be some sort of append-only schemaless graph database. Being append-only is super useful, but schemalessness and supporting arbitrary graphs are both really painful to make efficient.


@Piet Feel free to ask about anything you'd like me to explain in less computer sciency jargon. I'm just defaulting to it, because it is the most compact way of getting these notes out.

For boring ssb, or just for experimenting, you might want to look at graph databases. There are a lot of implementations for RDF data, and SPARQL queries are relatively easy to get into. RDF is (somewhat) schemaless, so it might be the closest "real-world" thing to ssb where we could benefit from lots of previous engineering. I did not find any serious attempts at or information on append-only implementations, only this abandoned and unused one.

Assume that the sorting algorithm will almost always be given data that's already sorted, as we'll already have a partial ordering from the last time it was sorted

The problem with this is that we don't want to store the complete partial order, as that requires too much space (num_msgs ^ 2). So all scalable approaches involve some form of compression (usually not by running a literal data compression algorithm, but by carefully deciding which parts of the partial order to store, and which ones to recompute on the fly). A new run of the algorithm would need to recompute which parts to store and which parts to drop. So in general, this approach does not really work.

For (initial implementations for) ssb, it might be performant enough to naively store the whole partial order, in which case your observations do apply.
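
To make that concrete, here is a minimal sketch (hypothetical names and types, not taken from any existing ssb module) of what naively storing the whole partial order could look like: every appended message keeps the full set of messages it transitively happened after, which is exactly the num_msgs ^ 2-ish representation that does not scale, but it answers queries with a single lookup.

```rust
use std::collections::{HashMap, HashSet};

/// Hypothetical message identifier: the feed it belongs to plus its sequence number.
#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct MsgId {
    feed: u64, // stand-in for a feed's public key
    seq: u64,  // position within that feed's append-only log
}

/// Naive index: for every message, the full set of messages it transitively happened after.
/// This is the num_msgs ^ 2 style representation discussed above - only viable while small.
#[derive(Default)]
struct NaiveReachability {
    after: HashMap<MsgId, HashSet<MsgId>>,
}

impl NaiveReachability {
    /// Append a new message together with the messages it directly links to (cypherlinks).
    /// Append-only means the linked messages are already indexed, so one pass suffices
    /// and no existing entry ever has to be revisited.
    fn append(&mut self, msg: MsgId, links: &[MsgId]) {
        let mut reachable = HashSet::new();
        // Implicit predecessor within the same feed (the feed's total order).
        if msg.seq > 0 {
            let prev = MsgId { feed: msg.feed, seq: msg.seq - 1 };
            reachable.insert(prev);
            if let Some(set) = self.after.get(&prev) {
                reachable.extend(set.iter().copied());
            }
        }
        // Explicit cypherlinks to messages in other (or the same) feeds.
        for &target in links {
            reachable.insert(target);
            if let Some(set) = self.after.get(&target) {
                reachable.extend(set.iter().copied());
            }
        }
        self.after.insert(msg, reachable);
    }

    /// Reachability query: "did msg1 happen after msg2?"
    fn happened_after(&self, msg1: MsgId, msg2: MsgId) -> bool {
        self.after.get(&msg1).map_or(false, |set| set.contains(&msg2))
    }
}
```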

An ssb-specific representation can use the total order on messages from a single feed for some optimizations. Answering a reachability query "did msg1 happen after msg2" can be thought of as trying to find a path from msg1 to msg2's feed that reaches that feed at a sequence number >= msg2.sequence_number. This still leaves enough problems to require a fairly general solution (intuitively, you might have to hop along many feeds before you reach the correct one, and while searching, you do not know which other feed to hop to). But it might turn out that the average query on ssb databases can benefit a lot from our structural knowledge of the graph, even if technically it isn't in any graph class that guarantees better algorithms.
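
As an illustration of that query shape, here is a hedged sketch (again, all names are made up for the example) of a breadth-first search that walks cypherlinks and per-feed predecessors backwards from msg1 until it reaches msg2's feed at a high enough sequence number:

```rust
use std::collections::{HashMap, HashSet, VecDeque};

#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct MsgId {
    feed: u64, // stand-in for a feed's public key
    seq: u64,  // position within that feed
}

/// Outgoing cypherlinks per message.
type Links = HashMap<MsgId, Vec<MsgId>>;

/// "Did msg1 happen after msg2?": breadth-first search backwards along cypherlinks and
/// per-feed predecessors, stopping as soon as we hit msg2's feed at seq >= msg2.seq.
fn happened_after(links: &Links, msg1: MsgId, msg2: MsgId) -> bool {
    // Same feed: the feed's total order answers the query immediately.
    if msg1.feed == msg2.feed {
        return msg1.seq > msg2.seq;
    }
    let mut queue = VecDeque::new();
    let mut visited = HashSet::new();
    queue.push_back(msg1);
    while let Some(cur) = queue.pop_front() {
        if cur.feed == msg2.feed && cur.seq >= msg2.seq {
            return true;
        }
        if !visited.insert(cur) {
            continue;
        }
        // Explicit cypherlinks out of `cur`.
        if let Some(targets) = links.get(&cur) {
            queue.extend(targets.iter().copied());
        }
        // Implicit edge to cur's predecessor in its own feed.
        if cur.seq > 0 {
            queue.push_back(MsgId { feed: cur.feed, seq: cur.seq - 1 });
        }
    }
    false
}
```

A real implementation would of course not walk feed predecessors one message at a time; the point here is only the stopping condition that the per-feed total order buys us.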

When considering links between two feeds A and B, we do not even need to store all links. E.g. if message number 0 from feed A (let's write this as A::0) links to B::2 and A::1 links to B::1, we can drop the latter information. So we are not even storing the original edge set of the graph used for reachability queries, but a subset of it that has the same transitive closure (i.e. following all cypherlinks still leads to the same partial reachability order). In some sense this means approximating the transitive reduction (exact computation of it is infeasible in our context). This does not give us any "hard" improvements, but it is very cheap and will improve the situation in practice. We might as well use any structural knowledge about the message graph that we have. And the partition of the vertex set into disjoint total orders (i.e. feeds) is super useful for optimizations.
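
A sketch of that pruning, again with hypothetical names: per pair of feeds we only keep links that are not already implied by a stored link via the two per-feed total orders. In the example above, inserting A::0 -> B::2 and then A::1 -> B::1 leaves only the first link stored.

```rust
use std::collections::HashMap;

/// Stored links per (source feed, target feed) pair, as (source seq, target seq) pairs.
/// Hypothetical in-memory stand-in for the real on-disk index.
type FeedPairLinks = HashMap<(u64, u64), Vec<(u64, u64)>>;

/// Record the cypherlink "source_feed::src_seq links to target_feed::dst_seq",
/// unless an already-stored link between the same two feeds implies it via the
/// two per-feed total orders; stored links implied by the new one are dropped.
/// Either way the transitive closure stays the same.
fn insert_link(
    index: &mut FeedPairLinks,
    source_feed: u64,
    src_seq: u64,
    target_feed: u64,
    dst_seq: u64,
) {
    let entry = index.entry((source_feed, target_feed)).or_default();
    // An existing link (s, d) dominates the new one if s <= src_seq and d >= dst_seq:
    // source_feed::src_seq is after source_feed::s, which is after target_feed::d,
    // which is after target_feed::dst_seq.
    if entry.iter().any(|&(s, d)| s <= src_seq && d >= dst_seq) {
        return;
    }
    // Symmetrically, drop stored links that the new one now dominates.
    entry.retain(|&(s, d)| !(src_seq <= s && dst_seq >= d));
    entry.push((src_seq, dst_seq));
}
```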


@moid:

This is the hard part, the incremental update.

Yes, and it seems to be completely under-researched. Which strikes me as odd: don't real-world databases get updated from time to time? Well, it might also signal that this is really, really hard.

Many approaches do something like "let's encode all information in a matrix and then do a bunch of multiplications", which is about as non-incremental as it gets. So I fear incremental updates will involve a bunch of heuristics. But being append-only (periodically running garbage collection on blocked feeds should allow us to disregard the fact that technically there may be deletion of vertices) and having the partition into feeds make me optimistic that a solution can be found. I just don't really want to be the person to look for it.
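
To make that optimism a bit more concrete, a toy sketch (hypothetical names, with the pruning from above omitted) of why the append-only setting is friendly to incremental updates: processing a newly replicated message only ever adds edges out of that message, so nothing that is already indexed has to be revisited, in contrast to recomputing a matrix-based closure from scratch.

```rust
use std::collections::HashMap;

/// Same shape as the pruned edge store sketched above (pruning omitted here for brevity).
type FeedPairLinks = HashMap<(u64, u64), Vec<(u64, u64)>>;

/// A newly replicated message: its position in its feed plus the cypherlinks it contains.
struct NewMsg {
    feed: u64,
    seq: u64,
    links: Vec<(u64, u64)>, // (target feed, target seq)
}

/// Incremental update: only the new message's outgoing edges are touched.
/// In an append-only log, no existing message can ever gain a link pointing at the
/// new one, so nothing already in the index needs to be revisited or recomputed.
fn on_append(index: &mut FeedPairLinks, msg: &NewMsg) {
    for &(target_feed, target_seq) in &msg.links {
        index
            .entry((msg.feed, target_feed))
            .or_default()
            .push((msg.seq, target_seq));
    }
}
```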

I'm sorry about the paywalled links; I wrote this from university and didn't realize these were not open access. Here are open links:


While searching for these, I found this paper, which seems to deal with dynamic graphs specifically. Adding it to my reading list.
