You are reading content from Scuttlebutt
@andrestaltz %TIddJIPkn0JmTQG/Ao7gMiSsaVhzx1QC3+QDLuQnhFQ=.sha256

Team Diary

NGI Assure group ("Private Groups in Manyverse")

a screenshot of a Jitsi call showing five participants smiling: Mix, arj, staltz, Jacob, Nicholas

@Mix @arj @andrestaltz📱 @Powersource @nonlinear

#bat-butts #batts #manyverse #private-groups

I realized that people in the SSB community may not know what we're working on, so in an attempt to bring transparency to our work and document our journey, I'm starting this thread that will summarize the whole team's weekly progress. Not quite a dev diary, because we'll also include design and other aspects along the way.

We started by trying to find a time in common for a video call. This proved to be quite hard! We have people in US East Coast, New Zealand, and Europe, and if you take a look at timezones, this means one of us will inevitably be in their usual "sleep" time range. We found a solution when nonlinear volunteered to wake up at 5AM on Fridays for the video call! For Mix, that's 9PM. In Europe, that's roughly lunch time. Our first team meeting was on 20th of May.

Now that we have the basics of teamwork set up (our communication channels are on Signal, our project management is on Airtable, our notes are on HackMD), we can finally start on the core work.

If you peek at this diagram for SSB work in the coming months, you'll notice that we have 5 boxes inside the "funded by NGIa" rectangle. Those are the 5 milestones we have written down in our grant document. Once those are completed, we'll be paid by NLnet. They are:

(1) User experience design of groups

We want to study user needs and design the user interface of private groups in such a way that is consistent with Manyverse’s current workflows. Nicholas Frota, a product designer by profession, will lead this phase.

(2) Private chats extended to unlimited members

Manyverse currently has private group chats, limited to a maximum of 7 participants. We want to introduce "SSB Private Groups" as the engine behind group chats, extending the participant count from 7 to (theoretically) unlimited. This will allow us to test the waters of "SSB Private Groups" while not making user interface changes, yet.

(3) Removal of members by moderators

An important property of groups is community safety where boundaries are set, such that infringers are accountable for their actions and/or removed from the group. "SSB Private Groups" as a subprotocol does not yet take removal into consideration, so we want to design new rules in the protocol to allow groups to effectively exclude infringers, one way or another.

(4) Space-efficient replication of group messages

Group content is encrypted to only the members of a group, and is irrelevant to group outsiders. While SSB stores replicas of content from friends, it is undesirable to store encrypted content from friends if that content can never be decrypted. We need to find ways of utilizing peer-to-peer storage efficiently and respectfully.

(5) Groups as a new feature in Manyverse

Finally, we want to put together all the components of the previously mentioned milestones and deliver new group functionality in Manyverse that enables hundreds of members per group, member removal, and seamless replication of group content, via an easily understandable user interface.

These may seem to be ordered linearly, but we actually started with (1) and (4). It turns out that (2) depends on (4), so (4) is really important.

Mix, arj and I had a video call to give Mix a primer on metafeeds, and to sketch a first draft of how private groups would replicate as metafeeds and subfeeds. We have a nice sketch, but it has some open questions that need answers before implementing. Mix also got acquainted with ssb-meta-feeds, giving some feedback on its API and a README PR.

Nonlinear got busy with (1), and the design process starts by mapping out all the assumptions (be they technical, sociotechnical, or social) surrounding Groups (we're dropping the word "private", by the way). Notes can be found in this GitLab issue, but these notes will evolve a lot over time. So if you're reading this in the distant future, I don't know what you'll see there. :) We had a couple of meetings to discuss these assumptions, and we noticed how extremely important nonlinear's design work is!

One of the conclusions we arrived at was that we cannot promise that Groups are for security-minded folks. These SSB Groups can't even match Signal chats in terms of privacy and security. This has to do with limitations in the cryptographic schemes we're using. Any group member can leak the symmetric key of the group to anyone else, which means those others could read not just one message, but all messages! And they could also write new posts such that existing members can read them (unless we stop that on the UI layer). It's important that we acknowledge this and don't promise too much.

That said, the cryptographic schemes we'll use are an improvement over the wide-open Public posts, because Groups can have some kinds of boundaries that Public cannot. We're reserving "Secure Groups" (with double ratchet-style perfect forward secrecy and other stronger properties) as future work, where we'll be able to cater for personas that need strict security guarantees. Communicating the "security"/"insecurity" of private-groups-based Groups is going to be an important task. At this point I'm very grateful we have a professional designer who helps map the assumptions, concepts, personas, limitations, and utilities.

(continues in the next post...)

@andrestaltz %l9cDEJdATQnCP0QxgeZFdYZGd13hC1igJX3YTjqJ1pU=.sha256

Milestone (2) is also about putting the private-groups libraries in Manyverse and making them "work", or at least not crash. We can't complete (2) without finishing (4), but we can get started with (2) at least. So I started doing that.

I took a look at ssb-tribes and thought of putting it in Manyverse. Turns out that ssb-tribes doesn't support ssb-db2 yet. So that led me to look at ssb-db2-box2 instead, which is a module arj made while #ssb-ngi-pointer was active. This led me to think of how nice it was that ssb-db supported addBoxer and addUnboxer, so you could easily choose between box1 or box2. In parallel, I'm taking a look at arj's buttwoo feed format. This is all disorganized, and I think it would be hard to work with these modules in a way that minimizes bugs and confusion.

Just look at this mermaid diagram of modules involved:

graph diagram with boxes and arrows, where boxes are npm modules and arrows are relationships of dependencies

As you can see, it started as a small investigation of how to put ssb-tribes in Manyverse, and it ended with me investigating deep rabbit holes. ssb-db2 is ripe for a refactor. We need a seamless way of adding new feed formats, crossing those with encryption formats, etc. There are 3 dimensions involved: feed format, encryption format, and serialization format. I began working on a refactor of ssb-db2 and ssb-db2-box2, which kept expanding to include refactoring other modules, like ssb-bendy-butt, ssb-keys, ssb-ebt, and ssb-validate.

My notes for this started looking crazy:

notepad open up with several notes written down about ssb-db2 and encoding formats

On Friday, this, along with all the rest of the work, felt overwhelming. I mean, last week I shared how the coming months are going to be overwhelming, and then I discover this massive refactor to be done. It got me worried. Are we going to manage to do all this?

But over the weekend, I took some 2 hours to poke at this refactor problem again, and it slowly started to make sense. Today, Monday, I showed the idea to Mix and Arj and they gave the green light. Basically the plan is to add addFeedFormat() and addEncryptionFormat() to ssb-db2, and to separate these two as orthogonal dimensions. The devil is in the details, but I feel like those details have designated places in the refactor plan, and it's mostly about executing the refactoring (and, well... later figuring out bugs and corner cases, the big unknowns that have the chance of ruining the plan).
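A minimal sketch of the "two orthogonal dimensions" idea, with hypothetical shapes for addFeedFormat() and addEncryptionFormat() (the real ssb-db2 API will differ in its details):

```javascript
// Hypothetical sketch: feed formats and encryption formats live in two
// independent registries, and publishing picks one from each dimension.
// All names and method shapes here are illustrative, not the final API.
function makeDB() {
  const feedFormats = new Map();       // e.g. classic, bendybutt-v1, buttwoo-v1
  const encryptionFormats = new Map(); // e.g. box (box1), box2

  return {
    addFeedFormat(format) { feedFormats.set(format.name, format); },
    addEncryptionFormat(format) { encryptionFormats.set(format.name, format); },
    publish(content, { feedFormat, encryptionFormat }) {
      const ff = feedFormats.get(feedFormat);
      if (!ff) throw new Error('unknown feed format: ' + feedFormat);
      let payload = content;
      if (encryptionFormat) {
        const ef = encryptionFormats.get(encryptionFormat);
        if (!ef) throw new Error('unknown encryption format: ' + encryptionFormat);
        payload = ef.encrypt(payload); // encryption is orthogonal to the feed format
      }
      return ff.toMessage(payload);
    },
  };
}

// Usage: register one of each, then cross the two dimensions freely
const db = makeDB();
db.addFeedFormat({ name: 'classic', toMessage: (c) => ({ value: { content: c } }) });
db.addEncryptionFormat({ name: 'box2', encrypt: (c) => JSON.stringify(c) + '.box2' });
const msg = db.publish(
  { type: 'post', text: 'hi' },
  { feedFormat: 'classic', encryptionFormat: 'box2' }
);
```

The point of the separation is that adding a new feed format never requires touching encryption code, and vice versa.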

This refactor reminds me of the bermuda triangle refactor, except this time it feels 10x bigger than bermuda. Oh well!

As closing notes, last week I was browsing the "original notes" document Dominic and Keks and Mix (?) wrote for private-groups, and stumbled upon this harsh reminder of the nature of our work:

Some positions are not attainable within the scuttlebutt architecture, due to interactions between hard problems in cryptography and hard problems in distributed systems.


@SoapDog (Macbook Air M1) %VbXecaflAot4W5/iMn+FO11D3+znSH/487MQF3ot8TA=.sha256

Folks lost the chance to call it the NGI Private Butts group....

User has not chosen to be hosted publicly
@The System %SzTs5llquVzUJ5IpL9sQieFtZhkDXb8x42YJGmlwl90=.sha256

E: Good luck!

@luandro %V0Fski4d7yUBB/peY9QOd0pG++Cb1yxQoHaSTLk3klM=.sha256

Awesome to be able to follow thoughts and progress, thanks for this.

@andrestaltz %MJMhTHanQEj9A79uoMf2OXkSZ8XcknAh6AZToZ8oOag=.sha256


After additional meetings Nicholas had with Mix and me, the Assumption Mapping is complete! We now know the scope of Groups, such as the technical limitations, UI requirements and data requirements, the target audiences, as well as what is out of scope. The next steps for design are to study and design the flows that users will go through when joining and interacting with groups.

On the technical side, Mix and Arj made a first draft for the structure of metafeeds and subfeeds to support groups to replicate. They made a diagram and wrote down the step-by-step pseudocode for Alice to invite Bob to a Group.

Diagram showing a root metafeed pointing to a groupsRoot sub-metafeed

On the coding side, I've been continuing the refactor of ssb-db2. I also made improvements to ssb-keys and bipf, adding new helper APIs to those. It's still a lot of mess to untangle, and although I'm making progress, I'm not sure it's the best design. One constraint that we settled on is to not support lipmaa links, because those require a whole lot of "looking backwards" logic that will make the code more complex. So bamboo is ruled out, for now. I'm just trying to find a sane structure of adding feed formats, and the scope currently is classic, bendybutt-v1, and buttwoo-v1. The feed formats you'll be able to add are many, but they have to follow some strict requirements.

@andrestaltz %mzzvNjsYE3UqZpYvjYJuSVvJk1C2lNX6s7KbAi09Qr4=.sha256


Last week was good, in terms of coding. I finished the nasty parts of the ssb-db2 refactor and now it's a downhill ride. I'm quite happy about this refactor, because in many ways it simplifies code, and helps to know where to put new code. This separation of concerns also helped me understand how box2 key management should work. Arj and I also tweaked the performance of buttwoo a bit. We knew the refactor would cause a performance regression, so we tried to recover some performance without undoing the refactor, and I think we got to a satisfactory level of performance. When everything is done, the structure of modules should look like this:

mermaidjs diagram of a computer-science-graph of npm modules related to ssb-db2

Mix, Arj and I also continued tweaking the design of metafeeds to accommodate groups. The current state is that under your root metafeed you'll have a predefined number of subfeeds, only these: main, indexes, groups, social, games, collab. And then under each of these we will have a recommendation to use first-byte-categorization (I don't know what to call it, it's similar to what ssb-blobs does to your ssb/blobs/sha256 folder). This is to avoid any metafeed having too many messages to be replicated, with "too many" meaning "thousands". With first-byte-categorization we limit the message count of a metafeed to 256. More details in issue ssb-meta-feeds-spec#30, and the diagram below:

mermaidjs diagram of a computer-science-tree of metafeeds and subfeeds

Nicholas and Mix collaborated on making diagrams to describe the invite flow, to add new members. We didn't have our Friday meeting last week, so they'll show these diagrams this week. I can update y'all about these diagrams on the next Team Diary update. :v:

User has not chosen to be hosted publicly
@andrestaltz %j0qbBhOvj3BSKP3OmCKZjcCp4IbvyWarzzsiE5Gv8jg=.sha256


On Wednesday, Nicholas, Mix and I had a short meeting to review the "registration flow", which means the process of a person discovering a Group and applying to join it.

A group's conversations are private to its members, but the existence of a group is not private. Every group will have what we call a "door", which is a profile detailing what that group is about, who is a member, and instructions to join the group. Any person can then apply to join that group, answering some questions to prove they should be a member. The admins will then see this and have the power to approve or reject the application. This is not the only way of joining a group, the other way is via an invite. This flow is for the random people who are interested in joining a group. See diagram below:

FigJam flowchart detailing how people can apply to become member of any group
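To make the flow concrete, here's a hypothetical shape for an application message; the real schema is not decided at this point, and every field name here is invented:

```javascript
// Hypothetical "group application" message, only to make the registration
// flow concrete. The actual message type and fields are undecided.
const application = {
  type: 'group/application',               // invented type name
  groupId: 'ssb:identity/group/EXAMPLE',   // the group's public "door" identity
  answers: [
    { q: 'How did you hear about this group?', a: 'From the door profile' },
  ],
  // Sent as a private message, so only the admins can read it and then
  // approve or reject the application.
};
```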

On another topic, I'm almost done with the "downhill" part of the ssb-db2 refactor. I created these modules:

@andrestaltz %KErqVQtte1dTA+sxjkKp8T63AdDHxNX448bATE6r09E=.sha256


The ssb-db2 refactor is done and just waiting for changes to ssb-keyring and ssb-db2-box2 before we can merge it and release.

Speaking of those two modules, it's nice that the refactor is done and we can focus on "actual" box2 work, which in this case is generalizing ssb-keyring so that it does key management, but allowing keys to always be passed in. This way, other modules can take care of creating or discovering the encryption keys. Our plan is to either derive the encryption keys from metafeed seeds, or to "discover" each group's keys via the invite messages. Anyway, all the keys discovered will be put in ssb-keyring, and ssb-db2-box2 has to be able to control/manage ssb-keyring. Mix and I have been working on that. Mix has this PR on ssb-keyring and I have this PR on ssb-db2-box2.

In last week's team meeting we also discussed bigger picture ideas, mostly relating to design. We discussed what should happen when you join a group that contains a person whom you block. Instead of ignoring/hiding that person's content, we chose to show their content and to warn you of the presence of the blocked person as soon as you join the group. Showing their content is important to preserve shared context for that group. Depending on the type of group, you might engage with the blocked person in a very different way than you would on the Public side. E.g. imagine a group for a work/professional environment, where it doesn't make sense to split the conversation/context based on dislikes between people. On the other hand, on the Public side you have different expectations of who you want to engage with, and you are more free to shape the context. In a sense, Groups are Context, Code of Conduct, and the maintenance of these.

One thing I learned in the last meeting was that random people will not be able to see who the members of a group are. We spoke briefly about how that will happen in practice in the design of metafeeds and subfeeds.

Another thing we discussed in our team chat was a way for random people to apply to become a member of a group: we could just allow the person to start a private chat with the admins. But this would require showing to the public who the admins of any group are. Instead, Mix suggested that we just use P.O. Box, which is a way of encrypting something to a subset of the group's members (in this case, the admins). So the applicant sends a private chat message to the admins, not knowing who they are, and when the admins reply then they reveal themselves to the applicant, but they're not revealing themselves to the public. That's nice.

User has not chosen to be hosted publicly
@andrestaltz %QxDZu7Y1Dk3WSU6h8NsxvsqXhKdsc6nOlyASi2RqxcQ=.sha256


On the coding side, not much has changed: Mix and I have been working on ssb-keyring for ssb-box2. We didn't have a lot of time last week because I was focused on a Manyverse release, and Mix... with the cat.

But we had a productive team meeting, nicholas+me+arj where we discussed a couple of tricky topics, such as:

Groups as "persons"

Nicholas is more and more convinced that groups are "identities" in a way much similar to how persons have identities. This would be reflected in presentational aspects of the app, and in capacities ("groups should be able to publish public posts"). We discussed the tricky limits of this abstraction though, like:

  • can groups be blocked? (we reached a conclusion that "yes", but we're defining what effects that blocking would cause)
  • how does a group post something on its "feed"? (there is no single author for the "feed", so we'd have fork risks)
  • if groups are persons, can I invite a group to a group? thus creating a subgroup (we decided to not allow this for the time being, otherwise we're complicating our work too much)

Closing a group

If group admins have the power to remove members, could they remove everyone from the group, essentially "deleting" it? We didn't reach a conclusion on that, but we discovered that "leaving" a group is functionally equivalent to "blocking" the group, on the data layer. Or, there could be an exception where you unsubscribe from the group's contents, but are still willing to replicate the group locally, for the purpose of helping other peers.

Group where admins disappear

Related to "leaving" groups and stopping their replication, we also discussed an indigenous use case, where a group contains a lot of important content that is old/generational and should be kept forever, even if the group's admins pass away. Groups where inactivity does not imply autodeletion.

One idea I tossed out, to support this, is forking a group, such that a person elects themselves as the admin of an entirely new group which inherits content from another group.

Changes to SSB at the conceptual layer

So far, SSB has been a simple map from "1 feed ID to 1 person". The conceptual layer is the data layer. But this abstraction is showing its limits. On the one hand it's simple, and in many ways simple is fantastic, but on the other hand, we need to find solutions for: partial replication, stable storage, groups, same-as, etc.

One way how 1-feed-1-person already doesn't work is when you have two or more devices. That's the case for many people on SSB and we need same-as (fusion identity) to solve that.

For partial replication, we've come up with metafeeds which create a tree of feed IDs that you own, each branch for a specific purpose.

For (private-)groups, we have the group ID as a "cloaked msg ID" and increasingly we're noticing that these group IDs can function as "identities" or "persons".

So going forward, we're going to have to break apart the conceptual layer from the data layer. Feed IDs will just represent a sig-chained sequence of messages, but not much more. When thinking about a person, we're going to have to create another representation for them, such that (e.g.) when addressing Alice in a DM or in a mention, the implementation will have to figure out "which of Alice's feeds do we target in this context?"
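Sketching what that could look like in code (all names invented, purely illustrative): a person becomes a record of feeds, and addressing resolves through context instead of a single feed ID:

```javascript
// Hypothetical sketch of "which of Alice's feeds do we target in this
// context?" once 1-feed-1-person is broken apart. Everything here is
// invented for illustration; it is not a real SSB API.
const alice = {
  name: 'Alice',
  feeds: {
    main: 'ssb:feed/classic/ALICE_MAIN',      // placeholder IDs
    chess: 'ssb:feed/buttwoo-v1/ALICE_CHESS',
  },
};

function targetFeed(person, context) {
  // Pick the context-specific feed if one exists, else fall back to main
  return person.feeds[context] || person.feeds.main;
}

console.log(targetFeed(alice, 'chess')); // her chess feed
console.log(targetFeed(alice, 'dm'));    // no dm-specific feed: falls back to main
```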

We want to be intentional about this tradeoff and willingly pay the cost of letting go of the simplicity of 1-feed-1-person.

User has not chosen to be hosted publicly
@andrestaltz %RwiLPOgVeLKQGf1UPzLZsGqP8FEn4v65BmmDzxdvn8w=.sha256


Apart from finishing the ssb-db2 refactor, last week we also sat down to figure out some problems we need to solve before we solve our bigger problems.

User Agents

One of these smaller problems is "How do we know if a friendly peer supports private-groups?" and there is no simple answer to this. We thought of using the heuristic "if they have a root metafeed, then let's assume they also support private-groups", but we began thinking of how to properly solve this, and we kept talking about User Agents. Somehow, each app would publish or indicate what software stack they are using. We still have to flesh out all the details, but Mix kickstarted a repo for a spec for this: ssb-user-agent-spec.


We discussed quite a lot whether any group member should be allowed to invite others to the group, or if only the admin can do that. We also discussed whether this should be a group setting. This problem is actually quite thorny, because it's about authority in a network that is fundamentally without authorities. But we want to address it thoughtfully, and for that we need to write down our thoughts, so Mix made ssb-permissions-spec and ssb-group-admin-spec.

Metafeed structure for groups

This spec is just version 0.0.1 of the designs and diagrams we've been gathering while meeting many times to discuss how the content of groups should be published among metafeeds and subfeeds: ssb-meta-feed-group-spec.

@andrestaltz %mIcIeWirM8X/aAsFHbh9Mv5kDknuuVsSMEg21cfZM1o=.sha256


It's been some time! We had a summer holiday break, and other obligations (such as launching the new storage screen in Manyverse), but we're back.

We were a bit stuck with "metafeeds for groups" (see how short our spec is so far), so we had a couple of meetings to figure things out.

First, @Mix and I met and discussed a few times the interplay between group-specific feeds and app-specific feeds. What if you have a cryptographic private group under the context of a Tic-tac-toe game, how are their feeds organized? Metafeeds dictate a strict hierarchy, a tree structure for the feeds. So we thought, do we put groups under apps? Or do we put apps under groups? We tried to predict the pros and cons of each approach. Neither approach felt "right".

diagrams for the tree structure of metafeeds, subapps under groups, and groups under subapps

Then, we briefly discussed putting all the subfeeds directly under the root metafeed, such that the "tree" is just very wide and not much of an actual tree. The big downside with that approach is that whoever replicates you will have to be aware of all your subfeeds, and that means a large overhead. E.g. if one app happens to be naively coded to create a new subfeed for each session, that pretty much spells the death of your root metafeed, because it'll have hundreds/thousands of messages, and that's a lot of overhead for a "partial replication" strategy. Thus it follows that we need to find a tree structure that allows the remote peer to replicate only parts of it.

diagram for the tree structure of metafeeds, where all subfeeds are under the root

We had a second meeting, @arj, Mix, and I and early in the meeting we explored one new idea: letting go of "domains" or any semantics for the tree structure and using only 1st-byte sharding and versioning for the tree structure. Previously, we had thought that the structure would be


notice how sharding and versioning are deep in the tree, closer to the leaf. This time, we pulled those two upwards, inverting the order, and putting them close to the root, like this:


Where the version concerns the whole tree structure, and the shard is a little bit of entropy extracted from each subfeed. The version means that in the future, if we want to design an entirely different tree structure (e.g. if we want to reinsert domains after all), we can refactor the subfeeds by putting them under a new version feed.

diagram for the tree structure of metafeeds, where there is a root, a v1 feed, 1st-byte feeds, and content feeds

The three of us got excited about this structure! It means it's less "UML class diagram" and more "Merkle tree", concerning itself only with shape and bytes, not with semantics. The sharding is important so that remote peers can replicate only the parts they are interested in.

We have a PR for ssb-meta-feeds-spec that we're tweaking before merging, and then Mix and I have been studying how the shards should be created. We were debating whether it should be one byte (i.e. 256 total shards) or one nibble (16 total shards). For that, we wrote a pseudo-academic paper, which we call "sharding math". After running a few simulations and imagining what the first year of metafeeds in production is going to look like, we settled for 16 shards, i.e. a nibble.

Next steps from here are merging the spec PRs and then starting to code these new changes in the JS ssb-meta-feeds library!

@mix.exe %AxG0m7N6TmUEZAiWKqaa6EFfCD5xMSv3uJb/KvTihL4=.sha256
Voted ## Update It's been some time! We had a summer holiday break, and other ob
@mix.exe %gZ7xxxbUMXLbHkfUZEGad8gf2x5WsvI+TNRoIdYPXCc=.sha256

cc @Rabble @Matt Lorentz keen to talk to you about meta-feeds soon

@mix.exe %Y2yANwCCVp/gHso+b8a8M2sAXV2tKwepalCAn5a3CCA=.sha256

Started drafting the meta-groups spec today.. it hurts my brain a bit (so many little things to keep in mind)


@andrestaltz %hvr4yj0oAh9D0izawiD5ygD/doN4ERM5qFxLr6h7Fms=.sha256


We've got the specs in a fairly good shape, and we can start coding.

Index feeds rework. I was going to start implementing shards in ssb-meta-feeds, but I realized that a lot of its APIs were used by ssb-ebt for index feeds in a way that is not great. Arj and I found a way to refactor this, which deletes a bunch of glue code. We ended up changing the index feed spec a bit so that it uses a new "feed format" called indexed-v1, which is basically a copy of the classic feed format, but tagged with new SSB URIs like ssb:feed/indexed-v1/___. Then there was a lot of shuffling around in ssb-ebt and lower-level modules like ssb-bfe, ssb-uri2, ssb-keys, etc. Finally, we also updated ssb-index-feed-writer, renaming it to ssb-index-feeds. This wasn't central to our work on Groups, but allowed us to delete code from ssb-meta-feeds, which is good because we'll be working a lot on that module.

sharding in ssb-meta-feeds. @Mix began working on auto-sharding in ssb-meta-feeds, and the logic is looking good in this PR. He's going to rework our tests, though, to make them more flexible to changes.

ssb-tribes2. @Powersource is back in the game, and we helped kickstart the ssb-tribes2 repo (not ready, do not use!). This is essentially ssb-tribes, but based on ssb-db2.

On a project management level, we tried to figure out what is the timeline for our first payment to come in, and that requires the first milestone to be completed. We think we'll be done with the "replication" milestone by the 1st week of November. So that gives us about a month and a half. It sounds like a suitable deadline. Sounds "close", but on the other hand we have 3 active developers, and we're already in the phase of writing code.

User has not chosen to be hosted publicly
@mix.exe %1cyQtjhpKUCiZ9wcOalmQIpy/bJArWRixNZFjlye1Ug=.sha256
Voted ## Update Nothing new on the technical side, I just want to say that our m
@Anders %94qUGgYwiU1MKDVbztYAl1vyAiCqbTp72psQM3cytm4=.sha256

Writing a metafeeds migration spec. @boreq hinted at this, and the information has mostly been scattered around different other specs and maybe just in my head. It feels good writing things out. I think I might also have been holding off on this because it wasn't clear what the path was regarding classic SSB feeds, but after several discussions with @andrestaltz I think we have a good path forward. Thanks to @mixmix for facilitating the meeting between boreq, mix and myself.

@andrestaltz %ZmttwudGaznRH7UaT36i4sEfsAuyscXT+uAHvQzV7aM=.sha256


As @arj hinted above, there is now a document describing how we'll migrate to metafeeds, roughly similar to a message I shared on ssb too. He also helped clarify index feeds as a spec.

@Powersource has moved ssb-tribes2 forwards a lot, it's passing several tests copied from ssb-tribes.

@Mix ² has worked on changing ssb-meta-feeds, to support reading box2-encrypted portions of the feed tree, and overall improving the API: ssb-meta-feeds#87, ssb-meta-feeds#90.

I've been busy updating ssb-replication-scheduler to use the new ssb-meta-feeds "v1 tree structure", but along the way I bumped into a bunch of things to be fixed: ssb-index-feeds had to be updated, ssb-meta-feeds had a concurrency bug, and ssb-ebt index feeds had a bug.

We want to get this replication milestone done by the first week of November (so we get paid without hurting any team member's savings), so we're BUSY. :bee:

@mix.exe %Up84W92C3pznm9CaPgoI7h8GqHPCBC2SPbbRnWHCjJI=.sha256
Voted ## Update As [@arj](@6CAxOI3f+LUOVrbAl0IemqiS7ATpQvr9Mdw9LC4+Uv0=.ed25519)
@andrestaltz %f82C5KpFyMBpqjZFNcQwFMJjA8i83wblZtsbcT16Jt0=.sha256


We're now in the deep part of the replication milestone, and things are getting more concrete. We're bumping into a ton of sub-problems, which is a good sign that we're doing real work.

@Powersource ² has been working on ssb-tribes2 and bumping into a bunch of sub-problems and raising a bunch of interesting questions, such as:

  • Shouldn't we use buttwoo for group-specific feeds instead of classic?
  • Shouldn't we use ssb:message/cloaked/<KEY> instead of %<KEY>.cloaked?
  • Shouldn't we use ssb:identity/group/<KEY> instead of ssb:message/cloaked/<KEY> when we're referring to a group?

See e.g. ssb-uri-spec#18.
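For illustration, converting a cloaked sigil into the proposed identity URI form is mostly a base64-to-base64url character swap, assuming the URI keeps the same key bytes (the exact final encoding is part of what ssb-uri-spec#18 debates):

```javascript
// Sketch of converting %<KEY>.cloaked into ssb:identity/group/<KEY>.
// Assumes SSB URIs carry the key as base64url ('+' -> '-', '/' -> '_');
// the finalized encoding may differ.
function cloakedSigilToGroupUri(sigil) {
  const match = /^%(.+)\.cloaked$/.exec(sigil);
  if (!match) throw new Error('not a cloaked sigil: ' + sigil);
  const data = match[1].replace(/\+/g, '-').replace(/\//g, '_'); // base64 -> base64url
  return 'ssb:identity/group/' + data;
}

// Made-up key bytes, just to show the character swap:
console.log(cloakedSigilToGroupUri('%ab+c/d=.cloaked')); // ssb:identity/group/ab-c_d=
```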

@Mix ² ³ also bumped into a fundamental problem when integrating ssb-meta-feeds into ssb-tribes2: when you include another peer in a box2-encrypted message (to add them to a group), which of their keys should you select? Their root metafeed key? Their main feed key? Their corresponding "group-specific" feed key? This turned out to be a surprisingly deep topic.

One would assume you could "just" use the root metafeed, but that has downsides, such as:

  • "Subapps" (e.g. chess) would need access to the root metafeed keys, and you don't want to share the root with any random app just like how you don't want to share sudo access to all processes on Linux
  • It hard-codes the root metafeed as the recipient, making it hard to rotate keys in the future if we identify a need for that

So Mix, Jacob, and I video called to discuss that, and we came up with roughly 3 or 4 different solutions, with their own pros and cons.

Today, we had a meeting arj+mix+jacob+me to discuss buttwoo usage as well as this box2 recipient key mystery, and we touched on a bunch of topics, including:

  • Ease of implementation of buttwoo / bendybutt / classic across languages
  • Network identity spec
  • Solutions for the box2 recipient problem
  • A non-breaking change improvement to SSB URIs for feeds

We ended up coming up with SEVEN different solutions for the box2 recipient problem, which are in summary:

  • (1) just use the root metafeed key always
  • (2A) leaf feed authoring the encrypted message addresses the corresponding leaf feed on the recipient, i.e. Alice's /root/v1/3/chess addresses Bob's /root/v1/3/chess
  • (2B) just use the root metafeed, but auto-translate that to a public key declared elsewhere
  • (3) each app or use case creates a spec that defines what public key to use as the recipient
  • (4) create a leaf feed dedicated to publishing which public key is the current recipient that others should use in box2
  • (5) "brute force" decrypt against all possible A×B public keys on Alice's tree and Bob's tree
  • (6) just use the v1 key always
  • (7A) just use the root metafeed ID, but auto-translate that to the v1 feed ID
  • (7B) use the root metafeed ID (which is an SSB URI) appended with a query param that indicates that we want the v1 feed ID

We think we're happy with solution "7B", which is to use the v1 subfeed (under the root) as the recipient feedId, but in a special way:

Instead of using the direct SSB URI that identifies that v1 feed, we want to use the SSB URI for the root metafeed plus a query param ?resolveTo=/v1. The reason for this is that we need peers to discover the root metafeed (so you can replicate the tree from top-to-bottom) but the recipient should NOT be the root metafeed, it should be v1 (which is more flexible over time if we make v2, v3, etc). So $ROOT_ID?resolveTo=/v1 is a shortcut to $V1_ID, with the benefit that you now also know the $ROOT_ID. This led us to generalize this concept, so you could do $ROOT_ID?resolveTo=/v1/3/main as a shortcut for the $MAIN_ID. (PS: in practice you can't do that syntax and you'd have to escape the characters like $ROOT_ID?resolveTo=%2Fv1%2F3%2Fmain or $ROOT_ID?resolveTo=!v1!3!main, but we're still going to debate that).
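A small sketch of how a peer could read that query param, assuming standard percent-encoding of the path (the final escaping is still being debated, as mentioned):

```javascript
// Sketch of reading the proposed resolveTo query param from a root metafeed
// SSB URI. Uses the WHATWG URL parser, which handles non-special schemes
// like ssb: and percent-decodes query values.
function parseResolveTo(uri) {
  const url = new URL(uri);
  return {
    rootId: uri.split('?')[0],                    // the root metafeed URI itself
    resolveTo: url.searchParams.get('resolveTo'), // e.g. '/v1' or '/v1/3/main'
  };
}

// 'EXAMPLE' stands in for the real base64url key data:
const parsed = parseResolveTo('ssb:feed/bendybutt-v1/EXAMPLE?resolveTo=%2Fv1%2F3%2Fmain');
console.log(parsed.resolveTo); // '/v1/3/main'
```

This matches the stated benefit: the consumer learns both $ROOT_ID (for top-down tree replication) and the path of the feed that should actually be the recipient.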

After the 1h30m video call, we were satisfied with the outcome, but tired, and arj had left for another meeting. So we pretended to have a light-saber fight with our flashlights:

screenshot of a video call with 3 participants, where each person is holding a flashlight of a different color

@andrestaltz %Y1SwBlJ0Zli2ElMQxSz/I0PsuI5EZpCV6dVv0dbRN5I=.sha256

So we completed that milestone. Feels great to be paid for this, finally.

We started working on the next milestone, which is going to be pretty challenging in research & development: how to remove group members.

Some prior art: in the original notes by Keks and Dominic, they mention the problem of removing members and hint that it's always going to be O(n) (compared to member addition, which is O(1)). That's bad, but it's not the worst news.

This week our team gathered to brainstorm different designs for the removal protocol. We started by listing all the requirements for the system, which I'll just copy-paste below:


  1. removed person

    • A. cannot publish new content to the group
    • B. cannot see new content others post
    • C. should know they were removed
  2. members of the group

    • A. cannot see what the removed person is posting anymore
    • B. know person was removed
    • C. stop replicating the blocked person's group feed
  3. removal can only be performed by admins

    • and is idempotent
  4. removal is eventually consistent

    • don't fork the group
    • no race conditions if two admins remove people "simultaneously"
  5. messages published during "key transition" are handled...

    • when does the transition start?
    • when is it "done" (i.e. old no longer used)?
    • what to do with messages published between "start" and "end"
  6. new members can read all old messages

    • (optional, but must support)
  7. Should support sympathy replication with non-group members

  8. Timely removal

    • should not have to wait for all group members to be in sync to transition

We have about six different candidate solutions drafted, but we aren't fully satisfied with any of them. Curious to hear from community members if they have ideas too! This is a good phase to get involved with this milestone, since the protocol design space is still open and we haven't implemented anything yet, so it's easy to change direction.

Mix also had a hunch we might be reinventing the wheel so he reached out to different folks and ended up making a Signal group with people from #p2panda, #quiet, #earthstar who are all curious about the problem of large encrypted private groups in a decentralized setting. MLS has bubbled up as a strong state-of-the-art design.

@andrestaltz %upl6DfC2CAGkImobnwXlK0SE3ZLxJ1Y5bFCX/AFS65I=.sha256

I bring an update after a hiatus. We just came out of a brainstorming meeting to figure out solutions for the requirements listed above, and our list of potential solutions now has 10 options.

We have things like "only one admin can remove", "all admins ACK a removal proposal", and even "everyone gets a hand grenade" (???). The one we zoomed in on was option (2), which keeps the group ID intact (so you can link to it) but rotates the encryption keys in what we call "epochs": essentially new groups that all share the same group ID.

The problem with option (2) is... what happens when two admins who are not synchronously online create two diverging epochs? i.e. Admin A removes member X by creating epoch P, while admin B removes member Y by creating epoch Q. Which epoch should group members effectively go to?

We came up with a neat solution for that problem which involves merging the two epochs into one epoch. The merging cannot be done by an admin, because this could also lead to two admins creating two different merges. Instead, the merging is done deterministically by any (and every) group member by doing something like an XOR of the keys of P and Q.

diagram of a computer science tree showing a group, two epochs P and Q, and an intersection epoch

We haven't decided if the intersection epoch's encryption key is just the XOR of the keys of P and Q, or if we use some other cryptographic formula. The important thing is that we cannot do the union of P and Q (which means replicating everything in P and everything in Q, and then blending the content on the app/UI layer), because this would lead to both removed members still being part of the group. Instead, we need to do the intersection of the memberships of P and Q, and this leaves both removed members X and Y unable to decrypt content in the intersection group. X can read content in epoch Q, but since they don't know the key for epoch P, X cannot figure out by themselves what the intersection epoch key is.

We all like this idea! We agreed to have another meeting where we poke at this idea more aggressively, trying to break it, trying to find its corner cases, etc.

@andrestaltz %mE9lQq7vcqbYbDU8Qytdj95ivGqHngYtwBc+a+kHjyE=.sha256

Now we just had a meeting to have a critical lens on our idea, and it was a lot less exciting than the previous meeting, but I think we're still in a good direction. Here are some notes we took:

  • :bomb: how to merge conflicts exactly?
    • xor is good because it's commutative and deterministic
    • how do we tell people they didn't make the new XOR
    • keks says: xor of the two symm keys might be a bad idea ("shared key attack")
    • :grapes: Action point: research solutions for this
      • there is also the case of 3 keys or more which we should support
    • :bulb: make a merge message re-announcing both the keys, to all the people including the recently removed people. the fork is essentially an error that's undone. deterministically pick one of the two keys to use in the future
      • the problem with merging and just pointing to one of the keys to use, is that everyone except the person that was removed in the unused fork can read the messages in that fork
    • :bulb: Whenever a group is created, announce two keys: the official key, and the backup key. Then, when we do an intersection epoch, we do an xor of the backup keys, which were never previously used for anything.
      • Problem: What if there are 3 forked epochs to merge? What if you did an xor of the backups of two of them, published msgs with that, but only later discovered the 3rd forked epoch?
  • do we cap our own feeds on migrations?
    • yes (we need this anyway to stop replicating a blocked person's feed.)
    • a.k.a. How to force, in ssb-ebt, new members to replicate a removed member up until sequence 300 but NOT after that (in case the removed member kept publishing on that feed)?
    • do we cap a blocked person's feed so we stop replicating them?
  • what meta-data should the new "epoch/stub/group" feed carry
    • backlink to last feed?
    • root feed?
    • groupId?
    • people removed?
      • and their seq, to support capping
  • :bomb: removed person tries to create a bunch of forks
  • :bomb: flooding the groups/additions feed with a ton of "new epoch key" msgs
    • IDEA: admins do removal operations in the group feed
      • idea is to not overload the "group/additions" feed with 100 invite messages each time we remove a person from a group
      • so admin publishes:
        • remove oscar + reason + oscar seq (cap oscar)
        • DM new location to everyone but oscar (requires DMs active on the leaf)
        • published "capping this feed" (boom I'm out!)
      • this is important
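To illustrate the capping idea from the notes above, here's a hypothetical JavaScript sketch of a replication filter fed by removal messages. The message shape and function names are made up for illustration; this is not ssb-ebt's actual API:

```javascript
// Map from feedId to the max sequence number we still replicate.
// Entries would be built from admins' removal messages
// ("remove oscar + reason + oscar seq").
const caps = new Map();

// Hypothetical removal-message shape: { excluded, atSequence }
function recordRemoval(removalMsg) {
  caps.set(removalMsg.excluded, removalMsg.atSequence);
}

// Decide whether to accept a message during replication:
// uncapped feeds replicate freely; capped feeds only up to the cap.
function shouldReplicate(feedId, sequence) {
  const cap = caps.get(feedId);
  return cap === undefined || sequence <= cap;
}

recordRemoval({ excluded: '@oscar', atSequence: 300 });
console.log(shouldReplicate('@oscar', 300)); // → true
console.log(shouldReplicate('@oscar', 301)); // → false
console.log(shouldReplicate('@alice', 999)); // → true
```

The point of the cap is that even if Oscar keeps publishing on that feed after being removed, new members never fetch anything past sequence 300.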
@andrestaltz %mYUHuXYmegCKd087AqUgjSlYHcsYKSKpDptbeOb95q0=.sha256


In the last update, we were exploring the idea of using xor to merge forked epochs. A key turning point was #p2p-basel, where we got to have a proper in-person team meeting to finalize the core ideas of (what we now call) "member exclusion".

Our p2p-basel meeting had team members @arj, @Powersource, and myself, plus special guest @keks, whose expertise we needed, and a couple of curious folks like @cft and @Geoffrey.

We had a mad science whiteboard session in the university, like proper researchers do. I got flashbacks to my university days.

photograph of a whiteboard with a diagram of a computer science tree

After a lot of back and forth on how to merge forks, CFT delivered a high-impact insight: merging with xor (or a similar operation) is not going to work if excluded members collude, pooling their knowledge to insert themselves back in. If I exclude Oscar, and you simultaneously exclude Oliver, then we create two forked sub-groups (we call them epochs). If Oscar and Oliver are friends, they can just share with each other the keys for the epochs they are in, do the xor on those, and discover the new merged epoch key.

So this led us in a different direction. We identified several distinct scenarios and created a rule for each one. We took the photograph above as a quick reminder of what we agreed on.

Then we began sketching the specification, which is up on ssbc/ssb-group-exclusion-spec, though it's still very much a draft.

After my #strategy2023 post, with huge changes to my focus area and my downgraded time for #batts, our team discussed how this project should proceed. I believe the theoretical work we laid out here is going to be solidly useful in the future, regardless of how #ssb2 turns out. But one of the biggest obstacles to finishing grant milestones and getting paid is the requirement that these results be put into production in Manyverse. Adding private groups would add metafeeds overhead, which would exacerbate the storage and RAM consumption problem. I emailed NLnet to ask about refactoring our milestones; they responded, but there's no conclusion yet on whether we can rearrange them. We're hoping we can. Jacob is doing a great job with the code, and @Mix is gently re-warming up to contribute too. Looking good.

@andrestaltz %YZy2f9p6GTBWqLPoD7OHcwhTQE1HZ3jIIhdt/E9+OTo=.sha256


A lot has been happening on the implementation side of group removal in the ssb-tribes2 repo. Jacob and Mix are crunching through lots of good PRs, while I have been on the sidelines, just supporting occasionally.

I've been busy with admin things. After several email exchanges with NLnet, we managed to refactor our milestones to better accommodate the realities of the project, now that we've learned what those realities are, lol. The new milestones are:

  • 1 UX Design (same as before)
  • 2 Restructure encryption modules in the database
  • 3.A. Removal of members by moderators (specification)
  • 3.B. Removal of members by moderators (implementation)
  • 4 Replication
  • 5 Protect ssb-tribes2 creation API against crashes
  • 6 Protect ssb-tribes2 exclusion API against crashes
  • 7 Consent system for members added to groups
  • 8.A. Sympathetic replication (specification)
  • 8.B. Sympathetic replication (implementation)
  • 9 Wrap up blog post and minor release of Manyverse

Some of these milestones are already completed! So we sent out payment requests for milestones (2) and (5).

And now, just a few minutes ago, @nonlinear and I finished the UX Design milestone. The main deliverable is a blog post describing the whole design: Go check it out!
