@mikey %QRrL2wc9o4LoPb2We8FbZC7WlbsfBRM7CnLeguNmILw=.sha256

flags

in support of my work-in-progress pull request to add flags to Patchwork, and in concert with %+r7V7cU... (which proposes splitting vote messages into "reactions" and "flags"), here is a proposal to add flag messages.

why?

at the moment we have no way to clearly signal when something or someone is harmful. at best we can write a post talking about it, or we can block the offending user, but even then we lack the ability to self-police our community vibe for onlooking newbies. basically, blocks need context that clearly explains why a very specific something is not okay.

in %ThTCb7P... i discovered that in Patchwork "Classic" (version 2), we used to be able to flag messages or identities as being either dead, spam, or abusive. here is an example of this feature being used in practice.

from scuttlebutt.io documentation on vote messages as flags:

A positive vote on a user signals trust in that user. It's generally used to "confirm" that you think that user publishes good information.

A negative vote on a user is a "flag." You can flag a user for publishing bad information, making false claims, or being abusive. You can also flag a user if the owner lost the keys.

Common values for reason in downvotes on users:

  • dead: Dead Account / Lost Keys
  • spam: Spammer
  • abuse: Abusive behavior
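
so, for example, flagging a spammer under the current scheme is a downvote with a reason (the feed id here is a placeholder):

{
  type: 'vote',
  vote: {
    link: '@...',
    value: -1,
    reason: 'spam'
  }
}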

it's important to note that a "flag" is not the opposite of a positive reaction: %KHKJH+o.... this is what led me to believe that maybe we should split votes into reactions and flags.

@mikey %apik1B5KuU4FJp7llqOlrNToruw+hac6LdmZJ+qJDYI=.sha256

message schema

here's the existing message schema as used in Patchwork "Classic":

{
  type: 'vote',
  vote": {
    link: (uxerId | msgId | blobId),
    value: -1,
    reason: 'abuse'
  }
}

here's a draft attempt at something new:

{
  type: 'flag',
  link: (userId | msgId | blobId),
  flags: {
    [flagId]: String
  }
}

so if someone posts something not cool:

{
  type: 'flag',
  link: '%...',
  flags: [
    'abuse'
  ]
}

or something like that. i'm also considering this:

{
  type: 'flag',
  link: '%...',
  flags: {
    abuse: 'this post contains unsafe content and should not be allowed to represent the community'
  }
}

which would allow people to directly signal detail as to why they are flagging the content. i think it's important to provide as many vectors as possible for actionable, specific, and kind communication, rather than a nondescript flag. however, maybe this detail could be implemented with post messages?
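
as a sketch, the post-message version would be a bare flag followed by an ordinary reply carrying the detail (the message ids are placeholders):

{
  type: 'flag',
  link: '%...',
  flags: ['abuse']
}

{
  type: 'post',
  text: 'this post contains unsafe content and should not be allowed to represent the community',
  root: '%id-of-the-flag-message',
  branch: '%id-of-the-flag-message'
}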

user experience

on a message, i see any flags given to the post, similar to how i see likes. when i click on the "X flags", i see a full detailed view of each flag given by each user.

on a profile, i see any flags given to the identity by people i trust, similar to how it shows blocks.
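
here's a sketch of the aggregation such a view needs (plain javascript; assumes the draft flag schema above and the standard msg.value envelope):

// fold a stream of messages into flags-per-target, so the UI can
// render "X flags" on a post and expand into per-user detail
function aggregateFlags (messages) {
  const byTarget = {}
  for (const msg of messages) {
    const content = msg.value.content || {}
    if (content.type !== 'flag' || !content.link) continue
    const list = byTarget[content.link] || (byTarget[content.link] = [])
    list.push({ author: msg.value.author, flags: content.flags })
  }
  return byTarget
}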

open questions

many questions. what do you think?

@Dominic %ZXrWGgFHzYP1eFWGKsvLPrsVhj9SgPUw9gP7dNaGVBk=.sha256

I remember one time when this was used in anger, and the options provided were too narrow... none really fitted the situation at hand, so "abuse" got used. This had an escalating effect, because the person in question could reasonably claim they weren't being abusive... and that was a fair point, because they weren't directly attacking anyone. Maybe "obnoxious" or "annoying" would have been fairer. Also, I feel that misusing a label like "abuse" makes that flag less powerful when someone really is abusive.

I'm wondering if this could be understood as inverse curation. For example, the other day I suggested maybe having a trigger warning field. In this case, someone who flags your post as potentially triggering isn't necessarily censoring you but rather curating you: they are making it easier for someone who wants to see that content (or to avoid it).

User has not chosen to be hosted publicly
User has not chosen to be hosted publicly
@mikey %uYEFJ4wjdE5sk7PbVpLakMlTIF3yW/8m2RLIkA3k/SI=.sha256

so i'm hearing maybe this should be merged with %ynNF1/G... and %6Blj85E..., where flags are just retroactive tags using about messages.
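
as a sketch (the flag field here is hypothetical, not a settled schema), a flag-as-retroactive-tag might be an about message like:

{
  type: 'about',
  about: '%id-of-the-offending-message',
  flag: 'abuse'
}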

i'm curious if this addresses the original problem: how your social network can provide a map to navigate the territory of potentially harmful, unsafe, or unnecessary content or people.

User has not chosen to be hosted publicly
@mikey %RQQepsLqBEuGv4lZitd4rd+RZr59gdA17jaszGCjmSg=.sha256

i appreciate @marina's question %xA/wnrK...

what do i do if someone is being abusive/harassing?

or, how can we share information within our social network about potentially harmful content or people?

/cc @cinnamon @alanna @kelsey @tim @rhodey

@mikey %2/DLPAfGk2S/gPRClwPZuHEtzL5pF2ea1ViCSiIeFjA=.sha256

/cc @haileycoop ^

@tim %7kJJFpqODnnPNqVg7DkaJkyRVSQ/QPNy0SKFgXo3g3Q=.sha256

Thank you @dinosaur

what do i do if someone is being abusive/harassing...given that this is a decentralized network and given my understanding that moderation is currently not really possible (if possible at all)?

how can we share information within our social network about potentially harmful content or people?

I have no answers only more questions:

  • How do we do this without the burden being placed exclusively on those who are directly hurt by this behaviour?
  • Does anyone have any examples of non-hierarchical groups that have successfully managed these problems? What can we learn from them?
  • Are we all here looking for tools that empower social solutions rather than technical fixes intended as solutions in themselves? (I suspect yes but clarity might be helpful)
User has not chosen to be hosted publicly
User has not chosen to be hosted publicly
@Dominic %/yQu9ORvWm1FiSRUq7+atLAj8BVpz3ePKuHJxCIj1ZM=.sha256

I think it's also worth noting that in our current community, we've had conflicts between people with different ways of thinking, without a troll in sight. For example, in the recent greenhouse gender essentialism thread, the OP asked for a way to self-tag a thread as possibly offensive, which I interpret as an earnest acknowledgement of the presence of others who felt and thought differently to him.

We also have people who want to live inside a well defined boundary, and others who want to influence and converse with people who have different thoughts.

I think sophisticated moderation and curation tools will be very interesting on ssb. That said, I think we should start very simply:

have a way to follow someone else's blocks: if A follows B's blocks and B blocks someone, A acts as if they blocked them too. So you could set your pub to follow your blocks, or your friends'. You could also create a shared space by following each other's blocks, so that if one friend blocks someone, they are blocked by the others too.

This gets even better if we have private groups, but I think it would be simple to implement - a good next step.
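
As a sketch of the rule (plain javascript; blocksOf and blockFollowsOf are illustrative lookups, not real APIs):

// A's effective block set is A's own blocks plus the blocks of
// everyone whose blocks A follows (one layer, no recursion)
function effectiveBlocks (me, blocksOf, blockFollowsOf) {
  const blocked = new Set(blocksOf(me))
  for (const peer of blockFollowsOf(me)) {
    for (const id of blocksOf(peer)) blocked.add(id)
  }
  return blocked
}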

@tim %YMUn3ewRIVCDaN2U/aIyyXc27cBNAIUIzrEEs/++sc0=.sha256

@cinnamon this paper may be interesting

Geiger, R. Stuart. (2016). “Bot-based collective blocklists in Twitter: The counterpublic moderation of harassment in a networked public space.” Information, Communication, and Society 19(6).

PDF: http://stuartgeiger.com/blockbots-ics.pdf
Summary: https://bids.berkeley.edu/news/moderating-harassment-twitter-blockbots

Traditionally, counterpublics have been understood as ‘parallel discursive arenas' (Fraser, 1990, p. 67), separate spaces where members of a subordinated group are able to participate in their own kind of collective sensemaking, opinion formation, and consensus building.... As counterpublics are characterized by their lack of a hegemonic claim to represent or speak to the entire population, members must employ alternative tactics to make their concerns and activities visible to ‘the public,’ while also maintaining a safe space to discuss and understand issues relevant to them on their own terms ... Rather than creating a separate, alternative discursive space, [bot-based collective blocklists, or] blockbots are a way in which counterpublic groups exercise agency over their own experiences within a hegemonic discursive environment.

@Dominic %g95QMJQNtCk7Pqy38P2yzyznfnq7xDuzXVI4Z24TPaw=.sha256

see also social-combat-rules-of-engagement

User has not chosen to be hosted publicly
User has not chosen to be hosted publicly
User has not chosen to be hosted publicly
@mmckegg %oW5ROTG+Nizsq+255IzawJXghgkQV3PY25sndWzBvVQ=.sha256

This could be another contact message. Maybe {trusted: true} or something like that.
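
Reusing the existing contact schema with one new field, something like:

{
  type: 'contact',
  contact: '@...',
  trusted: true
}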

User has not chosen to be hosted publicly
User has not chosen to be hosted publicly
User has not chosen to be hosted publicly
@Dominic %8vgc9aIPQCWaVHJUFbIt6GbemjBy2N2ikp4wKmbxSv8=.sha256

@cinnamon firstly, can you please say "when they are implemented" not "if they are ever implemented", because I fully intend to implement private groups ;)

Any action in ssb could be private if we put it in a private message. You could also put it in a private message to the blocklist owner so that they know you are following them.

@keks recursively following block lists also sounds interesting... oh oh, I feel a rabbit hole forming... maybe each individual follow could have a "hops" setting on it rather than a global configuration setting?
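
A sketch of what per-follow hops might look like (names are illustrative; each block-follow carries its own hops setting):

function effectiveBlocksWithHops (me, blocksOf, blockFollowsOf) {
  const blocked = new Set(blocksOf(me))
  const visited = new Set([me]) // guards against cycles
  const walk = (id, hops) => {
    if (hops < 1 || visited.has(id)) return
    visited.add(id)
    for (const b of blocksOf(id)) blocked.add(b)
    // recurse, capped by both the remaining budget and each follow's own setting
    for (const f of blockFollowsOf(id)) {
      walk(f.peer, Math.min(hops - 1, f.hops))
    }
  }
  for (const f of blockFollowsOf(me)) walk(f.peer, f.hops)
  return blocked
}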

Anyway, probably easiest to have a single layer block-follow for now though.

User has not chosen to be hosted publicly
User has not chosen to be hosted publicly
@Dominic %gz6TMDS1XOhpcA9ms+YHcufObk9xOVTw6sNw8CKxGS4=.sha256

@cinnamon clarification: when I say "private groups" I mean both friend-only and shared-groups... because of implementation details these will both come close together. I also call them "one-way" and "two-way" groups in other places.

User has not chosen to be hosted publicly
User has not chosen to be hosted publicly
User has not chosen to be hosted publicly
User has not chosen to be hosted publicly
User has not chosen to be hosted publicly
@mikey %buTTzPYvYo0zvUdqO0FbNWLhgpu7k18hlEWo5t5Shq0=.sha256

bumping this thread with the flow for how GitHub lets you report abusive content: https://help.github.com/en/articles/reporting-abuse-or-spam

also there's a similar flow for repository owners to hide comments: https://help.github.com/en/articles/managing-disruptive-comments#hiding-a-comment

@Anders %hpP7FRWashRD/hTRXlZBkldVQnuE+A0AuimSoMwh9G0=.sha256
Voted bumping this thread with the flow for how GitHub lets you report abusive content