
While thinking ahead on how I want to tackle a bunch of TODOs in the handler code in apcore, I got thinking about OAuth2 and what things I could do with it.

I thought of the age-old question of "I don't want <hate-instance> to have my data" and I think I have a solution for apcore, at least.

I have a proposal for it here:
socialhub.activitypub.rocks/t/


The long of it is:

apcore is also an OAuth2 server. So if User A on Apcore-Instance A wants to never have their data on hate-speech Apcore-Instance B, they go to B, do an Authorization Code Grant w/ no extra permissions (Instance A tells B "yep that's user A"), then B can in full confidence know User A is the one telling it "stop processing my data".
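
Here's a minimal sketch of that identity-only grant in Go, using golang.org/x/oauth2. The endpoint URLs, handler names, and client credentials are assumptions for illustration, not apcore's actual routes or API:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"net/http"

	"golang.org/x/oauth2"
)

// Instance B is the OAuth2 client; Instance A is the authorization
// server vouching for User A. All URLs here are hypothetical.
var conf = &oauth2.Config{
	ClientID:     "instance-b",
	ClientSecret: "secret",
	RedirectURL:  "https://instance-b.example/optout/callback",
	Scopes:       nil, // no scopes requested: identity only, nothing granted
	Endpoint: oauth2.Endpoint{
		AuthURL:  "https://instance-a.example/oauth/authorize",
		TokenURL: "https://instance-a.example/oauth/token",
	},
}

func start(w http.ResponseWriter, r *http.Request) {
	// Send User A to their own instance to prove who they are.
	// (A real implementation would use a random, verified state value.)
	http.Redirect(w, r, conf.AuthCodeURL("state"), http.StatusFound)
}

func callback(w http.ResponseWriter, r *http.Request) {
	// A successful exchange means Instance A said "yep, that's User A".
	tok, err := conf.Exchange(context.Background(), r.URL.Query().Get("code"))
	if err != nil {
		http.Error(w, "could not verify identity", http.StatusForbidden)
		return
	}
	_ = tok // used once to learn the actor ID, then abandoned
	fmt.Fprintln(w, "identity verified; record the opt-out and move on")
}

func main() {
	http.HandleFunc("/optout/start", start)
	http.HandleFunc("/optout/callback", callback)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```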

Therefore, not only can *User A* tell Instance A to block/mute/ignore *other instances*, User A can tell *other instances* to ignore *them* (user A).

TL;DR:

Everyone in the fediverse could tell hate-speech instances to not process their data...

...but only if they are running software built on apcore.

I swear I'm trying to make the world a better place.

Obviously, this requires all instance operators to work in good faith.

Thankfully, some of us live in jurisdictions where such actions/notifications could be taken to court to make the case that they are protected and enforceable.

But man, it is a really REALLY weird place to end up thinking "I don't want Gab to exist, but if it has to, could it at least use my software that would respect the rights of others who don't want their shit spread there".

(If this idea panned out)

@cj this seems backwards from using capabilities? If you never grant B an access token, they won't have your data and you don't need to worry about them behaving.

@tedu Backwards as in reverse? Or as in "regressive"?

It is a bit of a "reverse", I think. But consider that capabilities are reactive (revoke or derive a narrower scope), whereas the digital-rights angle allows pro-active prevention.

Capabilities do prevent too, but they require eternal vigilance (granting tokens correctly all the time), and the alternative (being silent) isn't healthy, IMO.

@cj yeah, there's a few problems I think.

Historically, such negative grants have not done well. The grant gets discarded or lost. That's a nature-of-the-universe kind of thing: you have less-than-nothing permissions, and magic entropy is going to move you back towards nothing.

Doesn't have to be malicious. Standard practice after fixing a SQL injection bug in a framework is to reset all the auth tokens. Now you need to re-ban them all. Or common deployments may keep tokens in a Redis database without backups. The security of this scheme depends on the permanent storage of a token that is more likely to be treated as ephemeral data.

Also, it's confusing which token to use. Consider: user A bans server B, while user C (also on server A) does not. Server B makes a request for a post. Does it send the token or not? It doesn't know who owns the post it's fetching. It can omit the token, but then it may accidentally retrieve A's post. Or it sends the token, but then it gets blocked from reading C's post. I don't think the intention is to allow one user to impose blocks on others on the same server.

Running OAuth in reverse and pushing out tokens leads to many inconsistencies. Using the normal flow is much smoother. If a server loses or expires a token, it fetches a new one. No trouble.

@tedu What? No auth tokens are being kept.

You just record the user's "unsubscribe me from this email" and move on.

@tedu That is, you just have a list of author IDs. Incoming activities get an early-pass filter against those author IDs. That's it.
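
A minimal Go sketch of that early-pass filter; the optedOut set and shouldProcess name are hypothetical, and real ActivityStreams actors can also be objects or arrays (a string actor is assumed here for brevity):

```go
package filter

// optedOut holds actor IDs that have exercised their opt-out.
var optedOut = map[string]bool{
	"https://instance-a.example/users/a": true,
}

// shouldProcess drops an incoming activity before any further
// handling if its author has opted out of this instance.
func shouldProcess(activity map[string]interface{}) bool {
	actor, ok := activity["actor"].(string)
	if !ok {
		return false // malformed (or non-string) actor; don't process
	}
	return !optedOut[actor]
}
```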

The auth token is just to update the user's choice, once the user is done choosing the server is free to forget any auth tokens (there wasn't anything granted anyway).

@tedu *There was a token granted, but no scopes were granted, so all it said was "yep this user is who they say they are, they're not trying to opt someone else out"

@tedu I get that this isn't really a "regular" OAuth2 flow. In the sense that the "resource owner" (the federated user) owns "digital rights", which requires authorization to toggle on the "client" (the federated peer instance).

The user's own instance is the "authorization server" & "resource server".

But the "resource" isn't anything stored as a scope on the user's instance b/c its their own digital right to exercise on the client. So the client just knows its authorized to change that right for the user.

@tedu And since that user doesn't have an account on the "client" -- the "client" can't store the auth/access tokens long term anyway -- they're immediately abandoned. But not before the "resource owner" makes their choice (if any) and changes the client.

@tedu So, actually, I change my mind. This is the *regular* OAuth2 Authorization Code Grant Type flow. There's just some quirks to who plays what role, and the unusual ephemeral "no account" situation on the client.

@cj oh, this doesn't sound very different than just sending a Block then? That's verified by the httpsig, but would have the same result?

Maybe explain again how this improves over that?

@tedu ActivityPub explicitly doesn't permit federating blocks. ;)

Blocking only says "stop outgoing directly from instance A to B"

vs

Instance B can have a verified User A ask about the data it holds on them, list such data, delete such data, or even prevent it from accumulating data in the future (via *any incoming* route). A rough sketch of what such self-service endpoints could look like follows below.

(Edit: I'm fucking up the threading, sorry)
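
As mentioned above, here's a rough Go sketch of hypothetical self-service rights endpoints on the peer instance; none of these paths or names come from any spec or from apcore:

```go
package rights

import "net/http"

// Stubs only: each handler would operate on the actor verified via
// the identity-only OAuth2 grant described earlier in the thread.
func listData(w http.ResponseWriter, r *http.Request)   {} // enumerate stored data on the actor
func deleteData(w http.ResponseWriter, r *http.Request) {} // erase stored data on the actor
func optOut(w http.ResponseWriter, r *http.Request)     {} // drop the actor's future activities

// Register wires up the hypothetical routes.
func Register(mux *http.ServeMux) {
	mux.HandleFunc("/rights/list", listData)     // "right to know"
	mux.HandleFunc("/rights/delete", deleteData) // "right to erasure"
	mux.HandleFunc("/rights/optout", optOut)     // stop future processing
}
```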

@tedu Also, is there such a thing as "Sending an Instance Block"? I thought that was only for actor-to-actor blocks

@cj I agree, but I'm not sure inventing a new not-AP endpoint and flow is going to speed adoption. :)

@tedu Oh I fully agree, that's why I've been limiting this idea just to apcore.

I'm all ears for some other solution with the same digital-rights view! I don't know how to encode that into AP at the moment though.

Do you have any ideas?

@cj I'm just waiting for ocap to move forward a bit more, which I think can probably do everything needed.
@cj if your security solution relies on the legal system then it's not going to be very effective

Public is public, and you can access-control private posts.
@cj so?

you can already send a takedown request; all this does is add a courtesy way to request it, which any bad instance will ignore

@mewmew Yes, they'd ignore it either way. So it's an equal chance of good/bad with either method, so we can skip that discussion.

So would you rather spend an afternoon typing up an email, or click three buttons?

@cj you'd still have to send the email, that's my point
@cj "obviously, this requires all instance operators to work in good faith."

okay, you've lost the game, please stop now before you actually get somebody killed.

@kaniini I understand what you mean. However, I counter that exercising digital rights should not be a substitute for security, anonymity, etc.

@cj i strongly disagree. in any appropriate security framing, the <hate-instance> is filled with people who want to dox you and do god knows what with your dox. a DRM-style solution is not acceptable in these cases. i will not implement your DRM solution in Pleroma.

@kaniini I don't think you understand, which is OK, it's an unusual idea. It took me a lot to explain myself to Tedu.

@cj i mean, i'm sorry if you feel i'm kicking your puppy here, but people will depend on the constructions we create. therefore they should not require predicates like "instance operators have to work in good faith" and long-winded explanations about the security properties. security features must be practical.

@kaniini The ONLY thing it requires from the "security" thing is OAuth2. No good-faith required. The end.

Everything else is digital-rights management. Good or bad faith can exist here, but that's a separate problem if you tell someone "I want to delete my data" and they don't.

@kaniini Feel free to kick around, I do need strong feedback. Because it helps me figure out how to present this idea better.

@cj like, you've basically lost me at "instance operators have to work in good faith" because you've invented a slightly better bandaid to replace the current authenticated fetch bandaid. i'm not interested in more bandaids.

@kaniini I'm not doing authenticated fetch.

I'm doing "kaniini told mastodon.technology he doesn't want them processing his Activities anymore".

If mastodon.technology were a good-faith actor, we'd never see your content again *from any source*.

If it were a bad-faith actor, you could send a GDPR letter with evidence.

@cj you don't need OAuth2 to do that, that's what the `Delete` activity is for when used against the actor itself. so congratulations, you've solved an already solved problem.

@kaniini Beyond deletes, what about self-service "right to know"? And easily list the federated data on you? And some of the other end-user rights?

@cj @kaniini Wouldn't the exact same thing be accomplished with:

<blink><rainbow><input type="checkbox" id="451"><label for="451">Enable Mild Annoyance Mode for Instance</></></>

With all it does being to make the server send a 451 error to whichever instance tries to fetch whatever resource?
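
For what it's worth, a Go sketch of that joke; the header used to identify the requesting instance is purely illustrative (a real implementation would derive it from the HTTP Signature keyId):

```go
package middleware

import "net/http"

// annoyanceMode answers 451 to requests from blocked instances.
func annoyanceMode(next http.Handler, blocked map[string]bool) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if blocked[r.Header.Get("X-Requesting-Instance")] { // hypothetical header
			http.Error(w, "unavailable for legal reasons",
				http.StatusUnavailableForLegalReasons) // HTTP 451
			return
		}
		next.ServeHTTP(w, r)
	})
}
```
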
@cj @kaniini A GDPR letter would imply users having to deanonymize themselves to achieve anything in the case of a bad-faith actor, which kind of just adds fuel to the fire.

@kick @kaniini Yeah I agree, sending a letter of last resort sucks. Fortunately, a bad actor would probably fuck over a lot of people and become easily identifiable if the expectation is self-service.

Then, pooling resources would be easier.

It's not perfect, but I think it may be better than the current standard (but am getting flak for that)

@cj @kaniini While a lot of AP is basically built around a notion of trust, that's taking "trust fellow humans!" and amplifying it by 100x.
When the implication that lawsuits are involved comes up, "not perfect" is so far from good enough that it's actively harmful to anyone who might need it.
Please strongly consider not trying to push this.

@kick @kaniini Yeah, I'm writing up a follow-up post of mine and am working through all the feedback. Since it's my first foray into trying to think through a problem like this, I think I will land on the "not push" side. Which also means 0 additional engineering effort.

@cj @kaniini I'm glad you're taking the feedback of others into account! Not pushing this is definitely the wisest move.

@kick @kaniini Lol it sucks that it sounds like you're surprised I'm taking feedback into account. I shared it, I wanted feedback, I didn't know if it was good or bad!

@kick @kaniini And yes I do initially push for ideas because I want to give it a good spin... but part of it is letting go.

Plus if I can get it documented, then others won't have to come up with the same bad idea next year.

@cj @kaniini I think a little bit of my surprise is just that you didn't really make it clear you were playing with the idea rather than being set on seeing it implemented already.
The blurb "closing thoughts" on that Discourse instance implied that you were mostly 100% on it, and I think is probably a big reason the response to this (especially on-network) has been the way it's been.
@cj but I thought the point was to prevent bad faith instances from getting your posts

@lain No? It's to tell other services how you want to exercise your digital rights. Right now we can't even "tell".

If they become bad faith actors with regards to digital rights, that's a different problem.

@cj ah, so it's more a legal thing than a technical thing?

@lain It's an innovation to go beyond just the minimums of the GDPR, to let us all self-service more easily.

@cj jokes aside, I'll take another look at it tomorrow :)

@lain I'll probably update the post on socialhub tonight with some of the outcomes of discussions I've had, especially with Tedu, who helped me think it through more.

@cj On a higher level this sounds similar to the idea behind the "DoNotTrack" header. Which was super successful. not.

While I believe you to be proposing this in good faith, the biggest flaw in your proposal is the number of obstacles to overcome to even try to enforce your rights:
First of all, suing someone is costly, takes a while, and depends on the jurisdiction you and your opponent are in.

@cj Actually, you're falling for "If you outlaw distributing my posts, only outlaws will distribute them."¹

And now go figure what kind of folks may have an interest in pulling your posts but see no reason to obey laws. I'm talking about doxxers, people threatening you or spamming you from everywhere. These folks are already jumping instances, accounts, and domains or using Tor services, so why should they obey your proposal?

@cj
¹ Adapted from "If you outlaw encryption, only outlaws will encrypt their messages."

@schmittlauch Exactly, this is why I'm not pursuing it any more. :) With helpful feedback such as yours!
