Oh, my piece for the Institute of Network Cultures got published and I missed it!

Fighting Disinformation: We’re Solving The Wrong Problems
networkcultures.org/tactical-m

> The reason misinformation and disinformation spread so fast is that our most commonly used communication tools have been built in a way that promotes that kind of content over fact-checked, long-form, nuanced reporting.

> One could call this the “outrage dividend”, and disinformation benefits especially handsomely from it.

> According to The Washington Post, “Facebook programmed the algorithm that decides what people see in their news feeds to use the reaction emoji as signals to push more emotional and provocative content — including content likely to make them angry.”

> I am not saying Facebook intentionally designed its platform to become the best tool a malicious disinformation actor could dream of. This might have been an innocent mistake, an unintended consequence of how the post-promoting algorithm works.

> But in large systems, even tiny mistakes compound to become huge problems, especially over time. And Facebook happens to be a gigantic system that has been with us for almost two decades.

> In the immortal words of fictional Senator Soaper: “To err is human, but to really foul things up you need a computer.”
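The reaction-weighting dynamic quoted above can be sketched as a toy ranking function. This is an illustration only: the weights, post data, and function names are made up for the sketch, not Facebook's actual values; the reporting only established that emotional reactions were weighted several times higher than ordinary likes.

```python
# Toy engagement-weighted feed ranking, illustrating the "outrage dividend".
# All weights are hypothetical, chosen only to mirror the reported behavior
# (emotional reactions counted for more than plain likes).

REACTION_WEIGHTS = {
    "like": 1,
    "angry": 5,   # hypothetical: emotional reactions weighted above likes
    "love": 5,
    "share": 30,
}

def engagement_score(post):
    """Sum weighted reactions; a higher score means wider distribution."""
    return sum(REACTION_WEIGHTS.get(reaction, 0) * count
               for reaction, count in post["reactions"].items())

def rank_feed(posts):
    """Order a feed purely by engagement score, most provocative first."""
    return sorted(posts, key=engagement_score, reverse=True)

posts = [
    {"id": "nuanced-report", "reactions": {"like": 200}},
    {"id": "outrage-bait",   "reactions": {"like": 40, "angry": 50}},
]

# The outrage post wins despite far fewer total interactions:
# 40*1 + 50*5 = 290, versus 200*1 = 200 for the nuanced report.
for post in rank_feed(posts):
    print(post["id"], engagement_score(post))
```

Under such a scheme, no one has to intend anything malicious: any post that reliably provokes anger simply outranks calmer reporting, which is exactly the compounding effect the excerpt describes.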

Pretty happy with that piece, tbh.

@rysiek Do you have a list of all your lectures somewhere?

@kukrak sadly, no. But thanks for the prod, I should put something together.

@nemobis I did not, but I might, thank you for the link. In return: did you read my piece, or just the excerpts I tooted here?

I do not have any hard evidence, as my piece is an opinion piece and not a scientific paper. You are more than welcome to dismiss it entirely on that basis and move on with your life. 🤷‍♀️

But the article you link to also does not seem to disagree with what I wrote. If it does, please point to where.

@nemobis my take-away from the article you linked is that there is/was a sustained, purposeful campaign to spread misinformation/disinformation in social media and outside of it.

That fits perfectly fine with my opinion that misinfo/disinfo get the "outrage dividend", and get spread more. It just means that some people decided to exploit that fact consciously, which is neither surprising nor incompatible with what I wrote.

@rysiek I've read it. You define an "outrage dividend", then you just proceed to *assume* that it exists/it's positive, and derive a few consequences from that assumption. So it's an entirely speculative operation. Alright.

The book is useful if you're interested in looking for actual evidence.

@nemobis one more thing: the outrage dividend exists even if only as the well-documented spread bump that "angry-emoji" posts used to get (and perhaps still get) on Facebook, compared to all other posts. This was documented by The Washington Post:
washingtonpost.com/technology/

Of course I define it considerably more broadly, but if we want to nit-pick, there you have it: actual proof that it existed in at least one very specific form.

Mastodon for Tech Folks