So there's a whole sub-field of machine learning that studies how to do data wrangling, sampling, and sample weighting correctly in order to remove spurious underlying trends and sample the data equitably.
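(As a minimal sketch of one such technique - inverse-frequency sample weighting, so under-represented groups aren't drowned out during training - the columns and numbers below are made up for illustration, not from any real dataset:)

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Toy data: "group" stands in for whatever attribute you want to sample equitably over.
df = pd.DataFrame({
    "feature_a": [0.2, 0.5, 0.9, 0.1, 0.7, 0.3],
    "group":     ["a", "a", "a", "a", "b", "b"],
    "label":     [0, 1, 1, 0, 1, 0],
})

# Weight each row by the inverse of its group's frequency, so each group
# contributes equally to the training loss despite unequal representation.
counts = df["group"].value_counts()
weights = df["group"].map(lambda g: len(df) / (len(counts) * counts[g]))

model = LogisticRegression()
model.fit(df[["feature_a"]], df["label"], sample_weight=weights)
```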

So the machine-learning algorithms that replicate racism and classism in social media, computer vision, criminal justice, and the financial industry could be fixed by any competent data scientist. The people who made these algorithms are just bad at machine learning.


@ash in some cases, yes. In others I've read about, the bias exists in the territory. A correct model trained on a correctly sampled dataset will replicate whatever injustices actually exist in the sampled population. I wish that instead of proclaiming that data scientists or math itself must be wrong, people had the courage to say what they really want: to inject corrective bias into the model, because the model and reality are in a feedback loop, and using the correct model means perpetuating injustice.

@temporal I see what you're trying to say, but I'm not suggesting that people "inject corrective bias into the model." I'm simply suggesting that people competently build their algorithms using well-established techniques for identifying spurious correlations.

@temporal If a computer vision model built to detect traffic violations started using "US state on license plate" as a variable for detecting whether a car is drifting into another lane, that would be a red flag: the variable has nothing to do with the physical position of the car in the lane, and its predictive power suggests that police in the area target out-of-state drivers with tickets. Any competent data scientist would remove that variable as spurious if they actually wanted their computer vision system to detect actual driving behavior.

@temporal It sounds like you're suggesting that doing this same basic feature selection for other latent variables (like gender, skin tone, etc.) is a departure from normal high-quality data science. It is not, which is why my questioning of the competence of data scientists who build algorithms that replicate superficial biases is appropriate.
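(A rough illustration of that kind of check - my own sketch with simulated data and invented column names, not anyone's production pipeline: train on everything, look at permutation importance, and drop a feature the model leans on even though it has no plausible causal relationship to the outcome:)

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "lateral_offset_m":   rng.normal(0, 0.4, n),   # physically meaningful signal
    "plate_out_of_state": rng.integers(0, 2, n),   # should be irrelevant to drifting
})
# Simulated labels: drifting is driven by lateral offset, but biased
# enforcement also tickets some out-of-state drivers who didn't drift.
df["drifted"] = (
    (df["lateral_offset_m"].abs() > 0.5)
    | (df["plate_out_of_state"].astype(bool) & (rng.random(n) < 0.3))
).astype(int)

X, y = df.drop(columns=["drifted"]), df["drifted"]
model = RandomForestClassifier(random_state=0).fit(X, y)

imp = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(X.columns, imp.importances_mean):
    print(f"{name}: {score:.3f}")

# plate_out_of_state has no physical bearing on lane position, so if it
# shows real importance here, treat it as spurious and retrain without it.
clean_model = RandomForestClassifier(random_state=0).fit(
    X.drop(columns=["plate_out_of_state"]), y
)
```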

@ash I was thinking about cases when a model finds a positive correlation between a US-protected class and crime or credit default risk, and the relationship is real, due to history/path dependence shaping the economic situations of different communities. People are quick to call the algorithm wrong (and "*-ist"), but the model is right, and "fixing it" really entails breaking it to compensate for the injustice present in the territory. "Is vs. ought" issue.

Beyond that I agree with what you described now.

@temporal I agree, arguably well-made ML algorithms can give results that reflect the effects of accrued social/economic oppression while successfully removing "acute" bias outside of those factors.

Unfortunately such algorithms are the best-case scenario at this point, and there's evidence that the majority of the ML-based technologies that are being sold to schools and law-enforcement don't even succeed in this basic task. These are the data scientists whose competence I most doubt.

@temporal You bring up a harder problem than blatant incompetence present in many ML algorithms. Does removing the effects of "history/path dependence" from a model's evaluation of an individual constitute "breaking" the model? In many cases, this would result in a model that only evaluates people on the factors they can reasonably be expected to control. I would argue that this is more just and less broken, but I agree that it's a more nuanced issue than basic competence in data science.

@ash removing effects that are in the data is "breaking" in the sense that one is forcing deviation from the mathematically correct inference/prediction model. It wouldn't be breaking if one changed the question! E.g. from "crime chance of an individual" to "intrinsic crime rate, corrected for historical considerations". But this now invites much more scrutiny about how one does the correction, and based on what - instead of declaring the model invalid because a protected class isn't uniformly distributed.

@ash now, to be clear, I'm in favor of removing some of that "history effect" from data, but I want people to be clear and explicit about it. It's fair to notice that when a ML model is used to make decisions affecting people, it's no longer just a window on the world, it's now a feedback controller. A model used as a controller will reinforce certain patterns in the underlying territory, and this justifies asking "what ought to be" instead of just "what is", and adjusting the model accordingly.
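(To make the feedback-controller point concrete, a deliberately toy sketch with invented numbers: two areas with identical underlying incident rates, where patrols go wherever recorded counts are higher and incidents only get recorded where patrols are present. A small initial reporting gap keeps growing:)

```python
import numpy as np

true_rate = np.array([0.10, 0.10])   # identical underlying incident rates
recorded  = np.array([12.0, 10.0])   # small initial gap in recorded counts
patrols_per_day = 100

for day in range(10):
    # "Model" used as a controller: patrol wherever the data looks hotter.
    target = int(np.argmax(recorded))
    # Incidents are only recorded where patrols actually go, so the
    # decision feeds back into the data that drives the next decision.
    recorded[target] += patrols_per_day * true_rate[target]
    print(f"day {day}: recorded counts = {recorded}")
```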

@ash not making this explicit and instead calling the model broken or "sexist/racist/ageist/etc." prevents us from discussing the scope, shape and goal of the correction - and also unjustly criticizes the competence of the data scientists behind it, as such corrections are policy issues.

WRT. your point about incompetence - I'm not trained as a data scientist, but know enough maths to spot some forms of bullshit, and I see lots of them in companies. So extrapolating from that (I see the irony), I agree & am saddened.

@temporal I agree that the definition of breaking a model is wholly contingent on what the model is attempting to evaluate. Is the model evaluating an individual's risk of default on a loan, including factors outside of their control, or is it evaluating whether they ought to get a home loan? I think the differentiation between these two questions is what needs to be explicit, precisely because of your good point about how ML models become feedback controllers when used to affect lives.

@temporal And given that ML models do become like feedback controllers when affecting lives, it is actually accurate to call a model racist/sexist/ageist if it is being used in a way that replicates systemic oppression, regardless of any good intentions of the data scientists who made it. People do tend to shut down when they feel accused of bigotry, but I think it might help make it less personal if they see that the problem is in specifying the question their models are trying to address.

@ash It may help make it less personal if one doesn't make it personal; calling something or someone sexist/racist/ageist is as personal as it gets - it's questioning one's morality/integrity of character. Word connotations matter.

I'm arguing for not confusing the machine with the purpose it's being used for. The same piece of statistics that's "racist" in the context of insurance is absolutely, critically correct in the context of medicine, because "race" is a proxy for medically relevant genetic differences.

@ash It's easier to say "their model is racist" (with the unspoken implication that the modellers or the industry are racist) instead of talking about the real issue - whether insurance companies or police should be allowed to take accurate data at face value, or whether they should compensate for systemic injustice. The latter doesn't generate headlines.

The difference matters because "is" problems are correctable by mathematics, but "ought" problems are policy problems; you can't fix them by reading more math books.

@temporal Simplistically implementing ML algorithms using data known to be tainted by systemic injustice is making an "is vs ought" decision. It's just making the decision to continue perpetuating systemic oppression. Unfortunately, continuing a pathological status quo is a policy decision. I just wish that when people made these algorithms, they would explicitly state that they're choosing to uphold current harmful dynamics.

@temporal It would be convenient if implementing ML algorithms could be "neutral," but unfortunately the edifice on which the models are built isn't neutral at all. Moving towards implementations which are not explicitly continuing harm is going towards "neutral."

You're right that less harmful ML implementation is a policy decision, but it's a policy decision either way. People just make the decision which requires the least additional work, discussion, and awareness of socioeconomic dynamics.
