How to better circumvent arbitrariness and bias

by Nicola Quadri

Despite advances in automation, our lives still depend (thankfully) on the decisions made every day by women and men on the basis of the information available to them. But how good are these decisions?

There is often talk of cognitive biases and how they affect the human ability to assess a situation and make rational, evidence-based decisions.

Human decisions

But there is another phenomenon, equally significant in jeopardizing the quality of human decisions, and one we talk about far too little, perhaps also out of embarrassment and because it is hard to address. Unlike cognitive biases – which we can explain and understand – it is difficult to predict, and the data show it risks undermining the credibility of many professions and institutions, from medicine to finance, from policing to justice. It is the enormous arbitrariness of human decisions, which, net of cognitive biases (shared by all individuals), differ from one another even when they start from the same factual elements, and sometimes even when the same person makes them at two different times, as many experiments have shown. It is what the Nobel laureate in economics Daniel Kahneman and the scholars Olivier Sibony and Cass Sunstein call "noise" in their book of the same name, published in Italy by Utet. But, as Sibony recently told the audience of the BergamoScienza festival, although the effects of noise cannot be predicted, there are some simple rules of everyday decision-making hygiene – the equivalent of washing one's hands well – that can limit the damage.

Cognitive biases: relics of evolution

Cognitive biases are automatic reasoning mechanisms that are activated below the threshold of awareness, that is, without our realizing it. We inherited them in the course of evolution, because they were once useful to our survival, and still today they allow us to make decisions very quickly (decisions that often, on closer analysis, turn out to be wrong). Examples of cognitive biases include confirmation bias (the tendency to interpret information, or to seek new information, only to confirm an opinion we have already formed), anchoring bias (the tendency to decide on the basis of a single element, often the first one acquired, ignoring others that are equally or more relevant), apophenia (the tendency to see patterns and links even where there are none), and the set of egocentric biases, which rest on an overestimation of ourselves, our lucidity, our control, our ability to plan, and so on. All these automatic reasoning mechanisms have one thing in common: they introduce a systematic error, because they have the same effect on each of us; they shift our aim – like a constant wind – making us look in the wrong place. Decision-making biases are also dangerous because they pollute the information ecosystem, producing new facts that reinforce the bias itself: if Black people, with the same evidence available and therefore only because of the color of their skin, are found guilty with greater probability than white people, the recorded crime rate in the Black population will be higher. A figure that will then be used to justify further guilty verdicts (and not only by human judges, but also by algorithms trained on data like these).

A long-ignored phenomenon: noise

The role of biases in undermining good decisions is limited, not only because they are well researched and predictable – and can therefore, if we choose, be tackled with some success – but also because they are only one piece of the puzzle. A substantial share of human decision-making errors is instead the result of what Sibony and colleagues call noise: the randomness and arbitrariness of the cognitive processes that lead a man or a woman – even the most experienced – to a decision. The data, collected by Sibony and colleagues partly through a series of experiments, are nothing short of disheartening: 10% of fingerprint experts contradict themselves if the same pair of fingerprints is presented to them twice, some time apart; two highly accredited psychiatrists agree on a patient's diagnosis only half the time; a doctor is more likely to order a screening exam in the morning than in the afternoon, and a judge is more likely to find a defendant guilty on an empty stomach or on a hot day. Not only that: in controlled experiments we observe enormous variability in the years of imprisonment requested by different judges faced with the exact same case, sentences that can even be double or half of one another. Unlike biases, these are not systematic errors; they strike indiscriminately. But this does not mean that they cancel each other out: it is not enough to make the right choice on average, it must be made every time, because for each case, and for the life of the person subject to that decision, a single error is enough. And even if it is true that in many cases there is no absolutely correct choice, or that such a choice cannot be known, and the variability of decisions therefore reflects the finiteness of the resources available and our natural individual diversity, we still cannot accept that the success or failure of a job application or a defense in court should depend, by pure chance, on the person who happens to make the decision that concerns us. It is a problem of fairness and credibility.
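To make the distinction concrete, here is a minimal Python sketch, not taken from the book: the case, the bias and the noise levels are invented purely for illustration. It simulates many judges sentencing the same case; the shared bias shifts every sentence in the same direction, while noise scatters the sentences around that shifted value, and the familiar decomposition of the mean squared error into bias squared plus noise variance falls out of the simulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical illustration: suppose the "correct" sentence for a case is 5 years.
true_sentence = 5.0

# Simulate 1,000 judges. A shared bias shifts everyone in the same
# direction (like a constant wind); noise scatters each judge around
# that shifted value.
bias = 1.0       # systematic error shared by all judges (years), invented
noise_sd = 2.0   # spread of individual judgments (years), invented
sentences = true_sentence + bias + rng.normal(0.0, noise_sd, size=1000)

# Classic decomposition: mean squared error = bias^2 + noise variance.
errors = sentences - true_sentence
mse = np.mean(errors ** 2)
estimated_bias = np.mean(errors)
estimated_noise = np.std(errors)

print(f"bias:  {estimated_bias:.2f} years")
print(f"noise: {estimated_noise:.2f} years (judge-to-judge spread)")
print(f"MSE {mse:.2f} = bias^2 + noise^2 = {estimated_bias**2 + estimated_noise**2:.2f}")
```

Even if the shared bias were corrected, the spread between judges would remain: that residual scatter is exactly the noise the authors are pointing at.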

Decision-making hygiene

What to do, then? Fortunately, there is also good news. The first is that noise can be measured: you do not need to know the right answer around which the real answers are scattered in order to measure how far those answers are from one another. The second is that there are good practices that help us reduce noise. All the more reason to measure it in the different situations in which it occurs: not only to get an idea of the size of the problem (often larger than expected), but also to understand whether the decision-hygiene practices that follow are actually reducing it. Among these practices is the aggregation of independent decisions, which exploits the so-called wisdom of the crowd: if we can average evaluations or decisions made independently of one another, the one that results from the process is the best possible. Equally valuable is filtering the information we have access to: cognitive neuroscience tells us that having more information does not always mean making better choices. It is best to decide first what is useful to know for the purpose of the decision we have to make, and not to expose ourselves to irrelevant information (such as the appearance of a candidate for a job). It also helps to leave intuition for last and to suspend our tendency to settle on a decision before we have gathered all the necessary evidence. Finally, it is good practice to break the overall decision into segments we can control better, in a sort of algorithm that tells us what to consider and in what order. An attempt – ironically – to make a person's decision more like that of a machine, which, despite suffering from bias and obvious cognitive limitations compared to us (at least for now), does not suffer from any kind of noise: given the same set of information, it always makes the exact same decision.
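As a rough illustration of the two pieces of good news, here is a short Python sketch; the ratings are invented, and the improvement from averaging assumes the judgments are genuinely independent. It measures noise as the spread of independent judgments of the same case, with no ground truth required, and then shows how averaging several independent judgments narrows that spread.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: 8 evaluators independently score the same case
# (e.g. years of sentence, or a 0-100 rating). No "right answer" is
# needed to see how much they disagree with one another.
ratings = np.array([62, 71, 55, 80, 67, 49, 73, 60], dtype=float)

# One simple noise measure: the standard deviation of the judgments,
# i.e. how far the answers are from each other, not from the truth.
print(f"noise (spread of individual judgments): {ratings.std(ddof=1):.1f}")

# Decision hygiene by aggregation: averaging k independent judgments
# shrinks the spread of the final decision roughly by sqrt(k).
for k in (1, 4, 8):
    # Resample k judgments many times and average them, to estimate
    # how variable that averaged decision would be.
    simulated = rng.choice(ratings, size=(10_000, k), replace=True).mean(axis=1)
    print(f"averaging {k} judgments -> spread {simulated.std(ddof=1):.1f}")
```

The same measurement can be repeated after a hygiene practice is introduced, to check whether the spread has actually narrowed, which is precisely why the authors insist on measuring noise first.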

October 31, 2022 (last modified October 31, 2022 | 12:02)
