Investigation into the abuses of the family allowance funds' algorithm

By Time News

2023-12-04 07:00:07

Tell me who you are, and the algorithm will say whether you are suspicious. At the National Family Allowance Fund (CNAF), where the hunt for declaration errors and fraud has been industrialized in recent years, one tool has been erected as a totem: data mining. The prioritization of files to be checked now relies almost exclusively on a “risk score” calculated for each beneficiary from a battery of personal criteria.

This system is the pride of the institution, which praises its performance. But warnings are mounting about the possible abuses of this algorithm, known as the “incoming-data data mining model” (DMDE). Several associations have accused the CNAF of discriminating against the most vulnerable among the people to whom it pays active solidarity income (RSA), housing assistance and family allowances each month.

The former Defender of Rights Jacques Toubon criticized in 2020 an approach based on “prejudices and stereotypes,” while many media outlets, from Le Monde and Radio France to StreetPress, have documented the distress of beneficiaries confronted with an implacable system.

How we investigated the CAF algorithm

Each month, 13.8 million recipient households are scored by the National Family Allowance Fund (CNAF) to prioritize the organization’s checks. But the recipe of this algorithm, which has concrete effects on hundreds of thousands of families, is kept secret. Here is how Le Monde and the journalists’ collective Lighthouse Reports investigated to open the “black box” of the CNAF risk score:

our methodology and our analysis of the algorithm’s source code are detailed here; all the criteria used by the CNAF to rate beneficiaries can be viewed here; our exchanges with the CNAF are traced here; for transparency, Le Monde publishes here the documents transmitted by the CNAF as part of this investigation. The association La Quadrature du Net has also posted the algorithm’s source code here.

Has the CNAF created a monster? To find out, Le Monde explored, with the journalists’ collective Lighthouse Reports, how this algorithm works and what effects it has. Our investigation shows that it was not designed to identify suspicious behavior, but uses personal characteristics of beneficiaries, some of them discriminatory, in order to assign them a risk of fraud.

A secret recipe

At the family allowance funds (CAF), data mining has been tested since 2004, in the local funds of Dijon and Bordeaux. Its use was generalized across the country in 2010, in a political context marked by the hunt for social fraud: Nicolas Sarkozy promised during his campaign for the 2007 presidential election to “punish the fraudsters,” before setting up, once elected, a national delegation to fight fraud.

The principle is simple: it involves determining the profiles of beneficiaries most likely to have committed irregularities in their declarations. To do this, the CNAF conducts a gigantic life-size test: it sends its 700 controllers to the homes of 7,000 randomly selected beneficiaries to check their situations in detail. Statisticians then look at the common characteristics of cases leading to claims for sums wrongly paid out (“overpayments,” in internal jargon). They look for correlations with the extensive data they hold on the offending beneficiaries – as many as a thousand separate pieces of information about each person.
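The mechanism described above can be sketched in a few lines of code. This is a purely hypothetical illustration of the general approach (score files by how strongly their characteristics correlate with overpayments in a random audit sample); the feature names, weights and data are invented, and the CNAF's actual model and criteria are not reproduced here.

```python
import random

random.seed(0)

# Hypothetical sketch: derive a "risk score" from a randomly audited
# sample, as described in the article. Feature names and effect sizes
# are invented for illustration only.

def simulate_audit(n=7000):
    """Randomly audited files: binary features plus an overpayment outcome."""
    sample = []
    for _ in range(n):
        features = {
            "income_changed_recently": random.random() < 0.3,
            "single_parent": random.random() < 0.2,
        }
        # In this toy data, only the first feature raises the outcome rate.
        p = 0.05 + 0.15 * features["income_changed_recently"]
        sample.append((features, random.random() < p))
    return sample

def fit_weights(sample):
    """Weight = overpayment rate with the feature minus the rate without it."""
    def rate(xs):
        return sum(xs) / len(xs) if xs else 0.0
    weights = {}
    for name in sample[0][0]:
        with_f = [out for f, out in sample if f[name]]
        without = [out for f, out in sample if not f[name]]
        weights[name] = rate(with_f) - rate(without)
    return weights

def risk_score(features, weights):
    """Higher score means the file is prioritized for a check."""
    return sum(w for name, w in weights.items() if features[name])

sample = simulate_audit()
weights = fit_weights(sample)
print(sorted(weights, key=weights.get, reverse=True))
```

The point of the sketch is the critique at the heart of the investigation: such a score ranks people by who they are (their recorded characteristics), not by any observed suspicious behavior.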


