Swedish authorities must stop using the social welfare agency’s discriminatory artificial intelligence systems

by time news

The Swedish Social Insurance Agency (Försäkringskassan) must immediately stop using opaque artificial intelligence (AI) systems, Amnesty International said today, following an investigation by Lighthouse Reports and Svenska Dagbladet into the Swedish welfare system which revealed that the system has unfairly singled out marginalized groups for welfare fraud investigations.

The research found that the system disproportionately flagged certain groups for further investigation in relation to social security fraud, including women, people of foreign origin (born in other countries or whose parents were born in other countries), low-income people and those without college degrees. Amnesty International supported the research by reviewing the analysis and methodology used by the project team, providing input and suggestions, and examining the findings within a human rights framework.

“The Swedish Social Insurance Agency’s invasive algorithms discriminate against people based on gender, foreign origin, income level and level of education. This is a clear example of a biased system violating people’s rights to social security, equality, non-discrimination and privacy,” said David Nolan, senior researcher at Amnesty Tech.

The Swedish Social Insurance Agency has been using the machine learning system since at least 2013. The system assigns algorithmically calculated risk scores to people claiming social security benefits in order to detect benefit fraud.

Försäkringskassan carries out two types of checks: the usual review by caseworkers, which does not presume malicious intent and allows for the possibility that people have simply made mistakes, and a second type carried out by the “control” department, which handles cases where malicious intent is suspected. People given the highest risk scores by the algorithm were automatically subjected to investigations by fraud investigators within the agency, with “malicious intent” presumed from the start.
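To make that routing step concrete, here is a minimal sketch of how a score threshold could send a claim either to ordinary casework or to a control unit. It is a hypothetical illustration: the Claim structure, the 0.9 cutoff and the track names are assumptions, not confirmed details of the agency’s system.

```python
# Hypothetical sketch: threshold-based routing of welfare claims by risk score.
# The data structure, cutoff and track names are illustrative assumptions,
# not confirmed details of Försäkringskassan's system.

from dataclasses import dataclass


@dataclass
class Claim:
    claim_id: str
    risk_score: float  # produced upstream by a machine learning model (not shown)


def route_claim(claim: Claim, control_threshold: float = 0.9) -> str:
    """Return the review track for a claim based on its algorithmic risk score."""
    if claim.risk_score >= control_threshold:
        # Highest-scoring cases are escalated automatically to fraud investigators,
        # i.e. the "control" track that presumes possible malicious intent.
        return "control_department"
    # All other cases go to ordinary caseworker review, which treats
    # discrepancies as possible honest mistakes rather than fraud.
    return "standard_casework"


if __name__ == "__main__":
    for claim in [Claim("A-001", 0.35), Claim("A-002", 0.95)]:
        print(claim.claim_id, "->", route_claim(claim))
```

The design point the sketch exposes is that once a single numeric cutoff decides which track a person enters, any bias encoded in the upstream score is carried straight through into the presumption of intent.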

Fraud investigators who examine files selected by the system have enormous power. They can examine social media accounts, obtain data from institutions such as schools and banks, and even interview the affected person’s neighbors as part of their investigation.

“The whole system resembles a witch hunt against anyone selected to be the subject of welfare fraud investigations,” said David Nolan.

People unfairly targeted by the biased social security system have complained that they end up facing delays and legal obstacles in accessing the benefits to which they are entitled.

“One of the main problems with artificial intelligence systems used by social security bodies is that they can exacerbate existing inequalities and discrimination. Once a person is flagged, they are treated as a suspect from the start. This can be extremely dehumanizing,” said David Nolan.

Although the project team at Lighthouse Reports and Svenska Dagbladet submitted freedom of information requests, the Swedish authorities have not been entirely transparent about the internal workings of the system.

Despite the welfare agency’s refusal, the Svenska Dagbladet and Lighthouse Reports team managed to access disaggregated data on the outcomes of fraud investigations conducted on a sample of cases flagged by the algorithm, along with the demographic characteristics of the people included in the system. This was only possible because the Social Security Inspectorate (ISF) had previously requested the same data.

Using this data, the team was able to test the algorithmic system against six standard statistical fairness metrics, including demographic parity, predictive parity and false positive rates. Each metric takes a different approach to measuring bias and discrimination in an algorithmic system, and the results confirmed that the Swedish system disproportionately targets groups that are already marginalized in Swedish society.
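For readers unfamiliar with these metrics, the sketch below shows how three of them (demographic parity as the flag rate, predictive parity as precision, and the false positive rate) can be computed per group from investigation outcomes. It is a simplified illustration on made-up toy data, not the journalists’ actual analysis; the column names and the pandas-based approach are assumptions.

```python
# Illustrative group-fairness checks on hypothetical data.
# 'flagged' = selected by the algorithm; 'confirmed_fraud' = investigation outcome.

import pandas as pd


def fairness_report(df: pd.DataFrame, group_col: str) -> pd.DataFrame:
    """Compute per-group flag rate, precision and false positive rate."""
    rows = []
    for group, g in df.groupby(group_col):
        flagged = g["flagged"]
        fraud = g["confirmed_fraud"]
        rows.append({
            group_col: group,
            "flag_rate": flagged.mean(),                                   # demographic parity
            "precision": (flagged & fraud).sum() / max(flagged.sum(), 1),  # predictive parity
            "false_positive_rate": (flagged & ~fraud).sum() / max((~fraud).sum(), 1),
        })
    return pd.DataFrame(rows)


# Made-up toy data for demonstration only.
toy = pd.DataFrame({
    "foreign_background": [True, True, True, False, False, False],
    "flagged":            [True, True, False, False, True, False],
    "confirmed_fraud":    [False, True, False, False, True, False],
})
print(fairness_report(toy, "foreign_background"))
```

Large gaps between groups in the flag rate or the false positive rate, without corresponding gaps in precision, are the kind of pattern these metrics are designed to surface.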

Deep-rooted prejudices

The biases inherent in the system used by the Swedish Försäkringskassan have long been a cause for concern. A 2018 ISF report found that the algorithm used by the agency, “in its current design, does not respect equal treatment.” The Swedish Social Insurance Agency dismissed the analysis as flawed and based on questionable grounds.

Separately, a data protection officer who previously worked for the Swedish Social Insurance Agency warned in 2020 that the entire operation violated European data protection rules, because the authority has no legal basis to profile people.

Because such systems pose a high risk to people’s rights, the recently adopted European Artificial Intelligence Act requires that AI systems used by authorities to determine access to essential public services and benefits comply with rigorous technical, transparency and governance standards, including an obligation on those deploying them to carry out a human rights risk assessment and put mitigation measures in place before use. The Act also prohibits specific systems that can be considered social scoring tools.

“If the Swedish Social Insurance Agency continues to use this system, Sweden could find itself in a scandal similar to the one in the Netherlands, where tax authorities falsely accused tens of thousands of parents and guardians, most of them from low-income families, of fraud and disproportionately harmed people from ethnic minorities,” said David Nolan.

“Given the opaque response from the Swedish authorities, which prevents us from knowing how the system works, and the imprecise framing of the ban on social scoring in the AI Act, it is difficult to determine where this specific system would fall within the Act’s risk-based classification of AI systems. However, there is sufficient evidence to indicate that the system violates the right to equality and freedom from discrimination. The system must therefore be stopped immediately.”

Learn more

On 13 November 2024, Amnesty International’s Coded Injustice report revealed that artificial intelligence tools used by the Danish social welfare agency are creating harmful mass surveillance, which risks discriminating against people with disabilities, racialized groups, migrants and refugees.

On 15 October 2024, Amnesty International and 14 other coalition partners led by La Quadrature du Net (LQDN) filed a complaint with the Council of State, France’s highest administrative court, demanding an end to the use of the algorithmic risk-scoring system used by the National Family Benefits Fund (CNAF).

In August 2024, the European Artificial Intelligence Act came into force. Amnesty International, as part of a coalition of civil society organizations led by the European Digital Rights Network (EDRi), has called for EU regulation of AI that protects and promotes human rights.

In 2021, the Amnesty International report Xenophobic machines revealed how racial profiling was included in the design of the Dutch tax authorities’ algorithmic system that screened claims for childcare subsidies as potentially fraudulent.

Interview between Time.news Editor and David Nolan, Senior Researcher at Amnesty Tech

Time.news Editor: David, thank you for joining us today. The recent findings about the Swedish Social Insurance Agency’s use of opaque AI systems have raised significant concerns. Can you explain what led to these revelations and why they are so alarming?

David Nolan: Thank you for having me. The investigation by Lighthouse Reports and Svenska Dagbladet uncovered that the Swedish Social Insurance Agency has been using a machine learning system since 2013 to assign risk scores to individuals claiming social benefits, with the intent of detecting welfare fraud. What’s alarming is that this system disproportionately targets marginalized groups, such as women, individuals of foreign descent, low-income individuals, and those without college degrees, for investigations that often presume malicious intent.

Editor: That sounds extremely troubling. How does this algorithm determine who gets flagged for fraud investigations?

Nolan: The algorithm calculates risk scores based on various data points, and those deemed high-risk are automatically subjected to rigorous investigations by fraud monitors. These monitors have far-reaching powers, such as accessing social media accounts and interviewing neighbors, which essentially treats these individuals as suspects from the moment they are flagged, completely disregarding the possibility of innocent mistakes.

Editor: It seems this use of AI creates a bias where individuals are presumed guilty. How does that fit into the larger context of human rights?

Nolan: Exactly. The current system violates fundamental rights, including the right to social security, equality, and privacy. By relying on biased algorithms, we risk exacerbating existing inequalities and further marginalizing already vulnerable populations. The situation resembles a witch hunt, where individuals targeted for investigation are further dehumanized. Our research demonstrated that the algorithm reflects and entrenches deeper societal prejudices, undermining the principles of fairness and equality.

Editor: What have been the responses from the Swedish authorities regarding these findings?

Nolan: The Swedish Social Insurance Agency has largely dismissed the findings as flawed and based on questionable grounds. However, the lack of transparency into the system’s operation raises more questions than answers. Despite numerous requests for information, they have not provided adequate insight into how the algorithm works or how decisions regarding investigations are made.

Editor: That’s deeply concerning. Have there been any legal frameworks introduced to regulate such uses of AI in public services?

Nolan: Yes, the recent European Regulation on Artificial Intelligence introduces stricter standards for AI systems used by authorities, especially those determining access to essential public services and benefits. It mandates human rights risk assessments and transparency measures. However, there is a significant gap when it comes to enforcement and ensuring compliance, particularly in the Swedish context.

Editor: What potential implications could this have for Sweden if these practices continue unchecked?

Nolan: If the Swedish Social Insurance Agency persists with its current system, Sweden risks enduring a scandal akin to what occurred in the Netherlands, where tax authorities wrongfully accused thousands of parents of fraud, severely impacting low-income families and ethnic minorities. The potential for similar injustices in Sweden is very real and warrants urgent attention.

Editor: It seems like there’s an urgent need for reform. What steps can be taken to rectify the issues you’ve highlighted regarding AI in the welfare system?

Nolan: First and foremost, the Swedish authorities must halt the use of opaque AI systems and prioritize transparency. Engaging in meaningful consultations with affected communities and human rights organizations can help develop fairer practices. Additionally, there needs to be robust oversight to ensure compliance with the new EU regulations. In essence, we must prioritize human rights and equality over algorithmic efficiency.

Editor: Thank you, David, for your insights and for shedding light on such a critical issue. It’s imperative that we continue to monitor and advocate for the rights of those impacted by these systems.

Nolan: Thank you for raising awareness about this matter. It’s essential that we keep the conversation going to drive meaningful change.
