AUTHOR: Inês Alves Tomé
Introduction
With the establishment of AI in our daily lives, we have quickly learned how to incorporate this technology to make those lives easier. However, we have also started to observe several changes in our surroundings, particularly within the public sector, including national governments. In recent years, several governments have adopted systems in which algorithms make decisions automatically, without any form of human supervision. These systems are known as “Automated Decision-Making Systems” (ADMs). While they can be useful in many contexts, such as analysing vital statistics and handling routine administrative tasks, they have also raised significant concerns regarding human rights. One key issue is that AI systems are programmed by humans who may hold pre-existing biases; these biases can shape the decision-making of ADMs and disproportionately affect certain groups, particularly minorities. This blog post examines how such systems have been implemented in Denmark and The Netherlands, the consequences of those implementations, and closes with a concluding reflection.
ADMs and Decision-Making in Social Welfare Applications
Automated Decision-Making Systems (ADMs) are increasingly being used to make core decisions at the government level. One prominent use is in detecting potential cases of fraud when individuals apply for social welfare benefits. While this practice can be observed in several countries, this blog focuses specifically on Denmark and The Netherlands.
According to investigations conducted by Amnesty International, ADMs rely on the collection of sensitive data in order to carry out their assessments. This includes not only public information but also highly personal data, such as date of birth, residency status, family relationships, race, and sexual orientation, among others. In Denmark, authorities have reportedly introduced legislation to facilitate this data collection and to establish a legal framework that can justify it if its legality is challenged. In The Netherlands, concerns about bias predate the introduction of ADMs: even before these systems were implemented, the Dutch Tax Authorities were using criteria such as an applicant’s nationality to assess the likelihood of fraud in childcare benefit applications. Those pre-existing biases were subsequently carried over into the automated systems, compounded by the fact that decisions are now made without meaningful human oversight.
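To make the mechanism concrete, here is a minimal, deliberately simplified sketch of how a nationality-derived feature can tip an automated fraud score. Everything in it, the field names, the weights, the threshold, is invented for illustration; the internals of the actual Dutch and Danish systems are not public.

```python
# Hypothetical illustration only: the field names, weights, and threshold
# below are invented and do not describe the real Dutch or Danish systems,
# whose internals are not public.

RISK_WEIGHTS = {
    "incomplete_paperwork": 0.2,  # plausible administrative signal
    "prior_correction": 0.3,      # plausible administrative signal
    "foreign_nationality": 0.6,   # proxy for a protected characteristic
}

REVIEW_THRESHOLD = 0.5  # invented cut-off for "flag for investigation"

def fraud_risk_score(applicant: dict) -> float:
    """Sum the weights of every risk factor present on the application."""
    return sum(w for factor, w in RISK_WEIGHTS.items() if applicant.get(factor))

def flag_for_review(applicant: dict) -> bool:
    """An 'automated decision': True means the file goes to investigation."""
    return fraud_risk_score(applicant) >= REVIEW_THRESHOLD

# Two applicants with identical paperwork; only nationality differs.
citizen = {"incomplete_paperwork": True}
non_citizen = {"incomplete_paperwork": True, "foreign_nationality": True}

print(flag_for_review(citizen))      # False (score 0.2)
print(flag_for_review(non_citizen))  # True  (score 0.8)
```

The two applicants submit identical paperwork, yet only one is flagged, and because the weights are opaque to the applicant, the decisive role of nationality is invisible from the outside.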
The lack of transparency and accountability in ADM decision-making creates significant risks. It allows discriminatory factors to influence outcomes without proper scrutiny. In the case of The Netherlands, such issues were already present before automation, but the use of ADMs has arguably amplified their impact. This raises an important question: what are the direct consequences for social welfare applicants when governments rely heavily, or even exclusively, on automated decision-making systems?
ADMs and Their Human Rights Implications
The lack of ethical safeguards in the use of ADMs has a direct impact on applicants’ human rights. These systems have the power to exclude individuals from essential services, particularly those belonging to marginalised groups. Furthermore, several additional risks have already been identified in relation to their widespread adoption.
Amnesty International highlights that ADMs can: “exclude people from access to essential services; replicate inequities along racial, gender, migration status, disability and socio-economic lines; give people impacted limited ability to challenge decisions made about them, leaving them little to no recourse to remedy; and clamp down on the right to peaceful protest through the deployment of mass surveillance technologies at scale, which particularly impact already marginalised communities.” The organisation further warns that as these systems become more embedded in society, the risks associated with them continue to grow.
Some of these risks are already evident in Denmark and The Netherlands. In both countries, individuals from different backgrounds, particularly non-citizens, face a higher level of scrutiny than nationals. In Denmark, for example, a system known as “Model Abroad” is used to assess foreign applicants. It categorises individuals into subgroups based primarily on their nationality, and applicants whose nationalities are associated with medium to high ties to non-EEA countries are subjected to additional scrutiny and investigation. Similarly, in The Netherlands, Amnesty International found that applicants without Dutch citizenship are more likely to be flagged for further review. In both countries, such investigations have been reported to be disproportionately intrusive, often causing significant psychological stress to those affected.
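Amnesty’s description of “Model Abroad” suggests a grouping step that runs on nationality alone, before any individual conduct is examined. The sketch below is a hypothetical reconstruction of such logic; the tier labels and country assignments are invented, since the real tables have not been published.

```python
# Hypothetical reconstruction of a nationality-grouping step like the one
# Amnesty describes for Denmark's "Model Abroad". All tier assignments
# below are invented; the real tables are not public.

NATIONALITY_TIER = {
    "DK": "low",   # invented example assignments
    "SE": "low",
    "AA": "medium",
    "BB": "high",
}

EXTRA_SCRUTINY_TIERS = {"medium", "high"}

def scrutiny_tier(nationality: str) -> str:
    """Map a nationality code to its presumed ties to non-EEA countries.
    Unknown nationalities default to the highest-scrutiny tier."""
    return NATIONALITY_TIER.get(nationality, "high")

def needs_extra_investigation(nationality: str) -> bool:
    """The flag fires on group membership alone: no application detail,
    conduct, or evidence has been looked at yet."""
    return scrutiny_tier(nationality) in EXTRA_SCRUTINY_TIERS

print(needs_extra_investigation("DK"))  # False
print(needs_extra_investigation("BB"))  # True
```

The point of the sketch is structural: the extra investigation is triggered by who the applicant is, not by anything they have done.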
By introducing these additional layers of scrutiny, and by disproportionately targeting individuals based on characteristics such as nationality, ethnicity, or migration status, these systems contribute to exclusion and economic disadvantage. As a result, certain groups are systematically placed at a higher risk of harm, raising serious concerns about discrimination and the protection of fundamental rights.
Conclusion
In conclusion, Automated Decision-Making Systems have a significant impact on minority groups and can directly undermine their rights. The lack of transparency surrounding how these systems operate creates space for existing societal biases to persist, often at the expense of already marginalised communities. As discussed, even countries widely regarded for their commitment to social rights, such as Denmark and The Netherlands, have experienced injustices linked to the use of these systems in the distribution of social benefits. This demonstrates that the issue is not confined to less regulated contexts, but can also emerge within well-established welfare states. Moreover, as AI tools continue to be implemented in increasingly complex areas, the potential for harm grows accordingly.
In response to these concerns, Amnesty International has established the Algorithmic Accountability Lab, an initiative aimed at scrutinising the use of ADMs in the public sector and exposing their risks to human rights. Its objectives include ensuring that human rights impact assessments are conducted before such systems are adopted, promoting systems that are demonstrably effective and continuously monitored, and securing adequate protections and remedies in cases where these systems fail.
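One concrete form such continuous monitoring could take, a common fairness audit rather than anything Amnesty specifically prescribes, is simply comparing how often each group is flagged; large gaps between groups in similar circumstances are a warning sign worth investigating.

```python
from collections import defaultdict

def flag_rates_by_group(decisions):
    """decisions: iterable of (group, was_flagged) pairs.
    Returns the fraction of applicants flagged in each group,
    a basic disparity audit of an automated system's output."""
    flagged = defaultdict(int)
    totals = defaultdict(int)
    for group, was_flagged in decisions:
        totals[group] += 1
        flagged[group] += int(was_flagged)
    return {group: flagged[group] / totals[group] for group in totals}

# Toy data: with comparable circumstances, non-citizens are flagged twice as often.
audit = flag_rates_by_group([
    ("citizen", False), ("citizen", False), ("citizen", True),
    ("non_citizen", True), ("non_citizen", True), ("non_citizen", False),
])
print(audit)  # {'citizen': 0.333..., 'non_citizen': 0.666...}
```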
Sources:
Amnesty International. (2021, October 25). Dutch childcare benefit scandal an urgent wake-up call to ban racist algorithms. Amnesty International. https://www.amnesty.org/en/latest/news/2021/10/xenophobic-machines-dutch-child-benefit-scandal/
Amnesty International. (2024, November 12). Denmark: AI-powered welfare system fuels mass surveillance and risks discriminating against marginalized groups – report. Amnesty International. https://www.amnesty.org/en/latest/news/2024/11/denmark-ai-powered-welfare-system-fuels-mass-surveillance-and-risks-discriminating-against-marginalized-groups-report/
Amnesty International. (2025a, December 9). Algorithmic accountability toolkit. Amnesty International. https://www.amnesty.org/en/latest/research/2025/12/algorithmic-accountability-toolkit/#h-introduction
Amnesty International. (2025b, December 9). Global: Amnesty International launches an Algorithmic Accountability toolkit to enable investigators, rights defenders and activists to hold powerful actors accountable for AI-facilitated harms. Amnesty International. https://www.amnesty.org/en/latest/news/2025/12/global-amnesty-international-launches-an-algorithmic-accountability-toolkit-to-enable-investigators-rights-defenders-and-activists-to-hold-powerfu/
Image source:
AIwire. (2022, January 28). The importance of humanized autonomous decision-making in AI. HPCwire. https://www.hpcwire.com/aiwire/2022/01/28/the-importance-of-humanized-autonomous-decision-making-in-ai/

