Automation of Welfare Systems: The downsides of striving for efficiency

AUTHOR: Annika Kannen

Summary: While automation promises to streamline processes and reduce fraud in welfare programs, it often introduces new barriers. Systems like the ones in Serbia, Denmark, and India rely on data aggregation and algorithms to determine eligibility, but they can be flawed, biased, and intrusive.

Automation is often thought to be the answer to the inefficiencies of welfare systems. Governments worldwide increasingly rely on automation, integrating technologies like artificial intelligence and automatic data analysis to speed up the application process and reduce fraud. In theory, this is supposed to make welfare more readily accessible, but in practice, the situation often looks quite different.

These automated systems often fail to deliver on their promises, instead introducing new barriers, exacerbating existing inequalities, and denying critical support. To better illustrate the issues automation can cause, we will look at three countries that have already integrated it into their welfare systems.

Serbia introduced a new Social Card registry in March 2022 as part of a broader welfare reform aimed at streamlining the delivery of social assistance. The registry combines personal data from multiple government sources, including a person’s income, their bank accounts, and property they own. The system is semi-automated: after the data is collected, the system flags discrepancies between the data and the eligibility criteria, and a social worker then reviews each flagged case and makes the final decision.

Despite the process not being fully automated, it still poses multiple risks. The collected data may be outdated, simply false, or biased. People in Serbia have reportedly been denied welfare based on assets they never owned.
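The semi-automated review loop described above can be sketched in a few lines. This is an illustrative simplification, not Serbia’s actual system: the field names, the income tolerance, and the routing labels are all assumptions made for the example.

```python
from dataclasses import dataclass, field

# Hypothetical tolerance: registry income more than 10% above the
# declared income counts as a discrepancy. The real threshold (if any)
# is not public.
INCOME_TOLERANCE = 0.10

@dataclass
class Applicant:
    declared_income: float
    registry_income: float
    declared_assets: set = field(default_factory=set)
    registry_assets: set = field(default_factory=set)

def flag_discrepancies(a: Applicant) -> list:
    """Compare self-declared data against aggregated registry data."""
    flags = []
    if a.registry_income > a.declared_income * (1 + INCOME_TOLERANCE):
        flags.append("income mismatch")
    # Registry assets may be outdated or misattributed, so a flag here
    # proves nothing by itself -- which is why the human review step
    # after flagging matters.
    undeclared = a.registry_assets - a.declared_assets
    if undeclared:
        flags.append("undeclared assets: " + ", ".join(sorted(undeclared)))
    return flags

def route(a: Applicant) -> str:
    """Flagged cases go to a social worker; clean cases pass through."""
    return "manual review" if flag_discrepancies(a) else "auto-approve"
```

Note that the sketch makes the core risk visible: if `registry_assets` contains a misattributed entry, the applicant is flagged through no fault of their own, and everything then depends on the quality of the manual review.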

Denmark established a semi-automated system back in 2012: Udbetaling Danmark (UDK) centralizes the administration of several kinds of welfare benefits, including pensions, child allowance, housing, and unemployment benefits, and is also tasked with detecting welfare fraud. It uses a “joint data unit” to collect and analyze personal data. In Denmark this goes beyond bank account or income information: the especially extensive and intrusive data collection can also draw on residency registers, health databases, and even social media monitoring. UDK officials reportedly scan publicly available social media profiles for evidence that contradicts information provided in welfare applications and to verify compliance with certain welfare program requirements. Although this information is generally publicly available, critics argue that its use violates privacy and often lacks context.

Samagra Vedika is a digitalized welfare delivery system developed by the Telangana state government in India. Like the other two systems, it combines data from multiple sources into one database, meant to provide a comprehensive overview of an individual’s financial situation. Something unique to this system is its use of proxies to determine eligibility. A proxy is an indirect indicator of someone’s finances, such as electricity consumption or vehicle ownership.
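Proxy-based eligibility can be sketched as a simple rule over indirect indicators. To be clear, the thresholds and variable names below are invented for illustration; the actual criteria used by Samagra Vedika are not reproduced here.

```python
# Hypothetical proxy thresholds used only for this sketch.
MAX_MONTHLY_KWH = 150        # electricity use treated as an income proxy
FOUR_WHEELER_DISQUALIFIES = True  # vehicle ownership treated as affluence

def eligible_by_proxies(monthly_kwh: float, owns_four_wheeler: bool) -> bool:
    """Infer 'affluence' from indirect indicators instead of verified income.

    The weakness of this approach: a shared electricity meter or a
    vehicle registered in someone's name but used by a relative can
    exclude a genuinely poor household, with no proxy capturing that
    context.
    """
    if owns_four_wheeler and FOUR_WHEELER_DISQUALIFIES:
        return False
    return monthly_kwh <= MAX_MONTHLY_KWH
```

The design choice worth noticing is that a proxy rule never measures the thing it decides on; it measures something correlated with it, and every household where the correlation breaks down is misclassified.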

After looking at the way these systems usually work and the data they base their decisions on, several areas of concern emerge:

– Data might simply be wrong. As already mentioned, these automated processes are not error-free. Data can be outdated, and data sets can be wrongly linked, leading to assets being misattributed and people unfairly losing their welfare benefits.

– Data can be biased. Especially when looking at more subjective variables like Denmark does in its fraud detection system, marginalized communities are often disproportionately affected. By including variables like “foreign affiliation”, geolocation data, or “unusual living arrangements” to detect cases of welfare fraud, these systems discriminate against marginalized groups such as migrants, who might still have family in their country of origin, or non-heterosexual households, whose situation might be flagged as “unusual”.

– These systems can be quite intrusive and violate the privacy of those applying for assistance. In Denmark, the collection of personal data extends to social media monitoring, where publicly available content such as photos or posts on social media platforms can be analyzed to detect potential fraud. While the data might be publicly accessible, the lack of consent and the often intrusive nature of this monitoring is a clear violation of privacy. Similarly, in Serbia, people have been monitored for financial activity that might not be linked to their welfare claims, like unexpected bank transactions. In India, the extensive use of data like electricity consumption and vehicle ownership to assess welfare eligibility raises concerns about the misuse of personal information, especially when these proxies don’t always reflect someone’s true financial situation.

– The process is often not very transparent. People are often denied based on data they did not know existed or did not know would affect their application. In Serbia, the Social Card system flags discrepancies between data, but recipients often don’t know why certain pieces of information have led to a decision, or even that this data was used in the first place. In Denmark, individuals flagged by UDK’s algorithms face significant barriers to understanding how and why they were targeted, especially when decisions are based on subjective criteria like “unusual living arrangements” or “foreign affiliation,” without clear guidelines for these variables. In India, individuals using Samagra Vedika often don’t know how proxy indicators such as electricity consumption or vehicle ownership impact their eligibility, leaving them unable to challenge these often arbitrary exclusions.

– People can be excluded from welfare based on digital barriers. Many of these systems rely heavily on digital literacy and access to technology. In India, Samagra Vedika disproportionately affects vulnerable groups like the elderly, rural residents, and low-income families who may not have easy access to the internet or smartphones. Similarly, in Denmark, some people, especially elderly beneficiaries or those with disabilities, are excluded from the system due to their inability to navigate the digital application process. In Serbia, individuals who are unfamiliar with digital systems or lack access to the necessary paperwork to correct errors in their data have been left without support, as the Social Card registry does not allow for easy human intervention when issues arise.
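The misattribution problem from the first concern above has a concrete technical root: linking records across databases without a reliable unique identifier. The sketch below shows a naive name-based linkage, a simplification invented for this example; it is not the actual matching logic of any of the three systems.

```python
def naive_link(applicants, asset_records):
    """Link asset records to applicants by normalized name alone.

    This mirrors a shortcut sometimes taken in large-scale entity
    resolution: without a unique identifier, two different people who
    share a name get merged, and one person's assets are attributed
    to the other.
    """
    linked = {}
    for person in applicants:
        key = person["name"].strip().lower()
        linked[person["id"]] = [
            r["asset"] for r in asset_records
            if r["owner_name"].strip().lower() == key
        ]
    return linked
```

Run on a welfare applicant who happens to share a name with an unrelated vehicle owner, this linkage attributes the vehicle to the applicant, and a proxy rule like the one sketched earlier would then wrongly disqualify them.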

These examples highlight the dangers of automated welfare systems that lack adequate safeguards. While automation is designed to make welfare more efficient, the unintended consequences—such as wrongful exclusions, discrimination, privacy violations, lack of transparency, and digital exclusion—demonstrate that these systems can often be harmful to those they are meant to help.

Sources:

https://www.amnesty.org/en/latest/research/2023/12/trapped-by-automation-poverty-and-discrimination-in-serbias-welfare-state/#executivesummary

https://www.amnesty.org/en/latest/research/2024/04/entity-resolution-in-indias-welfare-digitalization/

https://www.amnesty.org/en/documents/eur18/8709/2024/en/
