Adoption of AI in the public sector has potential benefits: speeding up decision-making, making processes more efficient, making decisions fairer, and spotting things that humans would miss. Considerable resources are being poured into algorithms intended to make public administration and decision-making more efficient. But many algorithms are black boxes, and the “reasoning” behind their decisions remains unknown. Is it such a good idea to use the public sector as a crash test dummy for new technology when the consequences are not yet fully understood? Will the technical advances really make our systems better? A closer look at implementations so far suggests that factors other than purely technical ones are more likely to determine the success or failure of AI adoption in the public sector.
Robots and automation are not things of the future; they are popping up in various places in the public sector. AI is increasingly used for both small and big decisions, such as sorting incoming mail, answering questions via chatbots, and even distributing social benefits. In the municipality of Trelleborg, Sweden, the administration of jobseekers’ applications for income support has been overseen and decided upon by an AI for several years. The initiative has reportedly led to better citizen service, and employees at the municipality can focus on getting people into work. Trelleborg reduced its support costs by about 15 percent per year due to more people finding work. At a glance, this seems an excellent outcome. However, critics such as the Union for Professionals raise questions about whether the process is transparent and fair to citizens.
Three recent reports, by the European Commission, AlgorithmWatch and the Council of Ministers respectively, give an overview of cases where AI has been used in the public sector so far. The reports build on self-reported case studies and offer examples of where AI adoption has been successful and where ethical problems have occurred. In these reports, two categories of obstacles to successful AI adoption in the public sector can be discerned:
One obstacle relates to how the technologies are used per se. There is a risk of discrimination when predictions are based on faulty assumptions, as happened to UK students receiving their grades in 2020. The UK government decided to entrust students’ marks to an algorithm, since exams could not be held during the COVID-19 lockdown. When A-level grades were announced, nearly 40 percent were lower than the teachers’ assessments. The lower-than-expected grades were more frequent for students from less well-to-do areas, because the algorithm incorporated historical grades from the area each student lived in. This produced discriminatory results: students from schools in wealthier areas, often with better exam results at group level, were put at an advantage over their state-educated peers.
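The mechanism can be sketched in a few lines. This is not the actual grading model; it is a toy illustration, with an invented blending weight, of how anchoring an individual's grade to their school's historical average overrides teacher assessments and penalises strong students at historically weaker schools:

```python
# Toy illustration only (NOT the real 2020 grading algorithm): blend a
# teacher's assessment with the school's historical mean grade. The
# `weight` parameter is invented for this sketch.

def moderate(teacher_grade: float, school_historical_mean: float,
             weight: float = 0.6) -> float:
    """Pull an individual grade toward the school's historical mean."""
    return (1 - weight) * teacher_grade + weight * school_historical_mean

# Two equally able students, both assessed at 85/100 by their teachers:
strong_school = moderate(85, school_historical_mean=80)  # stays close to 85
weak_school = moderate(85, school_historical_mean=55)    # pulled far below 85

print(strong_school, weak_school)
```

The point is that `school_historical_mean` is a group-level variable: it changes nothing about the individual student, yet dominates the outcome.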
The same risk of discrimination is embedded in the profiling tool for job opportunities that the employment service in Sweden is trying out. The profiling tool's algorithm includes 26 variables. Some are sensible (e.g., registration date, historical decisions, efforts in the last 10 years, and level of education) and some questionable (e.g., median income in the postal code and the proportion of unemployed in the postal code). Is it sensible that my neighbour’s salary should increase or decrease my chance of getting support? Tools like this draw conclusions about individuals based on the assumptions built into the algorithm. No matter how advanced the algorithm, or how much data is used to train it, fairer decisions need variables that not only yield efficient predictions but also reflect our values. This is an exercise in ethics and morals, at which AI (so far?) is not very apt.
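To make the proxy-variable problem concrete, here is a hypothetical linear scoring function, not the employment service's actual model, with invented weights. Two people with identical personal circumstances receive different scores purely because of where they live:

```python
# Hypothetical illustration (invented weights, NOT the real 26-variable
# model): area-level proxies shift the score for otherwise identical
# individuals.

def profile_score(years_education: int, months_unemployed: int,
                  area_median_income: float, area_unemployment: float) -> float:
    return (0.5 * years_education
            - 0.3 * months_unemployed
            + 0.01 * (area_median_income / 1000)  # proxy: neighbours' income
            - 2.0 * area_unemployment)            # proxy: neighbours' joblessness

# The same person, placed in two different postal codes:
wealthy_area = profile_score(12, 6, area_median_income=45000, area_unemployment=0.04)
poorer_area = profile_score(12, 6, area_median_income=25000, area_unemployment=0.12)

print(wealthy_area, poorer_area)  # second score is lower on identical personal facts
```

Nothing about the individual changed between the two calls; only the area-level proxies did, which is exactly the neighbour's-salary objection raised above.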
The other type of obstacle to successful AI adoption is related to the organisational structure of the public sector. The idea of the “smart city” is sometimes part of the visionary bright future that AI will lead us to. A “smart city” optimises the management of traffic, public transport, logistics and other services to achieve lower energy consumption, reduced costs and higher comfort. An important prerequisite for the AI in such a city is access to data. Data can be created by humans (e.g., administrative and financial data) or recorded by different types of sensors (e.g., in physical environments and wearable devices). To use AI to build a “smart city”, data from these different systems and sensors need to flow between public sector bodies. Espoo municipality in Finland conducted an experiment working towards a smart city and found the biggest challenge to be sharing and combining data that was spread across several sectors within the municipality. Each sector had its own computer system, and combining data from the different systems was demanding since the data were not standardised. It was also challenging that some data were sensitive, and that the municipality had high standards of data security.
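The standardisation problem Espoo ran into can be illustrated with a minimal sketch. The field names, formats and records below are invented; the point is only that two municipal systems describing the same resident differently must be mapped onto a shared schema before any city-wide AI can use the combined data:

```python
# Invented example records from two hypothetical municipal systems that
# describe the same resident with different field names and formats.
social_services_record = {"personId": "19800101-1234", "postCode": "02100"}
transport_record = {"resident_id": "19800101-1234", "zip": "02100 ESPOO"}

def to_common_schema(record: dict) -> dict:
    """Map either system's record onto one shared schema."""
    field_map = {"personId": "person_id", "resident_id": "person_id",
                 "postCode": "postal_code", "zip": "postal_code"}
    out = {}
    for key, value in record.items():
        common = field_map.get(key, key)
        if common == "postal_code":
            # Normalise formats, e.g. drop a trailing city name.
            value = value.split()[0]
        out[common] = value
    return out

# Only after this mapping do the two silos agree on the same resident:
print(to_common_schema(social_services_record) == to_common_schema(transport_record))
```

In practice every extra silo adds another mapping like `field_map`, which is why non-standardised systems make data sharing so demanding.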
The structure of the public sector works like a set of silos, and the different parts are oftentimes quite isolated from one another. This is not the best condition for a “smart city”, where data need to be collected from citizens, devices, buildings and traffic to manage resources and services efficiently. With the structure of the public sector being what it is today, the full advantage of AI will not be realised.
The public sector poses challenges for AI adoption that are different from those of consumer goods, where a free market will to some degree regulate the efficiency of products. Taxpayers expect good-quality decisions that also reflect our values in terms of fairness and transparency. The risk of algorithms exacerbating existing prejudice needs to be addressed before AI can be fully adopted in our public institutions. The challenge is to coordinate, control and assess the relevance and quality of AI used in the public sector. Coordination between different sectors within a municipality, and between municipalities, is needed so that the public sector, and consequently the citizens who rely on its services, does not become a crash test dummy for new technology. Arguably, AI adoption in the public sector will prove to be more of a human process than a technical one.