
Blog post – AI Futures

A (relatively) specific proposal on how to regulate algorithmic disclosure of recommendation systems


[This is part three out of three in a blog post series about transparency and recommendation systems. Here is part one and part two]

In a previous blog post we wrote about the need for a policy requiring companies to divulge information about the goal function of their consumer-facing recommendation systems. We followed this with a post about the potential problems of such a policy, concluding that there are five concerns to be mindful of when creating it: (1) it may result in increased bureaucracy; (2) the algorithms might be too complex for users to comprehend; (3) it may reduce profit for businesses; (4) it may enable some users to exploit (i.e., “game”) the recommendation systems; and (5) it may hinder technology providers from swiftly responding to urgent threats. Here we will outline a policy suggestion that addresses those issues while still providing a solution to the fundamental problem posed by companies not disclosing information about their recommendation systems.

Financial disclosure framework

Our aim is to find a way for companies to divulge important information, while limiting the negative effects such a disclosure could have on content creation, content moderation, and algorithmic innovation. Looking at other successful disclosure policies, we suggest that many of these issues can be solved by modeling the policy after financial disclosure requirements. In the US, publicly traded companies are required to disclose standardized financial information to a government agency (the SEC). Similarly, in the EU, listed companies are required to disclose financial information in a standardized form. The EU also requires large listed companies (over 500 employees) to disclose non-financial information about “activity relating to, as a minimum, environmental, social and employee matters, respect for human rights, anti-corruption and bribery matters”.

Specific proposal

Building on this framework, we propose a policy that requires large listed companies to disclose structured information about the consumer-facing recommendation systems used on their platforms. This should at a minimum include detailed information about the variables used in the algorithm as well as a full description of the goal function of each ranking, up to the point at which the final recommendation is displayed to the consumer. To avoid some of the potential negative consequences discussed in the previous blog post, we suggest that the information should be disclosed to a government agency that in turn has the responsibility to make that information available to the public in an accessible and standardized format. If a company can make the case that withholding the information from the general public is in the public’s interest, the algorithm could be kept under embargo for a set period of time. Spam filters and hate-speech-detecting algorithms are examples of systems that benefit the public by being secret.

“Building on [financial disclosure requirements], we propose a policy that requires large listed companies to disclose structured information about the consumer-facing recommendation systems used on their platforms.”
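To make the idea of “structured information” concrete, here is a minimal sketch of what one machine-readable filing entry might contain. Everything in it is an illustrative assumption on our part — the field names, the weighted-sum goal function, and the numbers are invented for this example and do not describe any real company’s system or any actual filing format:

```python
# Hypothetical disclosure entry for one ranking step of a
# consumer-facing feed. All names and weights are invented.
disclosure = {
    "system": "example-feed-ranker",
    # the variables the algorithm uses as inputs
    "inputs": [
        "predicted_watch_time_seconds",
        "predicted_click_probability",
        "content_recency_hours",
    ],
    # the goal function: here, a simple weighted sum over the inputs
    "goal_function": {
        "predicted_watch_time_seconds": 0.7,
        "predicted_click_probability": 0.2,
        "content_recency_hours": -0.1,  # older content is penalized
    },
}

def score(item_features, goal_function):
    """Rank score for one item: the weighted sum the disclosure describes."""
    return sum(weight * item_features[name]
               for name, weight in goal_function.items())

# Example item being ranked (values are invented)
item = {
    "predicted_watch_time_seconds": 120.0,
    "predicted_click_probability": 0.5,
    "content_recency_hours": 48.0,
}
print(round(score(item, disclosure["goal_function"]), 1))
```

Even a skeleton like this would let a regulator or a consumer see at a glance which signals are tracked and what the ranking actually optimizes for.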

This policy proposal would make it easier for users of online platforms that use recommendation systems to understand how their behavior is tracked and how their data is being used to influence them. Today there are sometimes substantial financial advantages in having algorithms that are not aligned with the interests of consumers. With increased transparency, companies that offer recommendation systems attractive to consumers would have a distinct competitive advantage. For example, companies such as the search engine DuckDuckGo might become more attractive if users were more aware of the differences in privacy protection and data gathering.

Since regulatory agencies would have access to the algorithms, they would also be able to regulate their use if these were deemed to pose a significant threat to the public interest that could not be addressed by more informed public choice. For example, according to the recent leak of internal emails from Facebook, the company decided to use algorithms that reduce disinformation before the 2020 US presidential election. These algorithms proved highly effective, but since they also reduced user engagement, they simultaneously hit profit margins. Had this information been available to regulators, Facebook would likely not have reverted to the previous algorithms soon after the election, as it in fact did (a choice that may have contributed to the rampant disinformation that preceded the January 6 storming of the Capitol).

How the proposal addresses the potential drawbacks

By requiring only large companies to divulge information, we would limit the possible negative impacts on innovation mentioned in the previous blog posts, while still covering the platforms most associated with harms such as disinformation and reduced well-being. One risk of compulsory disclosure is that companies might lose the incentive to develop better algorithms if these could be copied by competitors. However, large companies have access to large sets of data, which serve to train the algorithms and yield better recommendations. A smaller company using copied details would have less data to train on, and therefore would not reach the same results. The policy would still allow smaller companies to use novel and potentially better recommendation system solutions, thus compensating for their relative data shortage.

The cost of administering the reports would also be small compared to the budgets of large companies, thereby limiting the negative effects of costly administration. Connecting to the disclosure infrastructures that already exist would further reduce the administrative burden.

Allowing companies to apply for exemptions for parts of the information would make it possible to inform consumers about the recommendation systems they interact with, while keeping hidden the parts that are especially sensitive to gaming. This could apply, for instance, to goal functions developed to restrict the spread of hate speech. The specifics of appropriate criteria for keeping information from the public would need to be studied further, but the general aim should be to limit the potential harmful effects of making the information available. Importantly, the burden of proof should be on the companies that want to withhold information, who would have to show that secrecy is important in each particular case.

Concluding remarks

Turning transparency into reality is not a simple task. Our suggestion is specific to recommendation systems and takes a consumer perspective. It could contribute to awareness of online consumption behavior, and give consumers the possibility to opt out when the algorithmic goal function does not meet their personal preferences.

In these blog posts, we have described how the currently available information about commonly used recommendation algorithms is too limited, allowing neither for informed consumer choice nor for potential government regulation. This is of substantial concern, since these services are used by a great number of people and there is a growing body of evidence of harm, both to individuals and to society as a whole. We have also suggested a policy that has the potential to address these issues while avoiding negative side effects. This suggestion would increase transparency for the most widespread AI applications, and it is relatively concrete and specific.

There has been an increasing demand for transparency of AI systems from researchers and regulators (see for example here, here and here). It is argued that AI systems need to be transparent so that consumers can be informed and helped to make better decisions [1, 2], but also to maintain trust in digital systems that are increasingly used [3]. These suggestions are implemented in the draft of the AI Act, which is currently under review, but only for high-risk AI systems, a category that does not include recommender systems of the type we discuss here.

New technology presents society with a choice. We can choose heavy regulation to limit the harm, but this might also stifle potential benefits. We can also choose to wait and see what effects the technology brings and, when the negative impact is clear, try to regulate to limit those effects. However, for regulation to be possible, we need access to information that is today protected by secrecy. We believe that the disclosure requirements suggested here have the advantage of not stifling usage, while at the same time creating market opportunities for better solutions. Moreover, they can increase our understanding of the impact of recommendation system design, allowing for better regulatory policy.

[1] Koene, A., Clifton, C., Hatada, Y., Webb, H., & Richardson, R. (2019). A governance framework for algorithmic accountability and transparency. Brussels: European Parliamentary Research Service.

[2] Rader, E., Cotter, K., & Cho, J. (2018, April). Explanations as mechanisms for supporting algorithmic transparency. In Proceedings of the 2018 CHI conference on human factors in computing systems (pp. 1-13).

[3] Kizilcec, R. F. (2016, May). How much information? Effects of transparency on trust in an algorithmic interface. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (pp. 2390-2395).

Photo: “Paragraphendschungel 218/365” by Skley. Licensed under CC BY-ND 2.0

Authors appear in alphabetical order.