Protect Me From What AI Wants? The Binge-Watching Problem Part 3

In 1984, Amos Tversky said that his research had simply recognized what had long been known among advertisers and used-car salesmen, as recounted by William Poundstone in his book on how pricing strategies have been influenced by behavioral decision theory. While merchants may have known for millennia that choices depend on how the alternatives are framed, the insight was established in economics only relatively recently, when Tversky and Daniel Kahneman showed how our choices systematically diverge from rational choice theory, and Cass Sunstein and Richard Thaler conceptualized how choice architectures shape our decisions.

While our merchant ancestors were likely early adopters of clever framing strategies to promote trade, the merchants of today have implemented AI far ahead of most others. Marketers such as Alphabet (Google), Amazon, Meta (Facebook), and Microsoft are investing heavily in AI. Alphabet, Meta, and Amazon are the three largest digital advertising platforms in the U.S., and together with Microsoft they play leading roles in AI's development and deployment. This suggests that the development of artificial intelligence right now is frequently about advertising intelligence.

AI is often deployed to promote more online engagement

The most important marketing challenge for 63% of marketers globally is generating traffic and leads. In the same vein, Facebook’s IPO prospectus reads: “if our users decrease their level of engagement with Facebook, our revenue, financial results, and business may be significantly harmed.” More engagement means better access to data, greater precision in predictions, and more time for advertisements. Andrew Ng, the founder of Google Brain, has stated that it is data rather than algorithms that constitutes the defensible barrier for many of the firms with leading AI teams. And that data depends on access to information about user behavior online.

Hence, AI is frequently deployed to promote increased online engagement. Correspondingly, Karen Hao argued in MIT Technology Review that recommendation algorithms are among the most powerful applications of machine learning today. Such AI-driven decision aids are built into all the major apps. It is notable in this context that the time young people in Scandinavia spend on online videos has doubled from two to four hours a day in four years.

Potential conflicts between AI providers and users

However, this involves a potential tension between the user and the supplier of AI: systems that promote increased usage may, over time, benefit their providers more than their users. There are indications that recommendation systems encourage choices that do not always serve users’ long-term goals, and that users’ stated intentions and actual behavior online are sometimes inconsistent:

First, screen time and personal goals are not always aligned. In Sweden, 80% use social media, but fewer than one in four think it is a valuable way to spend time. As many as 40% think it is more or less meaningless. Among adults with smartphones, 45% think they use their phones too much, and 42% try to limit their usage. A study of 200,000 mobile app users by the Center for Humane Technology found that their “unhappy” time spent on apps was on average 2.4 times higher than their “happy” time.

Second, the sharing of misinformation online has been found to be inconsistent with users’ attitudes: Pennycook and co-authors reported that most survey participants found it important to share only accurate news on social media, yet their judgments of a headline’s veracity had little bearing on their tendency to share it. The authors attributed this to participants being distracted from accuracy by other factors when choosing what to share.

Third, people in surveys typically express concern about privacy, yet their social media usage and sharing of personal information do not always reflect this attitude. In practice, users frequently fail to protect their data, and sometimes they simply give it away. Individuals with higher privacy concerns are often the ones who disclose more information online. This is known as the ‘privacy paradox’.

Are online users myopic doers or long-term planners?

Clearly, there is a difference between the online user as a myopic doer and as a farsighted planner, in the terminology of Thaler and Shefrin. Our behavior in the past is not always a good indicator of our wishes for the future. Thus, it seems that an AI-driven decision aid that has been sincerely designed to benefit its users should sometimes remind them of their long-term goals. In a previous post, The Binge-Watching Problem, it was suggested that a voice assistant, Alice, should push back against the user’s wish to watch another episode and instead nudge them towards going to bed, as the latter was more consistent with their stated long-term goals.
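One stylized way to make the planner–doer tension concrete is the quasi-hyperbolic “beta-delta” model of present bias. The sketch below is an illustration added here, not something taken from Thaler and Shefrin or from the original post, and the payoff numbers are invented:

```latex
% Stylized illustration of the planner-doer tension using quasi-hyperbolic
% ("beta-delta") discounting; all payoff numbers are invented for illustration.
\documentclass{article}
\begin{document}
At time $t$, the viewer values a payoff stream $(u_t, u_{t+1}, \dots)$ as
\[
  U_t = u_t + \beta \sum_{k \ge 1} \delta^k u_{t+k}, \qquad 0 < \beta < 1 .
\]
Suppose another episode gives an immediate benefit of $5$ tonight and a
tiredness cost of $8$ tomorrow, with $\beta = 0.5$ and $\delta \approx 1$.
The \emph{doer}, deciding tonight, computes
\[
  5 - \beta\,\delta \cdot 8 \approx 5 - 4 = 1 > 0
\]
and watches another episode. The \emph{planner}, deciding a day in advance,
when both payoffs still lie in the future, computes
\[
  \beta\,\delta \cdot 5 - \beta\,\delta^{2} \cdot 8 \approx 0.5\,(5 - 8) = -1.5 < 0
\]
and would rather go to bed.
\end{document}
```

In this toy calculation the same person reverses preference depending on when the choice is evaluated, which is the gap a well-meaning Alice would have to bridge.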

It has been shown that it is possible to engineer an algorithm that gradually steers users away from their previous behavior online. For instance, an earlier version of YouTube’s AI-driven recommendation system suffered from the shortcoming that users got bored because they were shown content too similar to what they had watched before. Therefore, an updated version, Reinforce, was engineered to maximize user engagement over time. This algorithm was described as “a kind of long-term addiction machine” by Kevin Roose in the New York Times. In a similar vein, Pennycook and co-authors found that subtly encouraging people to think about accuracy increased the truthfulness of the news that they shared on Twitter.
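To make the contrast concrete, here is a minimal Python sketch of the difference between a purely myopic objective and one that also values engagement over future sessions. It is a toy added for illustration, not YouTube’s actual Reinforce system (which is a policy-gradient recommender trained on real interaction logs), and the catalog items and numbers are made up:

```python
# Toy contrast between a myopic recommender and a long-horizon one.
# Illustrative only; items, probabilities, and drifts are invented.

# Each hypothetical item: (predicted immediate watch-probability,
#                          predicted effect on engagement in later sessions)
CATALOG = {
    "more_of_the_same": (0.9, -0.2),  # familiar content: clicked now, boring later
    "novel_topic":      (0.6, +0.3),  # less certain click, keeps interest alive
    "bedtime_reminder": (0.1,  0.0),
}

GAMMA = 0.8  # discount factor on future engagement


def myopic_choice(catalog):
    """Pick the item with the highest predicted immediate engagement."""
    return max(catalog, key=lambda item: catalog[item][0])


def long_horizon_choice(catalog, horizon=5):
    """Pick the item with the highest crude estimate of discounted
    engagement accumulated over several future sessions."""
    def value(item):
        p_now, drift = catalog[item]
        engagement, total = p_now, 0.0
        for t in range(horizon):
            total += (GAMMA ** t) * engagement
            # Engagement drifts up or down depending on the item, clamped to [0, 1].
            engagement = max(0.0, min(1.0, engagement + drift))
        return total
    return max(catalog, key=value)


if __name__ == "__main__":
    print("myopic recommender picks:      ", myopic_choice(CATALOG))
    print("long-horizon recommender picks:", long_horizon_choice(CATALOG))
```

With these made-up numbers the myopic rule keeps serving “more of the same”, while the long-horizon rule prefers the novel item because boredom erodes future engagement, mirroring the boredom problem described above. The same long-horizon machinery could, in principle, be pointed at a user’s stated goals rather than at engagement.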

Evidently, AI can shape user behavior over time. While algorithms at present seem mostly to exploit our myopia to promote engagement, this also gives AI the potential to help us commit to long-term goals. It is a problem, however, that those goals sometimes conflict with the data-driven business models of the platform providers. But perhaps AI companies are being penny wise and pound foolish? In the long run, we might demand technologies that don’t just indulge our whims but help us in our aspirations for the future.

2021-12-07