Voice Assistants Become Choice Assistants?

In Homer’s Odyssey, as Ulysses sails home, Circe warns him of the Sirens, whose song would otherwise lure him to steer his ship onto the rocks. On her advice, Ulysses has his crew plug their ears with wax and tie him to the mast. When the ship passes the Sirens, he begs his crew to release him, but they refuse and sail on.

Erik Angner recounts this story in his book on behavioral economics to illustrate that our preferences are not always consistent over time, and that we may sometimes need to constrain our future selves. Economic choice theory can thus shed light on an earlier blog post that illustrated a role voice assistants may play in the future.

In that scenario, you decided on Saturday not to binge-watch series on weekday nights. You preferred to go to bed early and be well rested at work: a large reward later over a small reward sooner. Yet on the following Tuesday evening you preferred watching another episode over going to bed: a small reward at once over a large reward later. Your discount rate had changed. Your AI-driven voice assistant, Alice, was the one to implement your preferences.

The post was about the cognitive bias of hyperbolic discounting: the tendency to prefer a small reward sooner over a large reward later grows stronger as the rewards draw closer to the present, so waiting is more appealing when it lies further in the future. There are naïve and sophisticated hyperbolic discounters. The former are unaware of their self-control problems; the latter can predict them and choose to restrict their future selves.
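A small worked example, with numbers invented purely for illustration, shows how such a reversal falls out of the standard one-parameter hyperbolic curve, which values a reward of size A arriving after a delay of D days as

    V(A, D) = A / (1 + k·D)

where k measures impatience. Suppose the extra episode is worth A = 10 on Tuesday night, being well rested is worth A = 15 on Wednesday morning, and k = 1 per day. Judged on Saturday, three days ahead, the episode is worth 10/(1+3) = 2.5 while sleep is worth 15/(1+4) = 3.0, so sleep wins. Judged on Tuesday night, the episode is worth 10/(1+0) = 10 while sleep is worth only 15/(1+1) = 7.5, so the episode wins. Under exponential discounting, V(A, D) = A·e^(−rD), the ratio of the two values stays constant as time passes, since the delay between them is fixed at one day; no reversal can occur. The reversal is the signature of the hyperbolic curve.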

This is how voice assistants may change our decision-making. Could they help us get rid of bad habits? Do we want that? If so, how fervently should they try? Two views in economics suggest two distinct types of voice assistant: the Neoclassical and the Behavioral type.

Neoclassical Alice assumes that choices reveal preferences

In economics, a preference is a general disposition to choose one alternative over another, and according to the Neoclassical model, as discussed by Robert Sugden, this disposition applies across all contexts. Preferences are assumed to be revealed by choices: the benefit a person draws from an alternative is reflected in how frequently she chooses it.

According to this theory, a voice assistant that you continuously feed with your decisions would soon have full information about your preferences. So, if you tend to binge-watch on weekdays, Neoclassical Alice would simply run the next episode automatically, before you have even had time to ask her. She would assume that your preferences on this particular Tuesday night are consistent with your choices on past Tuesdays.
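As a thought experiment, a minimal sketch of such an assistant might look as follows. Everything here is hypothetical: the class name, the context keys, and the rule of simply replaying the most frequent past choice are assumptions made for illustration, not a description of any real product.

    from collections import Counter, defaultdict

    class NeoclassicalAlice:
        """Infers 'revealed preferences' from logged choices alone and
        predicts whatever was chosen most often in the same context."""

        def __init__(self):
            # One choice counter per context, e.g. per weekday evening.
            self.history = defaultdict(Counter)

        def log_choice(self, context, choice):
            self.history[context][choice] += 1

        def predicted_preference(self, context):
            counts = self.history[context]
            if not counts:
                return None  # no data yet, so no revealed preference
            # The most frequently chosen alternative counts as the preference.
            return counts.most_common(1)[0][0]

    alice = NeoclassicalAlice()
    for _ in range(4):
        alice.log_choice("Tuesday evening", "next episode")
    alice.log_choice("Tuesday evening", "go to bed")
    print(alice.predicted_preference("Tuesday evening"))  # -> next episode

On this model, your one early night is simply outvoted by your four binge nights; the assistant has no concept of which choice you would endorse on reflection.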

Behavioral Alice would be a choice architect

In contrast, behavioral economics assumes that choices are influenced by how the alternatives are presented. Richard Thaler and Cass Sunstein argued that even seemingly small features of social situations can have a large influence on human behavior, and they introduced the term choice architecture for the intentional design of decision-making environments.

As defined by Thaler and Sunstein, nudges are aspects of a choice architecture that promote certain alternatives in a predictable way without restricting any options or significantly changing their economic incentives. Nudges should be designed to increase the chances that decision-makers improve their welfare and reach their long-term goals. To justify nudging, Thaler and Sunstein argued that individuals often make bad decisions, which they would not have made if they had full attention and self-control, complete information, and unlimited cognitive abilities.

Maybe your voice assistant could provide decision support that reflects the decision you would make if you had better self-discipline. An AI that has registered your choices for a long time will have very good data about which choice architectures you respond to: you may tend to stick with the default alternative, for instance, or select the option that your friends have chosen.
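Again as a purely hypothetical sketch, a Behavioral Alice would not predict your modal choice but rearrange the choice environment. The nudges below (defaults, friction, social proof) are standard examples from the nudging literature; how an assistant would actually select among them is invented here for illustration.

    from dataclasses import dataclass

    @dataclass
    class Nudge:
        kind: str          # e.g. "default", "friction", "social proof"
        presentation: str  # how Alice frames the choice

    def behavioral_alice(weekday_evening, episodes_watched_tonight):
        """Pick a choice architecture that favors the long-term goal
        (sleep) without removing the option to keep watching."""
        if not weekday_evening:
            # No conflict with the long-term goal: stay out of the way.
            return Nudge("none", "Autoplay the next episode as usual.")
        if episodes_watched_tonight == 0:
            # Default nudge: make stopping the no-action outcome.
            return Nudge("default",
                         "Autoplay off; continuing requires an active choice.")
        # Friction plus social proof: continuing stays possible, just less easy.
        return Nudge("friction",
                     "Dim the screen, note that most viewers with your "
                     "bedtime stop here, and ask for confirmation.")

    print(behavioral_alice(True, 1).presentation)

Note the contrast with the Neoclassical sketch: no option is removed and no price is changed; only the path of least resistance is moved.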

Reviewing recent research on habit formation, Jerome Groopman reported that bad habits are changed more easily by modifying the choice setting than by relying on willpower. Behavioral Alice would therefore create choice environments that promote your long-term goal of getting a promotion, using any nudge at her command to keep you from watching another episode. Soon, we may all be sophisticated hyperbolic discounters like Ulysses.

2020-12-29