AI Risks: Do We Have to Care About the Implausible?

Given the current technological development of AI, evaluating the risks of AI is highly relevant. Some proponents of AI safety have suggested elaborate scenarios of AI risk [1], which strike others as implausible and hence, or so they argue, ought to be ignored [2]. The question of how much attention we should give to implausible scenarios is therefore pressing. Interestingly, there is a decision principle that, in some versions, says that we can ignore risks that are either sufficiently implausible or sufficiently small (the size of a risk is often taken to be the product of the severity of its consequences, if it were to be actualized, and the probability that it is actualized). This principle is called de minimis.
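
To make the principle concrete, here is a minimal sketch of the two versions of a de minimis rule in Python. This is an illustration only, not the formalization from our article; the function name and the threshold values are assumptions made for the example.

```python
def de_minimis_ignorable(probability: float, severity: float,
                         prob_threshold: float = 1e-9,
                         size_threshold: float = 1e-6) -> bool:
    """Return True if a risk may be ignored under a de minimis rule.

    Two common variants: ignore the risk if its probability falls below
    a threshold, or if its size falls below a threshold, where the size
    is the product of severity and probability (as defined above).
    Threshold values here are arbitrary placeholders.
    """
    risk_size = severity * probability
    return probability < prob_threshold or risk_size < size_threshold
```

On either variant, the rule licenses ignoring a risk outright rather than weighing it; that is precisely the move our argument targets.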

Recently, H. Orri Stefánsson and I have argued that the de minimis principle should not be applied in decision-making, thereby indirectly contributing to the question of whether we can simply ignore implausible AI risks. If you are interested in reading the article, you can find it here.

It should be noted that rejecting the de minimis principle does not rule out concluding, after evaluating the risks, that no serious action is warranted. Conversely, we may also conclude that action is warranted and/or that the risks are not as implausible as some would claim.

References

[1] Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. New York, NY: Oxford University Press.

[2] Floridi, L. (2016). Should we be afraid of AI? Aeon, 9 May. https://aeon.co/essays/true-ai-is-both-logically-possible-and-utterly-implausible

2020-11-06