All blog posts

  • 2023 - An Update on Progress and Recent Endeavors

    AIFutures is far from idle, and we want to take this opportunity to give you a small glimpse into our recent undertakings by highlighting three of the papers we published in 2023!

  • A (relatively) specific proposal on how to regulate algorithmic disclosure of recommendation systems

    Here we will outline a policy suggestion that addresses those issues while still providing a solution to the fundamental problem posed by companies not disclosing information about their recommendation systems.

  • A Policy for More Transparent Recommender Systems

    Part two of three in a blog post series about transparency and recommendation systems: “Providers of consumer-facing recommender systems should be required to reveal how they work to an independent public authority. They would have to describe which types of algorithms are used, and how these are optimized, i.e., their goal functions, and what types of input data have […]”
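
    To make the quoted requirements concrete, here is a minimal sketch of how such a disclosure to a public authority could be represented as structured data. It is purely illustrative: the Python class and every field name and value in it are our own assumptions, not part of the proposal itself.

        from dataclasses import dataclass

        # Hypothetical disclosure record; every field name below is our own
        # illustrative assumption, not a term taken from the proposal.
        @dataclass
        class RecommenderDisclosure:
            provider: str                # who operates the recommender system
            algorithm_types: list[str]   # which types of algorithms are used
            goal_functions: list[str]    # what the algorithms are optimized for
            input_data_types: list[str]  # what types of input data are processed

        # What a filed disclosure might look like for a hypothetical platform:
        disclosure = RecommenderDisclosure(
            provider="ExampleVideoPlatform",
            algorithm_types=["collaborative filtering", "neural ranking model"],
            goal_functions=["predicted watch time"],
            input_data_types=["viewing history", "search queries", "device type"],
        )

    The point is only that the three categories named in the quote (algorithm types, goal functions, input data) map naturally onto a simple structured report.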

  • Spilling the Recommended Beans: Why Companies Should Have to Disclose the Ingredients of Their Recommendation Systems

    Part one of three in a blog post series about transparency and recommendation systems, outlining a proposal for algorithmic transparency.

  • Protect Me From What AI Wants? The Binge-Watching Problem Part 3

    While algorithms at present seem mostly to exploit our myopia to promote engagement, AI may also have the potential to help us commit to long-term goals. The problem, however, is that these goals sometimes conflict with the data-driven business models of platform providers.

  • AI and Moral Responsibility

    We are entering a new chapter in the digitalisation of society, advancing from machines as decision support, to machines as “decision makers”. This is made possible by the many breakthroughs in the development of artificial intelligence (AI) in the last decade. There is reason to expect that AI will be used increasingly not only for private decisions, but also for decisions made by public authorities and institutions. Who is responsible when a machine commits a serious error?

  • If AI Is Controlling Us – Who Is Controlling AI?

    AI sees you. Through an ever-growing network of applications, microphones and cameras, data about your behavior is continuously collected almost wherever you are. We are getting accustomed to our digital trail being used to tailor ads and to make our Netflix bingeing more convenient. These things might seem harmless […]

  • AI in the Public Sector – More of a Human Process

    Robots and automation are not things of the future; they are popping up in various places in the public sector. AI is increasingly used for both small and big decisions such as sorting incoming mail […]

  • AI Rationality and Responsibility – A Talk About Who Is in Charge

    Unlike humans, machines are not affected by emotions in their decision making. This can make them seem superior in the sense of being more logical or rational, which is often seen as more accurate or reliable than intuition or “gut feeling” […]

  • AI Winter Is Coming?

    The history of artificial intelligence (AI) has been a history of boom and bust. Periods of hype, exaggerated expectations and plentiful funding have been followed by periods known as “AI Winters”.

  • Ethical Crashing and Safety Requirements of Autonomous Vehicles

    The ethical discussion of autonomous vehicles has been fairly one-sided. It has focused on how vehicles should behave in the case of an unavoidable crash and on who is responsible if the vehicle crashes. But why don’t we talk about safety?

  • Voice Assistants Become Choice Assistants?

    In Homer’s epic poem, as Ulysses heads home, Circe warns him that the Sirens’ song would lure him to steer his ship towards rocks and shipwreck. Ulysses has his crew plug their ears with wax and asks them to tie him to the ship’s mast. When he passes the Sirens, he asks his crew to release him, but they refuse and the ship sails on.

  • AI Risks and Do We Have to Care About the Implausible?

    Given the current technological development of AI, evaluating the risks of AI is highly relevant. Some proponents of AI safety have suggested elaborate scenarios of AI risk [1], which strike others as implausible and hence, or so they argue, ought to be ignored [2]. Thus, the question of how much attention we should give to implausible scenarios is a highly relevant one to resolve.

  • Diffusion by Infusion – Has AI Spread by Piggybacking on Mobile Apps?

    Social and ethical impacts of any technology presuppose its diffusion. That is why we need to understand the mechanisms underlying the spread of AI, and it is why one of our current research projects concerns AI diffusion in society. This post summarizes a recent study of ours in this project.

  • The Binge-Watching Problem

    A new structure for human decision-making emerges with AI. Many of our choices are already implemented or shaped virtually. A large share of our lives plays out online, and the other share – what we do IRL – is increasingly planned and evaluated online, for example when we use a smartphone app to book a taxi or rate a restaurant. AI algorithms may be infused through inconspicuous updates to websites and apps that we first visited or installed a long time ago.

  • An AI System Does Not Always Do as We Intended

    In his 1951 lecture Intelligent machinery: A heretical theory, Alan Turing, who is widely recognized as the father of computer science, offered several ideas that foreshadowed much of the cutting-edge AI safety discussion of the early 21st century. The following ominous words have become especially well-known.

  • Was AlphaGo Asia’s “Sputnik Moment”?

    Over the next decade, many governments will face a series of choices with regard to the deployment and regulation of machine learning technology. These choices are likely to be influenced by public opinion and interest in this technology.

  • Political Views of Employees of AI Companies

    In the debate about artificial intelligence, there is the argument that it is virtually impossible for policymakers to understand the complex nature of AI, let alone to regulate it efficiently, promptly, and properly.

  • Sage Against the Machine – Conditions for AI vs. Human Expertise

    A young farmer sees a cow walking slowly in the pen, ears drooping. It does not move when she approaches. There seems to be something wrong, but it is beyond her to figure out whether the cow is ill or merely hungry. Expertise is urgently needed – but there is no human expert around. However, while specialists may be in short supply, smart sensors are getting better by the day – as are the algorithms that may make sense of the data they collect.

  • On the Alleged Safety of Autonomous Vehicles

    While autonomous vehicles may be safer than the average ordinary vehicle, this is not the relevant comparison. Rather, autonomous vehicles should be compared with the safest cars in their price class. Manufacturers of such cars are rapidly deploying AI technology to improve safety. Thus, autonomous vehicles are chasing a moving target with regard to traffic safety.

  • AI Futures

    Using science as a map, this blog aims to chart a route through this changing techno-social landscape and to explore how human societies are being reshaped by AI, now and perhaps in the future. It is part of an interdisciplinary research program at the Institute for Futures Studies, where we apply our expertise in social science, philosophy and mathematics to investigate the social and ethical impacts of AI.