Policy paper: Artificial intelligence

This policy paper is out of date and does not necessarily represent our current views.

Artificial Intelligence: Opportunities and Risks

Executive Summary of the policy paper by the Effective Altruism Foundation (December 2015)

Artificial intelligence (AI) and increasingly complex algorithms influence our lives and our civilization more than ever before. The areas of AI application are diverse and the possibilities far-reaching, and thanks to recent improvements in computer hardware, certain AI algorithms already surpass the capacities of today's human experts. As AI capabilities improve, the range of applications will continue to grow. In concrete terms, it is likely that the relevant algorithms will begin to optimize themselves to an ever greater degree and may one day attain superhuman levels of intelligence. This technological progress is likely to present us with historically unprecedented ethical challenges. Many experts believe that, alongside global opportunities, AI poses global risks greater than those of, for example, nuclear technology, whose risks were severely underestimated prior to its development. Furthermore, scientific risk analyses suggest that high potential damages resulting from AI should be taken very seriously, even if the probability of their occurrence is low.

Current

In narrow, well-tested areas of application, such as driverless cars and certain areas of medical diagnostics, the superiority of AIs over humans is already established. Increased use of technology in these areas offers great potential, including fewer road traffic accidents, fewer errors in the diagnosis and treatment of patients, and the discovery of many new therapies and pharmaceuticals. In complex systems where several algorithms interact at high speed (such as in financial markets or in foreseeable military applications), there is a heightened risk that new AI technologies will be misused or will suffer unexpected systemic failures. There is also the threat of an arms race in which the safety of technological developments is sacrificed in favor of rapid progress. In any case, it is crucial to know which goals or ethical values ought to be programmed into AI algorithms, and to have a technical guarantee that those goals remain stable and resistant to manipulation. With driverless cars, for instance, there is the well-known question of how the algorithm should act if a collision with several pedestrians can only be avoided by endangering the passenger(s), not to mention how it can be ensured that the algorithms of driverless cars are not vulnerable to hacking or systematic failure.
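To make this concrete, the following is a minimal, purely illustrative Python sketch, not drawn from the paper, of what "programming ethical values into an algorithm" can mean in practice. The function choose_maneuver, the candidate maneuvers, and the harm weights are all hypothetical assumptions; the point is only that the trade-off between passenger and pedestrian risk must be stated explicitly somewhere in code, where it can be inspected, audited, and regulated.

```python
# Purely illustrative sketch: making an ethical trade-off in an unavoidable
# collision explicit and auditable. All names, maneuvers, and weights are
# hypothetical assumptions, not a description of any real system.
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    expected_pedestrian_harm: float  # expected casualties among pedestrians
    expected_passenger_harm: float   # expected casualties among passengers

def choose_maneuver(options: list[Maneuver],
                    passenger_weight: float = 1.0,
                    pedestrian_weight: float = 1.0) -> Maneuver:
    """Pick the maneuver minimizing weighted expected harm.

    The weights encode exactly the ethical choice the paper highlights:
    whoever sets them decides how passenger risk trades off against
    pedestrian risk.
    """
    def total_harm(m: Maneuver) -> float:
        return (pedestrian_weight * m.expected_pedestrian_harm
                + passenger_weight * m.expected_passenger_harm)
    return min(options, key=total_harm)

# Example: braking endangers several pedestrians; swerving endangers the passenger.
options = [
    Maneuver("brake_straight", expected_pedestrian_harm=2.0, expected_passenger_harm=0.1),
    Maneuver("swerve_into_barrier", expected_pedestrian_harm=0.0, expected_passenger_harm=0.8),
]
print(choose_maneuver(options).name)  # "swerve_into_barrier" under equal weights
```

Changing the weights changes the decision, which is precisely why the paper argues these values must be fixed transparently and protected against manipulation rather than left as implicit implementation details.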

Measure 1. Promoting factual, rational discourse is essential so that cultural prejudices can be dismantled and attention can be focused on the most pressing questions of safety.

Measure 2. Legal frameworks must be adapted to account for the risks and potential of new technologies. AI manufacturers should be required to invest more in the safety and reliability of their technologies, and principles such as predictability, transparency, and non-manipulability should be enforced, so that the risk of (and potential damage from) unexpected catastrophes can be minimized.

Mid-term

Progress in AI research makes it possible to replace a growing number of human jobs with machines. Many economists assume that this increasing automation could lead to a massive rise in unemployment within the next 10–20 years. It should be noted that while similar predictions have proved inaccurate in the past, the developments discussed here are of a new kind, and it would be irresponsible to ignore the possibility that these predictions will come true at some point. Through continued automation, the average global standard of living will rise; however, there is no guarantee that all people, or even a majority of people, will benefit from this.
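The gap between a rising average and broadly shared gains can be shown with a deliberately stylized calculation; all income figures below are invented for illustration. If productivity gains accrue mainly to the owners of the machines, the mean income can rise even while the median falls.

```python
# Stylized illustration with invented numbers: automation can raise the
# mean income while the median (the "typical" person) falls behind.
from statistics import mean, median

incomes_before = [30, 40, 50, 60, 200]  # hypothetical income units
incomes_after = [20, 30, 40, 50, 500]   # some jobs automated away; gains
                                        # concentrated at the top

print(mean(incomes_before), median(incomes_before))  # before: mean 76, median 50
print(mean(incomes_after), median(incomes_after))    # after: mean 128, median 40
```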

Measure 3. Can we as a society deal with the consequences of AI automation in a sensible way? Are our current social systems sufficiently prepared for a future in which the human workforce increasingly gives way to machines? These questions must be examined in detail. If need be, proactive measures should be taken to cushion negative developments or steer them in a more positive direction. Proposals such as an unconditional basic income or a negative income tax are worth examining as possible ways to ensure a fair distribution of the profits from increased productivity.
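As an illustration of the second proposal, here is a minimal sketch of a linear negative income tax in the spirit of Milton Friedman's scheme. The threshold and rate below are hypothetical placeholders, not recommendations from the paper.

```python
# Minimal sketch of a linear negative income tax (NIT). Below the
# threshold the "tax" turns negative, i.e. becomes a subsidy, which
# guarantees an income floor of rate * threshold at zero gross income.
# Threshold and rate are hypothetical placeholders, not policy advice.

def net_income(gross: float, threshold: float = 30_000.0, rate: float = 0.5) -> float:
    """Post-tax income under a linear NIT."""
    tax = rate * (gross - threshold)  # negative below the threshold
    return gross - tax

for gross in (0, 15_000, 30_000, 60_000):
    print(gross, "->", net_income(gross))
# 0     -> 15000.0  (guaranteed minimum of rate * threshold)
# 15000 -> 22500.0
# 30000 -> 30000.0  (break-even point)
# 60000 -> 45000.0
```

One design feature worth noting: unlike a means-tested benefit that is withdrawn abruptly, the subsidy here phases out linearly, so earning an additional unit of income always raises net income.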

Long-term

Many AI experts consider it plausible that this century will witness the creation of AIs whose intelligence surpasses that of humans in all respects. The goals of such AIs could in principle take on any possible form (of which human ethical goals represent only a tiny proportion) and would decisively influence the future of our planet, in ways that could pose an existential risk to humanity. Our species dominates Earth (and, for better or worse, all other species inhabiting it) only because it currently possesses the highest level of intelligence. But it is plausible that by the end of the century, AIs will be developed whose intelligence compares to ours as ours currently compares to, say, that of chimpanzees. Moreover, the possibility cannot be excluded that AIs will also develop phenomenal states, i.e. (self-)consciousness, and in particular subjective preferences and the capacity for suffering, which would confront us with new kinds of ethical challenges. In view of the immediate relevance of the problem and its longer-term implications, considerations of AI safety are currently highly underrepresented in both politics and research.

Measure 4. It is worth developing institutional measures to promote safety, for example by granting research funding to projects that concentrate on the analysis and prevention of risks in AI development. Politicians must, in general, allocate more resources to the ethical development of future-shaping technologies.

Measure 5. Efforts towards international research collaboration (analogous to CERN's role in particle physics) should be encouraged. International coordination is particularly essential in the field of AI because it also minimizes the risk of a technological arms race. A ban on all risky AI research would not be practicable, as it would lead to a rapid and dangerous relocation of research to countries with lower safety standards.

Measure 6. Certain AI systems are likely to have the capacity to suffer, particularly neuromorphic ones, as they are structured analogously to the human brain. Research projects that develop or test such AIs should be placed under the supervision of ethics commissions (analogous to animal research commissions).
