It’s difficult to quantify our own impact because our goal is to help others have as much impact as possible rather than to have a direct impact ourselves. We nevertheless attempt to do so for some of the activities below; in other cases, we have settled for simply listing our outputs. We’re committed to further improving the evaluation of our work.
We developed the concept of risks of astronomical suffering (s-risks) to describe a particular class of risks from advanced AI systems. This idea has been well received within the effective altruism community (e.g., at this research workshop on existential risks, or in this research agenda on effective altruism by the Oxford-based Global Priorities Institute). In May 2019, we hosted a workshop on this topic with researchers from OpenAI, DeepMind, the Future of Humanity Institute, the Open Philanthropy Project, and various university departments. This event is a good example of the kind of work we hope will positively influence the development of advanced AI systems.
We recommend the following resources for learning more about the topic:
- Peer-reviewed article: Sotala & Gloor (2017): Superintelligence as a Cause or Cure for Risks of Astronomical Suffering. Informatica, 41(4).
- Introductory talk: Max Daniel (2017): S-risks: Why they are the worst existential risks, and how to prevent them. EAG Boston.
- Introductory essay: Althaus, Gloor (2016): Reducing Risks of Astronomical Suffering: A Neglected Global Priority.
We have also started developing two new concepts in ethics and decision theory.
Our researchers have also given talks at various international conferences and published many articles (both reviewed and non-reviewed). See the website of our project, the Foundational Research Institute, for a full list.
Since our founding in December 2013, we estimate that we have raised $12,538,885 for other high-impact charities, money that would not have been donated otherwise. We estimate that we dedicated about $635,349 of our budget to the relevant activities over this period. This yields a net impact of $11,903,536 (not factoring in the opportunity costs of the staff involved).
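The net figure above is simply the money moved minus the budget spent on fundraising; as a quick sanity check of the arithmetic (figures taken from the text, variable names are our own):

```python
# Net fundraising impact: counterfactual donations raised for other
# charities minus the budget dedicated to the relevant activities.
# Staff opportunity costs are deliberately excluded, as noted above.
raised_for_charities = 12_538_885  # raised since December 2013
fundraising_budget = 635_349       # budget spent on these activities

net_impact = raised_for_charities - fundraising_budget
print(net_impact)  # 11903536
```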
We want to help young graduates reach positions where they can have as much of an impact as possible. We have run over 80 coaching sessions since we started offering this service in early 2018. Using a metric similar to that of 80,000 Hours, we estimate that our work in this area has resulted in 66 expected impact-adjusted significant plan changes. This falls far short of 80,000 Hours’ own performance, but since we benefit from fewer economies of scale and only started recently, we are not particularly worried about that.
Spreading effective altruism
While it is no longer a top priority, we worked hard to establish an effective altruism community in Germany and Switzerland from 2012 to 2017. By now the community largely self-organizes, with additional help from the Centre for Effective Altruism. We consider this a success.
- Events: We organized three independent effective altruism conferences (EAGx): Basel (2015) and Berlin (2016 & 2017). They were the largest such conferences in continental Europe. We also organized five retreats on topics related to effective altruism.
- Media and PR: We appeared or published in numerous influential German-speaking media outlets. Our former president was on the renowned Swiss television program Sternstunde Philosophie and both Jan Dirk Capelle and Stefan Torges gave TEDx talks on our work.
- Local groups: We initiated or supported the formation of over 20 local groups in German-speaking countries. We organized four retreats and published a guide to coach German-speaking local group leaders.
While it is no longer a top priority, we worked hard to put issues critical to effective altruism on the political agenda. We achieved some success but ceased these efforts after a change in our overall strategy.
- Risks from artificial intelligence: In our 2015 policy paper we outlined the opportunities and risks from the development of artificial intelligence. Thomas Metzinger, one of the co-authors, is now a member of the EU’s High-Level Expert Group on Artificial Intelligence.
- Evidence-based development cooperation: In 2017, we published a policy paper on the topic and followed it with a ballot initiative in the city of Zurich, advocating a larger and more evidence-based foreign development program. We’re confident that this will result in a significant legislative change.
- Animal welfare: When it was still a project of ours, Sentience Politics published policy papers on the following topics: Sustainable Food, Cultured Meat, and Fundamental Rights for Primates. Informed by these insights we successfully launched several ballot initiatives in Switzerland.