Donation Recommendations 2018

These donation recommendations follow from our focus on risks of astronomical suffering (s-risks). This focus is based on the view that the long-term future is overwhelmingly important from an altruistic point of view, and the belief that there is a significant chance that artificial intelligence will fundamentally transform human civilization in the near future. It implies, however, a slightly different emphasis and strategic outlook than a focus on the risk of human extinction, as we have explained elsewhere.

We believe there are two factors that determine whether you should prioritize mitigating s-risks:

  • How much more weight, if any, do you put on suffering as opposed to happiness? All else equal, somebody who places more weight on reducing suffering should care more about s-risks than somebody who places less weight on it.1
  • How likely do you think s-risks are compared to very positive future scenarios? All else equal, somebody who considers s-risks relatively more likely (even if still less likely than very good outcomes) should care more about them.2

If you do prioritize preventing s-risks over securing the immense value that would be lost in the case of extinction, the following donation recommendations apply. Otherwise, you can learn more about our recommendations for other cause areas in our guide.

Effective Altruism Foundation

So far, we are the only organization specifically dedicated to reducing risks of astronomical suffering. At the same time, it has proven difficult to identify specific interventions and organizations that we are confident will reduce such risks. That’s why we believe our work is still the best bet for anybody with a focus similar to ours. Future insights into how best to prevent s-risks will most likely be the result of research efforts, exchanges, and collaborations that we initiate.

Donate to the Effective Altruism Foundation

EAF Fund (formerly “REG Fund”)

In 2018 we started the EAF Fund with the explicit mission of reducing risks of astronomical suffering through grants to other charities and individuals. Currently, we expect grants to be made in the following priority areas: decision theory research, fail-safe AI architecture, theory and history of conflict, macrostrategy, global cooperation and institutional decision-making, and moral circle expansion. So far we have made two grants, one to Rethink Priorities and one to Daniel Kokotajlo. Although the fund’s mission is to address s-risks, there are two reasons why we think donations to the Effective Altruism Foundation itself are more valuable right now: (1) Since we have committed ourselves never to use the fund to support our own work, donations to EAF itself are more flexible; for instance, we can decide to commit a portion of our budget to the fund. (2) We think what’s most needed right now is additional research to figure out how best to reduce s-risks, as opposed to funding for more specific interventions.

Donate to the EAF Fund (DE, CH, NL)
Donate to the EAF Fund (US, UK)

Note: Donations to the EAF Fund can be matched 1:1 as part of this matching challenge. Such doubled donations to the fund are likely more impactful than an unmatched donation to EAF itself. In the context of the matching challenge, we still refer to the fund by its former name, the “REG Fund”.

Machine Intelligence Research Institute

Research carried out by the Machine Intelligence Research Institute (MIRI) is particularly valuable from our perspective. We consider their work on agent foundations the approach to AI alignment most likely to lead to safeguards against the very worst risks from AI development. We also believe they have considerably more room for additional funding than other organizations in their field. So if you favor giving to a specific organization over giving to funds, we recommend giving to MIRI. Otherwise, we think giving to the EAF Fund is better since such a donation is more flexible.

Donate to the Machine Intelligence Research Institute

Other recommendations

We believe the following recommendations are less valuable than the ones above, but they are still a good choice for anybody looking to reduce s-risks.

Charities focused on expanding humanity’s moral circle

Most s-risks involve the suffering of nonhuman, most likely digital, minds. Working toward the inclusion of more types of beings in humanity’s sphere of concern likely increases the chance that we will also extend our compassion toward such minds, thus reducing s-risks. We think this holds both in the broad sense of affecting societal values and in the narrower sense of perhaps influencing AI development. However, we believe there are more direct ways of shaping the development of AI, which we expect to be critical (e.g. technical work that reduces the likelihood of conflicts involving such systems). That’s why this is not a top priority of ours. Still, if you find the case for moral circle expansion persuasive (e.g. as argued for by Jacy Reese), we recommend you look into the work of the Effective Animal Advocacy Fund run by Animal Charity Evaluators, the Animal Welfare Fund run by the Centre for Effective Altruism, and the charity recommendations made by Animal Charity Evaluators.

Charities focused on reducing wild animal suffering

It’s very plausible to us that the suffering experienced by animals in the wild is the largest source of suffering in the present day. However, we also believe that there are plausible s-risks of even larger scale that we have a significant chance of affecting. While efforts to reduce wild animal suffering likely also have positive indirect effects on the long-term future, targeted moral circle expansion arguably captures these more reliably. So the best case for funding work on wild animal suffering is probably based on the belief that efforts to affect the long-term future are bound to fail, e.g. because we’re clueless about their effects. Organizations in this area include Wild Animal Suffering Research, Utility Farm, and Animal Ethics. Note, however, that we have not systematically investigated which interventions we expect to reduce the most suffering when disregarding effects on the long-term future.


1 We have given reasons for prioritizing suffering before. Brian Tomasik has also argued for a similar view. However, many people reject this position (e.g. Greaves, Sinhababu).

2 For reasons outlined by e.g. Ben West and Paul Christiano, we think very good futures are more likely. At the same time, we think the probability of very bad futures is lower only by a factor of about 100, and, as Althaus and Gloor have argued, such risks are by no means negligible.