We make grants to promising projects and individuals through our fund, the EAF Fund. You may donate to this fund or apply for funding yourself.


The fund’s mission is to support research and policy efforts that prevent technological risks to our civilization. The potentially transformative nature of artificial intelligence poses a particular challenge that we want to address. We want to prevent a situation similar to the advent of nuclear weapons, in which careful reflection on the serious implications of the technology took a back seat during the wartime arms race. As our technological power grows, future inventions may cause harm on an even larger scale—unless we act early and deliberately.

Priority areas: decision theory and bargaining, specific AI alignment approaches, fail-safe architectures, macrostrategy research, AI governance, and social science research on conflicts and moral circle expansion.


Fund Management

Lukas Gloor is responsible for prioritization at the Effective Altruism Foundation, and coordinates our research with other organizations. He conceptualized worst-case AI safety, and helped coin and establish the term s-risks. Currently, his main research focus is on better understanding how different AI alignment approaches affect worst-case outcomes.

Brian Tomasik has written prolifically and comprehensively about ethics, animal welfare, artificial intelligence, and the long-term future from a suffering-focused perspective. His ideas have been very influential in the effective altruism movement, and he helped found the Foundational Research Institute, a project of the Effective Altruism Foundation, which he still advises. He graduated from Swarthmore College in 2009, where he studied computer science, mathematics, statistics, and economics.

Jonas Vollmer is the Co-Executive Director of the Effective Altruism Foundation, where he is responsible for strategic direction, management, and communications with the effective altruism community. He holds degrees in medicine and economics with a focus on health economics and development economics. He previously served on the boards of several charities, is an advisor to the EA Long-term Future Fund, and played a key part in establishing the effective altruism movement in continental Europe.

Grantmaking Process

  • Grant decisions are made by simple majority of the fund managers.
  • Recipients may be charitable organizations, academic institutions, or individuals.[2]
  • Grants will likely be made every six to twelve months.

Past Grants


Wild Animal Initiative

  • Grant size: $39,200
  • Payout date: September 6, 2019

As part of the EAF Fund’s first open application round, Wild Animal Initiative (WAI) applied for a grant to carry out a research project to develop a long-termist approach to wild-animal welfare, to be carried out by various members of their research team.

We have a generally favorable view of WAI as an organization, though we didn’t conduct a thorough evaluation. Their research proposal prominently mentions various considerations on the relationship between long-termism and wild-animal welfare research, but these considerations did not seem well developed yet. We also thought some of their expectations about the impact of the project were too optimistic. That said, we are excited to see more research into the tractability, reversibility, and resilience of wild-animal welfare interventions.

We do not believe that research on wild-animal welfare contributes directly to the EAF Fund’s main priorities, but we think it might help increase concern for suffering prevention. While we might not make any further grants in the area of wild-animal welfare, we decided in favor of this grant in part because of the large amount of funding currently available.

Note that WAI was created through a merger that involved a largely independent project previously housed at the Effective Altruism Foundation.

Jaime Sevilla

  • Grant size: $12,147
  • Payout date: September 6, 2019

As part of the EAF Fund’s first open application round, Jaime Sevilla applied for a grant of £10,000 ($12,147 at time of conversion) to develop and analyze a mathematical decision-making model, trying to determine under which conditions actions are time-sensitive. Among other things, he aims to refine the option value argument for extinction risk reduction.

We think it’s probably very difficult to produce significant new insights through such foundational research. We think applying standard models to analyze specific scenarios outlined in the research proposal might turn out to be valuable, though we also don’t think this is a priority for reducing s-risks.

We also see this grant as an investment in Jaime’s career as a researcher. We have been impressed by a paper draft on the relevance of quantum computing to AI alignment that Jaime is co-authoring, and might have decided against this grant otherwise. We think it’s unlikely that Jaime will make s-risks a primary focus of his research, but we hope that he might make sporadic contributions to the EAF Fund’s research priorities.

Riley Harris

  • Grant size: $5,000
  • Payout date: September 9, 2019

As part of the EAF Fund’s first open application round, Riley Harris applied for travel and conference funding to attend summer school programs and conferences abroad. Riley Harris is a talented Master’s student at the University of Adelaide interested in pursuing an academic career in economics.

We see this grant as an investment in Riley’s potential academic career. His current interest is in game theory and behavioral economics, with potential applications in AI governance.

While we have been somewhat impressed by Riley’s academic track record and interest in effective altruism and AI risk, one fund manager felt unsure about his ability to quickly get up to speed with the research on s-risk, pursue outstanding original research, and convey his thinking clearly. We hope that this grant will help Riley determine whether an economics Ph.D. is a good personal fit for him.


Rethink Priorities

  • Grant size: $26,000
  • Payout date: September 18, 2018

We made a grant to Rethink Priorities for implementing a survey designed to study the population ethical views of the effective altruism community. More common knowledge about values within the effective altruism community will make moral cooperation easier. There is also some chance that a more open discussion of fundamental values will lead some people to adjust their prioritization in a way they endorse. The grant allows Rethink Priorities to contract David Moss. He has experience running and analyzing the SHIC survey and the 2015 Effective Altruism Survey, so we have reason to believe that the project will be well executed. It’s unlikely that this survey would have been funded by anybody else.

They will also use part of the grant to run a representative survey on attitudes toward reducing the suffering of animals in the wild. While we don’t think this is as valuable as their descriptive ethics project, the gathered information will likely still result in important strategic insights for a cause area we’re very sympathetic toward. This survey will also be led by David Moss, in collaboration with academics at Cornell University.

Daniel Kokotajlo

  • Grant size: $27,450
  • Payout date: November 27, 2018

We made a grant to Daniel Kokotajlo to free him from his teaching duties for a year. He is currently pursuing a Ph.D. in philosophy at the University of North Carolina at Chapel Hill. The grant will double the hours he can dedicate to his research, which will mainly focus on improving our understanding of acausal interactions between AI systems. We want to learn more about whether such acausal interactions are possible and what they imply for the prioritization of effective altruists. We believe this area of research is currently neglected: only a handful of people have done scholarly work on the topic, and many questions remain unexplored. We were impressed by Daniel’s previous work and his research proposals, and therefore believe he has the skills required to make progress on these questions.

Donating to this fund

Our fund helps you give more effectively with minimal time investment. It works similarly to a mutual fund, but the fund managers aim to maximize the impact of your donations instead of your investment returns. They use the pooled donations to make grants to recipients whose work will contribute most to the mission of the fund. Giving through a fund can increase the impact of your donation in several ways:

Unique opportunities. Some funding opportunities, such as academic grants, are simply not open to most individual donors, unless they pool their contributions in a fund or donor lottery.

Economies of scale. Finding the best funding opportunities is difficult and time-consuming, since there are many relevant considerations and a large body of research to weigh. A fund allows many donors with limited time to delegate this work to the fund managers, who can in turn invest significant amounts of time to identify the best recipients for many people at once, making the process far more efficient.

Expert judgment. The fund managers have built up knowledge in the relevant domains and consult with technical experts where appropriate. They have thought about the long-term effects of different philanthropic interventions for years. Expert judgment might be particularly important in this domain since, unlike in other cause areas, no charity evaluator such as GiveWell yet exists for selecting organizations dedicated to improving the long-term future.[1]

You should give to this fund in particular if:

  • you value future lives as much as current ones, and you expect most individuals to exist in the long-term future;
  • you think there is a significant chance that advanced artificial intelligence will shape the future in profound ways and cause harm on an unprecedented scale;
  • you believe there are actions we can take right now to mitigate these risks;
  • you are particularly concerned about worst-case scenarios and s-risks.

Donors from Germany, Switzerland, and the Netherlands can donate to this fund using the form below. Donors from the US and the UK can donate to this fund via the EA Funds Platform.


[1] The Open Philanthropy Project makes grants in this area, but it publishes only a few rigorous analyses or comparative reviews.

[2] Due to conflicts of interest, we will not make any grants to the Effective Altruism Foundation or its affiliate projects.

Get involved

Career coaching

Are you interested in using your career to do good and working on priorities similar to ours? We’re happy to help you find the most impactful path for you and connect you with like-minded people.


Donation advice

Are you planning to donate a significant amount? Let us know, and we’ll get in touch with you directly to discuss how you can make the most of your gift.


Open positions

Are you interested in working with us? Check our vacancies for roles that might suit you or send us a proposal for how you want to contribute.
