EA Reading List
This list aims to provide an overview of the core ideas of effective altruism. EAF doesn’t necessarily endorse everything said in these pieces*, but we think it’s a useful selection if you’re interested in learning more.
Core ideas and concepts
- Introduction to Effective Altruism: An accessible introduction to some core frameworks used in effective altruism, some promising causes that are being considered, and what effective altruism might mean for you.
- Prospecting for Gold: Discusses a series of key effective altruist concepts, such as heavy-tailed distributions, diminishing marginal returns, and comparative advantage, illustrating them with metaphors.
- Do Unto Others: Compares doing good to an Arctic expedition and shows why both require awareness of opportunity costs and a hard-headed commitment to investigating the best use of resources.
- EA Concepts: An encyclopedia of concepts that are often used or referenced in effective altruism.
- Crucial Considerations and Wise Philanthropy: Explores what “crucial considerations” are and what they mean for effective altruism.
- Understanding cause-neutrality: Clarifies the different ways in which the concept of “cause-neutrality” is used in effective altruism.
- Cause prioritization for downside-focused value systems: Outlines thinking on cause prioritization from the perspective of value systems whose primary concern is reducing disvalue.
Global Health and Development
- Global Health and Development: Sets out why you might want to focus on problems in global health and development – and why you might not.
Animal Welfare
- Animal Welfare: Sets out why you might want to work on improving animal welfare – and why you might not.
- The Case Against Speciesism: Argues that we should take the wellbeing of animals into consideration and covers some common objections.
- Wild Animal Suffering: Argues that we should care about the suffering of animals in the wild, using the scale, tractability, and neglectedness framework.
The Long-Term Future
- The Long-Term Future: Reasons to care about the long-term future, and reasons not to.
- Existential Risk Prevention As Global Priority: Clarifies the concept of existential risk (x-risk), discusses the relationship between existential risks and basic issues in axiology, and argues that x-risk reduction can serve as an action-guiding principle for utilitarian concerns.
- Reducing Risks from Astronomical Suffering: A Neglected Priority: Argues that instead of focusing exclusively on ensuring that there will be a future, we should improve the quality of the future and, in particular, reduce risks of astronomical suffering (s-risks).
Risks from Advanced Artificial Intelligence
- What Does (and Doesn’t) AI Mean for Effective Altruism?: Discusses what strategy effective altruists ought to adopt with regard to the development of advanced artificial intelligence. Argues that we ought to adopt a portfolio approach – i.e., that we ought to invest resources in strategies relevant to several different AI scenarios.
- Altruists Should Prioritize Artificial Intelligence: Argues that if we want our actions to influence the very long-term future, we should consider focusing on outcomes involving AI. Since smarter-than-human artificial intelligence would likely aim to colonize space in pursuit of its goals, focusing on AI means focusing on the scenarios where the stakes will be highest.
- Potential Risks from Advanced AI: Presents the Open Philanthropy Project’s work and thinking on advanced artificial intelligence. Also gives an overview of the field, distinguishing between strategic risks and misalignment risks.
Further relevant reading
- 80,000 Hours Career Guide: A comprehensive guide on how to find a high-impact career.
- The Fidelity Model of Spreading Ideas: Develops a distinction between mechanisms for spreading EA ideas and argues that we ought to prefer spreading them in ways that retain their nuance.
- Considering Considerateness: Argues that for communities of people striving to do good, considerateness should be a high priority.
- Reasons to Be Nice to Other Value Systems: Argues that we should work together even when we value different things.
- Hits-based Giving: Argues that high-risk projects might be the best bets.
- Room for Other Things: Gives some advice for coping with the feeling that EA can sometimes be overwhelming.
*If you’re interested in EAF’s particular views on cause prioritization, we recommend Max Daniel’s talk on s-risks, given at EA Global Boston 2017, or EAF’s plans for 2019.