[1] See this page for our approach to dealing with uncertainty in cost-effectiveness models. It explains how we address the optimizer’s curse, how we use sensitivity checks, and how we incorporate evidence from outside our models.
[2] See this page for an analysis of whether our Top Charities are saving the same lives each year.
[3] See here for our assessment of Happier Lives Institute’s cost-effectiveness estimates using subjective well-being and how they compare to our estimates for our Top Charities.
[4] See here for an example of how we think about long-term income increases related to the programs we support.
[5] See here for a discussion of adjustments we make in our cost-effectiveness models to account for the cost of illness averted.
[6] See here for a discussion of what we learned from “red teaming” our Top Charity research.
[7] As part of our effort to determine whether GiveWell funding crowds out other funders, we funded a Center for Global Development research project on why other funders don’t fund the opportunities that we do. See here.
[8] See here for an example of how we incorporated expert opinions into our evaluation of insecticide-treated nets.
[9] See here, here, and here for our responses to criticisms of GiveWell that we received as part of our 2022 Change Our Mind Contest, in which we invited critiques of our cost-effectiveness analyses and awarded cash prizes to the best entries.
[10] In 2022 GiveWell held the Change Our Mind Contest, where we invited critiques of our cost-effectiveness analyses and awarded cash prizes to the best entries.
[11] In 2024 GiveWell established explicit standards for making its research more understandable and easier to engage with, including specific guidelines for grant pages. These standards include clear summaries of key considerations, reasoning, and reservations. See our blog post on the topic here.
[12] For example, in this section of our page on MiracleFeet’s clubfoot treatment intervention, you can see our discussion of the high-level drivers of program effectiveness and a simple CEA that aims to quantify those drivers. We also describe outside-the-model considerations, such as the learning value of the grant.
[13] Several examples of cross-cutting questions that could change our grantmaking came out of “red teaming” our Top Charities. For example, we discuss how relying on estimates from a single data source may have caused us to underestimate all-cause mortality across Nigerian states and malaria mortality in Chad. Higher estimates for either of these inputs would significantly change our cost-effectiveness estimates for those programs and therefore our grantmaking.
[14] For example, GiveWell significantly updated our assessment of water quality interventions after reviewing a new meta-analysis by Michael Kremer and his team. This increased our estimate of the cost-effectiveness of water quality interventions and led us to make a number of grants. See our blog post on this update here and our current approach to water-related grantmaking here.
[15] Transparently explaining our rationale for making grants or deprioritizing investigations allows others to scrutinize our work. For example, after GiveWell published a short note deprioritizing family planning radio campaigns, Family Empowerment Media was able to read and respond to our analysis, identifying specific ways its program might be more cost-effective than GiveWell’s model suggested. This led to a productive dialogue and ultimately a grant recommendation. See this section of our recent grant page for the full discussion.
[16] See here for how red teaming of our Top Charities found that we were inconsistent in how we calculated certain common elements of our cost-effectiveness models across grantmaking areas, as well as our plan to address this issue.
[17] Technical assistance programs typically work with governments and local organizations to improve the quality, efficiency, or scale of existing health services by providing expertise, training, and support. In contrast, direct delivery programs implement health interventions themselves, such as distributing malaria nets or incentivizing vaccination. Among our recent grants for technical assistance is a grant to Evidence Action to support iron and folic acid (IFA) supplementation in India.
[18] Technical assistance programs are often harder to evaluate than direct delivery programs because their impact depends on successfully influencing government systems and behavior, rather than on concrete, trackable outputs like supplements delivered. To better understand how technical assistance works in practice, we’ll be conducting a lookback on these programs. As part of this effort, a GiveWell researcher conducted site visits to technical assistance programs in India this year, observing how they engage with and influence government health systems on the ground.
[19] See the second open question on “How reliable are the mortality estimates we rely on?” in this section of our report on New Incentives’ conditional cash transfer program.