
Response to concerns about GiveWell’s spillovers analysis

5 years 9 months ago

Last week, we published an updated analysis on “spillover” effects of GiveDirectly’s cash transfer program: i.e., effects that cash transfers may have on people who don’t receive cash transfers but who live near those who do.[1] We concluded: “[O]ur best guess is that negative or positive spillover effects of cash are minimal on net.” (More)

Economist Berk Özler posted a series of tweets expressing concern over GiveWell’s research process for this report. We understood his major questions to be:

  1. Why did GiveWell publish its analysis on spillover effects before a key study it relied on was public? Is this consistent with GiveWell’s commitment to transparency? Has GiveWell done this in other cases?
  2. Why did GiveWell place little weight on some papers in its analysis of spillover effects?
  3. Why did GiveWell’s analysis of spillovers focus on effects on consumption? Does this imply that GiveWell does not value effects on other outcomes?

These questions apply to GiveWell’s research process generally, not just our spillovers analysis, so the discussion below addresses topics such as:

  • When do our recommendations rely on private information, and why?
  • How do we decide on which evidence to review in our analyses of charities’ impact?
  • How do we decide which outcomes to include in our cost-effectiveness analyses?

Finally, this feedback led us to realize a communication mistake we made: our initial report did not communicate as clearly as it should have that we were specifically estimating spillovers of GiveDirectly’s current program, not commenting on spillovers of cash transfers in general. We will now revise the report to clarify this.

Note: It may be difficult to follow some of the details of this post without having read our report on the spillover effects of GiveDirectly’s cash transfers.

Summary

In brief, our responses to Özler’s questions are:

  • Why did GiveWell publish its analysis on spillover effects before a key paper it relied on was public? One of our major goals is to allocate money to charities as effectively as possible. Sometimes, research we learn about cannot yet be made public but we believe it should affect our recommendations. In these cases, we incorporate the private information into our recommendations and we are explicit about how it is affecting our views. We expect that private results may be more likely to change but nonetheless believe that they contain useful information; we believe ignoring such results because they are private would lead us to reach less accurate conclusions. For another recent example of an important conclusion that relied on private results, see our update on the preliminary (private) results from a study on No Lean Season, which was key to the decision to remove No Lean Season as a top charity in 2018. We discuss other examples below.
  • Why did GiveWell place little weight on some papers in its analysis of spillover effects? In general, our analyses aim to estimate the impact of programs as implemented by particular charities. The goal of our spillovers analysis is to make our best guess about the size of spillover effects caused by GiveDirectly’s programs in Kenya, Uganda, and Rwanda. We are not trying to communicate an opinion on the size of spillover effects of cash transfers in other countries or in development economics more broadly. Therefore, our analysis places substantially more weight on studies that are most similar to GiveDirectly’s program on basic characteristics such as geographic location and program type. Correspondingly, we place little weight on papers that do not meet these criteria. However, we’d welcome additional information that would help us improve our future decisionmaking about which papers to put the most weight on in our analyses.
  • Why did GiveWell’s analysis of spillovers focus on effects on consumption? Our cost-effectiveness models focus on key outcomes that we expect to drive the bulk of the welfare effects of a program. In the case of our spillovers analysis, we believe the two most relevant outcomes for estimating spillover effects on welfare are consumption and subjective well-being. We chose to focus on consumption effects in large part because (a) this is consistent with how we model the impacts of other programs, such as deworming, and (b) distinguishing effects on subjective well-being from effects on consumption in a way that avoids double-counting benefits was too complex to do in the time we had available. It is possible that additional work on subjective well-being measures would meaningfully change how we assess benefits of programs (for this program and potentially others). This is a question we plan to return to in the future.

As noted above, our current best guess is that negative or positive spillover effects of GiveDirectly’s cash transfers are minimal on net. However, we emphasize that our conclusion at this point is very tentative, and we hope to update our views next year if there is more public discussion or research on the areas of uncertainty highlighted in our analysis and/or if public debate about the studies covered in our report raises major issues we had not previously considered.

Details follow.

Why did GiveWell publish its analysis on spillover effects before a key paper it relied on was public?

In our analysis of the spillover effects of GiveDirectly’s cash transfer program, we place substantial weight on GiveDirectly’s “general equilibrium” (GE) study (as we noted we would do in May 2018,[2] prior to seeing the study’s results) because:

  • it is the study with the largest sample size,
  • its methodology was designed to estimate both across-village and within-village spillover effects, and
  • it is a direct study of a version of GiveDirectly’s program.

The details of this study are currently private, though we were able to share the headline results and methodology when we published our report.

This represents one example of a general policy we follow, which is to be willing to compromise to some degree on transparency in order to use the best information available to us to improve the quality of our recommendations. More on the reasoning behind this policy:

  • Since our recommendations affect the allocation of over $100 million each year, the value of improving our recommendations by factoring in the best information (even if private) can be high. Every November we publish updates to our recommended charities so that donors giving in December and January (when the bulk of charitable giving occurs) can act on the most up-to-date information.
  • We have ongoing communications with charities and researchers to learn about new information that could affect our recommendations. Private information (both positive and negative) has been important to our views on a number of occasions. Beyond the example of our spillovers analysis, early private results were key to our views on topics including:
    • No Lean Season in 2018 (negative result)[3]
    • Deworming in 2017 (positive result)[4]
    • Insecticide resistance in 2016 (modeling study)[5]
    • Development Media International in 2015 (negative result)[6]
    • Living Goods in 2014 (positive result)[7]
  • Note that in all of the above cases we worked with the relevant researchers to get permission to publicly share basic information about the results we were relying on, as we did in the case of the GE study.
  • In all cases, we expected that full results would be made public in the future. Our understanding is that oftentimes early headline results from studies can be shared publicly while it may take substantially longer to publicly release full working papers because working papers are time-intensive to produce. We would be more hesitant to rely on a study that has been private for an unusually long period of time unless there were a good reason for it.
  • However, relying on private studies conflicts to some extent with our goal to be transparent. In particular, we believe two major downsides of our policy with respect to private information are (a) early private results are more likely to contain errors, and (b) we are not able to benefit from public scrutiny and discussion of the research. We would have ideally seen a robust public discussion of the GE study before we released our recommendations in November, but the timeline for the public release of GE study results did not allow that. We look forward to closely following the public debate in the future and plan to update our views based on what we learn.
  • Despite these limitations, we have generally found early, private results to be predictive of final, public results. This, combined with the fact that we believe private results have improved our recommendations on a number of occasions, leads us to believe that the benefits of our current policy on using private information outweigh the costs.

A few other notes:

  • Although we provide a number of cases above in which we relied on private information, the vast majority of the key information we rely on for our charity recommendations is public.
  • When private information is shared with us that implies a positive update about a charity’s program, we try to be especially attentive about potential conflicts of interest. In this case, there is potential for concern because the GE study was co-authored by Paul Niehaus, Chairman of GiveDirectly. We chose not to substantially limit the weight we place on the GE study because (a) a detailed pre-analysis plan was submitted for this study, and (b) three of the four co-authors (Ted Miguel, Johannes Haushofer, and Michael Walker) do not have an affiliation with GiveDirectly. We have no reason to believe that GiveDirectly’s involvement altered the analysis undertaken. In addition, the GE study team informed us that Paul Niehaus recused himself from final decisions about what the team communicated to GiveWell.
  • When we published our report (about one week ago), we expected that some additional analysis from the GE study would be shared publicly soon (which we still expect). We do not yet have an exact date and do not know precisely what content will be shared (though we expect it to be similar to what was shared with us privately).
Why did GiveWell place little weight on some papers in its analysis of spillover effects?

Some general context on GiveWell’s research that we think is useful for understanding our approach in this case is:

  • We are typically estimating the impact of programs as implemented by particular charities, not aiming to publish formal meta-analyses about program areas as a whole. As noted above, we believe we should have communicated more clearly about this in our original report on spillovers and we will revise the report to clarify.
  • We focus our limited time on the research that we think is most likely to affect our decisions, so our style of analysis is often different from what is typically seen in academia. (We think the difference between the kind of work we do and academic research is captured well by a relevant Rachel Glennerster blog post.)

Consistent with the above, the goal of our spillovers analysis was to make a best guess for the size of the spillover effect of GiveDirectly’s (GD’s) program in Kenya, Uganda, and Rwanda specifically.[8] We are not trying to communicate an opinion on the size of spillover effects of cash transfers in other countries or development economics more broadly. If we were trying to do the latter, we would have considered a much wider range of literature.

We expect that studies that are most similar to GD’s program on basic characteristics such as geographic location and program type will be most useful for predicting spillovers in the GD context. So, we prioritize looking at studies that (1) took place in sub-Saharan Africa, and (2) evaluate unconditional cash transfer programs (further explanation in footnote).[9] We would welcome additional engagement on this topic: that is, (a) to what extent should we believe that effects estimated in studies not meeting these criteria would apply to GD’s cash transfer programs, and (b) are there other criteria that we should have used?

A further factor that causes us to put more weight on the five studies we chose to review deeply is that they all study transfers distributed by GD, which we see as increasing their relevance to GD’s current work (though the specifics of the programs that were studied vary from GD’s current program). We believe that studies that do not meet the above criteria could affect our views on spillovers of GD’s program to some extent, but they would receive lower weight in our conclusions since they are less directly relevant to GD’s program.

We saw further review of studies that did not meet the above criteria as lower priority than a number of other analyses that we think would be more likely to shift our bottom-line estimate of the spillovers of GD’s program. Even though we focused on the subset of studies most relevant to GD’s program, we were not able to combine their results to create a reasonable explicit model of spillover effects because we found that key questions were not answered by the available data (our attempt at an explicit model is in the following footnote).[10] One fundamental challenge is that we are trying to apply estimates of “within-village” spillover effects to predict across-village spillover effects.[11] Additional complications are described here.
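
To give a sense of what an explicit model of this kind involves, here is a minimal sketch in Python. Its structure (scaling a per-household spillover effect by an assumed ratio of non-recipient to recipient households) and every number in it are illustrative placeholders, not figures from our analysis or from the GE study.

```python
# Minimal, hypothetical sketch of an explicit spillover model.
# All parameter values are illustrative placeholders, not GiveWell estimates.

def net_effect_per_recipient(direct_effect,
                             spillover_effect_per_nonrecipient,
                             nonrecipients_per_recipient):
    """Net consumption effect per recipient household: the direct effect on
    the recipient plus spillovers onto nearby non-recipient households."""
    return (direct_effect
            + spillover_effect_per_nonrecipient * nonrecipients_per_recipient)

# Hypothetical inputs: a +$300/year direct consumption gain per recipient,
# a -$20/year spillover per nearby non-recipient household, and 2 such
# households per recipient. The difficulty noted above is that the spillover
# parameter would have to be estimated from "within-village" comparisons but
# applied to across-village spillovers as well.
print(net_effect_per_recipient(300, -20, 2))  # 260
print(net_effect_per_recipient(300, 0, 2))    # 300 if spillovers are minimal
```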

More on why we placed little weight on particular studies that Özler highlighted in his comments:[12]

  • We placed little weight on the following papers in our initial analysis for the reasons given in parentheses: Angelucci & De Giorgi 2009 (conditional transfers, study took place in Mexico), Cunha et al. 2017 (study took place in Mexico), Filmer et al. 2018 (conditional transfers, study took place in the Philippines), and Baird, de Hoop, and Özler 2013 (mix of conditional and unconditional transfers).
  • In addition, the estimates of mental health effects on teenage schoolgirls in Baird, de Hoop, and Özler 2013 seem like they would be relatively less useful for predicting the impacts of spillovers from cash transfers given to households, particularly in villages where almost all households receive transfers, as is often the case in GD’s program.[13]
Why did GiveWell’s analysis of spillovers focus on effects on consumption? Does this imply that GiveWell does not value effects on other outcomes?

Some general context on GiveWell’s research that we think is useful for understanding our approach in this case is:

  • When modeling the cost-effectiveness of any program, there are typically a large number of outcomes that could be included in the model. In our analyses, we focus on the key outcomes that we expect to drive the bulk of the welfare effects of a program.
  • For example, our core cost-effectiveness model primarily considers various programs’ effects on averting deaths and increasing consumption (either immediately or later in life). This means that, e.g., we do not include benefits of averting vision impairment in our cost-effectiveness model for vitamin A supplementation (in part because we expect those effects to be relatively small as a portion of the overall impact of the program).
  • This does not mean that we think excluded outcomes are unimportant. We focus on the largest impacts of programs because (a) we think they are a good proxy for the overall impact of the relevant programs, and (b) having fewer outcomes simplifies our analysis, which leads to less potential for error, better comparability between programs, and a more manageable time investment in modeling.
  • For a deeper assessment of which program impacts we include in and exclude from our core cost-effectiveness model and why, see our model’s “Inclusion/exclusion” sheet.[14] We aim to include outcomes that can be justified by evidence, can feasibly be modeled, and are consistent with how we handle other program outcomes. We revisit our list of excluded outcomes periodically to assess whether such outcomes could lead to a major shift in our cost-effectiveness estimate for a particular program.

In our spillovers analysis, we applied the above principles to try to identify the key welfare effects. Among the five main studies we reviewed on spillovers, the two most relevant outcomes appear to be consumption and subjective well-being. We chose to focus on consumption for the following reasons:

  • Assessing the effects of cash transfers on consumption (rather than subjective well-being) is consistent with how we model the welfare effects of other programs that we think increase consumption in expectation, such as deworming.
  • Distinguishing effects on subjective well-being from effects on consumption in order to avoid double-counting benefits was too complex to do in the time we had available. It seems intuitively likely that standards of living (proxied by consumption) affect subjective well-being. In the Haushofer and Shapiro studies and in the GE study, the spillover effects act in the same direction for both consumption and subjective well-being. We do not think it would be appropriate to simply add subjective well-being effects into our model over and above effects on consumption, since that risks double-counting benefits. (A toy numerical illustration of this concern follows this list.)
  • We do not have a strong argument that consumption is a more robust proxy for “true well-being” than subjective well-being, but given that consumption effects can be more easily compared across our programs we have chosen it as the default option at this point.
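
As a toy numerical illustration of the double-counting concern: the welfare units, the size of each effect, and the assumed overlap between the two channels below are all hypothetical.

```python
# Toy illustration of double-counting consumption and subjective well-being
# (SWB) benefits. All numbers are hypothetical.

consumption_benefit = 1.0   # welfare units attributed to higher consumption
swb_benefit = 0.8           # welfare units implied by measured SWB gains

# If most of the measured SWB gain is itself caused by the consumption gain,
# simply adding the two overstates the total benefit.
naive_total = consumption_benefit + swb_benefit  # 1.8

# A (hypothetical) adjustment: count only the share of the SWB gain judged
# to operate through channels other than consumption.
swb_share_not_via_consumption = 0.25
adjusted_total = (consumption_benefit
                  + swb_benefit * swb_share_not_via_consumption)  # 1.2

print(naive_total, adjusted_total)
```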

We hope to broadly revisit in the future whether we should be placing more weight on measures of subjective well-being across programs. It is possible that additional work on subjective well-being measures would meaningfully change how we assess benefits of programs (for this program and potentially others).

Examples of our questions about how to interpret subjective well-being effects in the cash spillovers literature include:

  • In the Haushofer and Shapiro studies, how should we interpret each of the underlying components of the subjective well-being indices? For example, how does self-reported life satisfaction map onto utility, compared with self-reported happiness?
  • In Haushofer, Reisinger, & Shapiro 2015, there is a statistically significant negative spillover effect on life satisfaction, but there are no statistically significant effects on happiness, depression, stress, cortisol levels, or the overall subjective well-being index (column (4) of Table 1). How should we interpret these findings?
Next steps
  • We hope that there is more public discussion on some of the policy-relevant questions we highlighted in our report and on the other points of uncertainty highlighted throughout this post. Our conclusions on spillovers are very tentative and could be affected substantially by more analysis, so we would greatly appreciate any feedback or pointers to relevant work.[15]
  • We are planning to follow up with Dr. Özler to better understand his views on spillover effects of cash transfers. We have appreciated his previous blog posts on this topic and want to ensure we are getting multiple perspectives on the relevant issues.

Notes

1. For more context on this topic, see our May 2018 blog post.
2. “We plan to reassess the cash transfer evidence base and provide our updated conclusions in the next several months (by November 2018 at the latest). One reason that we do not plan to provide a comprehensive update sooner is that we expect upcoming midline results from GiveDirectly’s “general equilibrium” study, a large and high-quality study explicitly designed to estimate spillover effects, will play a major role in our conclusions. Results from this study are expected to be released in the next few months.” (More.)
3. “In a preliminary analysis shared with GiveWell in September 2018, the researchers did not find evidence for a negative or positive impact on migration, and found no statistically significant impact on income and consumption.” (More.)
4. “We have seen preliminary, confidential results from a 15-year follow-up to Miguel and Kremer 2004. We are not yet able to discuss the results in detail, but they are broadly consistent with the findings from the 10-year follow-up analyzed in Baird et al. 2016.” (More.)
5. “We have seen two modeling studies which model clinical malaria outcomes in areas with ITN coverage for different levels of resistance based on experimental hut trial data. Of these two studies, the most recent study we have seen is unpublished (it was shared with us privately), but we prefer it because the insecticide resistance data it draws from is more recent and more comprehensive.” (More.)
6. “The preliminary endline results did not find any effect of DMI’s program on child mortality (it was powered to detect a reduction of 15% or more), and it found substantially less effect on behavior change than was found at midline. We cannot publicly discuss the details of the endline results we have seen, because they are not yet finalised and because the finalised results will be embargoed prior to publication, but we have informally incorporated the results into our view of DMI’s program effectiveness.” (More.)
7. “The researchers have published an abstract on the study, and shared a more in-depth report with us. The more in-depth report is not yet cleared for publication because the authors are seeking publication in an academic journal.” (More.)
8. This program provides $1,000 unconditional transfers and treats almost all households within target villages in Kenya and Uganda (though still treats only eligible households in Rwanda).
9. On (1): Our understanding is that the nature and size of spillover effects is likely to be highly dependent on the context studied, for example because the extent to which village economies are integrated might differ substantially across contexts (e.g., how close households are to larger markets outside of the village in which they live, how easily goods can be transported, etc.). On (2): We expect that providing cash transfers conditional on behavioral choices is a fairly different intervention from providing unconditional cash transfers, and so may have different spillover effects.
10. We tried to create such an explicit model here (explanation here).
11. GiveDirectly treats almost all households within target villages in Kenya and Uganda (though still treats only eligible households in Rwanda).
12. Note on terminology: In our spillovers analysis report, we talk about studies in terms of “inclusion” and “exclusion.” We may use the term “exclude” differently than it is sometimes used in, e.g., academic meta-analyses. When we say that we have excluded studies, we have typically lightly reviewed their results and placed little weight on them in our conclusions. We did not ignore them entirely, as may happen for papers excluded from an academic meta-analysis. To try to clarify this, in this blog post we have used the term “place little weight.” We will try to be attentive to this in future research that we publish.
13. We expect that local spillover effects via psychological mechanisms are less likely to occur with the current spatial distribution of GD’s program. In GD’s program in Kenya and Uganda, almost all households are treated within its target villages. In addition, the majority of villages within a region are treated in a block. Baird, de Hoop, and Özler 2013 estimate spillover effects within enumeration areas (groups of several villages), and the authors believe that the “detrimental effects on the mental well-being of those randomly excluded from the program in intervention areas is consistent with the idea that an individual’s utility depends on her relative consumption (or income or status) within her peer group” (p. 372). The spatial distribution of GD’s program in Kenya and Uganda makes it more likely that the majority of one’s local peer group receives the same treatment assignment.
14. We have not yet added it, but we plan to add “Subjective well-being” under the list of outcomes excluded in the “Cross-cutting / Structural” section of the sheet, since it may be relevant to all programs.
15. If you are aware of relevant analyses or studies that we have not covered here, please let us know at info@givewell.org.


Josh (GiveWell)

Our updated top charities for giving season 2018

5 years 10 months ago

We’re excited to share our list of top charities for the 2018 giving season. We recommend eight top charities, all of which we also recommended last year.

Our bottom line

We recommend three top charities implementing programs whose primary benefit is reducing deaths. They are:

  • Malaria Consortium (seasonal malaria chemoprevention program)
  • Helen Keller International (vitamin A supplementation program)
  • Against Malaria Foundation

Five of our top charities implement programs that aim to increase recipients’ incomes and consumption. They are:

  • Evidence Action’s Deworm the World Initiative
  • Schistosomiasis Control Initiative
  • Sightsavers (deworming program)
  • The END Fund (deworming program)
  • GiveDirectly

These charities represent the best opportunities we’re aware of to help people, according to our criteria. We expect GiveWell’s recommendations to direct more than $100 million to these organizations collectively over the next year. We expect our top charities to be able to effectively absorb hundreds of millions of dollars beyond that amount.

Our list of top charities is the same as it was last year, with the exception of Evidence Action’s No Lean Season. We removed No Lean Season from the list following our review of the results of a 2017 study of the program.

We also recognize a group of standout charities. We believe these charities are implementing programs that are evidence-backed and may be extremely cost-effective. However, we do not feel as confident in the impact of these organizations as we do in our top charities. We provide more information about our standout organizations here.

Where do we recommend donors give?
  • We recommend that donors choose the “Grants to recommended charities at GiveWell’s discretion” option on our donation forms. We grant these funds quarterly to the GiveWell top charity or top charities where we believe they can do the most good.
  • If you prefer to give to a specific charity, we believe that all of our top charities are outstanding and will use additional funding effectively. If we had additional funds to allocate now, the most likely recipient would be Malaria Consortium to scale up its work providing seasonal malaria chemoprevention.
  • If you have supported GiveWell’s operations in the past, we ask that you maintain your support. If you have not supported GiveWell’s operations in the past, we ask that you consider designating 10 percent of your donation to help fund GiveWell’s operations.
How should donors give?

Conference call to discuss recommendations

We’re holding a conference call on Tuesday, December 4, at 12pm ET/9am PT to discuss our latest recommendations and to answer any questions you have. Sign up here to join the call.

Additional details

Below, we provide:

  • a summary of our research process in 2018,
  • major updates from the last 12 months,
  • our recommended allocation of funding for Good Ventures and our top charities’ remaining room for more funding,
  • our recommendation for donors, including a comparison of our top charities, and
  • information on giving to support GiveWell’s operations.

Our research process in 2018

We plan to summarize all of the research we completed this year in a future post as part of our annual review process. A major focus of 2018 was improving our recommendations in future years, in particular through our work on GiveWell Incubation Grants and completing intervention reports on promising programs.

Below, we highlight the key research that led to our current charity recommendations. This page describes our general process for conducting research.

  • Following existing top charities. We followed the progress and plans of each of our 2017 top charities. We had several conversations with each organization and reviewed documents they shared with us. We published updated reviews of each of our top charities. Key information from this work is available in the following locations:
    • Our page summarizing changes at each of our top charities and standouts in 2018.
    • Our workbook with each charity’s funding needs and our estimates of the cost-effectiveness of filling each need.
    • Our full reviews for each charity are linked from this page.
  • Staying up to date on the research on the interventions implemented by our top charities. Details on some of what we learned are in the section below.
  • Making extensive updates to our cost-effectiveness model and publishing 14 updates to the model over the course of the year. In addition to updating the model with information from the intervention research described above, we added a “country selection” tab (so that users can toggle between overall and country-specific cost-effectiveness estimates) and an “inclusion/exclusion” tab (which lists items we considered for inclusion in or exclusion from the analysis), and we explicitly modeled factors that could lead to wastage (charities failing to use the funds they receive to implement their programs effectively). (A simplified illustration of country-level weighting and a wastage adjustment follows this list.)
  • Completing a review of Zusha! We completed our review of the Georgetown University Initiative on Innovation, Development, and Evaluation—Zusha! Road Safety Campaign and determined that it did not meet all of our criteria to be a top charity. We named Zusha! a standout charity.
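
To sketch roughly what the country-level weighting and a wastage adjustment do, here is a simplified illustration; the country names, cost-effectiveness figures, funding shares, and wastage probability are all hypothetical, and our actual model has many more inputs.

```python
# Simplified, hypothetical illustration of country-specific vs. overall
# cost-effectiveness and of a wastage adjustment. Not GiveWell's actual model.

# Hypothetical cost-effectiveness (multiples of cash transfers) by country,
# and the expected share of marginal funding spent in each country.
country_cost_effectiveness = {"Country A": 12.0, "Country B": 6.0}
funding_share = {"Country A": 0.4, "Country B": 0.6}

# The "overall" figure weights each country's estimate by where marginal
# funds are expected to go; a country-selection toggle would simply report
# one country's figure instead.
overall = sum(country_cost_effectiveness[c] * funding_share[c]
              for c in country_cost_effectiveness)

# Wastage adjustment: discount for the chance that funds are not used to
# implement the program effectively.
prob_funds_used_effectively = 0.9
adjusted = overall * prob_funds_used_effectively

print(overall, adjusted)  # 8.4 and 7.56 with these made-up numbers
```
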
Major updates from the last 12 months

Below, we summarize major updates across our recommended charities over the past year. For detailed information on what changed at each of our top and standout charities, see this page.

  • We removed Evidence Action’s No Lean Season from our top charity list. At the end of 2017, we named No Lean Season, a program that provides loans to support seasonal migration in Bangladesh, as one of GiveWell’s top charities. This year, we updated our assessment of No Lean Season based on preliminary results we received from a 2017 study of the program. These results suggested the program did not successfully induce migration in the 2017 lean season. Taking this new information into account alongside previous studies of the program, we and Evidence Action no longer believe No Lean Season meets our top charity criteria. We provide more details on this decision in this blog post.
  • We received better information about Sightsavers’ deworming program. In previous years, we had limited information from Sightsavers documenting how it knew that its deworming programs were effectively reaching their intended beneficiaries. This year, Sightsavers shared significantly more monitoring information with us. This additional information substantially increased our confidence in Sightsavers’ deworming program. This spreadsheet shows the monitoring we received from Sightsavers in 2018.
  • We reviewed new research on the priority programs implemented by our top charities and updated our views and cost-effectiveness analyses accordingly. Examples of such updates include:
Recommended allocation of funding for Good Ventures and top charities’ remaining room for more funding

Allocation recommended to Good Ventures

Good Ventures is a large foundation with which GiveWell works closely; it has been a major supporter of GiveWell’s top charities since 2011. Each year, we provide recommendations to Good Ventures regarding how we believe it can most effectively allocate its grants to GiveWell’s recommended charities, in terms of the total amount donated (within the constraints of Good Ventures’ planning, based in part on the Open Philanthropy Project’s recommendations on how to allocate funding across time and across cause areas) as well as the distribution between recipient charities.

Because Good Ventures is a major funder that we expect to follow our recommendations, we think it’s important for other donors to take its actions into account; we also want to be transparent about the research that leads us to make our recommendations to Good Ventures. That said, Good Ventures has not finalized its plans for the year and may give differently from what we’ve recommended. We think it’s unlikely that any differences would have major implications for our bottom-line recommendations for other donors.

This year, GiveWell recommended that Good Ventures grant $64.0 million to our recommended charities, allocated as shown in the table below.

Charity | Recommended allocation from Good Ventures | Remaining room for more funding[1]
Malaria Consortium (SMC program) | $26.6 million | $43.9 million
Evidence Action (Deworm the World Initiative) | $10.4 million | $27.0 million
Sightsavers (deworming program) | $9.7 million | $1.6 million
Helen Keller International (VAS program) | $6.5 million | $20.6 million
Against Malaria Foundation | $2.5 million | $72.5 million
Schistosomiasis Control Initiative | $2.5 million | $16.9 million
The END Fund (deworming program) | $2.5 million | $45.8 million
GiveDirectly | $2.5 million | >$100 million
Standout charities | $800,000 (combined) |

We discuss our process for making our recommendation to Good Ventures in detail in this blog post.

Allocation of GiveWell discretionary funds

As part of reviewing our top charities’ funding gaps to make a recommendation to Good Ventures, we also decided how to allocate the $1.1 million in discretionary funding we currently hold. This funding comes from donors who chose to donate to “Grants to recommended charities at GiveWell’s discretion” in recent months. We decided to allocate it to Malaria Consortium’s seasonal malaria chemoprevention program, due to how large and cost-effective we believe Malaria Consortium’s funding gap is.

Top charities’ remaining room for more funding

Although we are expecting to direct a significant amount of funding to our top charities ($65.1 million between Good Ventures and our discretionary funding), we believe that nearly all of our top charities could productively absorb considerably more funding than we expect them to receive from Good Ventures, our discretionary funding, and additional donations we direct based on our recommendation. This spreadsheet lists all of our top charities’ funding needs; rows 70-79 show total funding gaps by charity.

Our recommendation for donors

The bottom line
  • We recommend that donors choose the option to support “Grants to recommended charities at GiveWell’s discretion” on our donate forms. We grant these funds quarterly to the GiveWell top charity or top charities where we believe they can do the most good. We take into account charities’ funding needs and donations they have received from other sources when deciding where to grant discretionary funds. (The principles we outline in this post are indicative of how we will make decisions on what to fund.) We then make these grants to the highest-value funding opportunities we see among our recommended charities. This page lists discretionary grants we have made since 2014.
  • If you prefer to give to a specific charity, we believe that all of our top charities are outstanding and will use additional funding effectively. See below for information that may be helpful in deciding between charities we recommend.
  • If we had additional funds to allocate, the most likely recipient would be Malaria Consortium to scale up its work providing seasonal malaria chemoprevention.
Comparing our top charities

If you’re interested in donating to a specific top charity or charities, the following information may be helpful as you compare the options on our list. The table summarizes key facts about our top charities; column headings are defined below.

Note: the cost-effectiveness estimates we present in this post differ from those in our published cost-effectiveness analysis for a number of reasons.[2]

Organization | Modeled cost-effectiveness (relative to cash transfers) at the present margin[3] | Primary benefits of the intervention | Quality of the organization’s communication | Ongoing monitoring and likelihood of detecting future problems
Malaria Consortium (SMC program) | 8.8 | Averting deaths of children under 5 | Strong | Strong
Evidence Action (Deworm the World Initiative) | See footnote[4] | Possibly increasing income in adulthood | Strong | Strong
Helen Keller International (VAS program) | 6.4 | Averting deaths of children under 5 | Strong | Moderate
Against Malaria Foundation | 7.3 | Averting deaths | Moderate | Moderate
Schistosomiasis Control Initiative | 8.3 | Possibly increasing income in adulthood | Moderate | Relatively weak
Sightsavers (deworming program) | See footnote[5] | Possibly increasing income in adulthood | Moderate | Moderate
The END Fund (deworming program) | 5.4 | Possibly increasing income in adulthood | Moderate | Relatively weak
GiveDirectly | 1 | Immediately increasing income and assets | Strong | Strong

Definitions of column headings follow:

  • Estimated cost-effectiveness (relative to cash transfers) at the present margin. We recommended that Good Ventures give $64.0 million to our top and standout charities, prioritizing the funding gaps that we believe are most cost-effective. The table above shows our estimates for the cost-effectiveness of additional donations to each charity, after accounting for the $64.0 million we’ve recommended that Good Ventures give (our recommendation won’t necessarily be followed, but we think it’s unlikely that differences will be large enough to affect our bottom-line recommendation to donors).
  • Primary benefits of the intervention. This column describes the major benefit we see to supporting a charity implementing this intervention.
  • Quality of the organization’s communication. In most cases, we have spent dozens or hundreds of hours interacting with our top charities. Here, we share our subjective impression of how well each organization has communicated with us. Our assessment of the quality of a charity’s communications is driven by whether we have been able to resolve our questions (particularly our less straightforward questions) about the organization’s activities, impact, and plans; how much time and effort was required to resolve those questions; how often the charity has sent us information that we later learned was inaccurate; and how direct we believe the charity is in acknowledging its weaknesses and mistakes.

    The organizations that stand out for high-quality communications are those that have most thoughtfully and completely answered our questions; brought problems with the program to our attention; and communicated clearly with us about timelines for providing additional information. High-quality communications reduce the time that we need to spend answering each question and therefore allow us to gain a greater degree of confidence in an organization. More importantly, our communication with an organization is one of the few ways that we can directly observe an organization’s general competence and thoughtfulness, so we see this as a proxy for unobserved ways in which the organization’s staff affect the impact of the program.

  • Ongoing monitoring and likelihood of detecting future problems. The quality of the monitoring we have received from our top charities varies widely, although we believe it stands out from that of the majority of charities. Ideally, the monitoring data charities collect would be representative of the program overall (by sampling all or a random selection of locations or other relevant units); would measure the outcomes of greatest interest for understanding the impact of the program; and would use methods that result in a low risk of bias or fraud in the results. In assessing the quality of a charity’s monitoring, we ask ourselves, “how likely do we believe it is that there are substantive problems with the program that are not detected by this monitoring?”

    Monitoring results inform our cost-effectiveness analyses directly. In addition, we believe that the quality of an organization’s monitoring gives us information that is not fully captured in these analyses. Similar to how we view communication quality, we believe that understanding how an organization designs and implements monitoring is an opportunity to observe its general competence and degree of openness to learning and program improvement.

Other key factors donors might want to consider when making their giving decision:

  • As shown in the table above, our top charities implement programs with different primary benefits: some primarily avert deaths; others primarily increase incomes or consumption. Donors’ preference for programs that avert deaths relative to those that increase incomes (or how one weighs the value of averting a death at a given cost or increasing incomes a certain amount at a given cost) depends on their moral values. The cost-effectiveness estimates shown above rely on the GiveWell research team’s moral values. For more on how we (and others) compare the “good” accomplished by different programs, see this blog post. Donors may make a copy of our cost-effectiveness model to input their own moral weights and see how that impacts the relative cost-effectiveness of our top charities. (A hypothetical sketch of how moral weights feed into such a comparison follows this list.)
  • The table above shows cost-effectiveness estimates for different charities. We put significant weight on cost-effectiveness figures, but they have limitations. Read more about how we use cost-effectiveness estimates in this blog post.
  • Ultimately, donors are faced with a decision about how to weigh estimated cost-effectiveness (incorporating their moral values) against additional information about an organization that we have not explicitly modeled. We’ve written about this choice in the context of choosing between GiveDirectly and SCI in this 2016 blog post.
  • Four of our top charities implement deworming programs. We recommend the provision of deworming treatments to children for its possible impact on recipients’ incomes in adulthood. We work in an expected-value framework; in other words, we’re willing to support a higher-risk intervention if it has the potential for higher impact (more in this post about our worldview). Deworming is such an intervention: it may turn out to have very little impact, but we believe that risk is outweighed by the possibility that it has a very large impact, given how cheap it is to implement. (The sketch following this list includes a toy expected-value calculation of this kind.) We describe our assessment of deworming in this summary blog post as well as this detailed post. Donors with lower risk tolerance may choose not to support charities implementing deworming programs.
  • The table above lists our views on the quality of each of our top charities’ monitoring. This 2016 blog post describes our view of AMF’s monitoring and may give donors more insight into how we think about monitoring quality.
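To make the roles of moral weights and expected value above more concrete, here is a minimal, purely illustrative sketch in Python. Every number in it is a hypothetical placeholder rather than a figure from our cost-effectiveness model, and the variable and function names are invented for this example; the point is only that which program looks best per dollar depends on the weights and probabilities a donor chooses.

```python
# Illustrative only: all numbers are hypothetical placeholders, not GiveWell estimates.

# Stylized costs of achieving one "outcome" for two program types (hypothetical).
cost_per_death_averted = 3_500      # dollars to avert one death
cost_per_income_unit = 1_000        # dollars to produce one unit of income benefit

# A donor's moral weights: units of value assigned to each outcome (hypothetical).
value_per_death_averted = 50
value_per_income_unit = 1           # benchmark unit

# For a riskier program like deworming, use expected value:
# probability the long-run effect is real, times the value if it is (hypothetical).
prob_deworming_effect = 0.1
value_per_treatment_if_real = 0.2
cost_per_deworming_treatment = 1.0  # dollars per treatment

def value_per_dollar(value_per_outcome: float, cost_per_outcome: float) -> float:
    """Units of value purchased per dollar donated."""
    return value_per_outcome / cost_per_outcome

deaths_program = value_per_dollar(value_per_death_averted, cost_per_death_averted)
income_program = value_per_dollar(value_per_income_unit, cost_per_income_unit)
deworming_expected = value_per_dollar(
    prob_deworming_effect * value_per_treatment_if_real,
    cost_per_deworming_treatment,
)

print(f"Deaths-averting program: {deaths_program:.4f} units of value per dollar")
print(f"Income program:          {income_program:.4f} units of value per dollar")
print(f"Deworming (expected):    {deworming_expected:.4f} units of value per dollar")
```

With these made-up inputs, the deworming line comes out ahead even though there is a 90 percent chance it produces no benefit; changing value_per_death_averted or prob_deworming_effect can easily reorder the three lines of output, which is exactly why we encourage donors to plug their own weights into a copy of our model.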
Giving to support GiveWell’s operations

GiveWell is currently in a financially stable position. Over the next few years, we are planning to significantly increase our spending, driven by hiring additional research and outreach staff. We project that our revenue will approximately equal our expenses over the next few years; however, this projection includes an expectation of growth in the level of operating support we receive.

We retain our “excess assets policy” to ensure that if we fundraise for our own operations beyond a certain level, we will grant the excess to our recommended charities. In June of 2018, we applied our excess assets policy and designated $1.75 million in unrestricted funding for grants to recommended charities.

We cap the amount of operating support we ask Good Ventures to provide to GiveWell at 20 percent of our operating expenses, for reasons described here. We ask that donors who use GiveWell’s research consider the following:

  • If you have supported GiveWell’s operations in the past, we ask that you maintain your support. Having a strong base of consistent operations support allows us to make valuable hires when opportunities arise and to minimize staff time spent on fundraising for our operating expenses.
  • If you have not supported GiveWell’s operations in the past, we ask that you designate 10 percent of your donation to help fund GiveWell’s operations. You can do this by selecting the option to “Add 10% to help fund GiveWell’s operations” on our credit card donation form, or by letting us know how you would like to designate your funding when giving another way.
Questions?

We’re happy to answer questions in the comments below. Please also feel free to reach out directly with any questions.

This post was written by Andrew Martin, Catherine Hollander, Elie Hassenfeld, James Snowden, and Josh Rosenberg.

Notes

1. This column displays our top charities’ remaining room for more funding, or the amount we believe they can use effectively, for the next three years (2019-2021), after accounting for the $64.0 million we’ve recommended that Good Ventures give (our recommendation won’t necessarily be followed, but we think it’s unlikely that differences will be large enough to affect our bottom-line recommendation to donors) and an additional $1.1 million from GiveWell’s discretionary funding.

2. “The cost-effectiveness estimates in this sheet, which we used to inform our recommended allocation differ from those in our published cost-effectiveness analysis because (1) we apply a number of adjustments to incorporate additional information (2) we apply different weightings to each program (which affects the weighted average of cost-effectiveness).” Source: Giving Season 2018 – Allocation (public), “Cost-effectiveness results” tab, row 17. Additional details at the link.

3. For sources on the estimates included in this table, see this spreadsheet, “Cost-effectiveness results” tab. The estimates presented here differ from the estimates presented in our recommendation to Good Ventures because they estimate cost-effectiveness on the margin, if Good Ventures were to follow our recommendations.

4. At the margin, we expect additional funding to Deworm the World Initiative to support its programs in Pakistan and Nigeria in 2021 as well as Deworm the World’s general reserves. We think these are broadly good uses of funds, but our cost-effectiveness model is not currently built to meaningfully model the cost-effectiveness of reserves. In the absence of more information, we would guess that additional funding to Deworm the World would be roughly in the range of our estimate for Deworm the World’s overall organizational cost-effectiveness (~15x as cost-effective as cash transfers), but we have not analyzed the details of additional spending at the current margin enough to be confident in that estimate. However, if Good Ventures generally follows our recommended allocation, we expect that Deworm the World will have sufficient funding to continue its most time-sensitive work and we can decide whether to fund other marginal opportunities at a later date.

5. We do not have a strong sense of the cost-effectiveness of additional funds to Sightsavers at the current margin. Our cost-effectiveness estimate of Sightsavers’ remaining funding gap is 15.4x as cost-effective as cash transfers, but this fails to capture a number of features particular to the program Sightsavers would fund on the margin. We would guess that the value of marginal funding to Sightsavers is roughly in the range of our overall estimate for Sightsavers of ~12x as cost-effective as cash transfers. One major reason for our uncertainty follows: as discussed here, Sightsavers’ prioritization of how to spend additional funds differed substantially from what would be implied by our cost-effectiveness analysis, but we think that this discrepancy may largely be due to factors that our model does not capture or ways our model may be inaccurate; therefore, it is difficult to rely on our model to assess the cost-effectiveness of specific remaining country funding gaps.


The post Our updated top charities for giving season 2018 appeared first on The GiveWell Blog.

Catherine Hollander

Our updated top charities for giving season 2018

5 years 10 months ago

We’re excited to share our list of top charities for the 2018 giving season. We recommend eight top charities, all of which we also recommended last year.

Our bottom line

We recommend three top charities implementing programs whose primary benefit is reducing deaths. They are:

Five of our top charities implement programs that aim to increase recipients’ incomes and consumption. They are:

These charities represent the best opportunities we’re aware of to help people, according to our criteria. We expect GiveWell’s recommendations to direct more than $100 million to these organizations collectively over the next year. We expect our top charities to be able to effectively absorb hundreds of millions of dollars beyond that amount.

Our list of top charities is the same as it was last year, with the exception of Evidence Action’s No Lean Season. We removed No Lean Season from the list following our review of the results of a 2017 study of the program.

We also recognize a group of standout charities. We believe these charities are implementing programs that are evidence-backed and may be extremely cost-effective. However, we do not feel as confident in the impact of these organizations as we do in our top charities. We provide more information about our standout organizations here.

Where do we recommend donors give?
  • We recommend that donors choose the “Grants to recommended charities at GiveWell’s discretion” option on our donation forms. We grant these funds quarterly to the GiveWell top charity or top charities where we believe they can do the most good.
  • If you prefer to give to a specific charity, we believe that all of our top charities are outstanding and will use additional funding effectively. If we had additional funds to allocate now, the most likely recipient would be Malaria Consortium to scale up its work providing seasonal malaria chemoprevention.
  • If you have supported GiveWell’s operations in the past, we ask that you maintain your support. If you have not supported GiveWell’s operations in the past, we ask that you consider designating 10 percent of your donation to help fund GiveWell’s operations.
How should donors give? Conference call to discuss recommendations

We’re holding a conference call on Tuesday, December 4, at 12pm ET/9am PT to discuss our latest recommendations and to answer any questions you have. Sign up here to join the call.

Additional details

Below, we provide:

Our research process in 2018

We plan to summarize all of the research we completed this year in a future post as part of our annual review process. A major focus of 2018 was improving our recommendations in future years, in particular through our work on GiveWell Incubation Grants and completing intervention reports on promising programs.

Below, we highlight the key research that led to our current charity recommendations. This page describes our general process for conducting research.

  • Following existing top charities. We followed the progress and plans of each of our 2017 top charities. We had several conversations with each organization and reviewed documents they shared with us. We published updated reviews of each of our top charities. Key information from this work is available in the following locations:
    • Our page summarizing changes at each of our top charities and standouts in 2018.
    • Our workbook with each charity’s funding needs and our estimates of the cost-effectiveness of filling each need.
    • Our full reviews for each charity are linked from this page.
  • Staying up to date on the research on the interventions implemented by our top charities. Details on some of what we learned in the section below.
  • Making extensive updates to our cost-effectiveness model and publishing 14 updates to the model over the course of the year. In addition to updating our cost-effectiveness model with information from the intervention research described above, we added a “country selection” tab to our cost-effectiveness analysis (so that users can toggle between overall and country-specific cost-effectiveness estimates); an “inclusion/exclusion” tab, which lists different items that we considered whether or not to account for in our cost-effectiveness analysis; and we explicitly modeled factors that could lead to wastage (charities failing to use the funds they receive to implement their programs effectively).
  • Completing a review of Zusha! We completed our review of the Georgetown University Initiative on Innovation, Development, and Evaluation—Zusha! Road Safety Campaign and determined that it did not meet all of our criteria to be a top charity. We named Zusha! a standout charity.
Major updates from the last 12 months

Below, we summarize major updates across our recommended charities over the past year. For detailed information on what changed at each of our top and standout charities, see this page.

  • We removed Evidence Action’s No Lean Season from our top charity list. At the end of 2017, we named No Lean Season, a program that provides loans to support seasonal migration in Bangladesh, as one of GiveWell’s top charities. This year, we updated our assessment of No Lean Season based on preliminary results we received from a 2017 study of the program. These results suggested the program did not successfully induce migration in the 2017 lean season. Taking this new information into account alongside previous studies of the program, we and Evidence Action no longer believe No Lean Season meets our top charity criteria. We provide more details on this decision in this blog post.
  • We received better information about Sightsavers’ deworming program. In previous years, we had limited information from Sightsavers documenting how it knew that its deworming programs were effectively reaching their intended beneficiaries. This year, Sightsavers shared significantly more monitoring information with us. This additional information substantially increased our confidence in Sightsavers’ deworming program. This spreadsheet shows the monitoring we received from Sightsavers in 2018.
  • We reviewed new research on the priority programs implemented by our top charities and updated our views and cost-effectiveness analyses accordingly. Examples of such updates include:
Recommended allocation of funding for Good Ventures and top charities’ remaining room for more funding Allocation recommended to Good Ventures

Good Ventures is a large foundation with which GiveWell works closely; it has been a major supporter of GiveWell’s top charities since 2011. Each year, we provide recommendations to Good Ventures regarding how we believe it can most effectively allocate its grants to GiveWell’s recommended charities, in terms of the total amount donated (within the constraints of Good Ventures’ planning, based in part on the Open Philanthropy Project’s recommendations on how to allocate funding across time and across cause areas) as well as the distribution between recipient charities.

Because Good Ventures is a major funder that we expect to follow our recommendations, we think it’s important for other donors to take its actions into account; we also want to be transparent about the research that leads us to make our recommendations to Good Ventures. That said, Good Ventures has not finalized its plans for the year and may give differently from what we’ve recommended. We think it’s unlikely that any differences would have major implications for our bottom-line recommendations for other donors.

This year, GiveWell recommended that Good Ventures grant $64.0 million to our recommended charities, allocated as shown in the table below.

Charity Recommended allocation from Good Ventures Remaining room for more funding1This column displays our top charities’ remaining room for more funding, or the amount we believe they can use effectively, for the next three years (2019-2021), after accounting for the $64.0 million we’ve recommended that Good Ventures give (our recommendation won’t necessarily be followed, but we think it’s unlikely that differences will be large enough to affect our bottom-line recommendation to donors) and an additional $1.1 million from GiveWell’s discretionary funding. jQuery("#footnote_plugin_tooltip_1").tooltip({ tip: "#footnote_plugin_tooltip_text_1", tipClass: "footnote_tooltip", effect: "fade", fadeOutSpeed: 100, predelay: 400, position: "top right", relative: true, offset: [10, 10] }); Malaria Consortium (SMC program) $26.6 million $43.9 million Evidence Action (Deworm the World Initiative) $10.4 million $27.0 million Sightsavers (deworming program) $9.7 million $1.6 million Helen Keller International (VAS program) $6.5 million $20.6 million Against Malaria Foundation $2.5 million $72.5 million Schistosomiasis Control Initiative $2.5 million $16.9 million The END Fund (deworming program) $2.5 million $45.8 million GiveDirectly $2.5 million >$100 million Standout charities $800,000 (combined)

We discuss our process for making our recommendation to Good Ventures in detail in this blog post.

Allocation of GiveWell discretionary funds

As part of reviewing our top charities’ funding gaps to make a recommendation to Good Ventures, we also decided how to allocate the $1.1 million in discretionary funding we currently hold. The latter comes from donors who chose to donate to “Grants to recommended charities at GiveWell’s discretion” in recent months. We decided to allocate this funding to Malaria Consortium’s seasonal malaria chemoprevention program, due to how large and cost-effective we believe Malaria Consortium’s funding gap is.

Top charities’ remaining room for more funding

Although we are expecting to direct a significant amount of funding to our top charities ($65.1 million between Good Ventures and our discretionary funding), we believe that nearly all of our top charities could productively absorb considerably more funding than we expect them to receive from Good Ventures, our discretionary funding, and additional donations we direct based on our recommendation. This spreadsheet lists all of our top charities’ funding needs; rows 70-79 show total funding gaps by charity.

Our recommendation for donors The bottom line
  • We recommend that donors choose the option to support “Grants to recommended charities at GiveWell’s discretion” on our donate forms. We grant these funds quarterly to the GiveWell top charity or top charities where we believe they can do the most good. We take into account charities’ funding needs and donations they have received from other sources when deciding where to grant discretionary funds. (The principles we outline in this post are indicative of how we will make decisions on what to fund.) We then make these grants to the highest-value funding opportunities we see among our recommended charities. This page lists discretionary grants we have made since 2014.
  • If you prefer to give to a specific charity, we believe that all of our top charities are outstanding and will use additional funding effectively. See below for information that may be helpful in deciding between charities we recommend.
  • If we had additional funds to allocate, the most likely recipient would be Malaria Consortium to scale up its work providing seasonal malaria chemoprevention.
Comparing our top charities

If you’re interested in donating to a specific top charity or charities, the following information may be helpful as you compare the options on our list. The table summarizes key facts about our top charities; column headings are defined below.

Note: the cost-effectiveness estimates we present in this post differ from those in our published cost-effectiveness analysis for a number of reasons.2“The cost-effectiveness estimates in this sheet, which we used to inform our recommended allocation differ from those in our published cost-effectiveness analysis because (1) we apply a number of adjustments to incorporate additional information (2) we apply different weightings to each program (which affects the weighted average of cost-effectiveness).” Source: Giving Season 2018 – Allocation (public), “Cost-effectiveness results” tab, row 17. Additional details at the link. jQuery("#footnote_plugin_tooltip_2").tooltip({ tip: "#footnote_plugin_tooltip_text_2", tipClass: "footnote_tooltip", effect: "fade", fadeOutSpeed: 100, predelay: 400, position: "top right", relative: true, offset: [10, 10] });

Organization Modeled cost-effectiveness (relative to cash transfers) at the present margin3For sources on the estimates included in this table, see this spreadsheet, “Cost-effectiveness results” tab. The estimates presented here differ from the estimates presented in our recommendation to Good Ventures because they estimate cost-effectiveness on the margin, if Good Ventures were to follow our recommendations. jQuery("#footnote_plugin_tooltip_3").tooltip({ tip: "#footnote_plugin_tooltip_text_3", tipClass: "footnote_tooltip", effect: "fade", fadeOutSpeed: 100, predelay: 400, position: "top right", relative: true, offset: [10, 10] }); Primary benefits of the intervention Quality of the organization’s communication Ongoing monitoring and likelihood of detecting future problems Malaria Consortium (SMC program) 8.8 Averting deaths of children under 5 Strong Strong Evidence Action (Deworm the World Initiative) See footnote4At the margin, we expect additional funding to Deworm the World Initiative to support its programs in Pakistan and Nigeria in 2021 as well as Deworm the World’s general reserves. We think these are broadly good uses of funds, but our cost-effectiveness model is not currently built to meaningfully model the cost-effectiveness of reserves. In the absence of more information, we would guess that additional funding to Deworm the World would be roughly in the range of our estimate for Deworm the World’s overall organizational cost-effectiveness (~15x as cost-effective as cash transfers), but we have not analyzed the details of additional spending at the current margin enough to be confident in that estimate. However, if Good Ventures generally follows our recommended allocation, we expect that Deworm the World will have sufficient funding to continue its most time-sensitive work and we can decide whether to fund other marginal opportunities at a later date. jQuery("#footnote_plugin_tooltip_4").tooltip({ tip: "#footnote_plugin_tooltip_text_4", tipClass: "footnote_tooltip", effect: "fade", fadeOutSpeed: 100, predelay: 400, position: "top right", relative: true, offset: [10, 10] }); Possibly increasing income in adulthood Strong Strong Helen Keller International (VAS program) 6.4 Averting deaths of children under 5 Strong Moderate Against Malaria Foundation 7.3 Averting deaths Moderate Moderate Schistosomiasis Control Initiative 8.3 Possibly increasing income in adulthood Moderate Relatively weak Sightsavers (deworming program) See footnote5We do not have a strong sense of the cost-effectiveness of additional funds to Sightsavers at the current margin. Our cost-effectiveness estimate of Sightsavers’ remaining funding gap is 15.4x as cost-effective as cash transfers, but this fails to capture a number of features particular to the program Sightsavers would fund on the margin. We would guess that the value of marginal funding to Sightsavers is roughly in the range of our overall estimate for Sightsavers of ~12x as cost-effective as cash transfers.

One major reason for our uncertainty follows. As discussed here, Sightsavers’ prioritization of how to spend additional funds differed substantially from what would be implied by our cost-effectiveness analysis, but we think that this discrepancy may largely be due to factors that our model does not capture or ways our model may be inaccurate; therefore, it is difficult to rely on our model to assess the cost-effectiveness of specific remaining country funding gaps.

jQuery("#footnote_plugin_tooltip_5").tooltip({ tip: "#footnote_plugin_tooltip_text_5", tipClass: "footnote_tooltip", effect: "fade", fadeOutSpeed: 100, predelay: 400, position: "top right", relative: true, offset: [10, 10] }); Possibly increasing income in adulthood Moderate Moderate The END Fund (deworming program) 5.4 Possibly increasing income in adulthood Moderate Relatively weak GiveDirectly 1 Immediately increasing income and assets Strong Strong

Definitions of column headings follow:

  • Estimated cost-effectiveness (relative to cash transfers) at the present margin. We recommended that Good Ventures give $64.0 million to our top and standout charities, prioritizing the funding gaps that we believe are most cost-effective. The table above shows our estimates for the cost-effectiveness of additional donations to each charity, after accounting for the $64.0 million we’ve recommended that Good Ventures give (our recommendation won’t necessarily be followed, but we think it’s unlikely that differences will be large enough to affect our bottom-line recommendation to donors).
  • Primary benefits of the intervention. This column describes the major benefit we see to supporting a charity implementing this intervention.
  • Quality of the organization’s communication. In most cases, we have have spent dozens or hundreds of hours interacting with our top charities. Here, we share our subjective impression of how well each organization has communicated with us. Our assessment of the quality of a charity’s communications is driven by whether we have been able to resolve our questions—particularly our less straightforward questions—about the organization’s activities, impact, and plans; how much time and effort was required to resolve those questions; how often the charity has sent us information that we later learned is inaccurate; and how direct we believe the charity is in acknowledging their weaknesses and mistakes.

    The organizations that stand out for high-quality communications are those that have most thoughtfully and completely answered our questions; brought problems with the program to our attention; and communicated clearly with us about timelines for providing additional information. High-quality communications reduce the time that we need to spend answering each question and therefore allow us to gain a greater degree of confidence in an organization. More importantly, our communication with an organization is one of the few ways that we can directly observe an organization’s general competence and thoughtfulness, so we see this as a proxy for unobserved ways in which the organization’s staff affect the impact of the program.

  • Ongoing monitoring and likelihood of detecting future problems. The quality of the monitoring we have received from our top charities varies widely, although we believe it stands out from that of the majority of charities. Ideally, the monitoring data charities collect would be representative of the program overall (by sampling all or a random selection of locations or other relevant units); would measure the outcomes of greatest interest for understanding the impact of the program; and would use methods that result in a low risk of bias or fraud in the results. In assessing the quality of a charity’s monitoring, we ask ourselves, “how likely do we believe it is that there are substantive problems with the program that are not detected by this monitoring?”

    Monitoring results inform our cost-effectiveness analyses directly. In addition, we believe that the quality of an organization’s monitoring give us information that is not fully captured in these analyses. Similar to how we view communication quality, we believe that understanding how an organization designs and implements monitoring is a opportunity to observe its general competence and degree of openness to learning and program improvement.

Other key factors donors might want to consider when making their giving decision:

  • As shown in the table above, our top charities implement programs with different primary benefits: some primarily avert deaths; others primarily increase incomes or consumption. Donors’ preference for programs that avert deaths relative to those that increase incomes (or how one weighs the value of averting a death at a given cost or increasing incomes a certain amount at a given cost) depends on their moral values. The cost-effectiveness estimates shown above rely on the GiveWell research team’s moral values. For more on how we (and others) compare the “good” accomplished by different programs, see this blog post. Donors may make a copy of our cost-effectiveness model to input their own moral weights and see how that impacts the relative cost-effectiveness of our top charities.
  • The table above shows cost-effectiveness estimates for different charities. We put significant weight on cost-effectiveness figures, but they have limitations. Read more about how we use cost-effectiveness estimates in this blog post.
  • Ultimately, donors are faced with a decision about how to weigh estimated cost-effectiveness (incorporating their moral values) against additional information about an organization that we have not explicitly modeled. We’ve written about this choice in the context of choosing between GiveDirectly and SCI in this 2016 blog post.
  • Four of our top charities implement deworming programs. We recommend the provision of deworming treatments to children for its possible impact on recipients’ incomes in adulthood. We work in an expected value framework; in other words, we’re willing to support a higher-risk intervention if it has the potential for higher impact (more in this post about our worldview). Deworming is such an intervention. We believe that deworming may have very little impact, but that risk is outweighed by the possibility that it has very large impact, and it’s very cheap to implement. We describe our assessment of deworming in this summary blog post as well as this detailed post. Donors who have lower risk tolerance may choose not to support charities implementing deworming programs.
  • The table above lists our views on the quality of each of our top charities’ monitoring. This 2016 blog post describes our view of AMF’s monitoring and may give donors more insight into how we think about monitoring quality.
Giving to support GiveWell’s operations

GiveWell is currently in a financially stable position. Over the next few years, we are planning to significantly increase our spending, driven by hiring additional research and outreach staff. We project that our revenue will approximately equal our expenses over the next few years; however, this projection includes an expectation of growth in the level of operating support we receive.

We retain our “excess assets policy” to ensure that if we fundraise for our own operations beyond a certain level, we will grant the excess to our recommended charities. In June of 2018, we applied our excess assets policy and designated $1.75 million in unrestricted funding for grants to recommended charities.

We cap the amount of operating support we ask Good Ventures to provide to GiveWell at 20 percent of our operating expenses, for reasons described here. We ask that donors who use GiveWell’s research consider the following:

  • If you have supported GiveWell’s operations in the past, we ask that you maintain your support. Having a strong base of consistent operations support allows us to make valuable hires when opportunities arise and to minimize staff time spent on fundraising for our operating expenses.
  • If you have not supported GiveWell’s operations in the past, we ask that you designate 10 percent of your donation to help fund GiveWell’s operations. This can be done by selecting the option to “Add 10% to help fund GiveWell’s operations” on our credit card donation form or letting us know how you would like to designate your funding when giving another way.
Questions?

We’re happy to answer questions in the comments below. Please also feel free to reach out directly with any questions.

This post was written by Andrew Martin, Catherine Hollander, Elie Hassenfeld, James Snowden, and Josh Rosenberg.

Notes   [ + ]

1. ↑ This column displays our top charities’ remaining room for more funding, or the amount we believe they can use effectively, for the next three years (2019-2021), after accounting for the $64.0 million we’ve recommended that Good Ventures give (our recommendation won’t necessarily be followed, but we think it’s unlikely that differences will be large enough to affect our bottom-line recommendation to donors) and an additional $1.1 million from GiveWell’s discretionary funding. 2. ↑ “The cost-effectiveness estimates in this sheet, which we used to inform our recommended allocation differ from those in our published cost-effectiveness analysis because (1) we apply a number of adjustments to incorporate additional information (2) we apply different weightings to each program (which affects the weighted average of cost-effectiveness).” Source: Giving Season 2018 – Allocation (public), “Cost-effectiveness results” tab, row 17. Additional details at the link. 3. ↑ For sources on the estimates included in this table, see this spreadsheet, “Cost-effectiveness results” tab. The estimates presented here differ from the estimates presented in our recommendation to Good Ventures because they estimate cost-effectiveness on the margin, if Good Ventures were to follow our recommendations. 4. ↑ At the margin, we expect additional funding to Deworm the World Initiative to support its programs in Pakistan and Nigeria in 2021 as well as Deworm the World’s general reserves. We think these are broadly good uses of funds, but our cost-effectiveness model is not currently built to meaningfully model the cost-effectiveness of reserves. In the absence of more information, we would guess that additional funding to Deworm the World would be roughly in the range of our estimate for Deworm the World’s overall organizational cost-effectiveness (~15x as cost-effective as cash transfers), but we have not analyzed the details of additional spending at the current margin enough to be confident in that estimate. However, if Good Ventures generally follows our recommended allocation, we expect that Deworm the World will have sufficient funding to continue its most time-sensitive work and we can decide whether to fund other marginal opportunities at a later date. 5. ↑ We do not have a strong sense of the cost-effectiveness of additional funds to Sightsavers at the current margin. Our cost-effectiveness estimate of Sightsavers’ remaining funding gap is 15.4x as cost-effective as cash transfers, but this fails to capture a number of features particular to the program Sightsavers would fund on the margin. We would guess that the value of marginal funding to Sightsavers is roughly in the range of our overall estimate for Sightsavers of ~12x as cost-effective as cash transfers.

One major reason for our uncertainty follows. As discussed here, Sightsavers’ prioritization of how to spend additional funds differed substantially from what would be implied by our cost-effectiveness analysis, but we think that this discrepancy may largely be due to factors that our model does not capture or ways our model may be inaccurate; therefore, it is difficult to rely on our model to assess the cost-effectiveness of specific remaining country funding gaps.

function footnote_expand_reference_container() { jQuery("#footnote_references_container").show(); jQuery("#footnote_reference_container_collapse_button").text("-"); } function footnote_collapse_reference_container() { jQuery("#footnote_references_container").hide(); jQuery("#footnote_reference_container_collapse_button").text("+"); } function footnote_expand_collapse_reference_container() { if (jQuery("#footnote_references_container").is(":hidden")) { footnote_expand_reference_container(); } else { footnote_collapse_reference_container(); } } function footnote_moveToAnchor(p_str_TargetID) { footnote_expand_reference_container(); var l_obj_Target = jQuery("#" + p_str_TargetID); if(l_obj_Target.length) { jQuery('html, body').animate({ scrollTop: l_obj_Target.offset().top - window.innerHeight/2 }, 1000); } }

The post Our updated top charities for giving season 2018 appeared first on The GiveWell Blog.

Catherine Hollander

Our updated top charities for giving season 2018

5 years 10 months ago

We’re excited to share our list of top charities for the 2018 giving season. We recommend eight top charities, all of which we also recommended last year.

Our bottom line

We recommend three top charities implementing programs whose primary benefit is reducing deaths. They are:

Five of our top charities implement programs that aim to increase recipients’ incomes and consumption. They are:

These charities represent the best opportunities we’re aware of to help people, according to our criteria. We expect GiveWell’s recommendations to direct more than $100 million to these organizations collectively over the next year. We expect our top charities to be able to effectively absorb hundreds of millions of dollars beyond that amount.

Our list of top charities is the same as it was last year, with the exception of Evidence Action’s No Lean Season. We removed No Lean Season from the list following our review of the results of a 2017 study of the program.

We also recognize a group of standout charities. We believe these charities are implementing programs that are evidence-backed and may be extremely cost-effective. However, we do not feel as confident in the impact of these organizations as we do in our top charities. We provide more information about our standout organizations here.

Where do we recommend donors give?
  • We recommend that donors choose the “Grants to recommended charities at GiveWell’s discretion” option on our donation forms. We grant these funds quarterly to the GiveWell top charity or top charities where we believe they can do the most good.
  • If you prefer to give to a specific charity, we believe that all of our top charities are outstanding and will use additional funding effectively. If we had additional funds to allocate now, the most likely recipient would be Malaria Consortium to scale up its work providing seasonal malaria chemoprevention.
  • If you have supported GiveWell’s operations in the past, we ask that you maintain your support. If you have not supported GiveWell’s operations in the past, we ask that you consider designating 10 percent of your donation to help fund GiveWell’s operations.
How should donors give? Conference call to discuss recommendations

We’re holding a conference call on Tuesday, December 4, at 12pm ET/9am PT to discuss our latest recommendations and to answer any questions you have. Sign up here to join the call.

Additional details

Below, we provide:

Our research process in 2018

We plan to summarize all of the research we completed this year in a future post as part of our annual review process. A major focus of 2018 was improving our recommendations in future years, in particular through our work on GiveWell Incubation Grants and completing intervention reports on promising programs.

Below, we highlight the key research that led to our current charity recommendations. This page describes our general process for conducting research.

  • Following existing top charities. We followed the progress and plans of each of our 2017 top charities. We had several conversations with each organization and reviewed documents they shared with us. We published updated reviews of each of our top charities. Key information from this work is available in the following locations:
    • Our page summarizing changes at each of our top charities and standouts in 2018.
    • Our workbook with each charity’s funding needs and our estimates of the cost-effectiveness of filling each need.
    • Our full reviews for each charity are linked from this page.
  • Staying up to date on the research on the interventions implemented by our top charities. Details on some of what we learned in the section below.
  • Making extensive updates to our cost-effectiveness model and publishing 14 updates to the model over the course of the year. In addition to updating our cost-effectiveness model with information from the intervention research described above, we added a “country selection” tab to our cost-effectiveness analysis (so that users can toggle between overall and country-specific cost-effectiveness estimates); an “inclusion/exclusion” tab, which lists different items that we considered whether or not to account for in our cost-effectiveness analysis; and we explicitly modeled factors that could lead to wastage (charities failing to use the funds they receive to implement their programs effectively).
  • Completing a review of Zusha! We completed our review of the Georgetown University Initiative on Innovation, Development, and Evaluation—Zusha! Road Safety Campaign and determined that it did not meet all of our criteria to be a top charity. We named Zusha! a standout charity.
Major updates from the last 12 months

Below, we summarize major updates across our recommended charities over the past year. For detailed information on what changed at each of our top and standout charities, see this page.

  • We removed Evidence Action’s No Lean Season from our top charity list. At the end of 2017, we named No Lean Season, a program that provides loans to support seasonal migration in Bangladesh, as one of GiveWell’s top charities. This year, we updated our assessment of No Lean Season based on preliminary results we received from a 2017 study of the program. These results suggested the program did not successfully induce migration in the 2017 lean season. Taking this new information into account alongside previous studies of the program, we and Evidence Action no longer believe No Lean Season meets our top charity criteria. We provide more details on this decision in this blog post.
  • We received better information about Sightsavers’ deworming program. In previous years, we had limited information from Sightsavers documenting how it knew that its deworming programs were effectively reaching their intended beneficiaries. This year, Sightsavers shared significantly more monitoring information with us. This additional information substantially increased our confidence in Sightsavers’ deworming program. This spreadsheet shows the monitoring we received from Sightsavers in 2018.
  • We reviewed new research on the priority programs implemented by our top charities and updated our views and cost-effectiveness analyses accordingly. Examples of such updates include:
Recommended allocation of funding for Good Ventures and top charities’ remaining room for more funding Allocation recommended to Good Ventures

Good Ventures is a large foundation with which GiveWell works closely; it has been a major supporter of GiveWell’s top charities since 2011. Each year, we provide recommendations to Good Ventures regarding how we believe it can most effectively allocate its grants to GiveWell’s recommended charities, in terms of the total amount donated (within the constraints of Good Ventures’ planning, based in part on the Open Philanthropy Project’s recommendations on how to allocate funding across time and across cause areas) as well as the distribution between recipient charities.

Because Good Ventures is a major funder that we expect to follow our recommendations, we think it’s important for other donors to take its actions into account; we also want to be transparent about the research that leads us to make our recommendations to Good Ventures. That said, Good Ventures has not finalized its plans for the year and may give differently from what we’ve recommended. We think it’s unlikely that any differences would have major implications for our bottom-line recommendations for other donors.

This year, GiveWell recommended that Good Ventures grant $64.0 million to our recommended charities, allocated as shown in the table below.

Charity Recommended allocation from Good Ventures Remaining room for more funding1This column displays our top charities’ remaining room for more funding, or the amount we believe they can use effectively, for the next three years (2019-2021), after accounting for the $64.0 million we’ve recommended that Good Ventures give (our recommendation won’t necessarily be followed, but we think it’s unlikely that differences will be large enough to affect our bottom-line recommendation to donors) and an additional $1.1 million from GiveWell’s discretionary funding. jQuery("#footnote_plugin_tooltip_1").tooltip({ tip: "#footnote_plugin_tooltip_text_1", tipClass: "footnote_tooltip", effect: "fade", fadeOutSpeed: 100, predelay: 400, position: "top right", relative: true, offset: [10, 10] }); Malaria Consortium (SMC program) $26.6 million $43.9 million Evidence Action (Deworm the World Initiative) $10.4 million $27.0 million Sightsavers (deworming program) $9.7 million $1.6 million Helen Keller International (VAS program) $6.5 million $20.6 million Against Malaria Foundation $2.5 million $72.5 million Schistosomiasis Control Initiative $2.5 million $16.9 million The END Fund (deworming program) $2.5 million $45.8 million GiveDirectly $2.5 million >$100 million Standout charities $800,000 (combined)

We discuss our process for making our recommendation to Good Ventures in detail in this blog post.

Allocation of GiveWell discretionary funds

As part of reviewing our top charities’ funding gaps to make a recommendation to Good Ventures, we also decided how to allocate the $1.1 million in discretionary funding we currently hold. The latter comes from donors who chose to donate to “Grants to recommended charities at GiveWell’s discretion” in recent months. We decided to allocate this funding to Malaria Consortium’s seasonal malaria chemoprevention program, due to how large and cost-effective we believe Malaria Consortium’s funding gap is.

Top charities’ remaining room for more funding

Although we are expecting to direct a significant amount of funding to our top charities ($65.1 million between Good Ventures and our discretionary funding), we believe that nearly all of our top charities could productively absorb considerably more funding than we expect them to receive from Good Ventures, our discretionary funding, and additional donations we direct based on our recommendation. This spreadsheet lists all of our top charities’ funding needs; rows 70-79 show total funding gaps by charity.

Our recommendation for donors The bottom line
  • We recommend that donors choose the option to support “Grants to recommended charities at GiveWell’s discretion” on our donate forms. We grant these funds quarterly to the GiveWell top charity or top charities where we believe they can do the most good. We take into account charities’ funding needs and donations they have received from other sources when deciding where to grant discretionary funds. (The principles we outline in this post are indicative of how we will make decisions on what to fund.) We then make these grants to the highest-value funding opportunities we see among our recommended charities. This page lists discretionary grants we have made since 2014.
  • If you prefer to give to a specific charity, we believe that all of our top charities are outstanding and will use additional funding effectively. See below for information that may be helpful in deciding between charities we recommend.
  • If we had additional funds to allocate, the most likely recipient would be Malaria Consortium to scale up its work providing seasonal malaria chemoprevention.
Comparing our top charities

If you’re interested in donating to a specific top charity or charities, the following information may be helpful as you compare the options on our list. The table summarizes key facts about our top charities; column headings are defined below.

Note: the cost-effectiveness estimates we present in this post differ from those in our published cost-effectiveness analysis for a number of reasons.2“The cost-effectiveness estimates in this sheet, which we used to inform our recommended allocation differ from those in our published cost-effectiveness analysis because (1) we apply a number of adjustments to incorporate additional information (2) we apply different weightings to each program (which affects the weighted average of cost-effectiveness).” Source: Giving Season 2018 – Allocation (public), “Cost-effectiveness results” tab, row 17. Additional details at the link. jQuery("#footnote_plugin_tooltip_2").tooltip({ tip: "#footnote_plugin_tooltip_text_2", tipClass: "footnote_tooltip", effect: "fade", fadeOutSpeed: 100, predelay: 400, position: "top right", relative: true, offset: [10, 10] });

Organization Modeled cost-effectiveness (relative to cash transfers) at the present margin3For sources on the estimates included in this table, see this spreadsheet, “Cost-effectiveness results” tab. The estimates presented here differ from the estimates presented in our recommendation to Good Ventures because they estimate cost-effectiveness on the margin, if Good Ventures were to follow our recommendations. jQuery("#footnote_plugin_tooltip_3").tooltip({ tip: "#footnote_plugin_tooltip_text_3", tipClass: "footnote_tooltip", effect: "fade", fadeOutSpeed: 100, predelay: 400, position: "top right", relative: true, offset: [10, 10] }); Primary benefits of the intervention Quality of the organization’s communication Ongoing monitoring and likelihood of detecting future problems Malaria Consortium (SMC program) 8.8 Averting deaths of children under 5 Strong Strong Evidence Action (Deworm the World Initiative) See footnote4At the margin, we expect additional funding to Deworm the World Initiative to support its programs in Pakistan and Nigeria in 2021 as well as Deworm the World’s general reserves. We think these are broadly good uses of funds, but our cost-effectiveness model is not currently built to meaningfully model the cost-effectiveness of reserves. In the absence of more information, we would guess that additional funding to Deworm the World would be roughly in the range of our estimate for Deworm the World’s overall organizational cost-effectiveness (~15x as cost-effective as cash transfers), but we have not analyzed the details of additional spending at the current margin enough to be confident in that estimate. However, if Good Ventures generally follows our recommended allocation, we expect that Deworm the World will have sufficient funding to continue its most time-sensitive work and we can decide whether to fund other marginal opportunities at a later date. jQuery("#footnote_plugin_tooltip_4").tooltip({ tip: "#footnote_plugin_tooltip_text_4", tipClass: "footnote_tooltip", effect: "fade", fadeOutSpeed: 100, predelay: 400, position: "top right", relative: true, offset: [10, 10] }); Possibly increasing income in adulthood Strong Strong Helen Keller International (VAS program) 6.4 Averting deaths of children under 5 Strong Moderate Against Malaria Foundation 7.3 Averting deaths Moderate Moderate Schistosomiasis Control Initiative 8.3 Possibly increasing income in adulthood Moderate Relatively weak Sightsavers (deworming program) See footnote5We do not have a strong sense of the cost-effectiveness of additional funds to Sightsavers at the current margin. Our cost-effectiveness estimate of Sightsavers’ remaining funding gap is 15.4x as cost-effective as cash transfers, but this fails to capture a number of features particular to the program Sightsavers would fund on the margin. We would guess that the value of marginal funding to Sightsavers is roughly in the range of our overall estimate for Sightsavers of ~12x as cost-effective as cash transfers.

One major reason for our uncertainty follows. As discussed here, Sightsavers’ prioritization of how to spend additional funds differed substantially from what would be implied by our cost-effectiveness analysis, but we think that this discrepancy may largely be due to factors that our model does not capture or ways our model may be inaccurate; therefore, it is difficult to rely on our model to assess the cost-effectiveness of specific remaining country funding gaps.

jQuery("#footnote_plugin_tooltip_5").tooltip({ tip: "#footnote_plugin_tooltip_text_5", tipClass: "footnote_tooltip", effect: "fade", fadeOutSpeed: 100, predelay: 400, position: "top right", relative: true, offset: [10, 10] }); Possibly increasing income in adulthood Moderate Moderate The END Fund (deworming program) 5.4 Possibly increasing income in adulthood Moderate Relatively weak GiveDirectly 1 Immediately increasing income and assets Strong Strong

Definitions of column headings follow:

  • Estimated cost-effectiveness (relative to cash transfers) at the present margin. We recommended that Good Ventures give $64.0 million to our top and standout charities, prioritizing the funding gaps that we believe are most cost-effective. The table above shows our estimates for the cost-effectiveness of additional donations to each charity, after accounting for the $64.0 million we’ve recommended that Good Ventures give (our recommendation won’t necessarily be followed, but we think it’s unlikely that differences will be large enough to affect our bottom-line recommendation to donors).
  • Primary benefits of the intervention. This column describes the major benefit we see to supporting a charity implementing this intervention.
  • Quality of the organization’s communication. In most cases, we have have spent dozens or hundreds of hours interacting with our top charities. Here, we share our subjective impression of how well each organization has communicated with us. Our assessment of the quality of a charity’s communications is driven by whether we have been able to resolve our questions—particularly our less straightforward questions—about the organization’s activities, impact, and plans; how much time and effort was required to resolve those questions; how often the charity has sent us information that we later learned is inaccurate; and how direct we believe the charity is in acknowledging their weaknesses and mistakes.

    The organizations that stand out for high-quality communications are those that have most thoughtfully and completely answered our questions; brought problems with the program to our attention; and communicated clearly with us about timelines for providing additional information. High-quality communications reduce the time that we need to spend answering each question and therefore allow us to gain a greater degree of confidence in an organization. More importantly, our communication with an organization is one of the few ways that we can directly observe an organization’s general competence and thoughtfulness, so we see this as a proxy for unobserved ways in which the organization’s staff affect the impact of the program.

  • Ongoing monitoring and likelihood of detecting future problems. The quality of the monitoring we have received from our top charities varies widely, although we believe it stands out from that of the majority of charities. Ideally, the monitoring data charities collect would be representative of the program overall (by sampling all or a random selection of locations or other relevant units); would measure the outcomes of greatest interest for understanding the impact of the program; and would use methods that result in a low risk of bias or fraud in the results. In assessing the quality of a charity’s monitoring, we ask ourselves, “how likely do we believe it is that there are substantive problems with the program that are not detected by this monitoring?”

    Monitoring results inform our cost-effectiveness analyses directly. In addition, we believe that the quality of an organization’s monitoring give us information that is not fully captured in these analyses. Similar to how we view communication quality, we believe that understanding how an organization designs and implements monitoring is a opportunity to observe its general competence and degree of openness to learning and program improvement.

Other key factors donors might want to consider when making their giving decision:

  • As shown in the table above, our top charities implement programs with different primary benefits: some primarily avert deaths; others primarily increase incomes or consumption. Donors’ preference for programs that avert deaths relative to those that increase incomes (or how one weighs the value of averting a death at a given cost or increasing incomes a certain amount at a given cost) depends on their moral values. The cost-effectiveness estimates shown above rely on the GiveWell research team’s moral values. For more on how we (and others) compare the “good” accomplished by different programs, see this blog post. Donors may make a copy of our cost-effectiveness model to input their own moral weights and see how that impacts the relative cost-effectiveness of our top charities.
  • The table above shows cost-effectiveness estimates for different charities. We put significant weight on cost-effectiveness figures, but they have limitations. Read more about how we use cost-effectiveness estimates in this blog post.
  • Ultimately, donors are faced with a decision about how to weigh estimated cost-effectiveness (incorporating their moral values) against additional information about an organization that we have not explicitly modeled. We’ve written about this choice in the context of choosing between GiveDirectly and SCI in this 2016 blog post.
  • Four of our top charities implement deworming programs. We recommend the provision of deworming treatments to children for its possible impact on recipients’ incomes in adulthood. We work in an expected value framework; in other words, we’re willing to support a higher-risk intervention if it has the potential for higher impact (more in this post about our worldview). Deworming is such an intervention. We believe that deworming may have very little impact, but that risk is outweighed by the possibility that it has very large impact, and it’s very cheap to implement. We describe our assessment of deworming in this summary blog post as well as this detailed post. Donors who have lower risk tolerance may choose not to support charities implementing deworming programs.
  • The table above lists our views on the quality of each of our top charities’ monitoring. This 2016 blog post describes our view of AMF’s monitoring and may give donors more insight into how we think about monitoring quality.
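
To make the moral-weights and expected-value points in the list above concrete, here is a minimal, hypothetical sketch in Python. Every figure, program name, and the structure of the calculation below is an illustrative assumption, not GiveWell’s actual cost-effectiveness model; it only shows how donor-chosen weights and a probability-of-impact adjustment can change how programs compare to a cash-transfer baseline.

```python
# Hypothetical sketch only: illustrative figures, not GiveWell's model.

# A donor's moral weights, in arbitrary "units of value".
value_per_death_averted = 100.0     # value assigned to averting one death
value_per_income_doubling = 1.0     # value of doubling one person's consumption for a year

# Baseline: cash transfers, assumed to produce income gains directly.
doublings_per_dollar_cash = 0.003                    # made-up figure
value_per_dollar_cash = doublings_per_dollar_cash * value_per_income_doubling

# Hypothetical program A: primarily averts deaths.
cost_per_death_averted = 3500.0                      # made-up figure, in dollars
value_per_dollar_a = value_per_death_averted / cost_per_death_averted

# Hypothetical program B: raises incomes, but the evidence is uncertain.
# Expected-value framing: a small probability of a large effect still counts.
prob_effect_is_real = 0.2
doublings_per_dollar_if_real = 0.2                   # made-up figure
value_per_dollar_b = (prob_effect_is_real * doublings_per_dollar_if_real
                      * value_per_income_doubling)

print(f"Program A: {value_per_dollar_a / value_per_dollar_cash:.1f}x cash")
print(f"Program B: {value_per_dollar_b / value_per_dollar_cash:.1f}x cash")
```

Raising value_per_death_averted relative to value_per_income_doubling, or lowering prob_effect_is_real for a more risk-averse donor, changes which program looks better per dollar; that is the sense in which the comparison depends on moral values and risk tolerance.
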
Giving to support GiveWell’s operations

GiveWell is currently in a financially stable position. Over the next few years, we are planning to significantly increase our spending, driven by hiring additional research and outreach staff. We project that our revenue will approximately equal our expenses over the next few years; however, this projection includes an expectation of growth in the level of operating support we receive.

We retain our “excess assets policy” to ensure that if we fundraise for our own operations beyond a certain level, we will grant the excess to our recommended charities. In June of 2018, we applied our excess assets policy and designated $1.75 million in unrestricted funding for grants to recommended charities.
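
As a rough illustration of how a threshold rule like this might work, here is a hypothetical sketch. The way the threshold is defined here (a reserve expressed in months of operating expenses) and all figures are assumptions for the example, not the actual terms of GiveWell’s policy.

```python
# Hypothetical sketch only: the threshold definition and figures are assumptions,
# not the actual terms of GiveWell's excess assets policy.

def excess_to_grant(unrestricted_assets: float,
                    annual_operating_expenses: float,
                    months_of_reserve_allowed: float = 12.0) -> float:
    """Amount (in $ millions) above the allowed reserve, to be granted to recommended charities."""
    allowed_reserve = annual_operating_expenses * months_of_reserve_allowed / 12.0
    return max(0.0, unrestricted_assets - allowed_reserve)

# Made-up example: $9.0m in unrestricted assets against a $7.5m annual budget
# and a 12-month allowed reserve would imply granting out $1.5m.
print(excess_to_grant(unrestricted_assets=9.0, annual_operating_expenses=7.5))
```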

We cap the amount of operating support we ask Good Ventures to provide to GiveWell at 20 percent of our operating expenses, for reasons described here. We ask that donors who use GiveWell’s research consider the following:

  • If you have supported GiveWell’s operations in the past, we ask that you maintain your support. Having a strong base of consistent operations support allows us to make valuable hires when opportunities arise and to minimize staff time spent on fundraising for our operating expenses.
  • If you have not supported GiveWell’s operations in the past, we ask that you designate 10 percent of your donation to help fund GiveWell’s operations. This can be done by selecting the option to “Add 10% to help fund GiveWell’s operations” on our credit card donation form or letting us know how you would like to designate your funding when giving another way.
Questions?

We’re happy to answer questions in the comments below. Please also feel free to reach out directly with any questions.

This post was written by Andrew Martin, Catherine Hollander, Elie Hassenfeld, James Snowden, and Josh Rosenberg.

Notes

  1. This column displays our top charities’ remaining room for more funding, or the amount we believe they can use effectively, for the next three years (2019-2021), after accounting for the $64.0 million we’ve recommended that Good Ventures give (our recommendation won’t necessarily be followed, but we think it’s unlikely that differences will be large enough to affect our bottom-line recommendation to donors) and an additional $1.1 million from GiveWell’s discretionary funding.
  2. “The cost-effectiveness estimates in this sheet, which we used to inform our recommended allocation differ from those in our published cost-effectiveness analysis because (1) we apply a number of adjustments to incorporate additional information (2) we apply different weightings to each program (which affects the weighted average of cost-effectiveness).” Source: Giving Season 2018 – Allocation (public), “Cost-effectiveness results” tab, row 17. Additional details at the link.
  3. For sources on the estimates included in this table, see this spreadsheet, “Cost-effectiveness results” tab. The estimates presented here differ from the estimates presented in our recommendation to Good Ventures because they estimate cost-effectiveness on the margin, if Good Ventures were to follow our recommendations.
  4. At the margin, we expect additional funding to Deworm the World Initiative to support its programs in Pakistan and Nigeria in 2021 as well as Deworm the World’s general reserves. We think these are broadly good uses of funds, but our cost-effectiveness model is not currently built to meaningfully model the cost-effectiveness of reserves. In the absence of more information, we would guess that additional funding to Deworm the World would be roughly in the range of our estimate for Deworm the World’s overall organizational cost-effectiveness (~15x as cost-effective as cash transfers), but we have not analyzed the details of additional spending at the current margin enough to be confident in that estimate. However, if Good Ventures generally follows our recommended allocation, we expect that Deworm the World will have sufficient funding to continue its most time-sensitive work and we can decide whether to fund other marginal opportunities at a later date.
  5. We do not have a strong sense of the cost-effectiveness of additional funds to Sightsavers at the current margin. Our cost-effectiveness estimate of Sightsavers’ remaining funding gap is 15.4x as cost-effective as cash transfers, but this fails to capture a number of features particular to the program Sightsavers would fund on the margin. We would guess that the value of marginal funding to Sightsavers is roughly in the range of our overall estimate for Sightsavers of ~12x as cost-effective as cash transfers. One major reason for our uncertainty: as discussed here, Sightsavers’ prioritization of how to spend additional funds differed substantially from what would be implied by our cost-effectiveness analysis, but we think that this discrepancy may largely be due to factors that our model does not capture or ways our model may be inaccurate; therefore, it is difficult to rely on our model to assess the cost-effectiveness of specific remaining country funding gaps.


The post Our updated top charities for giving season 2018 appeared first on The GiveWell Blog.

Catherine Hollander

Our recommendation to Good Ventures

5 years 10 months ago

Today, we announce our list of top charities for the 2018 giving season. We expect to direct over $100 million to the eight charities on our list as a result of our recommendation.

Good Ventures, a large foundation with which we work closely, is the largest single funder of our top charities. We make recommendations to Good Ventures each year for how much funding to provide to our top charities and how to allocate that funding among them. As this funding is significant, we think it’s important for other donors to take into account the recommendation we make to Good Ventures.

This blog post explains in detail how we decide what to recommend to Good Ventures and why; we want to be transparent about the research that leads us to our recommendations to Good Ventures. If you’re interested in a bottom-line recommendation for where to donate this year, please view our post with recommendations for non-Good Ventures donors.

Note that Good Ventures has not finalized its plans for the year and may give differently from what we’ve recommended. We think it’s unlikely that any differences would have major implications for our bottom-line recommendations for other donors.

Summary

In this post, we discuss:

  • How we decided how much funding to recommend to Good Ventures
  • Our recommended allocation for Good Ventures, and the principles we followed in arriving at it
  • Our process for determining our recommended allocation

How we decided how much funding to recommend to Good Ventures

This year, GiveWell recommended that Good Ventures grant $64.0 million to our top charities and standout charities. The amount Good Ventures gives to our top charities is based in part on how the Open Philanthropy Project plans to allocate funding across time and across cause areas. (Read more about our relationships with Good Ventures and the Open Philanthropy Project here.)

The Open Philanthropy Project currently plans to allocate around 10% of its total available capital to “straightforward charity,” which it currently allocates to global health and development causes based on GiveWell’s recommendations. This 10% allocation includes two “buckets”: a fixed bucket equal to 5% of total giving each year, and a “flexible” bucket of another 5%, which can be spent down quickly (over a few years) or slowly (over many years). GiveWell’s recommendation that Good Ventures grant $64.0 million this year puts the flexible bucket on track to be spent down within the next 14 years.

We’re recommending $64.0 million this year to balance two considerations:

  • As the world gets richer, giving opportunities in global health and development generally seem likely to get worse over time. This implies that giving now has a larger impact.
  • In the coming years, GiveWell may find opportunities that are considerably more cost-effective than our current recommendations (e.g., among policy advocacy organizations). This would make spending in future years have a larger impact.
Our recommended allocation for Good Ventures

The table below summarizes how much funding we recommend Good Ventures grant to each of our top charities, along with our explicit cost-effectiveness estimate for each organization and organizational factors we don’t model explicitly that affect our assessment of impact.

As always, cost-effectiveness figures should be interpreted with caution.

Note: the cost-effectiveness estimates we present in this post differ from those in our published cost-effectiveness analysis for a number of reasons. [1]

Charity | Modeled cost-effectiveness (relative to cash transfers) [2] | Organizational factors we don’t model explicitly [3] | Recommended allocation
Malaria Consortium (SMC program) | 8.8 | Very strong | $26.6 million
Evidence Action (Deworm the World Initiative) | 14.6 | Very strong | $10.4 million
Sightsavers (deworming program) | 12.0 | Moderate | $9.7 million
Helen Keller International (VAS program) | 7.0 | Strong | $6.5 million
Against Malaria Foundation | 7.3 | Moderate | $2.5 million
Schistosomiasis Control Initiative | 8.3 | Relatively weak | $2.5 million
The END Fund (deworming program) | 5.4 | Relatively weak | $2.5 million
GiveDirectly | 1 | Very strong | $2.5 million
Standout charities | – | – | $800,000 (combined)
Sum | – | – | $64.0 million

The underlying objective of GiveWell’s allocation is to direct as much money as possible to the most cost-effective giving opportunities over the long run. (We aim to optimize cost-effectiveness, as defined broadly—we recognize the limitations of our cost-effectiveness model and consider additional factors in our assessment.) We relied on modeled cost-effectiveness figures as well as the organizational factors described above to inform our recommendations.

To meet this objective, our allocation this year was driven by the principles described below.

Principles we followed in arriving at this allocation

Principle 1: Put significant weight on our cost-effectiveness estimates. Our cost-effectiveness estimates incorporate a substantial amount of information relevant to our decisionmaking. While we recognize the high levels of uncertainty around our cost-effectiveness estimates, they are the single largest factor we take into consideration. More on how we use cost-effectiveness to inform our decisions here.

Principle 2: Consider additional information about an organization that we have not explicitly modeled. While our cost-effectiveness estimates are the best tool we know of to estimate the amount of good a charity accomplishes, we believe it’s infeasible to try to incorporate all relevant considerations into a single quantitative estimate. Subjective assessments that aren’t included in our cost-effectiveness calculations but affect how much impact a charity has include:

  • A charity’s ability to make good decisions on how to prioritize. Our top charities often take factors that aren’t included in our cost-effectiveness estimates into account when deciding how to spend their limited budgets. We use our subjective assessment of how well charities answer our questions about their activities as a proxy for how well they make these decisions.
  • Upside. Our top charities often perform activities that go beyond the scope of their direct work, such as conducting and sharing research that influences others, or raising funds for their programs from funders that would otherwise give to less cost-effective programs.

For the most part, we do not have the opportunity to directly observe these factors. Our subjective assessments of these factors are based on the observed but unmodeled factors that we discuss in this post: the quality of the organization’s communication and ongoing monitoring, and the likelihood of detecting future problems.

Principle 3: Assess charities’ funding gaps at the margin, i.e., where they would spend additional funding, where possible. We try to understand how charities’ funding would be spent among different programs or locations. Our cost-effectiveness estimates for charities’ projects often vary substantially (depending, for example, on the underlying disease burden in a particular country the charity plans to work in). Where possible, we compare our best guess of how funding would be used on the margin, rather than on average. As part of assessing charities’ marginal cost-effectiveness, we intend to capture whether there are diminishing returns to their receiving additional funding.
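
As a rough, hypothetical illustration of the distinction between cost-effectiveness averaged over a charity’s whole funding gap and cost-effectiveness at the margin, consider a charity whose remaining gap spans locations with different disease burdens. The locations, dollar figures, and the fill-best-gaps-first rule below are assumptions made up for this sketch, not GiveWell estimates.

```python
# Hypothetical sketch only: illustrative figures, not GiveWell estimates.

# (location, remaining funding gap in $ millions, cost-effectiveness vs. cash transfers)
funding_gaps = [
    ("Country A (high burden)",   10.0, 18.0),
    ("Country B (medium burden)",  8.0, 10.0),
    ("Country C (low burden)",     6.0,  4.0),
]

# Weighted-average cost-effectiveness over the whole remaining gap.
total_gap = sum(size for _, size, _ in funding_gaps)
weighted_avg = sum(size * ce for _, size, ce in funding_gaps) / total_gap

# Marginal cost-effectiveness: assume the charity fills its most cost-effective
# gaps first, so the next dollar lands in the best gap that existing funding
# does not already cover.
existing_funding = 12.0  # $ millions the charity already expects to receive
remaining = existing_funding
marginal_ce = 0.0        # if all gaps were already covered, extra funds have no modeled use
for _, size, ce in sorted(funding_gaps, key=lambda gap: -gap[2]):
    if remaining >= size:
        remaining -= size      # this gap is already covered
    else:
        marginal_ce = ce       # additional dollars would land here
        break

print(f"Weighted average over the whole gap: {weighted_avg:.1f}x cash")
print(f"Cost-effectiveness at the margin:    {marginal_ce:.1f}x cash")
```

On these made-up numbers the marginal dollar (10.0x cash) is less cost-effective than the whole-gap average (11.8x cash), which is the direction to expect when the best gaps get filled first, and is why we distinguish the two kinds of estimates.
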

Principle 4: Default towards not imposing restrictions on charity spending. While we rely on our expectation of how charities would prioritize funding gaps to estimate marginal cost-effectiveness, we do not plan to impose any restrictions on how the funding is actually used in practice. (There is one exception to this: in cases where a top charity implements multiple global health and development programs and our recommendation is restricted to one of those programs, we do restrict funding to the priority program we recommend, such as deworming or vitamin A supplementation.) We believe our top charities are often better placed to make decisions about which projects to fund than we are, and we want to ensure maximum flexibility for them to do so.

Principle 5: Fund on a three-year horizon, unless we are particularly uncertain whether we will want to continue recommending a program in the future. Our top charities have communicated to us that there are often substantial benefits to knowing that funding for a program is secure for the future. As a general rule, we aim to provide funding for three years for each program we choose to fund. The exception is when we are more uncertain whether we would want to renew funding for a third year (e.g., because our estimated cost-effectiveness of a program is close to that of the marginal program we decided not to fund).

Principle 6: Ensure charities are incentivized to engage with our process. We recognize that our charity review process requires deep engagement from senior members of charities’ staff. We want to ensure that charities are incentivized to keep engaging with our process. To this end, since 2016, we have recommended that Good Ventures provide a minimum “incentive grant” to top charities ($2.5 million) and standout charities ($100,000).

We hope that providing significant incentive grants increases the chances that charities are motivated to compete for a GiveWell recommendation. We fear that without ensuring that every top charity or standout receives a substantial amount of funding, some charities might be deterred from applying for a GiveWell recommendation or from making changes to their programs to potentially become top charities.

Our process for determining our recommended allocation for Good Ventures

In line with the principles above, we used the following process to arrive at our recommended allocation for Good Ventures:

  1. We recommended that Good Ventures provide each charity with an incentive grant ($2.5 million per top charity and $100,000 per standout charity).
  2. We identified the most cost-effective gap we were unable to entirely fill with the $64.0 million we recommended to Good Ventures (noting again that Good Ventures has not finalized its plans for the year and may give differently from what we’ve recommended): Malaria Consortium’s seasonal malaria chemoprevention program in Nigeria, Burkina Faso, and Chad. Our cost-effectiveness analysis suggests this gap is about 8.8x as cost-effective as cash transfers, and that Malaria Consortium could absorb about $70 million in additional funding to support this work. We have a high opinion of Malaria Consortium as an organization, and this qualitative assessment supports our consideration of this gap as highly cost-effective to fill.

    Our best guess is there are limited diminishing marginal returns over the interval of this funding gap.

  3. Remaining funding gaps were compared to the Malaria Consortium funding gap in Nigeria, Burkina Faso, and Chad based on (i) their estimated cost-effectiveness, (ii) our subjective assessment of the organization’s quality, and (iii) particular arguments relevant to that funding gap but not captured elsewhere in our analysis (e.g., whether our decision to not fund a particular gap would be disproportionately disruptive to an organization’s activities).

This spreadsheet lists all of our top charities’ funding needs; rows 70-79 show total funding gaps by charity. We relied on this list of funding needs in determining our recommendation to Good Ventures, as well as in assessing how much additional funding our top charities can absorb after taking that recommendation into account.

In brief, we concluded that some charities’ funding gaps compared favorably to Malaria Consortium’s seasonal malaria chemoprevention gap, which led us to recommend a total of ~$6-10 million in funding to each of Deworm the World Initiative, Sightsavers’ deworming program, and Helen Keller International’s vitamin A supplementation program. We did not see compelling reasons to recommend funding to the other top charities ahead of Malaria Consortium’s funding gap, so we only recommended that those charities receive the $2.5 million incentive grant.
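
The process above can be read as a benchmark-and-fill rule. The sketch below is a deliberately simplified, hypothetical version of it: the charity names and gap sizes are stand-ins, and the purely greedy loop ignores the qualitative and gap-specific arguments the actual recommendation weighed. Only the $64.0 million total and the $2.5 million incentive grant are taken from the post.

```python
# Hypothetical sketch only: a simplified benchmark-and-fill allocation.
# Charity names and gap sizes are made up; only the totals come from the post.

TOTAL_BUDGET = 64.0       # $ millions recommended in total
INCENTIVE_GRANT = 2.5     # $ millions minimum per top charity

# (charity, estimated cost-effectiveness vs. cash, remaining funding gap in $ millions)
gaps = [
    ("Charity A", 14.6,  8.0),
    ("Charity B", 12.0,  7.0),
    ("Charity C",  8.8, 70.0),   # large, cost-effective gap that acts as the benchmark
    ("Charity D",  7.0,  4.0),
    ("Charity E",  5.4,  3.0),
]

# Step 1: every top charity gets the incentive grant.
allocation = {name: INCENTIVE_GRANT for name, _, _ in gaps}
budget = TOTAL_BUDGET - INCENTIVE_GRANT * len(gaps)

# Steps 2-3: fill gaps in descending order of estimated cost-effectiveness
# until the budget runs out; the last, partially filled gap plays the role
# of the benchmark that other gaps had to compare favorably against.
for name, _, gap in sorted(gaps, key=lambda g: -g[1]):
    extra = min(gap, budget)
    allocation[name] += extra
    budget -= extra
    if budget <= 0:
        break

for name, amount in allocation.items():
    print(f"{name}: ${amount:.1f} million")
```

In this toy version, the benchmark gap absorbs whatever remains after the smaller, more cost-effective gaps are filled, which mirrors the role Malaria Consortium’s seasonal malaria chemoprevention gap played in the allocation described above.
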

We explain our recommended allocation to Good Ventures for each of our top charities in more detail on this page.

Questions?

We’re happy to answer questions in the comments below. Please also feel free to reach out directly with any questions.

This post was written by Andrew Martin, Catherine Hollander, Elie Hassenfeld, James Snowden, and Josh Rosenberg.

Notes

  1. “The cost-effectiveness estimates in this sheet, which we used to inform our recommended allocation differ from those in our published cost-effectiveness analysis because (1) we apply a number of adjustments to incorporate additional information (2) we apply different weightings to each program (which affects the weighted average of cost-effectiveness).” Source: Giving Season 2018 – Allocation (public), “Cost-effectiveness results” tab, row 17. Additional details at the link.
  2. “We typically won’t move forward with a charity in our process if it appears that it won’t meet the threshold of at least 2-3x as cost-effective as cash transfers. We think cash transfers are a reasonable baseline to use due to the intuitive argument that if you’re going to help someone with Program X, Program X should be more cost-effective than just giving someone cash to buy that which they need most.” June 1, 2017, GiveWell blog, How GiveWell uses cost-effectiveness analyses. The estimates presented here differ from the estimates presented in our recommendation to donors because they estimate weighted average cost-effectiveness over the whole funding gap, rather than on the margin.
  3. We take into account an organization’s strength of communication with us and the comprehensiveness of its program monitoring. We factor this into our broad assessment of the organization’s cost-effectiveness. Read more: November 26, 2018, GiveWell blog, Our updated top charities for giving season 2018.

The post Our recommendation to Good Ventures appeared first on The GiveWell Blog.

Catherine Hollander

Update on No Lean Season’s top charity status

5 years 10 months ago

At the end of 2017, we named Evidence Action's No Lean Season one of GiveWell's nine top charities. Now, GiveWell and Evidence Action agree that No Lean Season should not be a GiveWell top charity this year, and Evidence Action is not seeking additional funding to support No Lean Season's work at this time.

This post will discuss this decision in detail. In brief, we updated our assessment of No Lean Season, a program that provides loans to support seasonal migration, based on preliminary results Evidence Action began discussing with us in July from a study of the 2017 implementation of the program (hereinafter referred to as “2017 RCT”). These results suggested the program, as implemented in 2017, did not successfully induce migration. Taking this new information into account alongside previous studies of the program, we and Evidence Action do not believe No Lean Season meets our top charity criteria at this time.

Evidence Action's post on this decision is here.

GiveWell's mission is to identify and recommend charities that can most effectively use additional donations. While it may be disappointing for a top charity to be removed from our list of recommendations, we believe that adding and removing top charities from our list is an important part of our process. If our top charities list never changed, we would guess we were (a) acting too conservatively (i.e. not being open enough to adding new top charities), or (b) not being critical enough of groups once they've been added to our list (i.e. not being open enough to removing existing top charities).

We believe this decision speaks positively of Evidence Action and demonstrates our mutual commitment to updating our views based on new evidence. GiveWell has interacted with hundreds of organizations in our history, and very few have subjected their programs to a rigorous study in the way that Evidence Action did last year and, at smaller scale, in 2014. We're excited to work with a group like Evidence Action that is committed to rigorous study and openness about results.

Summary

In this post, we will discuss:

  • The history of GiveWell and No Lean Season. (More)
  • How the 2017 RCT updated our views of No Lean Season. (More)
    • What did the 2017 RCT find? (More)
    • How did we interpret the RCT results? (More)
    • What does the future of No Lean Season look like? (More)
  • Conclusion

Read More

The post Update on No Lean Season’s top charity status appeared first on The GiveWell Blog.

Catherine Hollander

Update on No Lean Season’s top charity status

5 years 10 months ago

At the end of 2017, we named Evidence Action’s No Lean Season one of GiveWell’s nine top charities. Now, GiveWell and Evidence Action agree that No Lean Season should not be a GiveWell top charity this year, and Evidence Action is not seeking additional funding to support its work at this time.

This post will discuss this decision in detail. In brief, we updated our assessment of No Lean Season, a program that provides loans to support seasonal migration, based on preliminary results Evidence Action began discussing with us in July from a study of the 2017 implementation of the program (hereinafter referred to as “2017 RCT”). These results suggested the program, as implemented in 2017, did not successfully induce migration. Taking this new information into account alongside previous studies of the program, we and Evidence Action do not believe No Lean Season meets our top charity criteria at this time.

Evidence Action’s post on this decision is here.

GiveWell’s mission is to identify and recommend charities that can most effectively use additional donations. While it may be disappointing for a top charity to be removed from our list of recommendations, we believe that adding and removing top charities from our list is an important part of our process. If our top charities list never changed, we would guess we were (a) acting too conservatively (i.e. not being open enough to adding new top charities), or (b) not being critical enough of groups once they’ve been added to our list (i.e. not being open enough to removing existing top charities).

We believe this decision speaks positively of Evidence Action and demonstrates our mutual commitment to updating our views based on new evidence. GiveWell has interacted with hundreds of organizations in our history, and very few have subjected their programs to a rigorous study in the way that Evidence Action did last year and, at smaller scale, in 2014. We’re excited to work with a group like Evidence Action that is committed to rigorous study and openness about results.

Summary

In this post, we will discuss:

  • The history of GiveWell and No Lean Season. (More)
  • How the 2017 RCT updated our views of No Lean Season. (More)
    • What did the 2017 RCT find? (More)
    • How did we interpret the RCT results? (More)
    • What does the future of No Lean Season look like? (More)
  • Conclusion
GiveWell and No Lean Season

No Lean Season provides support for low-income agricultural workers in rural Bangladesh during the time of seasonal income and food insecurity (“lean season”). The program provides small, interest-free loans to support workers’ temporary migration to seek employment. No Lean Season is implemented by RDRS Bangladesh; Evidence Action provides strategic direction, conducts program monitoring, and provides technical assistance, among other functions. Evidence Action developed No Lean Season as part of its Beta portfolio, which is focused on prototyping and scaling cost-effective programs.

GiveWell began engaging with No Lean Season as a potential top charity in 2013, when we began to explore making an Incubation Grant to support its scale-up. We saw No Lean Season as a promising program that lacked the track record to be considered for a top charity recommendation at that time. We describe our initial interest in the program in a February 2017 blog post:

We approached Evidence Action in late 2013 to express our interest in supporting the creation of new GiveWell top charities.

In March 2014, Good Ventures made a $250,000 grant to Evidence Action to support the investigation and scale-up of promising programs.

Since then, Good Ventures has made three additional grants totaling approximately $2.7 million to support the program’s scale-up.

Evidence Action continued to test and scale No Lean Season with this and other support. We decided to recommend No Lean Season as a top charity in late 2017. We based our recommendation on three randomized controlled trials (RCTs) of the program. (We generally consider RCTs to be one of the strongest types of evidence available; you can read more about why we rely on RCTs here.)

Two of the RCTs (conducted in 2008 and 2014) indicated increased migration, income, and consumption for program participants. In the third RCT, which was conducted in 2013 and has not been published, the program is considered to have failed to induce migration, potentially due to political violence that year. We discuss the RCT evidence in greater depth in our intervention report on conditional subsidies for seasonal labor migration in northern Bangladesh.

Weighing the evidence, the cost of the program, and the potential impacts, we decided No Lean Season met our criteria to be named a top charity in November 2017. We summarized our reasoning in our blog post announcing our 2017 list of top charities, and noted the risks of this recommendation:

Several randomized controlled trials (RCTs) of subsidies to increase migration provide moderately strong evidence that such an intervention increases household income and consumption during the lean season. An additional RCT is ongoing. We estimate that No Lean Season is roughly five times as cost-effective as cash transfers (see our cost-effectiveness analysis).

Evidence Action has shared some details of its plans for monitoring No Lean Season in the future, but, as many of these plans have not been fully implemented, we have seen limited results. Therefore, there is some uncertainty as to whether No Lean Season will produce the data required to give us confidence that loans are appropriately targeted and reach their intended recipients in full; that recipients are not pressured into accepting loans; and that participants successfully migrate, find work, and are not exposed to major physical and other risks while migrating.

As indicated above, No Lean Season conducted an additional RCT to evaluate its program during the 2017 lean season (approximately September to December), the preliminary results of which indicate the program failed to induce migration. With the evidence from the 2017 RCT, the case for the program’s impact and cost-effectiveness looks weaker.

Our updated perspective on No Lean Season

The 2017 RCT was a key factor in the decision to remove No Lean Season from our top charities list. Below, we discuss what the 2017 RCT found, how we interpreted its results, and what the future of No Lean Season looks like.

What did the 2017 RCT find?

The 2017 RCT was a collaboration between Evidence Action, Innovations for Poverty Action, and researchers from Yale University, the London School of Economics, and the University of California, Davis. In a preliminary analysis shared with GiveWell in September 2018, the researchers did not find evidence for a negative or positive impact on migration, and found no statistically significant impact on income and consumption.[1]

However, the implementation of the program during the 2017[2] lean season and the evaluation of it differed from previous iterations. No Lean Season operated at a larger scale in the fall of 2017 than it had previously, offering loans to 158,155 households, compared with 16,268 households in 2016. Relative to earlier versions of the program, the program in 2017 involved (a) higher-intensity delivery of the intervention (offering loans to most eligible individuals) and (b) broader eligibility requirements (the eligibility rate in 2017 was 77 percent, compared with 49 percent in 2016).[3]

At this point, neither GiveWell, No Lean Season, nor the researchers have a conclusive understanding of why the program failed to induce migration. However, No Lean Season and the researchers are exploring various hypotheses about what may explain the failure to induce migration, and they note that some suggestive evidence supports some hypotheses more than others. The researchers have posited several possibilities:

  1. The way the program was targeted in 2017 was suboptimal. The Migration Organizers, who survey households for eligibility and offer and disburse loans (more detail here under “Migration Organizers”), may have focused their efforts on the individuals who were seen as most likely to migrate, rather than on those who needed a loan to afford migration. The use of loan targets during implementation may have inadvertently incentivized this behavior.[4] If, for example, loan officers mostly made loans to people who would have migrated regardless of receiving a loan, this could explain the lack of impact on migration found in the study.
  2. The 2017 lean season was particularly bad for the program. The researchers note that severe flooding and associated implementation delays in some regions may have caused problems in 2017. The researchers plan to look more closely at the regions that experienced flooding, though they note that they don’t have the data necessary to make experimental comparisons.[5] In addition, a 2013 trial may have failed due to issues that were specific to the year of that trial, such as increased labor strikes.
  3. There exists another (currently unknown) reason why this program won’t work at scale. Conditions in Bangladesh may have changed, negative spillovers (harmful impacts for individuals who did not receive loans) may cancel out gains, or pilot villages may have been strategically picked in earlier trials.[6]

The researchers are considering all of these possibilities. After considering various possible theories as well as some non-experimental data (including administrative data and data from a special-purpose survey of Migration Organizers who worked on the program in 2017), they feel that the ‘mistargeting’ theory is the most likely explanation and the explanation most consistent with the analysis.[7]

In scenario (1), No Lean Season may be able to identify and fix the problem. In scenario (2), GiveWell will need to update our estimate of the impact of the program to take into account the fact that periodic program failures due to external factors are more likely than we previously thought. In scenario (3), the program is unlikely to be effective in the future.

How did we interpret the RCT results?

We don’t know the extent to which each of the above explanations contributed to the study not finding an effect on migration.

We used the results of the 2017 RCT to update our cost-effectiveness estimate for the program. Cost-effectiveness estimates form arguably the most important single input into our decisions about whether or not to recommend charities (more on how GiveWell uses cost-effectiveness analyses here). When we calculate a program’s cost-effectiveness, we take many different factors into account, such as the administrative and program costs and the expected impact. We also make a number of educated guesses, such as the likelihood that a program’s impact in a new country will be similar to that in a country where it has previously worked. Below, we describe the mechanism by which the 2017 RCT result was incorporated into our model and how it changed our conclusion.

Prior to this year, we formed our view of No Lean Season based on the three small-scale RCTs mentioned above (conducted in 2008, 2013, and 2014). Each of these RCTs looked at a slightly different version of the program. We believed that the ‘high-intensity’ arm of the 2014 RCT was the version most likely to resemble the program at scale. We thus used the migration rate measured in this arm of the RCT as our starting point for calculating the program’s impact.

The high-intensity arm of the 2014 RCT also had the highest measured migration rate of the three RCTs we assessed, so we wanted to give some consideration to the less-positive results found in the other two assessments. We applied a small downward adjustment to the rate of induced migration observed in the 2014 high-intensity arm in our cost-effectiveness model; this was an educated guess, based on the information we had. Our best guess was that the program would lead, in expectation, to 80% of the induced migration seen in the 2014 high-intensity arm.[8]

Now, the preliminary 2017 RCT results show no significant impact on migration rates or incomes. Because this trial was large and very recent, we updated our expectations of the impact of the program substantially, and in a negative direction. Our best guess now is that the program will lead, in expectation, to 40% of the induced migration seen in the 2014 high-intensity arm. Holding other inputs constant, this adjustment reduces our estimate of No Lean Season’s cost-effectiveness by a factor of two.
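To make the arithmetic concrete, here is a minimal sketch, in Python, of how a single external-validity adjustment flows through a simplified cost-effectiveness estimate. All of the parameter values and the cost_effectiveness helper below are hypothetical placeholders, not figures or code from our actual model; the sketch only illustrates that, holding every other input fixed, cutting the adjustment from 80% to 40% cuts the estimated cost-effectiveness in half.

# Simplified illustration with placeholder values, not figures from GiveWell's model.
def cost_effectiveness(adjustment,
                       induced_migration=0.25,   # hypothetical migration rate induced in the 2014 high-intensity arm
                       benefit_per_migrant=40.0, # hypothetical value of one induced migration episode
                       cost_per_offer=20.0):     # hypothetical program cost per loan offer
    """Benefit generated per dollar spent, under a given external-validity adjustment."""
    expected_migration = induced_migration * adjustment
    return expected_migration * benefit_per_migrant / cost_per_offer

before_update = cost_effectiveness(adjustment=0.8)  # best guess before the 2017 RCT
after_update = cost_effectiveness(adjustment=0.4)   # best guess after the 2017 RCT
print(before_update / after_update)                 # prints 2.0: the estimate falls by a factor of two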

This reduced cost-effectiveness, along with our updated qualitative picture of No Lean Season’s evidence of effectiveness, led to the decision to remove No Lean Season from our top charities list.

What does the future of No Lean Season look like?

Although Evidence Action is not raising more funding for No Lean Season at this time, the program has over two years’ worth of remaining funding. We understand that the organization has made changes to the program design in 2018 based on emerging interpretations of the 2017 results, and has collected additional data to evaluate some of the hypotheses that may explain those results (including, for example, a survey of Migration Organizers who worked on the 2017 program). Evidence Action plans to subject the 2018 implementation round to an additional ‘RCT-at-scale,’ with a particular focus on reassessing the program’s effects on migration, income, and consumption, as well as potential effects at migration destinations. It will continue to explore what may have caused the issue in the 2017 program at scale, and to see whether it can find a solution. If it does, we’ll want to reassess the evidence and the costs to determine whether No Lean Season meets our bar for top charity status. Evidence Action believes we should have the necessary information to reassess starting in mid-2019, based on the results of the RCT conducted during the 2018 lean season and other analyses it performs.

Conclusion

This is the second time since 2011 that we have removed a top charity from our list (prior to 2011, our top charities list looked fairly different from today’s; we made a big-picture shift in our priorities that year that led to our more recent lists). The previous removal occurred in 2013, when we took the Against Malaria Foundation (AMF) off of our list because we didn’t believe it could absorb additional funding effectively in the near term. AMF was reinstated as a top charity in 2014.

The decision to remove a top charity is never easy. But continuously evaluating GiveWell’s recommended charities is an important part of our work, and we take it seriously. It’s easy to talk about a commitment to evidence when the results are positive. It’s hard to maintain that commitment when the results are not. We’re excited to work with a group like Evidence Action that is committed to rigorous program evaluation and open discussion of the results of those evaluations. Its openness about these results has increased our confidence in Evidence Action as an organization. We look forward to seeing the results from the 2018 RCT in 2019.

Notes

[1] “At this early stage in analysis, we find no evidence that the program had an impact (positive or negative) on migration, caloric intake, food expenditure, or income.” Evidence Action, unpublished summary document, Page 1.

[2] The 2017 RCT studied a period from the fall of 2017 through early 2018.

[3] “This study has two main goals:

  1. “A replication of previous findings showing positive impact of incentivized migration on seasonal migration, caloric intake, food and non-food expenditure, income, and food security. Our aim is to estimate impact of a scaled version of the No Lean Season program: intensifying program implementation within branches and expanding the provision of loans to all eligible households.”

Unpublished summary document, Page 1.

[4] “The second set of explanations focus on unintentional implementation changes caused by the change in eligibility, the vastly expanded scope of the program, or other factors. In the most recent round, it is possible that Migration Organizers (MOs) focused their efforts on those households who were most likely to migrate even without a loan to the exclusion of the target population households who need a loan to afford migration. Such behavior may have even been encouraged by the use of targets set by the NGO to manage implementation at such a large scale. We have implemented a qualitative survey to understand the incentives and actions of MOs last year, and are revising our instructions to avoid any possibility of this issue this year.” Evidence Action, unpublished summary document (with minor revision from Evidence Action), Page 11.

[5] “Most notably, the program was affected by severe flooding in many regions, and implementation was subsequently delayed as well. We are still evaluating whether these regions are the ones with the most diminished effects, although we lack the data in control areas to conduct an experimental comparison.” Evidence Action, unpublished summary document, Page 11-12.

[6] “It is possible that what we observe this year may be the true effect of the No Lean Season program when implemented at scale. This may be because conditions in rural Bangladesh have changed since the initial years of success, spillovers at scale cancel out any gains observed in small-scale pilots, or pilot villages were selected because they were most likely to be receptive to the program.” Evidence Action, unpublished summary document, Page 11.

[7] Evidence Action, “Interpretation of 2017 Results” deck and narrative (unpublished)

[8] “This adjustment is used to account for external validity concerns not accounted for elsewhere in the CEA.

“The default adjustment value of 80% is our best guess about the appropriate value, but it is not based on a formal calculation.

“The program at scale takes place in the same region with the same implementers (RDRS and Evidence Action) as the source of our key evidence for the intervention (the 2014 RCT). The program at scale differs in some aspects of implementation, particularly the inclusiveness of the eligibility criteria and the proportion of eligible households offered an incentive. In the 2014 RCT, the subsidy was a cash transfer rather than an interest-free loan, however the 2008 RCT found a similar effect regardless of whether the subsidy was a cash transfer or an interest-free loan.

“There is some evidence (from a 2013 RCT) suggesting that the program may be ineffective when the perceived risk of migrating increases for reasons such as labor strikes and violence. The researchers estimated that these are 1-in-10 year events.

“Additional discussion related to this parameter can be found at https://www.givewell.org/charities/no-lean-season#programdifferentfromRCTs.” 2018 GiveWell Cost-Effectiveness Model — Version 10, “Migration subsidies” tab, note on cell A19.

The post Update on No Lean Season’s top charity status appeared first on The GiveWell Blog.

Catherine Hollander

Update on No Lean Season’s top charity status

5 years 10 months ago

At the end of 2017, we named Evidence Action’s No Lean Season one of GiveWell’s nine top charities. Now, GiveWell and Evidence Action agree that No Lean Season should not be a GiveWell top charity this year, and Evidence Action is not seeking additional funding to support its work at this time.

This post will discuss this decision in detail. In brief, we updated our assessment of No Lean Season, a program that provides loans to support seasonal migration, based on preliminary results Evidence Action began discussing with us in July from a study of the 2017 implementation of the program (hereinafter referred to as “2017 RCT”). These results suggested the program, as implemented in 2017, did not successfully induce migration. Taking this new information into account alongside previous studies of the program, we and Evidence Action do not believe No Lean Season meets our top charity criteria at this time.

Evidence Action’s post on this decision is here.

GiveWell’s mission is to identify and recommend charities that can most effectively use additional donations. While it may be disappointing for a top charity to be removed from our list of recommendations, we believe that adding and removing top charities from our list is an important part of our process. If our top charities list never changed, we would guess we were (a) acting too conservatively (i.e. not being open enough to adding new top charities), or (b) not being critical enough of groups once they’ve been added to our list (i.e. not being open enough to removing existing top charities).

We believe this decision speaks positively of Evidence Action and demonstrates our mutual commitment to updating our views based on new evidence. GiveWell has interacted with hundreds of organizations in our history, and very few have subjected their programs to a rigorous study in the way that Evidence Action did last year and, at smaller scale, in 2014. We’re excited to work with a group like Evidence Action that is committed to rigorous study and openness about results.

Summary

In this post, we will discuss:

  • The history of GiveWell and No Lean Season. (More)
  • How the 2017 RCT updated our views of No Lean Season. (More)
    • What did the 2017 RCT find? (More)
    • How did we interpret the RCT results? (More)
    • What does the future of No Lean Season look like? (More)
  • Conclusion
GiveWell and No Lean Season

No Lean Season provides support for low-income agricultural workers in rural Bangladesh during the time of seasonal income and food insecurity (“lean season”). The program provides small, interest-free loans to support workers’ temporary migration to seek employment. No Lean Season is implemented by RDRS Bangladesh; Evidence Action provides strategic direction, conducts program monitoring, and provides technical assistance, among other functions. Evidence Action developed No Lean Season as part of its Beta portfolio, which is focused on prototyping and scaling cost-effective programs.

GiveWell began engaging with No Lean Season as a potential top charity in 2013, when we began to explore making an Incubation Grant to support its scale-up. We saw No Lean Season as a promising program that lacked the track record to be considered for a top charity recommendation at that time. We describe our initial interest in the program in a February 2017 blog post:

We approached Evidence Action in late 2013 to express our interest in supporting the creation of new GiveWell top charities.

In March 2014, Good Ventures made a $250,000 grant to Evidence Action to support the investigation and scale-up of promising programs.

Since then, Good Ventures has made three additional grants totaling approximately $2.7 million to support the program’s scale-up.

No Lean Season continued to test and scale their program with this and other support. We decided to recommend No Lean Season as a top charity in late 2017. We based our recommendation on three randomized controlled trials (RCTs) of the program. (We generally consider RCTs to be one of the strongest types of evidence available; you can read more about why we rely on RCTs here.)

Two of the RCTs (conducted in 2008 and 2014) indicated increased migration, income, and consumption for program participants. In the third RCT, which was conducted in 2013 and has not been published, the program is considered to have failed to induce migration, potentially due to political violence that year. We discuss the RCT evidence in greater depth in our intervention report on conditional subsidies for seasonal labor migration in northern Bangladesh.

Weighing the evidence, the cost of the program, and the potential impacts, we decided No Lean Season met our criteria to be named a top charity in November 2017. We summarized our reasoning in our blog post announcing our 2017 list of top charities, and noted the risks of this recommendation:

Several randomized controlled trials (RCTs) of subsidies to increase migration provide moderately strong evidence that such an intervention increases household income and consumption during the lean season. An additional RCT is ongoing. We estimate that No Lean Season is roughly five times as cost-effective as cash transfers (see our cost-effectiveness analysis).

Evidence Action has shared some details of its plans for monitoring No Lean Season in the future, but, as many of these plans have not been fully implemented, we have seen limited results. Therefore, there is some uncertainty as to whether No Lean Season will produce the data required to give us confidence that loans are appropriately targeted and reach their intended recipients in full; that recipients are not pressured into accepting loans; and that participants successfully migrate, find work, and are not exposed to major physical and other risks while migrating.

As indicated above, No Lean Season conducted an additional RCT to evaluate its program during the 2017 lean season (approximately September to December), the preliminary results of which indicate the program failed to induce migration. With the evidence from the 2017 RCT, the case for the program’s impact and cost-effectiveness looks weaker.

Our updated perspective on No Lean Season

The 2017 RCT was a key factor in the decision to remove No Lean Season from our top charities list. Below, we discuss:

What did the 2017 RCT find?

The 2017 RCT was a collaboration between Evidence Action, Innovations for Poverty Action, and researchers from Yale University, the London School of Economics, and the University of California, Davis. In a preliminary analysis shared with GiveWell in September 2018, the researchers did not find evidence for a negative or positive impact on migration, and found no statistically significant impact on income and consumption.[1]

However, the implementation of the program during the 2017[2] lean season and the evaluation of it differed from previous iterations. No Lean Season operated at a larger scale in the fall of 2017 than it had previously, offering loans to 158,155 households, compared with 16,268 households in 2016. Relative to earlier versions of the program, the program in 2017 involved (a) higher-intensity delivery of the intervention (offering loans to most eligible individuals) and (b) broader eligibility requirements (the eligibility rate in 2017 was 77 percent, compared with 49 percent in 2016).[3]

At this point, neither GiveWell, nor No Lean Season, nor the researchers feel we have a conclusive understanding of why the program failed to induce migration. However, No Lean Season and the researchers are exploring various hypotheses about what may explain the failure to induce migration, and they note that some suggestive evidence supports some hypotheses more than others. The researchers have posited several possibilities:

  1. The way the program was targeted in 2017 was suboptimal. The Migration Organizers, who survey households for eligibility and offer and disburse loans (more detail here under “Migration Organizers”), may have focused their efforts on the individuals that were seen as most likely to migrate, rather than those who needed a loan to afford migration. The use of loan targets during implementation may have inadvertently incentivized this behavior.[4] If, for example, loan officers mostly made loans to people who would have migrated regardless of receiving a loan, this could have led to the lack of impact on migration found in the study.
  2. The 2017 lean season was particularly bad for the program. The researchers note that severe flooding and associated implementation delays in some regions may have caused problems in 2017. The researchers plan to look more closely at the regions that experienced flooding, though they note that they don’t have the data necessary to make experimental comparisons.[5] In addition, a 2013 trial may have failed due to issues that were specific to the year of that trial, such as increased labor strikes.
  3. There exists another (currently unknown) reason why this program won’t work at scale. Conditions in Bangladesh may have changed, negative spillovers (harmful impacts for individuals who did not receive loans) may cancel out gains, or pilot villages may have been strategically picked in earlier trials.[6]

The researchers are considering all of these possibilities. After considering various possible theories as well as some non-experimental data (including administrative data and data from a special-purpose survey of Migration Organizers who worked on the program in 2017), they feel that the ‘mistargeting’ theory is the most likely explanation and the explanation most consistent with the analysis.[7]

In scenario (1), No Lean Season may be able to identify and fix the problem. In scenario (2), GiveWell will need to update our estimate of the impact of the program to take into account the fact that periodic program failures due to external factors are more likely than we previously thought. In scenario (3), the program is unlikely to be effective in the future.

How did we interpret the RCT results?

We don’t know the extent to which each of the above explanations contributed to the study not finding an effect on migration.

We used the results of the 2017 RCT to update our cost-effectiveness estimate for the program. Cost-effectiveness estimates form arguably the most important single input into our decisions about whether or not to recommend charities (more on how GiveWell uses cost-effectiveness analyses here). When we calculate a program’s cost-effectiveness, we take many different factors into account, such as the administrative and program costs and the expected impact. We also make a number of educated guesses, such as the likelihood that a program’s impact in a new country will be similar to that in a country where it has previously worked. Below, we describe the mechanism by which the 2017 RCT result was incorporated into our model and how it changed our conclusion.

Prior to this year, we formed our view of No Lean Season based on the three small-scale RCTs mentioned above (conducted in 2008, 2013, and 2014). Each of these RCTs looked at a slightly different version of the program. We believed that the ‘high-intensity’ arm of the 2014 RCT was the version most likely to resemble the program at scale. We thus used the migration rate measured in this arm of the RCT as our starting point for calculating the program’s impact.

The high-intensity arm of the 2014 RCT also had the highest measured migration rate of the three RCTs we assessed, and so we wanted to give some consideration to the less-positive results found in the other two assessments. We applied a small, downward adjustment to the rate of induced migration observed in the 2014 high-intensity arm in our cost-effectiveness model; this was an educated guess, based on the information we had. Our best guess was that the program would lead, in expectation, to 80% of the induced migration seen in the 2014 high-intensity arm.[8]

Now, the preliminary 2017 RCT results show no significant impact on migration rates or incomes. Because this trial was large and very recent, we updated our expectations of the impact of the program substantially, and in a negative direction. Our best guess now is that the program will lead, in expectation, to 40% of the induced migration seen in the 2014 high-intensity arm. Holding other inputs constant, this adjustment reduces our estimate of No Lean Season’s cost-effectiveness by a factor of two.

This reduced cost-effectiveness, along with our updated qualitative picture of No Lean Season’s evidence of effectiveness, led to the decision to remove No Lean Season from our top charities list.

What does the future of No Lean Season look like?

Although they are not raising more funding at this time, No Lean Season has over two years’ worth of remaining funding. We understand that the organization has made changes to the program design in 2018 based on emerging interpretations of the 2017 results, and has collected additional data to evaluate some of the hypotheses which may explain those results (including, for example, a survey of Migration Organizers who worked on the 2017 program). They plan to subject the 2018 implementation round to an additional ‘RCT-at-scale,’ with a particular focus on reassessing the program’s effects on migration, income and consumption, as well as potential effects at migration destinations. They will continue to explore what may have caused the issue in the 2017 program at scale, and to see whether they can find a solution. If they do that, we’ll want to reassess the evidence and the costs to determine whether No Lean Season meets our bar for top charity status. Evidence Action believes we should have the necessary information to reassess starting in mid-2019, based on the results of the RCT conducted during the 2018 lean season and other analyses they perform.

Conclusion

This is the second time since 2011 that we have removed a top charity from our list (prior to 2011, our top charities list was fairly different from today; we made a big-picture shift in our priorities that year that led us to our more recent lists). The previous removal occurred in 2013, when we took the Against Malaria Foundation (AMF) off of our list because we didn’t believe it could absorb additional funding effectively in the near term. AMF was reinstated as a top charity in 2014.

The decision to remove a top charity is never easy. But continuously evaluating GiveWell’s recommended charities is an important part of our work, and we take it seriously. It’s easy to talk about a commitment to evidence when the results are positive. It’s hard to maintain that commitment when the results are not. We’re excited to work with a group like Evidence Action that is committed to rigorous program evaluation and open discussion of the results of those evaluations. Its openness about these results has increased our confidence in Evidence Action as an organization. We look forward to seeing the results from the 2018 RCT in 2019.

Notes

[1] “At this early stage in analysis, we find no evidence that the program had an impact (positive or negative) on migration, caloric intake, food expenditure, or income.” Evidence Action, unpublished summary document, Page 1.

[2] The 2017 RCT studied a period from the fall of 2017 through early 2018.

[3] “This study has two main goals:

  1. “A replication of previous findings showing positive impact of incentivized migration on seasonal migration, caloric intake, food and non-food expenditure, income, and food security. Our aim is to estimate impact of a scaled version of the No Lean Season program: intensifying program implementation within branches and expanding the provision of loans to all eligible households.”

Unpublished summary document, Page 1.

[4] “The second set of explanations focus on unintentional implementation changes caused by the change ineligibility, the vastly expanded scope of the program, or other factors. In the most recent round, it is possible that Migration Organizers (MOs) focused their efforts on those households who were most likely to migrate even without a loan to the exclusion of the target population households who need a loan to afford migration. Such behavior may have even been encouraged by the use of targets set by the NGO to manage implementation at such a large scale. We have implemented a qualitative survey to understand the incentives and actions of MOs last year, and are revising our instructions to avoid any possibility of this issue this year.” Evidence Action, unpublished summary document (with minor revision from Evidence Action), Page 11.

[5] “Most notably, the program was affected by severe flooding in many regions, and implementation was subsequently delayed as well. We are still evaluating whether these regions are the ones with the most diminished effects, although we lack the data in control areas to conduct an experimental comparison.” Evidence Action, unpublished summary document, Page 11-12.

[6] “It is possible that what we observe this year may be the true effect of the No Lean Season program when implemented at scale. This may be because conditions in rural Bangladesh have changed since the initial years of success, spillovers at scale cancel out any gains observed in small-scale pilots, or pilot villages were selected because they were most likely to be receptive to the program.” Evidence Action, unpublished summary document, Page 11.

[7] Evidence Action, “Interpretation of 2017 Results” deck and narrative (unpublished)

[8] “This adjustment is used to account for external validity concerns not accounted for elsewhere in the CEA.

“The default adjustment value of 80% is our best guess about the appropriate value, but it is not based on a formal calculation.

“The program at scale takes place in the same region with the same implementers (RDRS and Evidence Action) as the source of our key evidence for the intervention (the 2014 RCT). The program at scale differs in some aspects of implementation, particularly the inclusiveness of the eligibility criteria and the proportion of eligible households offered an incentive. In the 2014 RCT, the subsidy was a cash transfer rather than an interest-free loan, however the 2008 RCT found a similar effect regardless of whether the subsidy was a cash transfer or an interest-free loan.

“There is some evidence (from a 2013 RCT) suggesting that the program may be ineffective when the perceived risk of migrating increases for reasons such as labor strikes and violence. The researchers estimated that these are 1-in-10 year events.

“Additional discussion related to this parameter can be found at https://www.givewell.org/charities/no-lean-season#programdifferentfromRCTs.” 2018 GiveWell Cost-Effectiveness Model — Version 10, “Migration subsidies” tab, note on cell A19.

The post Update on No Lean Season’s top charity status appeared first on The GiveWell Blog.

Catherine Hollander

Update on No Lean Season’s top charity status

5 years 10 months ago

At the end of 2017, we named Evidence Action’s No Lean Season one of GiveWell’s nine top charities. Now, GiveWell and Evidence Action agree that No Lean Season should not be a GiveWell top charity this year, and Evidence Action is not seeking additional funding to support its work at this time.

This post will discuss this decision in detail. In brief, we updated our assessment of No Lean Season, a program that provides loans to support seasonal migration, based on preliminary results Evidence Action began discussing with us in July from a study of the 2017 implementation of the program (hereinafter referred to as “2017 RCT”). These results suggested the program, as implemented in 2017, did not successfully induce migration. Taking this new information into account alongside previous studies of the program, we and Evidence Action do not believe No Lean Season meets our top charity criteria at this time.

Evidence Action’s post on this decision is here.

GiveWell’s mission is to identify and recommend charities that can most effectively use additional donations. While it may be disappointing for a top charity to be removed from our list of recommendations, we believe that adding and removing top charities from our list is an important part of our process. If our top charities list never changed, we would guess we were (a) acting too conservatively (i.e. not being open enough to adding new top charities), or (b) not being critical enough of groups once they’ve been added to our list (i.e. not being open enough to removing existing top charities).

We believe this decision speaks positively of Evidence Action and demonstrates our mutual commitment to updating our views based on new evidence. GiveWell has interacted with hundreds of organizations in our history, and very few have subjected their programs to a rigorous study in the way that Evidence Action did last year and, at smaller scale, in 2014. We’re excited to work with a group like Evidence Action that is committed to rigorous study and openness about results.

Summary

In this post, we will discuss:

  • The history of GiveWell and No Lean Season. (More)
  • How the 2017 RCT updated our views of No Lean Season. (More)
    • What did the 2017 RCT find? (More)
    • How did we interpret the RCT results? (More)
    • What does the future of No Lean Season look like? (More)
  • Conclusion
GiveWell and No Lean Season

No Lean Season provides support for low-income agricultural workers in rural Bangladesh during the time of seasonal income and food insecurity (“lean season”). The program provides small, interest-free loans to support workers’ temporary migration to seek employment. No Lean Season is implemented by RDRS Bangladesh; Evidence Action provides strategic direction, conducts program monitoring, and provides technical assistance, among other functions. Evidence Action developed No Lean Season as part of its Beta portfolio, which is focused on prototyping and scaling cost-effective programs.

GiveWell began engaging with No Lean Season as a potential top charity in 2013, when we began to explore making an Incubation Grant to support its scale-up. We saw No Lean Season as a promising program that lacked the track record to be considered for a top charity recommendation at that time. We describe our initial interest in the program in a February 2017 blog post:

We approached Evidence Action in late 2013 to express our interest in supporting the creation of new GiveWell top charities.

In March 2014, Good Ventures made a $250,000 grant to Evidence Action to support the investigation and scale-up of promising programs.

Since then, Good Ventures has made three additional grants totaling approximately $2.7 million to support the program’s scale-up.

No Lean Season continued to test and scale their program with this and other support. We decided to recommend No Lean Season as a top charity in late 2017. We based our recommendation on three randomized controlled trials (RCTs) of the program. (We generally consider RCTs to be one of the strongest types of evidence available; you can read more about why we rely on RCTs here.)

Two of the RCTs (conducted in 2008 and 2014) indicated increased migration, income, and consumption for program participants. In the third RCT, which was conducted in 2013 and has not been published, the program is considered to have failed to induce migration, potentially due to political violence that year. We discuss the RCT evidence in greater depth in our intervention report on conditional subsidies for seasonal labor migration in northern Bangladesh.

Weighing the evidence, the cost of the program, and the potential impacts, we decided No Lean Season met our criteria to be named a top charity in November 2017. We summarized our reasoning in our blog post announcing our 2017 list of top charities, and noted the risks of this recommendation:

Several randomized controlled trials (RCTs) of subsidies to increase migration provide moderately strong evidence that such an intervention increases household income and consumption during the lean season. An additional RCT is ongoing. We estimate that No Lean Season is roughly five times as cost-effective as cash transfers (see our cost-effectiveness analysis).

Evidence Action has shared some details of its plans for monitoring No Lean Season in the future, but, as many of these plans have not been fully implemented, we have seen limited results. Therefore, there is some uncertainty as to whether No Lean Season will produce the data required to give us confidence that loans are appropriately targeted and reach their intended recipients in full; that recipients are not pressured into accepting loans; and that participants successfully migrate, find work, and are not exposed to major physical and other risks while migrating.

As indicated above, No Lean Season conducted an additional RCT to evaluate its program during the 2017 lean season (approximately September to December), the preliminary results of which indicate the program failed to induce migration. With the evidence from the 2017 RCT, the case for the program’s impact and cost-effectiveness looks weaker.

Our updated perspective on No Lean Season

The 2017 RCT was a key factor in the decision to remove No Lean Season from our top charities list. Below, we discuss:

What did the 2017 RCT find?

The 2017 RCT was a collaboration between Evidence Action, Innovations for Poverty Action, and researchers from Yale University, the London School of Economics, and the University of California, Davis. In a preliminary analysis shared with GiveWell in September 2018, the researchers did not find evidence for a negative or positive impact on migration, and found no statistically significant impact on income and consumption.[1]

However, the implementation of the program during the 2017[2] lean season and the evaluation of it differed from previous iterations. No Lean Season operated at a larger scale in the fall of 2017 than it had previously, offering loans to 158,155 households, compared with 16,268 households in 2016. Relative to earlier versions of the program, the program in 2017 involved (a) higher-intensity delivery of the intervention (offering loans to most eligible individuals) and (b) broader eligibility requirements (the eligibility rate in 2017 was 77 percent, compared with 49 percent in 2016).[3]

At this point, neither GiveWell, nor No Lean Season, nor the researchers feel we have a conclusive understanding of why the program failed to induce migration. However, No Lean Season and the researchers are exploring various hypotheses about what may explain the failure to induce migration, and they note that some suggestive evidence supports some hypotheses more than others. The researchers have posited several possibilities:

  1. The way the program was targeted in 2017 was suboptimal. The Migration Organizers, who survey households for eligibility and offer and disburse loans (more detail here under “Migration Organizers”), may have focused their efforts on the individuals that were seen as most likely to migrate, rather than those who needed a loan to afford migration. The use of loan targets during implementation may have inadvertently incentivized this behavior.[4] If, for example, loan officers mostly made loans to people who would have migrated regardless of receiving a loan, this could have led to the lack of impact on migration found in the study.
  2. The 2017 lean season was particularly bad for the program. The researchers note that severe flooding and associated implementation delays in some regions may have caused problems in 2017. The researchers plan to look more closely at the regions that experienced flooding, though they note that they don’t have the data necessary to make experimental comparisons.[5] In addition, a 2013 trial may have failed due to issues that were specific to the year of that trial, such as increased labor strikes.
  3. There exists another (currently unknown) reason why this program won’t work at scale. Conditions in Bangladesh may have changed, negative spillovers (harmful impacts for individuals who did not receive loans) may cancel out gains, or pilot villages may have been strategically picked in earlier trials.[6]

The researchers are considering all of these possibilities. After considering various possible theories as well as some non-experimental data (including administrative data and data from a special-purpose survey of Migration Organizers who worked on the program in 2017), they feel that the ‘mistargeting’ theory is the most likely explanation and the explanation most consistent with the analysis.[7]

In scenario (1), No Lean Season may be able to identify and fix the problem. In scenario (2), GiveWell will need to update our estimate of the impact of the program to take into account the fact that periodic program failures due to external factors are more likely than we previously thought. In scenario (3), the program is unlikely to be effective in the future.

How did we interpret the RCT results?

We don’t know the extent to which each of the above explanations contributed to the study not finding an effect on migration.

We used the results of the 2017 RCT to update our cost-effectiveness estimate for the program. Cost-effectiveness estimates form arguably the most important single input into our decisions about whether or not to recommend charities (more on how GiveWell uses cost-effectiveness analyses here). When we calculate a program’s cost-effectiveness, we take many different factors into account, such as the administrative and program costs and the expected impact. We also make a number of educated guesses, such as the likelihood that a program’s impact in a new country will be similar to that in a country where it has previously worked. Below, we describe the mechanism by which the 2017 RCT result was incorporated into our model and how it changed our conclusion.

Prior to this year, we formed our view of No Lean Season based on the three small-scale RCTs mentioned above (conducted in 2008, 2013, and 2014). Each of these RCTs looked at a slightly different version of the program. We believed that the ‘high-intensity’ arm of the 2014 RCT was the version most likely to resemble the program at scale. We thus used the migration rate measured in this arm of the RCT as our starting point for calculating the program’s impact.

The high-intensity arm of the 2014 RCT also had the highest measured migration rate of the three RCTs we assessed, and so we wanted to give some consideration to the less-positive results found in the other two assessments. We applied a small, downward adjustment to the rate of induced migration observed in the 2014 high-intensity arm in our cost-effectiveness model; this was an educated guess, based on the information we had. Our best guess was that the program would lead, in expectation, to 80% of the induced migration seen in the 2014 high-intensity arm.[8]

Now, the preliminary 2017 RCT results show no significant impact on migration rates or incomes. Because this trial was large and very recent, we updated our expectations of the impact of the program substantially, and in a negative direction. Our best guess now is that the program will lead, in expectation, to 40% of the induced migration seen in the 2014 high-intensity arm. Holding other inputs constant, this adjustment reduces our estimate of No Lean Season’s cost-effectiveness by a factor of two.

This reduced cost-effectiveness, along with our updated qualitative picture of No Lean Season’s evidence of effectiveness, led to the decision to remove No Lean Season from our top charities list.

What does the future of No Lean Season look like?

Although they are not raising more funding at this time, No Lean Season has over two years’ worth of remaining funding. We understand that the organization has made changes to the program design in 2018 based on emerging interpretations of the 2017 results, and has collected additional data to evaluate some of the hypotheses which may explain those results (including, for example, a survey of Migration Organizers who worked on the 2017 program). They plan to subject the 2018 implementation round to an additional ‘RCT-at-scale,’ with a particular focus on reassessing the program’s effects on migration, income and consumption, as well as potential effects at migration destinations. They will continue to explore what may have caused the issue in the 2017 program at scale, and to see whether they can find a solution. If they do that, we’ll want to reassess the evidence and the costs to determine whether No Lean Season meets our bar for top charity status. Evidence Action believes we should have the necessary information to reassess starting in mid-2019, based on the results of the RCT conducted during the 2018 lean season and other analyses they perform.

Conclusion

This is the second time since 2011 that we have removed a top charity from our list (prior to 2011, our top charities list was fairly different from today; we made a big-picture shift in our priorities that year that led us to our more recent lists). The previous removal occurred in 2013, when we took the Against Malaria Foundation (AMF) off of our list because we didn’t believe it could absorb additional funding effectively in the near term. AMF was reinstated as a top charity in 2014.

The decision to remove a top charity is never easy. But continuously evaluating GiveWell’s recommended charities is an important part of our work, and we take it seriously. It’s easy to talk about a commitment to evidence when the results are positive. It’s hard to maintain that commitment when the results are not. We’re excited to work with a group like Evidence Action that is committed to rigorous program evaluation and open discussion of the results of those evaluations. Its openness about these results has increased our confidence in Evidence Action as an organization. We look forward to seeing the results from the 2018 RCT in 2019.

Notes

[1] “At this early stage in analysis, we find no evidence that the program had an impact (positive or negative) on migration, caloric intake, food expenditure, or income.” Evidence Action, unpublished summary document, Page 1.

[2] The 2017 RCT studied a period from the fall of 2017 through early 2018.

[3] “This study has two main goals:

  1. “A replication of previous findings showing positive impact of incentivized migration on seasonal migration, caloric intake, food and non-food expenditure, income, and food security. Our aim is to estimate impact of a scaled version of the No Lean Season program: intensifying program implementation within branches and expanding the provision of loans to all eligible households.”

Unpublished summary document, Page 1.

[4] “The second set of explanations focus on unintentional implementation changes caused by the change in eligibility, the vastly expanded scope of the program, or other factors. In the most recent round, it is possible that Migration Organizers (MOs) focused their efforts on those households who were most likely to migrate even without a loan to the exclusion of the target population households who need a loan to afford migration. Such behavior may have even been encouraged by the use of targets set by the NGO to manage implementation at such a large scale. We have implemented a qualitative survey to understand the incentives and actions of MOs last year, and are revising our instructions to avoid any possibility of this issue this year.” Evidence Action, unpublished summary document (with minor revision from Evidence Action), Page 11.

[5] “Most notably, the program was affected by severe flooding in many regions, and implementation was subsequently delayed as well. We are still evaluating whether these regions are the ones with the most diminished effects, although we lack the data in control areas to conduct an experimental comparison.” Evidence Action, unpublished summary document, Page 11-12.

[6] “It is possible that what we observe this year may be the true effect of the No Lean Season program when implemented at scale. This may be because conditions in rural Bangladesh have changed since the initial years of success, spillovers at scale cancel out any gains observed in small-scale pilots, or pilot villages were selected because they were most likely to be receptive to the program.” Evidence Action, unpublished summary document, Page 11.

[7] Evidence Action, “Interpretation of 2017 Results” deck and narrative (unpublished).

[8] “This adjustment is used to account for external validity concerns not accounted for elsewhere in the CEA.

“The default adjustment value of 80% is our best guess about the appropriate value, but it is not based on a formal calculation.

“The program at scale takes place in the same region with the same implementers (RDRS and Evidence Action) as the source of our key evidence for the intervention (the 2014 RCT). The program at scale differs in some aspects of implementation, particularly the inclusiveness of the eligibility criteria and the proportion of eligible households offered an incentive. In the 2014 RCT, the subsidy was a cash transfer rather than an interest-free loan, however the 2008 RCT found a similar effect regardless of whether the subsidy was a cash transfer or an interest-free loan.

“There is some evidence (from a 2013 RCT) suggesting that the program may be ineffective when the perceived risk of migrating increases for reasons such as labor strikes and violence. The researchers estimated that these are 1-in-10 year events.

“Additional discussion related to this parameter can be found at https://www.givewell.org/charities/no-lean-season#programdifferentfromRCTs.” 2018 GiveWell Cost-Effectiveness Model — Version 10, “Migration subsidies” tab, note on cell A19.

The post Update on No Lean Season’s top charity status appeared first on The GiveWell Blog.

Catherine Hollander

A grant to Evidence Action Beta to prototype, test, and scale promising programs

5 years 11 months ago

In July 2018, we recommended a $5.1 million grant to Evidence Action Beta to create a program dedicated to developing potential GiveWell top charities by prototyping, testing, and scaling programs which have the potential to be highly impactful and cost-effective.

This grant was made as part of GiveWell’s Incubation Grants program, which aims to support potential future GiveWell top charities and to help grow the pipeline of organizations we can consider for a recommendation. Funding for Incubation Grants comes from Good Ventures, a large foundation with which we work closely.

Summary

This post will discuss the following:

  • Why Evidence Action Beta is promising. (More)
  • Risks we see with this Incubation Grant. (More)
  • Our plans for following Evidence Action Beta’s work going forward. (More)

Read More

The post A grant to Evidence Action Beta to prototype, test, and scale promising programs appeared first on The GiveWell Blog.

Olivia Larsen

A grant to Evidence Action Beta to prototype, test, and scale promising programs

5 years 11 months ago

In July 2018, we recommended a $5.1 million grant to Evidence Action Beta to create a program dedicated to developing potential GiveWell top charities by prototyping, testing, and scaling programs which have the potential to be highly impactful and cost-effective.

This grant was made as part of GiveWell’s Incubation Grants program, which aims to support potential future GiveWell top charities and to help grow the pipeline of organizations we can consider for a recommendation. Funding for Incubation Grants comes from Good Ventures, a large foundation with which we work closely.

Summary

This post will discuss the following:

  • Why Evidence Action Beta is promising. (More)
  • Risks we see with this Incubation Grant. (More)
  • Our plans for following Evidence Action Beta’s work going forward. (More)

Incubation Grant to Evidence Action Beta

We summarized our case for making this grant in a recently-published write-up:

A key part of GiveWell’s research process is trying to identify evidence-backed, cost-effective programs. GiveWell sometimes finds programs that seem potentially highly impactful based on academic research, but for which there is no obvious organizational partner that could scale up and test them. This grant will fund Evidence Action Beta to create … [an] incubator … focused on interventions that GiveWell and Evidence Action believe are promising but that lack existing organizations to scale them.

We have found that which program a charity works on is generally the most important factor in determining its overall cost-effectiveness. Through partnering with Evidence Action Beta to test programs that we think have the potential to be very cost-effective, … our hope is that programs tested and scaled up through this partnership may eventually become GiveWell top charities.

We believe this incubator has the potential to fill a major gap in the nonprofit world by providing a well-defined path for testing and potentially scaling … promising idea[s] for helping the global poor.

For full details on the grant activities and budget, see this page.

We believe that Evidence Action Beta is well-positioned to run this incubator because of its track record of scaling up cost-effective programs with high-quality monitoring. Evidence Action Beta’s parent organization, Evidence Action, leads two of our top charities (Deworm the World Initiative and No Lean Season) and one standout charity (Dispensers for Safe Water).

Modeling cost-effectiveness

In addition to the theoretical case for the grant outlined above, we also made explicit predictions and modeled the potential cost-effectiveness of this grant, so we could better consider it relative to other options. In this section, we provide more details on our process for estimating the grant’s cost-effectiveness.

The main path to impact we see for this grant is the creation of new top charities that could use GiveWell-directed funds more cost-effectively than the alternatives.

This could occur:

  1. if Evidence Action Beta incubates charities which are more cost-effective than our current top charities, or
  2. if Evidence Action Beta incubates charities which are similarly cost-effective to our current top charities—in a scenario in which we have mostly filled our current top charities’ funding gaps. Right now, we believe our top charities can absorb significantly more funding than we expect to direct to them; this diminishes our view of the value of finding additional, similarly cost-effective opportunities. If our current top charities’ funding gaps were close to filled, we would place higher value on identifying additional room for more funding at a similarly cost-effective level.

This grant could also have an impact if it causes other, non-GiveWell funders to allocate resources to charities incubated by this grant. This incubator may create programs that GiveWell doesn’t direct funding to but others do. If these new opportunities are more cost-effective than what these funders would have otherwise supported, then this grant will have had a positive impact by causing funds to be spent more cost-effectively, even if GiveWell never recommends funding to the new programs directly.

We register forecasts for all Incubation Grants we make. We register these not because we are confident in them but because they help us clarify and communicate our expectations for the outcomes of the grant. Here, we forecast a 55% chance that, by December 2023, Evidence Action Beta’s incubator leads to a new top charity that is 1-2x as cost-effective as the giving opportunity to which we would have otherwise directed those funds, and a 30% chance that the grant does not lead to any new top charities by that time. (For more forecasts we made surrounding this grant, see here.)

We incorporated our forecasts, as well as the potential impacts outlined above, into our cost-effectiveness estimate for the grant. Note that the potential upside coming from other funders is a particularly rough estimate that could change substantially with additional research.

Our best guess is that this grant is roughly 9x as cost-effective as cash transfers, but we have spent limited time on this estimate and are highly uncertain about it. For context, we estimate that our current top charities are between ~3x and ~12x as cost-effective as cash transfers.
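
For readers who want to see the mechanics, here is a minimal sketch of the probability-weighted reasoning involved. The probabilities are the forecasts quoted above; the per-scenario values are placeholders chosen only to show the structure of the calculation, and our actual estimate incorporates additional pathways (such as effects on other funders) that are not modeled here.

  # Minimal sketch of a probability-weighted estimate for the grant.
  # The probabilities are the forecasts quoted above; the per-scenario values
  # (value created per grant dollar, in arbitrary units) are placeholders and
  # do not reproduce the roughly 9x figure or the full model.
  scenarios = [
      (0.55, 3.0),  # a new top charity emerges by December 2023 (placeholder value)
      (0.30, 0.0),  # no new top charity by then
      (0.15, 1.0),  # outcomes not covered by the two forecasts above (placeholder value)
  ]
  expected_value_per_dollar = sum(p * v for p, v in scenarios)
  print(round(expected_value_per_dollar, 2))  # 1.8 in these arbitrary units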

Risks to the success of the grant

We do see risks to the success of this grant:

  • Few programs may be more cost-effective than our current top charities, or our top charities may remain underfunded for a long time. If Evidence Action Beta fails to identify giving opportunities that are more cost-effective than GiveWell’s 2017 top charities, or if it only identifies similarly cost-effective opportunities while our current top charities remain underfunded, then, barring any major upside effects, this grant will have failed to make an impact.
  • We expect this partnership with Evidence Action Beta to require a fair amount of senior staff capacity. If other means of identifying cost-effective giving opportunities, such as our work to evaluate policy opportunities, end up seeming more promising, this capacity may have been misused.

Going forward

This grant initiates a partnership with Evidence Action Beta toward which we might contribute substantial additional GiveWell Incubation Grant funding in the future. We plan to spend a fair amount of staff time on this ongoing partnership and follow this work closely.

We look forward to sharing updates and the results.

The post A grant to Evidence Action Beta to prototype, test, and scale promising programs appeared first on The GiveWell Blog.

Olivia Larsen

Publishing more frequent updates to our cost-effectiveness model

5 years 11 months ago

We’ve recently made a number of adjustments to improve our research process. Not all of them are easily visible outside of the organization.

This post is to highlight one of them: Publishing more frequent updates to our cost-effectiveness model throughout the year.

Summary

This post will explain:

  • What changed in how we make updates to our cost-effectiveness model. (More)
  • Why we made this change. (More)
  • How to engage with updates to our model. (More)

What changed?

Last week, we published the ninth and tenth versions of our cost-effectiveness model in 2018. We made a number of updates in these newest versions. They included accounting for reductions in malaria incidence among individuals who don’t receive seasonal malaria chemoprevention (SMC), the preventive treatment one of our top charities distributes, but who may benefit from living near people who do receive it (version 9), and updating the cost per deworming treatment delivered by another top charity, Sightsavers (version 10). These changes, and six others incorporated in the two latest versions, are described in our changelog.

Up until last year, we generally updated our cost-effectiveness model once or twice per year. However, as our model grew in complexity and we dedicated more research staff capacity to it, we decided that it would be beneficial to publish updates more regularly. We published the first in this series of more frequent updates in May 2017, along with “release notes” (PDF) detailing the changes we made and the impact each had on our cost-effectiveness estimates.

We published five versions of our cost-effectiveness model in 2017. In 2018, we shifted from publishing PDF release notes to maintaining a “changelog”: a public page listing the changes made in each version of the model, which we update in tandem with the publication of each new version.

Internally, one staff member, Christian Smith, is now responsible for managing all changes to our cost-effectiveness model. He aims to publish a new version whenever there is a large, structurally complicated change to the model, or when several small and simple changes have accumulated. Our internal process prioritizes being able to track how each change to the model moves the bottom line.
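
As a hypothetical sketch of the kind of record such a process might keep (our actual changelog is a public web page, not code, and these fields are illustrative only):

  # Hypothetical per-change record; fields and values are illustrative only.
  from dataclasses import dataclass
  from typing import List

  @dataclass
  class ModelChange:
      version: int              # model version the change ships in
      description: str          # what changed and why
      affected_tabs: List[str]  # which parts of the model were touched
      bottom_line_effect: str   # a short note on how the change moved the bottom line

  example = ModelChange(
      version=9,
      description="Account for reduced malaria incidence among people living near SMC recipients",
      affected_tabs=["SMC"],
      bottom_line_effect="placeholder; see the published changelog for actual figures",
  )
  print(example.version, example.description)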

Changes we’ve published this year include updated inputs based on new research, such as the impact of insecticide resistance on the effectiveness of insecticide-treated nets; changes to which inputs we include in or exclude from the model altogether, such as removing short-term health benefits from deworming; and cosmetic changes to make the model easier to engage with, such as removing, from a particular tab, the adjustments that account for the influence of GiveWell’s top charities on other actors.

Why we moved to this approach

Although it involves uncertainty, GiveWell’s cost-effectiveness model is a core piece of our research work and an important input into our decisions about which charities to research and recommend. However, we believe it is challenging to engage with our model (to give a sense of scale, the current model has 16 tabs, some of which use over 100 rows) and to keep up with the changes we’ve made to it over time.

Our hope is that publishing more frequent and transparent updates brings us closer to our goal of intense transparency and of presenting a clear, vettable case for our recommendations to the public. It makes the magnitude of any given change’s impact on our bottom line clearer and makes the evolution of the model over time easier to track. We also expect it to reduce the likelihood of errors, since fewer elements are changed at any given time.

How to engage with updates to our model

We update our changelog, viewable here, when we publish a new version.

Going forward, we also plan to publish an announcement to our “Newly published GiveWell materials” email list when we do this. You can sign up to receive these email alerts here.

The post Publishing more frequent updates to our cost-effectiveness model appeared first on The GiveWell Blog.

Catherine

September 2018 open thread

6 years ago

Our goal with hosting quarterly open threads is to give blog readers an opportunity to publicly raise comments or questions about GiveWell or related topics (in the comments section below). As always, you’re also welcome to email us at info@givewell.org or to request a call with GiveWell staff if you have feedback or questions you’d prefer to discuss privately. We’ll try to respond promptly to questions or comments.

You can view our June 2018 open thread here.

The post September 2018 open thread appeared first on The GiveWell Blog.

Catherine

Allocation of discretionary funds from Q2 2018

6 years ago

Between April and June 2018, we received $1.2 million in funding to make grants at our discretion. In addition, GiveWell’s Board of Directors voted to allocate $2.9 million in unrestricted funds to making grants to recommended charities. In this post, we discuss:

  • The decision to allocate the $4.1 million to the Against Malaria Foundation (AMF) (70 percent) and the Schistosomiasis Control Initiative (SCI) (30 percent).
  • Our recommendation that donors give to GiveWell for granting to top charities at our discretion so that we can direct the funding to the top charity or charities with the most pressing funding need. For donors who prefer to give directly to our top charities, we continue to recommend giving 70 percent of your donation to AMF and 30 percent to SCI to maximize your impact.
  • Why we have allocated unrestricted funds to making grants to recommended charities.

Allocation of discretionary funds

The allocation of 70 percent of the funds to AMF and 30 percent to SCI follows the recommendation we have made, and continue to make, to donors. For more discussion on this allocation, see our blog post about allocating discretionary funds from the fourth quarter of 2017.

Each year, as part of our process for updating each top charity’s “room for more funding” summary, we ask the charity to provide details of how it would use additional funding. This year, we asked for this information by the end of July. We also ask each of our top charities to let us know if it encounters unexpected funding gaps at other times of the year. We have not learned of new funding gaps in the last quarter.

What is our recommendation to donors?

We continue to recommend that donors give to GiveWell for granting to top charities at our discretion so that we can direct the funding to the top charity or charities with the most pressing funding need. For donors who prefer to give directly to our top charities, we are continuing to recommend giving 70 percent of your donation to AMF and 30 percent to SCI to maximize your impact. The reasons for this recommendation are the same as in our Q4 2017 post on allocating discretionary funding.

We will complete a full analysis of our top charities’ funding gaps and cost-effectiveness by November and expect to update our recommendation to donors at that time.

Why we have allocated unrestricted funds to making grants to recommended charities

In June, GiveWell’s Board of Directors voted to allocate $2.9 million in unrestricted funds to making grants to recommended charities. We generally use unrestricted funds to support GiveWell’s operating costs. The decision was made to grant out some of the unrestricted funds we hold in accordance with two policies:

  • Our “excess assets” policy specifies that once we surpass a certain level of unrestricted assets, we grant out the excess rather than continue to hold it ourselves. We reviewed our unrestricted asset holdings and projected revenue and expenses for 2018-2020 and concluded that we held $1.8 million more than was required to give us a stable, predictable financial situation (details of how this rule is applied are at the previous link). The Board voted to irrevocably restrict this amount to making grants to recommended charities. Note that we continue to need ongoing donor support for our operations. This decision incorporates our projections for future donations.
  • In order to limit the risk of relying too heavily on any single source of revenue, we cap the amount of funding we will use from one source to support our operating costs at 20% of our projected annual expenses. In early 2018, we received a donation of $2.1 million in unrestricted funds. Our operating expense budget for 2018 is $4.9 million. Therefore, the Board voted to retain $1.0 million to support operating costs in 2018 and irrevocably restrict the remaining $1.1 million to making grants to recommended charities. (A brief worked example of this arithmetic follows this list.)
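
To make the arithmetic behind these two policies concrete, here is a brief worked example using only the figures quoted above; the rounding is approximate.

  # Worked arithmetic for the two policies above, using only figures quoted in
  # this post (amounts in millions of dollars; rounding is approximate).
  operating_budget_2018 = 4.9
  single_source_cap = 0.20 * operating_budget_2018  # ~0.98, i.e., roughly 1.0

  donation = 2.1
  retained_for_operations = 1.0                      # approximately the cap above
  granted_from_donation = donation - retained_for_operations  # ~1.1

  excess_assets_granted = 1.8                        # from the "excess assets" review
  total_granted = excess_assets_granted + granted_from_donation
  print(round(total_granted, 1))                     # 2.9, matching the $2.9 million above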

The post Allocation of discretionary funds from Q2 2018 appeared first on The GiveWell Blog.

Natalie Crispin