
How GiveWell’s research is evolving

5 years 5 months ago

To date, most of GiveWell’s research capacity has focused on finding the most impactful programs among those whose results can be rigorously measured. This work has led us to recommend, and direct several hundred million dollars to, charities improving health, saving lives, and increasing income in low-income countries.

One of the most important reasons we have focused on programs where robust measurement is possible is that this approach largely does not rely on subject-matter expertise. When Holden and I started GiveWell, neither of us had any experience in philanthropy, so we looked for charities that we could evaluate through data and evidence we could analyze ourselves, and make recommendations that we could fully explain. This led us to focus on organizations whose impacts were relatively easy to measure.

The output of this process is reflected in our current top charities and the programs they run, which are analyzed in our intervention reports.

GiveWell has now been doing research to find the best giving opportunities in global health and development for 11 years, and we plan to broaden the scope of the opportunities we consider. We will expand our research team and the range of areas we cover in order to determine whether there are giving opportunities in global health and development that are more cost-effective than those we have identified to date.

We expect this expansion of our work to take us in a number of new directions, some of which we have begun to explore over the past few years. We have considered, in a few cases, the impact our top and standout charities have through providing technical assistance (for example, Deworm the World and Project Healthy Children), supported work to change government policies through our Incubation Grants program (for example, grants to the Centre for Pesticide Suicide Prevention and Innovation in Government Initiative), and begun to explore areas like tobacco policy and lead paint elimination.

Over the next several years, we plan to consider everything that we believe could be among the most cost-effective (broadly defined) giving opportunities in global health and development. This includes more comprehensively reviewing direct interventions in sectors where impacts are more difficult to measure, investigating opportunities to influence government policy, and exploring other areas.

Making progress in areas where it is harder to determine causality will be challenging. In my opinion, we are excellent evaluators of empirical research, but we have yet to demonstrate the ability to make good judgments about giving opportunities when less empirical information is available. Our values, intellectual framework, culture, and the quality of our staff make me optimistic about our chances, but all of us at GiveWell recognize the difficulty of the project we are embarking on.

Our staff does not currently have the capacity or the capabilities to make enough progress in this direction, so we are planning to significantly increase the size of our staff. We have a research team of ten people, and we are planning to more than double in size over the next three years. We are planning to add some junior staff but are primarily aiming to hire people with relevant experience who can contribute as researchers and/or managers on our team.

GiveWell’s top charities list is not going to change dramatically in the near future, and it may always include the charities we recommend today. Our top charities achieve outstanding, cost-effective results, and we believe they are some of the best giving opportunities in global health and development. We expect to conclude that many of the opportunities we consider in areas that are new for us are less cost-effective than those we currently recommend, but we also think it is possible that we will identify some opportunities that are much more cost-effective. We believe it is worth a major effort to find out.

What areas will we look into?

As with any exploration into a new area, we expect the specifics of the work we will undertake to shift as we learn more. Below we discuss two major areas of work we are embarking on and building our team for currently. In the long term, we are open to considering making grants or recommendations in all areas of global health and development. We have not yet comprehensively considered what those areas might be, but they could include (for example) research and development, or social entrepreneurship.

Using reasoned judgment and less robust evidence to come to conclusions about additional direct-delivery interventions

In the past, we have often asked, “does this intervention meet our criteria?” rather than “what is our best guess about how promising this intervention is relative to our top charities?” Our intervention report on education is a good example of asking the question, “does this meet our criteria?” It reviews all randomized controlled trials of education programs that measure long-term outcomes, but it does not attempt to reach a bottom line about how cost-effective education in developing countries is.

We plan to more deeply explore how we can reach conclusions about how areas such as nutrition, agriculture, education, reproductive health, surgical interventions, mental health, and non-communicable diseases compare to our current top charities.

Investigating opportunities to improve government spending and influence government policy

Some of the areas we will consider exploring to leverage government resources and affect government policy are:

  • Public health regulation. Examples: Tobacco control; lead paint regulation; road traffic safety; air pollution regulation; micronutrient fortification and biofortification; sugar control; salt control; trans-fats control; legislation to reduce counterfeit drugs; soil pollution; pesticide regulation; occupational safety laws. Rationale: Some regulatory interventions to improve public health have had a large impact in high-income countries. Low-income countries can lack the government capacity or political will to implement these regulations. Charities can advocate or provide technical assistance to accelerate regulation and improve implementation.
  • Improving government program selection. Examples: Innovation in Government Initiative; Innovations for Poverty Action; IDInsight; Center for Effective Global Action. Rationale: Low-income country governments may not have the capabilities to select good programs to support with their limited budgets. Charities can directly assist governments to make better decisions in the short term, or help improve their capabilities to do so independently over the longer term.
  • Improving government implementation. Examples: Results for Development; Deworm the World in India. Rationale: Low-income countries may not have the capabilities to implement programs effectively. Charities can directly assist governments to improve the reach or quality of programs in the short term, or help improve their capabilities to do so independently over the longer term.
  • Improving non-programmatic government capabilities. Examples: Building State Capability. Rationale: Improving the administrative capabilities of a government can result in broad improvements in the way countries function.
  • Improved or increased aid spending. Examples: Center for Global Development; ONE Campaign; Overseas Development Institute; Brookings Institution. Rationale: Spending by high-income countries on global health and development accounts for a large portion of total spending in this area. There are groups who advocate for, and provide technical assistance to improve, aid spending.
  • Advocating for increased spending on highly cost-effective, direct-delivery programs. Examples: Malaria No More; Uniting to Combat Neglected Tropical Diseases. Rationale: GiveWell’s money moved is a small proportion of total global spending on aid. We believe these dollars would go further if a portion were redirected to the highly cost-effective, direct-delivery programs we recommend.
  • Increasing economic growth and redistribution. Examples: Charter cities; infrastructure programs; trade liberalization; macroeconomic policy; International Growth Centre; tax reform. Rationale: Economic growth is an important driver of economic well-being over the long term. Government policies can be an important determinant of the rate of economic growth and the degree to which growth translates into well-being for the population. There may be opportunities for charities to assist in promoting growth and better distributional outcomes.
  • Negative externalities of high-income country policies. Examples: Immigration reform; trade liberalization; reducing carbon emissions. Rationale: Governments of high-income countries are incentivized to select policies which are popular with their own voters. These policies can impose substantial costs on low-income countries. Charities can advocate for these policies to be changed.
  • Improving governance. Examples: Election monitoring; anti-corruption; good governance awards; term limits; peace programs. Rationale: There are particular characteristics of the governance of a country (e.g. democratic accountability, stability, human rights, lack of corruption) which are strongly associated with the well-being of its people. Charities can advocate for these characteristics to be adopted or strengthened.
  • Reducing the cost of health commodities. Examples: Clinton Health Access Initiative. Rationale: Reductions in the cost of medical commodities can result in improved coverage and improved economic well-being for low-income households.
  • Improving data collection. Examples: Institute for Health Metrics and Evaluation. Rationale: Improved data can be used by a variety of actors to make better decisions.
  • One-off big bets. Examples: Mosquito gene drives advocacy and research. Rationale: We may come across promising projects that do not fit neatly into one of the above categories.

How will our analysis change? How will it be the same?

Writing up and publishing the details of the reasoning behind the recommendations we make is a core part of GiveWell. We will remain fully transparent about our research.

Judgment calls that are not easily grounded in empirical data have long been a part of GiveWell’s research. For example, we make difficult, decision-relevant judgment calls about moral weights, interpreting conflicting evidence about deworming, and estimating the crowding-out and crowding-in effects of our donations on other actors (what we call leverage and funging).
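
To make the "leverage and funging" idea concrete, here is a minimal, purely hypothetical sketch; it is not GiveWell's actual model, and every number and parameter name below is invented. The basic intuition is that a donation's estimated impact gets discounted if it partly displaces money another funder would have given anyway, and gets credited if it draws in additional money from others.

    # Toy illustration of a leverage/funging adjustment (hypothetical; not GiveWell's model).
    def adjusted_impact(raw_impact_per_dollar,
                        prob_other_funder_fills_gap,    # chance the gap would have been filled anyway ("funging")
                        leveraged_dollars_per_dollar):  # extra dollars from others caused by each dollar given ("leverage")
        """Discount a donation's impact for funging and credit it for leverage."""
        funging_discount = 1 - prob_other_funder_fills_gap
        return raw_impact_per_dollar * funging_discount * (1 + leveraged_dollars_per_dollar)

    # With a 30% chance another funder would have filled the gap anyway, and $0.10
    # of additional funding leveraged per dollar donated:
    print(adjusted_impact(1.0, 0.30, 0.10))  # -> 0.77 units of impact per dollar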

As we move into areas where measuring outcomes and attributing causal impact is more difficult, we expect subjective judgments to play a larger role in our decision making. For examples of the approach we have taken to date, see our writeup of our recent recommendation for a grant to the Innovation in Government Initiative, a grantmaking entity within the Abdul Latif Jameel Poverty Action Lab (J-PAL), or our page evaluating phase I of our 2016 grant to Results for Development (R4D). While writing about such judgments will be a challenge of this work, we are fully committed to sharing what has led us to our decisions, with only limited exceptions for confidential or sensitive information.

What does this mean for staffing and organizational growth?

We need to grow our team to achieve our goals. Repeatedly this past year, we had to make the difficult choice to not take on a research project or investigate a grant opportunity that seemed promising because we did not have the capacity.

We are planning to roughly double our research team over the next few years, primarily by adding researchers who have experience and/or an academic background in global health and development. We are looking to add both individual contributors and research managers to the team. We expect that the people we hire in the next few years will play a critical role in shaping GiveWell’s future research agenda and will be some of the leaders of GiveWell in the future.

For more information about the research roles we’re hiring for, see our jobs.


Elie

Schedule a quick call to make giving easier

5 years 6 months ago

If you’re thinking about where to give to charity this year and it would be helpful to speak with a member of GiveWell’s staff about your decision, please let us know. We’re happy to answer questions sent to info@givewell.org or to schedule a call via the form here.

On a call, we’d be glad to:

  • Provide an overview of our recommendations. We know it can be time-consuming to read and digest all of the content on our website. We’re glad to share a quick summary of our top charities list.
  • Assist with the logistics of making a donation and discuss different options for donating, such as appreciated securities, checks, and wire transfers.
  • Answer any questions about our research or recommendations.

Due to limited staff capacity, it’s possible we won’t be able to speak with everyone who requests a call, although based on past experience we hope to be able to connect with anyone who gets in touch.

We look forward to hearing from you!


Catherine Hollander

December 2018 open thread

5 years 7 months ago

Our goal with hosting quarterly open threads is to give blog readers an opportunity to publicly raise comments or questions about GiveWell or related topics (in the comments section below). As always, you’re also welcome to email us at info@givewell.org or to request a call with GiveWell staff if you have feedback or questions you’d prefer to discuss privately. We’ll try to respond promptly to questions or comments.

You can view our September 2018 open thread here.


Catherine Hollander

Staff members’ personal donations for giving season 2018

5 years 7 months ago

For this post, GiveWell staff members wrote up the thinking behind their personal donations for the year. We made similar posts in previous years.[1] Staff are listed in order of their start dates at GiveWell.


Elie Hassenfeld

This year, I’m planning to donate to GiveWell for granting to top charities at its discretion.

I feel the same way I did last year, when I wrote, “GiveWell is currently producing the highest-quality research it ever has, which has led to more thoroughly researched, higher-quality recommendations that have been compared to more potential alternatives than ever before.”

I asked Holden Karnofsky, GiveWell’s co-founder, whether he thought there were promising opportunities for individuals with long-termist views; after checking with him, I believed that the Open Philanthropy Project and other donors were covering most of the opportunities I would find most promising.

I also considered giving to animal welfare organizations. I looked briefly at Animal Charity Evaluators’ research but ultimately didn’t feel like I had enough time to think through how their recommendations compared to giving to GiveWell, so I defaulted to GiveWell. I hope to give this more consideration in the future.

Natalie Crispin

I will be giving my annual gift to GiveWell for granting at its discretion to top charities. We expect that all of our top charities will be constrained by funding in the next year and that several will have unfunded opportunities to spend funds in highly cost-effective ways (at least 5 times as cost-effective as cash transfers). Our current best guess is that GiveWell will grant the funds it receives for granting at its discretion to Malaria Consortium, which would allow it to expand its work preventing child deaths from malaria in Nigeria or other countries. There is also a possibility that we will identify an opportunity that is more cost-effective than how Malaria Consortium would use funding at the current margin. Over the next few months, we will be discussing with our top charities how they plan to use funding from Good Ventures and other funders and what that means for how they would use additional funding. Giving to GiveWell for granting at its discretion allows for flexibility to take advantage of those opportunities.

I am very grateful for all the work, thoughtfulness, and hours of debate that my colleagues put into GiveWell’s recommendations this year. I am excited to support the most effective charities I know of.

Josh Rosenberg

I’m planning to give the same way that I did last year:

  • 80% to GiveWell for granting at its discretion to top charities. GiveWell’s top charities are the most cost-effective ways to help people that I know of. I see Malaria Consortium’s work on seasonal malaria chemoprevention (the current default option for discretionary funding) as a robust and highly effective giving opportunity.
  • 10% to animal welfare charities. I believe that animal welfare is a particularly important and neglected problem.
  • 10% to long-term future-oriented causes. I have not yet chosen a donation target in this cause area. If I do not find an opportunity I am satisfied with after a small amount of additional research, I will enter this portion of my giving into a donor lottery.

I focused most of my giving on global health and development since GiveWell’s top charities have the most pressing funding gaps I am aware of. If I knew of an especially strong case for a particular giving opportunity in another cause area, I would be open to changing my allocation in the future.

Devin Jacob

I plan on making approximately 80% of my charitable donations in 2018 to GiveWell, with 100% of that money allocated to GiveDirectly. Compared to my colleagues at GiveWell, I value near-term improvements in material well-being more than I value reducing deaths. Donating to GiveDirectly is the best means of supporting this goal that I know of.

I struggle each year when attempting to assess whether I should bet on the possible long-term income effects of deworming. To date I have been unable to convince myself I should make this bet, even though I find little to argue with in our work on the expected value of donations to charities implementing deworming programs. I am making a decision to ignore the difference in expected value between a donation to a deworming charity and a donation to GiveDirectly due to the greater certainty of impact via the latter. I think my approach to charitable giving is conservative relative to other staff at GiveWell and many of our donors but I also think that my approach is reasonable given my specific ethical commitments.

I also support other organizations with gifts each year. This year, approximately 10-15% of my giving will go to organizations that do not meet GiveWell’s criteria. These organizations work in a number of areas including:

  • Immigration policy, activism, and legal aid – International Refugee Assistance Project, RAICES, and the National Immigration Law Center
  • Nonprofit news – primarily CALmatters, the Center for Investigative Reporting, and ProPublica
  • Local issues I care about, such as transit infrastructure – e.g., Bike East Bay
  • Other political causes

I choose to keep the political contributions I make private as some of the causes I support are controversial and I would not want my political beliefs to have any potential impact on GiveWell’s work.

In the course of my day-to-day work duties at GiveWell, I also frequently make small donations to our charities when testing various payment platforms. To date, these donations account for approximately 5-10% of my remaining planned gifts in 2018. These gifts are distributed among our recommended and standout charities haphazardly. I could refund these transactions, but choose not to do that as I think all of our recommended charities do excellent work and I am happy to support them.

Catherine Hollander

I plan to give 75% of my total charitable giving to Malaria Consortium’s seasonal malaria chemoprevention program. I value averting deaths quite highly and I believe, based on GiveWell’s assessment, that contributing toward filling Malaria Consortium’s funding gap will accomplish a lot of good in the world. In previous years (2017, 2016, and 2015), the majority of my gift has been directed to the Against Malaria Foundation (AMF), but I believe Malaria Consortium currently has a more pressing funding gap for its seasonal malaria chemoprevention work.

I plan to give 10% of my total giving to AMF to continue their work. I understand that giving predictably is helpful for organizations’ planning and I don’t wish to abruptly alter my support for AMF. I also think that AMF continues to represent an outstanding giving opportunity as one of GiveWell’s top charities.

I plan to give 5% of my total giving to StrongMinds, an organization focused on treating depression in Africa. I have not vetted this organization anywhere near as closely as GiveWell’s top charities have been vetted, though I understand that a number of people in the effective altruism community have a positive view of StrongMinds within the cause area of mental health (I don’t have any reason to think it is more cost-effective than GiveWell’s top charities). Intuitively, I believe mental health is an important cause area for donors to consider, and although we do not have GiveWell recommendations in this space, I would like to learn more about this area by making a relatively small donation to an organization that focuses on it.

I plan to give the remaining 10% of my charitable giving this year in conjunction with my partner to an organization working on criminal justice reform in the United States. We are going to discuss and review organizations together between now and the end of the year and make a joint gift in this space. I plan to consult previous recommendations made by the Open Philanthropy Project’s program officer focused on criminal justice reform, Chloe Cockburn, as well as to check with friends who are better informed about the needs in this space than I am.

Andrew Martin

I think there’s a strong case for donating to GiveWell to grant to top charities at its discretion this year.

Our top charities have substantial funding gaps for highly cost-effective programs, even after taking into account the $63.2 million that we’ve recommended Good Ventures allocate among our top charities. These funding gaps include expanding Malaria Consortium’s work on seasonal malaria chemoprevention in Nigeria, Chad, and Burkina Faso; extending HKI’s vitamin A supplementation programs in several countries over the next three years; and extending Deworm the World’s programs in Pakistan and Nigeria.

As Natalie and James have noted, it seems likely that donations given to GiveWell at the end of 2018 to allocate at its discretion will be directed to Malaria Consortium’s seasonal malaria chemoprevention program. I’m planning to donate to GiveWell to allocate at its discretion because I expect that GiveWell will either direct those funds to Malaria Consortium or to another funding gap it judges to be even more valuable to fill.

Christian Smith

I’m planning to make my year-end donation to Malaria Consortium for its seasonal malaria chemoprevention (SMC) program. As my colleagues have mentioned, Malaria Consortium appears to be in a great position to scale up a highly effective intervention in areas with substantial malaria burdens.

I decided not to give to GiveWell for granting at its discretion because I think there’s a chance GiveWell will decide deworming programs look more worthwhile than SMC on the margin. I take a more skeptical stance than most of my colleagues on the value of deworming programs. While I’m not confident, I would guess that our process for modeling the value of deworming relative to malaria prevention puts deworming in too favorable a light.

Isabel Arjmand

My giving this year looks very similar to last year’s. It’s important to me for the bulk of my giving to go to organizations where I’m confident that my donation will have a substantial impact, and I don’t know of any giving opportunities in that vein that are as strong as GiveWell’s top charities. Each year I also give to a handful of other organizations, some in international development and others operating in the United States. I intend each of those donations to be large enough to be meaningful to me and to signal support for these programs, while still leaving the vast majority for GiveWell-recommended charities. In all, 80% of my charitable budget is going to GiveWell’s top charities and 20% to other causes, which is the same as my donation last year.

I’m giving 75% of my total year-end donation to grants to recommended charities at GiveWell’s discretion. I strongly considered designating my donation to Malaria Consortium’s seasonal malaria chemoprevention (SMC) program instead. I’m very excited about Malaria Consortium’s opportunity to provide SMC in Nigeria; I’ve been particularly impressed by Malaria Consortium as an organization over the past year; and I have more confidence in SMC as an intervention than I do in some others. It’s hard for me to imagine preferring for my donation to go elsewhere when it’s time for GiveWell to grant out its discretionary funding from the fourth quarter of 2018. But I believe that if GiveWell does decide to give the next round of discretionary funding elsewhere, I’m more likely than not to agree with that decision. I hold this belief in part because my moral weights and overall outputs in our cost-effectiveness analysis are quite similar to those of the median staff member, and while I’m concerned about the evidence for deworming, I think that concern is adequately reflected in my cost-effectiveness analysis inputs.

An additional 5% of my donation will go to GiveDirectly. I look forward to continuing to follow the work they do, particularly their cash benchmarking project, their work with refugees, and their continual research to improve the effectiveness of their programs.

I plan to distribute the remaining 20% of my donation across the following organizations:

  • International Refugee Assistance Project, which advocates for refugees and displaced people with a focus on those from the Middle East.
  • StrongMinds, which is the most promising organization I know of focused on mental health in low- and middle-income countries.
  • Planned Parenthood Action Fund, which takes a comprehensive, intersectional view of women’s health and reproductive justice.
  • Cool Earth, which works with local communities to protect rainforests and reduce carbon dioxide emissions.

As I wrote last year, I’d be somewhat surprised if these organizations were as cost-effective as GiveWell’s top charities, and I haven’t vetted them with an intensity that comes anywhere close to the rigor of GiveWell’s evaluations. I choose to support these programs in order to promote more justice-focused causes, further my own civic engagement, and signal support for work I think is important.

I also make small donations throughout the year to grassroots organizations working in the Bay Area like Causa Justa :: Just Cause, Initiate Justice, and the Sogorea Te Land Trust. These donations, which are motivated primarily by community engagement and relationship-building, come out of my personal discretionary spending, rather than what I budget for charitable giving.

As always, I’m grateful for the thoughtfulness of my colleagues, the work that went into producing this year’s recommendations, and the conversations we’ve had that have informed my own giving.

James Snowden

I’m planning to donate to GiveWell for allocating funds at its discretion because (i) I prefer GiveWell to have the flexibility to react to new information, and (ii) in the absence of new information, I expect additional funds will be allocated to Malaria Consortium, the charity I would have given to. I expect Malaria Consortium would use those funds to scale up seasonal malaria chemoprevention in Nigeria, Chad, and Burkina Faso. According to the Global Burden of Disease, Nigeria has the most deaths from malaria of any country, and Burkina Faso has the highest per-capita rate of deaths from malaria. This drives my view that donations to Malaria Consortium are likely to be more cost-effective than donations to the Against Malaria Foundation, which sometimes distributes nets in countries with a lower malaria burden.

I may also continue to give a smaller proportion of my donations to organizations working on improving animal welfare or focused on the long-term future, but I haven’t yet decided whether to do so, or where to give.

Dan Brown

I will give 75% of my 2018 charity donation to GiveWell to allocate to recommended charities at its discretion. This is my first year working for GiveWell and I’ve been very impressed with the quality of work that goes into our recommendations. My moral values seem to be quite close to the median values across staff members in our cost-effectiveness analysis, and so I see no reason to deviate from GiveWell’s choice on that basis. As Natalie and James note, our best guess is that these funds will be allocated to Malaria Consortium to scale up its seasonal malaria chemoprevention programs.

I will give 15% of my donation to No Means No Worldwide, a global rape prevention organisation. I spent a reasonable amount of time during my PhD researching gender-based violence. This encouraged me to donate to an organisation tackling sexual violence, particularly because the frequency of sexual violence globally is staggering. I have not vetted No Means No Worldwide with anything like the rigor of a GiveWell evaluation, but I have been impressed by what I have read so far (e.g. they are evaluating their program using RCTs, and I like that part of their approach is to promote positive masculinity amongst boys).

I will give 6% of my donation to Stonewall (UK), an organisation tackling discrimination against LGBT people. Whilst I have focused most of my donation on global health and development, I would also like to support a more justice-focused cause. I have fairly limited information with which to choose amongst charities in this area as I’m not aware of a GiveWell-type organisation to help direct my donation. However, I would like to see more done to tackle homophobia in sport, and the main organisation I am aware of that has tried to do this is Stonewall (UK) (through its Rainbow Laces campaign).

I will give the remaining 4% of my donation to Afrinspire. I have donated to this charity for a number of years. To my knowledge, the money I donate is used to help pay for school costs for orphaned children in Kampala (through the Jaguza Initiative). I do not expect this to be as cost-effective as other charitable giving opportunities, but I do not think it would be responsible to unexpectedly decrease this donation now that I am paying more attention personally to cost-effectiveness.

Olivia Larsen

This year, I plan to give 95% of my year-end donation to GiveWell for granting at its discretion. This is my first year working at GiveWell full-time, and it will be my first time contributing to GiveWell’s discretionary fund.

In previous years, I have chosen to support specific top charities among GiveWell’s recommendations. Knowing which charity I was supporting in advance of my donation helped me more clearly conceptualize the impact I was making. Since starting at GiveWell, however, I’ve seen the level of detail and thought that the research team puts into analyzing each top charity’s funding gaps and identifying where a marginal dollar will have the largest impact. I’m convinced that the additional good associated with GiveWell being able to adapt to additional information and allocate my donation to the highest-impact charity we see when the grants are disbursed outweighs my desire to know where my donation will go ahead of time.

I also expect to allocate 5% of my year-end donation to helping factory-farmed animals. This will be my first donation to an animal-focused charity, and it is a decision I went back and forth on. I believe that animals suffer, and I believe that I should act to alleviate that suffering; for example, by not eating animal products. Due to the scale and intensity of factory farming and the neglectedness of the cause, I think it’s reasonable that interventions there might be orders of magnitude more cost-effective at averting the suffering of animals than GiveWell’s charities are at averting the suffering of humans. However, I’m very uncertain about how to compare helping animals to helping humans. I’m uncomfortable with the idea of allowing a human to suffer, even if I can alleviate the suffering of many animals with the same donation. I haven’t fully engaged with this discomfort yet, but I’m planning to make a donation targeted at helping animals this year to help me both clarify my own values and learn more about the effective animal advocacy space. I haven’t yet decided how to allocate this donation, but I expect that I’ll either donate to the Animal Welfare Fund through Effective Altruism Funds or outsource the decision to a trusted friend who knows more about effective animal advocacy than I do.

Amar Radia

This year, I plan to give 75% of my donations to GiveWell to allocate at its discretion. I believe that this will ensure that my donations go the furthest in global health and development. In previous years, I have given either to one of GiveWell’s top charities or to the Global Health Effective Altruism fund. This year, my greater understanding of the advantages of allowing my donations to be channelled at GiveWell’s discretion, coupled with my U.S. taxpayer status, has led me to prefer giving to GiveWell for regranting.

I plan to give the remaining 25% of my donations to an organization working on animal welfare but have not yet decided which one. It will likely be one of Animal Charity Evaluators’ top charities, and I expect to rely on the advice of a friend who has thought about effective animal charities far more than I have. I also considered giving some money to organizations focusing on the long-term future, but my view is that these organizations are not funding-constrained.

Notes

1. See our staff giving posts from 2017, 2016, 2015, 2014, and 2013.


Catherine Hollander

We’ve added more options for cryptocurrency donors

5 years 7 months ago

We’ve updated our donations processing to better meet the needs of those who want to give via cryptocurrencies. Last year, after we began to accept Bitcoin, we received over $290,000 in Bitcoin donations.

By allowing more types of cryptocurrency donations, we’re enabling donors to realize tax deductions and to contribute more funding to their chosen charity based on gains in the cryptocurrencies they hold.

We’re now accepting donations in the following cryptocurrencies:

  • Bitcoin (BTC)
  • Bitcoin Cash (BCH)
  • Ethereum (ETH)
  • Ethereum Classic (ETC)
  • Litecoin (LTC)
  • 0x (ZRX)

We’ve built different pages for donating cryptocurrency based on where you’d like to direct your support: our top charities, our standout charities, or GiveWell’s operating expenses.

If you have any questions or would like to donate in a currency not listed above, please reach out to us at donations@givewell.org.

If you have questions about the different options for directing your donation (top charities, standout charities, or operating expenses), please let us know.


Ben Bateman

Response to concerns about GiveWell’s spillovers analysis

5 years 7 months ago

Last week, we published an updated analysis on “spillover” effects of GiveDirectly’s cash transfer program: i.e., effects that cash transfers may have on people who don’t receive cash transfers but who live near those who do. [Footnote 1: For more context on this topic, see our May 2018 blog post.] We concluded: “[O]ur best guess is that negative or positive spillover effects of cash are minimal on net.” (More)

Economist Berk Özler posted a series of tweets expressing concern over GiveWell’s research process for this report. We understood his major questions to be:

  1. Why did GiveWell publish its analysis on spillover effects before a key study it relied on was public? Is this consistent with GiveWell’s commitment to transparency? Has GiveWell done this in other cases?
  2. Why did GiveWell place little weight on some papers in its analysis of spillover effects?
  3. Why did GiveWell’s analysis of spillovers focus on effects on consumption? Does this imply that GiveWell does not value effects on other outcomes?

These questions apply to GiveWell’s research process generally, not just our spillovers analysis, so the discussion below addresses topics such as:

  • When do our recommendations rely on private information, and why?
  • How do we decide on which evidence to review in our analyses of charities’ impact?
  • How do we decide which outcomes to include in our cost-effectiveness analyses?

Finally, this feedback led us to realize a communication mistake we made: our initial report did not communicate as clearly as it should have that we were specifically estimating spillovers of GiveDirectly’s current program, not commenting on spillovers of cash transfers in general. We will now revise the report to clarify this.

Note: It may be difficult to follow some of the details of this post without having read our report on the spillover effects of GiveDirectly’s cash transfers.

Summary

In brief, our responses to Özler’s questions are:

  • Why did GiveWell publish its analysis on spillover effects before a key paper it relied on was public? One of our major goals is to allocate money to charities as effectively as possible. Sometimes, research we learn about cannot yet be made public but we believe it should affect our recommendations. In these cases, we incorporate the private information into our recommendations and we are explicit about how it is affecting our views. We expect that private results may be more likely to change but nonetheless believe that they contain useful information; we believe ignoring such results because they are private would lead us to reach less accurate conclusions. For another recent example of an important conclusion that relied on private results, see our update on the preliminary (private) results from a study on No Lean Season, which was key to the decision to remove No Lean Season as a top charity in 2018. We discuss other examples below.
  • Why did GiveWell place little weight on some papers in its analysis of spillover effects? In general, our analyses aim to estimate the impact of programs as implemented by particular charities. The goal of our spillovers analysis is to make our best guess about the size of spillover effects caused by GiveDirectly’s programs in Kenya, Uganda, and Rwanda. We are not trying to communicate an opinion on the size of spillover effects of cash transfers in other countries or in development economics more broadly. Therefore, our analysis places substantially more weight on studies that are most similar to GiveDirectly’s program on basic characteristics such as geographic location and program type. Correspondingly, we place little weight on papers that do not meet these criteria. However, we’d welcome additional information that would help us improve our future decisionmaking about which papers to put the most weight on in our analyses.
  • Why did GiveWell’s analysis of spillovers focus on effects on consumption? Our cost-effectiveness models focus on key outcomes that we expect to drive the bulk of the welfare effects of a program. In the case of our spillovers analysis, we believe the two most relevant outcomes for estimating spillover effects on welfare are consumption and subjective well-being. We chose to focus on consumption effects in large part because (a) this is consistent with how we model the impacts of other programs, such as deworming, and (b) distinguishing effects on subjective well-being from effects on consumption in a way that avoids double-counting benefits was too complex to do in the time we had available. It is possible that additional work on subjective well-being measures would meaningfully change how we assess benefits of programs (for this program and potentially others). This is a question we plan to return to in the future.

As noted above, our current best guess is that negative or positive spillover effects of GiveDirectly’s cash transfers are minimal on net. However, we emphasize that our conclusion at this point is very tentative, and we hope to update our views next year if there is more public discussion or research on the areas of uncertainty highlighted in our analysis and/or if public debate about the studies covered in our report raises major issues we had not previously considered.

Details follow.

Why did GiveWell publish its analysis on spillover effects before a key paper it relied on was public?

In our analysis of the spillover effects of GiveDirectly’s cash transfer program, we place substantial weight on GiveDirectly’s “general equilibrium” (GE) study (as we noted we would do in May 2018, prior to seeing the study’s results) because: [Footnote 2: “We plan to reassess the cash transfer evidence base and provide our updated conclusions in the next several months (by November 2018 at the latest). One reason that we do not plan to provide a comprehensive update sooner is that we expect upcoming midline results from GiveDirectly’s ‘general equilibrium’ study, a large and high-quality study explicitly designed to estimate spillover effects, will play a major role in our conclusions. Results from this study are expected to be released in the next few months.” (More.)]

  • it is the study with the largest sample size,
  • its methodology was designed to estimate both across-village and within-village spillover effects, and
  • it is a direct study of a version of GiveDirectly’s program.

The details of this study are currently private, though we were able to share the headline results and methodology when we published our report.

This represents one example of a general policy we follow, which is to be willing to compromise to some degree on transparency in order to use the best information available to us to improve the quality of our recommendations. More on the reasoning behind this policy:

  • Since our recommendations affect the allocation of over $100 million each year, the value of improving our recommendations by factoring in the best information (even if private) can be high. Every November we publish updates to our recommended charities so that donors giving in December and January (when the bulk of charitable giving occurs) can act on the most up-to-date information.
  • We have ongoing communications with charities and researchers to learn about new information that could affect our recommendations. Private information (both positive and negative) has been important to our views on a number of occasions. Beyond the example of our spillovers analysis, early private results were key to our views on topics including:
    • No Lean Season in 2018 (negative result) [Footnote 3: “In a preliminary analysis shared with GiveWell in September 2018, the researchers did not find evidence for a negative or positive impact on migration, and found no statistically significant impact on income and consumption.” (More.)]
    • Deworming in 2017 (positive result) [Footnote 4: “We have seen preliminary, confidential results from a 15-year follow-up to Miguel and Kremer 2004. We are not yet able to discuss the results in detail, but they are broadly consistent with the findings from the 10-year follow-up analyzed in Baird et al. 2016.” (More.)]
    • Insecticide resistance in 2016 (modeling study) [Footnote 5: “We have seen two modeling studies which model clinical malaria outcomes in areas with ITN coverage for different levels of resistance based on experimental hut trial data. Of these two studies, the most recent study we have seen is unpublished (it was shared with us privately), but we prefer it because the insecticide resistance data it draws from is more recent and more comprehensive.” (More.)]
    • Development Media International in 2015 (negative result) [Footnote 6: “The preliminary endline results did not find any effect of DMI’s program on child mortality (it was powered to detect a reduction of 15% or more), and it found substantially less effect on behavior change than was found at midline. We cannot publicly discuss the details of the endline results we have seen, because they are not yet finalised and because the finalised results will be embargoed prior to publication, but we have informally incorporated the results into our view of DMI’s program effectiveness.” (More.)]
    • Living Goods in 2014 (positive result) [Footnote 7: “The researchers have published an abstract on the study, and shared a more in-depth report with us. The more in-depth report is not yet cleared for publication because the authors are seeking publication in an academic journal.” (More.)]
  • Note that in all of the above cases we worked with the relevant researchers to get permission to publicly share basic information about the results we were relying on, as we did in the case of the GE study.
  • In all cases, we expected that full results would be made public in the future. Our understanding is that oftentimes early headline results from studies can be shared publicly while it may take substantially longer to publicly release full working papers because working papers are time-intensive to produce. We would be more hesitant to rely on a study that has been private for an unusually long period of time unless there were a good reason for it.
  • However, relying on private studies conflicts to some extent with our goal to be transparent. In particular, we believe two major downsides of our policy with respect to private information are (a) early private results are more likely to contain errors, and (b) we are not able to benefit from public scrutiny and discussion of the research. We would have ideally seen a robust public discussion of the GE study before we released our recommendations in November, but the timeline for the public release of GE study results did not allow that. We look forward to closely following the public debate in the future and plan to update our views based on what we learn.
  • Despite these limitations, we have generally found early, private results to be predictive of final, public results. This, combined with the fact that we believe private results have improved our recommendations on a number of occasions, leads us to believe that the benefits of our current policy on using private information outweigh the costs.

A few other notes:

  • Although we provide a number of cases above in which we relied on private information, the vast majority of the key information we rely on for our charity recommendations is public.
  • When private information is shared with us that implies a positive update about a charity’s program, we try to be especially attentive about potential conflicts of interest. In this case, there is potential for concern because the GE study was co-authored by Paul Niehaus, Chairman of GiveDirectly. We chose not to substantially limit the weight we place on the GE study because (a) a detailed pre-analysis plan was submitted for this study, and (b) three of the four co-authors (Ted Miguel, Johannes Haushofer, and Michael Walker) do not have an affiliation with GiveDirectly. We have no reason to believe that GiveDirectly’s involvement altered the analysis undertaken. In addition, the GE study team informed us that Paul Niehaus recused himself from final decisions about what the team communicated to GiveWell.
  • When we published our report (about one week ago), we expected that some additional analysis from the GE study would be shared publicly soon (which we still expect). We do not yet have an exact date and do not know precisely what content will be shared (though we expect it to be similar to what was shared with us privately).

Why did GiveWell place little weight on some papers in its analysis of spillover effects?

Some general context on GiveWell’s research that we think is useful for understanding our approach in this case is:

  • We are typically estimating the impact of programs as implemented by particular charities, not aiming to publish formal meta-analyses about program areas as a whole. As noted above, we believe we should have communicated more clearly about this in our original report on spillovers and we will revise the report to clarify.
  • We focus our limited time on the research that we think is most likely to affect our decisions, so our style of analysis is often different from what is typically seen in academia. (We think the difference in the kind of work we do is captured well by a relevant Rachel Glennerster blog post.)

Consistent with the above, the goal of our spillovers analysis was to make a best guess about the size of the spillover effects of GiveDirectly’s (GD’s) program in Kenya, Uganda, and Rwanda specifically. [Footnote 8: This program provides $1,000 unconditional transfers and treats almost all households within target villages in Kenya and Uganda (though it still treats only eligible households in Rwanda).] We are not trying to communicate an opinion on the size of spillover effects of cash transfers in other countries or in development economics more broadly. If we were trying to do the latter, we would have considered a much wider range of literature.

We expect that studies that are most similar to GD’s program on basic characteristics such as geographic location and program type will be most useful for predicting spillovers in the GD context. So, we prioritize looking at studies that 1) took place in sub-Saharan Africa, and 2) evaluate unconditional cash transfer programs (further explanation in the footnote). [Footnote 9: On (1): Our understanding is that the nature and size of spillover effects are likely to be highly dependent on the context studied, for example because the extent to which village economies are integrated might differ substantially across contexts (e.g., how close households are to larger markets outside of the village in which they live, how easily goods can be transported, etc.). On (2): We expect that providing cash transfers conditional on behavioral choices is a fairly different intervention from providing unconditional cash transfers, and so may have different spillover effects.] We would welcome additional engagement on this topic: that is, (a) to what extent should we believe that effects estimated in studies not meeting these criteria would apply to GD’s cash transfer programs, and (b) are there other criteria that we should have used?

A further factor that causes us to put more weight on the five studies we chose to review deeply is that they all study transfers distributed by GD, which we see as increasing their relevance to GD’s current work (though the specifics of the programs that were studied vary from GD’s current program). We believe that studies that do not meet the above criteria could affect our views on spillovers of GD’s program to some extent, but they would receive lower weight in our conclusions since they are less directly relevant to GD’s program.

We saw further review of studies that did not meet the above criteria as a lower priority than a number of other analyses that we think would be more likely to shift our bottom-line estimate of the spillovers of GD’s program. Even though we focused on the subset of studies most relevant to GD’s program, we were not able to combine their results into a reasonable explicit model of spillover effects because we found that key questions were not answered by the available data (our attempt at an explicit model is in the following footnote). [Footnote 10: We tried to create such an explicit model here (explanation here).] One fundamental challenge is that we are trying to apply estimates of “within-village” spillover effects to predict across-village spillover effects. [Footnote 11: GiveDirectly treats almost all households within target villages in Kenya and Uganda (though it still treats only eligible households in Rwanda).] Additional complications are described here.
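
As a rough illustration of what such an explicit model attempts (and why the missing inputs matter), here is a minimal, purely hypothetical sketch; it is not the model linked in the footnote above, and every number in it is invented.

    # Toy spillover model (hypothetical; not GiveWell's actual analysis).
    def net_effect_per_transfer(direct_effect,                  # consumption gain per recipient household
                                spillover_per_household,        # gain or loss per nearby non-recipient household
                                nonrecipients_per_recipient):   # how many non-recipient households count as "nearby"
        """Net benefit of one transfer once spillover effects are counted."""
        return direct_effect + spillover_per_household * nonrecipients_per_recipient

    # Even a small per-household spillover matters if each recipient has many
    # non-recipient neighbors, and its sign can flip the bottom line -- which is
    # why the within-village vs. across-village distinction is so important.
    print(net_effect_per_transfer(300, -10, 5))  # -> 250
    print(net_effect_per_transfer(300, 10, 5))   # -> 350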

More on why we placed little weight on particular studies that Özler highlighted in his comments: [Footnote 12: Note on terminology: In our spillovers analysis report, we talk about studies in terms of “inclusion” and “exclusion.” We may use the term “exclude” differently than it is sometimes used in, e.g., academic meta-analyses. When we say that we have excluded studies, we have typically lightly reviewed their results and placed little weight on them in our conclusions. We did not ignore them entirely, as may happen for papers excluded from an academic meta-analysis. To try to clarify this, in this blog post we have used the term “place little weight.” We will try to be attentive to this in future research that we publish.]

  • We placed little weight on the following papers in our initial analysis for the reasons given in parentheses: Angelucci & De Giorgi 2009 (conditional transfers, study took place in Mexico), Cunha et al. 2017 (study took place in Mexico), Filmer et al. 2018 (conditional transfers, study took place in the Philippines), and Baird, de Hoop, and Özler 2013 (mix of conditional and unconditional transfers).
  • In addition, the estimates of mental health effects on teenage schoolgirls in Baird, de Hoop, and Özler 2013 seem like they would be relatively less useful for predicting the impacts of spillovers from cash transfers given to households, particularly in villages where almost all households receive transfers, as is often the case in GD’s program.[13]
Why did GiveWell’s analysis of spillovers focus on effects on consumption? Does this imply that GiveWell does not value effects on other outcomes?

Some general context on GiveWell’s research that we think is useful for understanding our approach in this case:

  • When modeling the cost-effectiveness of any program, there are typically a large number of outcomes that could be included in the model. In our analyses, we focus on the key outcomes that we expect to drive the bulk of the welfare effects of a program.
  • For example, our core cost-effectiveness model primarily considers various programs’ effects on averting deaths and increasing consumption (either immediately or later in life). This means that, e.g., we do not include benefits of averting vision impairment in our cost-effectiveness model for vitamin A supplementation (in part because we expect those effects to be relatively small as a portion of the overall impact of the program).
  • This does not mean that we think excluded outcomes are unimportant. We focus on the largest impacts of programs because (a) we think they are a good proxy for the overall impact of the relevant programs, and (b) having fewer outcomes simplifies our analysis, which leads to less potential for error, better comparability between programs, and a more manageable time investment in modeling.
  • For a deeper assessment of which program impacts we include in or exclude from our core cost-effectiveness model and why, see our model’s “Inclusion/exclusion” sheet.[14] We aim to include outcomes that are justified by evidence, can feasibly be modeled, and are consistent with how we handle other program outcomes. We revisit our list of excluded outcomes periodically to assess whether such outcomes could lead to a major shift in our cost-effectiveness estimate for a particular program.

In our spillovers analysis, we applied the above principles to try to identify the key welfare effects. Among the five main studies we reviewed on spillovers, the two most relevant outcomes appear to be consumption and subjective well-being. We chose to focus on consumption for the following reasons:

  • Assessing the effects of cash transfers on consumption (rather than subjective well-being) is consistent with how we model the welfare effects of other programs that we think increase consumption on expectation, such as deworming.
  • Distinguishing effects on subjective well-being from effects on consumption in order to avoid double-counting benefits was too complex to do in the time we had available. It seems intuitively likely that standards of living (proxied by consumption) affect subjective well-being. In the Haushofer and Shapiro studies and in the GE study, the spillover effects act in the same direction for both consumption and subjective well-being. We do not think it would be appropriate to simply add subjective well-being effects into our model over and above effects on consumption, since that risks double-counting benefits (a stylized sketch of this concern follows this list).
  • We do not have a strong argument that consumption is a more robust proxy for “true well-being” than subjective well-being, but given that consumption effects can be more easily compared across our programs we have chosen it as the default option at this point.
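To make the double-counting concern above concrete, here is a stylized decomposition. It is our own illustration rather than anything in GiveWell’s model; the symbols and functional form are assumptions chosen only to show why adding both outcomes could overstate benefits.

```latex
% Stylized decomposition (our illustration; not GiveWell's model).
% Suppose reported subjective well-being S depends on consumption c plus
% influences unrelated to consumption, \varepsilon:
S \approx \alpha + \beta \ln c + \varepsilon
% A spillover that changes consumption then moves both measured outcomes:
\Delta S \approx \beta \, \Delta \ln c + \Delta \varepsilon
% Valuing \Delta \ln c directly and then also adding the full \Delta S
% counts the \beta \, \Delta \ln c component twice; only \Delta \varepsilon
% (well-being changes not mediated by consumption) would be additional.
```

Under this framing, the finding that consumption and subjective well-being spillovers move in the same direction is what one would expect if much of the measured well-being effect is mediated by consumption.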

We hope to broadly revisit in the future whether we should be placing more weight on measures of subjective well-being across programs. It is possible that additional work on subjective well-being measures would meaningfully change how we assess benefits of programs (for this program and potentially others).

Examples of our questions about how to interpret subjective well-being effects in the cash spillovers literature include:

  • In the Haushofer and Shapiro studies, how should we interpret each of the underlying components of the subjective well-being indices? For example, how does self-reported life satisfaction, as opposed to self-reported happiness, map onto utility?
  • In Haushofer, Reisinger, & Shapiro 2015, there is a statistically significant negative spillover effect on life-satisfaction, but there are no statistically significant effects on happiness, depression, stress, cortisol levels or the overall subjective well-being index (column (4) of Table 1). How should we interpret these findings?
Next steps
  • We hope that there is more public discussion on some of the policy-relevant questions we highlighted in our report and on the other points of uncertainty highlighted throughout this post. Our conclusions on spillovers are very tentative and could be affected substantially by more analysis, so we would greatly appreciate any feedback or pointers to relevant work.[15]
  • We are planning to follow up with Dr. Özler to better understand his views on spillover effects of cash transfers. We have appreciated his previous blog posts on this topic and want to ensure we are getting multiple perspectives on the relevant issues.

Notes

1. For more context on this topic, see our May 2018 blog post.
2. “We plan to reassess the cash transfer evidence base and provide our updated conclusions in the next several months (by November 2018 at the latest). One reason that we do not plan to provide a comprehensive update sooner is that we expect upcoming midline results from GiveDirectly’s “general equilibrium” study, a large and high-quality study explicitly designed to estimate spillover effects, will play a major role in our conclusions. Results from this study are expected to be released in the next few months.” (More.)
3. “In a preliminary analysis shared with GiveWell in September 2018, the researchers did not find evidence for a negative or positive impact on migration, and found no statistically significant impact on income and consumption.” (More.)
4. “We have seen preliminary, confidential results from a 15-year follow-up to Miguel and Kremer 2004. We are not yet able to discuss the results in detail, but they are broadly consistent with the findings from the 10-year follow-up analyzed in Baird et al. 2016.” (More.)
5. “We have seen two modeling studies which model clinical malaria outcomes in areas with ITN coverage for different levels of resistance based on experimental hut trial data. Of these two studies, the most recent study we have seen is unpublished (it was shared with us privately), but we prefer it because the insecticide resistance data it draws from is more recent and more comprehensive.” (More.)
6. “The preliminary endline results did not find any effect of DMI’s program on child mortality (it was powered to detect a reduction of 15% or more), and it found substantially less effect on behavior change than was found at midline. We cannot publicly discuss the details of the endline results we have seen, because they are not yet finalised and because the finalised results will be embargoed prior to publication, but we have informally incorporated the results into our view of DMI’s program effectiveness.” (More.)
7. “The researchers have published an abstract on the study, and shared a more in-depth report with us. The more in-depth report is not yet cleared for publication because the authors are seeking publication in an academic journal.” (More.)
8. This program provides $1,000 unconditional transfers and treats almost all households within target villages in Kenya and Uganda (though still treats only eligible households in Rwanda).
9. On (1): Our understanding is that the nature and size of spillover effects is likely to be highly dependent on the context studied, for example because the extent to which village economies are integrated might differ substantially across contexts (e.g. how close households are to larger markets outside of the village in which they live, how easily goods can be transported, etc.). On (2): We expect that providing cash transfers conditional on behavioral choices is a fairly different intervention from providing unconditional cash transfers, and so may have different spillover effects.
10. We tried to create such an explicit model here (explanation here).
11. GiveDirectly treats almost all households within target villages in Kenya and Uganda (though still treats only eligible households in Rwanda).
12. Note on terminology: In our spillovers analysis report, we talk about studies in terms of “inclusion” and “exclusion.” We may use the term “exclude” differently than it is sometimes used in, e.g., academic meta-analyses. When we say that we have excluded studies, we have typically lightly reviewed their results and placed little weight on them in our conclusions. We did not ignore them entirely, as may happen for papers excluded from an academic meta-analysis. To try to clarify this, in this blog post we have used the term “place little weight.” We will try to be attentive to this in future research that we publish.
13. We expect that local spillover effects via psychological mechanisms are less likely to occur with the current spatial distribution of GD’s program. In GD’s program in Kenya and Uganda, almost all households are treated within its target villages. In addition, the majority of villages within a region are treated in a block. Baird, de Hoop, and Özler 2013 estimate spillover effects within enumeration areas (groups of several villages), and the authors believe that the “detrimental effects on the mental well-being of those randomly excluded from the program in intervention areas is consistent with the idea that an individual’s utility depends on her relative consumption (or income or status) within her peer group”, p.372. The spatial distribution of GD’s program in Kenya and Uganda makes it more likely that the majority of one’s local peer group receives the same treatment assignment.
14. We have not yet added it, but we plan to add “Subjective well-being” under the list of outcomes excluded in the “Cross-cutting / Structural” section of the sheet, since it may be relevant to all programs.
15. If you are aware of relevant analyses or studies that we have not covered here, please let us know at info@givewell.org.

The post Response to concerns about GiveWell’s spillovers analysis appeared first on The GiveWell Blog.

Josh (GiveWell)

Our updated top charities for giving season 2018

5 years 7 months ago

We’re excited to share our list of top charities for the 2018 giving season. We recommend eight top charities, all of which we also recommended last year.

Our bottom line

We recommend three top charities implementing programs whose primary benefit is reducing deaths. They are:

Five of our top charities implement programs that aim to increase recipients’ incomes and consumption. They are:

These charities represent the best opportunities we’re aware of to help people, according to our criteria. We expect GiveWell’s recommendations to direct more than $100 million to these organizations collectively over the next year. We expect our top charities to be able to effectively absorb hundreds of millions of dollars beyond that amount.

Our list of top charities is the same as it was last year, with the exception of Evidence Action’s No Lean Season. We removed No Lean Season from the list following our review of the results of a 2017 study of the program.

We also recognize a group of standout charities. We believe these charities are implementing programs that are evidence-backed and may be extremely cost-effective. However, we do not feel as confident in the impact of these organizations as we do in our top charities. We provide more information about our standout organizations here.

Where do we recommend donors give?
  • We recommend that donors choose the “Grants to recommended charities at GiveWell’s discretion” option on our donation forms. We grant these funds quarterly to the GiveWell top charity or top charities where we believe they can do the most good.
  • If you prefer to give to a specific charity, we believe that all of our top charities are outstanding and will use additional funding effectively. If we had additional funds to allocate now, the most likely recipient would be Malaria Consortium to scale up its work providing seasonal malaria chemoprevention.
  • If you have supported GiveWell’s operations in the past, we ask that you maintain your support. If you have not supported GiveWell’s operations in the past, we ask that you consider designating 10 percent of your donation to help fund GiveWell’s operations.
How should donors give?

Conference call to discuss recommendations

We’re holding a conference call on Tuesday, December 4, at 12pm ET/9am PT to discuss our latest recommendations and to answer any questions you have. Sign up here to join the call.

Additional details

Below, we provide:

Our research process in 2018

We plan to summarize all of the research we completed this year in a future post as part of our annual review process. A major focus of 2018 was improving our recommendations in future years, in particular through our work on GiveWell Incubation Grants and completing intervention reports on promising programs.

Below, we highlight the key research that led to our current charity recommendations. This page describes our general process for conducting research.

  • Following existing top charities. We followed the progress and plans of each of our 2017 top charities. We had several conversations with each organization and reviewed documents they shared with us. We published updated reviews of each of our top charities. Key information from this work is available in the following locations:
    • Our page summarizing changes at each of our top charities and standouts in 2018.
    • Our workbook with each charity’s funding needs and our estimates of the cost-effectiveness of filling each need.
    • Our full reviews for each charity are linked from this page.
  • Staying up to date on the research on the interventions implemented by our top charities. Details on some of what we learned in the section below.
  • Making extensive updates to our cost-effectiveness model and publishing 14 updates to the model over the course of the year. In addition to updating our cost-effectiveness model with information from the intervention research described above, we added a “country selection” tab to our cost-effectiveness analysis (so that users can toggle between overall and country-specific cost-effectiveness estimates); an “inclusion/exclusion” tab, which lists different items that we considered whether or not to account for in our cost-effectiveness analysis; and we explicitly modeled factors that could lead to wastage (charities failing to use the funds they receive to implement their programs effectively).
  • Completing a review of Zusha! We completed our review of the Georgetown University Initiative on Innovation, Development, and Evaluation—Zusha! Road Safety Campaign and determined that it did not meet all of our criteria to be a top charity. We named Zusha! a standout charity.
Major updates from the last 12 months

Below, we summarize major updates across our recommended charities over the past year. For detailed information on what changed at each of our top and standout charities, see this page.

  • We removed Evidence Action’s No Lean Season from our top charity list. At the end of 2017, we named No Lean Season, a program that provides loans to support seasonal migration in Bangladesh, as one of GiveWell’s top charities. This year, we updated our assessment of No Lean Season based on preliminary results we received from a 2017 study of the program. These results suggested the program did not successfully induce migration in the 2017 lean season. Taking this new information into account alongside previous studies of the program, we and Evidence Action no longer believe No Lean Season meets our top charity criteria. We provide more details on this decision in this blog post.
  • We received better information about Sightsavers’ deworming program. In previous years, we had limited information from Sightsavers documenting how it knew that its deworming programs were effectively reaching their intended beneficiaries. This year, Sightsavers shared significantly more monitoring information with us. This additional information substantially increased our confidence in Sightsavers’ deworming program. This spreadsheet shows the monitoring we received from Sightsavers in 2018.
  • We reviewed new research on the priority programs implemented by our top charities and updated our views and cost-effectiveness analyses accordingly. Examples of such updates include:
Recommended allocation of funding for Good Ventures and top charities’ remaining room for more funding

Allocation recommended to Good Ventures

Good Ventures is a large foundation with which GiveWell works closely; it has been a major supporter of GiveWell’s top charities since 2011. Each year, we provide recommendations to Good Ventures regarding how we believe it can most effectively allocate its grants to GiveWell’s recommended charities, in terms of the total amount donated (within the constraints of Good Ventures’ planning, based in part on the Open Philanthropy Project’s recommendations on how to allocate funding across time and across cause areas) as well as the distribution between recipient charities.

Because Good Ventures is a major funder that we expect to follow our recommendations, we think it’s important for other donors to take its actions into account; we also want to be transparent about the research that leads us to make our recommendations to Good Ventures. That said, Good Ventures has not finalized its plans for the year and may give differently from what we’ve recommended. We think it’s unlikely that any differences would have major implications for our bottom-line recommendations for other donors.

This year, GiveWell recommended that Good Ventures grant $64.0 million to our recommended charities, allocated as shown in the table below.

| Charity | Recommended allocation from Good Ventures | Remaining room for more funding[1] |
| --- | --- | --- |
| Malaria Consortium (SMC program) | $26.6 million | $43.9 million |
| Evidence Action (Deworm the World Initiative) | $10.4 million | $27.0 million |
| Sightsavers (deworming program) | $9.7 million | $1.6 million |
| Helen Keller International (VAS program) | $6.5 million | $20.6 million |
| Against Malaria Foundation | $2.5 million | $72.5 million |
| Schistosomiasis Control Initiative | $2.5 million | $16.9 million |
| The END Fund (deworming program) | $2.5 million | $45.8 million |
| GiveDirectly | $2.5 million | >$100 million |
| Standout charities | $800,000 (combined) | |
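As a quick arithmetic check on the table above (a sketch using only the figures shown there, not GiveWell’s underlying spreadsheet), the recommended grants sum to the $64.0 million total, and the stated remaining room adds up to more than $300 million if GiveDirectly’s “>$100 million” is treated as a lower bound:

```python
# Figures copied from the table above, in millions of USD.
recommended = {
    "Malaria Consortium (SMC program)": 26.6,
    "Evidence Action (Deworm the World Initiative)": 10.4,
    "Sightsavers (deworming program)": 9.7,
    "Helen Keller International (VAS program)": 6.5,
    "Against Malaria Foundation": 2.5,
    "Schistosomiasis Control Initiative": 2.5,
    "The END Fund (deworming program)": 2.5,
    "GiveDirectly": 2.5,
    "Standout charities (combined)": 0.8,
}
remaining_room = {
    "Malaria Consortium (SMC program)": 43.9,
    "Evidence Action (Deworm the World Initiative)": 27.0,
    "Sightsavers (deworming program)": 1.6,
    "Helen Keller International (VAS program)": 20.6,
    "Against Malaria Foundation": 72.5,
    "Schistosomiasis Control Initiative": 16.9,
    "The END Fund (deworming program)": 45.8,
    "GiveDirectly": 100.0,  # listed as ">$100 million"; treated as a lower bound
}

print(f"Recommended to Good Ventures: ${sum(recommended.values()):.1f} million")      # $64.0 million
print(f"Remaining room (lower bound): >${sum(remaining_room.values()):.1f} million")  # >$328.3 million
```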

We discuss our process for making our recommendation to Good Ventures in detail in this blog post.

Allocation of GiveWell discretionary funds

As part of reviewing our top charities’ funding gaps to make a recommendation to Good Ventures, we also decided how to allocate the $1.1 million in discretionary funding we currently hold. This funding comes from donors who chose to donate to “Grants to recommended charities at GiveWell’s discretion” in recent months. We decided to allocate this funding to Malaria Consortium’s seasonal malaria chemoprevention program, given how large and cost-effective we believe Malaria Consortium’s funding gap to be.

Top charities’ remaining room for more funding

Although we are expecting to direct a significant amount of funding to our top charities ($65.1 million between Good Ventures and our discretionary funding), we believe that nearly all of our top charities could productively absorb considerably more funding than we expect them to receive from Good Ventures, our discretionary funding, and additional donations we direct based on our recommendation. This spreadsheet lists all of our top charities’ funding needs; rows 70-79 show total funding gaps by charity.

Our recommendation for donors

The bottom line
  • We recommend that donors choose the option to support “Grants to recommended charities at GiveWell’s discretion” on our donate forms. We grant these funds quarterly to the GiveWell top charity or top charities where we believe they can do the most good. We take into account charities’ funding needs and donations they have received from other sources when deciding where to grant discretionary funds. (The principles we outline in this post are indicative of how we will make decisions on what to fund.) We then make these grants to the highest-value funding opportunities we see among our recommended charities. This page lists discretionary grants we have made since 2014.
  • If you prefer to give to a specific charity, we believe that all of our top charities are outstanding and will use additional funding effectively. See below for information that may be helpful in deciding between charities we recommend.
  • If we had additional funds to allocate, the most likely recipient would be Malaria Consortium to scale up its work providing seasonal malaria chemoprevention.
Comparing our top charities

If you’re interested in donating to a specific top charity or charities, the following information may be helpful as you compare the options on our list. The table summarizes key facts about our top charities; column headings are defined below.

Note: the cost-effectiveness estimates we present in this post differ from those in our published cost-effectiveness analysis for a number of reasons.[2]

| Organization | Modeled cost-effectiveness (relative to cash transfers) at the present margin[3] | Primary benefits of the intervention | Quality of the organization’s communication | Ongoing monitoring and likelihood of detecting future problems |
| --- | --- | --- | --- | --- |
| Malaria Consortium (SMC program) | 8.8 | Averting deaths of children under 5 | Strong | Strong |
| Evidence Action (Deworm the World Initiative) | See footnote[4] | Possibly increasing income in adulthood | Strong | Strong |
| Helen Keller International (VAS program) | 6.4 | Averting deaths of children under 5 | Strong | Moderate |
| Against Malaria Foundation | 7.3 | Averting deaths | Moderate | Moderate |
| Schistosomiasis Control Initiative | 8.3 | Possibly increasing income in adulthood | Moderate | Relatively weak |
| Sightsavers (deworming program) | See footnote[5] | Possibly increasing income in adulthood | Moderate | Moderate |
| The END Fund (deworming program) | 5.4 | Possibly increasing income in adulthood | Moderate | Relatively weak |
| GiveDirectly | 1 | Immediately increasing income and assets | Strong | Strong |

Definitions of column headings follow:

  • Estimated cost-effectiveness (relative to cash transfers) at the present margin. We recommended that Good Ventures give $64.0 million to our top and standout charities, prioritizing the funding gaps that we believe are most cost-effective. The table above shows our estimates for the cost-effectiveness of additional donations to each charity, after accounting for the $64.0 million we’ve recommended that Good Ventures give (our recommendation won’t necessarily be followed, but we think it’s unlikely that differences will be large enough to affect our bottom-line recommendation to donors).
  • Primary benefits of the intervention. This column describes the major benefit we see to supporting a charity implementing this intervention.
  • Quality of the organization’s communication. In most cases, we have spent dozens or hundreds of hours interacting with our top charities. Here, we share our subjective impression of how well each organization has communicated with us. Our assessment of the quality of a charity’s communications is driven by whether we have been able to resolve our questions—particularly our less straightforward questions—about the organization’s activities, impact, and plans; how much time and effort was required to resolve those questions; how often the charity has sent us information that we later learned was inaccurate; and how direct we believe the charity is in acknowledging its weaknesses and mistakes.

    The organizations that stand out for high-quality communications are those that have most thoughtfully and completely answered our questions; brought problems with the program to our attention; and communicated clearly with us about timelines for providing additional information. High-quality communications reduce the time that we need to spend getting each question answered and therefore allow us to gain a greater degree of confidence in an organization. More importantly, our communication with an organization is one of the few ways that we can directly observe an organization’s general competence and thoughtfulness, so we see this as a proxy for unobserved ways in which the organization’s staff affect the impact of the program.

  • Ongoing monitoring and likelihood of detecting future problems. The quality of the monitoring we have received from our top charities varies widely, although we believe it stands out from that of the majority of charities. Ideally, the monitoring data charities collect would be representative of the program overall (by sampling all or a random selection of locations or other relevant units); would measure the outcomes of greatest interest for understanding the impact of the program; and would use methods that result in a low risk of bias or fraud in the results. In assessing the quality of a charity’s monitoring, we ask ourselves, “how likely do we believe it is that there are substantive problems with the program that are not detected by this monitoring?”

    Monitoring results inform our cost-effectiveness analyses directly. In addition, we believe that the quality of an organization’s monitoring gives us information that is not fully captured in these analyses. Similar to how we view communication quality, we believe that understanding how an organization designs and implements monitoring is an opportunity to observe its general competence and degree of openness to learning and program improvement.

Other key factors donors might want to consider when making their giving decision:

  • As shown in the table above, our top charities implement programs with different primary benefits: some primarily avert deaths; others primarily increase incomes or consumption. How a donor weighs programs that avert deaths against those that increase incomes (i.e., how one values averting a death at a given cost relative to increasing incomes by a certain amount at a given cost) depends on the donor’s moral values. The cost-effectiveness estimates shown above rely on the GiveWell research team’s moral values. For more on how we (and others) compare the “good” accomplished by different programs, see this blog post. Donors may make a copy of our cost-effectiveness model to input their own moral weights and see how that affects the relative cost-effectiveness of our top charities. (An illustrative sketch of this kind of calculation follows this list.)
  • The table above shows cost-effectiveness estimates for different charities. We put significant weight on cost-effectiveness figures, but they have limitations. Read more about how we use cost-effectiveness estimates in this blog post.
  • Ultimately, donors are faced with a decision about how to weigh estimated cost-effectiveness (incorporating their moral values) against additional information about an organization that we have not explicitly modeled. We’ve written about this choice in the context of choosing between GiveDirectly and SCI in this 2016 blog post.
  • Four of our top charities implement deworming programs. We recommend the provision of deworming treatments to children for its possible impact on recipients’ incomes in adulthood. We work in an expected value framework; in other words, we’re willing to support a higher-risk intervention if it has the potential for higher impact (more in this post about our worldview). Deworming is such an intervention. We believe that deworming may have very little impact, but that risk is outweighed by the possibility that it has very large impact, and it’s very cheap to implement. We describe our assessment of deworming in this summary blog post as well as this detailed post. Donors who have lower risk tolerance may choose not to support charities implementing deworming programs.
  • The table above lists our views on the quality of each of our top charities’ monitoring. This 2016 blog post describes our view of AMF’s monitoring and may give donors more insight into how we think about monitoring quality.
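To illustrate the moral-weights point referenced in the list above: the comparison reduces to multiplying each program’s outcomes per dollar by the donor’s chosen weights. The sketch below is purely illustrative; the outcome rates and weights are hypothetical placeholders rather than GiveWell estimates, and GiveWell’s published model includes many more adjustments.

```python
# Hypothetical illustration of applying personal moral weights.
# All outcome rates and weights below are made-up placeholders, not GiveWell estimates.

def value_per_dollar(deaths_averted_per_dollar, consumption_doublings_per_dollar,
                     weight_per_death_averted, weight_per_consumption_doubling=1.0):
    """Total 'units of good' per dollar under a donor's chosen moral weights."""
    return (deaths_averted_per_dollar * weight_per_death_averted
            + consumption_doublings_per_dollar * weight_per_consumption_doubling)

# Program A primarily averts deaths; Program B primarily raises consumption.
program_a = {"deaths_averted_per_dollar": 1 / 3000, "consumption_doublings_per_dollar": 0.0}
program_b = {"deaths_averted_per_dollar": 0.0, "consumption_doublings_per_dollar": 1 / 50}

# A donor who values averting one death as much as 50 one-year doublings of one
# person's consumption (an arbitrary illustrative weight) would compute:
for name, outcomes in [("Program A", program_a), ("Program B", program_b)]:
    print(name, value_per_dollar(**outcomes, weight_per_death_averted=50.0))

# With these placeholders, Program B looks better unless the donor values one
# death averted at more than 60 consumption-doublings (60 / 3000 = 0.02), which
# is why personal moral weights can change the ranking of programs.
```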
Giving to support GiveWell’s operations

GiveWell is currently in a financially stable position. Over the next few years, we are planning to significantly increase our spending, driven by hiring additional research and outreach staff. We project that our revenue will approximately equal our expenses over the next few years; however, this projection includes an expectation of growth in the level of operating support we receive.

We retain our “excess assets policy” to ensure that if we fundraise for our own operations beyond a certain level, we will grant the excess to our recommended charities. In June of 2018, we applied our excess assets policy and designated $1.75 million in unrestricted funding for grants to recommended charities.

We cap the amount of operating support we ask Good Ventures to provide to GiveWell at 20 percent of our operating expenses, for reasons described here. We ask that donors who use GiveWell’s research consider the following:

  • If you have supported GiveWell’s operations in the past, we ask that you maintain your support. Having a strong base of consistent operations support allows us to make valuable hires when opportunities arise and to minimize staff time spent on fundraising for our operating expenses.
  • If you have not supported GiveWell’s operations in the past, we ask that you designate 10 percent of your donation to help fund GiveWell’s operations. This can be done by selecting the option to “Add 10% to help fund GiveWell’s operations” on our credit card donation form or letting us know how you would like to designate your funding when giving another way.
Questions?

We’re happy to answer questions in the comments below. Please also feel free to reach out directly with any questions.

This post was written by Andrew Martin, Catherine Hollander, Elie Hassenfeld, James Snowden, and Josh Rosenberg.

Notes

1. This column displays our top charities’ remaining room for more funding, or the amount we believe they can use effectively, for the next three years (2019-2021), after accounting for the $64.0 million we’ve recommended that Good Ventures give (our recommendation won’t necessarily be followed, but we think it’s unlikely that differences will be large enough to affect our bottom-line recommendation to donors) and an additional $1.1 million from GiveWell’s discretionary funding.
2. “The cost-effectiveness estimates in this sheet, which we used to inform our recommended allocation differ from those in our published cost-effectiveness analysis because (1) we apply a number of adjustments to incorporate additional information (2) we apply different weightings to each program (which affects the weighted average of cost-effectiveness).” Source: Giving Season 2018 – Allocation (public), “Cost-effectiveness results” tab, row 17. Additional details at the link.
3. For sources on the estimates included in this table, see this spreadsheet, “Cost-effectiveness results” tab. The estimates presented here differ from the estimates presented in our recommendation to Good Ventures because they estimate cost-effectiveness on the margin, if Good Ventures were to follow our recommendations.
4. At the margin, we expect additional funding to Deworm the World Initiative to support its programs in Pakistan and Nigeria in 2021 as well as Deworm the World’s general reserves. We think these are broadly good uses of funds, but our cost-effectiveness model is not currently built to meaningfully model the cost-effectiveness of reserves. In the absence of more information, we would guess that additional funding to Deworm the World would be roughly in the range of our estimate for Deworm the World’s overall organizational cost-effectiveness (~15x as cost-effective as cash transfers), but we have not analyzed the details of additional spending at the current margin enough to be confident in that estimate. However, if Good Ventures generally follows our recommended allocation, we expect that Deworm the World will have sufficient funding to continue its most time-sensitive work and we can decide whether to fund other marginal opportunities at a later date.
5. We do not have a strong sense of the cost-effectiveness of additional funds to Sightsavers at the current margin. Our cost-effectiveness estimate of Sightsavers’ remaining funding gap is 15.4x as cost-effective as cash transfers, but this fails to capture a number of features particular to the program Sightsavers would fund on the margin. We would guess that the value of marginal funding to Sightsavers is roughly in the range of our overall estimate for Sightsavers of ~12x as cost-effective as cash transfers. One major reason for our uncertainty follows. As discussed here, Sightsavers’ prioritization of how to spend additional funds differed substantially from what would be implied by our cost-effectiveness analysis, but we think that this discrepancy may largely be due to factors that our model does not capture or ways our model may be inaccurate; therefore, it is difficult to rely on our model to assess the cost-effectiveness of specific remaining country funding gaps.


The post Our updated top charities for giving season 2018 appeared first on The GiveWell Blog.

Catherine Hollander

Our recommendation to Good Ventures

5 years 7 months ago

Today, we announce our list of top charities for the 2018 giving season. We expect to direct over $100 million to the eight charities on our list as a result of our recommendation.

Good Ventures, a large foundation with which we work closely, is the largest single funder of our top charities. We make recommendations to Good Ventures each year for how much funding to provide to our top charities and how to allocate that funding among them. As this funding is significant, we think it’s important for other donors to take into account the recommendation we make to Good Ventures.

This blog post explains in detail how we decide what to recommend to Good Ventures and why; we want to be transparent about the research that leads us to our recommendations to Good Ventures. If you’re interested in a bottom-line recommendation for where to donate this year, please view our post with recommendations for non-Good Ventures donors.

Note that Good Ventures has not finalized its plans for the year and may give differently from what we’ve recommended. We think it’s unlikely that any differences would have major implications for our bottom-line recommendations for other donors.

Summary

In this post, we discuss:

How we decided how much funding to recommend to Good Ventures

This year, GiveWell recommended that Good Ventures grant $64.0 million to our top charities and standout charities. The amount Good Ventures gives to our top charities is based in part on how the Open Philanthropy Project plans to allocate funding across time and across cause areas. (Read more about our relationships with Good Ventures and the Open Philanthropy Project here.)

The Open Philanthropy Project currently plans to allocate around 10% of its total available capital to “straightforward charity,” which it currently directs to global health and development causes based on GiveWell’s recommendations. This 10% allocation is split into two “buckets”: a fixed bucket of 5% of total giving each year, and a “flexible” bucket of 5% that can be spent down quickly (over a few years) or slowly (over many years). GiveWell’s recommendation that Good Ventures grant $64.0 million this year puts the flexible bucket on track to be spent down within the next 14 years.

We’re recommending $64.0 million this year to balance two considerations:

  • As the world gets richer, giving opportunities in global health and development generally seem likely to get worse over time. This implies that giving now has a larger impact.
  • In the coming years, GiveWell may find opportunities that are considerably more cost-effective than our current recommendations (e.g., among policy advocacy organizations). This would make spending in future years have a larger impact.
Our recommended allocation for Good Ventures

The table below summarizes how much funding we recommend Good Ventures grant to each of our top charities, along with our explicit cost-effectiveness estimate for each organization and organizational factors we don’t model explicitly that affect our assessment of impact.

As always, cost-effectiveness figures should be interpreted with caution.

Note: the cost-effectiveness estimates we present in this post differ from those in our published cost-effectiveness analysis for a number of reasons.[1]

| Charity | Modeled cost-effectiveness (relative to cash transfers)[2] | Organizational factors we don’t model explicitly[3] | Recommended allocation |
| --- | --- | --- | --- |
| Malaria Consortium (SMC program) | 8.8 | Very strong | $26.6 million |
| Evidence Action (Deworm the World Initiative) | 14.6 | Very strong | $10.4 million |
| Sightsavers (deworming program) | 12.0 | Moderate | $9.7 million |
| Helen Keller International (VAS program) | 7.0 | Strong | $6.5 million |
| Against Malaria Foundation | 7.3 | Moderate | $2.5 million |
| Schistosomiasis Control Initiative | 8.3 | Relatively weak | $2.5 million |
| The END Fund (deworming program) | 5.4 | Relatively weak | $2.5 million |
| GiveDirectly | 1 | Very strong | $2.5 million |
| Standout charities | | | $800,000 (combined) |
| Sum | | | $64.0 million |

The underlying objective of GiveWell’s allocation is to direct as much money as possible to the most cost-effective giving opportunities over the long run. (We aim to optimize cost-effectiveness, as defined broadly—we recognize the limitations of our cost-effectiveness model and consider additional factors in our assessment.) We relied on modeled cost-effectiveness figures as well as the organizational factors described above to inform our recommendations.

To meet this objective, our allocation this year was driven by the principles described below.

Principles we followed in arriving at this allocation

Principle 1: Put significant weight on our cost-effectiveness estimates. Our cost-effectiveness estimates incorporate a substantial amount of information relevant to our decisionmaking. While we recognize the high levels of uncertainty around our cost-effectiveness estimates, they are the single largest factor we take into consideration. More on how we use cost-effectiveness to inform our decisions here.

Principle 2: Consider additional information about an organization that we have not explicitly modeled. While our cost-effectiveness estimates are the best tool we know of to estimate the amount of good a charity accomplishes, we believe it’s infeasible to try to incorporate all relevant considerations into a single quantitative estimate. Subjective assessments that aren’t included in our cost-effectiveness calculations but affect how much impact a charity has include:

  • A charity’s ability to make good decisions on how to prioritize. Our top charities often take factors that aren’t included in our cost-effectiveness estimates into account when deciding how to spend their limited budgets. We use our subjective assessment of how well charities answer our questions about their activities as a proxy for how well they make these decisions.
  • Upside. Our top charities often perform activities that go beyond the scope of their direct work, such as conducting and sharing research that influences others, or raising funds for their programs from funders that would otherwise give to less cost-effective programs.

For the most part, we do not have the opportunity to directly observe these factors. Our subjective assessments of these factors are based on the observed but unmodeled factors that we discuss in this post: the quality of the organization’s communication and ongoing monitoring, and the likelihood of detecting future problems.

Principle 3: Assess charities’ funding gaps at the margin, i.e., where they would spend additional funding, where possible. We try to understand how charities’ funding would be spent among different programs or locations. Our cost-effectiveness estimates for charities’ projects often vary substantially (depending, for example, on the underlying disease burden in a particular country the charity plans to work in). Where possible, we compare our best guess of how funding would be used on the margin, rather than on average. As part of assessing charities’ marginal cost-effectiveness, we intend to capture whether there are diminishing returns to their receiving additional funding.

Principle 4: Default towards not imposing restrictions on charity spending. While we rely on our expectation of how charities would prioritize funding gaps to estimate marginal cost-effectiveness, we do not plan to impose any restrictions on how the funding is actually used in practice. (There is one exception to this: in cases where a top charity implements multiple global health and development programs and our recommendation is restricted to one of those programs, we do restrict funding to the priority program we recommend, such as deworming or vitamin A supplementation.) We believe our top charities are often better placed to make decisions about which projects to fund than we are, and we want to ensure maximum flexibility for them to do so.

Principle 5: Fund on a three-year horizon, unless we are particularly uncertain whether we will want to continue recommending a program in the future. Our top charities have communicated to us that there are often substantial benefits to knowing that funding for a program is secure for the future. As a general rule, we aim to provide funding for three years for each program we choose to fund. The exception is when we are more uncertain whether we would want to renew funding for a third year (e.g. because our estimated cost-effectiveness of a program is close to that of the marginal program we decided not to fund).

Principle 6: Ensure charities are incentivized to engage with our process. We recognize that our charity review process requires deep engagement from senior members of charities’ staff. We want to ensure that charities are incentivized to keep engaging with our process. To this end, since 2016, we have recommended that Good Ventures provide a minimum “incentive grant” to top charities ($2.5 million) and standout charities ($100,000).

We hope that providing significant incentive grants increases the chances that charities are motivated to compete for a GiveWell recommendation. We fear that without ensuring that every top charity or standout receives a substantial amount of funding, some charities might be deterred from applying for a GiveWell recommendation or from making changes to their programs to potentially become top charities.

Our process for determining our recommended allocation for Good Ventures

In line with the principles above, we used the following process to arrive at our recommended allocation for Good Ventures:

  1. We recommended that Good Ventures provide each charity with an incentive grant ($2.5 million per top charity and $100,000 per standout charity).
  2. We identified the most cost-effective gap we were unable to entirely fill with the $64.0 million we recommended to Good Ventures (noting again that Good Ventures has not finalized its plans for the year and may give differently from what we’ve recommended): Malaria Consortium’s seasonal malaria chemoprevention program in Nigeria, Burkina Faso, and Chad. Our cost-effectiveness analysis suggests this gap is about 8.8x as cost-effective as cash transfers, and that Malaria Consortium could absorb about $70 million in additional funding to support this work. We have a high opinion of Malaria Consortium as an organization, and this qualitative assessment supports our consideration of this gap as highly cost-effective to fill.

    Our best guess is there are limited diminishing marginal returns over the interval of this funding gap.

  3. Remaining funding gaps were compared to the Malaria Consortium funding gap in Nigeria, Burkina Faso, and Chad based on (i) their estimated cost-effectiveness, (ii) our subjective assessment of the organization’s quality, and (iii) particular arguments relevant to that funding gap but not captured elsewhere in our analysis (e.g., whether our decision to not fund a particular gap would be disproportionately disruptive to an organization’s activities).

This spreadsheet lists all of our top charities’ funding needs; rows 70-79 show total funding gaps by charity. We relied on this list of funding needs in determining our recommendation to Good Ventures, as well as in assessing how much additional funding our top charities can absorb after taking into account our recommendation to Good Ventures.

In brief, we concluded that some charities’ funding gaps compared favorably to Malaria Consortium’s seasonal malaria chemoprevention gap, which led us to recommend a total of ~$6-10 million in funding to each of Deworm the World Initiative, Sightsavers’ deworming program, and Helen Keller International’s vitamin A supplementation program. We did not see compelling reasons to recommend funding to the other top charities ahead of Malaria Consortium’s funding gap, so we only recommended that those charities receive the $2.5 million incentive grant.
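In outline, the process above can be thought of as: commit the incentive grants first, then work down the remaining funding gaps in order of estimated cost-effectiveness (adjusted by the qualitative factors) until the recommended total is reached. The sketch below is our simplified rendering of that logic with hypothetical inputs; it is not GiveWell’s actual spreadsheet, and it omits the case-by-case judgments described in step 3 above.

```python
# Simplified sketch of the allocation logic described above.
# Charity names, gap sizes, and scores are hypothetical placeholders.

def recommend_allocation(budget, incentive_grants, funding_gaps):
    """Greedy allocation: incentive grants first, then the best-scoring gaps.

    incentive_grants: {charity: amount}
    funding_gaps: list of (charity, gap_size, score), where "score" stands in
        for estimated cost-effectiveness plus qualitative adjustments.
    """
    allocation = dict(incentive_grants)
    budget -= sum(incentive_grants.values())
    for charity, gap_size, _score in sorted(funding_gaps, key=lambda g: g[2], reverse=True):
        if budget <= 0:
            break
        grant = min(gap_size, budget)
        allocation[charity] = allocation.get(charity, 0) + grant
        budget -= grant
    return allocation

# Hypothetical inputs, in millions of USD:
incentives = {"Charity A": 2.5, "Charity B": 2.5}
gaps = [("Charity A", 40.0, 9.0), ("Charity B", 20.0, 6.0)]
print(recommend_allocation(30.0, incentives, gaps))
# -> {'Charity A': 27.5, 'Charity B': 2.5}
```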

We explain our recommended allocation to Good Ventures for each of our top charities in more detail on this page.

Questions?

We’re happy to answer questions in the comments below. Please also feel free to reach out directly with any questions.

This post was written by Andrew Martin, Catherine Hollander, Elie Hassenfeld, James Snowden, and Josh Rosenberg.

Notes

1. “The cost-effectiveness estimates in this sheet, which we used to inform our recommended allocation differ from those in our published cost-effectiveness analysis because (1) we apply a number of adjustments to incorporate additional information (2) we apply different weightings to each program (which affects the weighted average of cost-effectiveness).” Source: See Giving Season 2018 – Allocation (public), “Cost-effectiveness results” tab, row 17. Additional details at the link.
2. “We typically won’t move forward with a charity in our process if it appears that it won’t meet the threshold of at least 2-3x as cost-effective as cash transfers. We think cash transfers are a reasonable baseline to use due to the intuitive argument that if you’re going to help someone with Program X, Program X should be more cost-effective than just giving someone cash to buy that which they need most.” June 1, 2017, GiveWell blog, How GiveWell uses cost-effectiveness analyses. The estimates presented here differ from the estimates presented in our recommendation to donors because they estimate weighted average cost-effectiveness over the whole funding gap, rather than on the margin.
3. We take into account an organization’s strength of communication with us and the comprehensiveness of its program monitoring. We factor this into our broad assessment of the organization’s cost-effectiveness. Read more: November 26, 2018, GiveWell blog, Our updated top charities for giving season 2018.

The post Our recommendation to Good Ventures appeared first on The GiveWell Blog.

Catherine Hollander

Update on No Lean Season’s top charity status

5 years 7 months ago

At the end of 2017, we named Evidence Action’s No Lean Season one of GiveWell’s nine top charities. Now, GiveWell and Evidence Action agree that No Lean Season should not be a GiveWell top charity this year, and Evidence Action is not seeking additional funding to support its work at this time.

This post will discuss this decision in detail. In brief, we updated our assessment of No Lean Season, a program that provides loans to support seasonal migration, based on preliminary results Evidence Action began discussing with us in July from a study of the 2017 implementation of the program (hereinafter referred to as “2017 RCT”). These results suggested the program, as implemented in 2017, did not successfully induce migration. Taking this new information into account alongside previous studies of the program, we and Evidence Action do not believe No Lean Season meets our top charity criteria at this time.

Evidence Action’s post on this decision is here.

GiveWell’s mission is to identify and recommend charities that can most effectively use additional donations. While it may be disappointing for a top charity to be removed from our list of recommendations, we believe that adding and removing top charities from our list is an important part of our process. If our top charities list never changed, we would guess we were (a) acting too conservatively (i.e. not being open enough to adding new top charities), or (b) not being critical enough of groups once they’ve been added to our list (i.e. not being open enough to removing existing top charities).

We believe this decision speaks positively of Evidence Action and demonstrates our mutual commitment to updating our views based on new evidence. GiveWell has interacted with hundreds of organizations in our history, and very few have subjected their programs to a rigorous study in the way that Evidence Action did last year and, at smaller scale, in 2014. We’re excited to work with a group like Evidence Action that is committed to rigorous study and openness about results.

Summary

In this post, we will discuss:

  • The history of GiveWell and No Lean Season. (More)
  • How the 2017 RCT updated our views of No Lean Season. (More)
    • What did the 2017 RCT find? (More)
    • How did we interpret the RCT results? (More)
    • What does the future of No Lean Season look like? (More)
  • Conclusion
GiveWell and No Lean Season

No Lean Season provides support for low-income agricultural workers in rural Bangladesh during the time of seasonal income and food insecurity (“lean season”). The program provides small, interest-free loans to support workers’ temporary migration to seek employment. No Lean Season is implemented by RDRS Bangladesh; Evidence Action provides strategic direction, conducts program monitoring, and provides technical assistance, among other functions. Evidence Action developed No Lean Season as part of its Beta portfolio, which is focused on prototyping and scaling cost-effective programs.

GiveWell began engaging with No Lean Season as a potential top charity in 2013, when we began to explore making an Incubation Grant to support its scale-up. We saw No Lean Season as a promising program that lacked the track record to be considered for a top charity recommendation at that time. We describe our initial interest in the program in a February 2017 blog post:

We approached Evidence Action in late 2013 to express our interest in supporting the creation of new GiveWell top charities.

In March 2014, Good Ventures made a $250,000 grant to Evidence Action to support the investigation and scale-up of promising programs.

Since then, Good Ventures has made three additional grants totaling approximately $2.7 million to support the program’s scale-up.

No Lean Season continued to test and scale its program with this and other support. We decided to recommend No Lean Season as a top charity in late 2017. We based our recommendation on three randomized controlled trials (RCTs) of the program. (We generally consider RCTs to be one of the strongest types of evidence available; you can read more about why we rely on RCTs here.)

Two of the RCTs (conducted in 2008 and 2014) indicated increased migration, income, and consumption for program participants. In the third RCT, which was conducted in 2013 and has not been published, the program is considered to have failed to induce migration, potentially due to political violence that year. We discuss the RCT evidence in greater depth in our intervention report on conditional subsidies for seasonal labor migration in northern Bangladesh.

Weighing the evidence, the cost of the program, and the potential impacts, we decided No Lean Season met our criteria to be named a top charity in November 2017. We summarized our reasoning in our blog post announcing our 2017 list of top charities, and noted the risks of this recommendation:

Several randomized controlled trials (RCTs) of subsidies to increase migration provide moderately strong evidence that such an intervention increases household income and consumption during the lean season. An additional RCT is ongoing. We estimate that No Lean Season is roughly five times as cost-effective as cash transfers (see our cost-effectiveness analysis).

Evidence Action has shared some details of its plans for monitoring No Lean Season in the future, but, as many of these plans have not been fully implemented, we have seen limited results. Therefore, there is some uncertainty as to whether No Lean Season will produce the data required to give us confidence that loans are appropriately targeted and reach their intended recipients in full; that recipients are not pressured into accepting loans; and that participants successfully migrate, find work, and are not exposed to major physical and other risks while migrating.

As indicated above, No Lean Season conducted an additional RCT to evaluate its program during the 2017 lean season (approximately September to December), the preliminary results of which indicate the program failed to induce migration. With the evidence from the 2017 RCT, the case for the program’s impact and cost-effectiveness looks weaker.

Our updated perspective on No Lean Season

The 2017 RCT was a key factor in the decision to remove No Lean Season from our top charities list. Below, we discuss:

  • What did the 2017 RCT find?
  • How did we interpret the RCT results?
  • What does the future of No Lean Season look like?

What did the 2017 RCT find?

The 2017 RCT was a collaboration between Evidence Action, Innovations for Poverty Action, and researchers from Yale University, the London School of Economics, and the University of California, Davis. In a preliminary analysis shared with GiveWell in September 2018, the researchers did not find evidence for a negative or positive impact on migration, and found no statistically significant impact on income and consumption.[1]

However, the implementation of the program during the 2017[2] lean season and the evaluation of it differed from previous iterations. No Lean Season operated at a larger scale in the fall of 2017 than it had previously, offering loans to 158,155 households, compared with 16,268 households in 2016. Relative to earlier versions of the program, the program in 2017 involved (a) higher-intensity delivery of the intervention (offering loans to most eligible individuals) and (b) broader eligibility requirements (the eligibility rate in 2017 was 77 percent, compared with 49 percent in 2016).[3]

At this point, neither GiveWell, nor No Lean Season, nor the researchers feel we have a conclusive understanding of why the program failed to induce migration. However, No Lean Season and the researchers are exploring various hypotheses about what may explain the failure to induce migration, and they note that some suggestive evidence supports some hypotheses more than others. The researchers have posited several possibilities:

  1. The way the program was targeted in 2017 was suboptimal. The Migration Organizers, who survey households for eligibility and offer and disburse loans (more detail here under “Migration Organizers”), may have focused their efforts on the individuals that were seen as most likely to migrate, rather than those who needed a loan to afford migration. The use of loan targets during implementation may have inadvertently incentivized this behavior.[4] If, for example, loan officers mostly made loans to people who would have migrated regardless of receiving a loan, this could have led to the lack of impact on migration found in the study.
  2. The 2017 lean season was particularly bad for the program. The researchers note that severe flooding and associated implementation delays in some regions may have caused problems in 2017. The researchers plan to look more closely at the regions that experienced flooding, though they note that they don’t have the data necessary to make experimental comparisons.[5] In addition, the 2013 trial may have failed due to issues specific to that year, such as increased labor strikes.
  3. There exists another (currently unknown) reason why this program won’t work at scale. Conditions in Bangladesh may have changed, negative spillovers (harmful impacts for individuals who did not receive loans) may cancel out gains, or pilot villages may have been strategically picked in earlier trials.[6]

The researchers are considering all of these possibilities. After weighing the various theories alongside some non-experimental data (including administrative data and data from a special-purpose survey of Migration Organizers who worked on the program in 2017), they feel that the ‘mistargeting’ theory is the most likely explanation and the one most consistent with the analysis.[7]

In scenario (1), No Lean Season may be able to identify and fix the problem. In scenario (2), GiveWell will need to update our estimate of the impact of the program to take into account the fact that periodic program failures due to external factors are more likely than we previously thought. In scenario (3), the program is unlikely to be effective in the future.

How did we interpret the RCT results?

We don’t know the extent to which each of the above explanations contributed to the study not finding an effect on migration.

We used the results of the 2017 RCT to update our cost-effectiveness estimate for the program. Cost-effectiveness estimates form arguably the most important single input into our decisions about whether or not to recommend charities (more on how GiveWell uses cost-effectiveness analyses here). When we calculate a program’s cost-effectiveness, we take many different factors into account, such as the administrative and program costs and the expected impact. We also make a number of educated guesses, such as the likelihood that a program’s impact in a new country will be similar to that in a country where it has previously worked. Below, we describe the mechanism by which the 2017 RCT result was incorporated into our model and how it changed our conclusion.

Prior to this year, we formed our view of No Lean Season based on the three small-scale RCTs mentioned above (conducted in 2008, 2013, and 2014). Each of these RCTs looked at a slightly different version of the program. We believed that the ‘high-intensity’ arm of the 2014 RCT was the version most likely to resemble the program at scale. We thus used the migration rate measured in this arm of the RCT as our starting point for calculating the program’s impact.

The high-intensity arm of the 2014 RCT also had the highest measured migration rate of the three RCTs we assessed, and so we wanted to give some consideration to the less-positive results found in the other two assessments. We applied a small, downward adjustment to the rate of induced migration observed in the 2014 high-intensity arm in our cost-effectiveness model; this was an educated guess, based on the information we had. Our best guess was that the program would lead, in expectation, to 80% of the induced migration seen in the 2014 high-intensity arm.[8]

Now, the preliminary 2017 RCT results show no significant impact on migration rates or incomes. Because this trial was large and very recent, we updated our expectations of the impact of the program substantially, and in a negative direction. Our best guess now is that the program will lead, in expectation, to 40% of the induced migration seen in the 2014 high-intensity arm. Holding other inputs constant, this adjustment reduces our estimate of No Lean Season’s cost-effectiveness by a factor of two.
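To make this update concrete, here is a minimal sketch of the calculation, assuming a hypothetical baseline induced-migration rate and a simple linear model; only the 80% and 40% adjustment values and the “holding other inputs constant” logic come from the analysis above.

```python
# Illustrative sketch of the external validity adjustment discussed above.
# The baseline induced-migration rate is a hypothetical placeholder; the
# 80% -> 40% change and the linear "all else held constant" logic are the
# only pieces taken from the post.

BASELINE_INDUCED_MIGRATION = 0.10  # hypothetical rate from the 2014 high-intensity arm


def modeled_cost_effectiveness(adjustment: float, other_inputs: float = 1.0) -> float:
    """Cost-effectiveness scales linearly with expected induced migration
    when every other input in the model is held constant."""
    return BASELINE_INDUCED_MIGRATION * adjustment * other_inputs


before = modeled_cost_effectiveness(0.80)  # prior best guess
after = modeled_cost_effectiveness(0.40)   # updated best guess after the 2017 RCT

print(after / before)  # 0.5, i.e. cost-effectiveness falls by a factor of two
```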

This reduced cost-effectiveness, along with our updated qualitative picture of No Lean Season’s evidence of effectiveness, led to the decision to remove No Lean Season from our top charities list.

What does the future of No Lean Season look like?

Although it is not seeking additional funding at this time, No Lean Season holds over two years’ worth of remaining funding. We understand that the organization has made changes to the program design in 2018 based on emerging interpretations of the 2017 results, and has collected additional data to evaluate some of the hypotheses that may explain those results (including, for example, a survey of Migration Organizers who worked on the 2017 program). It plans to subject the 2018 implementation round to an additional ‘RCT-at-scale,’ with a particular focus on reassessing the program’s effects on migration, income, and consumption, as well as potential effects at migration destinations. It will continue to explore what may have caused the issue in the 2017 program at scale, and whether a solution can be found. If one is found, we’ll want to reassess the evidence and the costs to determine whether No Lean Season meets our bar for top charity status. Evidence Action believes we should have the necessary information to reassess starting in mid-2019, based on the results of the RCT conducted during the 2018 lean season and other analyses it performs.

Conclusion

This is the second time since 2011 that we have removed a top charity from our list (prior to 2011, our top charities list was fairly different from today; we made a big-picture shift in our priorities that year that led us to our more recent lists). The previous removal occurred in 2013, when we took the Against Malaria Foundation (AMF) off of our list because we didn’t believe it could absorb additional funding effectively in the near term. AMF was reinstated as a top charity in 2014.

The decision to remove a top charity is never easy. But continuously evaluating GiveWell’s recommended charities is an important part of our work, and we take it seriously. It’s easy to talk about a commitment to evidence when the results are positive. It’s hard to maintain that commitment when the results are not. We’re excited to work with a group like Evidence Action that is committed to rigorous program evaluation and open discussion of the results of those evaluations. Its openness about these results has increased our confidence in Evidence Action as an organization. We look forward to seeing the results from the 2018 RCT in 2019.

Notes

[1] “At this early stage in analysis, we find no evidence that the program had an impact (positive or negative) on migration, caloric intake, food expenditure, or income.” Evidence Action, unpublished summary document, Page 1.

[2] The 2017 RCT studied a period from the fall of 2017 through early 2018.

[3] “This study has two main goals:

  1. “A replication of previous findings showing positive impact of incentivized migration on seasonal migration, caloric intake, food and non-food expenditure, income, and food security. Our aim is to estimate impact of a scaled version of the No Lean Season program: intensifying program implementation within branches and expanding the provision of loans to all eligible households.”

Unpublished summary document, Page 1.

[4] “The second set of explanations focus on unintentional implementation changes caused by the change in eligibility, the vastly expanded scope of the program, or other factors. In the most recent round, it is possible that Migration Organizers (MOs) focused their efforts on those households who were most likely to migrate even without a loan to the exclusion of the target population households who need a loan to afford migration. Such behavior may have even been encouraged by the use of targets set by the NGO to manage implementation at such a large scale. We have implemented a qualitative survey to understand the incentives and actions of MOs last year, and are revising our instructions to avoid any possibility of this issue this year.” Evidence Action, unpublished summary document (with minor revision from Evidence Action), Page 11.

[5] “Most notably, the program was affected by severe flooding in many regions, and implementation was subsequently delayed as well. We are still evaluating whether these regions are the ones with the most diminished effects, although we lack the data in control areas to conduct an experimental comparison.” Evidence Action, unpublished summary document, Page 11-12.

[6] “It is possible that what we observe this year may be the true effect of the No Lean Season program when implemented at scale. This may be because conditions in rural Bangladesh have changed since the initial years of success, spillovers at scale cancel out any gains observed in small-scale pilots, or pilot villages were selected because they were most likely to be receptive to the program.” Evidence Action, unpublished summary document, Page 11.

[7] Evidence Action, “Interpretation of 2017 Results” deck and narrative (unpublished)

[8] “This adjustment is used to account for external validity concerns not accounted for elsewhere in the CEA.

“The default adjustment value of 80% is our best guess about the appropriate value, but it is not based on a formal calculation.

“The program at scale takes place in the same region with the same implementers (RDRS and Evidence Action) as the source of our key evidence for the intervention (the 2014 RCT). The program at scale differs in some aspects of implementation, particularly the inclusiveness of the eligibility criteria and the proportion of eligible households offered an incentive. In the 2014 RCT, the subsidy was a cash transfer rather than an interest-free loan, however the 2008 RCT found a similar effect regardless of whether the subsidy was a cash transfer or an interest-free loan.

“There is some evidence (from a 2013 RCT) suggesting that the program may be ineffective when the perceived risk of migrating increases for reasons such as labor strikes and violence. The researchers estimated that these are 1-in-10 year events.

“Additional discussion related to this parameter can be found at https://www.givewell.org/charities/no-lean-season#programdifferentfromRCTs.” 2018 GiveWell Cost-Effectiveness Model — Version 10, “Migration subsidies” tab, note on cell A19.

The post Update on No Lean Season’s top charity status appeared first on The GiveWell Blog.

Catherine Hollander

A grant to Evidence Action Beta to prototype, test, and scale promising programs

5 years 9 months ago

In July 2018, we recommended a $5.1 million grant to Evidence Action Beta to create a program dedicated to developing potential GiveWell top charities by prototyping, testing, and scaling programs which have the potential to be highly impactful and cost-effective.

This grant was made as part of GiveWell’s Incubation Grants program, which aims to support potential future GiveWell top charities and to help grow the pipeline of organizations we can consider for a recommendation. Funding for Incubation Grants comes from Good Ventures, a large foundation with which we work closely.

Summary

This post will discuss the following:

  • Why Evidence Action Beta is promising. (More)
  • Risks we see with this Incubation Grant. (More)
  • Our plans for following Evidence Action Beta’s work going forward. (More)
Incubation Grant to Evidence Action Beta

We summarized our case for making this grant in a recently-published write-up:

A key part of GiveWell’s research process is trying to identify evidence-backed, cost-effective programs. GiveWell sometimes finds programs that seem potentially highly impactful based on academic research, but for which there is no obvious organizational partner that could scale up and test them. This grant will fund Evidence Action Beta to create … [an] incubator … focused on interventions that GiveWell and Evidence Action believe are promising but that lack existing organizations to scale them.

We have found that which program a charity works on is generally the most important factor in determining its overall cost-effectiveness. Through partnering with Evidence Action Beta to test programs that we think have the potential to be very cost-effective, … our hope is that programs tested and scaled up through this partnership may eventually become GiveWell top charities.

We believe this incubator has the potential to fill a major gap in the nonprofit world by providing a well-defined path for testing and potentially scaling … promising idea[s] for helping the global poor.

For full details on the grant activities and budget, see this page.

We believe that Evidence Action Beta is well-positioned to run this incubator because of its track record of scaling up cost-effective programs with high-quality monitoring. Evidence Action Beta’s parent organization, Evidence Action, leads two of our top charities (Deworm the World Initiative and No Lean Season) and one standout charity (Dispensers for Safe Water).

Modeling cost-effectiveness

In addition to the theoretical case for the grant outlined above, we also made explicit predictions and modeled the potential cost-effectiveness of this grant, so we could better consider it relative to other options. In this section, we provide more details on our process for estimating the grant’s cost-effectiveness.

The main path to impact we see with this grant is by creating new top charities which could use GiveWell-directed funds more cost-effectively than alternatives could.

This could occur:

  1. if Evidence Action Beta incubates charities which are more cost-effective than our current top charities, or
  2. if Evidence Action Beta incubates charities which are similarly cost-effective to our current top charities—in a scenario in which we have mostly filled our current top charities’ funding gaps. Right now, we believe our top charities can absorb significantly more funding than we expect to direct to them; this diminishes our view of the value of finding additional, similarly cost-effective opportunities. If our current top charities’ funding gaps were close to filled, we would place higher value on identifying additional room for more funding at a similarly cost-effective level.

This grant could also have an impact if it causes other, non-GiveWell funders to allocate resources to charities incubated by this grant. This incubator may create programs that GiveWell doesn’t direct funding to but others do. If these new opportunities are more cost-effective than what these funders would have otherwise supported, then this grant will have had a positive impact by causing funds to be spent more cost-effectively, even if GiveWell never recommends funding to the new programs directly.

We register forecasts for all Incubation Grants we make. We register these not because we are confident in them but because they help us clarify and communicate our expectations for the outcomes of the grant. Here, we forecast a 55% chance that Evidence Action Beta’s incubator leads to a new top charity by December 2023 that is 1-2x as cost-effective as the giving opportunity to which we would have otherwise directed those funds, and a 30% chance that the grant does not lead to any new top charities by that time. (For more forecasts we made surrounding this grant, see here.)

We incorporated our forecasts, as well as the potential impacts outlined above, into our cost-effectiveness estimate for the grant. Note that the potential upside coming from other funders is a particularly rough estimate that could change substantially with additional research.

Our best guess is that this grant is roughly 9x as cost-effective as cash transfers, but we have spent limited time on this estimate and are highly uncertain about it. For context, we estimate that the average cost-effectiveness of our current top charities is between ~3x and ~12x as cost-effective as cash transfers.
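As a rough illustration of the expected-value reasoning behind an estimate like this, the sketch below combines the forecast probabilities above with hypothetical per-scenario value multiples; the multiples (and the interpretation of the residual 15%) are placeholders chosen for illustration, not inputs from our actual cost-effectiveness estimate.

```python
# A rough sketch of expected-value reasoning of the kind described above.
# The 55% and 30% probabilities come from the forecasts mentioned in the post;
# the residual 15% scenario and all value multiples are hypothetical
# placeholders, not GiveWell's actual model inputs.

scenarios = [
    # (probability, illustrative value relative to cash transfers)
    (0.55, 15.0),  # a new top charity emerges and absorbs funding cost-effectively
    (0.15, 5.0),   # residual probability; interpretation and value are assumptions
    (0.30, 0.0),   # the grant leads to no new top charities
]

assert abs(sum(p for p, _ in scenarios) - 1.0) < 1e-9  # probabilities should sum to 1

expected_multiple = sum(p * value for p, value in scenarios)
print(f"Expected cost-effectiveness: ~{expected_multiple:.0f}x cash transfers")  # ~9x
```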

Risks to the success of the grant

We do see risks to the success of this grant:

  • Few programs may be more cost-effective than our current top charities, or our top charities may remain underfunded for a long time. If Evidence Action Beta fails to identify more cost-effective giving opportunities than GiveWell’s 2017 top charities, or if it only identifies similarly cost-effective giving opportunities while our current top charities remain underfunded, barring any major upside effects, this grant will have failed to make an impact.
  • We expect this partnership with Evidence Action Beta to require a fair amount of senior staff capacity. If other means of identifying cost-effective giving opportunities, such as our work to evaluate policy opportunities, end up seeming more promising, this capacity may have been misused.
Going forward

This grant initiates a partnership with Evidence Action Beta to which we may contribute substantial additional GiveWell Incubation Grant funding in the future. We plan to spend a fair amount of staff time on this ongoing partnership and to follow the work closely.

We look forward to sharing updates and the results.

The post A grant to Evidence Action Beta to prototype, test, and scale promising programs appeared first on The GiveWell Blog.

Olivia Larsen

Publishing more frequent updates to our cost-effectiveness model

5 years 9 months ago

We’ve recently made a number of adjustments to improve our research process. Not all of them are easily visible outside of the organization.

This post is to highlight one of them: Publishing more frequent updates to our cost-effectiveness model throughout the year.

Summary

This post will explain:

  • What changed in how we make updates to our cost-effectiveness model. (More)
  • Why we made this change. (More)
  • How to engage with updates to our model. (More)
What changed?

Last week, we published the ninth and tenth versions of our cost-effectiveness model in 2018. These versions include a number of updates. Version 9 accounts for reductions in malaria incidence among individuals who don’t receive seasonal malaria chemoprevention (SMC), the treatment one of our top charities distributes to prevent malaria, but who might benefit from living near other people who do receive SMC. Version 10 updates the cost per deworming treatment delivered by another top charity, Sightsavers. These changes, and six others incorporated in the two latest versions, are described in our changelog.

Up until last year, we generally updated our cost-effectiveness model once or twice per year. However, as our model grew in complexity and we dedicated more research staff capacity to it, we decided that it would be beneficial to publish updates more regularly. We published our first in this series of more-frequent updates to our cost-effectiveness model in May 2017, as well as “release notes” (PDF) detailing the changes we made and the impact each had on our cost-effectiveness estimates.

We published five versions of our cost-effectiveness model in 2017. In 2018, we shifted from publishing PDF release notes to creating a “changelog”—a public page listing the changes we made to each version of the model, to be updated in tandem with the publication of each new version.

Internally, we moved toward having one staff member, Christian Smith, responsible for managing all changes to our cost-effectiveness model. He aims to publish a new version whenever there is a large, structurally complicated change to the model, or when several small and simple changes have accumulated. Our internal process prioritizes being able to track how each change to the model moves the bottom line.

Changes we’ve published this year include updated inputs based on new research, such as the impact of insecticide resistance on the effectiveness of insecticide-treated nets; changes to inputs we include or exclude from the model altogether, such as removing short-term health benefits from deworming; and cosmetic changes to make the model easier to engage with, such as removing adjustments to account for the influence of GiveWell’s top charities on other actors from a particular tab.

Why we moved to this approach

Although it involves uncertainty, GiveWell’s cost-effectiveness model is a core piece of our research work and an important input into our decisions about which charities to research and recommend. However, we believe it is challenging to engage with our model—to give a sense of the scale, our current model has 16 tabs, some of which use over 100 rows—and to keep up with the changes we’ve made to the model over time.

Our hope is that publishing more frequent and transparent updates brings us closer to our goal of intense transparency and of presenting a clear, vettable case for our recommendations to the public. This approach makes clearer the magnitude of any given change’s impact on our bottom line, and makes the evolution of the model over time easier to track. We also expect that it reduces the likelihood of errors, since fewer elements are being changed at any given time.

How to engage with updates to our model

We update our changelog, viewable here, when we publish a new version.

Going forward, we also plan to publish an announcement to our “Newly published GiveWell materials” email list when we do this. You can sign up to receive alerts from this email address here.

The post Publishing more frequent updates to our cost-effectiveness model appeared first on The GiveWell Blog.

Catherine

September 2018 open thread

5 years 10 months ago

Our goal with hosting quarterly open threads is to give blog readers an opportunity to publicly raise comments or questions about GiveWell or related topics (in the comments section below). As always, you’re also welcome to email us at info@givewell.org or to request a call with GiveWell staff if you have feedback or questions you’d prefer to discuss privately. We’ll try to respond promptly to questions or comments.

You can view our June 2018 open thread here.

The post September 2018 open thread appeared first on The GiveWell Blog.

Catherine

Allocation of discretionary funds from Q2 2018

5 years 10 months ago

From April to June 2018, we received $1.2 million in funding for making grants at our discretion. In addition, GiveWell’s Board of Directors voted to allocate $2.9 million in unrestricted funds to making grants to recommended charities. In this post we discuss:

  • The decision to allocate the $4.1 million to the Against Malaria Foundation (AMF) (70 percent) and the Schistosomiasis Control Initiative (SCI) (30 percent).
  • Our recommendation that donors give to GiveWell for granting to top charities at our discretion so that we can direct the funding to the top charity or charities with the most pressing funding need. For donors who prefer to give directly to our top charities, we continue to recommend giving 70 percent of your donation to AMF and 30 percent to SCI to maximize your impact.
  • Why we have allocated unrestricted funds to making grants to recommended charities.

Allocation of discretionary funds

The allocation of 70 percent of the funds to AMF and 30 percent to SCI follows the recommendation we have made, and continue to make, to donors. For more discussion on this allocation, see our blog post about allocating discretionary funds from the fourth quarter of 2017.

We ask each top charity to provide details of how they will use additional funding each year, as part of our process to update our “room for more funding” summary for each top charity. This year, we have asked for this information by the end of July. We also ask each of our top charities to let us know if they encounter unexpected funding gaps at other times of year. We have not learned of new funding gaps in the last quarter.

What is our recommendation to donors?

We continue to recommend that donors give to GiveWell for granting to top charities at our discretion so that we can direct the funding to the top charity or charities with the most pressing funding need. For donors who prefer to give directly to our top charities, we are continuing to recommend giving 70 percent of your donation to AMF and 30 percent to SCI to maximize your impact. The reasons for this recommendation are the same as in our Q4 2017 post on allocating discretionary funding.

We will complete a full analysis of our top charities’ funding gaps and cost-effectiveness by November and expect to update our recommendation to donors at that time.

Why we have allocated unrestricted funds to making grants to recommended charities

In June, GiveWell’s Board of Directors voted to allocate $2.9 million in unrestricted funds to making grants to recommended charities. We generally use unrestricted funds to support GiveWell’s operating costs. The decision was made to grant out some of the unrestricted funds we hold in accordance with two policies:

  • Our “excess assets” policy specifies that once we surpass a certain level of unrestricted assets, we grant out the excess rather than continue to hold it ourselves. We reviewed our unrestricted asset holdings and projected revenue and expenses for 2018-2020 and concluded that we held $1.8 million more than was required to give us a stable, predictable financial situation (details of how this rule is applied are at the previous link). The Board voted to irrevocably restrict this amount to making grants to recommended charities. Note that we continue to need ongoing donor support for our operations. This decision incorporates our projections for future donations.
  • In order to limit the risks of relying too heavily on any single source of revenue, we cap the amount of funding that we will use from one source to support our operating costs at 20% of our projected annual expenses. In early 2018, we received a donation of $2.1 million in unrestricted funds. Our operating expense budget for 2018 is $4.9 million. Therefore, the Board voted to retain $1.0 million to support operating costs in 2018 and irrevocably restrict $1.1 million to making grants to recommended charities.
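To make the second policy’s arithmetic explicit, here is a small worked sketch using the figures quoted above; the rounding step is a simplification for illustration.

```python
# Worked arithmetic for the 20% single-source cap described above,
# using the figures quoted in the post.

projected_2018_expenses = 4.9e6   # 2018 operating expense budget
single_source_cap = 0.20          # max share of expenses funded by any one source
unrestricted_donation = 2.1e6     # unrestricted donation received in early 2018

retained = round(single_source_cap * projected_2018_expenses, -5)  # 20% of $4.9M, rounded to ~$1.0M
granted = unrestricted_donation - retained                         # $2.1M - $1.0M = $1.1M

print(f"Retained for 2018 operations: ${retained / 1e6:.1f} million")
print(f"Restricted to grants:         ${granted / 1e6:.1f} million")
```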

The post Allocation of discretionary funds from Q2 2018 appeared first on The GiveWell Blog.

Natalie Crispin

Why we don’t use subnational malaria mortality estimates in our cost-effectiveness models

5 years 10 months ago
Summary

We recently completed a small project to determine whether using subnational baseline malaria mortality estimates would make a difference to our estimates of the cost-effectiveness of two of our top charities, the Against Malaria Foundation and Malaria Consortium. We ultimately decided not to include these adjustments because they added complexity to our models and would require frequent updating, while only making a small difference (a 3-4% improvement) to our bottom line.

Though this post is on a fairly narrow topic, we believe this example illustrates the principles we use to make decisions about what to include in our cost-effectiveness model.

Background

Two of our top charities—the Against Malaria Foundation (AMF) and Malaria Consortium’s seasonal malaria chemoprevention program—implement programs to prevent malaria, a leading killer of people in low- and middle-income countries.

One of the core reasons we recommend AMF and Malaria Consortium is their cost-effectiveness: how much impact they have (e.g., cases of malaria prevented, malaria deaths averted) with the funds they receive. Our estimates of charities’ cost-effectiveness aren’t just helpful to us in determining which charities should be GiveWell top charities; we also rely on these estimates to guide our decisions about how to allocate funding between our top charities.

Our cost-effectiveness estimates for AMF and Malaria Consortium use country-wide data on malaria mortality and malaria incidence in the places that both organizations work.[1] However, neither organization serves a whole country—rather, they operate in sub-national regions—so the use of country-level estimates could cause us to either underestimate or overestimate their cost-effectiveness. If, for example, these programs are focused in the areas of the country with the highest malaria burden, using the average burden for the country would lead us to underestimate their cost-effectiveness. So, we completed a project to determine how much of an impact using subnational estimates would have, to consider whether we ought to incorporate this information into our cost-effectiveness analysis.

How we estimated the impact of subnational malaria incidence

AMF distributes insecticide-treated nets to prevent malaria; Malaria Consortium’s seasonal malaria chemoprevention (SMC) program provides preventive anti-malarial drugs. We used estimates of subnational malaria incidence from the Malaria Atlas Project (MAP) to see if regions covered by nets or eligible for SMC had higher or lower incidence than the average in the country in which they are located.[2]

We focused on all areas covered by nets or eligible for SMC (rather than those covered by our top charities, specifically) for two reasons:

  1. Our understanding is that when our top charities contribute resources to a country’s net distribution or SMC programs, the marginal region covered by these additional resources is not necessarily the same as the region to which these resources are assigned (because these resources are fungible with other resources within the national programs).[3]
  2. Our aim is to estimate the cost-effectiveness of funds donated to these organizations in the future. The subnational region where AMF has worked in the past has not historically been a good indicator of the region where it will work in future.
Results for net distributions in countries where AMF works

We looked at geographical variation in malaria incidence in countries where AMF works, weighting each region by the number of nets it currently receives.[4]

The average net delivered in the countries in which AMF works is hung in an area with 0-9% higher malaria incidence than the average in that country, and the weighted average adjustment to AMF’s cost-effectiveness would be 3% (in other words, AMF becomes 3% more cost-effective if we incorporate subnational estimates).[5]
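The calculation behind this weighted average is simple; the sketch below illustrates it with hypothetical regional figures (the real inputs and results are in the linked spreadsheet).

```python
# A minimal sketch of the net-weighted adjustment described above.
# The regional figures are hypothetical placeholders.

regions = [
    # (nets delivered, regional malaria incidence relative to the national average)
    (1_000_000, 1.09),
    (500_000, 1.04),
    (250_000, 0.98),
]

total_nets = sum(nets for nets, _ in regions)
weighted_relative_incidence = sum(nets * ratio for nets, ratio in regions) / total_nets
adjustment = weighted_relative_incidence - 1.0  # positive means higher-than-average burden

print(f"Net-weighted adjustment to cost-effectiveness: {adjustment:+.0%}")
```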

The adjustment by country:

  • Zambia: +9%
  • Uganda: +4%
  • Ghana: +4%
  • Democratic Republic of the Congo: +1%
  • Togo: +1%
  • Malawi: +0%

Results for SMC in countries where Malaria Consortium works

We looked at six countries comprising >95% of Malaria Consortium’s SMC spending and compared malaria incidence in districts eligible for SMC with the country-wide average.[6][7]

The average region eligible for SMC in the countries where Malaria Consortium works has between 2% lower and 17% higher malaria incidence than the average in that country. The weighted average adjustment to Malaria Consortium’s cost-effectiveness would be 4%.[8]

The adjustment by country:

  • Guinea: +17%. Conakry, the capital, is ineligible for SMC and has low incidence.
  • Nigeria: +12%. SMC appears to be targeted in the north, where malaria incidence is slightly higher.
  • Niger: +2%. The majority of the population is either covered or planned to be covered from 2019.
  • Burkina Faso: 0%. All districts are eligible.
  • Mali: 0%. All districts are eligible.
  • Chad: -2%. The four regions with very low malaria incidence (Borkou, Tibesti, Ennedi Est and Ouest) aren’t eligible for SMC, but are sparsely populated.

What we concluded

We decided not to include these adjustments in our cost-effectiveness analysis because they increased complexity, without substantially affecting the bottom line.

When we decide whether to include adjustments in our model in general, we use a framework that first takes our best guess of the likely effect size and then rates each of the remaining questions on a three-point scale.

For the subnational adjustment, the framework looks like this:[9]

  • Best guess of effect size: 3-4%
  • Can it be objectively justified? 3/3. While we have not investigated the MAP data in detail, we would guess that after further investigation, we would conclude it provides a reasonable approximation of subnational malaria incidence.[10]
  • How easily can it be modelled? 3/3. The methodology is clear and simple.
  • Is it consistent with our other cost-effectiveness analyses? 2/3. We could include subnational adjustments for both of our top charities that implement malaria-prevention programs, but we believe it is unlikely there would be sufficient data to do the same for prevalence of worms or vitamin A deficiency (the focus of five of our other seven top charities).

Even though these adjustments can be objectively justified and are fairly easy to model, the bottom-line difference they make to our cost-effectiveness estimates is insufficient to warrant the (moderate) increase in the complexity of our models. These adjustments would also introduce an inconsistency between our methodologies for top charities. As a result, we are not planning to incorporate subnational adjustments at this time.
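One way to picture how this framework feeds the decision is the sketch below; the scores are those listed above, but the 5% effect-size threshold and the combination rule are illustrative assumptions, not GiveWell’s formal decision rule.

```python
# Hypothetical encoding of the inclusion framework described above. The scores
# match the post; the effect-size threshold and combination rule are
# illustrative assumptions rather than a formal GiveWell rule.

from dataclasses import dataclass


@dataclass
class AdjustmentAssessment:
    name: str
    best_guess_effect_size: float     # expected change to the bottom line (e.g. 0.035 = 3.5%)
    objectively_justified: int        # score out of 3
    ease_of_modelling: int            # score out of 3
    consistency_with_other_ceas: int  # score out of 3

    def worth_including(self, min_effect_size: float = 0.05) -> bool:
        # Only add model complexity when the expected effect is large enough
        # and none of the qualitative scores flag a major concern.
        return (self.best_guess_effect_size >= min_effect_size
                and min(self.objectively_justified,
                        self.ease_of_modelling,
                        self.consistency_with_other_ceas) >= 2)


subnational = AdjustmentAssessment("Subnational malaria incidence", 0.035, 3, 3, 2)
print(subnational.worth_including())  # False: a 3-4% effect doesn't justify the added complexity
```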

When would we revisit this conclusion?

We will revisit using subnational malaria mortality estimates if AMF or Malaria Consortium start working in countries where it would make a large difference to the bottom line. We would include subnational adjustments if AMF contributed nets in any of these countries: Djibouti (+500% adjustment), South Africa (+259%), and Swaziland (+126%), where malaria is endemic in some parts of the country but not others. We would also consider subnational adjustments if AMF contributed nets in Namibia (+25%), Kenya (+23%), Madagascar (+14%), or Rwanda (+10%).[11]

We will investigate whether subnational adjustments would make a substantial difference if Malaria Consortium enters additional countries; at this time, we do not have details on which regions are eligible for SMC in countries in which Malaria Consortium is not currently operating.[12]

You can read the internal emails discussing our decision process here.

Notes

[1] In both cases, we rely on reports by Cochrane, an organization that produces systematic reviews and other synthesized research to inform decision-makers. For AMF, we use a decline in all-cause mortality, because the Cochrane review of anti-malarial bed net distributions reports the effect in terms of a reduction in all-cause mortality. For Malaria Consortium, we use a decline in malaria mortality (proxied by a decline in malaria incidence), as the Cochrane review of seasonal malaria chemoprevention reports the effect in terms of a reduction in malaria incidence, but not all-cause mortality. See our cost-effectiveness analysis for more details.

[2] We assume that the regional distribution of malaria incidence is a reasonable proxy for the regional distribution of malaria mortality.

[3] A limitation of this analysis is it does not account for the possibility that AMF and Malaria Consortium are causing locations that are higher priority or lower priority than the average location already covered by nets or eligible for SMC to be covered on the margin. We do not explicitly include estimates of the marginal region funded in our cost-effectiveness analysis because we often have limited information about which regions would be covered with marginal additional funds.

[4] We assume that where nets have been delivered in the past is a good proxy for where new nets will be delivered in the future. The data and calculations are in this spreadsheet.

[5] See Cell J114. We did not include Papua New Guinea (where AMF funds some nets) in this analysis, as MAP only covers countries in Africa.

[6] “The suitability of an area for SMC is determined by the seasonal pattern of rainfall, malaria transmission and the burden of malaria. SMC is recommended for deployment in areas: (i) where more than 60% of the annual incidence of malaria occurs within 4 months (ii) where there are measures of disease burden consistent with a high burden of malaria in children (incidence ≥ 10 cases of malaria among every 100 children during the transmission season) (iii) where SP and AQ [the drugs used to treat children] retain their antimalarial efficacy.” WHO SMC field guide (2013), Pg 8.

[7], [11] The data and calculations are in this spreadsheet.

[8] See Cell C126.

[9] We use these scores as a qualitative guide to help us think through what to include in our cost-effectiveness analysis. You can see the rubric we use to assign scores in this spreadsheet.

[10] You can read more about MAP’s methodology in this paper.

[12] We have not yet prioritized getting details on which regions are eligible for SMC in countries in which Malaria Consortium does not currently work, as this would likely impose a substantial time cost on Malaria Consortium.

The post Why we don’t use subnational malaria mortality estimates in our cost-effectiveness models appeared first on The GiveWell Blog.

James Snowden (GiveWell)

GiveWell’s money moved and web traffic in 2017

6 years ago

GiveWell is dedicated to finding outstanding giving opportunities and publishing the full details of our analysis. In addition to evaluations of other charities, we publish substantial evaluations of our own work. This post lays out highlights from our 2017 metrics report, which reviews what we know about how our research impacted donors. Please note:

  • We report on “metrics years” that run from February through January; for example, our 2017 data cover February 1, 2017 through January 31, 2018.
  • We differentiate between our traditional charity recommendations and the work of the Open Philanthropy Project, which became a separate organization in 2017 and whose work we exclude from this report.
  • More context on the relationships between GiveWell, Good Ventures, and the Open Philanthropy Project can be found here.

Summary of influence: In 2017, GiveWell influenced charitable giving in several ways. The following table summarizes our understanding of this influence.

Headline money moved: In 2017, we tracked $117.5 million in money moved to our recommended charities. Our money moved only includes donations that we are confident were influenced by our recommendations.

Money moved by charity: Our nine top charities received the majority of our money moved. Our seven standout charities received a total of $1.8 million.

Money moved by size of donor: In 2017, the number of donors and the amount donated increased across each donor size category, with the notable exception of donations from donors giving $1,000,000 or more. In 2017, 90% of our money moved (excluding Good Ventures) came from the 20% of our donors who gave $1,000 or more.

Donor retention: The total number of donors who gave to our recommended charities or to GiveWell unrestricted increased about 29% year-over-year to 23,049 in 2017. This included 14,653 donors who gave for the first time. Among all donors who gave in the previous year, about 42% gave again in 2017, up from about 35% who gave again in 2016.

Our retention was stronger among donors who gave larger amounts or who first gave to our recommendations prior to 2015. Of larger donors (those who gave $10,000 or more in either of the last two years), about 73% who gave in 2016 gave again in 2017.

GiveWell’s expenses: GiveWell’s total operating expenses in 2017 were $4.6 million. Our expenses decreased from about $5.5 million in 2016 due to the Open Philanthropy Project becoming a separate organization in June 2017. We estimate that 67% of our total expenses ($3.1 million) supported our traditional top charity work and about 33% supported the Open Philanthropy Project. In 2016, we estimated that expenses for our traditional top charity work were about $2.0 million.

Donations supporting GiveWell’s operations: GiveWell raised $5.7 million in unrestricted funding (which we use to support our operations) in 2017, compared to $5.6 million in 2016. Our major institutional supporters and the six largest individual donors contributed about 49% of GiveWell’s operational funding in 2017.

Web traffic: The number of unique visitors to our website remained flat in 2017 compared to 2016 (when excluding visitors driven by AdWords, Google’s online advertising product).

For more detail, see our full metrics report (PDF).

The post GiveWell’s money moved and web traffic in 2017 appeared first on The GiveWell Blog.

Maryana Pinchuk

Announcing Zusha! as a standout charity

6 years ago

We’ve added the Georgetown University Initiative on Innovation, Development, and Evaluation (gui2de)’s Zusha! Road Safety Campaign (from here on, “Zusha!”) as a standout charity; see our full review here. Standout charities do not meet all of our criteria to be a GiveWell top charity, but we believe they stand out from the vast majority of organizations we have considered. See more information about our standout charities here.

Zusha! is a campaign intended to reduce road accidents. It supports the distribution of stickers in public service vehicles that encourage passengers to speak up and urge drivers to drive more safely. We provided a GiveWell Incubation Grant to Zusha! in January 2017 and discussed it in a February 2017 blog post.

For more information, see our full review. Interested donors can give to Zusha! by clicking “Donate” on that page.

The post Announcing Zusha! as a standout charity appeared first on The GiveWell Blog.

Josh (GiveWell)

June 2018 open thread

6 years 1 month ago

Our goal with hosting quarterly open threads is to give blog readers an opportunity to publicly raise comments or questions about GiveWell or related topics (in the comments section below). As always, you’re also welcome to email us at info@givewell.org or to request a call with GiveWell staff if you have feedback or questions you’d prefer to discuss privately. We’ll try to respond promptly to questions or comments.

You can view our March 2018 open thread here.

The post June 2018 open thread appeared first on The GiveWell Blog.

Catherine

Allocation of discretionary funds from Q1 2018

6 years 1 month ago

In the first quarter of 2018, we received $2.96 million in funding for making grants at our discretion. In this post we discuss:

  • The decision to allocate the $2.96 million to the Against Malaria Foundation (AMF) (70 percent) and the Schistosomiasis Control Initiative (SCI) (30 percent).
  • Our recommendation that donors give to GiveWell for granting to top charities at our discretion so that we can direct the funding to the top charity or charities with the most pressing funding need. For donors who prefer to give directly to our top charities, we continue to recommend giving 70 percent of your donation to AMF and 30 percent to SCI to maximize your impact.

Allocation of discretionary funds

The allocation of 70 percent of the funds to AMF and 30 percent to SCI follows the recommendation we have made, and continue to make, to donors. For more discussion on this allocation, see our blog post about allocating discretionary funds from the previous quarter.
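
For concreteness, the dollar amounts implied by this allocation follow directly from applying the 70/30 split to the $2.96 million received this quarter; a minimal sketch:

    # Illustrative arithmetic for the 70/30 split of Q1 2018 discretionary funds.
    total = 2_960_000  # dollars received for granting at our discretion in Q1 2018
    allocation = {"AMF": 0.70, "SCI": 0.30}

    for charity, share in allocation.items():
        print(f"{charity}: ${share * total:,.0f}")
    # AMF: $2,072,000
    # SCI: $888,000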

We also considered the following possibilities for this quarter:

Helen Keller International (HKI) for stopgap funding in one additional country

We discussed this possibility in our blog post about allocating discretionary funds from the previous quarter. After further discussing this possibility with HKI, our understanding is that (a) the amount of funding needed to fill this gap will likely be small relative to the amount of GiveWell-directed funding that HKI currently holds, and (b) we will have limited additional information in time for this decision round that we could use to compare this new use of funding to HKI’s other planned uses of funding. We will continue discussing this opportunity with HKI and may allocate funding to it in the future. Our current expectation is that we will ask HKI to make the tradeoff between allocating the GiveWell-directed funding it holds to this new opportunity and continuing to hold the funds. Holding the funds gives the current programs more runway (originally designed to fund three years) and gives HKI more flexibility to fund highly cost-effective, unanticipated opportunities in the future. We believe that HKI is currently in a better position to assess cost-effectiveness of the opportunities it has than we are, while we will seek to maximize cost-effectiveness in the longer run by assessing HKI’s track record of cost-effectiveness and comparing that to the cost-effectiveness of other top charities.

We remain open to the possibility that HKI will share information with us that will lead us to conclude that this new opportunity is a better use of funds than our current recommendation of 70 percent to AMF and 30 percent to SCI. In that case, we would allocate funds from the next quarter to fill this funding gap (and could accelerate the timeline on that decision if it were helpful to HKI).

Evidence Action’s Deworm the World Initiative for funding gaps in India and Nigeria

We spoke with Deworm the World about two new funding gaps it has due to unexpected costs in its existing programs in India and Nigeria.

In India, the cost overruns total $166,000. Deworm the World has the option of drawing down a reserve of $5.5 million (from funds donated on GiveWell’s recommendation). The reserve was intended to backstop funds that were expected but not fully confirmed from another funder. Given the small size of the gap relative to the available reserves, our preference is for Deworm the World to use that funding and for us to consider recommending further reserves as part of our end-of-year review of our top charities’ room for more funding.

In Nigeria, there is a funding gap of $1.7 million in the states that Deworm the World is currently operating in. Previous budgets assumed annual treatment for all children, and Deworm the World has since become aware of the existence of areas where worm prevalence is high enough that twice per year treatment is recommended. Our best guess is that AMF and SCI are more cost-effective than Deworm the World’s Nigeria program (see discussion in this post). It is possible that because additional funding would go to support additional treatments in states where programs already operate, the cost to deliver these marginal treatments would be lower. We don’t currently have enough data to analyze whether that would significantly change the cost-effectiveness in this case.

Deworm the World also continues to have a funding gap for expansion to other states in Nigeria. We wrote about this opportunity in our previous post on allocating discretionary funding.

Malaria Consortium for seasonal malaria chemoprevention (SMC)

We continue to see a case for directing additional funding to Malaria Consortium for SMC, as we did last quarter. Our views on this program have not changed. For further discussion, see our previous post on allocating discretionary funding.

What is our recommendation to donors?

We continue to recommend that donors give to GiveWell for granting to top charities at our discretion so that we can direct the funding to the top charity or charities with the most pressing funding need. For donors who prefer to give directly to our top charities, we are continuing to recommend giving 70 percent of your donation to AMF and 30 percent to SCI to maximize your impact. The reasons for this recommendation are the same as in our previous post on allocating discretionary funding.

The post Allocation of discretionary funds from Q1 2018 appeared first on The GiveWell Blog.

Natalie Crispin

New research on cash transfers

6 years 2 months ago
Summary
  • There has been a good deal of discussion recently about new research on the effects of cash transfers, beginning with a post by economist Berk Özler on the World Bank’s Development Impact blog. We have not yet fully reviewed the new research, but wanted to provide a preliminary update for our followers about our plans for reviewing this research and how it might affect our views of cash transfers, a program implemented by one of our top charities, GiveDirectly.
  • In brief, the new research suggests that cash transfers may be less effective than we previously believed in two ways. First, cash transfers may have substantial negative effects on non-recipients who live near recipients (“negative spillovers”). Second, the benefits of cash transfers may fade quickly.
  • We plan to reassess the cash transfer evidence base and provide our updated conclusions in the next several months (by November 2018 at the latest). One reason that we do not plan to provide a comprehensive update sooner is that we expect upcoming midline results from GiveDirectly’s “general equilibrium” study, a large and high-quality study explicitly designed to estimate spillover effects, will play a major role in our conclusions. Results from this study are expected to be released in the next few months.
  • Our best guess is that we will reduce our estimate of the cost-effectiveness of cash transfers to some extent, but will likely continue to recommend GiveDirectly. However, major updates to our current views, either in the negative or positive direction, seem possible.

More detail below.

Background

GiveDirectly, one of our top charities, provides unconditional cash transfers to very poor households in Kenya, Uganda, and Rwanda.

Several new studies have recently been released that assess the impact of unconditional cash transfers, including a three-year follow-up study (Haushofer and Shapiro 2018, henceforth referred to as “HS 2018”) on the impact of transfers that were provided by GiveDirectly. Berk Özler, a senior economist at the World Bank, summarized some of this research in two posts on the World Bank Development Impact blog (here and here), noting that the results imply that cash transfers may be less effective than proponents previously believed. In particular, Özler raises the concerns that cash may:

  1. Have negative “spillovers”: i.e., negative effects on households that did not receive transfers but that live near recipient households.
  2. Have quickly-fading benefits: i.e., the standard of living for recipient households may converge to be similar to non-recipient households within a few years of receiving transfers.

Below, we discuss the topics of spillover effects and the duration of benefits of cash transfers in more detail, as well as some other considerations relevant to the effectiveness of cash transfers. In brief:

  • If substantial spillover effects exist, they have the potential to significantly affect our cost-effectiveness estimates for cash transfers. We are uncertain what we will conclude about spillover effects of cash transfers after deeply reviewing all relevant new literature, but we expect that upcoming midline results from GiveDirectly’s “general equilibrium” study will play a major role in our conclusions. Our best guess is that the general equilibrium study and other literature will not imply that GiveDirectly’s program has large negative spillovers, but we remain open to the possibility that we should substantially negatively update our views after reviewing the relevant literature.
  • Several new studies seem to find that cash may have little effect on recipients’ standard of living beyond the first year after receiving a transfer. Our best guess is that after reviewing the relevant research in more detail we will decrease our estimate of the cost-effectiveness of cash transfers to some extent. In the worst (unlikely) case, this factor could lead us to believe that cash is about 1.5-2x less cost-effective than we currently estimate.
Spillovers

Negative spillovers of cash transfers have the potential to lead us to majorly revise our estimates of the effects of cash; we currently assume that cash does not have major negative or positive spillover effects. At this point, we are uncertain what we will conclude about the likely spillover effects of cash after reviewing all relevant new literature, including GiveDirectly’s forthcoming “general equilibrium” study. Our best guess is that GiveDirectly’s current program does not have large spillover effects, but it seems plausible that we could ultimately conclude that cash either has meaningful negative spillovers or positive spillovers.

We will not rehash the methodological details and estimated effect sizes of HS 2018 in this post. For a basic understanding of the findings and methodological issues, we recommend reading Özler’s posts, the Center for Global Development’s Justin Sandefur’s post, GiveDirectly’s latest post, or Haushofer and Shapiro’s response to Özler’s posts. The basic conclusions that we draw from this research are:

  • Under one interpretation of its findings, HS 2018 measures negative spillover effects that could outweigh the positive effects of cash transfers.[1] (A brief worked illustration of the two competing readings of the underlying figures appears after this list.)
  • We do not yet have a strong view on how likely it is that the negative interpretation of HS 2018’s findings is correct. This would require having a deeper understanding of what we should believe about a number of key methodological issues in HS 2018 (see footnote 2 for two examples). HS 2018 reports that the potential bias introduced by methodological issues may be able to explain much of the estimated spillover effects.[3]
  • The mechanism for what may have caused large negative spillovers (if they exist) in HS 2018 is uncertain, though the authors provide some speculation (see footnote 4). We would increase our credence in the existence of negative spillover effects if there were strong evidence for a particular mechanism.
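
The disagreement between the two readings is easiest to see with the monthly consumption figures (USD PPP) Sandefur quotes, reproduced in footnote 1. The sketch below is only an illustration of that arithmetic, not an analysis of HS 2018 itself:

    # Worked illustration of the two readings of HS 2018, using the monthly
    # consumption figures quoted from Sandefur's post in footnote 1.
    recipients = 235      # treated households in treatment villages
    neighbors = 188       # untreated households in the same villages
    pure_control = 217    # households sampled in control villages

    # Reading 1 ("good data, bad news"): take the control villages at face value.
    direct_effect = recipients - pure_control      # +18 per month
    spillover_effect = neighbors - pure_control    # -29 per month

    # Reading 2 ("bad data, good news"): distrust the control-village sample, so the
    # only trustworthy comparison is within villages; the $47 gap is read as gains to
    # recipients rather than losses to their neighbors.
    within_village_gap = recipients - neighbors    # +47 per month

    print(direct_effect, spillover_effect, within_village_gap)  # 18 -29 47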

One further factor that complicates application of HS 2018’s estimate of spillover effects is that GiveDirectly’s current program is substantially different from the version of its program that was studied in HS 2018. GiveDirectly now provides $1,000 transfers to almost all households in its target villages in Uganda and Kenya; the intervention studied by HS 2018 predominantly involved providing ~$287 transfers to about half of eligible (i.e., very poor) households within treatment villages, and HS 2018 measured spillover effects on eligible households that did not receive transfers.[5] GiveDirectly asked us to note that it now defaults to village-level (instead of within-village) randomization for the studies it participates in, barring exceptional circumstances. Since GiveDirectly’s current program provides transfers to almost all households in its target villages, spillovers of its program may largely operate across villages rather than within villages. These changes to the program and the spillover population of interest may lead to substantial differences in estimated spillover effects.

Fortunately, GiveDirectly is running a large (~650 villages) randomized controlled trial of an intervention similar to its current program that is explicitly designed to estimate the spillover (or “general equilibrium”) effects of GiveDirectly’s program.[6] Midline results from this study are expected to be released in the next few months.
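
To illustrate why the saturation design described in the study registration (footnote 6) can separate direct effects from spillovers, here is a toy simulation. It is a deliberately stylized sketch with made-up effect sizes and a simplified outcome model, not a description of the study’s actual estimation strategy:

    # Toy simulation of the identification idea behind the "general equilibrium" study:
    # village-level randomization plus high/low saturation zones. All numbers are made up.
    import numpy as np

    rng = np.random.default_rng(0)
    n_villages = 650
    true_direct = 20.0      # outcome gain in treated villages
    true_spillover = -6.0   # change in untreated villages per unit of local treatment density

    # Half the villages sit in high-saturation zones (2/3 treated), half in low (1/3 treated).
    high_saturation = rng.permutation(n_villages) < n_villages // 2
    treat_prob = np.where(high_saturation, 2 / 3, 1 / 3)
    treated = rng.random(n_villages) < treat_prob

    # Stylized village-level outcome: baseline noise, plus the direct effect for treated
    # villages, plus a spillover on untreated villages proportional to local treatment density.
    outcome = rng.normal(100.0, 2.0, n_villages)
    outcome = outcome + true_direct * treated + true_spillover * treat_prob * (~treated)

    # Recover both effects by regressing the outcome on treatment status and on
    # local treatment density for untreated villages.
    X = np.column_stack([
        np.ones(n_villages),
        treated.astype(float),
        treat_prob * (~treated),
    ])
    coef, *_ = np.linalg.lstsq(X, outcome, rcond=None)
    print(f"estimated direct effect:   {coef[1]:+.1f} (true {true_direct:+.1f})")
    print(f"estimated spillover slope: {coef[2]:+.1f} (true {true_spillover:+.1f})")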

Since we expect GiveDirectly’s general equilibrium study to play a large role in our view of spillovers, we expect that we will not publish an overview of the cash spillovers literature until we’ve had a chance to review its results. However, we see the potential for negative spillover effects of cash as very concerning and it is a high-priority research question for us; we plan to publish a detailed update that incorporates HS 2018, previous evidence for negative spillovers (such as studies on inflation and happiness), the general equilibrium study, and any other relevant literature in time for our November 2018 top charity recommendations at the latest.

Duration of benefits

Several new studies seem to find that cash may have little effect on recipients’ standard of living beyond the first year after receiving a transfer. Our best guess is that after reviewing the relevant research in more detail we will decrease our estimate of the cost-effectiveness of cash to some extent. In the worst (unlikely) case, this could lead us to believe that cash is about 1.5-2x less cost-effective than we currently estimate.

In our current cost-effectiveness analysis for cash transfers, we mainly consider two types of benefits that households experience due to receiving a transfer:

  1. Increases in short-term consumption (i.e., immediately after receiving the transfer, very poor households are able to spend money on goods such as food).
  2. Increases in medium-term consumption (i.e., recipients may invest some of their cash transfer in ways that lead them to have a higher standard of living in the 1-20 years after first receiving the transfer).

Potential spillover effects aside, our cost-effectiveness estimate for cash has a fairly stable lower bound because we place substantial value on increasing short-term consumption for very poor people, and providing cash allows for more short-term consumption almost by definition. In particular:

  • Our current estimates are consistent with assuming little medium-term benefit of cash transfers. We estimate that about 60% of a typical transfer is spent on short-term goods such as eating more food, and count this as about 40-60% of the benefits of the program.[7] If we were to instead assume that 100% of the transfer was spent on short-term consumption (i.e., none of it was invested), our estimate of the cost-effectiveness of cash would become about 10-30% worse.[8] We think using the 100% short-term consumption estimate may be a reasonable and robust way to model the lower bound of effects of cash given various measurement challenges (discussed below).
  • Nevertheless, our previous estimates of the medium-term benefits of cash transfers may have been too optimistic. Based partially on a speculative model of the investment returns of iron roofs (a commonly-purchased asset for GiveDirectly recipients), most staff assumed that about 40% of a transfer will be invested, and that those investments will lead to roughly 10% greater consumption for 10-15 years.[9] Some new research discussed in Özler’s first post suggests that there may be little return on investment from cash transfers within 2-4 years after the transfer, though the new evidence is somewhat mixed (see footnote 10). Additionally, under the negative interpretation of HS 2018’s results, it finds that cash transfers did not have positive consumption effects for recipients three years post-transfer, though it finds a ~40% increase in assets for treatment households (even in the negative interpretation).[11] Note that any benefits from owning iron roofs were not factored into the consumption estimates in HS 2018.[12] If we imagine the potential worst case scenario implied by these results and assume that the ~40% of a cash transfer that is invested has zero benefits, our cost-effectiveness estimate would get about 2x worse. (A rough worked version of this arithmetic appears after this list.)
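
To make the scenarios in these bullets concrete, the sketch below reproduces the rough arithmetic, treating the share of benefits attributed to short-term consumption as 50% (the midpoint of the 40-60% range above). Our actual model is in the linked spreadsheets; this is only an illustration:

    # Rough reproduction of the arithmetic in the bullets above; illustrative only.
    short_share_of_transfer = 0.60   # fraction of a transfer spent on short-term consumption
    short_share_of_benefits = 0.50   # fraction of modeled benefits from that consumption

    baseline_short = 1.0                                  # normalize short-term benefit
    baseline_medium = baseline_short * (1 - short_share_of_benefits) / short_share_of_benefits
    baseline_total = baseline_short + baseline_medium     # = 2.0 units of benefit

    # Scenario 1: the whole transfer is consumed immediately (nothing invested).
    all_consumed_total = baseline_short / short_share_of_transfer   # ~1.67 units

    # Scenario 2: ~40% is invested as before, but those investments yield no benefit.
    invested_wasted_total = baseline_short                          # 1.0 unit

    print(f"all consumed: {baseline_total / all_consumed_total:.2f}x worse than baseline")
    print(f"investment wasted: {baseline_total / invested_wasted_total:.2f}x worse than baseline")
    # all consumed: 1.20x worse (i.e., ~20% worse, within the ~10-30% range above)
    # investment wasted: 2.00x worse (the ~2x worst case above)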

Our best guess is that we’ll decrease our estimate for the medium-term effects of cash to some extent, though we’re unsure by how much. Challenging questions we’ll need to consider in order to arrive at a final estimate include:

  • If we continue to assume that about 40% of transfers are invested, and that those investments do not lead to any future gains in consumption, then we are effectively assuming that money spent on investments is wasted. Is this an accurate reflection of reality, i.e. are recipients failing to invest transfers in a beneficial manner?
  • Is our cost-effectiveness model using a reasonable framework for estimating recipients’ standard of living over time? Currently, we only estimate cash’s effects on consumption. However, assets such as iron roofs may provide an increase in standard of living for multiple years even if they do not raise consumption. How, if at all, should we factor this into our estimates?
  • GiveDirectly’s cash transfer program differs in many ways from other programs that have been the subject of impact evaluations. For example, GiveDirectly provides large, one-time transfers whereas many government cash transfers provide smaller ongoing support to poor families. How should we apply new literature on other kinds of cash programs to our estimates of the effects of GiveDirectly?
Next steps

We plan to assess all literature relevant to the impact of cash transfers and provide an update on our view on the nature of spillover effects, duration of benefits, and other relevant issues for our understanding of cash transfers and their cost-effectiveness in time for our November 2018 top charity recommendations at the latest.

Notes

1. ↑ From Sandefur’s post: “Households who had been randomly selected to receive cash were much better off than their neighbors who didn’t. They had $400 more assets—roughly the size of the original transfer, with all figures from here on out in PPP terms—and about $47 higher consumption each month. It looked like an amazing success.
 
“But when Haushofer and Shapiro compared the whole sample in these villages—half of whom had gotten cash, half of whom hadn’t—they looked no different than a random sample of households in control villages. In fact, their consumption was about $6 per month less ($211 versus $217 a month).
 
“There are basically two ways to resolve this paradox:
 
“1) Good data, bad news. Cash left recipients only modestly better off after three years (lifting them from $217 to $235 in monthly consumption), and instead hurt their neighbors (dragging them down from $217 to $188 in monthly consumption). Taking the data at face value, this is the most straightforward interpretation of the results.
 
“2) Bad data, good news. Alternatively, the $47 gap in consumption between recipients and their neighbors is driven by gains to the former not losses to the latter. The estimates of negative side-effects on neighbors are driven by comparisons with control villages where—if you get into the weeds of the paper—it appears sampling was done differently than in treatment villages. (In short, the $217 isn’t reliable.)”

2. ↑ One methodological issue is how to deal with attrition, as discussed in Haushofer and Shapiro 2018, Pg. 9: “However, there is a statistically significant difference in attrition levels for households in control villages relative to households in treatment villages from endline 1 to endline 2: 6 percentage points more pure control households were not found at endline 2 relative to either group of households in treatment villages. In the analysis of across-village treatment effects and spillover effects we use Lee bounds to deal with this differential attrition; details are given below.”
 
Another potential issue as described by Özler’s post: “The short-term impacts in Haushofer and Shapiro (2016) were calculated using within-village comparisons, which was a big problem for an intervention with possibility of spillovers, on which the authors had to do a lot of work earlier (see section IV.B in that paper) and in the recent paper. They got around this problem by arguing that spillover effects were small and insignificant. Of course, then came the working paper on negative spillovers on psychological wellbeing mentioned above and now, the spillover effects look sustained and large and unfortunately negative on multiple domains three years post transfers.
 
“The authors estimated program impacts by comparing T [treatment group] to S [spillover group], instead of the standard comparison of T to C [control group], in the 2016 paper because of a study design complication: researchers randomly selected control villages, but did not collect baseline data in these villages. The lack of baseline data in the control group is not just a harmless omission, as in ‘we lose some power, no big deal.’ Because there were eligibility criteria for receiving cash, but households were sampled a year later, no one can say for certain if the households sampled in the pure control villages at follow-up are representative of the would-be eligible households at baseline.
 
“So, quite distressingly, we now have two choices to interpret the most recent findings:
 
“1) We either believe the integrity of the counterfactual group in the pure control villages, in which case the negative spillover effects are real, implying that total causal effects comparing treated and control villages are zero at best. Furthermore, there are no ITT [intention to treat] effects on longer-term welfare of the beneficiaries themselves – other than an increase in the level of assets owned. In this scenario, it is harder to retain confidence in the earlier published impact findings that were based on within-village comparisons – although it is possible to believe that the negative spillovers are a longer-term phenomenon that truly did not exist at the nine-month follow-up.
 
“2) Or, we find the pure control sample suspect, in which case we have an individually randomized intervention and need to assume away spillover effects to believe the ITT estimates.”

3. ↑ Haushofer and Shapiro 2018, Pgs. 24-25: “These results appear to differ from those found in the initial endline, where we found positive spillover effects on female empowerment, but no spillover effects on other dimensions. However, the present estimates are potentially affected by differential attrition from endline 1 to endline 2: as described above, the pure control group showed significantly greater attrition than both treatment and spillover households between these endlines. To assess the potential impact of attrition, we bound the spillover effects using Lee bounds (Table 8). This analysis suggests that differential attrition may account for several of these spillover effects. Specifically, for health, education, psychological well-being, and female empowerment, the Lee bounds confidence intervals include zero for all sample definitions. For asset holdings, revenue, and food security, they include zero in two of the three sample definitions. Only for expenditure do the Lee bounds confidence intervals exclude zero across all sample definitions. Thus, we find some evidence for spillover effects when using Lee bounds, although most of them are not significantly different from zero after bounding for differential attrition across treatment groups.”

4. ↑ Haushofer and Shapiro 2018, Pg. 3: “We do not have conclusive evidence of the mechanism behind spillovers, but speculate it could be due to the sale of productive assets by spillover households to treatment households, which in turn reduces consumption among the spillover group. Though not always statistically different from zero, we do see suggestive evidence of negative spillover effects on the value of productive assets such as livestock, bicycles, motorbikes and appliances. We note that GiveDirectly’s current operating model is to provide transfers to all eligible recipients in each village (within village randomization was conducted only for the purpose of research), which may mitigate any negative spillover effects.”

5. ↑ See this section of our cash transfers intervention report.

6. ↑ From the registration for “General Equilibrium Effects of Cash Transfers in Kenya”: “The study will take place across 653 villages in Western Kenya. Villages are randomly allocated to treatment or control status. In treatment villages, GiveDirectly enrolls and distributes cash transfers to households that meet its eligibility criteria. In order to generate additional spatial variation in treatment density, groups of villages are assigned to high or low saturation. In high saturation zones, 2/3 of villages are targeted for treatment, while in low saturation zones, 1/3 of villages are targeted for treatment. The randomized assignment to treatment status and the spatial variation in treatment intensity will be used to identify direct and spillover effects of cash transfers.”
 
Note that this study will evaluate a variant of GiveDirectly’s program that is different from its current program in that it will not provide transfers to almost all households in target villages. The study will estimate the spillover effects of cash transfers on ineligible (i.e., slightly wealthier) households in treatment villages, among other populations. Since GiveDirectly’s standard program now provides transfers to almost all households in its target villages, estimates of effects on ineligible households may need to be extrapolated to other populations of interest (e.g., households in non-target villages) to be most relevant to GiveDirectly’s current program.

7. ↑ For our estimate of the proportion of the benefits of cash transfers that come from short-term consumption increases, see row 30 of the “Cash” sheet in our 2018 cost-effectiveness model.
 
For our estimate of the proportion of transfers that is spent on short-term consumption, we rely on results from GiveDirectly’s randomized controlled trial, which shows investments of $505.94 (USD PPP) (within villages, or $601.88 across villages) on a transfer of $1,525 USD PPP, or about one-third of the total. See Pg. 117 here and Pg. 1 here for total transfer size.

8. ↑ See a version of our cost-effectiveness analysis in which we made this assumption here. The calculations in row 35 of the “Cash” tab show how assuming that 0% of the transfer is invested would affect staff members’ bottom line estimates.

9. ↑ See rows 5, 8, and 14, “Cash” sheet, 2018 Cost-Effectiveness Analysis – Version 1.

10. ↑ See this section of Özler’s post: “This new paper and Blattman’s (forthcoming) work mentioned above join a growing list of papers finding short-term impacts of unconditional cash transfers that fade away over time: Hicks et al. (2017), Brudevold et al. (2017), Baird et al. (2018, supplemental online materials). In fact, the final slide in Hicks et al. states: ‘Cash effects dissipate quickly, similar to Brudevold et al. (2017), but different to Blattman et al. (2014).’ If only they were presenting a couple of months later…”
 
See also two other recent papers that find positive effects of cash transfers beyond the first year: Handa et al. 2018 and Parker and Vogl 2018. The latter finds intergenerational effects of a conditional cash transfer program in Mexico, so may be less relevant to GiveDirectly’s program.

11. ↑ Haushofer and Shapiro 2018, Abstract: “Comparing recipient households to non-recipients in distant villages, we find that transfer recipients have 40% more assets (USD 422 PPP) than control households three years after the transfer, equivalent to 60% of the initial transfer (USD 709 PPP).”
 
Haushofer and Shapiro 2018, Pg. 28: “Since we have outcome data measured in the short run (~9 months after the beginning of the transfers) and in the long-run (˜3 years after the beginning of transfers), we test equality between short and long-run effects…Results are reported in Table 9. Focusing on the within-village treatment effects, we find no evidence for differential effects at endline 2 compared to endline 1, with the exception of assets, which show a significantly larger treatment effect at endline 2 than endline 1. However, this effect is largely driven by spillovers; for across-village treatment effects, we cannot reject equality of the endline 1 and endline 2 outcomes. This is true for all variables in the across-village treatment effects except for food security and psychological well-being, which show a smaller treatment effect at endline 2 compared to endline 1. Thus, we find some evidence for decreasing treatment effects over time, but for most outcome variables, the endline 1 and 2 outcomes are similar.”

12. ↑ Haushofer and Shapiro 2018, pgs. 32-33: “Total consumption…Omitted: Durables expenditure, house expenditure (omission not pre-specified for endline 1 analysis)”

The post New research on cash transfers appeared first on The GiveWell Blog.

Josh

GiveWell’s outreach and operations: 2017 review and 2018 plans

6 years 2 months ago

This is the third of three posts that form our annual review and plan for the following year. The first two posts covered GiveWell’s progress and plans on research. This post reviews and evaluates GiveWell’s progress in outreach and operations last year and sketches out some high-level goals for the current year. A separate post will look at metrics on our influence on donations in 2017, which we aim to release by the end of June 2018.

Summary

Outreach: Before 2017, outreach wasn’t a major organizational priority at GiveWell (more in this 2014 blog post). In our plans for 2017, we wrote that we planned to put more emphasis on outreach, but were at the early stages of thinking through what that might involve. In the second half of 2017, we experimented with a number of different approaches to outreach (more on the results below). In 2018, we plan to increase the resources we devote to outreach primarily by hiring a Head of Growth and adding staff to improve our post-donation follow-up with donors.

Operations: In 2017, we completed the separation of GiveWell and the Open Philanthropy Project and increased our operations capacity with three new hires. In 2018, our top priorities are to hire a new Director of Operations (which we have now done), maintain our critical functions, and prepare our systems for increased growth in outreach.

Outreach 2017 review and 2018 plans

Before 2017, outreach wasn’t a major organizational priority at GiveWell (more in this 2014 blog post). In our plans for 2017, we wrote that we planned to put more emphasis on outreach, but were at the early stages of thinking through what that might involve.

We currently have one staff member, Catherine Hollander, who works on outreach full-time. Two others, Tracy Williams and Isabel Arjmand, each spend significant time on outreach. From August 2017, our Executive Director, Elie Hassenfeld, also started to allocate a significant amount of his time to outreach.

How did we do in 2017?

In 2017, we focused on experimentation. In brief, we found that:

  • Advertising on podcasts has had strong results. Using the methodology described in this blog post, our best guess is that each dollar we spent on podcast advertising returned $5-14 in donations to our top charities.
  • Increasing the consistency of our communication with members of the media had strong results for the time invested.
  • Retaining a digital marketing consultant yielded strong results.
  • Retaining a PR firm to generate media mentions did not have positive results.
  • We’ve had a limited number of conversations with high net worth donors. We don’t yet have enough information to conclude whether this was a good use of time.

You can see our estimates of the five-year net present value of donations generated by each of these activities here. Overall, we spent approximately $200,000 and devoted significant staff time to this work. Our best estimate is that these efforts resulted in $2.5 million to $5.9 million in additional donations to our recommended charities.
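
The “five-year net present value” figures referenced above follow the general shape sketched below: first-year donations attributed to an activity, decayed by an assumed retention rate and discounted back to the present. The retention rate, discount rate, and dollar amounts in this sketch are illustrative placeholders, not the assumptions in the linked spreadsheet:

    # Generic shape of a five-year net-present-value calculation for donations attributed
    # to an outreach activity. All parameter values here are illustrative placeholders.
    def five_year_npv(first_year_donations, retention_rate=0.5, discount_rate=0.04, years=5):
        return sum(
            first_year_donations * (retention_rate ** t) / ((1 + discount_rate) ** t)
            for t in range(years)
        )

    spend = 200_000  # approximate 2017 outreach spending, excluding staff time
    npv = five_year_npv(first_year_donations=1_500_000)  # hypothetical first-year donations
    print(f"illustrative 5-year NPV: ${npv:,.0f}; return per dollar spent: {npv / spend:.1f}x")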

We conclude:

  • New work on outreach had a high return on investment in 2017.
  • Some activities, such as podcast advertising and digital marketing improvements, have shown particularly strong results and should be scaled up.

What are our priorities for 2018?

Our marketing funnel has three stages:

  1. Awareness/acquisition: more people hear about GiveWell and visit the website,
  2. Conversion: more people who visit the site donate, and
  3. Retention: over time, donors maintain or increase their donations.

Our current working theory is that we should prioritize (though not exclusively) improving the bottom of this funnel (retention and conversion) before moving more people through it. We also plan to scale up the activities that worked well in 2017 and to continue experimenting with different approaches.
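
A toy calculation illustrates why we think bottom-of-funnel improvements are worth prioritizing: gains in conversion and retention multiply the value of every visitor who already reaches the site. All numbers below are made up for illustration:

    # Toy version of the three-stage funnel described above, with made-up numbers.
    def annual_value(visitors, conversion_rate, avg_donation, retention_rate, years=5):
        donors = visitors * conversion_rate
        # Value of a donor over `years`, assuming simple geometric retention.
        per_donor = avg_donation * sum(retention_rate ** t for t in range(years))
        return donors * per_donor

    base = annual_value(visitors=100_000, conversion_rate=0.01, avg_donation=500, retention_rate=0.40)
    better_bottom = annual_value(100_000, conversion_rate=0.012, avg_donation=500, retention_rate=0.48)
    more_traffic = annual_value(120_000, conversion_rate=0.01, avg_donation=500, retention_rate=0.40)

    print(f"baseline: ${base:,.0f}")
    print(f"+20% conversion and retention: ${better_bottom:,.0f}")
    print(f"+20% traffic only: ${more_traffic:,.0f}")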

Our primary outreach priorities (which we expect to achieve and devote substantial capacity to) for 2018 are:

  1. Hire a Head of Growth to improve our efforts to acquire and convert new donors via our website. Over the long term, the Head of Growth will be responsible for digital marketing.

    What does success look like? Hire a Head of Growth.

  2. Improve the post-donation experience. We believe we have substantial room to improve our post-donation communication with donors. We have hired a consultant to help us improve our process.

    What does success look like? Significantly improve our process for post-donation follow-up before giving season 2018.

    At this point, we’re still in the earliest stages of figuring out how we’ll do this, so we don’t have concrete goals for the year beyond finalizing our plan in the next few months. Our stretch goal for the year is to succeed in achieving measured improvement in our dollar retention rate/lifetime value of each donor.

Our secondary outreach priorities (which we expect to achieve, but not devote substantial capacity to) for 2018 are:

  1. Continue advertising on podcasts. This advertising was particularly successful in 2017. We want to systematically assess podcast advertising opportunities and increase our podcast advertising. We plan to spend approximately $250,000 to $350,000 on podcast advertising this year.

    What does success look like? Advertise on new podcasts and measure results to decide how much to spend in 2019.

  2. Receive coverage in major news outlets. This has led to increased donations in the past.

    What does success look like? Pitch major news outlets on at least five stories in total and get at least one story covered.

  3. Deepen relationships with the effective altruism community. We want to deepen our relationships with groups in the effective altruism community doing outreach, particularly to high net worth donors.

For a list of other potentially promising projects we’re unlikely to prioritize this year, see this spreadsheet.

Operations 2017 review and 2018 plans

In 2017, we increased our operations staff capacity, made a number of changes to our internal systems, and completed the separation of GiveWell and the Open Philanthropy Project. In addition to maintaining critical functions, our highest priorities for 2018 are to (i) appoint a new Director of Operations and (ii) make improvements to our processes across the board to prepare our systems for major growth in outreach.

How did we do in 2017?

We made a number of improvements to our operations. In brief:

  • We completed the separation of GiveWell and the Open Philanthropy Project.
  • Donations: We hired two new members of our donations team, which allowed us to process donations consistently notwithstanding increased volume. We also added Betterment and Bitpay (for Bitcoin) as donation options.
  • Finance: We hired a Controller. We rolled out a few systems to improve the efficiency of our internal processes (Expensify, Bill.com, and others).
  • Social cohesion: We created a regular schedule for visit days for remote staff and staff events to maintain cohesion.

In January 2018, Sarah Ward, our former Director of Operations, departed. Natalie Crispin (Senior Research Analyst) covered her previous responsibilities while we searched for a new hire to take them on.

What are our priorities for 2018?

In the first half of 2018, we aim to move from maintaining critical functions to positioning the organization to grow.

Our main priorities for the first half of 2018 are to:

  1. Appoint a new Director of Operations (complete). In April 2018, we hired Whitney Shinkle as our new Director of Operations. Between January and April 2018, Natalie Crispin served as our interim Director of Operations.
  2. Prepare our systems for major growth in outreach, which we expect to lead to increases in spending, staff, and donations.
  3. Maintain critical operations across domains: donations, finance, HR, office, website, recruiting, and staff cohesion.

Major operations projects we aim to complete in the first half of 2018 include:

  • A significant improvement in our approach to budgeting, making it easier for us to share updated actual spending versus budget.
  • We retained a compensation consultant to help us benchmark GiveWell staff compensation to comparable organizations.
  • We published our 2016 metrics report and plan to publish our 2017 money moved report by the end of June.

The post GiveWell’s outreach and operations: 2017 review and 2018 plans appeared first on The GiveWell Blog.

James Snowden (GiveWell)