Grading Aid with AidGrade

You know what sucks? When you arrive somewhere for a month-long stay and discover that your blog is blocked there.

But I’m back in New York and on unrestricted internet just in time for the holiday blogging season. (That’s a thing, right?)

It’s also the holiday charitable giving season, and before I return to your regularly scheduled atrocity humor, I’d like to put in a quick plug for a not-for-profit organization that can help you figure out how to get the most bang for your donated buck: AidGrade.

AidGrade is dedicated to identifying development projects that actually work. They do this by aggregating impact evaluations of hundreds of individual interventions, and performing rigorous statistical meta-analyses of the data. The results show where aid is effective, and where it isn’t. Neat, huh?

For the development nerds among us, AidGrade also offers the opportunity to get hands-on with the data. Using their nifty app, you can run your very own meta-analysis. Want to focus on a specific set of countries or exclude a study whose authors you hate? Soon you’ll be able to do that too. (And in the meantime, you can download all of AidGrade’s data and do whatever you want with it.)
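For the truly committed nerds, here's roughly what the statistics under the hood look like. This is a minimal sketch of a random-effects meta-analysis (the DerSimonian–Laird method, one standard approach); the effect sizes and standard errors below are made up for illustration and are not AidGrade's actual data or schema:

```python
# Minimal random-effects meta-analysis via inverse-variance weighting
# (DerSimonian-Laird). Illustrative numbers only -- not AidGrade's data.

import math

# Hypothetical per-study effect estimates and standard errors
# (e.g., standardized effects of an intervention on an outcome).
effects = [0.10, 0.25, -0.05, 0.18, 0.30]
std_errs = [0.08, 0.12, 0.10, 0.09, 0.15]

# Fixed-effect weights: inverse of each study's variance.
w = [1.0 / se**2 for se in std_errs]
fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)

# Cochran's Q measures between-study heterogeneity.
q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
df = len(effects) - 1

# DerSimonian-Laird estimate of between-study variance (tau^2).
c = sum(w) - sum(wi**2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c)

# Random-effects weights fold tau^2 into each study's variance.
w_re = [1.0 / (se**2 + tau2) for se in std_errs]
pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
se_pooled = math.sqrt(1.0 / sum(w_re))

print(f"Pooled effect: {pooled:.3f} (SE {se_pooled:.3f}, tau^2 {tau2:.3f})")
```

The random-effects adjustment matters here: studies of the same intervention in different countries rarely estimate exactly the same underlying effect, so pooling them as if they did (a pure fixed-effect model) would overstate your certainty.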

If you’re interested in supporting AidGrade’s work, there’s an Indiegogo campaign currently running, and a donor will match your contribution between now and December 17th. Check it out.

*Note: I am a member of AidGrade’s Board of Directors.

Kate Cronin-Furman

Comments

  1. While I’m glad to see the work being done by AidGrade – it’s very thorough and their paper on meta-analysis is very good – I’m a bit concerned when you describe their approaches as able to “show where aid is effective, and where it isn’t.”

    As you know well, working on rights and justice issues, the pool of research on which AidGrade can run its analyses is limited to studies amenable to rigorous methods that result in publication – primarily RCTs. Such approaches, while very powerful, aren’t applicable to all types of work, including areas such as setting up transitional justice procedures, training supreme court or constitutional court judges, and a lot of other aid work that focuses on human rights and justice.

    Implying that their analyses show us what works and what doesn’t leaves out this important bias in the sample from which they draw, and might well lead people to conclude that a negative determination has been made about all sorts of aid-funded work around access to justice, citizen oversight, parliamentary strengthening, standing up national audit institutions, human rights defense, etc., that doesn’t appear in their considerations at all. It’s precisely this conflation of “what we have rigorous evidence for as working” with “the universe of what works” that has people concerned about this particular push for, and definition of, evidence of aid working. So please be more nuanced around these types of claims!
