We just attended the "Super Heroes of Online Fundraising: Become a Data-Driven Strategist" breakout session at the 2010 Nonprofit Technology Conference (10NTC) in Atlanta. The session was run by Sarah DiJulio of M+R Strategic Services, who raised some thought-provoking points and questions for online fundraisers.
For instance, when you use data to drive your strategic decisions, you'll make better decisions, avoid mistakes, and achieve a higher return on investment. But how do you transform your organizational culture to become data-driven? What kind of data are we talking about, anyway? And how do you sift through the massive volumes of online data to discover what is truly relevant? A superhero costume is not required.
The Wilderness Society and AARP both tried targeted $5 vs. $10 ask email campaigns. AARP ran a deadline-driven, goal-oriented messaging campaign that produced a 144% increase in response rates over its usual average, while the Wilderness Society's email campaign actually underperformed. The key difference: the Wilderness Society targeted non-donors, while AARP targeted non-donor activists (i.e., people with a history of taking political action through its emails). It goes to show that when supporters are already engaged in some form (e.g., advocacy), they are more likely to donate.
What should you test as an email campaigner? One thing M+R has tried is monthly giving asks, using JavaScript pop-ups over the donation form that offer a monthly option. Another example is Mercy Corps, which lets people start the donation process right from the home page. Then there is Amnesty International, which does something similar but shows users which program area the donation will go to -- a person from Amnesty in the audience raised his hand and shouted, "It worked!"
But in testing, there are several important questions you have to ask:
- What goal will this help you meet?
- How much of a lift can you expect? Is this likely to produce significant improvements?
- How long will it take to get statistically significant results?
- How much time will it take to implement?
- Is the lesson you learn applicable to future efforts?
- How will you evaluate the results?
One of the things M+R heard from Amnesty was that adding the VeriSign logo next to a donation button improved conversions. M+R ran with the idea for other clients and found that it produced a 12% increase in response rates for the nonprofits that used it.
So how do you evaluate your test results? Try creating a data grid, and make sure your sample sizes give you statistically significant results (i.e., you might have to call that stats-geek friend from grad school). A few rules of thumb to follow:
- Bigger sample sizes are better
- 400 responses is usually valid
- The smaller the metric you are measuring, the bigger the sample you will need (e.g., on a list of 100,000 people, a 4% response rate yields 4,000 responses, so you can run an A/B test with groups of 10,000 each and still expect roughly 400 responses per group)
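For the statistically curious, the "is this difference real?" question behind these rules of thumb is usually answered with a standard two-proportion z-test. Here's a minimal sketch in Python; the function name and the example numbers are illustrative (two groups of 10,000, as in the rule of thumb above), not figures from the session:

```python
import math

def ab_test_significant(responses_a, size_a, responses_b, size_b, z_crit=1.96):
    """Two-proportion z-test: is the difference in response rates
    between email versions A and B significant at the 95% level?"""
    rate_a = responses_a / size_a
    rate_b = responses_b / size_b
    # Pooled response rate under the null hypothesis (no real difference)
    pooled = (responses_a + responses_b) / (size_a + size_b)
    std_err = math.sqrt(pooled * (1 - pooled) * (1 / size_a + 1 / size_b))
    z = (rate_a - rate_b) / std_err
    return abs(z) >= z_crit, z

# Hypothetical test: version A gets 460 responses from 10,000 (4.6%),
# version B gets 400 from 10,000 (4.0%)
significant, z = ab_test_significant(460, 10_000, 400, 10_000)
```

With samples this large, even that 0.6-point gap clears the 95% confidence bar; shrink the groups to 1,000 each and the same rates would not, which is why bigger samples let you detect smaller lifts.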
As most people know, M+R invests a lot in nonprofit data research, and we're all grateful for it. But tactics like the ones Sarah showed today can be adapted to a nonprofit of any size. It just takes a little planning and guidance. Once those systems are in place, being a data-driven superstar becomes second nature!