by Kyla Winchester

The last two years have brought a lot of change—and things are still changing. Let’s consider: the “great resignation,” grocery delivery, virtual appointments and workshops, remote learning, less commuting, “nesting” (home office, home theatre, home retreat), inflation, supply chain issues, child care, misinformation… just to start.

Yes, yes, we all know this by now, you may be thinking. What’s the point? …Consider all these changes in light of your organization’s best practices. When did they originate? How were they developed and refined over time? What testing were they based on? Are there nuances, depending on channel or segment, for example? And consider: how have these best practices been affected by the thing that’s been affecting us daily and in every conceivable way for two years?

Think about all the best-practice assumptions you make in a single DM letter:

  • They like photos like ______
  • taglines that are more _____ in tone do better
  • the OE is ______
  • our stories are always about ______
  • our ask amounts are calculated like ______
  • the segment variables are phrased like ______
  • the letter is from ______
  • the final PS always includes ________
  • and so on and so on.

Now consider: what’s changed for these best practices in the last two years? For most of us, it’s not clear. But best practice is to periodically test best practices. …Especially if you don’t know whether:

  • they are based on knowledge that might be outdated,
  • they were borrowed from another org whose practices might not apply to yours,
  • they were expanded to everything when they were really only confirmed for one thing (for example, acquisition campaigns, or social posts), or
  • they were specific results that got generalized (for example, “in this very sad email, a ‘positive’ image resulted in more clicks” becomes positive images in all emails).

To provide a real-world example: in Googling nonprofit best practices, I found an article on best practices that opened with the assumption that donors are digital-first. Mind-bogglingly, this was dropped in without context or explanation, and of course without even one tiny test or crumb of backing data. It’s an extreme example (the people who send DM might disagree that donors generally are digital-first, and the people who answer donor phone calls might disagree, and the donors who receive handwritten notes might disagree, and the donors who attend events… you get the idea), but consider the effect such a “best practice” might have if it shapes all subsequent fundraising, or is the basis for your donor stewardship strategy. If that initial assumption is misguided or just plain wrong, everything that comes after it will suffer: reduced response rates, reduced renewal rates, reduced average gift, reduced engagement….

Even for those things you’ve tested—would results come up the same again? Was your testing validated on other channels, with other segments? (Maybe online test results only apply online?) Did you compare apples to apples, i.e., was everything the same except the tagline? Did you test one variable at a time, or did you try to test multiple things at once?
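If your channels deliver enough volume, a single-variable test can be sanity-checked with a simple significance calculation. The sketch below, in Python using only the standard library, applies a two-proportion z-test to a hypothetical tagline test; the function name, panel sizes, and response counts are all illustrative assumptions, not figures from this article.

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in response rates.

    conv_* = number of responses, n_* = pieces mailed per panel.
    Returns (z, p_value).
    """
    p_a = conv_a / n_a
    p_b = conv_b / n_b
    # Pooled response rate under the null hypothesis (no real difference)
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF (via math.erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical mailing: control letter vs. new tagline, everything else identical
z, p = two_proportion_z_test(conv_a=120, n_a=5000, conv_b=150, n_b=5000)
print(f"z = {z:.2f}, p = {p:.3f}")
```

A p-value above your chosen threshold (commonly 0.05) means the observed lift could plausibly be noise, so the “winning” tagline hasn’t actually been proven to win.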

Resources are still tight, and time is still—very literally for fundraisers—money, so we can never test and verify as much as we’d like. If you don’t know where to start, take a couple of basic assumptions underlying your best practices and reconfigure them into testable premises. If they hold, you are probably working on generally solid ground; if they don’t, you have justification for more in-depth testing, which will almost certainly bear fruit.
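One practical step in turning an assumption into a testable premise is checking, before you mail anything, whether your list is even big enough to detect the lift you expect. This is a rough normal-approximation sketch with hypothetical numbers, fixed at a 5% significance level and 80% power; treat it as a planning estimate, not a prescription.

```python
import math

def sample_size_per_arm(p_base, p_target):
    """Approximate pieces needed per test panel to detect a change
    in response rate from p_base to p_target.

    Normal approximation for a two-arm test; uses fixed critical
    values for a two-sided 5% significance level and 80% power.
    """
    z_alpha = 1.96  # two-sided, alpha = 0.05
    z_beta = 0.84   # power = 0.80
    variance = p_base * (1 - p_base) + p_target * (1 - p_target)
    n = ((z_alpha + z_beta) ** 2 * variance) / (p_base - p_target) ** 2
    return math.ceil(n)

# Hypothetical: how many pieces per panel to detect a lift
# from a 2% response rate to a 2.5% response rate?
print(sample_size_per_arm(0.02, 0.025))
```

Small expected lifts demand surprisingly large panels; if the number that comes back is bigger than your file, that test can’t give you a trustworthy answer, and your resources are better spent testing something else.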