Whether "maintaining a win," "killing a bill," or simply, "stopping bad stuff from happening," the field has long acknowledged a unique aspect of advocacy: defense is as important as policy wins.
Yet despite this acknowledgment, defense-specific approaches have lagged behind other developments in advocacy and policy change evaluation. In our brief, “When the Best Offense is a Good Defense: Understanding and Measuring Advocacy on the Defense,” we talked with a number of seasoned advocates and funders across several issue areas to explore this topic.
Based on our findings, we present four recommendations for measuring and evaluating advocacy when you find yourself on the defense.
1. Continue to use field-accepted advocacy and policy change evaluation approaches rigorously.
The advocacy and policy change evaluation field has long held standards for quality, including the use of strong theories of change and recognition of the value of interim outcomes. There’s no reason for defensive advocacy evaluation to depart from these standards.
Being more explicit about the assumptions behind the work and the hypotheses for why certain outcomes are expected, and thinking more crisply about the right outcomes for defensive work, can only strengthen advocacy strategy and measurement.
For ongoing learning and “real time” measurement and refinement, tools and methods already exist to measure things such as political will, public will, and message framing that would work equally well in defensive contexts. Other methods, such as Intense Period Debriefs or Before- and After-Action Reviews, can help teams learn from more reactive efforts by capturing lessons learned and outcomes observed after the heat of the moment.
2. Build race equity into your evaluation and analysis.
When defensive “wins” may mean preserving the status quo or mitigating losses, funders and evaluators should ask which groups stand to benefit or lose under different defensive results and consider differential effects in light of historical and structural inequities. Embedding a structural racism lens in any evaluation can be supported by:
- collecting and analyzing disaggregated data to understand effects on meaningfully different populations (e.g., urban Native Americans and Native Americans living on reservations) and different intersectional identities (e.g., Black middle-income women);
- ensuring limitations of summarized or administrative data are clearly described;
- considering historical and ongoing inequities and privileges and their impact on different groups; and
- framing findings in terms of systemic issues (e.g., if reporting differential rates of high school graduation by race/ethnicity or language spoken at home, also share differential availability of Advanced Placement classes or other academic supports).1
Consider equity, power, and disproportionate benefits and risks throughout the evaluation, from the questions asked to data collection, interpretation, and the sharing of findings.2
3. Invest in learning from defensive successes.
We know of a number of efforts that have sought to understand the contribution of a particular advocacy organization or funder to a big policy win, and evaluators continue to apply more and better techniques for examining, after the fact, how policy change actually happened.3
We don’t know, however, of cases where funders have applied the same rigor and analysis to defensive results. We believe more rigorous evaluative work around maintaining a prior win or softening the blow of a loss could yield useful lessons for the sector’s understanding of this work. While advocacy and policy change will never yield a recipe-like best practice, continued exploration of what works under complex, political conditions can only help funders and advocates become more strategic and rigorous in their thinking.
4. Consider an “ecosystem” approach to reporting (see our free tool below).
One of the frustrations we heard, largely from the funder side, was around grantee reporting. Reports do little to show how well the work is going for an individual organization or to provide insight from a longer-range vantage point on the issue.
One possible solution is to use or support a cluster- or issue-level report. In our brief, we provide a sample tool we created for a cluster of policy grantees working on federal education policy. Below is a snapshot and a quick primer.
Worksheet Primer
This worksheet helps advocates take periodic stock of short-term policy goals and document progress, including when progress means maintaining a past win or defending against an unanticipated threat.
As such, the tool is intended to be used at two points in time: first, to help groups of advocates identify policy-related goals in specific areas for an upcoming legislative session; and second, to facilitate later reflection on and documentation of progress toward those goals, as well as additional unplanned positive changes and disadvantageous policies that were avoided.
Click here to access the Defensive Advocacy Reflection Worksheet PDF.
Special thanks to the Center for Evaluation Innovation for its support of the brief, "When the Best Offense is a Good Defense."
- - - - -
[3] Kane, Robin; Levine, Carlisle; Orians, Carlyn; & Reinelt, Claire. (2018). Contribution Analysis in Policy Work: Assessing Advocacy’s Influence. Retrieved from http://orsimpact.com/directory/contribution-analysis.htm
Punton, Melanie, & Welle, Katharina. (2015). Applying process tracing in five steps (Annex No. 10). Brighton: Centre for Development Impact.