For fifteen years I was a business systems consultant specializing in equal employment and inclusion. I used data to hold organizations accountable for results, ensuring that remediation programs were effective. I prided myself on using the data collection process to re-energize commitment to mission and create new partnerships where trust had been broken.
So my heart went pitter-patter a couple of years later when the UUA Board moved toward Policy Governance. Monitoring reports, a key component of Policy Governance, require demonstrating that programs are achieving results. As people of hope and compassion, we should yearn for evidence that Unitarian Universalism is making a measurable difference in our broken world.
However, while the world is full of enthusiastic data collecting and reporting, I maintain that meaningful evaluation is very difficult. It is tempting to assume that good effort and good intention net good results. We should expect to struggle mightily as we integrate evaluation into our systems in a meaningful way.
A couple of thoughts as we sweat it out:
1) Frequency of monitoring should not be determined by “importance”.
Counterintuitive? Not if you understand change and evaluation. Frequency of monitoring should be based on the type of change you are looking for in your results. Some things do not warrant annual evaluation because measurable change is not likely to occur within that timeframe. Annual evaluation can mask gradual trends because data is usually only compared across three data sets. Too-frequent evaluation can imply stability that isn’t actually there. (Think climate change.)
2) The assessment process itself is a catalyst for change.
It is impossible to know from one data point whether things are getting better or worse in relation to the desired outcome. Even so, many organizations try to avoid longitudinal data.
Why? Because folks’ expectations are shaped by what is being measured. Consequently, the assessment process itself tends to move a culture toward more critical discernment, greater expectation, and sometimes a downturn in results after the first assessment.
Strategic organizations can use the increased expectation as a measurable positive byproduct of monitoring.
3) Prioritizing monitoring based on a traditional definition of risk is risky.
Risk is very subjective and often contextual. Some things are considered more of a risk because they are imminent, others because of the scope of their impact. Most organizations focus heavily on financial or asset management and on areas of recent industry legal liability. However, the integrity and vitality of a mission-based organization is built on much more than good accounting and good counsel.
Our former Financial Advisor, Larry Ladd, considered enrollment in religious education to be as important to our health as our financial performance. There are many who consider our Association’s capacity to mobilize around justice issues to be as important as our asset management policy.
The UUA has many systems in place to audit areas of traditional institutional risk. What we haven’t quite mastered is meaningful evaluation of the things that are harder to measure. Because what we do matters, we should hold ourselves accountable for results. We have limited human and asset resources, so we should seek evidence that what we are doing is working effectively toward the goals we have set.
Responsible asset management includes ensuring that we use the resources entrusted to us to make a measurable difference in the world. I would argue that one of our biggest risks is failing to hold ourselves accountable to our mission, or not knowing whether what we are doing is working.