Raise your hand if you always read your efficiency programs’ finished evaluation reports in their entirety. According to Kylie Hutchinson, principal of Community Solutions Planning & Evaluation, utility efficiency program staff often leave large portions of these monstrous documents unread. This was one of the points I heard Kylie make during Effectively Communicating Evaluation Results, an Association of Energy Service Professionals (AESP) web conference on October 27, 2011. The problem is that evaluators often publish lengthy, very detailed reports whose format and contents may not be ideally suited to the needs of utility program implementers. This mismatch ultimately wastes ratepayer money and can lead utilities to overlook opportunities for program improvements.
To the extent that regulatory requirements allow, evaluation reports should be designed first and foremost to convey the messages implementers need to improve existing programs and inform future program design. Ideally, an evaluator would spend a good deal of time crafting a report filled with helpful analysis and actionable recommendations that the utility implementer or program administrator can use to achieve their goals. Some of the strategies set forth in the AESP web conference for reaching this level of evaluation report usability sounded to me like they would be relatively simple to implement. For instance, evaluators can write more helpful reports if they’re given a set of clear guidelines that describe exactly what program administrators and implementers would like the final evaluation report to look like.
During the web conference, I learned that the Northwest Energy Efficiency Alliance (NEEA)—a nonprofit organization funded by Northwest utilities, the Energy Trust of Oregon, and the Bonneville Power Administration—takes this approach. Evaluators writing a NEEA Market Progress Evaluation Report (a cross between a process and impact evaluation) must follow a set of established guidelines. These guidelines are designed to keep the report under 40 pages. Each subsection in the report has an associated maximum page limit.
Evaluators initially pushed back against these requirements (understandably, they wanted to showcase as much of their work as possible). However, NEEA’s evaluators have since learned that a report’s value to program implementers does not necessarily lie in exhaustive detail about every step of the evaluation process; rather, it lies in accurate data analysis paired with clear conclusions and recommendations. It should be noted that NEEA does not limit the length of the appendices. In this way, NEEA guides evaluators to write reports with different layers of detail, which in turn allows readers to explore the data in as much depth as they desire.
Another important consideration for evaluation reports (and truly, for any piece of writing) is to ask “who is the audience?” A consistent message I received from the AESP conference was that evaluation reports should follow a communication plan, which program administrators and evaluators can work together to develop. Different people at different levels in an organization have different needs when it comes to reading reports. NEEA has tried to find a happy medium at a report limit of 40 pages, but some administrators may only have time for a 2-page summary or even just a single paragraph. The key is to clearly distill the “so whats.” Again, layering the details of the report is one good solution. For example, a report could provide a fact sheet, an executive summary, an online oral presentation, and a more detailed web summary, with the option to dive even deeper into the details in the body of the report.
I agreed with the web conference’s conclusion that program implementers ultimately have to communicate with evaluators clearly and often to realize such improvements to evaluation reports. Evaluators must understand program evaluation goals and who the audience will be for reported results. In some jurisdictions, barriers designed to keep evaluators from biasing their findings might restrict communication between the two parties. However, if one of the goals of program evaluation and its resulting reports is to improve programs, then it seems to me that potential bias is not the only thing these barriers block.