Doing Developmental Evaluation

I did my first two degrees in Psychology in the 1970s when behaviourism was the dominant paradigm. I was trained to think in terms of linear, cause-effect relationships between singular variables. I was socialized to believe that if something is not tangible and quantifiable, it is not worth considering. The goal of science was prediction and control. The first research I did was an experiment with white rats, trying to determine whether they ate more when other rats were around than when they were alone (they did).

As my career unfolded I was involved in more research projects. I worked in the non-profit sector, the private sector, government, and health care, doing everything from randomized controlled trials to population surveys to public consultations. With every project I moved farther and farther away from the positivist, quantitative paradigm. In my consulting practice, just before returning to UBC to do my PhD, I relied mostly on qualitative methods and participatory approaches, believing these were the best way to address the policy-oriented and community-based issues I was exploring in collaboration with my clients. For my PhD research I used critical ethnography as my methodology, as far away on the research spectrum as you can get from positivist experiments.

During my consulting practice I did several evaluations of social or health programs as well as evaluations of organizational change processes. My guru was Michael Quinn Patton, a pragmatic evaluator whose books provided guidance that suited the contexts I was working in. I strove to involve my clients in all aspects of the research and evaluation work, believing it was important for them to deepen their own understanding of the issues under investigation so they could make better decisions. In my ten years as an independent consultant I came to appreciate the value of participatory evaluation processes that guided and supported professionals and citizens to ask searching questions, collect data to supplement what they already knew, and take action to achieve their goals. I learned that engaging in the discipline of systematic evaluation usually improved the overall functioning of a group or organization because it encouraged open questioning, reflective thinking, and a reliance on data from diverse sources as the basis for decision-making.

Building evaluation in from the start

When I became the Director of the Learning Exchange I was thrilled to finally be involved in a situation where evaluation could be built into the design of programs rather than being an afterthought. I had seen how powerful it can be to get front-line professionals and decision-makers directly involved in thinking about what they are trying to achieve, how they will know whether they are getting the results they want, and what the data they have collected tell them about how things are going. I knew that enormous benefits could be gained from training people to think like evaluators. So it was natural for me to integrate evaluation into all our efforts.

Making decisions using evaluation data

We used program evaluation in more and less formal ways. In the Trek Program we were careful to elicit feedback systematically from students and community partners every year. In the program’s first year, we did phone interviews with every student in the program, using a semi-structured interview guide. In subsequent years, as the number of students grew, we did phone interviews and/or focus groups with a sample of the students. Staff met with representatives from all partner organizations every year, combining a discussion of predetermined evaluation questions with the development of a plan for the coming year.

The feedback we got from students and community partners shaped the evolution of the Trek Program. Students told us what elements of the volunteer experience were most important from their point of view. They alerted us to aspects of the program that worked well (e.g., the recruitment materials) or did not work so well (e.g., the process of getting a Criminal Record Check). Similarly, organizations told us what was working (e.g., they appreciated that Trek students were not completely naïve about the Downtown Eastside) or not working (e.g., students did not always follow through on their intention to volunteer).

Every year we adapted the processes we used for student recruitment, orientation, and placement. We balanced input from students and partners against our own goals and the demands arising from the steady growth in the number of participating students and organizations. For example, some students said the orientations were too long. But we knew our partner organizations wanted us to cover certain topics, and we believed we had a responsibility to highlight things like the need to be respectful and non-judgmental when interacting with marginalized people. Some students might have thought they did not need to hear those messages, but we knew many did. As the number of students grew exponentially and our staff team did not, we had to shorten the orientations just to get all the students through this step in a timely way (e.g., instead of one day-long orientation we ran two half-day orientations: the same number of staff hours for double the number of students). To become more efficient without sacrificing expected outcomes, we had to pay close attention to what students and partner organizations told us so we could make good decisions about what to focus on.

Similarly, the evaluations we did after the Reading Week projects provided a vitally important barometric reading of how things were going. For example, students completed a short questionnaire at the end of their projects every year. This allowed us to track the results of efforts such as the fine-tuning of the key messages we delivered at the kick-off day and the refinement of our approach to structured reflection. These data, combined with debriefing sessions we did with project leaders and our own observations, gave us solid information we could use as the basis for the next round of planning decisions.

For the storefront programs, our approach to evaluation was more informal, in keeping with the nature of the environment and the population we interacted with. For example, for the afternoon drop-in, we kept a record of the number of people who came in to use the computers every day and we had a suggestion box people were encouraged to use. These data complemented what staff knew from being so close to the action. We noticed when the number of people in the drop-in went up or down significantly but it was helpful to have numbers that allowed us to examine trends over time. Similarly, if patrons were particularly happy or unhappy about something they usually let us know. But the suggestion box allowed patrons to provide input anonymously.

At one point we had a consultant do one-on-one interviews with patrons. These interviews affirmed that local residents valued what we were offering and gave us some insight into what patrons were saying about the storefront. We heard that, “The Learning Exchange is where the intellectuals in the neighbourhood go,” and “You can use big words and discuss big ideas here and not get laughed at.”

We did evaluations of other storefront programs, too, especially those we considered pilot projects. For example, with the English as a Second Language (ESL) conversation program, we conducted focus groups with participants and did interviews and/or focus groups with facilitators. In this context, we had to be alert to differences in people’s facility with English and their comfort level with structured discussion techniques. We sometimes had to adapt our methods in order to create an open space where people could offer input on their own terms. As with the Trek Program, the input from participants was very important in shaping the ESL program as it evolved.

With other educational programs such as the 101 courses, we elicited feedback from participants during and at the end of the programs. For example, a student staff member conducted individual interviews with participants at the end of the Entrepreneurship 101 course. These data, too, influenced our subsequent decision-making. In some cases, even when participants were enthusiastic about a program (e.g., Music 101), we decided to discontinue it because it was not meeting our objectives. In every case, it was enormously helpful to have data from participants that had been collected in a spirit of unbiased inquiry. We knew we were making decisions based on more than our own limited perspective.

The Learning Exchange was using evaluation data to make decisions about programs that were rapidly evolving in an organizational context that got progressively more complex and demanding. We rarely bothered to write a formal report documenting our methods or results. None of us needed a report per se. Instead, we prepared “quick and dirty” summaries of data that staff would then scrutinize, discuss, and use as the basis for decisions about what to do next: discontinue a program, continue with no change, make minor or major adjustments, or, occasionally, collect additional data.

The only time we made the effort to prepare a formal evaluation report was when we did a relatively thorough evaluation of the UBC-Community Learning Initiative (UBC-CLI) in 2007-2008. Kate Murray, our research assistant, prepared a report so we could let our community partners and UBC colleagues know how the UBC-CLI was progressing. We wanted to encourage people to think about how Community Service Learning (CSL) works and how they could be involved in its advancement at UBC. The report itself did not influence our internal decision-making but we hoped it would inspire others to decide that CSL was a worthwhile activity.

Discovering this approach had a name

Building evaluation into our programs meant the Learning Exchange team was continually being reminded to ask questions like, “What’s working? What’s not working? Where are we going? Why are we doing this? What else could/should we be doing?” Following the discipline of collecting, analyzing, and reflecting on feedback from others protected the staff from the hubris of assuming the process of implementing the programs had taught us everything we needed to know. The evaluation data usually confirmed much of what the team suspected based on our own observations. But almost always, we received perspectives, insights, and suggestions that would not have occurred to us on our own.

From my point of view this was the ideal way to use evaluation. Key decision-makers in the programs were designing the evaluation plan, collecting and analyzing data, and using the information to make decisions. We were not doing evaluation to satisfy a funder or a bureaucratic requirement. We were doing it to satisfy our own curiosity. We did not concern ourselves with whether our methods would be deemed rigorous enough by academic standards. Our concern was to get as complete and accurate an understanding of what was happening in our programs as possible given the resources on hand so that we could make intelligent decisions about what to do next.

Feedback from program participants was important, but it was not the only factor we considered. We also considered how well a particular program fit with our emerging sense of purpose and what the program’s costs and benefits were. In addition, we had to remain attentive to what was happening in the two environments we were embedded in: UBC and the Downtown Eastside. The Learning Exchange was part of two different contexts, both of which were complex and contested. Our survival depended on being in sync with at least some of the key forces within the university (e.g., UBC’s Trek vision) and the community (e.g., the drive to find innovative ways to empower residents).

After reading Getting to Maybe and discovering how knowledge about living systems can be applied to organizations, I looked for other information on complexity science. I discovered that my evaluation guru, Michael Quinn Patton, one of the co-authors of Getting to Maybe, had formulated a type of evaluation that was essentially what we had been doing at the Learning Exchange. Patton describes Developmental Evaluation as being informed by systems thinking and an awareness of complex nonlinear dynamics. Its processes include asking evaluation questions, applying evaluation logic, and gathering real-time data to inform decision-making. It is particularly effective in situations where social innovations are being developed in contexts that are complex, contested, and uncertain (Patton 2011).

It was affirming to realize that we had been practicing something that had a name and whose legitimacy was being established. It was inspiring to get some guidance about how to strengthen our approach. In particular, linking evaluation to complexity science alerted me to the need to pay more systematic attention to the forces operating in the environments in which our work took place.

In the early years, the Learning Exchange was operating on the margins of both UBC and the Downtown Eastside. Both environments were somewhat hostile to our efforts. We did not set out to be innovative for the sake of being innovative. We were just trying to figure out how to navigate the challenging contexts we worked in. We had to invent new programs and activities that would be seen as credible and worthwhile in two environments where the criteria for legitimacy and value were different and sometimes contradictory.

The survival of the Learning Exchange was not a given. We were vulnerable to a variety of risks: students could get hurt in the neighbourhood, patrons or staff could get hurt at the storefront, any member of the team could do something that undermined the trust and credibility we were trying hard to create (including doing something innocent that was misinterpreted), we could fail to get sufficient funds to continue. The list was long. More importantly, only some of these factors were subject to our control. So we had to be alert to what was happening in the external environment as well as to what was happening within our programs and the organization itself.

Learning about the adaptive cycle provided a framework that made sense of the cycles of growth and change we had experienced at the Learning Exchange. Learning about complexity science helped me see that the staff team needed to be responsive to our surrounding contexts even if signs of danger were not omnipresent. I believe that doing Developmental Evaluation was a useful practice regardless of where the Learning Exchange sat on the arcs of innovation or institutionalization.

For related reflections, go to The Learning Exchange as social innovation.

Reference

Patton, Michael Quinn (2011). Developmental Evaluation: Applying Complexity Concepts to Enhance Innovation and Use. New York: The Guilford Press.