The ‘Most Significant Change’ (MSC) Technique: A Guide to Its Use
This is essentially a practical guide to using "stories" as a means of documenting a program or project's impact on people's lives. Stories have often been dismissed as anecdotal evidence, and they are often collected in such a way that they are not very different from the testimonials you'd put on a marketing brochure. What the Most Significant Change (MSC) technique offers is a systematic approach to collecting and analyzing stories from project participants and intended beneficiaries. It records not only the stories in the participants' own words, but also the context within which each story is collected and who collects it; it then specifies how stories are reviewed and analyzed (a process akin to coding) and, finally, how the most significant stories are selected.
Overall, MSC can nicely complement more traditional approaches to monitoring and evaluation, and it certainly improves on the current practice of using "stories" or "vignettes" to document evidence of a project's impact.
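To make that process a bit more concrete, here is a minimal sketch in Python of what a structured story record and a documented selection step could look like. The field names and the selection function are my own illustration, not part of the MSC guide itself, and the sketch leaves out the successive rounds of review that MSC typically runs through levels of an organization.

from dataclasses import dataclass
from datetime import date

@dataclass
class SignificantChangeStory:
    # One story, kept verbatim in the storyteller's own words,
    # together with the context in which it was collected.
    storyteller: str       # participant or intended beneficiary
    collector: str         # who recorded the story
    collected_on: date
    domain_of_change: str  # a domain of change agreed on by the project (label is hypothetical)
    story_text: str        # the story itself, verbatim
    why_significant: str   # the storyteller's own explanation of why the change matters

def record_selection(stories, chosen_index, panel, reason):
    # Document which story a review panel judged most significant and why.
    # MSC asks reviewers to make their criteria explicit, so the reason is
    # stored alongside the chosen story rather than discarded.
    return {
        "selected_story": stories[chosen_index],
        "panel": panel,
        "reason_for_selection": reason,
    }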
Friday, May 26, 2006
Monday, May 22, 2006
Cook, T.D. (2000). The false choice between theory-based evaluation and experimentation. New Directions for Evaluation, 87, 27-34.
"Few program theories specify how long it should take for a given process to affect some proximal indicator in the causal chain. But without such specifications, it is difficult to know when disconfirmation occurs, whether the next step in the model has simply not occurred yet or instead will not occur at all. It is this ambiguity about timelines that allow program developers who have been disappointed by evaluation results to claim that positive results would have occurred had the evaluation lasted longer. But because such theories are not typically available, the argument is often heard when developers do not like what the evaluation reports." (30)
This has a very practical impact on WHEN to implement specific elements of an evaluation and on the EVALUABILITY of a project or program at any specific point in time.
To address this problem:
Sridharan, S., Campbell, B., & Zinzow, H. (2006). Developing a stakeholder-driven anticipated timeline of impact for evaluation of social programs. American Journal of Evaluation, 27(2).
In the absence of a clearly spelled out theory of the timeline of impact(s), the authors suggest a methodology for developing such a timeline based on stakeholders' expectations. In other words, the project stakeholders may be in the best position to estimate when the project's impacts will be felt.
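Just to illustrate how a stakeholder-driven timeline could be put together, here is a rough Python sketch. The outcomes, the numbers, and the choice of the median as the summary statistic are my own assumptions for illustration; they are not the authors' actual procedure.

from statistics import median

# Hypothetical estimates (months after project start) from four stakeholders of
# when each outcome should become detectable; outcomes and numbers are invented.
estimates = {
    "participants use the service regularly": [3, 6, 6, 9],
    "measurable income gains for participants": [12, 18, 24, 24],
}

def anticipated_timeline(estimates_by_outcome):
    # Summarize each outcome's expected time-to-impact with the median estimate.
    # The result suggests the earliest point at which a null finding should count
    # as disconfirmation rather than as "too early to tell".
    return {outcome: median(months) for outcome, months in estimates_by_outcome.items()}

for outcome, month in anticipated_timeline(estimates).items():
    print(f"Measure '{outcome}' no earlier than month {month:g}")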
"Few program theories specify how long it should take for a given process to affect some proximal indicator in the causal chain. But without such specifications, it is difficult to know when disconfirmation occurs, whether the next step in the model has simply not occurred yet or instead will not occur at all. It is this ambiguity about timelines that allow program developers who have been disappointed by evaluation results to claim that positive results would have occurred had the evaluation lasted longer. But because such theories are not typically available, the argument is often heard when developers do not like what the evaluation reports." (30)
This has a very practical impact on WHEN to implement specific elements of an evaluation and on the EVALUABILITY of a project or program at any specific point in time.
To address this problem:
Sridharan, S., Campbell, B., & Zinzow, H. Developing a Stakeholder-Driven Anticipated Timeline of Impact for Evaluation of Social Programs, American Journal of Evaluation, Vol. 27, No. 2, June 2006.
In the absence of a clearly spelled out theory of the timeline of impact(s), the authors suggest a methodology for developing such a timeline based on stakeholders' expectations. In other words, the project stakeholders may be in the best position to estimate when the project's impacts will be felt.
Friday, May 19, 2006
Building community capacities in evaluating rural IT projects: Success strategies from the LEARNERS Project, by June Lennie, Greg Hearn, Lyn Simpson & Megan Kimber, International Journal of Education and Development using ICT, Vol. 1, No. 1 (2005).
The article describes outcomes of a project aimed at building the evaluation capacities of two Australian rural communities involved in ICT initiatives. I guess living down under is as good an excuse as any for using the acronym C&IT, which stands for communication and information technologies, rather than the more common (at least in the US) ICT, for information and communication technology.
While the authors clearly espouse participatory action research, they are also quite open about limitations they encountered in trying to use PAR in rural Australian communities.
I particularly liked the focus on "community capacity building". Capacity building in ICT projects often starts with how to operate a computer, use basic software, send an email and browse the web. More often than not, that's also where the training ends. Those who pick up the skills and go on to more advanced training can hope to use them to get better-paying jobs, often leaving rural areas for better opportunities in urban centers. In other words, the capacity built in rural areas through ICT projects is often not part of a "community" approach to capacity building. Individual training, carried out in isolation from the needs of the community as a whole and from capacity building at the community level, isn't sufficient.
Thursday, May 18, 2006
"Developing a Sense-Making Methodology Framework for Collective Learning in Latin American Community Telecenter Assessment, by Michel Menou and Peter day. Paper presented at a non-divisional workshop held at the meeting of the International Communication Association, New York City, May 2005.
Some quotes:
- "History appears to be repeating itself in that the lessons of the early European telecenter movement do not appear to have been understood."
- "The institutional communities (international organizations, governments and major non-government organizations associated with them) pretend to offer 'best practices' based upon their particular view of telecenters financial sustainability."
- "As usual the views of the beneficiaries are hardly expressed in by themselves in the literature."
- "Most stories show that ICT are used, allowing communications or access to information not previously possible, and bring some immediate positive, or negative, outcomes. But does this actually point to the beginning of a transformation process that will lastingly change the fate of the people and that could not be arrived at through other means? And is ICT use the prime driver or is it only an enhancer of other key factors?"
- "Research results are hardly available to, or understandable by, the communities themselves and do not contribute to their decision making process while they feed into institutional policies. Research therefore tends to contribute more to the perpetuation of community domination than to their empowerment. Present monitoring and evaluation practices need to be transformed into community driven collective learning processes."
Menou and Day make extensive references to a "sense-making methodology" but don't explain it much... need to investigate that... :)
Wednesday, May 17, 2006
Merging Theory with Practice: Toward an Evaluation Framework for Community Informatics
March 2001, Dara O'Neil
This was a paper presented at the Second International Conference of the Association of Internet Researchers in October 2001. It reads very much like a traditional literature review prepared as preliminary work on a dissertation. For my purposes it would need to be updated, and it has little focus on international experiences with telecenters and related ICT access initiatives, but it provides an interesting framework, identifying five key areas of outcomes for community informatics projects: (1) enhancing strong democracy; (2) increasing social capital; (3) empowering individuals; (4) revitalizing a sense of community; and (5) providing economic development opportunities.
In an international context, (3) and (5) seem to be the primary intended outcomes.
Thursday, May 04, 2006
"Anecdotes and Beyond: 101 Ways to Assess an ICT Project", ICT@USAID Technology Series, April 26, 2006.
Main message: There isn't one right way to evaluate an ICT project. The right mix of evaluation questions and the right mix of methods and tools depend on a solid analysis of the objectives of the project and the objectives of the evaluation.