Technology Adoption & Diffusion in Education
By V.H. Carr Jr.
A nicely written review of the literature on diffusion of innovation theory and how it applies – in its different variations – to the adoption, diffusion, and integration of technology in education settings.
Some connections to critical mass, tipping points and scaling up, all of which I have been thinking and writing about recently:
• Everett Rogers’s Diffusion of Innovations (1986)
“...the adoption of interactive communications differs from that of previous innovations. 1) A critical mass of adopters is needed to convince the “mainstream” teachers of the technology’s efficacy. 2) Regular and frequent use is necessary to ensure success of the diffusion effort. 3) Internet technology is a tool that can be applied in different ways and for different purposes and is part of a dynamic process that may involve change, modification and reinvention by individual adopters.”
• Rate of adoption theory
“Diffusion takes place over time with innovations going through a slow, gradual growth period, followed by dramatic and rapid growth, and then a gradual stabilization and finally a decline.”
So, do we wait for that “dramatic and rapid growth”, the tipping point, or can we make it happen sooner?
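The "slow growth, then dramatic and rapid growth, then stabilization" pattern is the classic S-curve, often modeled as a logistic function. A minimal sketch (the midpoint and rate parameters are illustrative assumptions, not from the review):

```python
import math

def adoption_share(t, midpoint=5.0, rate=1.2):
    """Cumulative share of adopters at time t under a logistic S-curve.

    `midpoint` is the time at which half the population has adopted;
    `rate` controls how steep the "rapid growth" phase is. Both values
    here are illustrative assumptions.
    """
    return 1.0 / (1.0 + math.exp(-rate * (t - midpoint)))

# Per-period growth peaks near the midpoint -- the "dramatic and rapid
# growth" phase -- and flattens on either side of it.
shares = [adoption_share(t) for t in range(11)]
growth = [b - a for a, b in zip(shares, shares[1:])]
peak_period = growth.index(max(growth))
```

The "tipping point" question then becomes: can an intervention shift the midpoint earlier or steepen the rate, rather than just waiting for the curve to take off on its own?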
• Need-based diffusion strategies
The “early adopters” and “early majority” have different characteristics and different needs. In the process of designing diffusion strategies, these differences need to be taken into account. When it is time to “integrate” technology within the curriculum, for example, you can’t assume that the average teacher will behave in the same fashion as the innovators and early adopters. This has important implications in terms of scaling up and integration of technology within the curriculum.
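For reference, Rogers segments adopters by when they adopt relative to the mean adoption time, using standard-deviation cutoffs (innovators ~2.5%, early adopters ~13.5%, early majority ~34%, late majority ~34%, laggards ~16%). A minimal sketch of that classification (the function name and inputs are my own, for illustration):

```python
def classify_adopter(adoption_time, mean, sd):
    """Classify an adopter by time of adoption, using Rogers's
    standard-deviation cutoffs on the adoption-time distribution."""
    z = (adoption_time - mean) / sd
    if z < -2:
        return "innovator"        # ~2.5% of adopters
    if z < -1:
        return "early adopter"    # ~13.5%
    if z < 0:
        return "early majority"   # ~34%
    if z < 1:
        return "late majority"    # ~34%
    return "laggard"              # ~16%
```

The point of the bullet above is precisely that a strategy tuned to the first two groups will misfire on the "early majority" and later groups.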
• Need for ease of use and low risk of failure
“The early majority’s aversion to risk quite naturally translates into a need for ease of use and early success if they are to adopt and diffuse the technology. The overlap with support and training requirements is obvious.”
Can we design interventions with rapid results in mind? … A progressive approach that guarantees early successes to build on?
Monday, August 28, 2006
Friday, July 21, 2006
Evaluation of UNESCO's Community Multimedia Centres - Final Report
An interesting report, since UNESCO is planning a scale-up of the CMC program and looking for partners to finance scale-up activities. The methodology section mentions surveys, yet no survey data is presented and the survey instruments are nowhere to be found. In the end, it reads like a typical qualitative evaluation -- nothing wrong with that -- and it does point to some significant challenges facing the CMC program.
Some snippets of interest to me:
(p. 21) "Of greater concern is the still very limited capacity within CMCs to systematically monitor and evaluate activities and use of the CMC."
The mistake I see over and over again -- we must be failing to LEARN something -- is that M&E systems are set up to address the needs of the funding agency. They are rarely developed by the people eventually responsible for implementing them, and so they fail to address the needs of the people the projects are supposed to benefit.
(p. 32) "... far too often donors try to impose business models on institutions that we would never try to do with similar institutions at home -- crisis centres, libraries, other social agencies and the like."
If it's donor funded, it should target those who can't afford the service, and a business model therefore makes little sense. If it's a business model and people can afford to pay, then why is donor funding needed? Let's face it, we're all very confused about all this. You've got non-profit organizations increasingly trying to operate on "business models" to compete with for-profits, and for-profits who want to get in on the development business and develop social arms and philanthropic missions.... We're just tripping over our own comfortable artificial boundaries.
(p. 35) "Efforts in achieving financial sustainability are forcing CMC managers to find ways of increasing turnover by giving emphasis to the paid services targeted to the sector of the population with purchasing power, losing the scope of activities targeted at poor or marginalized."
Not exactly surprising. What is needed is for donors to realize that while CMCs and other telecenter-type establishments can generate some revenue from services to those who can afford them, donors may need to continue subsidizing certain services targeted at those who can't. So, where cybercafes already exist, why not look at cybercafes that have been established on a business model and work with them -- with $$ -- to provide subsidized access to services targeting the poorer segments of society?
Friday, May 26, 2006
The ‘Most Significant Change’ (MSC) Technique: A Guide to Its Use
This is essentially a practical guide to using "stories" as a means of documenting a program or project's impact on people's lives. Stories have often been seen as providing anecdotal evidence, and often they are collected in such a way that they are not very different from testimonials of the kind you'd put on a marketing brochure. What the Most Significant Change (MSC) technique offers is a systematic approach to collecting and analyzing stories from project participants and intended beneficiaries. It is meant to record the stories in the participants' own words, but also to document the context within which each story is collected, who collects it, how stories are then reviewed and analyzed -- a sort of coding process -- and finally how the most significant stories are selected.
Overall, MSC can nicely complement more traditional approaches to monitoring and evaluation, and it certainly improves upon the current practice of using "stories" or "vignettes" to document evidence of a project's impact.
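The process described above -- stories recorded with their context and collector, then a review step that selects the most significant per domain of change -- can be sketched as a simple data structure. This is my own illustrative sketch, not from the guide; the field names and the vote-count selection rule are assumptions:

```python
from dataclasses import dataclass

@dataclass
class ChangeStory:
    """One MSC story, recorded with the context the technique calls for."""
    storyteller: str          # participant, in whose own words the story is told
    collector: str            # who collected the story
    text: str                 # the story itself
    domain: str               # domain of change, e.g. "changes in people's lives"
    context: str              # where/when/how the story was collected
    significance_votes: int = 0  # votes from the review panel

def select_most_significant(stories):
    """Pick, per domain, the story the review panel rated most significant."""
    by_domain = {}
    for s in stories:
        best = by_domain.get(s.domain)
        if best is None or s.significance_votes > best.significance_votes:
            by_domain[s.domain] = s
    return by_domain
```

The selection step is deliberately transparent: who voted, and in which domain, stays attached to each story rather than disappearing into an aggregate score.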
Monday, May 22, 2006
Cook, T.D. (2000). The false choice between theory-based evaluation and experimentation. New Directions for Evaluation, 87, 27-34.
"Few program theories specify how long it should take for a given process to affect some proximal indicator in the causal chain. But without such specifications, it is difficult to know when disconfirmation occurs, whether the next step in the model has simply not occurred yet or instead will not occur at all. It is this ambiguity about timelines that allow program developers who have been disappointed by evaluation results to claim that positive results would have occurred had the evaluation lasted longer. But because such theories are not typically available, the argument is often heard when developers do not like what the evaluation reports." (30)
This has a very practical impact on WHEN to implement specific elements of an evaluation and on the EVALUABILITY of a project or program at any specific point in time.
To address this problem:
Sridharan, S., Campbell, B., & Zinzow, H. (2006). Developing a Stakeholder-Driven Anticipated Timeline of Impact for Evaluation of Social Programs. American Journal of Evaluation, 27(2).
In the absence of a clearly spelled out theory of the timeline of impact(s), the authors suggest a methodology for developing such a timeline based on stakeholders' expectations. In other words, the project stakeholders may be in the best position to estimate when the project's impacts will be felt.
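A minimal sketch of that idea -- pooling stakeholders' per-outcome estimates of when impacts should first be observable, then using the pooled timeline to decide whether an outcome is evaluable yet. The median as the pooling rule and the function names are my own assumptions, not the authors' method:

```python
import statistics

def anticipated_timeline(expectations):
    """Combine stakeholders' estimates (months after startup) of when each
    outcome should first be observable. `expectations` maps
    outcome -> list of per-stakeholder estimates; the median is one
    simple, outlier-resistant way to pool them."""
    return {outcome: statistics.median(times)
            for outcome, times in expectations.items()}

def is_evaluable(outcome, months_elapsed, timeline):
    """Per Cook's point above: only test an outcome once its
    anticipated time has passed, so a null result can't be
    dismissed as 'the evaluation didn't last long enough'."""
    return months_elapsed >= timeline[outcome]
```

The payoff is that the timeline is committed to in advance, by the stakeholders themselves, rather than invoked after disappointing results.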
Friday, May 19, 2006
Building community capacities in evaluating rural IT projects: Success strategies from the LEARNERS Project, by June Lennie, Greg Hearn, Lyn Simpson & Megan Kimber, International Journal of Education and Development using ICT, Vol. 1, No. 1 (2005).
The article describes outcomes of a project aimed at building evaluation capacities of two Australian rural communities involved in ICT initiatives. I guess living down under is as good an excuse as any for using the acronym C&IT, which stands for communication and information technologies rather than the more common -- at least in the US -- ICT, information and communication technology.
While the authors clearly espouse participatory action research, they are also quite open about limitations they encountered in trying to use PAR in rural Australian communities.
I particularly liked the focus on "community capacity building". Capacity building in ICT projects often starts with how to operate a computer, use basic software, send an email and browse the web. More often than not, that's also where the training ends. Those who pick up the skills and go on to more advanced training can hope to use these skills to get better paying jobs, often leaving the rural areas for better jobs in urban areas. In other words, the capacity that is built in rural areas through ICT projects is often not part of a "community" approach to capacity building. Individual training, in isolation from the needs of the community as a whole and from capacity building at the community level, isn't sufficient.
Thursday, May 18, 2006
Developing a Sense-Making Methodology Framework for Collective Learning in Latin American Community Telecenter Assessment, by Michel Menou and Peter Day. Paper presented at a non-divisional workshop held at the meeting of the International Communication Association, New York City, May 2005.
Some quotes:
- "History appears to be repeating itself in that the lessons of the early European telecenter movement do not appear to have been understood."
- "The institutional communities (international organizations, governments and major non-government organizations associated with them) pretend to offer 'best practices' based upon their particular view of telecenters financial sustainability."
- "As usual the views of the beneficiaries are hardly expressed by themselves in the literature."
- "Most stories show that ICT are used, allowing communications or access to information not previously possible, and bring some immediate positive, or negative, outcomes. But does this actually point to the beginning of a transformation process that will lastingly change the fate of the people and that could not be arrived at through other means? And is ICT use the prime driver or is it only an enhancer of other key factors?"
- "Research results are hardly available to, or understandable by, the communities themselves and do not contribute to their decision making process while they feed into institutional policies. Research therefore tends to contribute more to the perpetuation of community domination than to their empowerment. Present monitoring and evaluation practices need to be transformed into community driven collective learning processes."
Menou and Day make extensive references to a "sense-making methodology" but don't explain it much... need to investigate that... :)
Wednesday, May 17, 2006
Merging Theory with Practice: Toward an Evaluation Framework for Community Informatics
March 2001, Dara O'Neil
This was a paper presented at the October 2001 Second International Conference of the Association of Internet Researchers. It looks very much like a traditional literature review prepared in the context of preliminary work on a dissertation. For my purposes, it needs to be updated, and it has little focus on international experiences with telecenters and related ICT access initiatives, but it provides an interesting framework, identifying five key areas of outcomes for community informatics projects: (1) Enhancing strong democracy; (2) Increasing social capital; (3) Empowering individuals; (4) Revitalizing sense of community; and (5) Providing economic development opportunities.
In an international context, (3) and (5) seem to be the primary intended outcomes.
Thursday, May 04, 2006
"Anecdotes and Beyond: 101 Ways to Assess an ICT Project", ICT@USAID Technology Series, April 26, 2006.
Main message: There isn't one right way to evaluate an ICT project. The right mix of evaluation questions, and the right mix of methods and tools, depend on a solid analysis of the objectives of the project and the objectives of the evaluation.