Pupils make more progress in 3Rs 'without aid of computers'
Here's an interesting reference to a study of test results and computer use in UK schools, which suggests that students who make heavier use of computers do worse on tests. I love this stuff.... Why exactly do they do worse? If they're spending their time on the computer playing games (not educational games), the effect is pretty much the same as watching TV: they're being distracted from actually doing their homework.
It is possible to make effective use of television for educational purposes. There are educational programs, and schools regularly use television and videotapes to supplement textbooks. The computer isn't much different. It is possible to use a computer and educational software to supplement more traditional forms of instruction, yet it is also very likely that kids will turn on the computer to play games.
I don't quite get what the research question was. Which would you rather ask: "Do kids who watch more TV have better test scores?" or "What kind of TV use results in improved test scores?"
Our experience tells us that extensive TV watching is an obvious distraction from homework and most likely results in lower test scores. The clear relationship behind that statement, though, is between the amount of homework done and test results. TV isn't the only thing that can distract kids from doing their homework.
Why should we bother asking whether the amount of computer use is related to test scores? I don't care how many hours my kids are using the computer. I do care how many hours they're on the computer playing games vs. using it to do research on the Internet for an assignment. I also think there's much more to using a computer for educational purposes than browsing the Internet.
As for educational software, my experience has been that while it may be okay for practicing an already acquired skill, it is not going to teach your kid anything new. That's not to say that all educational software is useless; when used effectively and integrated into the curriculum, some educational software may be exactly what is needed to reinforce skills taught in the classroom.
Thursday, March 24, 2005
Monday, March 21, 2005
The "Real" questions: The Real Digital Divide, Article in The Economist, March 10, 2005
I've always thought that the reason we keep asking the same questions and don't get the right answers is often that we're just not asking the right questions. At this point, I'm not quite sure which is the most important question to focus on for evaluation purposes:
1. What is the impact of technology on education?
2. How do we maximize the impact of technology on education?
I tend to lean towards question 2 as the critical one because to me it's pretty obvious that technology CAN have very beneficial impacts, yet it can also be a big waste of money. If we tried to answer the first question, my guess is that we'd get a range of studies, all quite scientific, yet showing different results. I think there are already plenty of studies that do exactly that. See http://www.nosignificantdifference.org/, which lists hundreds of studies (probably focused on the US, unfortunately) about the impact of IT in distance education. All that these studies tell us is that under certain conditions, technology works well to support education. The key, then, is to clearly identify these conditions and replicate them, with adaptations when necessary. The "with adaptations" part is important because what works in the US doesn't necessarily work somewhere else, and the local context is always a significant factor.
What triggered this particular reformulation of my own thoughts is the article on the Real Digital Divide in The Economist and the subsequent debate within ICT for Development circles. For the most part, there has been a reflex to defend what we do for a living, to justify our work by providing evidence that it has an impact, that it's not just cell phones but that telecenters, too, can and do provide benefits to the communities they serve, etc.
I am not convinced that our evidence is particularly strong but I am more and more convinced that this is the wrong debate. The question is not "whether", but "how". How do we make it work? How do we make sure that we've got our priorities right?
Read "The End of Poverty", by Jeffrey D. Sachs and then think about the role a telecenter can and can't play in the middle of rural Africa.
The questions become:
HOW can telecenters truly contribute to development in conditions of extreme poverty (if at all)?
What technologies (cell phones, radio, internet, PDAs.... ) are most relevant in conditions of extreme poverty?
Perhaps more importantly, what should be the priority investments?
If we have $xxx to invest in an extremely poor rural area, which is better: putting 100% of that amount into agriculture, education, or health? I don't think putting 100% into IT would make sense. This, of course, assumes a fixed, limited amount of funding and a need to prioritize. If we followed a Jeffrey Sachs-style "differential diagnosis" approach, we would need to study carefully the particular rural area where we are planning an intervention, identify its main challenges, and develop a tailored plan for getting out of the cycle of extreme poverty and onto the development ladder. I'm not sure it makes sense to work on only one individual rural area and expect that what goes on in the rest of the country won't affect it, but we could assume that the same exercise is done around the country.
This, of course, risks falling into the trap of looking at IT as if it were a separate sector to support, competing with other sectors for funding and prioritization. If we looked a little more at IT as a potential component of all sectors, this prioritization problem would be less of an issue. The question is not whether priority should go to IT investments vs. investments in agriculture, education, health, etc., but rather how relatively small investments in IT for agriculture, IT for education, and IT for health could truly support development objectives.