Joined a group on Diigo 24 hours ago, opened an email telling me about what other people are tagging in that group, followed a link, discovered Pearltrees. Next step: play for a couple of hours. Let the creative juices flow. How can I use this tool? How does it work?
That's how things happen. Copy some script and voila... click on the pearl below to visit my first Pearltree map.
Try it out for yourself! Discover Pearltrees' website! Watch a YouTube introduction to Pearltrees.
Disclaimer: I have no personal or professional connection to Pearltrees.
Saturday, August 22, 2009
What's YOUR Social Media Policy?
As social media enter the workplace, intentionally or not, organizations in the public and private sectors are rushing to develop social media policies and/or add social media sections to their employee handbooks. Some of this work is preemptive, some reactive.
Preemptive
An organization launches an enterprise social networking tool and the various groups consulted raise concerns. Human resources has concerns. The legal team has concerns. There are so many unknowns. Let's dig up all existing policies and make sure we add to them to cover all eventualities -- most of which we can't predict. Those we can predict are typically already covered by existing policies.
Reactive
An organization just had a negative experience with some internal abuse of social networking tools and decides to clamp down on anything and everything that might happen in the future. You can spot a policy that emerges from such a situation if some of the language refers to situations that would only apply to a small number of employees.
Whether the policy work is preemptive or reactive, I wonder whether the authors of such policies ever consider how their words might be interpreted by employees.
Employee Reactions
* "That's just the official policy, nobody cares about what the policy says. Just be careful and don't get caught. They can't monitor everyone all the time anyway."
* "Wow! They can use that to fire me if they want to. As a matter of fact, they can probably find something in there to fire anyone if they really want to get rid of someone."
* "They're just targeting employees who get caught watching porn at the office or who spend their days surfing the web and don't do the work. It doesn't really apply to me."
* "Fine. I get what this is all about. Security, productivity, I get it. I think it's time I develop my own policies." :)
Here are some ideas -- not restricted to social media:
* I shall not share links to my personal social bookmarking site with my employer even if 99% of my bookmarked resources are work-related. By extension, I shall not share any relevant web-based information with my employer or co-workers if the information was found during off-hours.
* I shall never use my personal computer to do any employer-related work. It doesn't matter if my computer has the necessary software and the work site computer doesn't. Downloading free software to the work site computer is an obvious no-no.
* By extension, I shall never use my own pens, paper or other supplies to do any employer-related work.
* I shall never try to bypass red tape / bureaucratic processes in order to get the work done. Forget about common sense. It's a myth. Policy rules!
* I shall make sure I understand policy thoroughly and send as many clarifying emails as I feel necessary to my supervisor. Better swamp their email box than get fired over some misunderstanding.
* I shall limit the number of times I check my work email during off-hours. I shall never respond to a work email during off-hours. I shall never check my work email on weekends.
* I shall not think about work too much during off-hours. 15 minutes a day is the limit.
* I shall not mention my employer (especially not in any favorable way) on my blog or other social media tool. By extension, my employer's name shall appear only on my CV.
* I shall not arrive at work early or leave late. Any encroachment on personal time is totally unacceptable.
* I shall make sure to separate employer-related knowledge from what I really know. Only employer-related knowledge is relevant at work.
* I shall investigate surgery or mind-control techniques that might allow me to better separate the part of the brain that deals with employer-related work and the rest of my brain. We wouldn't want too much collaboration between the two. There is no such thing as a gray area of professional interests that isn't directly job-related. That gray area is a danger zone. The brain has two sides: job-related brain vs. personal brain. Everything that is not strictly job-related is personal and therefore should not be used at work.
* Self-censorship is the best policy. Anything that might be misinterpreted should be deleted, never spoken and forgotten.
* I shall make sure that no one reads my blog. If one person reads it, it's one too many. Who knows what they might read into this post?
Get it? Obviously, I'm taking it a little too far and I can laugh at it, but I wish employers would lighten up a bit too. What do we really need? Training? Yes. Policy-driven fear of termination? I don't think so.
Oh, by the way, this has obviously nothing to do with my own employer. I wouldn't want this post to be interpreted as criticism -- that would be against official policy. :)
Oops! I am in clear violation of my own policy. Just spent more than 15 minutes thinking about work-related issues during off-hours... on a Saturday, no less!
Friday, August 21, 2009
Diigo Lists
A while ago, I had to transfer my bookmark collections from FURL to Diigo. While the process was automatic, my tags turned into a mess. I had developed a way of organizing my tags in FURL that seemed quite clever at the time. It worked very well for me and I even wrote a little paper about it (Learning from Doing: Social Bookmarking).
I should write a Part II to explain why it turned out to be very dumb. I certainly didn't anticipate having to transfer the bookmarks to another service and what that would mean in terms of "portability." Here's a quick example. I used tags like these:
"ICT -- Access"
"ICT -- Education"
"ICT -- eGov"
In the transfer, these tags ended up split into three parts: "ICT", "--" and "Access". I had 800+ meaningless "--" tags. It will take a while to clean up the mess. There's probably a bigger lesson to be learned here but I haven't figured it out yet.
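As a note to self, here is the kind of quick cleanup script I have in mind -- a minimal sketch in Python, assuming the bookmarks can be exported to and re-imported from a CSV file with a space-separated "tags" column (the file names and column layout below are my own placeholders, not Diigo's actual export format). It collapses the fragments left by a stray "--" tag into single hyphenated tags (e.g. "ICT-Access") that won't be split again on the next migration:

```python
import csv

# Hypothetical file names; the real export format may differ.
INFILE = "diigo_export.csv"
OUTFILE = "diigo_cleaned.csv"

def merge_split_tags(tags):
    """Collapse fragments like ['ICT', '--', 'Access'] into one portable tag
    like 'ICT-Access' so it survives as a single token in a space-separated
    tag field."""
    merged = []
    i = 0
    while i < len(tags):
        # A lone "--" glues the previous fragment and the next one back together.
        if tags[i] == "--" and merged and i + 1 < len(tags):
            merged[-1] = f"{merged[-1]}-{tags[i + 1]}"
            i += 2
        else:
            merged.append(tags[i])
            i += 1
    return merged

with open(INFILE, newline="", encoding="utf-8") as src, \
     open(OUTFILE, "w", newline="", encoding="utf-8") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        fragments = row["tags"].split()  # assumes tags are space-separated
        row["tags"] = " ".join(merge_split_tags(fragments))
        writer.writerow(row)
```

Running something like this over an export would at least collapse the 800+ stray "--" tags; picking a space-free tag convention in the first place is probably part of that bigger lesson.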
By the time I'm done with the clean up and I've learned to use all of Diigo's capabilities, it will be time to transfer to the next best thing!
Why bother cleaning up? I would like to be able to link the relevant collections (KM tags in particular) to my Learning Log business novel.
I now have a "didactic fiction" bookmark list on Diigo.
Thursday, August 20, 2009
The Boss, by Andrew O'Keeffe
Here is another business novel I came across last week, downloaded to my Kindle and just finished reading: The Boss, by Andrew O'Keeffe. I've read a good number of business novels and business parables in the last couple of years. This is the best to date, and probably the first I wouldn't mind reading again, and again. In fact, I might put a reminder on my agenda to read it every year a week or so before my employee appraisal meeting. And if I find myself in a job interview, I'd want to read it again as part of my prep work.
The story is written from the point of view of an employee facing a great cast of horrendous executives and, according to the author, is based on true stories. I find it hard to believe a company run by these executives would survive long, but for the purpose of storytelling, it works.
Here are some quick lessons about what works in the business novel genre:
* get to the point (the learning point), move on.
* a quick description of the setting, to the extent that it relates to the core of the story, is good, but there's no need to overdo it with beautiful prose. Simple prose, relatively short sentences, common vocabulary. It's meant to be read by busy business people, not for a day at the beach.
* keep it simple: No need for subplots or an extended cast of characters. Stick to what's needed to tell the story and not more.
This particular novel makes good use of Aesop's fables (an early form of didactic fiction), connecting individual fables to situations the main character is encountering at work.
Image by Carla216 via Flickr
I grew up with the Fables of La Fontaine rather than Aesop's fables, but it's the same idea. The story starts with a good amount of whining about how bad bosses can be, but slowly, the main character learns to handle her reactions to the three "Bs" (bad boss behavior) and how not to be a victim. The cases of bad boss behavior she confronts are a little exaggerated. They may all be based on true stories, but I would hope no one would be so unlucky as to be exposed to all of them at once in one job.
I wrote in an earlier post about the role of the advisor in the business novel. This business novel doesn't have an advisor. There are a couple of people around the main character who provide useful insights and encouragement, but no single individual has all the answers. That worked very well in the story and it's much more realistic than the "all-knowing" advisor.
There are many parallels between this business novel and what I am trying to write in Learning Log. At the end of the book, there are discussion questions for facilitation and sections meant for specific audiences (employees, leaders, etc.), something I've also tried to incorporate in Learning Log. In many ways, it's telling me that I'm on the right track and that I have more work to do to make my novel as good as The Boss.
PS: I've also found 3-4 other business novels and discovered that Japanese business novels are quite popular. I wonder if the French have written any.
Saturday, August 15, 2009
Organizing Your Desktop
I used to accumulate folders all over my desktop and over time, it became difficult to find what I needed. I've found a way to make it a little easier to find my stuff quickly. It revolves around three elements:
1. A core visual on my desktop that helps to organize folders according to key work-related tasks. I use Cmap to do the diagram.
2. Shortcuts to folders I need to access most often
3. Regular review and reorganizing based on how work is evolving.
The diagram below is based on my real world job.
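Element 2 could even be scripted so the shortcuts are easy to regenerate after each review -- here is a minimal sketch in Python, where the folder names are just placeholders for the key work tasks and symlinks stand in for proper desktop shortcuts (on Windows, real .lnk shortcuts or even symlinks may need extra tooling or privileges):

```python
from pathlib import Path
import os

# Placeholder folders; replace with whatever the current key tasks are.
DESKTOP = Path.home() / "Desktop"
SHORTCUTS = {
    "Projects - Current": Path.home() / "Documents" / "Projects" / "Current",
    "Reference - KM": Path.home() / "Documents" / "Reference" / "KM",
    "Admin - Reporting": Path.home() / "Documents" / "Admin" / "Reporting",
}

for name, target in SHORTCUTS.items():
    link = DESKTOP / name
    if link.is_symlink():
        link.unlink()  # remove the old link so the desktop matches the current list
    if not target.exists():
        print(f"skipped {name}: {target} does not exist")
        continue
    os.symlink(target, link, target_is_directory=True)
    print(f"linked {link} -> {target}")
```

Element 3, the regular review, then becomes a matter of editing the list of folders and rerunning the script.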
Thursday, August 13, 2009
Vanity Awards
Here's how the email starts:
"I am pleased to announce that Knowledge For Development LLC has been selected for the 2009 Best of Arlington Award in the Computer Operator Training category by the US Commerce Association."
And then there's a nice image of the award plaque. Wow! Awards are nice, aren't they? Except this one might be embarrassing to display.
Step one: Read with both eyes open to spot the red flags all over it
* The message was sent using an email address only posted on my website.
* The sender doesn't seem to know my name.
* My company, Knowledge for Development, LLC, although technically still registered, hasn't been operating for several years.
* Since when would any of my activities qualify as "Computer Operator Training"?
Step two: What's the US Commerce Association anyway?
They do have a simple website. I won't even point you to it. Anyone can have a website.
Step three: What are other people saying about it?
The following article was found just one item below the US Commerce Association website in a simple Google search:
All That Glitters? US Commerce Association Awards to Biz May Not Be What They Seem
I didn't go beyond that. I don't want to know how much they wanted to charge me to send me the award plaque. That's essentially the business they're in: selling vanity awards. They're just not very upfront about it.
A few months back, we received in the mail a very fancy package addressed to "The Parents of ______." They had my daughter's name and even the name of one of her teachers who had supposedly recommended her for a prestigious and highly selective -- not to mention expensive -- leadership program. That took me a little longer to spot because my daughter happens to be a straight "A" student and a teacher did indeed recommend her for the program. The program's website was very similar to what I found at the US Commerce Association site. Other sites talking about the program provided conflicting information about whether it was a fraud or not. Some parents seemed to think it had been a great learning opportunity for their kids. Sometimes it's not outright fraud, but you are being manipulated into buying something you would not have bought if you had not been told how great you (or your kids) were.
In any case, you were paying a high price for the program and you were inclined not to look at the price because your kid had been selected and not sending them would be to deprive them of a great opportunity they had earned. I explained the whole thing to my daughter and we agreed the money would be better spent on some other great opportunity she could pick herself. I also suggested to my daughter that she should keep the fancy mailing package as a reminder that all that glitters isn't gold. Great lesson for her!
"I am pleased to announce that Knowledge For Development LLC has been selected for the 2009 Best of Arlington Award in the Computer Operator Training category by the US Commerce Association."
And then there's a nice image of the award plaque. Wow! Awards are nice, aren't they? Except this one might be embarrassing to display.
Step one: Read with both eyes open to spot the red flags all over it
* The message was sent using an email address only posted on my website.
* The sender doesn't seem to know my name.
* My company, Knowledge for Development, LLC, although technically still registered, hasn't been operating for several years.
* Since when would any of my activities qualify as "Computer Operator Training"?
Step two: What's the US Commerce Association anyway?
They do have a simple website. I won't even point you to it. Anyone can have a website.
Step three: What are other people saying about it?
The following article was found just one item below the US Commerce Association website in a simple Google search:
All That Glitters? US Commerce Association Awards to Biz May Not Be What They Seem
I didn't go beyond that. I don't want to know how much they wanted to charge me to send me the award plaque. That's essentially what they are in the business of, selling you vanity awards. They're just not very upfront about it.
A few months back, we received in the mail a very fancy package addressed to "The Parents of ______." They had my daughter's name and even the name of one of her teachers who had supposedly recommended her for a prestigious and highly selective -- not to mention expensive -- leadership program. That took me a little longer to spot because my daughter happens to be a straight "A" student and a teacher did indeed recommend her for the program. The program's website was very similar to what I found at the US Commerce Association site. Other sites talking about the program were providing conflicting information about whether it was a fraud or not. Some parents seem to think it had been a great learning opportunity for their kids. Sometimes it's not a straightforward fraud but you are getting manipulated into buying something you would not have bought if you had not been told how great you (or your kids) were.
In any case, you were paying a high price for the program and you were inclined not to look at the price because your kid had been selected and not sending them would be to deprive them of a great opportunity they had earned. I explained the whole thing to my daughter and we agreed the money would be better spent on some other great opportunity she could pick herself. I also suggested to my daughter that she should keep the fancy mailing package as a reminder that all that glitters isn't gold. Great lesson for her!
Tuesday, August 11, 2009
Foreign Assistance Revitalization and Accountability Act of 2009
I don't usually get that excited about new bills presented to Congress but I figured I had to read this one. The Foreign Assistance Revitalization and Accountability Act of 2009 is out.
I printed all 60+ pages of it (sorry!) and went at it with a pink highlighter. In some sections, I found myself highlighting everything, so I stopped highlighting.
I was particularly interested in the section below:
(p. 9)
"Sec.624B Office for Learning, Evaluation and Analysis in Development.
(1) Achieving United States foreign policy objectives requires the consistent and systematic evaluation of the impact of United States foreign assistance programs and analysis on what programs work and why, when, and where they work;
(2) the design of assistance programs and projects should include the collection of relevant data required to measure outcomes and impacts;
(3) the design of assistance programs and projects should reflect the knowledge gained from evaluation and analysis;
(4) a culture and practice of high quality evaluation should be revitalized at agencies managing foreign assistance programs, which requires that the concepts of evaluation and analysis are used to inform policy and programmatic decisions, including the training of aid professionals in evaluation design and implementation;
(5) the effective and efficient use of funds cannot be achieved without an understanding of how lessons learned are applied in various environments, and under similar or different conditions; and
(6) project evaluations should be used as source of data when running broader analyses of development outcomes and impacts.
None of this is very new, particularly aggressive or revolutionary. It's common sense. The problem I sense is that it fails to acknowledge that M&E, as it has been practiced in international development, isn't necessarily going to provide the answers we're all looking for. Evaluation is done when the project is over. That's too late to change anything about how that particular project was run. Something has to be done while the project is being implemented. Something has to be done to ensure that the team implementing the project is fully engaged in learning. Technically, that's what the "M" for monitoring is meant to do.
Instead of putting so much emphasis on the "evaluation" part of the M&E equation, and trying to do "rigorous impact assessments", I would want to focus much more on developing more meaningful monitoring. Meaningful monitoring could use some insights from knowledge management. You don't do knowledge management around projects by waiting till the end of a project to hold an After-Action-Review and collect lessons learned. If you try to do that, you're missing the point. However, if you hold regular reviews and you ask the right kinds of questions, you're more likely to encourage project learning. If you have a project that is engaged in active learning, you are not only more likely to have a successful project but you will increase your chances of being able to gather relevant lessons. Asking the right kinds of questions is critical here. You can limit yourself to questions like "did we meet the target this month?" or you can ask the more interesting "why" and "how" questions.
Traditional monitoring involves setting up a complex set of variables to monitor and overly complex procedures for collecting data... all of which tends not to be developed in time, and is soon forgotten and dismissed as useless because it is too rigid to adapt to the changing environment within which the project operates. [I may be heavily biased by personal experiences. But then, don't we learn best from personal experience?]
I know the comparison is a stretch, but at NASA, the safety officer assigned to a project is part of an independent unit and doesn't have to feel any pressure from the project management team because he or she doesn't report to project management. If something doesn't look right, the safety officer has the authority to stop the work.
If monitoring and evaluation is to be taken seriously within USAID, I suspect that it will require a clearer separation of M&E functions from the project management functions. If the monitoring function is closely linked to project reporting and project reporting is meant to satisfy HQ that everything is rosy, then the monitoring function fails to perform. Worse is when monitoring is turned into a number crunching exercise that doesn't involve any analysis of what is really going on behind the numbers. Third party evaluators need to be truly independent. The only way that is likely to happen is if they are USAID employees reporting to an independent M&E office.
I would also want more emphasis on culture change. As long as the prevailing culture is constantly in search of "success stories," and contractor incentives are what they are, there will be resistance to taking an honest and rigorous look at outcomes and impacts. Without an honest and rigorous look at outcomes and impacts, the agency will continue to find it difficult to learn. If you can't change the prevailing culture fast enough, you need to establish an independent authority to handle the M&E functions or train a new breed of evaluation specialists who don't have to worry about job security.
My first hand experience with USAID-funded impact assessments has led me to question whether those who ask for impact assessments are willing to acknowledge that they may not get the "success story" they are hoping for.
Hmm.... I guess I still have strong opinions about M&E. I tried to get away from it.
I've always thought that M&E was closely related to Knowledge Management, but I also thought it was the result of my own career path and overall framework. (See my core experience concept map on my new website)
Watch out for these M&E and Knowledge Management connections:
(p. 12)
(6) establish annual evaluation and research agendas and objectives that are responsive to policy and programmatic priorities;
If you're going to do research, why not make it "action research"? Keep it close to the ground and make it immediately useful to those involved in implementing projects on the ground. Then you can aggregate the ground-based research findings and figure out what to do at the policy and programmatic levels. Otherwise you'll end up with research that's based on HQ priorities and not sufficiently relevant to the front lines. If you're going to try to capture knowledge that is highly relevant to the organization, make sure you're doing it from the ground up and not the other way around. Knowledge needs to be relevant to front-line workers, not just to the policy makers.
(p. 12)
(11) develop a clearinghouse capacity for the dissemination of knowledge and lessons learned to USAID professionals, implementing partners, the international aid community, and aid recipient governments, and as a repository of knowledge on lessons learned;
I'm glad at least the paragraph doesn't include the word "database". I'm hoping there's room for interpretation. I'd love to be involved in this. Knowledge management has a lot to offer here, but we need to remember that knowledge management (an organizational approach) isn't exactly the same as Knowledge for Development. Knowledge management can be an internal strategy. As indicated in para. (11) above, the dissemination of knowledge and lessons learned needs to go well beyond the walls of the organization itself. That's both a challenge and an opportunity.
(p. 12)
(12) distribute evaluation and research reports internally and make this material available online to the public; and
Do project staff really have the time to read evaluation and research reports? Do the people who design projects take the time to read them? I don't mean to suggest they're at fault. What probably needs to happen, however, is that report findings and key lessons are made more user-friendly; otherwise, they remain "lessons filed" rather than "lessons learned."
In my current job with the NASA Goddard Space Flight Center, I've been very fortunate to witness the use of case studies as a very powerful approach to transmitting lessons learned. Case studies often originate from a massive Accident Investigation Report that very few people will ever read from end to end. Case studies extract key lessons from a lengthy report and present them in a more engaging manner. It's also not enough to expect people to access the relevant reports on their own. There has to be some push, some training. The same case studies can be used in training sessions.
These don't feel like well-thought-out ideas, but at least they're out of my head and I can get back to them later when something more refined comes to mind. If I waited for a perfect paragraph to emerge, I wouldn't write much at all.
Friday, August 07, 2009
Learning from Success and Failure (a follow up)
I am tired of reading statements like "We learn best from failure" and "We learn best from success." No. We learn best when we pay attention to what happened, how it happened, and why. Whether it was a success or a failure doesn't make a difference in terms of our capacity to learn from an event.
A caveat or two:
- An organization's culture and existing processes may make it easier or harder to systematically learn from either success or failure.
- There may be a natural propensity to learn from failure (simply because it hurts and we don't want to do it again). Even if that can be demonstrated, it certainly doesn't mean that we can't learn from success. If we choose to learn from success and we put the right processes in place, there's no reason we can't do it.
Sunday, August 02, 2009
Organizational Learning and Fading Memories
Warning: Learning doesn't last.
Lessons learned can be slowly forgotten over time. Memories fade. When something is "learned", is it permanently imprinted in our memories? No. We become complacent again. We forget. We may not forget everything but we forget the details, the how and the why. Lessons may be institutionalized through new rules and processes as a result of an accident -- to ensure it doesn't happen again -- but with the passage of time, it's just another rule, soon disassociated from the original incident or accident. As soon as people no longer understand the "why" associated with a rule or process, it can be dismissed as bureaucratic red tape and soon ignored or frequently bypassed.
Remember Chernobyl? Remember Bhopal? Remember the Tenerife double aircraft disaster?
What do you remember about them?
Everyone remembers the Titanic, but what exactly do we remember about it? Do we need to be reminded of the details of why and how it happened on a regular basis?
We pay most attention to the why and how just after an accident happens because everyone is focused on "how could it possibly happen?" and "who is responsible?" What we really need is a process for reminding people of the why and how when they think they least need it, when everything is going well and they start thinking it could never happen to them.
I'm also wondering about other factors:
1) Proximity: What's the relationship between an individual's "proximity" or level of involvement with an accident or related lessons on the one hand, and the declining memory curve on the other? Does firsthand "learning" last longer?
2) Intensity: What's the relationship between the intensity of the failure (e.g. human lives lost vs. a failed project that didn't achieve its objectives), the extent to which the causes of failure are investigated, and the speed with which memories of the failure fade and lessons are unlearned?
3) Dynamic nature of lessons: Lessons need to be "updated" regularly based on the most recent history and discoveries. Even if you've learned something based on firsthand experience, you still need to "update" that knowledge.
Rules and mandated processes need to remain linked to their original rationale. When someone is told that they need to follow rule x, y, z, they should be able to ask "why" and to get a straight answer other than 1) that's how we've always done it, or 2) that's the rule. If you understand the why and the rationale makes sense, you're much more likely to follow the rule.