Tuesday, August 11, 2009

Foreign Assistance Revitalization and Accountability Act of 2009

I don't usually get that excited about new bills presented to Congress, but I figured I had to read this one. The Foreign Assistance Revitalization and Accountability Act of 2009 is out.

I printed all 60+ pages of it (sorry!) and went at it with a pink highlighter. In some sections, I found myself highlighting everything, so I gave up on the highlighting altogether.

I was particularly interested in the section below:

(p. 9)
"Sec.624B Office for Learning, Evaluation and Analysis in Development.
(1) Achieving United States foreign policy objectives requires the consistent and systematic evaluation of the impact of United States foreign assistance programs and analysis on what programs work and why, when, and where they work;
(2) the design of assistance programs and projects should include the collection of relevant data required to measure outcomes and impacts;

(3) the design of assistance programs and projects should reflect the knowledge gained from evaluation and analysis;

(4) a culture and practice of high quality evaluation should be revitalized at agencies managing foreign assistance programs, which requires that the concepts of evaluation and analysis are used to inform policy and programmatic decisions, including the training of aid professionals in evaluation design and implementation;
(5) the effective and efficient use of funds cannot be achieved without an understanding of how lessons learned are applied in various environments, and under similar or different conditions; and
(6) project evaluations should be used as source of data when running broader analyses of development outcomes and impacts.

None of this is very new, particularly aggressive, or revolutionary. It's common sense. The problem I sense is that it fails to acknowledge that M&E, as it has been practiced in international development, isn't necessarily going to provide the answers we're all looking for. Evaluation is done when the project is over. That's too late to change anything about how that particular project was run. Something has to be done while the project is being implemented. Something has to be done to ensure that the team implementing the project is fully engaged in learning. Technically, that's what the "M" for monitoring is meant to do.

Instead of putting so much emphasis on the "evaluation" part of the M&E equation and trying to do "rigorous impact assessments," I would want to focus much more on developing more meaningful monitoring. Meaningful monitoring could use some insights from knowledge management. You don't do knowledge management around projects by waiting until the end of a project to hold an After Action Review and collect lessons learned. If you try to do that, you're missing the point. However, if you hold regular reviews and ask the right kinds of questions, you're more likely to encourage project learning. If you have a project that is engaged in active learning, you are not only more likely to have a successful project, but you will also increase your chances of gathering relevant lessons. Asking the right kinds of questions is critical here. You can limit yourself to questions like "Did we meet the target this month?" or you can ask the more interesting "why" and "how" questions.

Traditional monitoring involves setting up a complex set of variables to monitor and overly complex procedures for collecting data, all of which tends not to be ready in time and is soon forgotten or dismissed as useless because it is too rigid to adapt to the changing environment within which the project operates. [I may be heavily biased by personal experience. But then, don't we learn best from personal experience?]

I know the comparison is a stretch, but at NASA, the safety officer assigned to a project is part of an independent unit and doesn't have to feel any pressure from the project management team because he or she doesn't report to project management. If something doesn't look right, the safety officer has the authority to stop the work.

If monitoring and evaluation is to be taken seriously within USAID, I suspect it will require a clearer separation of M&E functions from project management functions. If the monitoring function is closely linked to project reporting, and project reporting is meant to satisfy HQ that everything is rosy, then the monitoring function fails to perform. Worse is when monitoring is turned into a number-crunching exercise that doesn't involve any analysis of what is really going on behind the numbers. Third-party evaluators need to be truly independent, and the only way that is likely to happen is if they are USAID employees reporting to an independent M&E office.

I would also want more emphasis on culture change. As long as the prevailing culture is constantly in search of "success stories," and contractor incentives are what they are, there will be resistance to taking an honest and rigorous look at outcomes and impacts. Without that honest and rigorous look, the agency will continue to find it difficult to learn. If you can't change the prevailing culture fast enough, you need to establish an independent authority to handle the M&E functions or train a new breed of evaluation specialists who don't have to worry about job security.

My firsthand experience with USAID-funded impact assessments has led me to question whether those who ask for impact assessments are willing to acknowledge that they may not get the "success story" they are hoping for.

Hmm.... I guess I still have strong opinions about M&E. I tried to get away from it.

I've always thought that M&E was closely related to Knowledge Management, but I also thought that was just the result of my own career path and overall framework. (See my core experience concept map on my new website.)

Watch out for these M&E and Knowledge Management connections:

(p. 12)
(6) establish annual evaluation and research agendas and objectives that are responsive to policy and programmatic priorities;


If you're going to do research, why not make it "action research"? Keep it close to the ground and make it immediately useful to those implementing projects on the ground. Then you can aggregate the ground-level research findings and figure out what to do at the policy and programmatic levels. Otherwise you'll end up with research that's based on HQ priorities and not sufficiently relevant to the front lines. If you're going to capture knowledge that is highly relevant to the organization, make sure you're doing it from the ground up and not the other way around. Knowledge needs to be relevant to front-line workers, not just to policy makers.

(p. 12)
(11) develop a clearinghouse capacity for the dissemination of knowledge and lessons learned to USAID professionals, implementing partners, the international aid community, and aid recipient governments, and as a repository of knowledge on lessons learned;

I'm glad that at least the paragraph doesn't include the word "database." I'm hoping there's room for interpretation. I'd love to be involved in this. Knowledge management has a lot to offer here, but we need to remember that knowledge management (an organizational approach) isn't exactly the same as Knowledge for Development. Knowledge management can be an internal strategy. As indicated in para. (11) above, however, the dissemination of knowledge and lessons learned needs to go well beyond the walls of the organization itself. That's both a challenge and an opportunity.

(p. 12)
(12) distribute evaluation and research reports internally and make this material available online to the public; and

Do project staff really have the time to read evaluation and research reports? Do the people who design projects take the time to read them? I don't mean to suggest they're at fault. What probably needs to happen is that report findings and key lessons are made more user-friendly; otherwise, they remain "lessons filed" rather than "lessons learned."

In my current job with the NASA Goddard Space Flight Center, I've been fortunate to witness the use of case studies as a very powerful approach to transmitting lessons learned. Case studies often originate from a massive Accident Investigation Report that very few people will ever read from end to end; they extract key lessons from a lengthy report and present them in a more engaging manner. It's also not enough to expect people to access the relevant reports on their own. There has to be some push, some training, and the same case studies can be used in training sessions.

These don't feel like well-thought-out ideas, but at least they're out of my head and I can come back to them later when something more refined comes to mind. If I waited for a perfect paragraph to emerge, I wouldn't write much at all.

