Showing posts with label NASA. Show all posts

Monday, August 14, 2017

Making New Mistakes - Learning @ NASA - August 16th Webinar Open to All




I'll be joining NASA colleagues Michael Bell of Kennedy Space Center and Jennifer Stevens of Marshall Space Flight Center to talk about how NASA addresses lessons learned.  My focus at the Goddard Space Flight Center has been working with projects to institutionalize group reflection activities such as the Pause and Learn, facilitating group learning and documenting lessons for dissemination, with an emphasis on knowledge flows and learning rather than lessons in a database.

This webinar is open to the public and there should be time for Q&A.
________________________________________

What?  You missed it.  You can catch up here.

Saturday, October 01, 2016

USAID and NASA - A Tentative Comparison of Industry Trends and Current Knowledge Management Challenges


The table below doesn't claim to be a thorough comparison of USAID and NASA.  It's a quick glimpse at key characteristics that impact current knowledge management challenges, inspired by the SID - Future of AID session earlier this week and about 10 years of practical experience in both of these worlds.

This deserves much more reflection and more than a blog post and table.   It could be a full book, but I can't answer the "SO WHAT?" question.  I keep coming up with new mini-insights that need to be connected somehow to build the bigger puzzle. All I'm really saying is that the two agencies are not that different and key knowledge management challenges are common across industries even if NASA is perceived as being well ahead of USAID from a Knowledge Management perspective.


US Government Agency / Industry: USAID / International Development vs. NASA / Aerospace

Goal
USAID / International Development: Global Economic Development, Poverty Reduction
NASA / Aerospace: Science & Exploration

Programs and Activities implemented to achieve the goal
USAID / International Development: Broad commitment to the SDGs; country strategies; sector-specific programs; individual projects
NASA / Aerospace: High-level strategies in each key space science domain (astrophysics, heliophysics, Earth science, etc.); programs and individual missions

Implementation Models
USAID / International Development: Public-private partnerships; contracts and grants with implementing non-profit and for-profit private sector organizations. International collaboration: working within the United Nations system.
NASA / Aerospace: Increased emphasis on private sector involvement; continued partnerships with industry as contractors and with academia as partners/contractors; partnerships with other countries’ space programs. International collaboration: the Space Station.

Changes in the industry
USAID / International Development: New entrants:
·        Countries like China and India, operating under different models, different rules
·        Private sector investors
·        Large individual donors and corporate donors
NASA / Aerospace: New entrants:
·        Countries with new space ambitions
·        Private sector taking over roles previously owned by government (transport to the Space Station, launch services, etc.)

Challenges
USAID / International Development: Rapidly changing global economic and political environment; need to explore new implementation models. NEED TO ADAPT FASTER, THEREFORE LEARN FASTER.
NASA / Aerospace: Rapidly changing technological innovation and implementation models. NEED TO ADAPT FAST, THEREFORE LEARN FASTER.

Key differences
USAID / International Development: Measuring success (“IMPACT”) is a perennial challenge. Scaling and replicability become difficult because there isn’t enough attention paid to “HOW” the activity was made to be successful; little emphasis on understanding the complex set of factors leading to success (see previous post). Very little rigor in program and project implementation (subjective judgment here, based on personal experience/perception). What’s needed: adaptive management, CRITICAL THINKING.
NASA / Aerospace: Measuring success has never been an issue; success and failure are very clear and visible. Identifying technical failures is a challenge when they happen on orbit, but the biggest challenge is identifying AND CORRECTING organizational failures. High degree of rigor in project management (increasing rigor on the cost and schedule dimensions), sometimes to the point of being a serious burden and impeding innovation. What’s needed: tailored application of project management “requirements”, CRITICAL THINKING.

Knowledge Management Challenges
USAID / International Development:
·        High turnover, shuffling around the same top contractors, same group of consultants (small world)
·        High barriers to entry (perhaps that’s changing with the emergence of new actors)
·        Generalists vs. specialists and the need for a holistic, multi-disciplinary approach to problem solving
·        North-South discourse/issue, and the reinforcing impact of information technology
·        Absorptive capacity, perceived weakness of local knowledge capture/knowledge transfer
Confusion around M&E, Knowledge Management and communications/PR resulting from the incentives structure (see previous blog post).
DIFFICULTY IDENTIFYING REAL LESSONS, SPECIFYING “SUCCESS FACTORS”, INCLUDING CONTEXTUAL FACTORS. NEED TO LEARN TO ADAPT AND INNOVATE. Learning from flawed data on impact studies is… flawed. Need to come up with something much more forward-looking, agile and adaptive.
NASA / Aerospace:
·        Retiring, aging workforce with critical experience-based knowledge is leaving
·        New entrants/partners are not using tested/proven approaches (steep learning curve), yet that’s how they can take risks and innovate
·        Need for insights from other fields, increased openness to insights from non-technical fields
·        Perennial challenge of cross-project knowledge transfer (“we are unique” mentality) and knowledge exchange across organizational boundaries
FINDING THE BALANCE BETWEEN LESSONS LEARNED (OLD KNOWLEDGE) AND LEARNING TO ADAPT AND INNOVATE (NEW KNOWLEDGE).
This was a case where an insight map didn't seem to fit the purpose, yet I bet it would help me to connect the dots a little better. 

I had previously written about the two organizations:  Foreign Assistance Revitalization and Accountability Act of 2009, August 11, 2009.  A great deal of USAID's current focus on Monitoring, Evaluation, Knowledge Management and the CLA (Collaborating, Learning and Adapting) model emerged out of that 2009 legislation.

See also "Defining Success and Failure, Managing Risks", July 29, 2009.

______________________________________________
12/17/2016 - Addendum - There are many interesting and related insights in Matthew Syed's Black Box Thinking, which investigates how certain industries are much better (more thorough) at learning from their mistakes than others.

Thursday, August 04, 2016

Mapping Lessons Learned to Improve Contextual Learning at NASA - APQC's August 2016 Webinar


A special invitation to join Dr. Rogers and me for a presentation on Mapping Lessons Learned at NASA. 




"If you missed APQC's 2016 KM Conference this past April, we've got a treat for you! Join us on Thursday, August 18 at 10:30 a.m. CDT for the August KM webinar, Mapping Lessons Learned to Improve Contextual Learning at NASA.
NASA Goddard Space Flight Center’s Chief Knowledge Officer, Dr. Edward Rogers, and Barbara Fillip from Inuteq, will repeat their highly-rated session from the conference on how Goddard has designed a KM program to fit the needs of the organization, focusing on one of the most essential aspects of the program: the process for documenting lessons learned from projects using concept maps.

This presentation will have a very brief intro to concept mapping, followed by an explanation of how and why it is used at NASA. Dr. Fillip and Dr. Rogers have worked on this together for seven years and will jointly address benefits of the approach as well as remaining challenges.
Can't make the webinar? Register anyway and you will receive a copy of the slides and recording, regardless of attendance."



FOLLOW UP:  We had more than 400 live attendees and the webinar was very well received.  The format of a 30-minute presentation followed by 30 minutes of Q&A worked well.


Sunday, July 31, 2011

Signs of KM Maturation

More than three years ago (May 2008), I joined the Office of the Chief Knowledge Officer at NASA's Goddard Space Flight Center.  As a contractor rather than a civil servant, I was (and still am) working under a Task Order and slowly learning how on-site contractors are supposed to work.  I had worked on Government contracts before, but in a very different context and not on-site.  At first, I thought I was responsible for expanding the reach of the center's KM practices so that KM wasn't an ad hoc affair but a set of practices embedded into the projects' life-cycle.  Ideally, projects would complete a set of KM activities on a regular basis, just like they go through key reviews and reach critical milestones.  It would be part of what they do.  In a perfect world, they would be doing it because they see value in it rather than because it's a requirement.  A lot of groundwork had already been laid by the Chief Knowledge Officer, so it made sense, and at the time it didn't look overly ambitious.  I was naive.  The most important thing I have learned over the past three years is that establishing a KM program takes time, even when you have dedicated staff.  KM staff need to be resilient, persistent, and willing to constantly engage in small experiments to refine and adapt their approach, take advantage of opportunities that present themselves, and avoid the traps of KM.

If everything works well, as of October I will finally get to work more directly with the projects to embed some KM practices in their life-cycle.  This is happening not just as the result of a fortuitous coincidence of budget issues; it is made possible by the fact that, in the intervening years, our office has worked very hard to make KM practices work in a critical strategic area of the project organization.  Having demonstrated a successful approach in one small yet critical office, we are being offered an entry into the big guys' world: the mission projects.

When KM is funded as an overhead function, it is at risk of being de-funded.  When the project office is willing to pay not just for an annual KM event but for a full-time KM position, you know you're doing something right.  I'm not sure this is an indicator that features prominently in KM maturation models.  Is it possible that the source of funding is a better indication of success than the overall size of a KM office?  I feel I have just been given a real opportunity and I don't want to miss the boat. 
Of course, a lot could go wrong between now and October.  It is still very much a contractor position, therefore subject to a lot of budget uncertainty in the medium to long term.  If this opportunity moves forward as planned (I'm optimistic about it), there are no guarantees that we will succeed. There are no guarantees that what we did with that one small office can be a blueprint for other efforts, yet we have learned a lot with that effort and with three years under my belt in the organization, I am now much better equipped to assess the environment and admit that it is ambitious.

Working directly with the projects, rather than being perceived as a separate office, is an important step forward.  It has a lot to do with ownership of the KM activities.  When KM is something that the KM office does, it is typically an overhead, disposable activity.  When KM is embedded in projects, it becomes part of what they do, a way of doing work.

Saturday, January 08, 2011

Links of the Week (01/08/2011)

Read Knowledge Management Below the Radar
January 4, 2011, by Adam Richardson
My Comments: If my own experience is of any value, it's best to allow KM to thrive "under the radar" wherever it wants to sprout across the organization rather than try to control it centrally from a KM Office.  The challenge is that letting KM-related activities emerge and grow organically may result in a multitude of pockets of knowledge and associated technologies that are not necessarily well integrated or connected.  You can end up with knowledge silos.  So the KM Office, if there is one, has a role to play in connecting the dots, providing broad guidance, and, very importantly, filling the gaps: doing what is critical from a KM perspective that isn't already being done. 

Good examples of that are the case studies our office develops based on the experience of projects.  Project teams may focus on their own lessons learned, which they should ideally handle internally, with the KM Office's support as needed.  However, the project teams are not likely to spend time writing a case study meant to disseminate what they've learned to other projects.  It's something the KM Office can take on as a service to the organization as a whole, facilitating the transfer of lessons from one project to the rest of the organization.

Related Links
  • Office of the Chief Knowledge Officer, Goddard Space Flight Center, NASA (that's where I work)

  • NASA Case Studies


  • Read How to Make Use of Your Organization’s Collective Knowledge – Accessing the Knowledge of the Whole Organization - Part I, by Nancy Dixon

    My Comments: Nancy Dixon's posts aren't your typical blog posts; they're well-thought-out essays.  They usually come in a series on a particular topic.  She talks about "sensemaking" as the first step in making use of an organization's collective knowledge.  In a practical setting, we call it "Pause and Learn": a two-hour team reflection activity that enables members of a team to have a conversation about the salient aspects of a particular project experience.  For project teams, one of the challenges is accepting that this relatively simple conversation is valuable (i.e., worth spending precious time on).

    Related Link
  • Knowledge Management at Goddard: Pause & Learn

  • How do Rocket Scientists Learn?

    Tuesday, July 13, 2010

    Mapping to Support Organizational Learning

    In June, I attended the Third International Conference on Knowledge Management for Space Missions in Darmstadt, Germany.  I was there to deliver a presentation titled "Mapping To Support Organizational Learning" and to learn from other KM initiatives, particularly within the European Space Agency (ESA).  My own presentation had a narrow focus, providing some insights into a process we've developed at NASA's Goddard Space Flight Center for capturing and reusing lessons or insights within the context of a "local learning loop."  It's not a process that necessarily scales up well to institution-wide lessons learned, but it appears to be quite useful for ensuring that teams learn from their experiences and that those experiences are shared with the teams that follow.

    Darmstadt is an interesting city where just about everyone appears to own a bicycle.  It was a lot of fun to hear the French, Italians, Germans, British, etc. all presenting in English with their respective accents.  I felt totally at home.


    Friday, November 06, 2009

    The Office of the Chief Knowledge Officer at NASA's Goddard Space Flight Center

    I am trying to improve the Google Search results for a specific web site and testing some approaches. One of them involves creating outside links to the site. I realize they have to be quality links and this probably won't qualify as a quality link but there's no harm in trying.

    The Office of the Chief Knowledge Officer at NASA's Goddard Space Flight Center (GSFC) is the office responsible for Knowledge Management at Goddard. That's where I work. Our office is best known internally for the Road to Mission Success Workshop (also known as RTMS) and best known externally for the NASA Case Studies developed by the office. We also implement Pause and Learn (PaL) sessions, which are the NASA equivalent of After-Action Reviews (AARs).

    The office is led by Dr. Edward Rogers, Chief Knowledge Officer.

    And, for the latest news about what Goddard is doing, check out the website of NASA's Goddard Space Flight Center.

    Tuesday, August 11, 2009

    Foreign Assistance Revitalization and Accountability Act of 2009

    I don't usually get that excited about new bills presented to Congress but I figured I had to read this one. The Foreign Assistance Revitalization and Accountability Act of 2009 is out.

    I printed all 60+ pages of it (sorry!) and went at it with a pink highlighter. In some sections, I found myself highlighting everything, so I stopped the highlighting procedure.

    I was particularly interested in the section below:

    (p. 9)
    "Sec.624B Office for Learning, Evaluation and Analysis in Development.
    (1) Achieving United States foreign policy objectives requires the consistent and systematic evaluation of the impact of United States foreign assistance programs and analysis on what programs work and why, when, and where they work;
    (2) the design of assistance programs and projects should include the collection of relevant data required to measure outcomes and impacts;

    (3) the design of assistance programs and projects should reflect the knowledge gained from evaluation and analysis;

    (4) a culture and practice of high quality evaluation should be revitalized at agencies managing foreign assistance programs, which requires that the concepts of evaluation and analysis are used to inform policy and programmatic decisions, including the training of aid professionals in evaluation design and implementation;
    (5) the effective and efficient use of funds cannot be achieved without an understanding of how lessons learned are applied in various environments, and under similar or different conditions; and
    (6) project evaluations should be used as source of data when running broader analyses of development outcomes and impacts."

    None of this is very new, particularly aggressive, or revolutionary. It's common sense. The problem I sense is that it fails to acknowledge that M&E, as it has been practiced in international development, isn't necessarily going to provide the answers we're all looking for. Evaluation is done when the project is over. That's too late to change anything about how that particular project was run. Something has to be done while the project is being implemented. Something has to be done to ensure that the team implementing the project is fully engaged in learning. Technically, that's what the "M" for monitoring is meant to do.

    Instead of putting so much emphasis on the "evaluation" part of the M&E equation, and trying to do "rigorous impact assessments", I would want to focus much more on developing more meaningful monitoring. Meaningful monitoring could use some insights from knowledge management. You don't do knowledge management around projects by waiting till the end of a project to hold an After-Action-Review and collect lessons learned. If you try to do that, you're missing the point. However, if you hold regular reviews and you ask the right kinds of questions, you're more likely to encourage project learning. If you have a project that is engaged in active learning, you are not only more likely to have a successful project but you will increase your chances of being able to gather relevant lessons. Asking the right kinds of questions is critical here. You can limit yourself to questions like "did we meet the target this month?" or you can ask the more interesting "why" and "how" questions.

    Traditional monitoring involves setting up a complex set of variables to monitor and overly complex procedures for collecting data, all of which tends not to be developed in time, and is soon forgotten and dismissed as useless because it is too rigid to adapt to the changing environment within which the project operates. [I may be heavily biased by personal experiences. But then, don't we learn best from personal experience?]

    I know the comparison is a stretch, but at NASA, the safety officer assigned to a project is part of an independent unit and doesn't have to feel any pressure from the project management team because he or she doesn't report to project management. If something doesn't look right, he or she has the authority to stop the work.

    If monitoring and evaluation is to be taken seriously within USAID, I suspect that it will require a clearer separation of M&E functions from the project management functions. If the monitoring function is closely linked to project reporting and project reporting is meant to satisfy HQ that everything is rosy, then the monitoring function fails to perform. Worse is when monitoring is turned into a number-crunching exercise that doesn't involve any analysis of what is really going on behind the numbers. Third-party evaluators need to be truly independent. The only way that is likely to happen is if they are USAID employees reporting to an independent M&E office.

    I would also want more emphasis on culture change. As long as the prevailing culture is constantly in search of "success stories," and contractor incentives are what they are, there will be resistance to taking an honest and rigorous look at outcomes and impacts. Without an honest and rigorous look at outcomes and impacts, the agency will continue to find it difficult to learn. If you can't change the prevailing culture fast enough, you need to establish an independent authority to handle the M&E functions or train a new breed of evaluation specialists who don't have to worry about job security.

    My first-hand experience with USAID-funded impact assessments has led me to question whether those who ask for impact assessments are willing to acknowledge that they may not get the "success story" they are hoping for.

    Hmm.... I guess I still have strong opinions about M&E. I tried to get away from it.

    I've always thought that M&E was closely related to Knowledge Management, but I also thought it was the result of my own career path and overall framework. (See my core experience concept map on my new website)

    Watch out for these M&E and Knowledge Management connections:

    (p 12)
    (6) establish annual evaluation and research agendas and objectives that are responsive to policy and programmatic priorities;


    If you're going to do research, why not make it "action research"? Keep it close to the ground and make it immediately useful to those involved in implementing projects on the ground. Then you can aggregate the ground-based research findings and figure out what to do at the policy and programmatic levels. Otherwise you'll end up with research that's based on HQ priorities and not sufficiently relevant to the front lines. If you're going to try to capture knowledge that is highly relevant to the organization, make sure you're doing it from the ground up and not the other way around. Knowledge needs to be relevant to front-line workers, not just to the policy makers.

    (p. 12)
    (11) develop a clearinghouse capacity for the dissemination of knowledge and lessons learned to USAID professionals, implementing partners, the international aid community, and aid recipient governments, and as a repository of knowledge on lessons learned;

    I'm glad at least the paragraph doesn't include the word "database". I'm hoping there's room for interpretation. I'd love to be involved in this. Knowledge management has a lot to offer here, but we need to remember that knowledge management (an organizational approach) isn't exactly the same as Knowledge for Development. Knowledge management can be an internal strategy. As indicated in para. (11) above, the dissemination of knowledge and lessons learned needs to go well beyond the walls of the organization itself. That's both a challenge and an opportunity.

    (p. 12)
    (12) distribute evaluation and research reports internally and make this material available online to the public; and

    Do project staff really have the time to read evaluation and research reports? Do the people who design projects take the time to read evaluation and research reports? I don't mean to suggest they're at fault. What probably needs to happen, however, is that report findings and key lessons are made more user-friendly; otherwise, they remain "lessons filed" rather than "lessons learned."

    In my current job with the NASA Goddard Space Flight Center, I've been very fortunate to witness the use of case studies as a very powerful approach to transmitting lessons learned. Case studies often originate from a massive Accident Investigation Report that very few people will ever read from end to end. Case studies extract key lessons from a lengthy report and present them in a more engaging manner. It's also not enough to expect people to access the relevant reports on their own. There has to be some push, some training. The same case studies can be used in training sessions.

    These don't feel like well thought out ideas but then, at least they're out of my head and I can get back to them later when something more refined comes to mind. If I waited for a perfect paragraph to emerge, I wouldn't write much at all.



    Wednesday, July 29, 2009

    Defining Success and Failure, Managing Risks

    When I started working as a NASA contractor more than a year ago, I quickly noticed the differences between NASA and international development. No kidding! I didn't expect much in common but I wasn't sure at all what to expect with NASA.

    NASA's daily routines, all the hard work that gets missions into space, revolve around minimizing the risk of failure. Risk management is a big deal. When you build a spacecraft that costs a lot of money, you can't afford to have it blow up or become floating space debris. A lot of attention is paid to ensuring success by analyzing every possible failure mode and coming up with mitigation strategies when there is a residual risk. There are methodologies and full-time risk manager positions handling all this.

    Failures tend to be obvious, even when they are not catastrophic. Either something is working as planned or it is not. You can have a partial failure but you know exactly what is and what isn't working. It's not something you can hide.

    As in the transportation industry, catastrophic failures are studied extensively to understand the causes of the failure and to make sure that this particular type of accident isn't repeated. NASA tries very hard to learn from its failures. Accidents in the human spaceflight program may get the most public coverage, but all accidents and failures in space and on the ground are dissected to understand root causes. Ensuring the lessons learned from such detailed studies are embedded in project routines to avoid repeat failures is a more difficult task. Many of the contributing factors to a failure are soft issues, like team communications, that don't have an easy technical solution ready to apply uniformly across missions.

    On the success side, the typical discourse sounds very much like PR and has little to do with trying to understand what went right when a mission is successful. Little attention is paid to all the factors that made it possible for a particular mission to succeed. Success is defined as the absence of failure and doesn't seem to require extensive "study." Success is normal; failure is the anomaly to focus on.

    I should add that success is defined very clearly and early on in a mission's development. How that success is defined early on has important implications for how the mission is designed and for the types of risks and mitigation strategies that are developed, including what gets chopped when budgets are cut.

    Turning to the field of international development, it's as if all of that is reversed. Failure is something projects and donors hardly ever admit to because they can get away with it. Failure is often far away, relatively invisible, easily forgotten. Success and failure are not clear cut because, among other things, projects (and their multiple stakeholders) often fail to come to a common understanding of what will constitute success. Monitoring and Evaluation (M&E), meant to document progress and, eventually, "success", is underfunded or not funded at all, as well as very difficult to execute in a meaningful and unbiased way. There are no incentives to openly talk about failures and why they happen.

    There is no "risk management" beyond perhaps spelling out some assumptions in the early design of a project. In a NASA project, risk management is an ongoing process, not something handled during the project design phase.

    Writing the statement above about the absence of risk management in international development prompted me to go check. I discovered that AusAID, the Australian development agency, does talk about "risk management." See AusGuidelines: Managing Risk. And this turns out to be very timely, since the Australian Council for International Development is doing a workshop on the topic July 29th in Melbourne and July 30th in Sydney.

    My previous experience in international development circles has been that few project managers have been trained as project managers or are aware of, let alone apply, project management methodologies such as those promoted by the Project Management Institute (PMI). That's just not how international development projects are designed and implemented. I did see a trend, especially in IT-related projects, where more sophisticated project management approaches were becoming a requirement.

    So, if NASA needs to learn how to analyze its successes as much as it analyzes its failures, I would suggest that the international development community needs to pay more attention to defining what success and failure mean for any given project or program, and to start applying risk management principles more systematically. Risk management methodologies would need to be adapted to existing international development practices and requirements, and to the specifics of different types of projects.

    See also:
    Charles Tucker, "Fusing Risk Management and Knowledge Management," ASK Magazine, Issue 30.

    Sunday, April 05, 2009

    Knowledge Management in Federal Agencies

    The Federal Knowledge Management Working Group launched a Federal Knowledge Management Initiative a while ago. Members of the group are feverishly working within Action Groups to create sections of a Roadmap document. I'm a little skeptical about the overall value and quality of what is going to emerge as the final document, but if the primary objective of the initiative is to put knowledge management on the agenda of the Obama Administration and the leadership of federal agencies, then it might achieve that.

    I have been participating in two of the Action Groups and in the process, I've learned a few things about "writing by committee", the challenges of writing a coherent piece when the authors come from different perspectives and don't share a common language, using a wiki to work on collaborative writing, how to get group members to volunteer for specific writing or review and editing tasks, and more generally, how to voice disagreement effectively.

    The centerpiece of the initiative is the creation of a Federal KM Center. Sometimes, when you are trying to make a point (as in: there is a need for a Federal KM Center to increase the visibility of KM in Federal Agencies), you end up emphasizing the negative (there are few Chief Knowledge Officers, Federal Agencies employ ad hoc KM practices, etc.) and failing to highlight the real successes. For example, a couple of agencies (especially the Army and NASA) are perceived as good examples to follow and are repeatedly mentioned as such, while many agencies that have developed relevant and successful "knowledge management" practices are much less visible and never mentioned.

    What if the reality is that many more Federal Agencies are implementing Knowledge Management related activities, don't necessarily feel the need for a formal KM program, and achieve great results without one? There is an assumption that if you don't have a formal KM program you're probably not doing enough, not doing much. What if not needing a formal KM program is a sign that you are already ahead of the curve and your KM approach is well integrated in your operations?

    What if an agency that allows its various offices to develop their own best practices or lessons learned activities is more effective than one with a centralized KM office? Which should come first? A centralized KM program? The ad hoc emergence of best practices and lessons learned activities within organizational units? If the objective is to generate quick wins, I would suggest that ad hoc activities at the local level, within organizational units, are more effective. Once those local-level mechanisms are in place, coordination and knowledge sharing across organizational units can help build greater organizational learning at the agency level.

    [Testing Zemanta with this post]