Monday, August 14, 2017

Making New Mistakes - Learning @ NASA - August 16th Webinar Open to All

I'll be joining NASA colleagues Michael Bell of Kennedy Space Center and Jennifer Stevens of Marshall Space Flight Center to talk about how NASA addresses lessons learned.  My focus at the Goddard Space Flight Center has been working with projects to institutionalize group reflection activities, such as the Pause and Learn, as a way of facilitating group learning and documenting lessons for dissemination, with an emphasis on knowledge flows and learning rather than on lessons in a database.

This webinar is open to the public and there should be time for Q&A.

Saturday, August 12, 2017

Quantification Bias

"Give me one example of a  time when a lesson learned was used effectively by a project."
You'd think one example wouldn't be too hard to find.  I'm not being asked "What's the percentage of lessons in the database that are actually applied?"

Then someone will also ask, "What's the ROI of lessons learned activities?  Does it save us any money?  How many failures have lessons learned ever prevented?"

This eternal conversation is one that I'll admit I've avoided at times, perhaps because it's so challenging to provide an answer that will satisfy the person asking these types of questions.

I've addressed metrics in small bites throughout the years, most recently in a metrics anecdote post.  Quantifying "learning from experience" is daunting.  Sometimes I almost want to say "I know it when I see or hear it."  In fact, it's more likely that I'll notice that a lesson has NOT been learned, when I'm having a déjà vu experience during a lessons learned session and hearing something I've heard multiple times before.  I could point Management to those lessons that keep coming back.  I've done that informally, but I have not kept quantitative data, so I can't tell you how many times it's happened in the past year.  I could, however, do a more thorough job of documenting specific instances AND, perhaps even more importantly, of figuring out why it's happening again.

The answer to "why are we not learning this lesson" is never a simple one and it's usually not a single point failure and easy to fix problem.  Sometimes, as I've pointed out in the previous blog post, the root cause of the failure to learn is related to the ownership of lessons.  Making sure Management is aware of the repeated problems isn't the end of it.  In my experience, nothing I bring up to Management is completely new to their ears.  However, in the knowledge manager's role, I also facilitate dialogue between key stakeholders, including Management, through knowledge sharing workshops.  The topics selected for such workshops are typically based on recent themes emerging from lessons learned session.  And so we try to address the pain points as they emerge, but I'll confess that we don't quantify any of it.  Correction, we do the obvious of counting how many people attend the workshops.

There is a general quantification bias in many aspects of work and decision-making.  Everyone wants to make decisions based on evidence.  In most cases, evidence is taken to mean hard data, which is understood to be quantitative data (as opposed to soft, qualitative fluff), as if hard data were always correct and therefore much more useful and reliable than anything else.  The words "evidence" and "data" have become completely associated with quantitative measures.

When people say "where is your data?" they don't mean your two or three data points.  Those are easy to dismiss as anecdotal.  The assumption is that the more data points you have (the bigger your dataset), the more accurate your conclusions must be.  Under certain conditions, perhaps, but certainly not if you're asking the wrong questions in the first place.

I recently came across Tricia Wang's TED Talk, "The Human Insights Missing from Big Data."


Given that Ms. Wang is a data ethnographer (very cool job!), her point of view isn't surprising, and given that I'm more of a qualitative methods person, the fact that I find it relevant and relate to it isn't surprising either.  That's just confirmation bias.  Ms. Wang brought up the quantification bias, which I have often struggled against in my work.  It manifests itself in questions such as "how many hits do you get on the lessons learned database?" or "how many new lessons were generated this past year?"  These (proxy measures of learning) are the simpler questions that have (meaningless) quantitative answers.  Is having a meaningless quantitative answer better or worse than saying that something can't be measured?  I should never say "that can't be measured."  It would be better to say "I don't know how to measure that.  Do you?"

I wouldn't suggest we should all turn to qualitative methods and neglect big data.  We should, however, do a better job of combining qualitative and quantitative approaches.  This isn't news.  It's just one of those lessons we learned in graduate school and then forgot.  We learn and forget just so that we can relearn.

My own bias and expertise stand squarely with qualitative approaches.  It could be simply that, my first degree being in political science, I always have in the back of my mind that decision-making isn't just a matter of having access to the right information or data.  It's part of what makes us human and not machines.

Friday, August 04, 2017

The Ownership of Lessons

Earlier this week I attended a panel discussion on "The Role of Learning in Policymaking" organized by the Society for International Development's Policy and Learning Workgroup. I took a lot of notes because it was all very interesting but I'll focus here on one issue that hit a nerve for me:  Lessons learned ownership.

There are many reasons why some lessons are not "learned": we don't believe them, we don't care enough, we forget them, etc.  I'm only going to focus here on one reason: lack of ownership.  In other words, the hypothesis is that the ownership of a lesson contributes significantly to its utilization.

This lack of ownership comes in (at least) two flavors, two variations on the "not invented here" theme:

1. We don't learn very well from other people; we learn better from our own experience -- and even then it's far from perfect because of personal biases and other issues.  Even if we understand and agree with someone else's lesson, we may not think it applies to us.  We don't own it.

2. We don't like being told what we should learn, especially if someone else's conclusion doesn't match ours.  Why would I care about someone else's idea of what I should learn?  Did I ask for this "feedback"?  Is it being offered in a way that's useful to me?  Sometimes we just don't want to own it.  We actively resist it because we didn't come up with it.

Example:  A donor agency makes policy recommendations to a developing country government based on strong donor-collected "evidence."  Let's face it, we can't get our own government to always act upon strong "evidence," so why do we expect other countries to act upon donor-generated lessons?  Ownership needs to be built in from the beginning, not mandated at the end.  We might all know that, but does it always happen?  I don't think so.

From Ownership to Action
To say that lessons are not learned until something is changed (in policy, procedures, behavior, etc.) is perhaps cliché and misleading, or at least not very useful.  Over the past 9 years of helping project teams identify lessons from their experience, I have found that statement to be disconnected from reality -- or, if not totally disconnected, I have found the one-to-one linear relationship between lesson, action, and "learning" to be a gross oversimplification.  Some of this oversimplification has to do with the lack of discussion of lesson ownership.

Having facilitated more than 100 lessons learned discussion sessions, I can now quickly identify ownership red flags in lessons learned conversations.  A lot has to do with the pronouns being used. I try to provide ground rules upfront encouraging the use of "I" and "we" and making sure the group is clear about who "we" refers to.  Blaming individuals or entities who are not in attendance and hinting at lessons intended for "them" ("They should do ________.") are both big red flags. It doesn't mean the conversation needs to stop, but it needs to be redirected to address ownership issues and ultimately increase the chances that some action will be taken.

At that point, the facilitator's redirect can go in two different directions, and sometimes both are needed:
  • "Assume THEY didn't hear you right now and they're going to keep doing it their way (i.e, they are not going to learn).  What can you do next time to avoid this or at least mitigate the problem?"
  • "Is there an avenue for giving them this feedback so that they might do something about it (i.e., they might learn) and this problem isn't repeated?"
In the real world, where documented lessons don't automatically turn into actions, that's how I try to deal with ownership issues.  I primarily work with project teams, but their work requires interactions with many stakeholders external to the team.  Sometimes what is most needed is to hold separate lessons learned sessions with different sets of stakeholders and then discuss the lessons across those sets.  It's not necessary to look for perfect consensus across the different groups, just to optimize understanding of the different perspectives.

It feels as if I'm only skimming the surface here.  More percolation needed.