Saturday, August 12, 2017

Quantification Bias

"Give me one example of a  time when a lesson learned was used effectively by a project."
You'd think one example wouldn't be too hard to find.  I'm not being asked "What's the percentage of lessons in the database that are actually applied?"

Then someone will also ask, "What's the ROI of lessons learned activities?  Do they save us any money?  How many failures have lessons learned ever prevented?"

This is an eternal conversation, and I'll admit I've avoided it at times, perhaps because it's genuinely hard to provide an answer that will satisfy the person asking these types of questions.

I've addressed metrics in small bites throughout the years, most recently in a metrics anecdote post.  Quantifying "learning from experience" is daunting.  Sometimes I almost want to say "I know it when I see or hear it."  In fact, it's more likely that I'll notice that a lesson has NOT been learned, when I have a déjà vu experience during a lessons learned session and hear something I've heard multiple times before.  I could point Management to those lessons that keep coming back.  I've done that informally.  I have not kept quantitative data, and I can't tell you how many times it's happened in the past year.  I could, however, do a more thorough job of documenting specific instances AND, perhaps even more importantly, of figuring out why it's happening again.

The answer to "why are we not learning this lesson" is never a simple one and it's usually not a single point failure and easy to fix problem.  Sometimes, as I've pointed out in the previous blog post, the root cause of the failure to learn is related to the ownership of lessons.  Making sure Management is aware of the repeated problems isn't the end of it.  In my experience, nothing I bring up to Management is completely new to their ears.  However, in the knowledge manager's role, I also facilitate dialogue between key stakeholders, including Management, through knowledge sharing workshops.  The topics selected for such workshops are typically based on recent themes emerging from lessons learned session.  And so we try to address the pain points as they emerge, but I'll confess that we don't quantify any of it.  Correction, we do the obvious of counting how many people attend the workshops.

There is a general quantification bias in many aspects of work and decision-making.  Everyone wants to make decisions based on evidence.  In most cases, evidence is taken to mean hard data, which is understood to be quantitative data (as opposed to soft, qualitative fluff), as if hard data were always correct and therefore much more useful and reliable than anything else.  The words "evidence" and "data" have become completely associated with quantitative measures.

When people say "where is your data?" they don't mean what are your two or three data points.  That's easy to dismiss, it's anecdotal.  The more data points you have (the bigger your dataset), the more accurate your conclusions must be.  Under certain conditions, perhaps, but certainly not if you're asking the wrong questions in the first place.

I recently came across Tricia Wang's TED Talk, "The Human Insights Missing from Big Data."


Given that Ms. Wang is a data ethnographer (very cool job!), her point of view isn't surprising, and given that I'm more of a qualitative methods person, the fact that I find it relevant and relate to it isn't surprising either.  That's just confirmation bias.  Ms. Wang brought up quantification bias, which I have often struggled against in my work.  It manifests itself in questions such as "how many hits do you get on the lessons learned database?" or "how many new lessons were generated this past year?"  These (proxy measures of learning) are the simpler questions that have (meaningless) quantitative answers.  Is having a meaningless quantitative answer better or worse than saying that something can't be measured?  I should never say "that can't be measured."  It would be better to say "I don't know how to measure that.  Do you?"

I wouldn't suggest we should all turn to qualitative methods and neglect big data.  We should, however, do a better job of combining qualitative and quantitative approaches.  This isn't news.  It's just one of those lessons we learned in graduate school and then forgot.  We learn and forget just so that we can relearn.

My own bias and expertise stand squarely with qualitative approaches.  It could simply be that, my first degree being in political science, I always have in the back of my mind that decision-making isn't simply a matter of having access to the information or data needed to make the right decision.  It's part of what makes us human and not machines.
