Monday, December 24, 2018

eLearning Materials Development - Testing a Course Preview

Lettering by Alexandra Fillip (on Instagram @alextrieslettering)
For the past 48 hours, I was absorbed in a little project.  I wanted to develop an interactive presentation using PowerPoint.  Instead of a set of slides that would advance automatically or move forward in a linear fashion with a click of the mouse, I wanted the viewer to explore the slides in a non-linear way by clicking on different elements of each slide.

In the end, I switched to Google Slides because it provides an easier way to embed the presentation in the LMS I will be using.  Once the presentation is embedded, any update or change I make in Google Slides is automatically reflected in the LMS.  Since I continuously improve content, automatic updating that avoids confusing multiple versions of a presentation is a great feature.  The same is true of the presentation posted below: it will update automatically if I make changes to it in Google Slides after posting this blog.
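For anyone curious about the mechanics: in Google Slides, File > Publish to the web > Embed generates an iframe snippet along the lines of the sketch below, which can be pasted into any LMS page that accepts HTML. The presentation ID here is a hypothetical placeholder, and the size and timing parameters are just typical defaults, so treat this as illustrative rather than exact.

```
<!-- Illustrative embed snippet of the kind Google Slides generates
     (File > Publish to the web > Embed). The presentation ID is a placeholder. -->
<iframe
  src="https://docs.google.com/presentation/d/e/EXAMPLE-PRESENTATION-ID/embed?start=false&loop=false&delayms=3000"
  width="960" height="569" frameborder="0" allowfullscreen="true">
</iframe>
```

Because the iframe points to the live published deck rather than an uploaded copy, edits made in Google Slides appear in the LMS without re-uploading anything.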

I also learned a few very useful tricks in just 24 hours of playing around. The resulting course preview (below) is not perfect; there are still a few things that don't work exactly the way I would like them to, especially with regard to the navigation.  By the time the course is launched, I suspect further improvements will have been made.

Knowledge Management in Project Environments - A Course Preview
 

A few reflections based on this rapid development:

1. Testing before full development 

Once the course concept has been finalized and it's time to develop the content that will actually be delivered, it's a good idea to test a small component.  In this case, I tested an interactive course preview to find out exactly what I could develop on my own with existing tools BEFORE building full presentations for the entire course.  I could have developed a full set of interactive PowerPoint presentations with various features only to discover that all interactivity would be lost once they were uploaded to the LMS, or that students would need to download them to view them as intended.

The test needs to cover the full development cycle, from developing the slides to uploading them to the LMS, to know exactly how students will access, view, and interact with the materials. Always test the student view in the LMS.

2. Adjust existing materials to target audience

While I already have 90% of the content in one form or another, the packaging for delivery purposes is not a matter of cut-and-paste. Each course I teach is very distinct in terms of audience, format and activities. Individual slides or graphics and pieces of text can be re-purposed, but adjustments are always needed. In the case of this course intended for project managers, I want to make sure that examples or explanations for key concepts are specifically relevant to the world of a project manager.

3. Make use of freely available tools 

I have been tempted to identify good eLearning development software and purchase it to ensure I have access to the most professional tools. However, in my situation, teaching on different platforms, it wouldn't help at all.

At UMUC I have to use the UMUC platform, at GMU I have to use Blackboard and I have found that I prefer to partner with institutions that have their own eLearning platform in place rather than be responsible for my own.  It also forces me to learn to use a variety of tools, which is a long-term advantage.  If I had to be completely on my own, I would probably use one tool, like Yammer, and keep it as simple as possible.

In the process of testing how I would upload this little interactive presentation to Blackboard, I also figured out what tools are available on Blackboard for video lectures (Kaltura).  Therefore, for this particular course, I have solved 90% of the technical issues with this testing phase.  I can now focus on developing my content based on what I know of the technical constraints and opportunities.  In fact, identifying the tools available gave me new ideas.

4. Creative work is fun and rewarding; therefore, I should plan on doing more of it!  (That's the beauty of being your own boss.)

There are things I will do in the evening even if I am tired, and on weekends, because they engage the brain in ways that energize me.  In that sense, creative work is not "work".  I already knew that 15 years ago when I launched my first course, but the tools available now to everyone allow for the development of much more engaging content and a great deal more creativity.

Fifteen years ago my course was the equivalent of an online textbook with tons of hyperlinks and a separate email discussion board to engage with the participants.  Now, with the tools available, all it takes is a willingness to learn how to use those tools very quickly.   Whatever problem I encountered in this rapid development effort, there was a blog or a YouTube video that helped me find a solution.  This was a straightforward example of learning by doing and rapid problem solving.

5. Wider Application of Lessons (so far)

Everything I learned in the past 48 hours will be applied to improve the other classes I teach. I have already added video lectures to the UMUC class, but looking back they can be further improved with each session I teach. Similarly, the online component of the GMU class (assuming I teach it again next year) will also benefit from improvements.  And now that I think of it, I could also dramatically improve my Skillshare class on Insight Mapping. Let's make it a goal for the second half of 2019.

That was fun!  I have a feeling 2019 is going to be exceptionally fun.

Wishing everyone a happy holiday season and exceptional New Year!

Friday, December 21, 2018

Knowledge Management for Project Managers - A New Course Coming Soon

Exciting news!  

I have just heard that my proposal for a new course to be offered through the George Mason University Executive & Professional Education program has been approved.



The course is meant to give Project and Program Managers (and those aspiring to these positions) the essential theoretical and practical knowledge needed to better leverage knowledge as a key resource within projects and programs. 

The first session of the course, to be offered in the Spring of 2019, will be 100% online through the George Mason University Blackboard eLearning platform.

I am also hoping to develop and deliver face-to-face versions of the course tailored to the needs of specific organizations.

Details, including registration information, dates and a course outline will be posted as soon as everything is confirmed.

To be kept informed of this or future sessions, please drop me an email at barbara@fillipconsulting.com.

Friday, December 14, 2018

What I have learned (so far) from teaching KM

Teaching is all about knowledge transfer.  Understanding the challenges of knowledge transfer from an organizational learning perspective is very helpful in creating parallels for teaching (and the other side of the coin, learning).  Knowledge Management and Organizational Learning could borrow a little more from learning theories and instructional design.  The Learning and Development (L&D) departments of most organizations could also learn from Knowledge Management and Organizational Learning.


In the fall of 2018, I focused my professional activities on teaching, but since I was teaching Knowledge Management, a lot of merging happened.  That merging of ideas is perhaps something akin to Nonaka and Takeuchi's Combination stage of knowledge creation.  Now I can't help citing some of my classes' mandatory readings, they're so present in my mind.

I taught two sessions of a Knowledge Management class for undergraduate students completely online, and a semester-long graduate-level class on Knowledge Management Strategy face-to-face in a Friday evening/Saturday all-day format that accommodates working students. The two courses are taught completely differently.  There is some overlap in terms of content, but the target audiences are very different.

In both courses, I've enjoyed the part that actually involves TEACHING through my interactions with the students.  With the graduate level class I have a lot more control over the methods and content and much more flexibility to adjust anything I need the next time I teach it (assuming I am asked to come back to teach it next year).  With the undergraduate level class, I am adjusting to a rather rigid format and content I have limited control over.

Regardless of the format and challenges presented by each class, I found myself often wondering how much learning was actually going on.  Obviously I was teaching but that's just one side of the equation.  It's like a conversation.  I could be talking while no one is truly listening. 

In both classes, there were disappointments and challenges.  There were things I wasn't fully prepared for, such as the 4.0 student who isn't quite happy with anything other than 100% on every assignment, the general reluctance to read, the overzealous reliance on Google to find answers to everything and the nagging feeling that there is some cheating going on (in the online class).  I was troubled by all of this but I was also inspired and in awe of some of the learning that I witnessed.  There were a few times, both online and face-to-face when I reviewed an assignment and I genuinely thought I could not have done it better if I had tried. 

The online learning environment can be particularly challenging because it requires much more self-discipline on the part of students.  I have seen good students (those who were really good before stepping into my virtual classroom) take advantage of what was being offered and learn a lot.  I have also seen weaker students make strong efforts to take advantage of my advice and support throughout the session and improve tremendously.  Somewhere in the middle, 50% of the class just wants to get to the finish line and will do the minimum required.

The face-to-face environment is challenging in a different way.  The students are working adults.  On Friday evenings, they are not in the best state to absorb four hours of teaching or engage in deep learning.  They want to get through the evening.  When they come on a Saturday they've had another class the night before.  Again, these are not ideal learning conditions.  Most of the work they have to do in-between classes involves a group consulting project which is also quite stressful (if they take it seriously).  Those who take it seriously learn the most, but there is a toll to pay.

While I have learned a great deal this Fall semester in terms of my own teaching and there are lots of little things I can change to improve the classes and their delivery, the most important things I have learned probably relate to my increased understanding of the students themselves, their strengths, weaknesses, motivations, attitudes, and how to react (and not overreact) when things don't go exactly as planned.

I look forward to more teaching and developing new classes.


Wednesday, September 05, 2018

Evidence of Learning

As I have taken on more teaching assignments this fall, I find myself asking new questions.  How do I know that the students are learning?  Am I transmitting something?  Am I transmitting something of value?

Since I am talking about traditional post-secondary education rather than continuous professional development, the obvious method for determining whether the students are learning comes in the form of assessments (papers, quizzes, participation in class discussions, etc...).

Since I am relatively new to this, I recognize that my insights are those of a newbie.

In a class of 25 students, it is easy to see with the initial assignments that they come into the class with a wide range of existing capabilities and prior knowledge.  I didn't teach much of anything to the student who writes the perfect answer in the first week of class.  However, I can "see" the learning when they start the class struggling with the concepts being presented, but by week 4 of an 8-week class they are getting more comfortable, and by week 8 their final paper demonstrates a significant change in the way they are thinking about the topic. 

In a class of 25 students, at least a third are there to check the box and graduate as soon as possible. They will do the minimum possible, and since that has probably been their strategy for a while, they are cruising without learning much of anything. This manifests itself in answers to discussion prompts that repeat something from the assigned readings and make no effort to connect to the students' own experience. Do I give up on that third of the class?  No.  I make them work for it. I try to ask them simple questions that would help them connect the concepts being discussed to their daily realities. I've also learned that it's dangerous to make quick judgments and assumptions about any student's particular approach to learning, their motivation for being in the class, etc...  The reality is that I know very little about them, especially in online classes. 

I have really enjoyed teaching (grading not so much!) and I am most proud of the students who have told me I made them think very hard.  So, that's it.  If I made them think, they built up their thinking muscles and they learned. In online classes, I have to follow strict rubrics for grading.  Those are useful at the beginning, to establish clear standards for the students to follow, but these rubrics also hamper real conversations and learning from each other.  I'm happy when I see some independent thinking, regardless of whether the readings are cited properly or not.

The students are learning and I'm learning.  What else could I possibly want?

Friday, August 10, 2018

Job Crafting & Stretch Assignments - How to continue to learn

About a year ago, I started on a new professional path, committed to launching a new stage of my career, leaving behind the relative security of a full-time federal government contractor job for the freedom and uncertainty of consulting and teaching.  It was meant primarily as a learning journey. 

Growing and learning require stretching.  That stretching can get quite uncomfortable.  Every single thing I did over the past year was a stretch and therefore the level of discomfort was at times very palpable.  It helped (a little) that I knew this was likely to happen and I was somewhat prepared for it. 

Here are some things I've learned in the process:

  • While I often talked about this past year as a "Year of Learning", it was a form of learning that was meant to help me craft a path forward, not just learning for the sake of learning.    Specifically, I learned that while my initial expectations were to do 80% consulting and 20% teaching, the ratio was perhaps not the right one.  A 50/50 split might work out better. 
  • Stretching should not be confused with pushing oneself to the edge of burnout. In fact, overloading on work and getting overwhelmed only guarantees a decrease in "learning."  If it is impossible to set aside time to reflect on the job/assignment, then I can almost guarantee a lower quality output and no learning (other than not to do that again).
  • As I move forward and leave the idea of the "year of learning" behind, I will focus on the concept of job crafting for myself, which is easy as a self-employed consultant/part-time faculty.  I am writing my job description with a blank canvas.
  • I also want to continue integrating and combining my areas of expertise, looking for deeper insights as well as innovative solutions to stubborn challenges. 

Saturday, June 09, 2018

Action Learning and On-the-job Learning - a Follow up

This is perhaps a demonstration of how we learn just by being exposed to a diversity of conversations and how a simple blog and some prompting by others online can generate unplanned learning.  A few weeks ago I posted some questions reflecting my confusion around action learning, on-the-job learning, action research, experiential learning and similar terms.

My understanding of action learning was very fuzzy at the time and it has now evolved to the point where I see it as a specific group learning technique with a narrow range of applications in the same sense that After-Action-Reviews are a specific group learning/reflection technique.  It's a process with a specific set of rules.  It needs to be facilitated by an action learning coach, and it is meant to help solve a specific problem which first needs to be identified carefully so that it can in fact be addressed through this action learning process.

I'm both satisfied that I have a better understanding of the process and somewhat disappointed.  I wanted it to be more than that.  With a name like 'action learning', I expected more.

Is it on-the-job learning?  There is a learning component to it.  Using action learning as a process is a way of learning group problem solving.  It's probably a useful mechanism to improve critical thinking skills and team dynamics.  It's entirely about a work-related problem and therefore it's "on-the-job." However, it's not really what I would call "learning by doing."

Saturday, May 26, 2018

Prototyping through Conversations

I'm in Week 7 of a Working Out Loud Circle (my first).  While I had some difficulty connecting this week's exercises to my goal, and I've almost lost track of what my goal was in the first place, there are always interesting insights that come out of the conversations with my circle buddy. 

I have found the additional resources provided in each of the weekly guides to be a great source of useful insights, even when I'm not sure the rest of the activities did anything for me. 

Here's an example: this week was about thinking about a long-term vision of oneself. I did a lot of work on that a year ago when I was transitioning from full-time work to consulting.  My vision is still the same and I'm on track. In a sense, this entire year has been an experiment, prototyping a range of different activities.

One of the additional resources for week 7 is a blog post by John Stepper titled "The simplest and easiest form of prototyping is a conversation."  I experienced this earlier this week.  After a significant number of individual interviews to collect data on on-the-job learning for a client, I was finally starting to see where the work was going, so I did a hand-written sketch of the framework that was emerging, and during the last two interviews of the week, I brought my "prototype" framework along to test some of the ideas.  These conversations were some of the most satisfying I have had so far.  Listening, asking questions and absorbing information in an interview is great, but I find the conversations where I start to validate my own emerging understanding to be the most satisfying.

Saturday, May 19, 2018

Action Learning, Action Research & Experiential Learning - Lots of questions & no answers

I am currently doing work around on-the-job learning.  Trying to define what constitutes "on-the-job learning" has turned into an interesting exercise.  On the face of it, anything that you might learn while doing your job is on-the-job learning. You might also define it by articulating what it's not.  Anything that requires you to leave your work to go attend some "training" may involve learning but it's not "on-the-job" even if it's sponsored by your workplace.

Some on-the-job learning is planned and intentional (mentoring, coaching, stretch assignments, etc...) but most of it happens in the flow of work and may be unconscious. In the past few weeks I've come across many variations without coming any closer (yet) to a definition.  How does on-the-job training (as opposed to on-the-job learning) fit in?  Is it different?  It would seem that something like an apprenticeship would be closer to on-the-job training.

What about action learning? Is that a form of on-the-job learning?

According to the World Institute for Action Learning (WIAL), action learning can be defined as "a process that involves a small group working on real problems, taking action, and learning as individuals, as a team, and as an organization. It helps organizations develop creative, flexible and successful strategies to pressing problems."

The main difference between what is traditionally seen as on-the-job learning and action learning may be the team dimension of action learning. The same can be said of many knowledge management practices, including After-Action-Reviews.  They focus on team or group-level learning.

What about action research?  Is that the same as action learning?  Is that a form of on-the-job learning?  Does the word "research" make it sound more scientific and rigorous?

This is where we need to define "on-the-job."  If "on-the-job" means in the process of one's daily job, then action learning and action research might not fit the bill.  While they are meant to address real, practical problems, they appear to be separate from the normal workflow.  But then, the same could be said of mentoring and coaching.

How about experiential learning?  That might get things even more mixed up because people delivering training might argue that they make it "experiential."

I'm quite lost at this point.  What if "on-the-job learning" is a useless label?

On May 25th, the Knowledge Management Community of DC is hosting Dr. Bea Carson for a session on Action Learning.  Dr. Carson is an author, speaker and expert in the field of Action Learning.

I look forward to it since I am personally quite confused about some of what I perceive as artificial boundaries between disciplines and overlapping terminology. 


Thursday, March 29, 2018

How to Raise an Iceberg

I've been thinking about learning. Nothing new here.  Usually my thought process goes to individual learning, group learning, organizational learning... how do we make them reinforce each other?  Today, I'm looking at it with a different lens.

I've often argued that knowledge management needs to be better embedded in the flow of work, or at least in existing processes. For example, organizations that have strong risk management processes could enhance their risk management with knowledge management, couple the two processes, and avoid duplicating efforts.  When we try to put new processes on top of already complex work routines, we unnecessarily decouple knowledge processes from work.

The same can be said about on-the-job learning more generally.  The goal should be to enhance learning in the flow of work.  To a large extent, it may already be happening without being recognized.

Here is an extension of the knowledge management analogy using the famous knowledge iceberg.  The knowledge iceberg is typically used to illustrate that only a small portion of our knowledge (our explicit knowledge) is visible above the surface of the ocean, while most of our knowledge (tacit knowledge) is buried in our heads and difficult if not impossible to share.  My little hand-drawn version of the iceberg below suggests a similar challenge with learning.  Formal learning, above the water line, is visible, measurable, and can be managed by organizations.  Informal learning is much less visible and not well understood, yet it may contribute much more to overall learning within the organization.

Formal learning tends to be managed to support work and it can be aligned with the organization's strategic objectives.  Informal learning happens in the flow of work.  The challenge is 1) to better understand how informal learning can benefit the organization and support formal learning objectives; and 2) to raise the iceberg so that informal learning gets more visibility and the attention it deserves.

I've put Communities of Practice at the ocean's surface because I see them as a way of trying to encourage and shape informal learning around topics relevant to the organization.




Friday, March 23, 2018

We Still Need Lessons Learned - (Part 3 of 3)

In the previous two posts, I've argued that 1) KM practitioners need to be prepared to sell lessons learned practices by looking at them differently and debunking myths about the futility of lessons learned databases, and 2) the value of lessons learned is in the transformation of tacit knowledge into explicit knowledge.  In this third and last part of the series, I will talk about the value of aggregation.

Part 3: The value of lessons learned activities is not in the individual lessons themselves sitting in a database but in the aggregation of lessons. 


As I reviewed existing lessons and then spent almost 10 years helping to identify "new" lessons, two interesting insights emerged:  1) Many so-called lessons were deemed unworthy of being pushed higher up for further dissemination; 2) Many interesting new lessons had to do with the need to change and adapt to new conditions.

Dealing with the little lessons
Many lessons did not necessarily rise to the level of "real" lessons that deserved to be pushed to a database. Many of these lessons were good practices that projects needed to be reminded of.  It's easy to get caught in a discussion about what's a real or worthy lesson (almost as easy as getting caught in a discussion of the definition of knowledge management).  The mistake is to dismiss small lessons as not worthy.  The "little lessons" or "motherhood and apple pie" lessons may not be sexy and they may be a reminder that we often seem not to learn, but they are important in other ways.  Perhaps they're not lessons at all, but they tell you something about underlying problems that need to be addressed at the institutional level and not through a lessons learned database. They take on a different meaning when you look at them in aggregate. You can review all of them, identify recurring themes, then develop mini-case studies to incorporate into informal workshops as well as more formal training.  In addition, management could be encouraged to use these mini-case studies as story vignettes in their communications.

Another way to look at these less-than-stellar lessons is to say that they are "local" lessons.  They are valid and valuable as the project's lessons but they don't need to be shared or go any further.  That can also become a problem.  What if the same little local lessons are being learned again and again across all the projects?  Unless someone is paying attention to these little lessons across the projects, they will remain localized, soon forgotten and invisible, and will never come to the attention of management... until something goes terribly wrong, a thorough investigation is launched, and the findings are that the underlying issue was known to everyone and prevalent across the organization, but it didn't get anyone's attention because no red flags were attached to it.  Make the recurring little lessons more visible. Do something about them.

Part of the role of the KM manager is to keep an eye on the recurring "motherhood and apple pie" lessons.  The idea is that these lessons are obvious, things that everyone should know and can't possibly disagree with.  Anything that sounds like motherhood and apple pie is dismissed because it is just common sense or it's a good practice.  It's the opposite of the powerful insight, the aha! moment.  It's not sexy at all.  It may even be embarrassing.  How could they not have known this before?  That's common sense.  It's just good project management practice.  That lesson is in the database already, in a dozen variations.  I would say, keep an eye on what everyone says is obvious yet keeps coming up as a challenge or a lesson. This is a little counter-intuitive. It's not what keeps management up at night. It could be pervasive, yet under the radar. No red flags.
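To make the aggregation idea concrete, here is a minimal sketch of how recurring themes could be surfaced from a pile of local lessons. This is purely illustrative (it is not a system I used at NASA); it assumes each lesson has already been tagged with themes, and the data, tags and threshold are made-up placeholders.

```python
from collections import Counter

# Hypothetical local lessons, each tagged with one or more themes.
# In practice, the tags would come from a review of the lessons themselves.
lessons = [
    {"project": "A", "themes": ["communication", "schedule pressure"]},
    {"project": "B", "themes": ["communication"]},
    {"project": "C", "themes": ["vendor oversight", "communication"]},
    {"project": "D", "themes": ["schedule pressure"]},
]

# Count how often each theme recurs across projects.
theme_counts = Counter(theme for lesson in lessons for theme in lesson["themes"])

# Flag the "motherhood and apple pie" themes: individually unremarkable,
# but pervasive once you look at them in aggregate.
THRESHOLD = 2  # arbitrary cutoff for this sketch
for theme, count in theme_counts.most_common():
    if count >= THRESHOLD:
        print(f"'{theme}' recurs in {count} projects -> candidate for a mini-case study")
```

The point is not the code but the shift in the unit of analysis: lessons that are individually dismissible become visible signals once they are counted across projects.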

In addition, just because something is obvious to a senior project manager and he/she doesn't see the value of putting it in the lessons learned database doesn't mean it's obvious to a junior member of the team who hasn't yet learned it first hand.

It is dangerous to think that it takes a major disaster like a Challenger or Columbia to create case studies worthy of widespread dissemination and incorporation into courses and workshops.  People need to learn from the "little lessons" before they become big problems.  The underlying issues that led to or contributed to the Challenger and Columbia accidents were things that manifested themselves in small ways every day and probably almost everywhere in the organization.  You can wait until they become a BIG lesson or you can catch them when they're much less visible.  It's not the stuff of academic papers and conference presentations (though I might try that route too), it's not catchy, but it matters.

As a side note, I've often asked myself if our KM activities could actually prevent another big failure.  How could we ever know what failures we've prevented?  How could KM be held responsible for failing to prevent a failure big or small?  Obviously, KM isn't the only mechanism organizations leverage to prevent failures, especially in high-reliability organizations like NASA.  Safety and Mission Assurance as well as Risk Management also play key roles... which is also why they should be working in close coordination with KM.

Learning to Adapt to Change and Unlearning
Many of what I would consider the more interesting lessons had to do with change and adaptation to change.  They may have been unique lessons and definitely new lessons, but they fell into this bigger bucket of learning to adapt to change.  In my last presentation within NASA/Goddard, a key point I made about what I had learned during my time there was that we are (and this isn't specific to NASA) a little slow to learn the lessons related to the need for change.  We have difficulty unlearning; we have trouble accepting that what had worked well in the past may not work well now.  Unlearning is key to adapting to change and continuous learning.  The theme of "unlearning" at NASA is addressed in Shared Voyage: Learning and Unlearning from Remarkable Projects.  This obviously creates another challenge for static lessons learned databases.  Ideally, lessons would be constantly flowing throughout the organization, aggregated and regularly analyzed for trends, reviewed and re-assessed through constant conversations.

Reference
  • Laufer, A., Post, T. & Hoffman, E. (2005). Shared Voyage: Learning and Unlearning from Remarkable Projects. NASA History Series. NASA SP-2005-4111.

Wednesday, March 21, 2018

We Still Need Lessons Learned (Part 2 of 3)

Part 2: The value of a lessons learned activity is in the transformation of tacit knowledge into explicit knowledge.

The value of such lessons learned activities is increased when they are the result of a group reflection process (as opposed to an accumulation of lessons gathered from individuals). The numbers in the visual (in the previous post about the benefits of Pausing to Learn) are made up, meant to convey a sense of the comparative magnitude of the benefits of documenting lessons.  A great deal of learning happens in the process of writing lessons down, even if no one else reads them. The knowledge is now out of people's heads and transformed into explicit knowledge.

I have described this in an APQC presentation (Rogers & Fillip, 2017). Learning happens in the initial conversation that helps to articulate the context and the lessons themselves.  Learning happens through knowledge synthesis as the conversation is documented (whether as a narrative, a conversation map or a video interview).  Learning happens in the review and validation process, and finally, learning happens when others read or are exposed to the lesson.  In other words, learning happens in conversations, but these are not random conversations.  These are intentional learning conversations.

Even if no one ever reads those lessons, at least no one will be able to say that the employees retired or left and walked out the door with all their (tacit) knowledge. And yes, I know that most of it is still in their heads and can't be captured, yet a good amount of learning or knowledge transfer can happen through conversations.

The real problem is that we often don't have very good mechanisms for utilizing that explicit knowledge, and we draw the wrong conclusions, which is the equivalent of failing to learn from a failure... or learning the wrong lesson from a failure. An example would be drawing the conclusion that people aren't using the lessons learned system because they haven't been trained to use it.  Seriously?  Well... let's not assume anything, but that wouldn't be my best guess as to why no one is using the database.  

There could be better utilization of lessons learned through other project processes such as reviews and risk management (Fillip, 2015).  In fact, many repeated "lessons" are the result of institutional challenges that cannot be addressed at the project level and could be tackled through organizational risk management. Expecting all the benefits of lessons learned processes to come from individuals going to the database to look for lessons is just wrongheaded. It's standard build-it-and-they-will-come wrongheadedness. 

Reference
  • Rogers, E. & Fillip, B. (2017). Mapping Lessons Learned to Improve Contextual Mapping at NASA. APQC Webinar, August 18, 2017.

Monday, March 19, 2018

We Still Need Lessons Learned, and We Need to Look at Them Differently (Part 1 of 3)

In this first part of the series, I will talk about some of the challenges of leading a lessons learned practice and how I tried to address them, in other words, how I learned to sell lessons learned. 

Part 1: Selling Lessons Learned as a KM Practice

KM is a sales job.  When I was working at NASA/Goddard helping project teams document their lessons, I was at times confronted with resistance in the form of remarks such as "no one uses the lessons learned databases, why should we bother contributing lessons to the system?"  This had been exacerbated by successive reports criticizing the Agency-level lessons database called the LLIS.  Regardless of the merits of the critique of that system (which isn't the point of this post series), I still had to do my job.

Since I needed to convince project teams to meet with me and document lessons, I developed an approach to overcome this resistance.  It was based on a simple diagram (below) that allowed me to address the issue upfront and redirect the conversation: "you're right, the lessons are not being used the way they were expected to be used in the databases, but that's not the right way to look at lessons."

I would use this diagram as a handout during planning meetings with the project leaders to convince them to hold a Pause and Learn (PaL) session (a group reflection activity), and I would use it again (hand drawn on a flip chart) at the very beginning of the PaL session to explain why we were all there and what the benefits were.  For a valuable conversation to happen, I needed both the buy-in from the project management team and the buy-in from the rest of the team.  Sometimes the project management team does a good job of bringing the team to the table with the right attitude. Sometimes it's the job of the facilitator to quickly elicit that attitude at the very front-end of the meeting.  There is nothing worse than a senior scientist showing up for a PaL session determined to wreck it with sarcasm about how useless these meetings seem to be since we keep repeating the same mistakes (in their opinion), or a team showing up because they've been told to be there when they really think they should be back in their offices working on real, immediate and tangible problems.
Source:  Rogers, E. & Fillip, B. Mapping Lessons Learned to Improve Contextual Mapping at NASA. APQC Webinar, August 18, 2017.
Once a team had experienced a successful Pause and Learn session, it was typically much easier to do it again at a later stage in the project life cycle. If you've made a sale and you've delivered on the expectations, you can expect repeat business. KM may be everyone's job, but from the knowledge management professional's perspective, there is a significant amount of "selling" involved.


Sunday, March 11, 2018

Women in Knowledge Management (or any male-dominated field)


Stan Garfield posted a list of KM Thought Leaders.  I don't think the list is new, but it circulated recently on LinkedIn, which is where I saw it. The great majority of the thought leaders on that list are men.  I am re-posting here the names of the women who were on that list.  The idea is 1) to give the women more visibility with a separate list; and 2) to point out that there are perhaps many more women thought leaders in KM who need to be added to this list.

Ironically, posting the list on this blog will do little to increase visibility, but that's only a first step.

For those interested, there is a Women in KM LinkedIn Group.  It's "unlisted".  I assume there is a reason for keeping it unlisted but I don't know what it is. I shall inquire. I've been a member of the group and never contributed anything (my bad).

There was also a recent article in RealKM by Bruce Boyes about the issue of gender equity in KM (thank you to Boris Jaeger for pointing me in that direction).  The article seemed to point to the fact that the field of KM isn't immune to broader societal inequalities.  There is nothing surprising about that, and I can't disagree, having experienced some gender inequality frustrations of my own.  As a woman, I have a good sense of how it has affected my career.  I can't say I have good answers or solutions other than to become more vocal about it AND take responsibility for some of it as well.

While I happen to be more sensitive to gender bias, there are probably other biases embedded in that list.  How many are not US-based for example?


  • V. Mary Abraham
  • Patti Anklam
  • Stephanie Barnes
  • Irma Becerra-Fernandez
  • Madelyn Blair
  • Helen Blunden
  • Gloria Burke
  • Daniele Chauvel
  • Kimiz Dalkir
  • Vanessa DiMauro
  • Nancy Dixon
  • Lilia Efimova
  • Sue Feldman
  • Carol Kinsey-Goman
  • Susan Hanley
  • Rachel Happe
  • Heather Hedden
  • June Holley
  • Dorothy Leonard
  • Charlene Li
  • Alice MacGillivray
  • Carla O'Dell
  • Jean O'Grady
  • Nirmala Palaniappan
  • Kate Pugh
  • Celine Schillinger
  • Catherine Shinners
  • Carla Verwijs
  • Nancy White
  • Nilmini Wickramasinghe

Gone but not forgotten
  • Debra Amidon
  • Melissie Rumizen

Moved on to other focus areas or retired
  • Verna Allee
  • Kaye Vivian


3/16/2018 - Correction: Adding women I had omitted from the original list.

  • Alex Bennet
  • Jo Ann Girard
  • Joitske Hulsebosch
  • Bronwyn Stuckey
  • Beverly Wenger-Trayner


Do I want to be on that list?  
Am I upset because I'm not on the list?  Yes and no.  A year ago I could not have cared less, but now, as an independent consultant with my own business to worry about, yes, I do need to worry about not being on that list, and I may need to figure out what it takes to get that recognition and visibility.  Perhaps it's not that specific list I need to worry about but more what it represents (free advertising).  Do I think I should be on that list right now? I am not able to answer with a resounding "YES", whereas most men would not hesitate.  THAT is the problem!

What am I going to do about it?
  • Learn more about women thought leaders in KM: I don't know more than 1/3 of the women on that list.  I probably should.  I therefore commit to learning more about their thought leadership.
  • Identify other women leaders in KM: Reach out to the women on the list and women in the "Women in KM" LinkedIn group to expand the list.
  • Become more active in the "Women in KM" LinkedIn group
  • Increase my networking with women in KM in my local area

References
Boyes, B. (2018). Improving gender equality in knowledge management: Is there a gender imbalance in KM, and if there is, what should we do about it? RealKM.  URL: http://realkm.com/2018/03/08/improving-gender-equality-in-knowledge-management/ 

Garfield, S. KM Thought Leaders. URL: https://sites.google.com/site/stangarfield/kmthoughtleaders





Monday, March 05, 2018

Collaborating and Learning Across Organizations in the Context of Joint Projects

Two speech bubbles - Co-learning


Learning in the context of collaborative efforts and/or complex organizational structures can be a challenge. Most of the literature, including what has been written by KM practitioners, focuses on the organization as the main unit of analysis. On the other hand, the Learning and Development (L&D) practitioners are primarily focused on individuals and sometimes teams. There is also a significant amount of work written about learning across teams, which addresses transferring knowledge from one team or one project to another. In fact, the theme of knowledge transfer comes up regularly. The focus below will be different.

Increasingly, individuals within organizations find themselves working in teams, projects or work groups made up of staff who belong not just to different divisions within the organization but to different organizations altogether. As collaborations, partnerships and contracting arrangements evolve to meet the needs of a constantly changing environment, individuals must learn to work effectively within these teams, which includes learning to learn collaboratively. This type of joint learning or co-learning is relevant in the two fields I am most familiar with, international development and aerospace. In both fields, the Government entities (USAID & NASA) do not deliver their mandates on their own. They work with and through a myriad of partnerships, contractual arrangements, and formal and informal collaboration agreements, each of which comes with a specific set of constraints and opportunities. For each of these ways of working together, co-learning opportunities need to be identified and their potential value assessed upfront. Co-learning isn’t for everyone and every situation.

Co-learning across multiple organizations can be challenging. It’s one thing to conduct a lessons learned conversation within a team whose members all belong to the same organization. It’s another to conduct the same lessons learned conversation when members of the team belong to different organizations. The likelihood that walls (defensive mechanisms) will come up is increased.

For example:

  • The organizational cultures may be different. While one organization may be accustomed to open and honest conversations about what isn’t working, others may shy away from such transparency. 
  • Organizational processes may be different. One organization may have a very rigorous and structured process for documenting lessons while another doesn’t have a process at all. This may lead to many unspoken and unwritten assumptions. 
As with all aspects of collaboration (not just the joint learning aspect discussed here), one of the drivers of success is a thorough discussion of all assumptions about not just what will be done, but how it will be done. For example, if two organizations are going to implement a project together with specific roles and responsibilities clearly assigned and a component of that project is to discuss and document lessons to make regular adjustments in the project’s path forward, it will be important to address all relevant assumptions upfront and clarify as much as possible, including the following:
  • Clarify the overall goals and objectives of the joint learning activities. There should be a clear understanding of why the learning activities are being undertaken jointly, as opposed to each organization involved in a collaboration conducting its own independent lessons learned. This can happen even within the same organization and is typically a sign of dysfunction. There are many circumstances where an internal lessons learned session is also needed, which might focus on questions such as “What did we (as organization X) learn from our collaboration with organization Y?” Joint lessons learned sessions should never be used to try to place blame on any member of the collaboration or to tell other members of the collaboration what they should have done differently. If a serious problem occurred (a partner isn’t performing as expected, for example), there are avenues to address these problems other than a lessons learned session. The joint learning session helps identify what every party involved could have done differently from the start, not who is to blame for failures.
  • Clarify who will lead the joint learning effort. Perhaps not just which of the two organizations, but which staff position. This may be important in providing clues regarding how the activity is perceived by both organizations and whether there is a disconnect that needs to be addressed. Does the position/person have the right level of seniority, appropriate level of expertise, etc.? If it is important for all parties to be represented fairly and equally in the discussion, it may be a good idea to get the support of a third party facilitator to ensure neutrality in the conduct of the session.
  • Clarify the schedule of learning activities. Avoid using ambiguous wording. Phrasing such as “Lessons learned will be documented on a regular basis” can be interpreted in many different ways and could lead to disconnects between the collaborating partners.
  • Stipulate the methods to be used: A wide-open lessons learned conversation with 20-30 participants is very different from a series of one-on-one interviews, yet both could conceivably be used to elicit and document project lessons. In fact, a broad discussion of methods and their use in the specific context of the project would be valuable. Don’t assume that the way you’ve been doing it on other projects is necessarily the right approach in this context.
  • Be clear about outputs and utilization: Discuss how lessons identified will be utilized to make adjustments to the path forward in terms of work plans and specific activities. For example, my preferred format for documenting lessons learned discussions is an insight map. However, I would never assume that it’s always the best format or that it works for everyone. Beyond concerns about formats which can be trivial, it’s important to talk upfront about whether the session is meant to identify specific recommendations and actions or not. In some cases, the parties to the discussion have a wide open conversation, go back to their respective offices and independently decide on specific actions they will take to address issues raised during the lessons learned meeting and perhaps report on actions taken at a later date. In other cases, a second joint meeting may be held to focus on specific recommendations and actions. In these different variations of the process, it’s also important to discuss the issue of decision making and validation. Who will have the final say in terms of the final version of the lessons and/or recommendations? What kind of follow up or accountability process is in place to ensure that lessons are indeed embedded into future activities (within the ongoing project, not just in a potential future collaboration)?
  • Be ready for disconnects: Since it is not possible to anticipate every inaccurate assumption or disconnect that might derail the collaboration, there should be a clear mechanism for quickly addressing disconnects when they arise. 
And a final point: don’t assume that prior collaboration experience between two organizations means you can skip all these steps about clarifying assumptions. For all you know, one of the two organizations may have held an internal lessons learned process at the end of the last collaboration and decided that it didn’t work well for them and that they would recommend a completely different approach. Or you might just be dealing with a very different team and different leadership, making it a completely different collaborative environment. It never hurts to check that you are all on the same page.

Wednesday, February 28, 2018

Teaching Knowledge Management

I've immersed myself in teaching with two classes. Both are focused on Knowledge Management, but they are as different as one could imagine. I am just finishing my first session online at the University of Maryland University College (UMUC) and I am preparing for George Mason University's face-to-face class in the fall of 2018. Very different classes with (for now) the same foundational textbook, which has just been updated. Next, I'd really like to provide some KM training within an organization.  That would provide a third, completely different way of teaching the same subject.
Component-by-component comparison (UMUC = University of Maryland University College; GMU = George Mason University, Schar School of Public Policy):

Format
  • UMUC: 100% online; 100% asynchronous (though some chats are possible); 8 weeks long.
  • GMU: Face-to-face over four months; some Fridays 5-10pm and some Saturdays 9am-5pm; a total of 7 face-to-face sessions; small online component.

Level & Discipline
  • UMUC: Required class for the Undergraduate Business Management major, elective for other majors.
  • GMU: Specialized Master's Degree in Organizational Development/Knowledge Management.

Curriculum Design
  • UMUC: Instructor follows a set syllabus with pre-selected readings and assignments. Multiple instructors teach the class and the common syllabus helps maintain consistency. The greatest degree of freedom is in designing the Learning Activities for weekly discussions.
  • GMU: Instructor is fully responsible for curriculum design, with some coordination with other classes to create synergies and avoid overlap.

Student Audience
  • UMUC: Working adults, many in the military, including many first-generation higher education seekers. Students have never met each other and probably never will.
  • GMU: Working adults. The student cohort goes through the program as one group; they know each other well.

Learning Objectives
  • UMUC: An introduction to KM as it applies to business. Why is it important? How does it manifest itself in organizations?
  • GMU: A thorough understanding of KM concepts and approaches, and building the skills needed to implement KM initiatives in organizations.

Making It Stick
  • UMUC: Apply key KM concepts to yourself, your work, your studies; make the learning activities very scenario-based to encourage critical thinking.
  • GMU: Highly experiential; practice KM skills in the classroom and work with real organizations. Lots of small group work.

Readings & Other Materials
  • UMUC: Core textbook (Dalkir), supplemented by less abstract materials, including videos.
  • GMU: Core textbook (Dalkir), Milton/Lambe's Knowledge Manager's Handbook, and lots of case studies, including an in-depth presentation and discussion of KM at NASA.

Bonus
  • UMUC: Some technology enhancements for added "engagement": animated presentations using PowToon for the instructor introduction, class overview and/or difficult concepts.
  • GMU: Possible guest speaker(s) to leverage the wealth of expertise in the DC area.


Saturday, February 24, 2018

Conversations among KM Practitioners

The soon-to-be-renamed Knowledge Management Association - DC Chapter held its monthly face-to-face meeting yesterday.  John Hovell facilitated a knowledge cafe, starting us off with a short informal presentation on what he sees as a set of trends leading to the convergence of Knowledge Management, Organization Development, Diversity and Inclusion, and Systems Thinking (I'm not sure I'm remembering this last one correctly, but Systems Thinking was discussed in the mix).

In my mind, it's not so much that any convergence is really happening across disciplines, it's just that Knowledge Management has always been interdisciplinary and therefore crosses many other disciplines. Someone interested in Artificial Intelligence is probably observing a convergence between KM and AI.  We all approach KM from our personal framework, disciplinary background and professional experience. 

At the center of this convergence framework, John put the concept of conversational leadership.

As a tangent, I was also scanning through Kimiz Dalkir's Knowledge Management in Theory and Practice (3rd Edition) and I was struck by the number of KM models and frameworks presented. Clearly, there is no consensus on how to look at KM simply because it is so interdisciplinary. I also like the way Chun Wei Choo looks at it in  The Knowing Organization: How Organizations Use Information to Construct Meaning, Create Knowledge and Make Decisions. Choo approaches KM from an information science perspective without making it about IT.

I hope we can continue the conversation within the Knowledge Management Community and I would recommend that we focus on the concept of conversational leadership to explore it further. It's not new. It deserves re-visiting.

Conversations don't solve everything but they embody both the simplicity and complexity of human interactions. What is it about conversations that is so powerful? From a practical standpoint, how can we, as KM professionals, introduce "effective" conversations in the workplace? When we say "effective conversation" what does that mean? How do conversations relate to another KM tool, storytelling? Is that part of a bigger theme of narratives? I could keep going with questions and how things relate to each other.


How do conversations facilitate the transformation of individual knowledge into team and organizational knowledge? How can I integrate conversational leadership skills as a learning objective in the classes I teach (both face-to-face and online)? If your job focuses on employee engagement, then you'd be asking how conversations support employee engagement.

Friday, January 19, 2018

Knowledge Management & Organizational Learning for Startups

To be successful, startups need to know how to fail early and fail fast, work with prototypes, engage in a lot of testing and a rapid iterative cycle of planning, doing, testing, assessing/learning and planning again.  They need to be agile.  They need to make decisions very quickly about a myriad of issues.

In that sense, they can benefit from knowledge management if they adopt practices that support improved decision making based on rapid organizational learning. You don't conduct After-Action-Reviews or their equivalent every year; you incorporate an element of "pausing to learn" in every weekly core staff meeting, and you make it a practice that gets disseminated as the organization expands so that it becomes part of the organizational culture.

In addition, startups are, by definition, small and immature organizations.  They may lack organizational structures and governance because, as long as they are small, they can get away with having very little formal structure.  That may be exactly what is needed at that early stage of organizational development, but if they look ahead and think in terms of increasing maturity, they will want to pay attention to the inevitable emergence of organizational silos and preemptively set up information and knowledge structures and governance to ensure continuous knowledge sharing across the organization as it grows and evolves. Creating teams and giving them autonomy to go do what they do best is great, but they will tend to go off and reinvent the wheel if they are allowed to, resulting in a multiplicity of tools, duplication of resources, a proliferation of intranets and overlapping discussion forums, none of which will be integrated within a global vision of where the organization's knowledge resides and how to leverage it for the organization's benefit.

For something more substantive to read, see Piera Centobelli, Roberto Cerchione, and Emilio Esposito, "Knowledge Management in Startups: Systemic Literature Review and Future Research Agenda," Sustainability 2017, 9, 361.

Saturday, January 13, 2018

Learning More and/or Better

The following string of thoughts comes out of recent readings and meetings.  As always, more questions to ask than answers to provide.

We can think about learning in two dimensions, quantitative and qualitative.  Learning MORE is quantitative.  Learning BETTER is qualitative. 

I am inclined to think (or hypothesize) that learning MORE is a very incremental process whereas learning BETTER is potentially an exponential process.  I don't really want to use the word "exponential" here and I certainly don't want to use the word "transformational".  What I mean is that learning BETTER, addressing the qualitative dimension of learning is potentially more impactful than learning MORE. 

It's the difference between the idea of continuous learning, which is simply about learning more over time, and the idea of learning HOW to learn, which is about becoming a better learner.

This manifests itself currently for me in terms of something as simple as reading.  The number of books I will read this year is somewhat irrelevant.  I am much more interested in developing, nurturing my capacity to engage in deep reading and deeper learning.  There is some tension there because I could benefit from reading more broadly, which might translate into more books.  The compromise might be scanning more books from a broad spectrum of disciplines but reading deeply a smaller subset of those books.

Reading now:  Humility Is the New Smart: Rethinking Human Excellence in the Smart Machine Age, by Edward D. Hess and Katherine Ludwig.