Showing posts with label critical thinking.

Saturday, March 11, 2023

Playing with ChatGPT and Understanding How to Query ChatGPT

Like any new tool, ChatGPT and related generative AI tools require some amount of human learning.  Granted, the latest generation of generative AI chatbots is very sophisticated and we, as humans, already know how to ask questions, yet suddenly the art of asking questions takes center stage.

As search tools improved over time and the interface lent itself to entering full sentences rather than just keywords, we probably all started naturally entering questions in Google Search and other search tools.  I know I did.  Instead of entering "World Café" I could enter "What is a World Café?"  Ideally, there would be a difference in the results, because with a simple keyword I am asking for everything that mentions World Cafés, and with a simple "what is" question, I am looking for a description or definition.

Enter ChatGPT, and it's a new world, a new way of interacting with a query tool.  It may feel like a conversation, but it is not. I would prefer to reserve the word "conversation" for interactions with humans. I don't care that it seems to acquire an attitude at times.  We should not fall into the trap of thinking it has human-like capabilities or feelings of any kind.  It does not.  Neither should we react to its answers as if it were human.  It is a probabilistic model. It does an impressive job of guessing what the next word should be, but it has no understanding of what the sentences mean.  It is iterative in a useful way: you can refine your query without starting over, and the tool remembers the initial parameters of your query.
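For readers curious about the mechanics, here is a minimal sketch of why refining works, using the OpenAI Python library in its early-2023 (pre-1.0) form; the ChatGPT web interface manages this history for you behind the scenes. The model itself remembers nothing between calls: the client resends the accumulated message history every time, which is why earlier questions and answers shape later ones. The model name and questions below are illustrations, not a record of my session.

```python
# A minimal sketch of iterative refinement, assuming the pre-1.0
# OpenAI Python library (current as of early 2023).
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder: supply your own key

messages = [{"role": "user", "content": "What is a World Café?"}]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # assumption: any chat-capable model works here
    messages=messages,
)
reply = response["choices"][0]["message"]["content"]

# Append both sides of the exchange so the next question is read in
# context: "alternatives" now means alternatives to World Cafés.
messages.append({"role": "assistant", "content": reply})
messages.append({
    "role": "user",
    "content": "What are some alternative stakeholder engagement methods?",
})

refined = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
print(refined["choices"][0]["message"]["content"])
```

Start a fresh `messages` list and that context is gone, which is exactly the difference I describe below between a brand-new query and a refined one.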

Let's explore further with my "World Café" query.  As a side note, I know enough about World Cafés to have a sense of the accuracy and meaningfulness of what I would normally find by searching the web.  I am not an expert who would have written the content that exists on the web about this topic.  I have also attended World Cafés and implemented adaptations of the model.  As with anything related to information and knowledge management, the prior knowledge and experience of the individual encountering and trying to absorb new information is relevant.

Here are a few questions I asked:

  • What are the main uses for a World Café? 
  • Is it different from a Knowledge Café? -- I learned a few nuances I was not aware of.
  • If I am planning a Knowledge Café, what are some of the questions I should ask?  -- This was a badly formulated question, which resulted in an answer that was off the mark. The answer focused on the types of questions to ask within the Café rather than questions to ask myself as the planner of the Café. The same confusion could have happened in a conversation with a human.
  • That's not what I was looking for. Let's rephrase.  What are some best practices in planning a World Café or a Knowledge Café? -- See how I fell into conversation mode.  I am not sure how ChatGPT interprets my telling it that the answer was not what I was looking for, or what it can learn from that.  Ultimately, my question was not phrased properly.
  • When is a World Café or Knowledge Café the most appropriate way to engage a group of people in a meaningful conversation?
  • What are some alternative stakeholder engagement methods? -- Here, because I am still refining the query, ChatGPT knows that I am looking for alternatives to World Cafés/Knowledge Cafés and therefore will not list these two methods in the answer.
I came back to this query a few days later and tried something a little different. Note that you can return to a past query and pick up where you left off, or start a new query.  I tried it both ways with the following question:
  • I have been asked to plan a World Café for somewhere between 20 and 100 people.  How should I go about planning this World Café?
The answers to the same question were quite different.  The answer that came as a result of a completely new query was off-topic in the sense that it did not focus on the planning.  It described the entire process of implementing a World Café, and it did so in such generic terms that, with the exception of a single sentence, it could have applied to any meeting.  So I followed up with: "I'm only interested in the planning stage."  The answer was a series of bullet points that would also apply to any meeting.  I guess I was looking for more specificity, so I insisted with "I am looking for more specific guidance."  And it worked.  Each bullet point was now accompanied by three or four useful sub-bullets.

The answer that came as a result of the existing query (the refinements from all the questions listed above) was concise yet much more precise and targeted.  Every bullet point mentioned World Cafés and was on point.  And yet, it completely failed to mention anything about identifying and inviting participants, which turns out to be a big gap. 

I am trying to imagine what happened with the two different queries.  The query starting from scratch draws on a huge amount of material mentioning World Cafés, and the concise answer it is able to provide is the most generic one.  It's not wrong, but it's not very useful either.  I imagine that the refined query draws on a narrower set of material deemed relevant based on the previous questions.  Depending on the winding road of questions I asked, it may have eliminated resources that talked about participants.  I wonder.

So I asked a follow-up question:  "What about participants?"  YES.  Very good answer to that.

Final question:  What are the key sources for this information?  I had a strong negative reaction to the one bullet in the answer that suggested ChatGPT had "professional experience". 


Sorry, but ChatGPT has zero professional experience planning or implementing World Cafés. I don't think I would dare say that I have professional experience in X if all I've done is read about X. ChatGPT is gaining a lot of experience answering questions and learning how to answer questions in ways that satisfy the questioners, but until humans feed it the knowledge based on their experience, it can't learn.  Questions around what it can and cannot learn are fascinating.

Lessons learned from this set of ChatGPT queries:

  • Smart querying requires critical thinking.  This is particularly true as we (humans) learn to interact with this powerful tool. Until we fully understand its capabilities and weaknesses, we need to treat our queries as practice runs.  Our practice runs are training material for ChatGPT as well, so it is potentially learning how to answer pretty bad beginners' queries.
  • Don't give up too quickly when the answer seems off-topic or too generic.  Refine the query until you get the level of detail and specificity you need, and accept that it will not be perfect.
  • Keep an eye on the full query, the series of questions and refinements, because it may represent a set of constraints that shapes the set of materials ChatGPT looks at.  If you veered off, like I did, with a question about alternative methods, you might need to later say "ignore question x" and refocus on World Cafés.  When I asked "What are the key sources for this information?" I think ChatGPT answered for the entire question thread, not just the last question in the thread.
  • There is a lot I don't understand about HOW ChatGPT goes about selecting its sources in the context of a single question vs. a full set of iterative queries.  I will continue testing and practicing asking good questions as a way to keep learning.

Thursday, January 30, 2020

Knowledge Management and Critical Thinking

Tara Mohn led a presentation and discussion today at the monthly face-to-face meeting of the KM Community of DC Meetup about mindful KM facilitation.  The discussion reminded me of two related discussions:

1. Words matter in KM conversations and the terms mindful and mindfulness are so often associated with meditation that they may not be appropriate for some workplace cultures.  There are alternatives that can get the same message across.  One such alternative is "critical thinking."

2. Some components of KM, such as the development of job aids, best practices, and templates, which are designed to ensure that employees do not unnecessarily reinvent the wheel, can go overboard by being too prescriptive.  Equally important, and potentially dangerous within a younger and less experienced workforce, SOPs, templates, and similar knowledge management tools can lead to "mindless" cut-and-paste and the absence of critical thinking, which in the end is the opposite of what a knowledge management effort should encourage.

When pressed to deliver under tight schedules, employees look for shortcuts.  Knowledge management efforts need to find the right balance between facilitating access to job aids, templates, and SOPs on the one hand, and, on the other, encouraging the critical thinking required to use those tools effectively, knowing when and how to adapt them to specific needs.



Saturday, January 13, 2018

Learning More and/or Better

The following string of thoughts comes out of recent readings and meetings.  As always, more questions to ask than answers to provide.

We can think about learning in two dimensions, quantitative and qualitative.  Learning MORE is quantitative.  Learning BETTER is qualitative. 

I am inclined to think (or hypothesize) that learning MORE is a very incremental process, whereas learning BETTER is potentially an exponential process.  I don't really want to use the word "exponential" here, and I certainly don't want to use the word "transformational."  What I mean is that learning BETTER, addressing the qualitative dimension of learning, is potentially more impactful than learning MORE.

It's the difference between the idea of continuous learning, which is simply about learning more over time, and the idea of learning HOW to learn, which is about becoming a better learner.

This manifests itself currently for me in terms of something as simple as reading.  The number of books I will read this year is somewhat irrelevant.  I am much more interested in developing, nurturing my capacity to engage in deep reading and deeper learning.  There is some tension there because I could benefit from reading more broadly, which might translate into more books.  The compromise might be scanning more books from a broad spectrum of disciplines but reading deeply a smaller subset of those books.

Reading now:  Humility Is the New Smart: Rethinking Human Excellence in the Smart Machine Age, by Edward D. Hess and Katherine Ludwig.

Wednesday, January 03, 2018

Why We Need Contrarians



I don't make New Year's resolutions.  I have a variety of reasons for not doing it, but at least one of them is that I like being a contrarian.  When the crowd is doing something just because everyone in the crowd seems to be doing it, I resist.

More seriously, and in a professional context, I am aware that each time I write publicly about monitoring and evaluation (M&E), I run the risk of never getting hired to do M&E work, simply because of my skepticism and at times contrarian opinions.  And yes, these are opinions. I don't claim to be correct while everyone else is wrong.  I just wish people would stop and think rather than follow the crowd.  Admittedly, following the crowd is much easier.  You get pushed along by its forward motion, going downhill like the little grey people in the picture above.  When going in the other direction, not only are you going uphill, but you have to fight your way through a crowd of opposing views.

Today, I will point to research that may support at least some aspects of my thinking and validate my conviction that we need contrarians.  Not surprisingly, it has to do with a cognitive bias known as the resulting fallacy.

I first encountered this fallacy through a simple case study run by NASA and developed in collaboration with academics researching decision-making.  The case study describes a mission that was ultimately successful but was considered a near-miss.  In other words, it benefited from a certain amount of luck and could have been a complete failure. The learning objective of the case study is that people draw conclusions about the decisions that were made based on the outcome: the mission succeeded, therefore the decisions must have been good.  People studying decision-making and focusing on the resulting fallacy say, "not so fast." You cannot say much about the quality of a decision based solely on its outcome.
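To see why "not so fast" is warranted, here is a toy simulation with hypothetical numbers of my own, not NASA's: a decision with a clearly positive expected value is a sound decision, and yet it produces a bad outcome about a quarter of the time. Judging by outcomes alone would mislabel the decision in every one of those cases.

```python
# A toy illustration of the resulting fallacy (hypothetical numbers).
import random

random.seed(42)  # make the run reproducible

def good_decision_outcome():
    # 75% chance of winning 100, 25% chance of losing 100:
    # expected value is +50, so taking the bet is a sound decision.
    return 100 if random.random() < 0.75 else -100

outcomes = [good_decision_outcome() for _ in range(10_000)]
losses = sum(1 for o in outcomes if o < 0)

# Roughly 25% of these sound decisions end badly anyway.
print(f"Sound decision, bad outcome: {losses / len(outcomes):.1%} of the time")
```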

What does the resulting fallacy have to do with being a contrarian, you ask?  It points to the need for more in-depth analysis of cause and effect and of the thinking behind our decisions.  We need more contrarians who are ready to raise their hands and say "not so fast."

Why be a contrarian?  Someone has to be.  When 99% of the people in a particular industry or discipline are jumping on a bandwagon, I like to be the one standing back and watching.  It's not that I never jump on bandwagons.  I do.  There are some bandwagons, however, where I get tickled and go, "Wait a minute... something is fishy.  I don't know what it is yet, but I'm not getting on that one."

I like poking and prodding when I see the bandwagon passing by.  Being a contrarian doesn't make me right.  It can make me useful.  There's always a role for a devil's advocate, a skeptic who will force others to articulate their positions and assumptions.

This is also related to the importance of allowing dissenting opinions in organizations, but that's a topic for another post perhaps.

Resources

"Do you have a contrarian on your team?", Insights by Stanford Business, November 13, 2015, by Elizabeth MacBride.

"The Resulting Fallacy is Ruining Your Decisions," by Stuart Firestein, December 7, 2017.

"Understanding Near-Misses at NASA," ASK OCE, August 17, 2006, Vol. 1, Issue No. 12.

Sunday, July 30, 2017

Consider (Book 30 of 30)

Title: Consider: Harnessing the Power of Reflective Thinking in Your Organization
Author: Daniel Patrick Forrester

Ending this book series blog challenge with this book is no accident.  While I didn't have a precise order in mind in going through the 30 books in 30 days, I planned both the beginning (Learn or Die - Book 1 of 30) and the end (Consider).  To me, these two books represent a very "back-to-basics" approach.  We've passed the 20-year mark in the history of Knowledge Management, and perhaps the 25-year mark with Organizational Learning. What about basic conversation skills?  What about critical thinking?

We can complain all we want that databases of lessons learned aren't the answer, but how about helping people in organizations -- at the individual level and in teams -- to pause long enough to reflect, think it through, consider?  No time to think?  Think again! It's like everything else: make the time to think, reflect, consider. I dare you. Just try it.  It's refreshing.

I'm taking an entire year to do it, and that doesn't mean I'll be sitting around in The Thinker pose doing nothing for 12 months.  I'll be very busy, yet I'm calling it a Year of Learning precisely because it will involve a lot of quick learning cycles, pauses, reflection, and rapid adaptation.  Pausing to reflect doesn't mean you waste time.  In fact, pausing frequently to reflect means you have more opportunities to discover early that you're off track and to correct course, or simply to take advantage of new opportunities. In essence, you make better use of your time and you're much more adaptable and flexible in a fast-changing environment.

If you're still wondering why you should take the time to pause and reflect regularly, read the book.  I highly recommend it. You can pair it with Madelyn Blair's Riding the Current (Book 5 of 30).

TO DO:
  • Publish a list of resources on individual reflection (for PKM purposes).
  • Revisit the Skillshare Classes to decide whether to 1) leave "as-is", 2) remove, 3) redo.



Monday, November 21, 2016

Uncertainty vs. Ambiguity

I wrote an article for an internal organizational newsletter recently about ambiguity and decision making in the context of project management ("Ambiguity, Decision Making and Program/Project Management," pp. 22-24, The Critical Path, Winter 2016).  The impetus for the article had nothing to do with current political issues, but now it keeps coming back to mind.  The point I was trying to make in that article is that our aversion to ambiguity makes us dismiss it rather than tackle it with critical thinking.  It's a cognitive bias we need to be more aware of.

People keep saying that we don't like uncertainty, but what they mean is that we don't like ambiguity.  What we are facing with the Trump transition are conditions that resemble ambiguity rather than uncertainty. Uncertainty is characterized by known risks and known probabilities of those risks being realized. Ambiguity is characterized by unknown risks and an inability to estimate the probability of various outcomes. When faced with known risks and probabilities, we have risk management tools that allow us to analyze the risks and deploy various strategies to address them.  With ambiguity, we tend to bury our heads in the sand, which is never a good idea.  Ignoring a challenge because we don't know how to address it doesn't make it go away.
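A toy example, with hypothetical numbers, of why the distinction matters in practice: under uncertainty, the standard expected-loss arithmetic works; under ambiguity, there is nothing to feed it.

```python
# Toy numbers, not from the article: a hypothetical project with two
# known risks. Under *uncertainty* the probabilities are known, so a
# standard expected-loss calculation is possible.
risks = {
    "schedule slip":  (0.30,  50_000),   # (probability, cost if realized)
    "key staff loss": (0.10, 120_000),
}

expected_loss = sum(p * cost for p, cost in risks.values())
print(f"Expected loss: ${expected_loss:,.0f}")  # -> Expected loss: $27,000

# Under *ambiguity*, neither the probabilities nor even the full list
# of risks is known; there is nothing to plug into the formula, which
# is why the analytical tools stall and critical thinking must take over.
```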

Regardless of our individual political affiliations, we need to acknowledge that what we are facing is an ambiguous situation rather than an uncertain situation.  Critical thinking skills will be at a premium.  Sharpen your minds!