Monday, February 26, 2024
Common sense and nonsense
While reading something about the challenges of Artificial General Intelligence (AGI), which I was trying to understand more deeply than just scanning through, I fell into a rabbit hole of concepts: common sense, critical thinking, reasoning (inductive, deductive and abductive). Then I tried to connect that to my ongoing focus on understanding knowledge graphs and the organizing principles and schemas that are meant to organize data so that machines can "understand" and even generate new insights. More and more, I am inclined to go back to an earlier interest in neuroscience, which I never pursued very deeply, but which informed some of my thinking. Understanding how we, as humans, process information, how we learn, could be critical to the development of AGI, but it's not clear to me that we should try to replicate our brains. Common sense isn't that common after all.
I stopped there because this is way too big of a topic for a little blog post and I will only embarrass myself with nonsense. There is indeed a danger in reading broadly and then trying to write in the absence of deep knowledge and understanding.
Thursday, February 22, 2024
What have I done?
Time flies. As I contemplate my next professional life, encore career, anything but retirement, I often question myself. What have I done (really)? I managed to earn a living, which isn't nothing. I found ways to keep learning and encouraged others to learn. Those were consistent primary motivations: Earn and Learn.
Today I asked ChatGPT what it knew about me. Sounds a little narcissistic? Well, let's just say I wanted a sense of my digital footprint, and while I wasn't expecting too much, it turns out that ChatGPT easily digs out details very few people have ever read, including things I posted online just a couple of days ago. So here it is, introducing myself via ChatGPT's words, most of which cited my own words anyways, coming from this Insight Mapping blog.
The following professional profile is the result of a sequence of prompts. The first two prompts gathered background information. The third prompt put it all into a professional profile. The last prompt was meant to add a component (Insight Mapping) that was missing and that I wanted to incorporate.
Barbara Fillip's Professional Profile (according to ChatGPT)
Barbara Fillip is a seasoned Knowledge Management (KM) professional with over two decades of experience in facilitating individual, team, project, and organization-wide learning across international development and aerospace industries. She serves as a Senior Advisor of Knowledge Management at Chemonics International and an adjunct professor at George Mason University, where she teaches graduate courses in knowledge management and collaboration within the Schar School of Policy and Government. With a PhD in international affairs from the University of Pittsburgh Graduate School of Public and International Affairs, Fillip’s academic foundation is as robust as her professional experience.
A hallmark of Fillip's work is her innovative use of insight mapping, a method she employs to visualize complex information systems and uncover new connections and insights within organizational knowledge. This approach underscores her expertise in not only managing but also enhancing the strategic use of knowledge through visual and analytical tools. Her dedication to integrating insight mapping into KM practices demonstrates her forward-thinking approach to addressing complex informational and organizational challenges.
Fillip's extensive exploration into Personal Knowledge Management (PKM) has led her to develop and implement various tools and methodologies, positioning her at the forefront of the field. She advocates for the integration of knowledge management with artificial intelligence, the practical application of PKM tools like TiddlyWiki, and the strategic use of insight mapping to drive organizational learning and innovation.
Barbara Fillip’s professional profile exemplifies a deep commitment to advancing knowledge management through innovative practices, including insight mapping, AI-enhanced tools, and PKM. Her contributions not only enhance organizational effectiveness but also offer valuable insights into the evolving landscape of knowledge management in the digital era.
Monday, February 19, 2024
AI-Augmented Insight Mapping
"AI-Augmented Insight Mapping" is an advanced application of technology in the field of knowledge management and decision support, leveraging artificial intelligence to enhance the process of creating, visualizing, and analyzing complex relationships and insights within data. As far as I know, I am the first person to use that term. Insight mapping isn't a common term to begin with.
You would need:
1. Machine Learning (ML): AI algorithms can analyze data, learn from patterns, and make predictions or recommendations. In insight mapping, ML can identify significant connections and trends that might not be evident through manual analysis.
2. Natural Language Processing (NLP): This involves the analysis of text to extract meaningful data. NLP can be used to interpret and categorize insights from unstructured data sources, such as academic papers, news articles, and social media posts.
3. Data Visualization: Advanced visualization tools powered by AI can represent complex datasets in intuitive and interactive insight maps. These maps can help users explore and understand the intricacies of the data more effectively.
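To make the combination of these pieces a little more concrete, here is a minimal, hypothetical sketch in Python: a few made-up notes, naive keyword matching standing in for real NLP/ML, networkx for the graph, and degree centrality as a crude stand-in for surfacing candidate "insights". This is not a real pipeline, only the shape of the idea.

```python
# Toy illustration of AI-augmented insight mapping: extract candidate concepts
# from unstructured notes (naive keyword matching stands in for real NLP/ML),
# link concepts that co-occur in the same note, then rank them by connectivity.
from itertools import combinations
import networkx as nx

notes = [
    "Knowledge graphs can support organizational learning and decision support.",
    "Personal knowledge management tools like TiddlyWiki help capture insights.",
    "Decision support improves when insights from personal knowledge management are shared.",
]

# Hypothetical concept vocabulary; a real pipeline would use entity or keyphrase extraction.
concepts = ["knowledge graphs", "organizational learning", "decision support",
            "personal knowledge management", "insights", "TiddlyWiki"]

G = nx.Graph()
for note in notes:
    found = [c for c in concepts if c.lower() in note.lower()]
    for a, b in combinations(found, 2):
        G.add_edge(a, b)  # co-occurrence becomes a relationship

# Concepts with the highest centrality are candidate "hubs" worth exploring further.
for concept, score in sorted(nx.degree_centrality(G).items(), key=lambda x: -x[1]):
    print(f"{concept}: {score:.2f}")
```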
Right now I am just experimenting with and learning about Knowledge Graphs within the context of my own Personal Knowledge Management system, but the bigger picture of how all these rapidly evolving tools could be combined is quite exciting. The idea is that if I can have a deep understanding of how it works with data (especially unstructured data) that I am intimately familiar with, then I can figure out how it can be scaled to broader, organizational settings.
----
[Here is an example of a tenuous connection which would make me consider moving the next few lines to a different post. For the sake of the post's clarity, I should stick to one key message. For the sake of exploring broader connections and working on developing better articulations of these connections, I should keep it here. Since I am more interested in exploring connections and complexity than delivering a simple message, the next paragraph stays].
----
This idea of exploring new concepts and tools at the individual level (within a PKM system) is connected (somehow) to another argument I have been making: Individual knowledge workers need some foundational knowledge in Personal Knowledge Management before they are asked to engage in and contribute to corporate Knowledge Management. I think the KM profession missed the opportunity to make that connection more obvious and to leverage individual incentives as a prerequisite for corporate efforts to "manage" knowledge.
Sunday, February 18, 2024
Cognitive Processes (Cont'd)
We are often not fully aware of how our experiences shape how we approach new problems. Would an increased awareness of how our experiences shape our cognitive schemas enhance our ability to solve problems?
Cognitive Schemas
Our cognitive schemas—mental frameworks helping us organize and interpret information—are intricately woven from our experiences. They play a pivotal role in how we perceive new situations, tackle problems, and make decisions. I don't spend a lot of time analyzing my own mental frameworks but once in a while, I become aware of the connection between a recent insight and a prior experience.
The trigger for this post was one such insight. I was exploring a specific aspect of the knowledge graph I am building, and I had one of those little Aha! moments when a new idea or concept emerges. These little learning moments deserve more attention than they tend to get. In the excitement of writing down the idea or exploring it further, it can be challenging to pause and ask "Where did this insight come from? What is the source?" The idea is that if I understood more about the cognitive processes that lead to an insight, I could "simply" create the conditions and environment for more insights to emerge. In fact, building a knowledge graph from my notes is an experiment to see whether and how it will facilitate "insight mapping".
Here are some ways our cognitive schemas impact how we approach problem solving:
Pattern Recognition
Our brains are wired to recognize patterns based on past experiences. When faced with new problems, we subconsciously search our memory for similar situations or outcomes. This can lead to faster problem-solving but also biases our approach to what has worked or not worked in the past. I keep a digital folder for ideas that failed because you never know when they might need to be resurfaced for a second try under new and different conditions. Many are probably ideas whose time hasn't come yet and it would be a shame to dismiss them.
Expectations and Predictions
Experiences influence our expectations and predictions about future events. If past experiences have been positive, we might approach new challenges with optimism and confidence. Conversely, negative experiences could lead to apprehension or pessimism, affecting our willingness to take risks or try new solutions. By consciously shifting my perspective, I've learned to approach problems with renewed vigor, informed by the past but not shackled by it (or move on and totally unshackle myself).
Heuristics and Biases
Heuristics are mental shortcuts we use to make decisions quickly. While they can be efficient, they are also prone to biases shaped by our experiences. For example, the availability heuristic makes us overestimate the importance of information that comes to mind easily, often based on recent experiences or emotionally charged events. The advice to "sleep on it" resonates with me as a reminder of the emotional undercurrents that often drive our decisions. Giving ourselves time to detach and reflect can unearth patterns and solutions previously clouded by immediate reactions.
Creative Thinking and Innovation
Diverse experiences can enrich our cognitive schemas, making us more adaptable and creative in problem-solving. Being exposed to varied situations and learning from them can broaden our perspective, allowing us to draw on a wider range of solutions when faced with new challenges. What if we more consciously asked ourselves simple questions like, "Where and when have I encountered a similar challenge?" "How is this similar to or different from that prior experience?" I was reminded of this in the recent podcast I did with Enterprise Knowledge, during which Zach noted that I had acquired, over my career, a great diversity of experiences around KM. That is very true and deeply informs my approach to KM -- it may also explain why I am often frustrated with small-scale efforts that touch on a very narrow KM scope.
Learning and Adaptation
Our ability to learn from past experiences and adapt our schemas accordingly is crucial. Reflecting on what has worked or failed in the past and why can help us approach new problems more effectively, avoiding previous mistakes and being open to novel solutions. The challenge is that while learning from past experience is key, we often don't do it well and unless we are more conscientious about our approach to learning, we don't necessarily learn the right lessons, or we generalize too much and miss the point that most lessons are very contextual. When they are not context specific, they are common-sense and of limited value.
Our experiences are invaluable, yet without mindfulness, they can narrow our vision and stifle creativity. Recognizing and reflecting on the myriad ways our past influences our present can empower us to face new challenges with a balanced and open mindset, ready to draw from the past but eager to forge new paths.
It reminds me of... (cognitive processes)
When I get into a writing routine, even if what I am writing are insignificant notes and random blog posts, I become more aware of the cognitive processes involved. It's almost as if the brain is breathing in and out, expanding to seek out and acquire ideas, and then contracting to synthesize, clarify, and transform into a series of words. In the last couple of days, I have become particularly aware of instances when something I read or a thought related to what I am reading will remind me of something either quite distant or immediately feel connected to a very recent event or activity.
Here are two examples to illustrate:
Yesterday, as I was writing the blog post about "Mindset is everything", I was reminded of a book I read decades ago and haven't opened since. Today, as I read a blog post online about digital hygiene, it immediately connected with the book I started reading yesterday, Your Time To Thrive, by Marina Khidekel. I happened to be reading the chapter on unplugging from digital gadgets.
In the first example, there is something happening in the brain that makes a connection to a deeply buried memory. The initial connection is a connection to the idea, the main argument of the book. Then I remember the book that makes that argument (I am not super confident that my recollection of the sequence of thoughts is accurate; memory is fallible). I had absolutely no recollection of the author's name or what the cover of the book looked like. In fact, I did not recollect the book's title correctly.
In the second example, it is likely that having started Your Time to Thrive and having just completed the chapter on unplugging, my mind was attracted to a blog post on digital hygiene which I might have completely bypassed a week ago. Trying to retrace my steps, or more precisely my thoughts, it seems I scanned through the blog post precisely to see if it was related to what I had just read about unplugging. I determined that it was related but adjacent, complementary, not addressing the topic from the same angle, which was interesting in itself.
It may sound paradoxical, but in the era of rapidly advancing AI, I have a feeling (Is it a feeling or an insight?) that understanding our own human brains will become more important than ever. I don't mean that we all need to become neuroscientists but rather that critical thinking skills and learning how we learn and how we think and process information will become ever more important because of the rapid changes in our access to tools that can accelerate and augment our own cognitive capabilities.
Tuesday, February 13, 2024
"Mindset is everything" (or not)
I started the day with this Ross Dawson LinkedIn post.
Here is the blurb that I decided to unpack:
🌱 Mindset is everything. Constant change is a reality you need to accept and learn to love. We need to be resilient, to respond, to adapt ourselves. Those that embrace rapid shifts will see opportunities others don’t, create far greater value, help their organizations to evolve, and be in a position to savor rather than be worn down by today’s extraordinary shifts.
This reminded me of a book I read in college or grad school: Underdevelopment is a State of Mind: The Latin American Case, by Lawrence E. Harrison, published in 1985. I haven't opened it in decades, but I found my copy in the basement. The argument was that Latin America was underdeveloped because of certain cultural attitudes and values prevalent in the region, and that attitudes towards work, the role of women, the importance of education, time perception, and the value placed on innovation and authority all played a critical role in hindering economic progress and development.
On a more personal level, it sounds like a "change your mind to change your life" slogan, an argument about how limiting beliefs are stopping you from being the best version of yourself, etc., and there is an entire literature around that.
Let's start by taking some of the text apart:
1. Mindset is everything (?). Probably not. That is too strong of a statement. There are lots of external factors that impact an individual's ability to adapt and thrive.
2. Change fatigue is a real thing. Constant change doesn't automatically lead to better outcomes. It can lead to decreased productivity and engagement, erosion of trust, and it can contribute to a negative organizational culture, where cynicism and resistance to change become the norm.
3. Not all change is progressive. Change is not always happening in the right direction, so blindly accepting and embracing change sounds like poor advice. There are lots of historical examples of changes that were initially perceived as positive and later recognized as harmful.
4. Stability and routine are crucial to psychological health and well-being. We should appreciate the benefits that some level of predictability brings to individuals and organizations.
This is where change management should be engaged, but I'm not sure change management is adapting fast enough. When constant change is applied to antiquated ways of working and traditional organizational structures, it creates a lot of pain. Individuals would find it easier to adjust their mindset if the organizational infrastructure was changing in a way that aligned with the required individual adjustments. This is going back to the fact that many external factors impact an individual's ability to rapidly shift gears and adjust to the changing winds.
And yes, we all need to build up our resilience and accept change as a constant. I don't think we should accept all change blindly. It's not resistance to change, it's critical thinking.
Monday, February 12, 2024
Prompt Engineering: Human language, thought processes, and machine interpretation
Prompt engineering is fascinating and complex. On the one hand, it's essentially about writing a query in normal language (natural language), which is very similar to writing code in a language we all know instead of having to learn a new programming language. However, natural language is very complex. It takes each of us years to learn to understand and use it. Programming languages are based on structured logic. Natural language is more fluid, often ambiguous.
Prompt engineering requires us to use natural language to communicate with a machine that doesn't understand natural language in the human sense. As a result, prompt engineering requires us to be much more aware of HOW we use language and HOW the machine will interpret our language. The machine interprets the prompt to guide its algorithms to the right outcome. The distinction between human cognitive processes and machine algorithms is crucial to understand and important to keep in mind as we use natural language to query machines.
Prompt engineering requires us to examine our own cognitive processes, to analyze our mental models, and to try to identify and communicate our intentions, meaning, and context in such a way that the machine algorithm will be able to interpret them accurately. Our assumptions, biases, and the way we frame information can significantly impact the effectiveness of prompts and the AI's responses.
In short, even though we query GenAI with natural language, which appears at first glance to be much easier than learning a programming language, a sharpened awareness of our own language is required to get the best results.
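As a small, hypothetical illustration of that point (the wording is mine, not taken from any particular guide), compare a vague prompt with one that spells out intent, context, audience, and output format:

```python
# Hypothetical illustration: the same request, phrased vaguely vs. with explicit
# intent, context, audience, and output format. Only the wording changes.
vague_prompt = "Tell me about knowledge graphs."

explicit_prompt = (
    "I am a knowledge management advisor preparing a 10-minute briefing for "
    "non-technical executives. In plain language, explain what a knowledge graph is, "
    "give two concrete business uses, and list three risks. "
    "Format the answer as three short sections with headers."
)

# Either string could be sent to a GenAI tool; the second leaves far less room
# for the model to guess at intent, audience, or structure.
print(vague_prompt)
print(explicit_prompt)
```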
Since mind-reading AI is on the way, ultimately, language could disappear, but we are still far from language extinction. Until then, I will translate thoughts into words and strive to be coherent with my writing and my prompts.
Saturday, February 10, 2024
TiddlyMap and Neo4J
As I continue to explore Knowledge Graphs as what I believe to be a key technology in support of Knowledge Management in the era of rapid AI advancements, I am taking baby steps to learn, deploying every learning method possible. I am getting introduced to a lot of new concepts. It's easy to assign myself some readings, but then I struggle to understand what I am reading because I lack some foundations. I take two steps back to get the basics right and then one step forward.
What has perhaps helped the most is connecting Knowledge Graphs to what I have learned over the years about various approaches to knowledge mapping. And then playing with tools that mimic knowledge graph technology or offer a free, simplified approach to learning.
* First, TiddlyMap has allowed me to get a grasp of nodes and relationships automatically generated based on tagging and links I create as a result of my own knowledge organization schema. Automated tagging could eventually remove the manual process of tagging but I find the cognitive processes involved in tagging to be useful to me. Learning the functionalities of the visualizations has been extremely useful to start exploring the data from different angles. Since I created all the data (my own notes), I am very familiar with the content, which makes it easier to figure out how to try to analyze it.
* Second, I opened a free account on Neo4J to try to get a sense of a real knowledge graph tool. This was a serious lightbulb moment. I will need some time to really understand the functionalities and because the sample data provided with the learning materials is not something I immediately grasp (compared to my own data in TiddlyMap), this may be a slow process. Still, what I have done with TiddlyMap in the past month or so has been excellent preparation to dive into a more robust knowledge graph tool.
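For anyone curious what that kind of first experiment looks like, here is a minimal sketch using the official Neo4j Python driver, assuming a local instance at bolt://localhost:7687 with placeholder credentials; the note titles are made up and this is not my actual graph.

```python
# Minimal sketch: two note nodes and one typed relationship, mirroring the kind of
# structure TiddlyMap derives from tags and links. Connection details are placeholders.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

with driver.session() as session:
    # Create (or reuse) two notes and connect them.
    session.run(
        "MERGE (a:Note {title: $a}) "
        "MERGE (b:Note {title: $b}) "
        "MERGE (a)-[:RELATES_TO]->(b)",
        a="Knowledge Graphs", b="Personal Knowledge Management",
    )
    # Ask what the 'Knowledge Graphs' note is connected to.
    result = session.run(
        "MATCH (a:Note {title: $a})-[:RELATES_TO]->(b) RETURN b.title AS title",
        a="Knowledge Graphs",
    )
    for record in result:
        print(record["title"])

driver.close()
```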
TiddlyMap is a personal knowledge management tool and Neo4J is meant for much larger scale knowledge systems. They are not very comparable. Yet sometimes a tool meant for individual use can help someone grasp concepts that are difficult to grasp by reading guidance or instructions, or even watching a video, because the scale of implementation is very different.
I keep going back to the connections between personal knowledge management -- how individuals can proactively manage their own knowledge -- and knowledge management at the more traditional scale of the organization. Individuals who have a better grasp of how they, as individuals, handle knowledge, become more effective in supporting organizational knowledge management.
Friday, February 09, 2024
AI-Enhanced Personal Knowledge Management
Today's train of thought comes from: "Augmenting Human Creativity with Ayush Chaturvedi, Co-Founder of Elephas", interviewed in Ness Labs.
Elephas is a personal AI writing assistant for MacBook, iPhone and iPad. I don't use Apple computers, and while I have an iPhone, it's unlikely I would use it for any substantive writing. So why am I interested in learning more about this app? My interest revolves around the general concept of having a Personal Knowledge Management (PKM) tool embedded in the workflow.
I have yet to test out Copilot in Microsoft 365 at work. It seems to be deeply embedded within the productivity tools of the workplace workflow. That might be great at work, but the knowledge base I have accumulated and shared at work in the past five years is a fraction of the knowledge base accumulated over a 30-year career. Granted, the last 15-20 years are perhaps the most relevant.
As a side note, I started this blog 20 years ago, which seems incredibly long ago. How much of what I wrote 20 years ago is still relevant?
What I need is an AI assistant that can link to a disparate set of existing resources. As Ayush Chaturvedi points out in the interview, even the most conscientious advocate and practitioner of PKM will end up testing, adopting, rejecting, changing the suite of tools they use. The same happens in organizations. We end up with a disparate set of data sources that need to somehow be connected to the AI app and linked to each other.
I have been using TiddlyWikis for more than a decade as a PKM tool. I have accumulated many TiddlyWikis but I can relatively easily connect them. What I need now is an AI tool. Should the AI be embedded within TiddlyWiki? Should it be sitting on my desktop? Should it be cloud-based?
In Knowledge Management, we often talk about embedding KM within the workflow so that the tasks associated with managing knowledge are not separate from the workflow but rather fully integrated. Instead of having to proactively remember to save something to a knowledge base, the workflow should either automatically save to a knowledge base or at least trigger a reminder or prompt you to save (or share) to a knowledge base.
The same should be true in PKM. Advocates and practitioners of PKM are likely to be very aware of their own internalized workflows and pain points. The question then becomes, "What PKM pain points am I trying to solve with an AI assistant?" That should help inform the selection of a specific tool. Ideally, the AI-assistant tool options should not force me to switch away from my current tool set (TiddlyWiki/TiddlyMap in particular) but rather augment existing capabilities.
Thursday, February 08, 2024
KM Archetypes and Organizational Culture
ToT -- Train of thought: The way in which someone reaches a conclusion, a line of reasoning.
* I am using the expression in a slightly different way, to reflect a much less linear process which connects one thought to another without necessarily coming to any conclusion other than that A and B are now connected in some interesting new way.
* Some resources I scroll through (news, articles, etc.) are dismissed as irrelevant (at that moment) and some resources are picked up by the brain as either directly relevant to an issue that is top of mind or relevant in an adjacent way. There is a filtering that can be intentionally tweaked for improved performance. I am currently more interested in the adjacently relevant resources because of the "trains of thought" they generate.
* The specific train of thought today started with a presentation on KM Archetypes that is getting some recognition. Presentation: Building for the KM Archetypes at Your Company, by Taylor Paschal, May 2023.
First reaction: "This has been done before". I must be getting old because the "this has been done before" reaction is becoming a recurring theme. Note that it's not "it's been tried before and it failed", but more, "this isn't new". See the work of Nick Milton and Patrick Lambe for example.
This also sounds closely related to journey mapping and personas, user-centered design, etc...
As is often the case, my trains of thought don't end with a conclusion but rather with a question. The question today is: Should the KM approach align with the KM archetypes that define the existing organizational culture, or should the KM approach try to change the organizational culture if such culture is part of the problem?
Answer Part A: Leverage elements of the culture that support Knowledge Management.
Answer Part B: Address the more problematic elements of the culture that hinder Knowledge Management efforts once you have some buy-in and adequate support.
Easier said than done of course.
Thoughts for another day:
Wednesday, February 07, 2024
Knowledge Mapping
Today, I am presenting the first of three lunchtime sessions on knowledge mapping. Knowledge mapping can mean many different things, so I plan on presenting a variety of maps. In the process of preparing for the sessions, which are meant to be very informal conversations, I came to recognize that what I call knowledge mapping others might call information or knowledge modelling, and that within any comprehensive typology of maps, each type is best suited to a specific purpose.
Knowledge mapping also has a different meaning in Knowledge Management circles, but that's another topic. This site is called "Insight Mapping" and that's yet another niche application of the broader concept of knowledge mapping.
Coming back to today's presentation before I digress completely, the first session will focus on mind maps. The second session (next week) will be on concept maps, which are more in line with my own mental models. The third session will take a leap into areas I am still learning, reaching into ontologies and knowledge graphs.
For the first time in many years, looking back at the various maps I have collected on this site and elsewhere provided some useful material for reflection.
In this context, I have also enjoyed reading Maria Keet's new book: The What and How of Modelling Information and Knowledge: From Mind Maps to Ontologies, 2023.
Tuesday, February 06, 2024
Synthesizing and Combining
RealKM is always a good source of deeper content for Knowledge Management, and it often points to other relevant sources. Today, it prompted me to think about synthesizing. Something lights up in my brain just by thinking about that word. At the same time, I can't seem to easily disassociate it from combining. You synthesize and combine, or perhaps some versions of synthesizing include combining, or synthesizing across many different sources.
Today's reading:
Gardner, H., "Towards a taxonomy of synthesizing", January 30, 2024. RealKM, originally published on the Integration and Implementation Insights blog.
This may be more critical than a simple matter of definition. I am sensing a trend in using synthesizing and summarizing as synonyms and assuming that Generative AI can adequately synthesize. I don't know that it can do what a trained human brain does to synthesize. There may be many contexts where it saves a lot of time and effort and the Generative AI summary is perfectly adequate. And there may be times when either a more sophisticated prompt will be required or a combination of human and AI would yield the best results in synthesizing.
There's much more to read around this kernel.
Monday, February 05, 2024
GPTs that ask questions rather than provide immediate answers
Something that resonated with me on LinkedIn today:
"GenAI must ask questions, not just give answers," by Gianni Giacomelli. There are times when we can use GenAI to get an answer to a question, and then always question that answer, and there are times when we should use GenAI to prompt our own thinking by asking us questions and questioning our own assumptions.
To push this human prompting further, meaning here that the humans are being prompted to think by being asked questions, we can use multiple GenAI tools since they all behave slightly differently based on their respective designs.
And a reminder: "As a not-so-small aside these considerations should also remind us that it is also dangerous to put humans into a position of dependency on the AI machine, as this might lead to atrophying core cognitive traits -- such as symbolic and critical, logical thinking -- that people have. Designing for active interaction between humans and machines is crucial to maintaining the vitality of human intelligence," writes Gianni Giacomelli.
This all resonated with me because I created a GPT a few days ago that does nothing but ask me questions about a specific topic of interest. Essentially, it is always phrasing things as a question for me to ponder. "Have you considered Y?" "What are your personal values around X?" "Why would you do Z?" It can be designed to be a contrarian assistant that continuously probes your own thinking patterns and questions your assumptions. Useful or annoying? I say "Useful".
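For the curious, here is roughly what the same idea looks like if you build it yourself against the OpenAI Python SDK rather than through the GPT builder; the model name and the instruction text are placeholders, not my GPT's actual configuration.

```python
# Hedged sketch of a question-only assistant. Model name and instructions are placeholders.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

SYSTEM_PROMPT = (
    "You are a contrarian thinking partner. Never provide answers or advice. "
    "Respond only with one or two probing questions that challenge the user's "
    "assumptions about the topic they raise."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "I want to build a knowledge graph from my notes."},
    ],
)
print(response.choices[0].message.content)  # prints the assistant's probing question(s)
```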
Sunday, February 04, 2024
Personal Knowledge Management (Revisited)
- Open-source Personal KM Apps: For those seeking personal KM solutions, Medevel.com lists 25 open-source apps suitable for Windows, Linux, and macOS. Tools like Obsidian, Zim Wiki, and TiddlyWiki are praised for their flexibility, customization, and ability to manage complex information efficiently. (Source). Feb. 3, 2024.
The text above is a blurb generated by a GPT I've set up to scan for relevant information about Knowledge Management. The results are not perfect. For example, even though the prompt specifies to look for information from the last 30 days, it did manage to give me one item from 2022. I am not looking for perfection. It provides thinking and writing prompt ideas and keeps me up to speed.
I was glad to see TiddlyWiki mentioned. This year, I am returning to TiddlyMap and having a lot of fun with it. I am treating it as a learning experiment, trying to build my own mini Knowledge Graph based on my own data and my own simple ontology. TiddlyMap is a plugin for TiddlyWiki that includes a mapping functionality. It's only February 4th, I started adding content to the wiki about a month ago, and already, it's a challenge to remember everything I've added. Yet, with a clear organizing scheme and consistent tagging, I can visually look at subsets of content pieces and navigate based on my own mental framework. For example, one of the key components of my learning journey this year is Knowledge Graphs. Here is what the map around Knowledge Graphs looks like as of today.
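The map itself is a visual, but as a rough, text-only approximation of the mechanism (with made-up note titles and tags, not my actual wiki), this is essentially what TiddlyMap does for me: notes and tags become nodes, and I can pull out just the neighborhood around one topic.

```python
# Rough approximation of tag-based mapping: notes link to their tags, and a subset
# of the map is the neighborhood around one tag. Titles and tags are invented.
import networkx as nx

notes_to_tags = {
    "Neo4j first impressions": ["Knowledge Graphs", "Tools"],
    "Ontology basics": ["Knowledge Graphs", "Ontologies"],
    "TiddlyMap setup": ["Tools", "PKM"],
    "Insight mapping definition": ["PKM", "Knowledge Graphs"],
}

G = nx.Graph()
for note, tags in notes_to_tags.items():
    for tag in tags:
        G.add_edge(note, tag)

# The subset of the map centered on the "Knowledge Graphs" tag.
subset = nx.ego_graph(G, "Knowledge Graphs")
print(sorted(subset.nodes()))
```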
Saturday, February 03, 2024
Seeking Scalable Solutions in Systems
I started a little writing and cognition experiment which involves writing down a thought a day. We have about 6000 thoughts every day. A few are possibly worth capturing. I am not bothering with defining what's truly worth sharing -- this is a public blog after all -- but I suspect there will be some informal filtering in the absence of a clearly defined strategic intent.
Today's thought, or train of thought, started when I opened a book and read a passage about the role of digital technology in making trash collection more efficient with sensors that detect when a trash bin is full. In principle, this creates efficiencies by allowing trash collection services to be more targeted and avoid wasting time, fuel, etc., on trash bins that are empty or not full enough for pick up.
The main issue I had with this approach is that it attempts to address a problem downstream, after the trash has already been generated. It does nothing to reduce the amount of trash in the first place. The book I had opened is from 2016, so already potentially outdated. I looked up more recent technology solutions associated with waste management and found solutions that are looking more broadly and addressing a combination of issues related to waste management. Some cities have established weight-based or volume-based billing for trash pickup, which should encourage waste reduction (and potentially create illegal dumping issues). A combination of sensor technology and AI can now automatically sort trash for recycling purposes.
Solutions that address a single node within a system could be very successful in addressing the problem at that node yet fail to address the broader issue, or even displace the issue to other nodes in the system. So, part of the solution would be to look at the whole system. Given that ultimately everything is connected to everything else, what is the most appropriate scope of the waste management system? At what point is the scope too broad? What is the trash that creates the most challenges? Is it plastics? How do we reduce plastics in trash? That probably requires interventions upstream, in the production of items that result in plastic trash. How do we determine the scope of a system?
The same thought applies in international development projects. These projects typically address a small slice of a problem within a node in a complex system.
Friday, February 02, 2024
Knowledge Cast - Barbara Fillip
I was invited as a guest on the Knowledge Cast podcast in December and had a nice conversation with Enterprise Knowledge CEO Zach Wahl.
We talked about knowledge management, of course. Somehow I ended the conversation by encouraging people to think about their own personal knowledge management habits. It wasn't planned but I'm glad I ended with that as a key lesson.
KM, AI and Onboarding
My new KM+AI GPT-based news aggregator pointed me to this "guest post": How Can Knowledge Management Systems Help in Faster Onboarding Of Employees, 2/1/2024. These types of articles, potentially written by generative AI and meant to advertise related software, are not that insightful but they are useful as daily thinking/writing prompts.
There is no doubt a great opportunity to leverage KM and AI in support of onboarding. Organizational silos can get in the way if the KM function is focused on other perceived needs.
Onboarding is a critical touch point to socialize KM, starting with basics of document management, an understanding of where the key knowledge bases are located and how to access them, including how to access people's knowledge via collaboration tools -- and when to use what tool.
An initiative to integrate KM and AI into a new onboarding strategy would undoubtedly reveal weaknesses in information infrastructure, data management, etc. When you improve access to key resources for new employees, you improve access and awareness for all employees. Every weakness in existing systems that would surface as a result of a new onboarding strategy, if addressed properly, would have a huge impact on the entire organization. So, perhaps that looks like scope creep, but it needs to be done anyways. Might as well plan it in phases and get it done.
Thursday, February 01, 2024
Keeping up with AI
It would be an understatement to say that keeping up with AI developments is challenging. One of the challenges is to keep away from the daily announcements of new tools and capabilities and focus on underlying changes brought about by the availability and usability of these tools for the average person, whether within the organization or as an individual operating on their personal computer.
I've come to the conclusion that it is professional suicide to wait for employers to provide the tools so you can at least learn what it's all about. The organization will need time to safely integrate these tools within the technology ecosystem. I'm not suggesting anyone bypass their employer's guidance around use of AI, but we should all learn to use AI safely and responsibly, at home and at work.
Most people are likely dabbling with generative AI tools. It takes more than dabbling to become proficient and yield the most benefits. It will take more than dabbling to set up efficient tools within organizations. I am doing some intentional dabbling on my own but would not call myself an expert prompt engineer just yet.
One way to keep up with AI news is to use AI to simplify the task by creating a GPT designed to bring up news about AI -- or any subject for that matter. This is the equivalent of a news aggregator but instead of a list of links, it provides a neat, short summary of the content of the items, and a link (or an indication of the source). I created a GPT that, when prompted, gives me about 5 news items from the past 48 hours related to AI and Knowledge Management. It has led me to some interesting things I would never have come across based on my routine sources of news.
When I prompted ChatGPT 4 with a similar query, it gave me completely different answers. That is not surprising at all. Even asking ChatGPT the same thing twice would probably yield different answers. The biggest difference is that with the tailored GPT, I am able to identify the main sources of information to focus on, and even though it is not always pulling from those resources, the results are much more "on target" than with ChatGPT.
So, the pre-prompting of GPTs is critical in creating more precise boundaries for the data to be used by the AI in the development of an answer, while still retaining the advantages of an LLM.
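To make that concrete, here is a hypothetical (not my actual) set of pre-prompt instructions for such a GPT, showing how the boundaries -- topic, recency, volume, preferred sources, output format -- get baked in before the user ever types a query.

```python
# Hypothetical "pre-prompt" instructions for a KM+AI news-scanning GPT.
# The constraints narrow the search space before any user query arrives.
NEWS_AGGREGATOR_INSTRUCTIONS = """
You surface news at the intersection of AI and Knowledge Management.
- Return about 5 items from the past 48 hours; state the publication date for each.
- Prefer these sources when available: [list of preferred sources goes here].
- For each item, give a one-sentence summary plus a link or source name.
- If you cannot verify an item's date, say so instead of guessing.
"""
print(NEWS_AGGREGATOR_INSTRUCTIONS)
```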
There are also a lot of AI developments that won't affect us directly as employees or individuals outside of work but that we should consider as we design future projects and activities. The breadth of AI-generated data that may become available needs to be accounted for. For example, reading this article, "Using artificial intelligence, better pollution predictions are in the air," it struck me that international development projects will need to plan for enhanced capacity to utilize these new sources of data and project implementors may be able to develop more precise models and Theories of Change based on more sophisticated AI-enhanced models.
In short, we all need to go beyond dabbling in generative AI.