Thursday, March 28, 2024
About a month ago, I created a custom GPT in ChatGPT (built on GPT-4). It's easy to create but requires some fine-tuning. I used Ross Dawson's approach detailed here: Creating custom GPTs for news and Information Scanning, and I adjusted it to suit my own purpose. The results have been mixed, but I'm reasonably happy with what I got today.
Wednesday, March 27, 2024
Critical Cognitive Capabilities in the Age of AI
In an era increasingly dominated by artificial intelligence (AI), our cognitive landscape is undergoing a transformation as significant as any in our history. This shift demands not just adaptation but a deliberate enhancement of our cognitive capabilities. Amid this technological evolution, the cultivation of critical thinking, emotional intelligence, creativity, and adaptive learning emerges as essential to thriving.
The story of Theuth and his invention of writing, as recounted by Socrates (see the previous post), provides a profound starting point for this discussion. Just as the introduction of writing raised concerns about memory and wisdom, today's rapid advancements in AI and the Internet pose new challenges and opportunities for human cognition. In this digital age, the capacity for critical thinking has never been more important. As we navigate vast oceans of information, discerning fact from fiction and valuable data from noise requires a keen analytical mind. This skill ensures we remain effective decision-makers in both personal and professional realms, notwithstanding the deluge of AI-generated content and analysis.
Emotional intelligence stands out as a uniquely human attribute that AI is far from replicating. Our ability to understand, empathize, and interact with others is paramount, especially as AI technologies handle more cognitive tasks. Developing emotional intelligence helps us navigate the complexities of human relationships and teamwork, fostering environments where collaboration between humans and AI tools is productive and innovative.
Creativity is another domain where humans can excel beyond AI's capabilities. While AI can generate new patterns and ideas based on existing data, the human capacity to think abstractly, imagine the unimaginable, and connect disparate concepts in novel ways remains unmatched. Encouraging creativity in education and the workplace ensures that as AI takes over more routine or analytical tasks, humans will continue to lead in innovation, design, and artistic expression.
Lastly, the concept of adaptive learning is crucial. Just as AI systems learn and evolve based on new data, so too must we. However, our learning is not just about absorbing information; it's about adapting to new ways of thinking, new technologies, and changing societal norms. This ability to learn and relearn throughout life is what will keep us relevant and resilient in the face of rapid technological changes.
As we consider the future in this age of AI, our focus should not only be on developing technical skills to use and manage AI systems. Instead, we must emphasize the uniquely human capabilities that will complement AI's growth. By nurturing critical thinking, emotional intelligence, creativity, and adaptive learning, we prepare ourselves not just to coexist with AI but to lead a future where technology enhances our human experience, not diminishes it.
Of related interest:
- Gianni Giacomelli's "With AI, learning and reskilling ≠ training", blog post of 3/25/2024.
Tuesday, March 26, 2024
The Echoes of Theuth: From Writing to the Internet and AI
In Plato's "Phaedrus," Socrates recounts the tale of Theuth, the Egyptian god of writing, presenting his invention to King Thamus. This ancient narrative, exploring the invention's impact on memory and wisdom, mirrors the last couple of decades' discourse on the emergence of the Internet and, now, artificial intelligence. This post focuses primarily on the cognitive implications; there are, of course, broader concerns the digital age and AI have introduced.
Revisiting Theuth in the Age of Information and AI
Theuth's claim that writing would enhance memory and wisdom was met with skepticism by Thamus, who argued it would instead weaken memory and give only an illusion of wisdom. This cautionary perspective finds its echo in the modern era, first with the Internet, and now more profoundly with AI. Both technologies, while distinct, share a common thread in their transformative impact on how we acquire, process, and value knowledge.
The Internet: A Precursor to AI's Cognitive Challenge
Before AI became a household term, the Internet had already begun reshaping our cognitive landscape. Dubbed an external "hard drive" for our collective memory, it introduced the "Google effect," where the ease of accessing information led to a potential decline in memory retention and effort in learning (see Nicholas Carr's book, The Shallows). The vast, accessible sea of data promised knowledge but often delivered surface-level engagement with complex subjects, mirroring Socrates' concerns about the written word.
AI and Beyond: A Continuation of Digital Age Dilemmas
AI amplifies these concerns, offering unparalleled access to information and automating tasks with efficiency but at the potential cost of diminishing our cognitive faculties. The "illusion of wisdom," where individuals may overestimate their understanding due to the breadth of accessible information, becomes an even greater risk. As AI systems take on more roles that require analysis, decision-making, and even creativity, the question of what it means to truly know or understand something becomes increasingly pertinent.
Acknowledging the Spectrum of Concerns
While I focus here on cognitive effects, I also want to recognize that the challenges posed by AI and the Internet are multifaceted. Ethical dilemmas, privacy breaches, algorithmic biases, and the digital divide are significant issues that warrant attention. The societal impact of these technologies stretches beyond individual cognitive abilities, affecting our collective moral and social frameworks.
Charting a Thoughtful Path Forward
In navigating the future of AI and the digital landscape, a balanced, thoughtful approach is essential. By critically assessing the benefits and potential pitfalls, especially in how these technologies influence human cognition, we honor the Socratic tradition of deep questioning. This not only involves scrutinizing AI's capabilities and impacts but also reflecting on how the Internet has set the stage for today's digital challenges.
As we continue to integrate AI and digital technologies into our lives, let's maintain a critical eye towards their impact on our cognitive abilities and society. The story of Theuth, extending through the age of the Internet to the dawn of AI, serves as a valuable framework for understanding these challenges, encouraging us to ensure that technology enhances, rather than diminishes, our human experience.
----
I used AI to research and write this post. To what extent did that contribute to a potential decline of my cognitive capabilities -- regardless of inevitable age-related decline? To what extent did it enhance my knowledge, understanding, and cognitive capabilities? How would I know? What can I do to prevent cognitive decline related to technology use (or overuse) while leveraging technology to enhance my access to and use of knowledge?
Friday, March 22, 2024
The Evolution of Content Management: From Static Documents to Dynamic Collaboration
In the digital age, content management has become a cornerstone of knowledge work, enabling us to organize, access, and share information like never before. My journey through various tools and concepts in content management has illuminated a fundamental shift: from managing static documents to engaging in dynamic, collaborative content creation. This post explores this evolution through my experiences with TiddlyMap (starting almost 10 years ago), Learning Management Systems (LMS), Knowledge Graphs, and Microsoft Loop.
Discovering Transclusion in TiddlyMap
My exploration began with TiddlyMap, a tool that blurs the lines between notetaking and concept mapping. It's where I first encountered the concept of transclusion. This feature allows content from one Tiddler (note) to be included in another seamlessly, ensuring that updates are reflected universally. The result? A single source of truth within my personal knowledge base, facilitating a modular organization of content that is both efficient and consistent. (See Transclusion in WikiText)
Key Takeaway: Transclusion in TiddlyMap showcased the power of interconnected content, highlighting the importance of maintaining consistency and efficiency in personal knowledge management.
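To make the transclusion idea concrete, here is a minimal sketch, assuming a toy set of notes with hypothetical titles rather than TiddlyWiki's actual implementation. One note holds the definitive text, other notes reference it with double curly braces (echoing WikiText's transclusion syntax), and every reference resolves to the current version at render time.

```python
# Minimal illustration of transclusion: notes reference a single source note,
# and rendering resolves the reference at read time, so updates propagate everywhere.
import re

notes = {
    "KM Definition": "Knowledge management is about connecting people, processes, and content.",
    "Project Kickoff": "Shared our working definition: {{KM Definition}}",
    "Onboarding Guide": "New team members should start here: {{KM Definition}}",
}

def render(title: str) -> str:
    """Replace every {{Other Note}} reference with that note's current text."""
    return re.sub(r"\{\{(.+?)\}\}", lambda m: notes[m.group(1)], notes[title])

print(render("Project Kickoff"))

# Update the single source of truth; both referencing notes reflect the change.
notes["KM Definition"] = "KM connects people, processes, and content to enable reuse."
print(render("Onboarding Guide"))
```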
Revisiting Reusable Objects in Learning Management Systems
My journey took me back to the concept of reusable objects in LMS, something I had encountered earlier. These digital resources can be utilized across various courses or modules, embodying the principle of modularity and reuse. This approach not only saves time and resources but also ensures consistency across the educational spectrum.
Key Takeaway: The practice of creating and using reusable objects in education underscores the need for content that is both flexible and adaptable, catering to diverse learning contexts and styles.
Connecting the Dots with Componentized Content
A recent webinar on Knowledge Graphs brought the term "componentized content" into sharper focus for me. This concept, akin to reusable objects, emphasizes breaking down content into manageable, standalone components that can be dynamically assembled. It resonated with my experiences, highlighting a broader trend toward agile and responsive content management systems that can evolve with our needs. (See Taking Content Personalization to the Next Level: Graphs and Componentized Content Management.)
Key Takeaway: Componentized content is at the heart of modern content management, reflecting a shift towards more agile, responsive, and interconnected systems that can support complex information ecosystems.
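As a rough illustration of the concept, here's a sketch, with made-up component names and a deliberately simple tagging scheme, of how standalone components carrying their own metadata can be assembled on demand into different deliverables rather than living inside one static document.

```python
# Sketch: standalone content components with metadata, assembled dynamically
# into different outputs instead of being locked inside a single static document.
components = {
    "intro-km":       {"text": "Knowledge management connects people and content.", "tags": ["km", "overview"]},
    "howto-taxonomy": {"text": "Start a taxonomy from the terms your users already search for.", "tags": ["taxonomy", "howto"]},
    "case-graph":     {"text": "A pilot knowledge graph linked project documents by topic.", "tags": ["graph", "case-study"]},
}

def assemble(tags_wanted: set) -> str:
    """Pull every component whose tags overlap the request and join them into one deliverable."""
    selected = [c["text"] for c in components.values() if tags_wanted & set(c["tags"])]
    return "\n\n".join(selected)

# The same components can feed a beginner overview or a practitioner how-to guide.
print(assemble({"overview", "case-study"}))
print(assemble({"howto"}))
```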
Experimenting with Microsoft Loop
My exploration culminated with Microsoft Loop, a tool that epitomizes the modern ethos of collaborative work. Loop's components are modular pieces of content that teams can collaboratively edit in real-time, streamlining the way we work together. This real-time collaboration, without duplicating content, signals a new era of efficiency and connectedness in teamwork. (See Get to Know Loop Components)
Key Takeaway: Microsoft Loop represents the future of collaborative work, where dynamic, component-based content and real-time collaboration drive productivity and innovation.
Conclusion:
We're moving from static, siloed documents to a world where content is dynamic, interconnected, and collaborative. This evolution is not just technological but philosophical, changing how we think about knowledge, learning, and work.
These tools and concepts have reshaped my approach to content management, pushing me towards more flexible, efficient, and collaborative methods. They highlight a broader shift in our digital landscape, one that values modularity, reusability, and collaboration above all. While my personal knowledge management tools are often a playground for learning, the biggest value may come from collaboration and, ultimately, from the combination of people and tools to achieve augmented collective intelligence (ACI).
Final Thought:
Testing new tools is always fun (to me). They offer a glimpse into the future of content management—a future where knowledge is more accessible, collaboration is seamless, and learning is boundless. At the same time, it is worth reminding ourselves -- repeatedly -- that the tools are meant to enhance human capabilities. Some will be more effective at enhancing individual capabilities, like TiddlyMap, while others are designed for collaboration and enhanced team or group capabilities.
Next I have to think about the implications of this evolution for Knowledge Management and how we might need to rethink our knowledge management models and approach to knowledge assets.
Wednesday, March 20, 2024
Start with the problem(s) and prioritize
I recently read an interesting piece from Harvard Business Review titled "Find the AI Approach That Fits the Problem You're Trying to Solve." The essence of the article resonates deeply with my own beliefs, particularly around the notion that effective problem-solving begins with asking the right questions. Statements such as "without the right questions, you'll be starting your journey in the wrong place" and "Start with the problem, not the technology" echo a seemingly obvious yet profoundly complex reality.
This concept, while straightforward, is far from simplistic. In the realm of international development, organizations are confronted with a labyrinth of challenges, far beyond the scope of a singular issue. It's not just about identifying a problem and pairing it with a technological solution. There lies a critical, yet often overlooked step: prioritization.
Consider the diverse array of organizations striving to address global development issues. The challenge isn't merely in selecting a single problem but in discerning which lever to pull for maximal impact. Should technology then be primarily leveraged to navigate these strategic decisions, allocating resources more effectively?
While funding agencies may gravitate towards these macro questions, implementing organizations face more pragmatic concerns. Their focus often shifts towards securing necessary funding, leveraging technology to streamline grant seeking and proposal writing processes. This delineation underscores a fundamental principle: the application of technology, particularly AI, must be tailored not just to the problem at hand, but to the scale and scope of the organization's mission and resources.
Funding agencies are not going to fund implementers to improve their proposal development mechanisms, but they could and will fund efforts to leverage technology (including AI) to address global challenges. To what extent will that funding go to macro questions around levers for maximum impact?
Monday, March 18, 2024
KM and AI in the Workflow
We (KM professionals) often talk about embedding KM processes into the workflow so that KM isn't an additional burden on top of other processes. And now we see a new push to embed AI in workflows. Beyond using a GenAI interface like ChatGPT, GenAI applications can be fully integrated within the tools employees use in their daily work. Microsoft's M365 Copilot is an example of that integration. I also just saw how this integration works in Coda.
With all the excitement over new GenAI capabilities and the bells and whistles of potential integration, let's pause to figure out how best to combine them with the human elements of KM, the ones that leverage the best of human intelligence and critical thinking. If we are going to dissect a process or set of processes in a workflow to integrate AI, we might as well spend some time thinking through where and how human intelligence will add value. Let's not apply AI just to save time and increase productivity. Let's revisit our workflows and integrate both AI and KM to give our brains more time to think.
How can we both speed up (boring, tedious tasks) and slow down to think within the same workflow?
By carefully designing workflows and fostering a culture that values both AI efficiency AND human insight, organizations can create a powerful synergy. A balanced approach would ultimately lead to more innovative and thoughtful outcomes.
Saturday, March 16, 2024
From Montaigne's "Essais" to Knowledge Graphs
Pretty much everything leads to a thought related to knowledge graphs these days. Here is today's train of thought:
I was considering reacquainting myself with Montaigne's essays for a number of reasons:
- The style and how it relates (or not) to the blogging of today
- The humanism/humanistic aspect of his writing and how it relates (or not) to today's conversations around humans and AI.
- His knowledge skepticism and introspection, questioning his own knowledge and asking "Que sais-je?" (What do I know?)
Digression Warning!
Montaigne was one of the authors I had to study deeply in high school (French high school) to prepare for one of the end-of-high-school exams. In fact, the French literature exam was not at the end of the last year of high school but at the end of the second-to-last year. It involved very intense literary text analysis (for a 16-year-old) and an oral exam that required both presenting a specific text and answering an examiner's questions about it. You had to prepare a number of texts, come to the oral exam with a list, and the examiner would pick one and start drilling you. I remember that the teacher who prepared us for this exam was very demanding and therefore trained us very thoroughly. I bet that if by some miracle my list of prepared texts were put in front of me, I would suddenly remember a lot about each of them. Well, no great miracle needed. I found all my high school exams in the basement -- where all manner of interesting knowledge artifacts can be found. I also have some of my handwritten (cursive), in-class philosophy exam essays, but I digress even within the digression, a sure sign that this should be a separate post.
A couple of years later, I would find myself in English 101 in college in the US, totally lost trying to analyze Shakespeare and other English language literature not only because English was still challenging for me, but because the type of text analysis expected of students seemed so different. I didn't "get" the assignment and struggled in English 101. Perhaps this was an early lesson in how language, literature, and culture are so interconnected and part of what makes us so uniquely human.
End of Digression
I went down to my basement book collection, and while I don't seem to have any Montaigne on hand, I did find a "Dictionnaire de Citations Françaises," 1978 edition. Luckily, quotes from long-deceased authors are reliably static, so this isn't a book that ages with time. In fact, it's probably more accurate than most web-based collections of quotes. I wanted to dig into some Montaigne quotes.
There are multiple pages of Montaigne quotes, all from his "Essais".
It's a heavy book, like most physical dictionaries, with narrow pages. It's also a beautiful example of organized knowledge, with multiple indexes and numbered references. I can search by topic, by author, or by historical period. So, immediately, I think... this needs to be turned into a knowledge graph. I want to be able to visually SEE how these 16,460 quotations are connected. Would it tell me something I can't possibly see by reading the dictionary? I would think so. Perhaps I should try on a small scale.
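Here's what a small-scale version of that experiment might look like: a sketch using the networkx library, with a few well-known quotations standing in for the dictionary's entries and themes assigned by hand for illustration, that links each quote to its author and themes so the cross-connections can be queried or visualized.

```python
# Small-scale sketch: a handful of quotation entries turned into a graph
# linking quotes to authors and themes, so cross-connections become visible.
import networkx as nx

entries = [
    {"quote": "Que sais-je?", "author": "Montaigne", "themes": ["knowledge", "doubt"]},
    {"quote": "Je pense, donc je suis.", "author": "Descartes", "themes": ["knowledge", "existence"]},
    {"quote": "Le cœur a ses raisons que la raison ne connaît point.", "author": "Pascal", "themes": ["reason", "emotion"]},
]

G = nx.Graph()
for e in entries:
    G.add_node(e["quote"], kind="quote")
    G.add_node(e["author"], kind="author")
    G.add_edge(e["quote"], e["author"], relation="written_by")
    for theme in e["themes"]:
        G.add_node(theme, kind="theme")
        G.add_edge(e["quote"], theme, relation="about")

# Which other quotes share a theme with Montaigne's "Que sais-je?"
for theme in [n for n in G.neighbors("Que sais-je?") if G.nodes[n]["kind"] == "theme"]:
    print(theme, "->", [q for q in G.neighbors(theme) if G.nodes[q]["kind"] == "quote"])
```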
That being said, focusing on individual quotes extracted from essays could really fail to convey the context and full breadth of meaning and nuance that you would get from reading the full essays. If I were asked to explain the meaning of a quote, wouldn't I want to know what was written before and after it? So, while a knowledge graph based on individual quotes might be interesting as a small-scale experiment, I can already see how it would have significant flaws unless it could be paired with access to the full text for sensemaking purposes.
Monday, March 11, 2024
Two Layers of Knowledge Architecture
I've come across two different approaches or definitions of "knowledge architecture", and by extension, "knowledge architect". I'm not sure whether talking about them as two layers is accurate, but these two approaches are not mutually exclusive. In fact, they complement each other.
#1: Strategic Framework: Knowledge architecture as the framework for knowledge management, which could be the foundation for a knowledge management strategy and would include the traditional pillars of people, process, technology, and governance. This is a domain much more closely associated with organization development and learning, integrating elements to leverage both tacit and explicit knowledge.
#2: Organizational Schema: Knowledge architecture as the rules and schemas for organizing knowledge, which focuses on explicit knowledge and/or data (structured and unstructured). This is a domain much more closely associated with information management and now with AI, big data, etc. It's the domain of taxonomies, ontologies, and knowledge graphs.
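To make that second layer more tangible, here's a toy example, using the rdflib library and made-up classes, of the kind of rules and schemas it covers: a tiny class hierarchy plus typed relationships, the raw material a knowledge graph is built from.

```python
# Toy example of the "organizational schema" layer: a small class hierarchy
# and typed relationships expressed as RDF triples, on which a knowledge graph can build.
from rdflib import Graph, Literal, Namespace, RDF, RDFS

EX = Namespace("http://example.org/km/")  # hypothetical namespace for illustration
g = Graph()
g.bind("ex", EX)

# Schema: lessons learned and project reports are both kinds of knowledge assets.
g.add((EX.KnowledgeAsset, RDF.type, RDFS.Class))
g.add((EX.LessonLearned, RDFS.subClassOf, EX.KnowledgeAsset))
g.add((EX.ProjectReport, RDFS.subClassOf, EX.KnowledgeAsset))

# Instances: one lesson explicitly derived from one report.
g.add((EX.report42, RDF.type, EX.ProjectReport))
g.add((EX.lesson7, RDF.type, EX.LessonLearned))
g.add((EX.lesson7, EX.derivedFrom, EX.report42))
g.add((EX.lesson7, RDFS.label, Literal("Engage local partners early")))

print(g.serialize(format="turtle"))
```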
Thursday, March 07, 2024
Thoughts around Leveraging Credibility Perception Theory
This is another early morning (useful) rabbit hole which started with a post on LinkedIn about a recently published paper that "examines how individuals perceive the credibility of content originating from human authors versus content generated by large language models, like the GPT language model that powers ChatGPT, in different user interface versions." (See "Do You Trust ChatGPT" for the original paper)
I was intrigued by the theoretical foundations for this type of research rather than the results of the specific study, so I went looking for information about credibility perception theory. Obviously, I'm not going to catch up on all the relevant theoretical perspectives in a couple of hours of early morning exploration, but this initial dive generated some questions.
First round of questions: Is this issue with credibility perception specific to technology-generated or technology-mediated information and our digital world? How much of it is as old as humans applying, or failing to apply, critical thinking? How much of it is based on cognitive biases and the complexities of the human brain that exist regardless of technology's impact? Conversely, how much of it is shaped by technology, and especially by the latest technologies, which can be so persuasive at times?
Second round of questions: Are there variations or nuances in how credibility perception theory applies to textual information vs. visual information? I was thinking about Power BI dashboards and other types of quantitative data visualizations that people love. How would this apply to concept maps and then, more broadly, to knowledge graphs?
Third round of questions: Based on the answers to all of the above, how would the development of an ontology that serves as the foundation for a knowledge graph be impacted by these insights around credibility and trust? In other words, how could we leverage insights from credibility perception theory to develop and apply good practices in the development of ontologies and associated knowledge graphs?
Sunday, March 03, 2024
From Knowledge Cafes to Conversational Swarm Intelligence
The Power of Conversational Swarm Intelligence: Learning from Nature
Humans have always been good at writing and storing information to communicate. But when we look at nature, we see that some animals are experts at communicating and working together in real time. Birds flying together in a flock, fish moving as one in a school, and bees making decisions as a swarm show us incredible examples of teamwork. Inspired by these natural wonders, there's a new technology idea on the horizon called conversational swarm intelligence.
What is Conversational Swarm Intelligence?
Imagine combining the teamwork of birds, fish, and bees with our latest technology. That's what conversational swarm intelligence is about. Louis Rosenberg talked about this on the Amplifying Cognition podcast. It's about using technology to help people talk and make decisions together in real-time, just like some animals do in nature.
How Does It Work in a Knowledge Cafe?
Think about a big room where 100-200 people come together to chat in small groups of about 5-7 people. They discuss a topic, then mix up and join new groups to share what they learned. This mixing and sharing help everyone get a lot of different ideas and answers to the same question.
Now, add an AI assistant to each group. This AI listens, records, and analyzes what everyone says and shares insights from one group to another in real time. This means everyone gets to hear and think about a wide range of ideas without having to remember and retell them. It makes the discussion richer and helps find the best answers faster.
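A very rough sketch of that relay mechanism, purely illustrative and not based on any actual conversational swarm system, might look like this: each group's surrogate condenses the local discussion (a real system would use an AI summarizer; here a placeholder just picks the first point) and passes the summary into every other group's conversation.

```python
# Rough illustration of the relay idea behind conversational swarm intelligence:
# each small group's "surrogate" condenses its local discussion and injects
# a short summary into every other group's conversation.
from collections import defaultdict

group_discussions = {
    "Group A": ["Remote work improves focus.", "But onboarding new hires suffers."],
    "Group B": ["Hybrid schedules keep teams connected.", "Meeting overload is the real issue."],
    "Group C": ["Asynchronous updates reduce meetings.", "Documentation quality becomes critical."],
}

def summarize(messages):
    # Placeholder for an AI summarizer: here we simply take the first point raised.
    return messages[0]

# Each surrogate shares its group's summary with all the other groups in real time.
inbox = defaultdict(list)
for group, messages in group_discussions.items():
    summary = summarize(messages)
    for other in group_discussions:
        if other != group:
            inbox[other].append(f"From {group}: {summary}")

for group, received in inbox.items():
    print(group, "hears:", received)
```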
What's next?
Right now, these experiments use text chats, but imagine if this could work with spoken conversations. Someday, there might be robots sitting with us, listening, and offering insights from other groups instantly. What are the implications? How could this be used most effectively in support of human decision-making? What are some possible risks? How would this change the nature of conversations and more broadly, communications?