TL;DR: Effective knowledge management using KCS is what makes AI work best. Start with the content, then let GenAI turbocharge its delivery.
“We have a lot of Generative AI initiatives going on here. But ours is moving faster than the others because our KCS program had already created content that the AI model could easily and accurately use.”
KM Program Manager at a DB Kay client
GenAI: The Shiniest Hammer
When you have a hammer, they say, everything looks like a nail. And generative AI (GenAI) is the shiniest hammer that industry has wielded in years. Almost every business process and function has received the “let’s add AI to it!” treatment in recent months, from customer service chats to software development to advertising.
It’s easy to understand the enthusiasm. McKinsey estimates generative AI and other technologies have the potential to automate work activities that absorb 60 to 70 percent of employees’ time today[1]. Accenture says that GenAI will add $10.3 trillion in economic value to the global economy by 2038[2].
This potential is driving corporate budgets, hard. According to a recent CNBC survey of executives, artificial intelligence is the single largest line item in next year's technology budgets at 44% of companies[3]. Leaders don't want to miss important opportunities, and much of what we see in enterprises is based on executive FOMO.
Because GenAI is so new, we're not yet sure what the best use cases are…but since we're throwing so much money at it, enterprises are scattering seeds to see what takes root. In our focus area of knowledge management, there is excitement about using GenAI to write or rewrite knowledge base articles. This will drive efficiency, and perhaps quality, on the margins. But our testing suggests that today's Large Language Models (LLMs), the best-known form of GenAI, won't replace human authors any time soon. Writing articles is all about understanding what's important—about meaning-making—and we're not there yet with GenAI.
So what is the right way for AI and KM to work together?
Keeping AI Grounded in Reality
LLMs perform so well it’s sometimes hard to remember that they don’t know any facts, nor do they reason. Ultimately, they’re complex statistical models of word usage, trained on huge volumes of text, attached to a random number generator. As such, their responses will almost always seem sensible, but they may diverge from reality[4]. Given how LLMs work, some hallucinations are inevitable. If you’re using an LLM to deliver support for a critical system, this is a big problem.
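The point above can be made concrete with a toy sketch: at bottom, a language model is a learned probability distribution over next words attached to a random number generator. The bigram counts below are entirely made up for illustration; real LLMs model far richer context, but the sampling step works the same way.

```python
import random

# Hypothetical "learned" statistics: given the previous word, how often
# each candidate next word appeared in (imaginary) training text.
NEXT_WORD = {
    "the": {"server": 5, "moon": 1},
    "server": {"crashed": 3, "restarted": 2},
}

def sample_next(prev: str, rng: random.Random) -> str:
    """Pick the next word in proportion to its observed frequency."""
    candidates = NEXT_WORD[prev]
    words = list(candidates)
    weights = [candidates[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

rng = random.Random(0)
# Usually "server", occasionally "moon": always plausible-sounding,
# never guaranteed true. That gap is where hallucinations live.
print(sample_next("the", rng))
```

Notice that nothing in this process consults facts; the output is fluent because the statistics are good, not because the model knows anything.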
A promising solution to hallucinating models is a technology called Retrieval-Augmented Generation (RAG). The basic idea is to combine the best of search and GenAI. Search retrieves content relevant to the user’s question from an authoritative source, such as your knowledge base, and hands it off to GenAI to formulate a response. Basing responses on vetted content rather than a statistical model greatly reduces the risk of wrong answers.
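The retrieve-then-generate flow can be sketched in a few lines. Everything here is a hypothetical stand-in: the two-article knowledge base, the naive keyword-overlap scoring, and the prompt template. A production RAG system would use a vector index for retrieval and send the assembled prompt to an LLM API, but the shape of the pipeline is the same.

```python
# Minimal sketch of Retrieval-Augmented Generation (RAG): retrieve
# relevant articles from an authoritative source, then ground the
# model's answer in them.

KNOWLEDGE_BASE = [
    {"id": "KB-101", "title": "Reset a forgotten password",
     "body": "Use the Forgot password link on the login page to reset."},
    {"id": "KB-202", "title": "Configure SSO with SAML",
     "body": "Admins can enable SAML single sign-on under Settings."},
]

def retrieve(question, kb, top_k=1):
    """Rank articles by naive keyword overlap with the question."""
    q_words = set(question.lower().split())
    def score(article):
        a_words = set((article["title"] + " " + article["body"]).lower().split())
        return len(q_words & a_words)
    return sorted(kb, key=score, reverse=True)[:top_k]

def build_prompt(question, articles):
    """Ground the model: instruct it to answer only from retrieved content."""
    context = "\n\n".join(
        f"[{a['id']}] {a['title']}\n{a['body']}" for a in articles
    )
    return (
        "Answer the question using ONLY the articles below. "
        "If they don't contain the answer, say so.\n\n"
        f"{context}\n\nQuestion: {question}\nAnswer:"
    )

question = "How do I reset my password"
articles = retrieve(question, KNOWLEDGE_BASE)
prompt = build_prompt(question, articles)
# `prompt` would now go to the LLM; basing the response on KB-101
# rather than the model's statistics is what curbs hallucination.
print(articles[0]["id"])
```

The key design point is the instruction in `build_prompt`: the model is asked to stay within the retrieved articles, so the quality of its answers is bounded by the quality of the knowledge base — which is exactly where KCS comes in.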
KCS for AI
Many of our clients plan on training their corporate LLM with their own content and using it to drive RAG as well. This is great…but there’s a catch. From the earliest days of computing, programmers have used the phrase “garbage in, garbage out.” It’s an unsubtle reminder that the most sophisticated AI model is only as good as the data it’s trained on, and RAG is only as good as the knowledge it can retrieve.
We routinely see that KCS helps our customers create a knowledge base that’s consistent, accurate, and represents the collective wisdom of your team of experts, all of which improve knowledge grounding!
Salesforce, 10 Ways to Prep Your Knowledge Base for AI Grounding[5]
All this is to say that effective knowledge management is a prerequisite for enterprises that want to deliver GPT-like experiences that wow their customers. The best practice of Knowledge-Centered Service (KCS) delivers content that is especially well suited to GenAI and RAG. KCS articles are:
- About one specific topic, so algorithms don’t have to figure out what sections are relevant to the user’s prompt…the whole article is!
- Continuously kept up to date, so answers can stay current and accurate without retraining the entire model
- Written in the user’s context, so algorithms don’t have to translate from technical jargon
The bottom line? If you’re going to do GenAI—and you are—give yourself the best chance of success by implementing KCS, too.
[1] McKinsey Digital. The economic potential of generative AI: The next productivity frontier, June 2023
[2] Accenture. Work, workforce, workers: Reinvented in the age of generative AI, 2024
[3] Retrieved from https://www.cnbc.com/2024/06/20/ai-consumes-tech-spending-budget-with-most-going-to-worker-adoption.html
[4] I found this delightful description on the Amazon Web Services marketing website: “You can think of [a] Large Language Model as an over-enthusiastic new employee who refuses to stay informed with current events but will always answer every question with absolute confidence.”
[5] Retrieved from https://www.salesforce.com/blog/ai-grounding/