Chatbots are not the future of Technical Communication

By Mark Baker | 2018/01/30

And suddenly every tech comm and content strategy conference seems to be about getting your content ready for chatbots. Makes sense if you are a conference organizer. Chatbots are sexy and sex sells, even if the definition of sexy is a grey box with a speaker sitting on the counter.

But chatbots are not the future of technical communication. Here’s why:

Chatbots are stupid

No, I don’t mean that they are a stupid idea. I mean they are actually stupid. As in they are not very bright. As Will Knight writes in Tougher Turing Test Exposes Chatbots’ Stupidity in the MIT Technology Review, current AI does barely better than chance in deciphering the ambiguity in a sentence like: “The city councilmen refused the demonstrators a permit because they feared violence.” (Who feared the violence?) Humans do this so easily we rarely even notice that the ambiguity exists. AIs can’t.

As Brian Bergstein points out in The Great AI Paradox (also MIT Technology Review), an AI that is playing Go has no idea that it is playing Go. It is just analysing a statistical dataset. As Bergstein writes:

Patrick Winston, a professor of AI and computer science at MIT, says it would be more helpful to describe the developments of the past few years as having occurred in “computational statistics” rather than in AI. One of the leading researchers in the field, Yann LeCun, Facebook’s director of AI, said at a Future of Work conference at MIT in November that machines are far from having “the essence of intelligence.” That includes the ability to understand the physical world well enough to make predictions about basic aspects of it—to observe one thing and then use background knowledge to figure out what other things must also be true. Another way of saying this is that machines don’t have common sense.

Chatbots, in other words, may be great at ordering stuff from Amazon or telling you to put a coat on because the forecast says it is going to rain, but they are nowhere near ready to help you fix your technical problem.

But even if they were a lot smarter than they are, chatbots would still not be the future of technical communication.

Chatbots are a CLI

Chatbots are a command line interface. You ask them something. They reply (often stupidly). That is what a command line interface does. In fact, we have had chatbots you can type at for a long time. ELIZA, a chatbot created in the 1960s at the MIT Artificial Intelligence Lab, could act as a Rogerian psychotherapist. Uttering comforting platitudes to the broken hearted is not the height of intelligence. Any sympathetic school child can do it. Solving complex technical problems is much more complicated because the problem area is much more diverse. Putting a voice interface on the AI isn’t going to change that. Command line interfaces, whether visual or verbal, still have the same problem they have always had: they don’t support discovery and exploration.
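
To see how little machinery a chat interface actually requires, here is a minimal ELIZA-style responder, sketched in Python. The patterns and canned replies are invented for illustration; the real ELIZA used a much richer script of decomposition and reassembly rules:

import re

# A tiny ELIZA-style responder: an ordered list of pattern/response pairs.
# Matching is pure text substitution; nothing is understood.
RULES = [
    (re.compile(r"\bI am (.*)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.*)", re.I), "Tell me more about feeling {0}."),
    (re.compile(r"\bmy (\w+)", re.I), "Your {0} seems important to you."),
]
DEFAULT = "Please go on."

def respond(utterance):
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return DEFAULT

print(respond("I am heartbroken"))   # Why do you say you are heartbroken?
print(respond("It rained all day"))  # Please go on.

The comforting platitude comes from string substitution, not comprehension, which is rather the point.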

Since the 70s we have had text adventure games like Colossal Cave Adventure, where you explore your environment through conversations like this:

YOU ARE STANDING AT THE END OF A ROAD BEFORE A SMALL BRICK BUILDING.
AROUND YOU IS A FOREST. A SMALL STREAM FLOWS OUT OF THE BUILDING AND
DOWN A GULLY.

go south

YOU ARE IN A VALLEY IN THE FOREST BESIDE A STREAM TUMBLING ALONG A
ROCKY BED.

Would you prefer this interaction over a video game that actually shows you the forest and the stream tumbling along a rocky bed? (Or, you know, going outside and actually seeing a forest and a stream tumbling along a rocky bed?) It may have a particular kind of retro charm, but for practical purposes it is incredibly clumsy and laborious.

And the problem here is not one of how smart the AI is. It is the clumsy, time-consuming, non-discoverable, hard-to-explore nature of the interface that is the heart of the problem. Making the AI smarter isn’t going to make the interface any more appealing.

Now admittedly, this is not a lot different from talking to technical support on the phone. The difference is that technical support is still (mostly) staffed by human beings who have that common sense, that “ability to understand the physical world well enough to make predictions about basic aspects of it—to observe one thing and then use background knowledge to figure out what other things must also be true” that AIs just don’t have yet, and that some speculate they may never have.

But the thing is, talking to tech support is not exactly technical communication nirvana. In fact, tales of tech support are predominantly tales of frustration on both sides of the phone. And what does tech support do if you ask them to help you with a truly complex problem? Chances are that they send you documentation and ask you to call back if you get stuck. And what do they do when they come across a common problem? They write a knowledge base article about it.

Some of that, of course, is about saving money, as tech support people have to be paid. With AIs, you could theoretically stay on the line for hours at minimal cost to the provider. But how many people are going to want to stay on the line for hours with a chatbot? Only the lonely. Certainly not those in a hurry to get a job done.

But there are lots of problems that you would not even think of trying to solve in a conversation with tech support. Sometimes if you are going to ask an expert, you need the expert on site where they can see your work and watch what you are doing. Which brings us to the next problem with chatbots.

Chatbots are blind

Another limit of the chatbot interface is revealed by the text adventure interface. Not only can the chatbot not show you anything, it can’t see you do anything either. You have to tell it what you do.

go south

An expert can look at your work and tell you what is wrong. A coach can watch you execute a maneuver and critique your form. A chatbot can’t see anything. Even if it could, the ability of the AI to make sense of what it is seeing and put it in the context of the user’s task just isn’t there yet. So you have to describe your problem to the chatbot. You have to turn your problem into words.

But turning your problem into words is difficult. It requires you to understand and articulate the problem, and by the time you can understand a problem well enough to articulate it, you are well on your way to solving it. The primary virtue of an expert is that they can look at what you are doing and spot the flaw that you cannot see. You know what result you are not getting. The expert spots the one thing in the hundreds of parts and pieces you have assembled that is on backwards.

A chatbot can’t watch you work. It can’t look at the work you have done. And you can’t tell it what the problem is because you cannot see the problem, only the result of the problem.

But even if your chatbot grew up and became a robot with the vision and the common sense and the specialized knowledge and experience to watch you, to examine your work, and to spot the flaw in it the way a human expert would, it still would not be the future of technical communication.

Why not? Setting aside the fact that if you had a robot with those capabilities you really would not need to learn to do the task yourself, the real problem is that there is only so much that experts can do for us. In the end, learning is about rearranging our own mental furniture, finding our way through the thickets of our own minds. The expert can help us enormously at certain key junctures in that process, but most of it we simply have to do for ourselves. Content can certainly be a huge help to us through that process, but it has to be the type of content that is most amenable to the trial and error, the exploration, and the intuitive leaps of recognition and synthesis that are fundamental to that journey, and that is text.

Chatbots don’t support wayfinding

Most of learning is wayfinding. The iterative process of refining our mental models until they fit the world as we are discovering it and trying to manipulate it requires us to range broadly and often erratically across a large body of information. John Carroll’s work that led to the publication of The Nurnberg Funnel showed that different people traverse texts in different ways driven by the particulars of their individual task and background.

No simple, comprehensive, logical treatment of the paradox of sense-making is possible. The tension between personally meaningful interaction and guidance by a structured curriculum entails a priori limitations on how much we can ever accelerate learning.

Users must forage for information as they forage for insight while attempting to hack through the brambles of their own preconceptions. By far the easiest medium to do this foraging in is text. Nothing else lets you speed up and slow down, go straight or turn left with anything like the same ease. There are, to be sure, ancillary media that can play a valuable role in our foraging: maps, graphics, animations, etc. But it is text that leads us to them, and text that leads us on.

Text, particularly hypertext, excels in these areas. Voice interfaces suck at this. The eye can pick a relevant phrase out of a scanned text at considerable speed. A speeded-up audio stream quickly becomes a high-pitched babble. And the eye can speed up and slow down of its own accord, whereas the hand or the voice has to give commands to the audio stream. The eye can flip through a stack of paper or a list of search results in seconds. Navigating the same volume of audio material would take far longer and require far more interactions. And where the eye can command the hand to follow a link almost without thinking, embedding hypertext in voice just isn’t feasible. Audio is a fixed-pace, linear medium, yet the task of foraging for information and understanding is one that demands the ability to change pace and direction at will. Again, it is not the intelligence of the AI that is crucial here, but the nature of the interface and the nature of the task.

Even AI can’t build a Nurnberg Funnel

But what if the AI were to get so smart that it could avoid the need for you to do all this discovery? That certainly seems to be the hope that some of the current boosters of AI in content strategy are hinting at: the ultimate expression of delivering the right content in the right format to the right person at the right time. But that supposes that (all other issues being solved) we could know, with perfect certainty, what content is needed at a particular moment in the user’s journey to understanding. And we can never know that, because each user’s journey is unique and it is occurring largely in the privacy of their own skull.

Much of the wayfinding you have to do to learn something is navigating your own prejudices and presumptions, your own guesses that you think are facts. There is always some destruction to do in learning before construction can begin. Indeed, the destruction is the hard part, and it is all in your head and it is all unique to you. No human teacher and no AI is privy to that landscape. This is one of the most fundamental of human facts. We are born alone. We die alone. And we learn alone.

The notion seems to be that AI will remove the need for wayfinding by always being one step ahead of you. This is something no human teacher has ever achieved consistently, and there is a very good reason for this. It is the Nurnberg Funnel, and there can be no Nurnberg Funnel. As John Carroll wrote:

Users, no less than instructional designers, are searching for the Nurnberg Funnel, for a solution to the paradox of sense-making. But there is not and never was a Nurnberg Funnel. There is no instantaneous step across the paradox of sense-making.

There is no instantaneous step across the paradox of sense-making. There is only foraging for information and understanding across a field of information and experience. And the form of information most suited to creating that navigable, foragable field of information is text, the past, present, and future heart of technical communication.


48 thoughts on “Chatbots are not the future of Technical Communication”

  1. Michael LaRocca

    “…by the time you can understand a problem well enough to articulate it, you are well on your way to solving it.” It wouldn’t be a bad idea to carve that into a piece of wood and hang it on the wall above my desk.

    ELIZA was a lot of intellectually stimulating fun, but I don’t think we solved a whole lot of problems together. And I really haven’t seen anything more capable of passing a Turing Test.

    I envision a future of purpose-driven robots that do some things, and people doing what those robots can’t. Technical communication is one of those things that we’ll be hanging onto for a very long time.

    Reply
    1. Mark Baker Post author

      Michael, yes, these things are fun to play with. But there is an old saw about a dog walking on its hind legs: it is not done well, the wonder is that it is done at all. You can draw a crowd this week with a dog walking on its hind legs. Next week it’s a bore and the crowd is onto something else.

      That is not to say that voice interfaces won’t find a permanent niche, but we are a long way from them being anything other than a niche. And there are many things they will never be able to do, not because of how smart or dumb the AI behind the interface is, but because of how limited the interface is no matter what is behind it.

      Reply
  2. Chris Despopoulos

    Mark, thank you for questioning the emperor’s wardrobe. A fundamental value of text is the capacity to combine and recombine. This takes directed intelligence to accomplish… Something that AI is short on as far as I can tell. At any rate, I haven’t heard any noises about smartly recombining texts in chatbots. Or maybe I’ve just missed it?

    Reply
    1. Mark Baker Post author

      Chris, I think that is fair. The mechanics of combining bits of text are not difficult, of course, but knowing which bits to combine to meet a particular need seems to be something the chatbots are a long way from mastering. There has been some talk about DITA and the Chatbots (cool band name!), but I suspect that is one of those things in which it is easy to construct a demo of a few naive cases, but something quite different when asked to solve significant real world problems.

      Still, the central issue for me has nothing to do with how smart chatbots are, or how dumb they are, but with the fundamental misunderstanding about how humans learn that is inherent in the hype about them.

      Reply
  3. Larry Kunz

    Hi, Mark. It’s been a while. I’m really glad to see a new article from you. I agree with your general premise: there’ll never be a Nurnberg Funnel, and AI will never satisfy our need to forage and explore.

    I’m a little surprised, however, that you say “the navigable, foragable field of information” is text. This might be a small point, but I think you’re overlooking the virtues of static visuals. Give me a halfway good map, for example, and I’ll happily forage my way to new insights. I learned the constellations by looking at star maps (and, of course, at the stars themselves) — not by reading text that described the positions of the stars. Yes, maps have text on them — but not just text. The images, for me, contribute as much to the experience as the text. So… while video and audio have limitations, as you point out, and while AI has big limitations, should we really devote our attention to text only?

    Reply
    1. Mark Baker Post author

      A very fair point, Larry. I have made an edit to acknowledge the role that these other media can play, and pointed out in doing so that it is largely text that leads us to them.

      I am dubious, though, of your claim that images contribute as much to the experience as text. I suspect that may be a matter of taking text too much for granted. Text is so ubiquitous that other media can seem more striking by contrast. But chances are you found those other media via text and that it was text that put them in context for you. If graphics are the scenic overlook of the information journey, text is the road that gets you to them.

      Every new medium that comes along, we are told, will spell the death knell of text. What really happens is that text encompasses them like currants in a bun — grateful for the contribution they make, certainly, but still the heart of the meal.

      Reply
      1. Chris Despopoulos

        I think it could be time to define text in generally applicable terms. Is text always and only a set of symbols that combine to produce lexical units? Isn’t math text? Are icons? Can you consider a map to be text? I don’t want to hazard a definition here because it’s sure to elicit howls of protest. But I wonder if it wouldn’t make sense to describe the attributes of text, and see which ones we can apply to the other media Larry listed.

        For example, you can say that recombination is an attribute of text. Well, it’s also an attribute of maps — you can layer information on a map to show characters in video games, available restrooms, or deliver sound streams giving histories of locales. So… Is a map NOT a text?

        Reply
        1. Mark Baker Post author

          I think it is reasonable to call icons (and emojis) text. They are hieroglyphics. They show the economy and the inflexibility of all pictorial writing systems. They also don’t require translation, another feature of pictorial writing systems — one that allowed Japan to adopt much of Chinese culture despite the difference in languages.

          Beyond that I think we should simply divide media into two classes: scannable and non-scannable. Text, graphics, maps, etc., are all scannable media. Audio and video are non-scannable media. Notably, text is the only linear scannable medium, which says a lot about its versatility and utility.

          The relationship between the properties of being scannable and being recombinable is interesting. As a first crack at this, consider that to be scannable a medium must consist of independent recognizable elements on which the scanning eye can alight. To the extent that they are independently recognizable, such units are at least theoretically separable from the whole, and therefore at least theoretically recombinable.

          The difficulty with simple separation and combination is context. A recognizable element is not necessarily recognizable purely on its own. Often it is recognizable in context, and loses its identity and unity if separated from its context. This has always been the central problem with attempts to reuse and recombine text.

          This is also why I am reluctant to grant that recombinability is a fundamental property of text. I do think it is an interesting property, but I worry about over generalizing the point. Is it a property of text, or merely a property of some texts?

          Reply
  4. Diego Schiavon

    Mark, thank you for writing that chatbots are not the future of TechComm. I was thinking something along these lines, but in a much less elegant and well-rounded way, while reading the program of an upcoming TechComm event in Amsterdam, near where I live.

    The Information Energy conference, as it is called, will indeed focus on AI and bots. I was tempted to go, but I am not interested in AI or bots.

    I must admit that I have never come across either technology in my work and I know very little about both, but I see little scope for AI in what I do. I am generally skeptical about overhyped technological silver bullets, so I might be biased.

    Still, I do not think technical communication has really managed to reliably solve the problem of how people access and use technical information.

    We used to have typewriters and offset printing, and user manuals gathered dust on shelves.

    Now we have XML, HTML and FrameMaker, and user manuals gather dust on shelves.

    Tomorrow we will have AI and bots and intelligent content and IoT and whatnot. And user manuals will still gather dust on shelves.

    TechComm has very little to show for about a century of autonomous existence. Throwing a ton of technology at the problem does not improve things a single bit if we do not understand the problem in the first place.

    Reply
    1. Mark Baker Post author

      Hi Diego. Manuals gather dust on the shelves because people do not look on shelves for information. People look for information online. If tech comm wants to be useful it needs to create material that works well when discovered via search (Every Page is Page One) and that is located where search can find it. What we can say about the chatbot boosters is that at least they recognize that the audience is online. They would serve that audience better overall by creating hypertext rather than chatbots, but at least they are in the right place.

      Reply
      1. Diego Schiavon

        Manuals gather dust on the shelves because people do not look on shelves for information.

        My company makes machines that work in environments without internet connection. We print our manuals on expensive cleanroom paper. The HTML version is mostly for reference and has to stay outside the working environment.

        Even assuming that we start publishing to HTML, if our customers allow it, the engineers will have to read documentation on machines that are not connected to the internet.

        Our audience is not online. If it were online, we’d be bankrupt after 6 months of court cases on copyright infringements and broken NDAs.

        Reply
        1. Chris Despopoulos

          Diego, are your machines connected to the customer’s internal network? Are they, or do they contain, computers that could serve documentation? In that case, you could ship HTML on the machines. We have a lot of customers who can’t access the internet, but our product is a VM customers install in their datacenter. Since it’s a VM, and it serves up an HTML GUI, we can toss the documentation on the same machine and serve that up as part of the GUI.

          Reply
          1. Diego Schiavon

            Chris, yes, we are working on that. Probably in a few months we will be able to ship HTML on the machines, but we first need to implement a PLM system that matches the machine configuration.

            Still, we supply a number of manuals that are essentially never read by anyone. Like the general safety manual, which I do not believe more than 20 people have read over the past 20 years or so. Including the people who wrote it.

            The manual was even translated into different languages. I think that the readership of the translated versions is even smaller, less than 5 including the translator.

            What I hastily and fuzzily tried to express is that documentation deliverables, in whatever format, are only a very limited part of what technical communication is about.

            What is more important is how a document matches the organization’s structure and workflows.

            To the extent that a document does not match the way a company works, it can be in print, HTML, ePub, AI or whatever hype marketing departments made up, but it is still essentially useless.

            Conversely, very low-tech approaches can be very successful, if they match closely the needs of the organization. Like Post-It notes in Kanban systems.

            My general, biased perception of technical communication today is that it consists far more of superfluous high-tech deliverables than well thought-out, low-tech ones.

            If TechComm managed to build a theory of what type of information is needed under what circumstances, then maybe technology could help improve the practical implementations.

            As it stands, more technology in TechComm feels to me just like a solution in search of a problem – while existing problems go unresolved.

          2. Mark Baker Post author

            Diego, “My general, biased perception of technical communication today is that it consists far more of superfluous high-tech deliverables than well thought-out, low-tech ones.” Amen to that!

            The fact of the matter is that the biggest issue by far in tech comm is saying the right thing. Saying it the right way comes in a distant second. I say distant because most users can deal with the right thing said the wrong way, but the wrong thing said the right way is no good to anyone. In third place we have findability, but as long as you put content that says the right thing where a good search engine can find it, you have solved a large part of your findability problem. If you say the right thing the right way, you have solved almost all of it. What remains is to support the user’s onward journey from their initial landing spot, and for that hypertext will suffice most of the time.

            But of course, the better job you have done of saying the right thing in the right way, the less need there will be for onward navigation. Not none, of course, because of the inherently chaotic nature of information finding. But if you do these three simple things: say the right thing, say it the right way, and provide useful links, you will have done about as much as it is possible to do. And all of that is pretty low tech.

            The real challenge, I think, is to develop the right low tech solutions that actually support doing these basic things. That is what my forthcoming book is about.

          3. Diego Schiavon

            I am looking forward to ordering your book!

            As a sidenote, there must be other ways to solve the findability problem besides putting everything in a search engine.

            I do not think it possible that content became findable only because the Internet showed up. What were people doing before the Internet?

          4. Chris Despopoulos

            Diego, a few points… I think the way an organization works is important, but also important is the way humans interact with the technology. (Maybe that’s what you mean?) If my airplane is losing altitude, that’s a different info request than if I can’t create a floating graphic on the page. In the airplane you have individual interaction with a technology and also interaction with a protocol that is developed by a vast organization. So yeah, these are important things. Understanding what the nodal point of information is in the given situation… That’s the problem for the author and the reader alike.

            Low-tech can indeed be the best. Law is still stored in books, and references are made to pages, paragraphs, and other book artifacts. Kanban post-it notes OTOH are a low-tech paradigm put on top of a higher tech system. A personal note, I can’t stand Kanban. For me, it’s yet another paradigm to learn so I can talk about how I organize my work. Yeech!

            What did people do before the internet? For text, they developed common models of organization… Prose structure, TOC, index, bibliographies, systems to index books, etc. Before the internet, paper was the hot technology… Most portable, cheapest, most malleable. Online has introduced new possibilities… William Burroughs had to type prose, cut it with scissors, then glue it together to make his “cut ups”. Now we can do that algorithmetically (sic). Search has turned out to be an easy way to find stuff without having to look into the stuff’s organization. And that tempts us to create lots of stuff without organizing it. Take JIRA for example — where the paradigm looks more like a teenager’s bedroom than anything else.

            Another temptation is a very neat idea that you came up with… actually pushing the information to the reader. I worry about that a bit — worry that it would create a bubble. My kids already live in information bubbles because they expect more push than pull. But then it’s something we need to keep our hands in.

            All this needs a theory! What IS information, after all? Does it vary per situation — the same content is information for some and noise for others? How does that work? How do we predict the effect? What impact does a given technology or organization have on it?

          5. Diego Schiavon

            Chris,

            the way an organization works is important, but also important is the way humans interact with the technology.

            Yes, I meant the way humans interact with the technology, but also with other humans. The accent is more on the organisation than on the individual user: that is more, I believe, a branch of psychology (human-information interaction?). Also useful, but not what I was getting at.

            (https://en.wikipedia.org/wiki/Sociotechnical_system)

            The definition of organisation is pretty loose here, and comprises everyone from the SMEs to the end users, regardless of whether they belong to the same legal entity.

            Search has turned out to be an easy way to find stuff without having to look into the stuff’s organization. And that tempts us to create lots of stuff without organizing it.

            Precisely. It is like solving the problem of obesity with looser belts. How about eating more vegetables?

            Another temptation is a very neat idea that you came up with… actually pushing the information to the reader.

            Thank you for crediting me, but I think it was another commenter who came up with it. I am more of a low-tech, clay tablet kind of person. I think the reader already has enough problems finding the information he thinks he needs, without having to wade through the information I think he needs.

            In other words:
            a) We produce documentation without taking the organization into account.
            b) Organisations evolve without taking the production of documentation into account.
            c) This results in organisations and documentation matching each other poorly: unused documentation and frustrated users.
            d) We try to improve the match with more technology (the latest hype), but this does not reduce the gap between documentation and the organisation: it just makes it simpler to process.

            Rinse and repeat until documentation becomes completely dysfunctional and you need so much technology to make sense of information that the system is bound to break at some point.

            My point was that there is a lot to gain by matching information and organisations better, and this does not necessarily require technology.

            Which does not mean that chatbots are completely useless in a perfectly optimised sociotechnical system. There can be systems for which chatbots are the best match. But I think that these are niche scenarios.

            All this needs a theory! What IS information, after all? Does it vary per situation — the same content is information for some and noise for others? How does that work? How do we predict the effect? What impact does a given technology or organization have on it?

            Exactly. I am not aware that such a theory exists, but maybe it does?

          6. Chris Despopoulos

            Diego, I’m in complete agreement with you. Still, you might benefit by delivering HTML in your machines because A) It’s clean by definition and B) It costs less. If it’s to be ignored either way, why not keep costs as low as possible?

            Another thing technology CAN give (for technological products, at least) is proximity — injecting content into the work flow, so users can see the information they need inline with the task. But I think this needs even more organization, more care, more work to figure out what to inject where.

            You’re absolutely right that organizations evolve without taking the production of documentation into account. And so do technologies (what we document). Documentation is information, and it’s on us to determine what information the CURRENT situation demands, how to organize and develop it, and also how to present it. We provide a service to the organization, in response to the technology… It’s on us to respond to the evolution of the organization. It’s how we add value — Why they pay us the big bucks.

          7. Mark Baker Post author

            “All this needs a theory! What IS information, after all?”

            I have a theory. Information is stories. We organize the world according to stories. Language consists of tacit references to stories. When you don’t understand what a piece of text is saying, despite knowing the dictionary definition of all the words, it is because it is tacitly assuming that you know a story that you don’t know.

            Stories are ultimately founded in experience. We don’t accumulate data and then compile it into wisdom. We start with experiences and tell stories to explain them. Data is distilled out of stories and makes no sense when separated from stories.

            The kind of information that can be successfully pushed to people is really data points distilled from stories they already know. If you ask Alexa what the temperature is outside, you are asking for a datapoint in a story you already know.

            This is also why we have never been able to organize content successfully, or align content production with the organization in a consistent manner. It is because information is stories, and stories don’t display the kind of regularity that lends itself to categorization. (Notice how, in some services, the same movie turns up in four or five categories, or, in others, in a completely different category from the one you went looking for it in?)

            Stories are how we deal with all the parts of life we can’t organize and categorize, and all schemes of categorization and organization are based on stories.

            This is why Everything is Miscellaneous. This is why Every Page is Page One. This is why the secret to findability is to make content easily recognizable and then put it where it can be found.

            More on information as stories here: /2015/07/27/the-other-thing-wrong-with-the-dikw-pyramid/

        2. Mark Baker Post author

          Well, if the manuals are on a shelf in a cleanroom, at least they won’t be gathering any dust! 🙂

          Reply
          1. Diego Schiavon

            Outside the cleanroom usually 🙂

          2. Chris Despopoulos

            I’m in solid agreement with you there. Tech writing is in desperate need of an overarching theory that doesn’t rely strictly on the technology du jour. As a matter of fact, I’m working on a book…

            OTOH, I think there are examples of effective tech com out there. Sometimes in spite of the tech com “industry”, but out in the world nonetheless.

          3. Mark Baker Post author

            Chris, no small part of that problem is that the technology du jour is always either publishing technology or content management technology, neither of which by itself makes any contribution to the fundamental tech comm questions of what to say and how to say it. In my forthcoming book, I will talk a lot about how we can apply technology to support rhetoric. A unified theory of rhetoric for tech comm would be a good companion volume. 🙂

          4. Diego Schiavon

            OTOH, I think there are examples of effective tech com out there. Sometimes in spite of the tech com “industry”, but out in the world nonetheless.

            Sure there are. But successful techcomm is often not planned. It just sort of emerges from a mixture of experience, intuition and luck. In other words, some techcomm works well, but we are not really able to explain why.

            That is what I mean by a theory of techcomm: how does the communication of technical information exactly work, and how can we improve it?

  5. Ray Gallon

    Mark, thank you for this article just when we are organizing a conference that, among other things, seeks to debunk some of the hype around chatbots. It also, however, seeks to identify cases where chatbots are useful.

    I agree with just about everything you say in your post, and yet I don’t think chatbots are useless, or, as one of our presenters at Information Energy says, “Like a Cold Shower in the Middle of a Canadian Winter.” 😉 There are scenario-based chatbots, used for things like onboarding, that certainly have their place amid an arsenal of tools. These don’t require the same level of “intelligence” that you cite, in order to be useful, since the number of possible responses, when onboarding, is somewhat limited, even allowing for differences in learning styles, and the fact that the bot can’t know what actual task the user needs to perform.

    Also, it’s important to take into account that people usually look for the path of least resistance to solve problems, and if they can interact conversationally with some agent, human or otherwise, they will often prefer it to making the effort to look up a text somewhere – as we know from the tendency to go to Google before looking at online help, for example.

    Finally, there is one point I really need to dispute with you:

    “technical support is still (mostly) staffed by human beings who have that common sense, that “ability to understand the physical world well enough to make predictions about basic aspects of it—to observe one thing and then use background knowledge to figure out what other things must also be true”

    My experience tends to put most of this statement in doubt. But then, it’s subjective, isn’t it?

    Reply
    1. Mark Baker Post author

      Hi Ray. Yes, Information Energy was one of the conferences I was reacting to. Glad to hear it is more skeptical than the promotion made it sound. (Diego’s comment shows I am not the only one who took it for boosterism.)

      And yes, I take your point about technical support. But that is very much to my overall point. That interface sucks even with a human being at the other end. The problem with chatbots is not the intelligence of the bot but the inelegance of the chat.

      Reply
      1. Ray Gallon

        “The problem with chatbots is not the intelligence of the bot but the inelegance of the chat.”

        Exactly.

        What about the laziness factor? Don’t you think people will gravitate to chat because it’s easier, even if less satisfactory than text?

        I think they will, so long as the bot isn’t totally ridiculous (you know, like asking if you’ve plugged the thing in… just like humans do…)

        Reply
        1. Chris Despopoulos

          How can less satisfactory be easier? Isn’t the goal to achieve as much satisfaction as you can, as easily as possible?

          Reply
        2. Mark Baker Post author

          All I know is, every time I say Hey Google to my phone, my wife says “Did you say something?” and that is what the phone hears. I can’t imagine trying to use it on the bus. Typing is more private and less socially awkward. Which maybe takes us back to only the lonely. People who live alone often leave the TV or radio on all day just to hear a human voice, rather like letting a new kitten sleep with a hot water bottle and a ticking clock. Chatbots might be more attractive to them. But then again, if you say Hey Google with the TV on, the phone hears the TV.

          Reply
  6. Roy MacLean

    Excellent. Yet another bandwagon trundling down the road…

    The limitations of an audio interface are plonkingly obvious. A visual list of, say, 20 options (search results, navigational links) can be scanned and one selected in a second or two. An audio interface would have to read them out… slowly… just like the much-reviled helplines (“For X press 1, for Y press 2, …”)

    Neither can an audio interface provide any kind of birds-eye view: ToC, map, context, connections, … These might involve text or graphics, plus some structural/spatial aspect.

    Reply
    1. Mark Baker Post author

      Plonkingly obvious, yes. But … oooh shiny! What galls me is people chasing the next shiny thing when, 20 years in, tech comm has still not learned to do hypertext properly.

      Reply
  7. Ray Gallon

    The shiny object argument is dangerous, even though it has merit. These technologies are not going to go away. Not only that, they will be imposed by industrial technologists who don’t have the vaguest idea about the repercussions of not including useful, or even vital information in their designs.

    That’s why the Information 4.0 Consortium exists – to make sure that the “human” part gets thought of at the design phase, not as an afterthought when stuff starts to fall over all over the place. It may be all well and good to call it a shiny object, but that won’t make it go away. Yes, some applications of this technology will fail – as they should – but others won’t. Our job is to make sure they succeed in a humanist manner that serves people’s needs, rather than just the needs of some technocratic spec sheet.

    And that is the rationale for the Information Energy programme, that tries to mix technologists and information specialists, and get them talking to each other. If that seems like a bandwagon concept, well, if you actually read the programme, I think you can see where we’re coming from – in short, mates, RTFP!
    😉

    Reply
    1. Mark Baker Post author

      Ray, I don’t think there is anything unusual about technologists not understanding the need for documentation. That is the good old curse of knowledge and it is endemic. The market always rebukes it, and if anyone is going to convince the technologists of this, I think it is savvy project managers, not documentarians.

      And frankly, I find that documentarians are typically not very good at seeing where information is needed and where it is not. It seems to me more the rule than the exception that the manual will include mostly information you don’t need and very little you do. Thus the thriving aftermarket in user-created documentation both in bookstores and across the web.

      And it is true too that many products succeed despite inadequate documentation because they have desirable feature sets that users are willing to work to learn to use. Good docs can be a competitive advantage, just as a good interface or good design can be a competitive advantage, but these things are less decisive when features, functions, or price are the competitive advantage. Thus the significance of docs and design comes later in the product cycle when the feature, function, and price differences have declined or the market has settled into different economic tiers.

      Of course, I know that you are also concerned about the social implications of these technologies. I sympathize with that concern, though I don’t always agree with you on the specifics. You want the public to understand the implications of the behavior of these “intelligent” machines. But I think that “computational statistics” has long ago taken us past the point where it is actually possible to document the behavior of machines. This is not because the behaviors are unknowable in any mysterious sense, but simply because the permutations of reactions to inputs are too numerous. We are trusting ourselves to devices whose actions we cannot fully anticipate or control.

      But as I noted in a conversation I had with Fei Min after our dinner in Cambridge (Ontario) last fall, this is not a new thing for humanity. Since the dawn of time we have kept servants in our homes that are capable of independent action and learning, whose actions we cannot fully anticipate or control, that can go wrong in ways we don’t fully understand, and can hurt us, even kill us, when they do. We call it the dog.

      Reply
      1. Ray Gallon

        Mark – never once have I used any term that includes the stem “document” in it. I am not talking about documentation or manuals – I’m talking about information in a much larger sense, which of course includes documentation where that’s appropriate, but I think the days of the pure “manual producer” – however you want to define manual, and whatever the delivery format – are numbered (not over). There are a lot of kinds of information we need to be handling that many of us are not, yet, and the nature of our profession, and what we need to be occupied with, will definitely change.

        I sense in your most recent reply a certain static view of things that I think is not warranted. I may have misunderstood, but change is certainly a constant. I do not think all change is good, but I am realist enough to know we need to live with it, and adapt to it. Even in the U.S., engineering education now takes humanities into account, at least somewhat, which it hardly ever did in earlier years, and it strikes me that if you look at how library science, and the work of librarians, has evolved over the years, from musty card catalogues in quiet corners to the bleeding edge of digital technology, you can see a striking example of how mentalities have had to evolve.

        That said, yes – we do like our dogs, despite the risk. And we’ll like our robots and AI agents too, eventually – after we domesticate them – and hope there aren’t too many pit bulls among them.

        Reply
        1. Mark Baker Post author

          Ray, I know your interest is broader. I am simply confining my critique of chat bots to their application to tech comm because that is the area in which I am confident of my arguments. My skepticism is broader, but my capacity to argue it outside the sphere I know is considerably less.

          You are trying to marshal communication experts because you don’t trust the technologists who are building these things. My problem with that is that I trust the communication experts less than I trust the technologists. Scratch a communication expert and you find a propagandist.

          Change is not good or bad. Change is good for you and bad for me, or good for me and bad for you. It is a virtue, not a vice, in engineers, that they tend not to think about who a given change is good for or bad for. Their monomania for the engineering itself tends to blind them to the claims of rival interests. And the ease with which companies can be created and grow today largely frees engineers from the business elites attempts to guard their interests. We certainly would not have Amazon today if the engineers could not do an end run around the retail establishment.

          Of course the changes these technologies bring can bring damage in their wake. But the ability of other interests to mitigate that damage is very much to be doubted. And when it comes to a technology that mediates information, I would rather trust the naivete of the engineers over the interest of the communication expert.

          One of the articles you link to in your LinkedIn posts advertising the conference talks about bias in chatbots. Bias is the new word for heresy. While the engineer is trying to build an intelligent chatbot, the propagandist is trying to make sure they build an orthodox chatbot.

          CS Lewis once wrote that the objection to slavery is not that no man is fit to be a slave but that no man is fit to be a master. Similarly, no man is fit to define bias or orthodoxy for a chatbot. They will, of course, but I would rather it was engineers doing it naively than communication experts doing it expertly and in favor of their orthodoxy.

          There will, of course, always be limits placed on what the engineers can do, and appropriately so. But those limits belong to the ugliness and hurly burly of politics. Not because democracy is a good form of government, but because all the others are worse. No man is fit to be master.

          Reply
  8. Alessandro Stazi

    Hi Mark.
    I have read your post, as interesting as usual.
    I try to summarize my points:

    1 – printed manuals are the best way to gather dust, because the old paradigm of “users seeking information” has simply been unfit for years.
    2 – the present (not the future) of technical communication must be based on molecular, discoverable information, to better serve the paradigm of “information seeking users”.
    3 – user wayfinding through a hypertext is almost unpredictable, but we can try to address it as best we can.
    4 – current chatbot engines simply do not perform as well as we would like.

    But we have to solve a very tricky problem: how to deliver ever more customized information to users, in web format (because everything, including machinery, will be under the control of software UIs), driven by a free combination of contexts, events, and tasks, serving the UX (disciplines such as Design Thinking are ever more effective in this direction), with mechanisms of prediction and interpolation across several parameters, to map and build the right content by grouping the molecular pieces of text, images, and video that solve the needs of users and provide the right User Assistance.

    Artificial Intelligence could be one piece of the solution. Chatbots will not replace every type of User Assistance, but they could be a specific way of solving a specific class of interactions.

    My point is not to measure how stupid current chatbots are, but to ask in what way they could be less stupid, or how they could be integrated with other techniques for a more customized UX.

    Because printed manuals gather dust, but traditional online help can also be too “static” if we have to differentiate the User Assistance for different users.

    AI is a chance that we cannot ignore merely because the AI tools we have today are still “too young”.

    But I think that we will have the pleasure of speaking again and checking our ideas on these concepts for a long time.

    Reply
    1. Chris Despopoulos

      Alessandro, I don’t think we need AI… At least not yet. By keeping the information as close to the object of interest as possible, and assembling documentation for the objects YOU are interested in at the time of your request, we can go a long way toward personalized and relevant delivery. In my mind the goal is to assemble the body of content as late as possible so it represents the most current situation possible.

      I think the problem with chatbots is that ultimately their decision trees are hard-coded. (Unless I’ve misunderstood!) They can answer a fixed set of combinations, but with machines and IoT the combinations will become increasingly dynamic and complex.
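
      To make “hard-coded” concrete, here is a minimal sketch of the kind of decision tree I mean, in Python. The nodes and wording are invented for illustration:

      # A hard-coded chatbot decision tree: each node is a question
      # plus a fixed set of recognized answers leading to fixed next nodes.
      TREE = {
          "start": ("Is the machine powered on?",
                    {"yes": "network", "no": "power"}),
          "power": ("Plug it in, then say 'done'.",
                    {"done": "start"}),
          "network": ("Can the machine reach the internal network?",
                      {"yes": "escalate", "no": "cable"}),
          "cable": ("Check the network cable, then say 'done'.",
                    {"done": "network"}),
          "escalate": ("I am out of branches. Contact support.", {}),
      }

      def chat():
          node = "start"
          while True:
              question, answers = TREE[node]
              print(question)
              if not answers:
                  break
              reply = input("> ").strip().lower()
              # A reply outside the fixed set just repeats the current node.
              node = answers.get(reply, node)

      However conversational the surface, every session is just a walk through those fixed branches.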

      And aside from the decision trees and assembled content they offer, the voice-based and quasi-natural language interfaces to chatbots are just sugar coating IMO. You can pretend you’re “conversing”, but what you’re really doing is telling the bot which tree you want to traverse. It’s not clear to me that a conversational interface is the most effective way to do that.

      Reply
      1. Mark Baker Post author

        Chris, I agree that keeping the information on how to use the object of interest as close as possible to the object of interest is a good thing, at least for information that is concerned strictly with the use of said object once you have found it. For information that helps you choose which object to use to solve a problem, it is obviously not a solution.

        I’m not sure how this counts as personalized delivery. By the time someone has identified the right button to push, information on how to push it tends to be pretty generic. And even if it were not generic, it is hard to see how any intelligence, natural or artificial, gains the information needed to personalize it.

        I think the idea of an AI chatbot is that the decision trees won’t be hard-coded. That would mean either that the AI worked out the decision tree for itself based on experience of past queries, or that it modified it on the fly based on clues from the current conversation. Again, I’m not sure whether intelligence or lack of access to information on the current user is the limiting factor here.

        It is worth saying here that there are people who specialize in analysing call data from technical support groups and developing appropriate decision trees with the aim of minimizing call time. I suppose an AI could be trained to do that. Or rather, I suppose that statistical analysis could be used to do that, since it would seem that current AI is really mostly just statistical analysis.
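
        As a toy sketch of what that statistical analysis might look like (the call records and questions here are invented for illustration), you could greedily pick the opening question that most reduces uncertainty about the eventual fix:

        import math
        from collections import Counter

        # Invented call records: the caller's answers to screening questions,
        # plus the fix that finally resolved the call.
        CALLS = [
            ({"powered": "yes", "network": "no"},  "replace cable"),
            ({"powered": "yes", "network": "no"},  "replace cable"),
            ({"powered": "yes", "network": "yes"}, "escalate"),
            ({"powered": "no",  "network": "yes"}, "escalate"),
        ]

        def entropy(labels):
            counts = Counter(labels)
            total = sum(counts.values())
            return -sum((n / total) * math.log2(n / total) for n in counts.values())

        def best_first_question(calls):
            """Pick the question whose answer most reduces uncertainty about the fix."""
            base = entropy([fix for _, fix in calls])
            def remaining(question):
                by_answer = {}
                for answers, fix in calls:
                    by_answer.setdefault(answers[question], []).append(fix)
                # Expected entropy left after hearing the answer.
                return sum(len(fixes) / len(calls) * entropy(fixes)
                           for fixes in by_answer.values())
            return max(calls[0][0], key=lambda q: base - remaining(q))

        print(best_first_question(CALLS))  # 'network': its answer pins down the fix here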

        The deeper question, it seems to me, is how many information gathering problems are really tree traversal problems at all. I suspect not many. And the further problem is that when it comes to rearranging my mental furniture, it is not obvious how I can engage any intelligence, artificial or otherwise, to do that for me.

        Reply
        1. Chris Despopoulos

          Mark, attaching content to the object of interest doesn’t equal personalized content, it enables it. It’s a good board position to address the game as it evolves. From there, an awareness of the “objects” that are relevant for a user’s account or a user’s context is enough to personalize the delivery. I guess a conversation can achieve that awareness, but it seems less efficient than simply tracking what the user has running in her environment… At least for tech docs that address specific environments.

          I don’t want to be rash, but it might be possible that every information gathering exercise is a tree traversal, or at least traversal of a web of trees. I suppose AI can “learn” what is a usual collection (and pruning) of trees for a given context. But that’s just statistically likely, as you point out. For a unique context, I’m not sure how AI is supposed to make an informed choice.

          What AI still doesn’t do is fight to the death before you turn it off. That is to say, AI isn’t big on intent. For that you need emotion and probably lower forms of response. Current thought in human neurology says that emotion comes first, then the idea. In other words, to make a choice, or to formulate an idea, you go with “what feels right”. I can’t see AI doing that with texts any time soon.

          Reply
          1. Mark Baker Post author

            Chris, I don’t disagree that attaching content to the object of interest would be an enabler of personalization. I’m just not sure how much personalization is necessary in those cases. I can see that localization to the particular system or system configuration could be valuable in some cases. That strikes me as different from personalization. It is adapting to the location as opposed to adapting to the person. But if that is the kind of personalization you mean I certainly agree that it does not require any kind of AI, it merely requires the right kind of structure.

            I’m not sanguine about the idea that information gathering is a tree traversal. That implies that every step brings you closer to a single nugget of information that solves your problem. I think that is flawed on two counts: first, it often requires more than one piece of information to successfully rearrange your mental furniture to accommodate a new idea, and there is no tree organization that does not put some of those pieces on widely divergent branches.

            Second, it supposes that the navigation is one of progressive refinement, with each decision leading you closer to your goal. I think it is far more chaotic than that, with many false steps. Yes, tree traversal can involve recognizing that you are on the wrong branch, but I don’t think people proceed from there by systematically working back up the tree. Rather, they leap from branch to branch and tree to tree like squirrels.

            This is why I think the web is a better model of information organization and information traversal than the tree. John Carroll said that there is no solution to the sequencing problem because every reader constructs their own sequence ad hoc. Trees are simply folded sequences. Actual information traversal is much more chaotic than that.

            Of course, if we think of our job in terms of organizing content for use, the notion that that use is fundamentally chaotic is not good news and we are likely to resist. We would like to think the problem is not that user behavior is inherently chaotic, but that it is only chaotic because we have not discovered the right sequence yet. But I am with Carroll on this. There is no right sequence.

            This is why I believe hypertext is fundamental. It is not that hypertext presents a path for the user to follow. It doesn’t. It is that it paves all the paths and lets each user navigate for themselves. This may be less satisfying to create than a more directed design, but I am convinced it is actually more useful.

      2. Alessandro Stazi

        Hi Chris,
        I don’t say that AI will always provide the right solution in every case.

        I say that it is an interesting option to explore for achieving one absolutely necessary goal: customizing the delivery of technical information dynamically, according to a multi-dimensional set of attributes.

        AI-based chatbots are something very different from a hard-coded knowledge tree or a rule-based knowledge tree. Certainly the current state of the art is not as close to our desires as we would like, but I’m not so sure that we don’t need this type of innovation.

        At the same time, I’m absolutely sure that traditional printed manuals are not read by anyone, and that a lot of companies waste money producing content that is not useful for users.

        In other words, I’m not sure whether AI can help us, but I am sure that many practices that are in the “comfort zone” of technical communication are useless for the final users.

        1. Mark Baker Post author

          Alessandro, I certainly agree that few if any read paper manuals (except in certain specific cases where they still apply, like building bookcases, hooking up stereo systems, and certain industrial applications), and that AI does not offer much of an alternative in most cases. But those are not our only alternatives. Hypertext, whether deliberately planned or merely created by search, is the middle ground where content actually does get used, and we should be spending far more effort on creating better hypertext, which means creating content that works better as hypertext, and on creating the appropriate level of linking to make hypertext most effective.

          1. Alessandro Stazi

            Hi Mark.
            I know your ideas and your specific point about “reader wayfinding” in hypertext, and I completely agree with you on some points of the EPPO approach.

            But I want to try to persuade you of some of my reasons, using an example.

            Bob is an auditor who has to check the compliance of a machine M, installed at an industrial site in Milano. The machine is customized with functionalities MF1, MF2, and MF3. Furthermore, it has a specific history of maintenance updates MU1, MU2, MU3, MU4, and MU5.

            Mary is an auditor who has to check the same machine M, installed at an industrial site in Roma. That machine is customized with functionalities MF1, MF4, MF5, MF6, and MF7. Furthermore, the machine in Roma has not been updated with any maintenance activities.

            My goal is to deliver to Bob a core of information for the machine M in Milano, covering its customized functionalities and its specific maintenance activities (MF1, MF2, MF3, MU1, MU2, MU3, MU4, MU5).

            And the same for Mary (MF1, MF4, MF5, MF6, MF7), for the industrial site in Roma.

            And all of this delivered dynamically to the tablets of Bob and Mary, for example driven by the GPS coordinates of the different sites.

            This is what I call “information seeking users”.
            But this is not in conflict with the concept of “reader wayfinding”, because within the distinct, well-tailored sets of information provided to the two auditors, they can still browse freely through the structure of the information according to their own specific “wayfinding”.
            And from the customized core of information, they can find other related information to get complementary data.
            Is this so revolutionary? No, conceptually it is very similar to the current behavior of Augmented Reality engines.

            Of course, I don’t pretend to derive a general rule from a specific, simple example (that would be deeply incorrect), but only to show that it makes sense to serve the user experience by customizing the set of technical content according to a specific set of attributes (in this case, the machine M and the GPS coordinates of the site where M is installed).

            The final question is: can AI help us in this activity? And can AI-based chatbots be useful?
            I’m not sure all the answers will come from AI. But I’m really interested in it.
            Is there a solution that we can provide to Bob and Mary according to traditional, well-known best practices of technical communication?
            In many cases, only a PDF manual with hundreds of pages in which Bob and Mary have to search for the information they need (the “users seeking information” paradigm).
            I would like to know the opinions of Bob and Mary. Based on my experience, I’m ready to bet 5 Euros that they would prefer the customized scenario… :-).
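
            To make the scenario concrete, here is a minimal sketch (all names and coordinates are invented, and a real repository would be far richer): the site is resolved from the tablet’s GPS coordinates, and the doc set is filtered by the attributes of the machine installed there.

              // Minimal sketch: every topic is tagged with the functionality (MFx)
              // or maintenance update (MUx) that it documents.
              type Topic = { tag: string; title: string };

              const repository: Topic[] = [
                { tag: "MF1", title: "Operating functionality MF1" },
                { tag: "MF2", title: "Operating functionality MF2" },
                { tag: "MU1", title: "Procedures after maintenance update MU1" },
                // ... one entry per functionality and maintenance update
              ];

              // The installed configuration of each site (invented coordinates).
              const sites = [
                { name: "Milano", lat: 45.46, lon: 9.19,
                  attributes: ["MF1", "MF2", "MF3", "MU1", "MU2", "MU3", "MU4", "MU5"] },
                { name: "Roma", lat: 41.90, lon: 12.50,
                  attributes: ["MF1", "MF4", "MF5", "MF6", "MF7"] },
              ];

              // Resolve the nearest site from the tablet's GPS position
              // (a crude flat-map distance is enough for a sketch).
              function nearestSite(lat: number, lon: number) {
                return sites.reduce((a, b) =>
                  Math.hypot(a.lat - lat, a.lon - lon) <= Math.hypot(b.lat - lat, b.lon - lon)
                    ? a : b);
              }

              // Bob's tablet reports a position near Milano, so he gets the Milano doc set.
              const bobSite = nearestSite(45.47, 9.18);
              const bobDocs = repository.filter(t => bobSite.attributes.includes(t.tag));

            And from that customized core, the links inside each topic can still support Bob’s own “wayfinding”.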

          2. Mark Baker Post author

            Alessandro, I agree with you absolutely on the use case. There are many instances where every installation is unique, and in cases where such unique installations are expensive, expensive to maintain, or where uptime is highly critical to revenue generation, providing a doc set that is customized to that unique machine is very much worth doing.

            But personally I would not choose the words “information seeking users” to describe this situation. It is still a matter of creating documentation for a product in which users will then seek information.

            Of course, because these unique installations share many components in common, we are going to want to generate the unique documentation set for each from a common repository. We may decide to do that statically or dynamically, but that is really just a matter of timing.

            The idea of using GPS to help the user select the right doc set is interesting, though it won’t work in all instances of unique-to-the-instance doc sets, since many of them (such as ships and planes) move around. But that still seems to me to be merely a matter of identifying the right document to consult, not of reversing the initiative of information seeking overall.

            I think it is also important in these cases to remember that the people who maintain such systems often have specific training on both the system and its documentation. That training can significantly change how they use the information set. I wrote about that here: /2014/06/30/docs-that-are-part-of-larger-systems/

            But this is just a disagreement over whether the label “information seeking users” applies or not, not a disagreement about the appropriateness of this strategy in the kinds of environments you describe.

          3. Chris Despopoulos

            Alessandro, this is a perfect scenario for distributing documentation with the specific installation of the machinery. This can be online or paper, BTW… The US military used a Planned Maintenance System (PMS), which is a deck of cards that lists the maintenance activities you must perform for THIS machine at THIS time. As you update the machine, you update the deck to match the current state of the machine. If you install a different subsystem, you insert a different subset of cards into the deck.

            For online docs you can do the same. Serve the docs from the machine in question. As you install different feature sets, you install different docs. As you perform different maintenance routines, part of that includes installing different docs. Then the users of THAT machine get THOSE docs, by definition.

            You could do this for centralized docs (served from your company site, for example) via account management. Track the status of each machine with each account, and any user who maps to that machine gets the correct docs.

            This gets complicated in virtual computing environments, where an application is made up of different components — and you have no preset idea which components a given user will have. Similarly for the Internet of Things, you don’t know which set of things a given user will have. If you keep the documentation with each component or “thing”, and if you can assemble the documentation at the last minute when the user requests it, then you can deliver the correct documentation. AI is definitely not required.

            For our help system at Turbonomic, we assemble the documentation in the browser, on the fly. We can filter the content depending on the user’s situation, and I’ve shown that we can merge content from different remote sources. (We don’t have the business case to deliver that at this time, but we do have the capability.) Imagine a network of software components, where each user account can have access to a different set of components. In such a network, if you can merge content from each component into one doc set, you can then personalize the docs depending on which user makes the request.
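
            Roughly, the merge looks like this (a sketch with invented names and URLs, not our actual implementation): each component serves a doc fragment about itself, and the client assembles them when the user makes the request.

              // Sketch of last-minute assembly (invented names and URLs).
              // Each component in the user's environment serves its own doc
              // fragment; the client merges them at request time, using the
              // browser's fetch.
              async function assembleDocs(componentUrls: string[]): Promise<string> {
                const fragments = await Promise.all(
                  componentUrls.map(url => fetch(url + "/docs.html").then(r => r.text()))
                );
                // The user sees exactly the docs for the components they actually have.
                return fragments.join("\n");
              }

              // A given user account maps to a particular set of components.
              assembleDocs(["https://vm-manager.example", "https://storage.example"])
                .then(docSet => console.log(docSet));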

  9. Mark Baker Post author

    Alessandro, you raise an interesting issue in the distinction you make between users seeking information and information seeking users. (I would point out that information that seeks users does not need to be discoverable — being discoverable is what you need when users seek information.)

    I do think there is a lot of interest in the information seeking users model today. I think this is largely driven by marketing, where the motivation of the marketer to deliver the content greatly exceeds the desire of the reader to receive it. Content marketing was supposed to be about acknowledging that trying to push content on users was a failing strategy and needed to be replaced by one of making information available that users actually wanted to read.

    That original vision of content marketing seems to have died, probably because creating content that readers seek out of their own accord is difficult and you have to wait a long time for it to have an effect. Impatient marketers now seem more intent on trying to push content onto reluctant audiences in a more targeted way. The result, as far as I can see, is that every time I buy something that I have researched online, I get bombarded by ads for that thing for the next six months. It is always after I have bought the thing that the ads start showing up, so there is your problem with current targeting methods. The actual content I used to make my decisions on those purchases was stuff I found for myself, not stuff that was pushed to me by the manufacturer. So are marketers interested in better targeting methods? Sure. Is this wisdom or impatience? Not sure.

    But in tech comm, the notion that we can successfully create content that seeks users seems even more dubious. How are we to know when a particular user attempts a new task or has a particular problem? And this is only scratching the surface of the problem, because it does not address all of the rearranging of mental furniture that the user has to go through to solve a tougher problem.

    In short, I think most tech comm is going to have to operate in the user seeking information model, and that we would do well to focus more of our energy and design on that model. But this is a good subject for a new post.

  10. Diego Schiavon

    For a bit of pop culture…

    In End of Exile, a Ben Bova sci-fi novel from the 1970s, “chatbots” figure prominently. The novel recounts the story of a band of orphans inhabiting a spaceship that left Earth generations past.

    The maintenance instructions for the failing machinery on the spaceship are voice recordings, but are basically useless for the children: they do not understand the instructions, and the last grown-ups said, before they disappeared, never to touch the machines anyway.

    But how many people are going to want to stay on the line for hours with a chatbot? Only the lonely.

    The “lonely” in the story is a boy, Linc, who tinkers in solitude until he manages to make sense of the chatbots, repair the machines and save his siblings from starvation.

    But the process takes lots of time and patience. The bots are indeed stupid and blind, and by no means support wayfinding. Linc gets repeatedly stuck in dead ends. It is only thanks to a deus ex machina that the children are eventually saved.

    A few floors above the children hides Jerlet, the last grown-up, who could have easily taught Linc how to repair the machines, and who is shocked that the children failed to educate themselves with the chatbots and grew up as ignorant savages.

    The author probably did not mean to write a book about the paradox of sense-making, technical communication and instructional design, but he did anyway.

    The fact that it takes literary gimmicks in order for the chatbots to actually work is quite ominous. It is as if the author had said: “Look, actually learning something from these machines is so unlikely, that I will never finish the story unless I take a few shortcuts in the plot. Here, a touch of magic and everyone lived happily ever after.”

    Reply
    1. Mark Baker Post author

      Sounds like Ben Bova anticipated John Carroll by a couple of decades. That business of tinkering supplemented by occasional dips into the information source is exactly what Carroll observed.

      And I think it illustrates the point that communication is really all about stories. Any given information set is constructed on the assumption that its readers understand a set of stories. “Click OK” is a reference to a complex story that would make no sense to someone who has not worked with a WIMP interface. WIMP interface is a reference to a complex story that would make no sense to someone who has no background in interface design.

      A human interlocutor can realize that you don’t understand the same stories they do and can teach you those stories, often by guiding you through a particular set of experiences, since this is the best way to instill those stories in people. We are a very long way from a chatbot being able to do that.

      Without that, you are left to try to piece the stories together for yourself, which, as Carroll showed, people tend to do anyway even if systematic documentation is available to them. For that, hypertext is always going to be a superior medium to a voice interface.
