The Paradox of Help Quality

By Mark Baker | 2013/06/20

 Why does help still kind of suck even after so many years?

Tom Johnson asks this poignant question in his post Do We Need a New Approach to Help? Why Are Users So Apathetic Towards Help after 50 Years of Innovation?

Tom provides a great survey of the trends and ideas in help design, starting with John Carroll’s seminal work on minimalism, and suggests multiple possible ways forward.

I think there is enormous promise in many of the paths Tom invites us to explore, but at the same time, I am struck by the need to recognize that there is a limit to how much help help can be, and a real danger in trying to do too much.

One of the lessons that Carroll stresses in The Nurnberg Funnel, and which is all too easy to forget, is that there is a hard limit to how much any form of instruction can help people learn. Learning is, essentially, resolving a mismatch between the mental model you have and the thing you are trying to learn. In a sense, the process of learning is a series of challenges to the real world to prove itself different from our current mental model. Thus, as Carroll emphasizes, making errors is an essential part of learning.

You can’t document a way around this process of challenge, error, and response. As Carroll puts it: “[learners] are too busy learning to make much use of the instruction. This is the paradox of sense making”.

Errors are a result of the learner’s challenge to the real world. In this sense, they are neither fruitless nor a waste. An error is information. This accords well with information theory, which tells us that the best experiment is one that has a 50/50 chance of success or failure, because the result of such an experiment yields the most information by cutting the list of possibilities neatly in half. In a very real sense, therefore, making errors is learning.
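
(A quick aside on the arithmetic, which is not in Carroll: in Shannon’s terms, an experiment with success probability p yields an expected

    H(p) = -p·log2(p) - (1-p)·log2(1-p)

bits of information. H peaks at exactly 1 bit when p = 1/2; an experiment that succeeds 90% of the time yields only about 0.47 bits. The 50/50 experiment is thus the most informative one possible.)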

There is thus a hard limit to how successful help can be. If it stopped users from making errors, it would stop them from learning. Thus minimalism’s emphasis on encouraging users to act and helping them to recover from errors when they inevitably make them.

In this sense, the ideal way to improve the learning experience would be to ensure that all the reader’s experiments had a 50/50 chance of success, though it is difficult to know if that could be achieved in a way that applied to all users, or how you would measure it.

Also, of course, the ideal help experience should excel at helping users recognize and recover from errors when they make them.

It is doubtful, though, that we have actually reached the limit of help quality. Part of the problem is that it is very difficult to know when you are at the limit. Part of the problem is that even when you have reached that limit, learning will remain for the user a frustrating and error-prone process, which makes it very difficult to say that you have done all you can for them and walk away. But the worst part of the problem is that when you attempt to push beyond that limit, you inevitably end up making the help worse, not better.

We need to recognize something important about how we generally try to improve help: We see that users are making errors. We regard that as a failure of help (though it is really a necessary part of learning). We look for ways to reduce those errors, and inevitably what we do is create more systematic instruction — which does not work because it does not match the learner’s mental model.

Minimalism says let them make mistakes, and help them up when they fall. That is a hard discipline to follow. Our instinct is to try to prevent them from falling. But like a child that eventually rebels against parental restraint, saying, firmly, “I can do it myself,” the learner simply ignores those instructions and goes about sense-making in the only way that really works for them. There is no real learning without scraped knees and bloody noses.

So, let’s not ignore the possibility that the reason help still sucks is that our efforts to make it better are actually making it worse.

But let’s also not forget that all this applies to learners, not experts. Learners don’t follow procedures because the procedures don’t fit their mental models. But our audience is not just learners. It is also experienced people, who need to look up the specifics of certain tasks or the values of certain parameters. Because they are experienced, their mental model is a good match for the product and the task, and so they can follow procedures.

We can’t stop writing the documentation that experienced people need, and we also can’t stop inexperienced people from consulting it as well.

Some blundering around is therefore inevitable. And when users blunder, product management will inevitably come knocking on tech comm’s door demanding that we do something to fix it. That, in the end, may be why it is so hard to maintain minimalism in general, and good help in particular. Good help lets the user fail, but good luck explaining that to product management.

Worse still, as Carroll also noted, the user’s idea of what they want in a help system is not the same as what actually works well for them. Users are quite deluded about how they learn — or perhaps unwilling to admit to it — so they ask for systematic instructions, and declare their intention to use them, but in practice they don’t. They quickly abandon them and start hacking at the user interface. If you are asking for customer feedback on your docs, and acting on it, chances are you are making your help worse by making it match customer expectations rather than supporting how customers actually learn.

What, therefore, must we do? Maintaining an ideal minimalist help system can be very challenging indeed, both because it is hard to measure, and because it is hard to defend, even to the people who benefit from it. What we can do is to create content in a way that recognizes that readers forage through the content set, stringing together their own ad hoc curriculum as they go. We can facilitate this by taking an Every Page is Page One approach to our documentation (a small sketch of how these guidelines might be checked follows the list):

  • Create Every Page is Page One topics that are self-contained and serve a specific and limited purpose.
  • Establish your context clearly in every topic.
  • Conform your topics to a well-defined type so they are consistent, complete, and easy to navigate.
  • Assume the reader is qualified to read this topic, and provide links to background for readers who are not.
  • Keep each topic on one level and provide links for users to change levels as and when it suits them.
  • Link richly along all significant lines of subject affinity so that readers can easily go wherever their mental model and their challenge to reality leads them.
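
To make these guidelines concrete, here is a rough sketch, in Python, of what checking them mechanically might look like. The topic types, section names, and sample topic are all invented for illustration; no real EPPO tooling is implied.

    # Each topic type prescribes the sections a conforming topic must contain.
    from dataclasses import dataclass, field

    TYPE_TEMPLATES = {
        "task": ["context", "steps", "result"],
        "concept": ["context", "definition", "examples"],
    }

    @dataclass
    class Topic:
        id: str
        type: str                                     # must name a known template
        sections: dict = field(default_factory=dict)  # section name -> content
        links: list = field(default_factory=list)     # ids of related topics

    def problems(topic):
        """List the ways a topic falls short of the guidelines above."""
        issues = []
        template = TYPE_TEMPLATES.get(topic.type)
        if template is None:
            return [f"{topic.id}: unknown topic type {topic.type!r}"]
        for section in template:                      # consistent, complete, navigable
            if not topic.sections.get(section):
                issues.append(f"{topic.id}: missing {section!r} section")
        if not topic.links:                           # rich links along subject affinities
            issues.append(f"{topic.id}: no links to related topics or background")
        return issues

    # A conforming sample topic reports no problems.
    sample = Topic(
        id="inserting-inline-graphics",
        type="task",
        sections={"context": "...", "steps": "...", "result": "..."},
        links=["anchoring-graphics", "text-wrapping"],
    )
    print(problems(sample))  # -> []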

 

26 thoughts on “The Paradox of Help Quality”

  1. Ellis Pratt

    We seem to be coming back to the problem that Help may be the wrong term to use. Maybe we need a set of new definitions for the different types of content?

    I agree with the importance of establishing your context clearly in every topic (and the other points).

    1. Mark Baker

      Thanks for the comment, Ellis.

      Indeed, Help may be the wrong term. Certainly it has a lot of baggage associated with it. The old distinction between the manual and the help seems largely moribund (if for no other reason than that today they are invariably the same content organized in the same way).

      The worst thing you can do to content online is to break it up into separate pieces (and that includes having a separate knowledge base). It should all be linked together, and it is really all just “documentation”.

There are (or should be) no intermediate-sized units on the book or help system scale, just Every Page is Page One topics that link richly to form a complete documentation set.

      1. Holt Clark

        “There are (or should be) no intermediate-sized units on the book or help system scale, just every page is page one topics that link richly to form a complete documentation set.”

        Sounds a bit like Wiki to me. Does that make Wikipedia the Help documentation for human life and all existence?

        1. Mark Baker Post author

          Hi Holt. Thanks for the comment.

Yes, a wiki is a good example of an Every Page is Page One medium, and Wikipedia is a good example of how to make a vast content set accessible and navigable without overwhelming the reader with its sheer size. Wikipedia may have a way to go, though, before it covers life, the universe, and everything.

  2. Diana Logan

    Great article – I like the mental model differentiation between novice and expert users, and also think this is spot on:

    “so they ask for systematic instructions, and declare their intention to use them, but in practice they don’t. They quickly abandon them and start hacking at the user interface.”

    We recently ran a user assistance “lab”, asking users to carry out a specific task and offering them different types of help – step-by-step, video clip, or no help. The response was mostly “video clip” or “just let me have a go”. Once in the task, users were very keen to just be left alone. It was a joy to watch, and the hardest part was sitting next to them and not helping.

    I like the parental analogy too – misaligned buttons or vests inside out are OK, but step in when the sweater is on back-to-front or the shoes are on the wrong feet – on a school day anyway!

    1. Mark Baker

      Thanks for the comment, Diana.

      Yes, that is pretty much what Carroll’s experiments showed as well.

  3. Tom Johnson

    Mark, I like your insight here. I agree esp. where you say, “the ideal help experience should excel at helping users recognize and recover from errors when they make them.” Of course that’s hard to do, though. We can document error messages, but it’s another thing to actually provide help instruction in the error messages. To really help users recover from mistakes, wouldn’t we need to build in more help directly inside the interface? Or even better, build logic to show messages when users are making poor choices? I agree that such an approach would be ideal. But it would require much tighter integration with application development.

    Re the other points, yes, those are all good insights. Learning is hard, which is why users often avoid it. Mooers’ Law states that people will only try to look up / learn information when it’s more painful not to look it up / learn it.

    Here’s a question for you. In your years of experience as a technical communicator, what have you done that connects best with users?

    1. Mark Baker

      Thanks for the comment, Tom.

      I think there is certainly some scope for more help in the interface, and for just plain simpler interfaces. But I think it is important to recognize that there is a big difference between a system error and a user error. Many of the errors that Carroll’s subjects made did not produce system errors, because the users did things that were perfectly valid system actions, but which did not accomplish their task, because their mental model of the system was wrong.

      So error recognition and recovery is not just about recovering from error messages. It is about recognizing the mismatch between your mental model and the model of the system. That’s where Carroll’s paradox of sense making comes in, because there is no way to help the user reset their mental model by holding their hand more tightly. Insofar as building more help into the interface constitutes holding their hand more tightly, therefore, it really isn’t the solution.

      In this respect it is a lot like being a parent. It is not about never letting your kids fall down. It is about setting up situations in which it is relatively safe to allow them the opportunity to fall down, and then picking them up and comforting them when they do fall. Those falls are painful to the parent, but necessary for the child’s development.

      In all my years of experience as a technical communicator, the best thing I ever did to connect with users was to moderate the OmniMark User’s Group mailing list. This is different from simply reading posts. I actually answered questions, and went back and forth with users until the answer was clear. That way you get to find out exactly what the barriers to understanding are and where the user’s mental model is not a perfect match for the model of the product.

  4. Jonatan Lundin

    I see a danger when we generalize the findings from Carroll and his team to all users of all technical products. Carroll mostly investigated computer software in office environments. Generalizing makes us believe that all users learn by doing, exploring the product, and that when stuck, they need guidance or need to recognize errors and recover from them. This is not how it happens in many other domains. Users are not only learners or experts. There are many “cognitive types”: risk-takers, non-risk-takers, methodical, irrational, etc. A methodical non-risk-taker starts by searching for information before acting, to confirm a mental model and reduce the risk of damage or injuries. I know this from my own research.

    Many users are not in a learning position when using a product; they simply want to get something done, meaning they want to use the product and leave it without having learned anything (remember the goal of the active user, which is throughput).

    I argue that one thing that can be generalized is the fact that users, regardless of being “irrational explorers”, “methodical confirmatives”, etc., end up having an information-seeking goal, which can be formalized and expressed as a question. Many questions are indeed about recognizing an error and how to recover from it. The interface between the product usage situation and the information-seeking situation is the information-seeking goal.

    In fact, the information-seeking process is just a different problem solving strategy from the one users employ when trying to get the product working when it is not doing “as the user thinks it should”. I have described my view on the information-seeking behaviors among users of technical products here: http://www.excosoft.se/index.php/about-us/blog/item/6-do-you-know-how-to-design-for-the-searching-user?

    1. Mark Baker Post author

      Thanks for the comment, Jonatan.

      I agree there are perils in overgeneralizing Carroll’s conclusions. The one that strikes me most is that minimalism is designed as a response to the paradox of sense making. But for a user whose mental model matches the system, no such paradox exists. Experts need documentation too.

      I’m not sure that the fact that he studied computer users is particularly a limiting factor. I see no reason to believe that the paradox of sense making is unique to computer users; indeed, it would be very difficult to explain why it would exist only in this domain. The specific remedies he suggests might be more domain-specific, and even temporally limited, but the underlying observations seem much more solid.

      What the paradox of sense making shows us is that, contrary to what we might assume, people with the widest cognitive gap are the least likely to rely on documentation, because their cognitive gap separates them from the documentation as much as it does from the product.

      This appears to be true regardless of learning style. Carroll reported many users who self-reported as systematic learners who do a deep dive into docs first, but observed that that was not how they actually behaved in the experiments.

      I agree that many users are not attempting to learn, that they just want to do without learning. But that desire does not seem to exempt them from the paradox of sense making. In many cases, for instance, people select the wrong procedure because they have the wrong mental model. Even if they follow it systematically, they still don’t get the result they want. In many cases, they perceive that it is not going to work and abandon the attempt, but they still don’t know what the right procedure is to select.

      We can also look to other research that discovers the same patterns of behavior in other contexts. Examples include information foraging theory and studies like this one: http://www.nngroup.com/articles/website-reading/

      I think there are many in tech comm and content strategy who are still seeking a Nurnberg Funnel. I believe Carroll was right, and a Nurnberg Funnel is not possible, or, at least, not achievable using content technologies, and that attempts to make content and findability work perfectly actually make things worse, not better, for the reader.

      1. Jonatan Lundin

        Hi Mark,
        “I see no reason to believe that the paradox of sense making is unique to computer users, indeed, it would be very difficult to explain why it would exist only in this domain”

        I interpreted your post as a statement that the observed behavior Carroll reported from software users, about users trying to make sense by trying the product out, is the only behavior users employ across domains. I’m saying that the sense-making behavior, which means that users are not receptive to documentation, is a universal human behavior across domains, but not the only behavior users employ. Some users are methodical non-risk takers who actually consult the documentation before usage since they know that their mental model is not in accordance with the mental model that the system designer had when designing the product. I agree totally that there is no reason to believe that the paradox of sense making is unique to computer users.

        In fact, my whole work with SeSAM is based on Carroll’s work and also Donald Norman’s view that there is often a gap between the system designer’s mental model and the user’s interpretation and creation of a mental model when thinking about and seeing the product. I see a problem, though, in the information-seeking behavior of users. For example, due to an “invalid” mental model, users ask irrelevant questions, and when finding an answer that they think matches the question, they read it to confirm the belief, and when they find some passages that could be interpreted as a confirmation, they conclude that their belief is correct. So they are sometimes not actually reading an answer to understand what the sender (=writer) is saying. Thus a user may find an answer that seems to “answer” the question, when in fact it does not answer it at all. This is of course not valid for all users. I have elaborated on these issues here: http://www.excosoft.se/index.php/about-us/blog/item/6-do-you-know-how-to-design-for-the-searching-user?

        I believe it is possible to design a search user interface that allows “learning by searching”, thus providing clues and hints to make users aware of an incorrect mental model while searching. As such, error recognition and recovery happens in the search user interface and not in the actual content. It may work for some users but not for all.

        1. Mark Baker

          “Some users are methodical non-risk takers who actually consult the documentation before usage since they know that their mental model is not in accordance with the mental model that the system designer had when designing the product.”

          Carroll had several subjects that described themselves as methodical non-risk takers and swore they were going to read the documentation fully before usage. None of them did.

          This could be interpreted in a number of ways:

          1. Perhaps these users were deluded about how they actually learn. Perhaps they thought this methodical approach was virtuous, or correct, and so reported it as their style, or actually attempted to behave that way, but failed, and resorted to exploration.

          2. Perhaps it is not possible for someone to shift their mental model based on instruction alone. They need concrete interaction with the new model in order to fix it in their minds. (There seems to be very large support for this in research and educational practice.)

          3. Maybe the content used in the experiment was just not written well enough or organized well enough, and had it been better, the users might have read it through, changed their mental model, and been able to use the product successfully. This is an enormously difficult variable to control for in any experiment on content use. However, subsequent studies don’t seem to have seen radically different behavior.

          4. Maybe the specific style of content they were given — systematic instruction — was the problem. Perhaps another style, other than systematic or minimalist, would have performed better. What makes Carroll’s book powerful is that it used neither of these techniques. It is full of storytelling — detailed accounts of the experiments and the interaction of the subjects with the content and with the researchers. We know that storytelling is the most powerful form of human communication, but few technical manuals use it, and few of those that do, do it well. (Perhaps storytelling is the one form of content that is a workable alternative to experience for the user.)

          So, perhaps the methodical non-risk taking strategy is unsuccessful because it is inherently at odds with how humans really learn, or perhaps it is because the content they are given isn’t good enough to support their strategy. Either way, though, it doesn’t work.

          There are some skills that nature does not provide in anything like the quantities required, and storytelling is one of them. It is therefore doubtful that a technical communication strategy based on great storytelling can be made to succeed. (I’m not convinced that minimalism can be made to succeed generally either — people would rather keep trying to build a Nurnberg Funnel, and minimalism is not a safe CYA strategy.)

          I doubt you are going to have much luck creating a search interface that can help people recognize that their mental model is incorrect. It seems well established that the ideal search interface is one with a text box and a button marked “Search”. But I shall still be interested in seeing what you come up with. Sometimes the thing that works appears contrary to all expectations.

          1. Alex Knappe

            The perfect help system is a system the user doesn’t even recognize as such in the first place.
            Tutorials in some MMORPGs already fulfill this task perfectly, giving completely inexperienced users the basics for handling those complex social simulations while being easily skipped by the more experienced players.
            Good tutorials give you a head start. This could easily be ported to other systems.
            Take a Word document for example. A good tutorial would show you the basics of using Word while creating a document layout that could be used as your own template later on.
            More sophisticated would be some bot-like functionality that accompanies you. Some chat systems use bots as their help system. Those bots react to specific commands or, in more sophisticated cases, to certain keywords and context.
            During my studies of computer science, I experimented a bit with such algorithms and an IRC (Internet Relay Chat) bot. The bot reacted to some keywords and answered accordingly. Other users couldn’t tell they were talking to a bot, and I logged conversations of up to half an hour.
            So acceptance of such a system – if implemented correctly – should be much higher than that of a classical help system.
            If we could manage to combine the classical content with something similar (maybe disguised as a premium online chat assistant), we would be able to transport the needed information right to where it belongs.
            For the time being, this would only work for software, but as help systems get integrated more and more into all kinds of devices, there’s a chance we could go down that route altogether.

          2. Mark Baker Post author

            Thanks for the comment, Alex.

            It sounds like you do believe it is possible to build a Nurnberg Funnel — to provide learning without error — or am I misinterpreting you?

  5. Jonatan Lundin

    Yes, I believe you are right that there is an inherent “problem” in the behavior users exhibit when using a product. I see it in my research as well.

    In a study I did together with a colleague in the Netherlands, we saw interesting patterns of behavior. We ran an experiment where users were asked to do a task with an unknown tool. They had the manual available. Users went back to the same topic, which sometimes included the solution to the problem, several times during the course of solving the task, but did not manage to solve it. They apparently did not read the topic.

    This behavior can be explained as follows. Users are trying to build an explanation (sense making): to find the reason for, and solution to, a perceived problem when they are not able to use the product as they want. When trying to make sense of the product while exploring it, they pick up clues from the interface that may support the explanation they have built. At some point, the user leaves the product and searches for information.

    The user is really looking for evidence in the information that supports the built explanation. As soon as the user finds “evidence” in the topic, s/he goes back to the product and tries a slightly modified solution. The user’s interpretation horizon is somewhat skewed, as information is interpreted in light of the built explanation. As a result, the user perceives the topic to be saying something it is not. The user is so focused on validating or falsifying the hypothesis constructed from the mental model that s/he is not receptive to information that contradicts what s/he is trying to investigate. It just requires too much mental effort to change track and start to build another explanation, that is, another mental model, based on information written in the topic. Users seem to ignore any fact that contradicts the built explanation (or theory, or idea).

    What I also saw is that users do not read the whole topic, just to the point where they find the supporting evidence, which was sometimes in the first sentence. This behavior really has an impact on how a topic must be written. A long topic is seldom read, which is one reason why I think a topic must answer one (1) user question and clearly signal what question it answers. A long EPPO topic may in some cases fail.

    This behavior also has an impact on search user interface design. A Google search box may also fail, as the user enters keywords fetched from the built explanation, which may be completely wrong. And the user uses “wrong” or “vague” keywords, as our ability to express our information needs is poor.

    Who really is a non-risk taker depends, I believe, on the domain. There is often no risk in testing or playing around with office software. There are true methodical non-risk takers, who not only say they are but really are. Consider for example a service technician in a nuclear power plant. Certain service technicians in this domain, not all, evaluate the risk of doing something wrong, which could be the result of following a vague mental model. If the risk of causing damage or injury is judged to be high, that fact will affect the user’s cognitive state and how inclined the user is to pursue a perceived mental model. Such a user will probably not explore and play around with the product by executing an idea stemming from the mental model. I saw this behavior in our study as well, meaning that some respondents went to the manual before starting the task.

    I agree totally with your statements about the reasons why users do not go to the documentation before usage. The discussion we are having is an important and interesting one, which is really needed in our field.

    1. Mark Baker Post author

      Your results certainly sound consistent with Carroll’s experiments, and your explanation of the data largely mirrors his.

      Re risk-taking in safety critical domains, I think it is worth noting that in such domains they generally train people using simulation and apprenticeship. No one is ever allowed to just read the manual and get behind the controls. In effect, these environments are acknowledging that error is a part of learning and using multiple strategies to allow learners to learn, and thus to make errors, without compromising the safety of the system.

      I think we have to be a little careful about concluding that a topic must be pared down to the amount a user will read in one go. The aim is not to create a topic such that the reader cannot possibly not read all of it. If the reader refers to the topic several times in the course of trying to make something work, the topic may be doing the best job it possibly can, given that challenge and error are unavoidable parts of learning. Splitting the information up might simply create more navigational overhead. We need to strike the right balance — which is by no means easy to find.

      We also have to bear in mind that there are other users of the documentation besides learners. Experienced pilots and power plant workers don’t carry the whole of their systems in their heads, nor trust to memory. They need explicit documentation, which they can read because they already have the right mental model.

      And part of training the new worker in these environments has to involve training them on the real documentation set. It is a functional part of their work environment and must be learned like anything else. So at some point, the learner has to graduate to the real docs, and, people being what they are, they will usually go there before they are ready.

      In other words, just as we have to accept error in the use of the system as part of the learning process, we also have to accept error in the use of the documentation.

      1. Jonatan Lundin

        What we need are mechanisms in the user assistance design that help the user “unfocus”, meaning pathways that help the user let go of the built explanation in focus, stemming from an invalid mental model.

        I’m thinking of, for example, gamification: engage the user in playing something to gain “search bonus points” that can be used to get something (access to the “launch”). The game must be built to allow the user to build a valid mental model.

        Or add a speaker voice as a complement to the text. Just as you have a commentator when watching a soccer game on TV to explain and clarify what you see on the screen, you need a commentator who can, for example, explain an image (which is what you get in a video). Carroll thought about incorporating gamification etc. into the actual product to enhance learning by exploring. What we need to discuss is if and how we should do it in the user assistance (when it is not embedded in the UI).

        1. Mark Baker Post author

          It’s an interesting thought, Jonatan. The first thing that occurs to me is that the user does not want their mental model broken down — few experiences are more upsetting — so even supposing that we could devise mechanisms that help people break down their model, would they use them? To use the jargon of information foraging, would they have a good information scent?

          Gamification seems to suggest a different approach: get them off the information scent altogether and employ a different form of motivation. Take away the catnip and replace it with a ball on the end of a string.

          I suppose there might be something in that. The paradox of sense making, as Carroll expresses it, is: “to be able to interact meaningfully, one must acquire meaningful skills and understanding. But one can acquire these only through meaningful interaction.” So in a game context one can act meaningfully within the game and transfer the acquired knowledge to the real world. But does that really change anything? You still need to acquire meaningful skill and understanding in order to interact meaningfully in the game.

          Gamification might increase motivation, but it doesn’t solve the paradox of sense making. Of course, it doesn’t have to. All it has to do is to provide as good an accommodation with the paradox of sense making as minimalist documentation. Maybe it can do that, at least in some cases.

          I’m not sure how general its applicability is, though. I find most forms of gamification tedious, and I think many other people do too. And I also think it is really hard to do well.

  6. Alex Knappe

    Mark,
    you’re misinterpreting some of it 🙂
    What I’m saying is that it might be possible in the (more or less) near future to build systems that provide help in a more natural way.
    The user will still go down the route of trial and error, but if we are able to provide help in a way the user finds natural, we will have passed another milestone on the way to the perfect help system.
    Reading a wiki (even if it is populated by perfect EPPO topics), a piece of documentation, or anything similar is not natural. It isn’t part of the product (even if legislation thinks otherwise). If you have to use them, you are distracted from the product – and that is the last thing a user wants to be.
    We need interactive help systems that are embedded deep inside the product, and an interface to fill them with relevant data.
    Embedded help systems should to some extent learn the behavior of the user and react accordingly, offering help when it seems appropriate and shutting up when not needed.

    If you, as a user seeking help, can ask someone who you think is able to answer your questions, you will do so. This is why hotlines, mavens, and forums work so well.
    If your product offers such a “natural” way of providing help, users will most certainly appreciate it.

    Look at it as a system with two lines of defense:
    The first line is disguised as a chat window. People know how to use those. They will start to ask questions if they’re stuck with the product. Your intelligent system checks in the background (according to keywords and the recorded habits of the user) whether it can find a useful topic in the second line of defense. If not, the system itself asks a question to elicit more information.

    The second line of defense is the actual information: EPPO topics, videos, wiki entries, or whatever.

    Now imagine this conversation between an intelligent embedded system and the user:
    U: hello, can you help me?
    H: hi, sure, how can I help you today (generated generic question)
    U: I have this issue with positioning #keyword# graphics #keyword#
    H: What is wrong with it (another generic question, as information doesn’t lock on a topic)?
    U: the image #keyword# always appears somewhere in the document #keyword#, when i insert #keyword# it.
    H: so? (generic question, still no lock on a topic)
    U: i want to have it in the line #keyword# with the text #keyword#, not somewhere
    H: so you want to know how to insert inline graphics, right? (keywords brought up a topic)
    U: yes.
    H: look, here’s a nice video tutorial, that shows you how it is done: #link# (confirmation and linking info)

    In this Q&A scenario, the first line of defense prepares the ground for the second line of defense, in a way the user is more likely to accept than a blank search field or an overloaded TOC or index.

  7. Alex Knappe

    Mark, you did indeed misinterpret my post 🙂
    What I was implying was that we need some kind of semi-intelligent, natural way of communication between the help system and the user.
    This has to be embedded in the product, as moving away from your current exploration of the product to read some wiki or documentation disturbs the user experience with the product.
    Generally speaking, we need to give some basic information along with the product for complete novices, in the form of a useful “on the job” introduction (a tutorial) that can be skipped or entered at any point (for trained users and experts).
    After completing the tutorial, the user is on his own at first. He’s on the path of trial and error, answering the first questions on his own.
    But at some point, the user will be stuck. And this is where the second and main part of the help system should kick in.
    If done in the form of a chat client embedded in the product (with an interface for the author), the user will start to behave as if he’s talking to a hotline, a maven, or his buddy next door.
    Chat windows are something users are familiar with. They pose no mental hurdle.
    If you implement a semi-intelligent engine behind it that answers the user’s vague questions with questions of its own, with the goal of refining their question, you can narrow down their problem.
    The engine should work like a game of bingo. It keeps asking questions until a piece of content in the background yells “Bingo, this is me!” The engine then provides the solution: a helpful link, a wiki page, or whatever has an affinity with the given keywords.
    It is also the responsibility of the engine to use more or less generic questions to reach this goal (like “so?”, “so, we’re talking about xyz?”, “your problem is xyz?”). It is of the utmost importance that the questions appear as natural as possible.
    When the user doesn’t recognize he is talking to a machine, you have the perfect result.
    In your last few posts, the discussion about helpful search and navigation reminded me of this old experiment of mine. At the time it was a test of my programming skills, done just for fun, but I see quite some potential in it.
    The problem with most search engines is that they give the user the chance to ask only one single question. Given that most users don’t know exactly what to type into that blank search field (we’re talking about google-fu), it is no wonder they are weak at searching.
    A system that helps them refine their question, to think about it and make it precise in a few steps, would narrow down any question the user might have and point them to the most promising answer(s).
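
    A toy sketch in Python of the kind of “bingo” engine described above; the topics, keywords, lock threshold, and canned questions are all invented for illustration, and a real engine would need genuine language handling:

      # Toy "bingo" engine: keep asking generic questions until one piece of
      # content accumulates enough keyword hits to yell "Bingo, this is me!"
      TOPICS = {
          "inserting inline graphics": {"image", "graphics", "insert", "inline", "line", "text"},
          "page numbering": {"page", "number", "header", "footer"},
      }
      GENERIC_QUESTIONS = ["What is wrong with it?", "So?", "Can you tell me more?"]
      BINGO = 4  # keyword hits needed before a topic locks

      def bingo_bot(user_turns):
          hits = {name: 0 for name in TOPICS}
          for turn, utterance in enumerate(user_turns):
              words = set(utterance.lower().replace(",", " ").split())
              for name, keywords in TOPICS.items():
                  hits[name] += len(words & keywords)
              best = max(hits, key=hits.get)
              if hits[best] >= BINGO:  # a topic yelled "bingo"
                  return f"So you want to know about {best}, right?"
              print(GENERIC_QUESTIONS[min(turn, len(GENERIC_QUESTIONS) - 1)])
          return "Sorry, I could not narrow your question down."

      # Replaying the example conversation from the earlier comment:
      print(bingo_bot([
          "I have this issue with positioning graphics",
          "the image always appears somewhere in the document, when i insert it",
          "i want to have it in the line with the text, not somewhere",
      ]))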

    1. Mark Baker Post author

      Sounds a lot like Clippy. Does Microsoft’s disaster with Clippy mean that people inherently hate that type of interface, or does it mean that Microsoft implemented it poorly? That’s hard to tell.

      But what the paradox of sense making suggests is that the user can’t refine their question, no matter how the system asks them, because a meaningful question can only come from a meaningful interaction. Jakob Nielsen has reported the apparently contradictory findings that people are both increasingly search dominant and not very good at searching. The paradox of sense-making may help explain this contradiction. Searching may be the most effective behavior the user has available to them, not because the success rate is particularly high, but because the success rate of all information seeking is low, due to the searcher’s lack of meaningful experience.

  8. Alex Knappe

    whoops, double posted (somehow the page didn’t show my first answer).
    but I guess now you get my point at least 🙂

  9. David Worsick

    When I use Help myself for something new, whether online for a new program or some manual for a tool (e.g. a digital SLR camera), I first look for a brief summary section, if available, often after looking at the user interface (whether computer graphic or button-and-switch-laden hardware). After that introduction, when I access help (or a manual), I am hoping to avoid errors, not to generate new errors and certainly not for the fun of learning (I read history and science books for that: old-school paper mode).

    The learning approach may work for consumer toys, but our software is built to help clients do their jobs, and if these clients make too many errors, that may do IN their jobs. I really don’t think users are looking for a Californian New Age Learning Experience when they click Help. They are looking for a specific answer that they don’t currently have and they want to find it quickly. When they’re fiddling around on their own, they don’t bother to click Help. Because of that, I bug the programmers to include crucial instructions right on the user interface, so users can avoid mistakes before they make them.

    I’m also leery of letting people fail on software that can take an hour to process its solutions (our software analyzes seismic data in very complicated ways) and may lead to decisions proposing (or rejecting) very expensive projects. If these people have actually looked up the Help before they started their work, I think I owe it to our clients to ensure that the Help topic will either be complete, or have links to completion.

    Kids can run around and fall down, skinning their knees, because they’re kids. But when my son was learning to drive, I wanted him to learn through a very error-unprone method. Employees are expected to avoid falling too often if they want to keep being employees, as they aren’t kids (there are laws about hiring kids, don’t you know?). My job is to help these employees and to help their managers like what these employees have done, not to promote a learning theory.

    I also think we’re forgetting that the major users of the Help are probably the experts (the mavens) who know that other employees don’t like using Help and are actually using these skills to strengthen their jobs (Look! I can read the manual!). These experts then become the ones who ensure that everybody else can use our product to its fullest and not risk their jobs (and our product’s popularity). So testing only the “other employees” may be a faulty approach.

    I expect that these mavens never end up in these research studies: they’re too busy being experts. I’m not sure where the researchers are collecting their test subjects, but I suspect they may not be the people using the Help. And anyone who claims to read the manual fully before turning on the software is either expecting only a tiny brochure or bluffing the researcher.

    I also find Carroll’s comment about the difference between what users want and what they need to be somewhat patronizing; a sort of “no, you don’t want this, child, you want that instead” approach.

    What I think most of the users really want from help is exactly the same thing they ask the local expert for: “Can you help me with this problem?” Yes indeed, “Help” is the right word for Help, since that’s what they ask the expert for. If only we could program the Help to act the same way as these experts.

  10. David Worsick

    Sorry for the lateness of this post. I have one question I think we should ask: are the people they’re testing the people who make the most use of the Help? The testers seem to test people who are not trying to be experts in the field handled by the product. Not only that, there may be a concentration on testing products meant for widespread, often non-commercial use, namely the type of product and audience Minimalism is aimed at. However, products intended for specialized, commercial markets, particularly those intended for companies large enough to have experts (either official or unofficial “mavens”), may have a totally different type of user: the local expert. All queries by users would then go verbally through this expert, who can help the user hone the questions and refine the query simply by asking questions. Has anyone studied these mavens and how they use help? The products they become experts in can be very sophisticated and well-funded, with help requirements significantly different from order-form GUIs or word processing. And yet all I have seen so far is an emphasis on the type of market that Minimalism was specifically designed to handle.

