Correcting our Publication Skew

By Mark Baker | 2013/03/11

Technical communicators and content strategists tend to have a skewed view of communications. We think of communications principally in terms of publications. But publications have never been the sole, or even the primary, means by which communication takes place, and in the age of the Web, the role of publications in communication is diminishing. As I have remarked more than once before, the Web is not a publication; it is a colloquium. To move forward in the modern world, we need to correct the long-standing publication skew in the way we look at communications.

This reflection is prompted by a recent blog post by David Farbey. Farbey likens distributing an office document in Microsoft Word to eating a cake before baking it.

If your document is still being developed, or if you are asking others for review comments, that’s fine, but once your document is in its final form you really don’t want anyone to mess with it. You want it to be no-longer-editable. Making your document no-longer-editable means printing it, either to paper, or to PDF, or to some other non-editable format. That is a fundamental change to the nature of the document, in the same way that putting your cake mix in a baking tin in the oven and baking it fundamentally changes its nature too. It stops being a mix, and becomes a cake. Hurrah.

This is a very publication-skewed metaphor. Baking the cake is like publishing the document. The act of publishing changes its state. Once fluid and mutable, it becomes something fixed and unchangeable. The implication here is that this is the proper and intended destiny of every document: that it will go through a period of development and then, when it is “finished”, it will reach a fixed and immutable final form. It will be published.

But this is not actually what most written communication is intended for. Most writing is never published in this sense. It is not intended for finality or for fixedness. It is not a proto-publication. It is part of an extended conversation.


Most written material is not intended for publication, but forms part of an extended conversation. Tech comm and content strategy need to correct for the publication skew in their view of communications. Image courtesy of Ambro / FreeDigitalPhotos.net

There are several reasons why conversation by direct speech is not sufficient for all our conversational needs. The need to extend the conversation over time and distance is part of it. But more importantly, there is a need to allow for fuller statements of a position than the dynamic of face-to-face exchange permits.

Thus, speeches have formed part of our academic and political culture from the beginning. They allow the speaker the space to marshal evidence and multiply supporting arguments in favor of the point they are making. But they do not fix the matter in final form. They are still a form of conversation. One speech is answered by another, which is answered by still another.

It is why we have business meetings today, and why we have prepared agendas for those meetings, and why we often give presentations with slides showing charts and graphs. We do these things because we need to bring a measure of structure and discipline to the conversation: not because we don’t want it to be a conversation, but because we want it to be a conversation that takes in all that is relevant, and considers it in sufficient depth and detail to allow the conversation to result in a meaningful business decision. And it is the business decision, not the publication of the material that led to it, which is the deliverable and the desirable end of this business process.

Another important aspect of conversation is that it allows us to shape the message differently for different audiences. We can adapt what we say to each person in turn, accommodating the message to their level of understanding, their knowledge, and their level of privilege. Most business documents are never final. They are modified and quoted and borrowed from as they pass from one function to another and one level of hierarchy or privilege to another. The message is reshaped at every stage of its journey as it is passed from one person to another. The last restatement of that message is not the final or publishable form. Every restatement along the way was intended for the specific audience it was addressed to. The last revision is the last not because the document has reached perfection, but because the message has run out of people to be passed on to. This is why people want business documents in an editable form.

Most of the technical communications content on the Web is not a publication, but part of an extended conversation. Forums, and sites like LinkedIn and Facebook exist mostly to foster extended conversations. Blogs, with their comment capabilities, and the author’s eager desire for comments, are a form of extended conversation. Beyond that, one sees the emergence of blogversations in which an idea is introduced in one blog post and then taken up and expanded upon or refuted in someone else’s blog. I engage in blogversations quite regularly with folks like Tom Johnson, Val Swisher, and Larry Kunz, and now, in this post, with David Farbey.

Even more publication-like things on the Web become part of extended conversations. If the piece itself is not comment-enabled, it can be tweeted and blogged about and hashed over and dissected on forums.

Still, it is easy to see why tech writers and content strategists may have a view that is skewed toward publication. Most of them work in departments that have traditionally produced publications, and many of those departments have the word “publications” in their name. In many ways “technical publications” and “technical communication” are treated as synonyms. But this is just a skew inherited from the paper world, where distribution was burdensome, and so you needed the polish and the finality of publication to justify the cost of distribution.

And then there is the matter of the craftsman’s longing for an artifact to point to as evidence of their accomplishment. To most people who create and distribute content in a business and on the Web, the content is merely a means to an end. Their validating artifact is the design or the product or the lead or the sale. But for the professional writer, the artifact is the document itself. We need the finality and the fixedness of a publication so that we can have something to point to and say: that’s it. That’s mine. That’s what I made. That is who I am.

Alas, our desire for a validating artifact is not a business justification for choosing business processes and tools that are skewed toward publication when what the business actually needs are tools that facilitate an extended conversation.

Instead, I would suggest, it is on us to learn to validate ourselves in other ways. And at the end of the day, this should not be so hard. Our job is to communicate, and our validating artifact should not be the document, but the living human being who once did not understand and now does. That is the artifact that validates the work of teachers and of parents, and it should be good enough for us too.

 

 

Category: Content Strategy, Technical Communication

About Mark Baker

I am an aspiring novelist and former technical writer and content strategist. On the technical side, I am the author of Every Page is Page One: Topic-based Writing for Technical Communication and the Web and Structured Writing: Rhetoric and Process. I blog at everypageispageone.com and tweet as @mbakeranalecta.

33 thoughts on “Correcting our Publication Skew”

  1. Paul Monk

    Insightful as always, and I very much agree. As I’ve been saying for a while now, the role of technical communicator needs to evolve (and I think it’s already started) from writer to writer/curator. The “publication” of the “official” content is only the first part of a conversation between the producers and the end users across various media (blog posts, forums, etc.).

    However, you’ve highlighted the problem of self-validation (“that’s mine”), but I think the greater problem is *business* validation (“that’s what I produced that proves to my company that I’m valuable and deserve to keep getting a pay cheque”). I agree our only goal should be end-user understanding; it’s hugely rewarding on a personal level, and I’m all on board with that. But that’s a bit esoteric, and the business folks are still looking for metrics to assess our value. (Somewhat bogus, but an admirable pursuit, I suppose.) I have yet to see anyone who’s developed effective tools for quantifying that, as simple as it may seem to many.

    I think parents and K-12 teachers are unencumbered to some degree by the metrics problem (teachers *have* metrics, albeit broken ones: standardized tests); but in both cases, those who perform poorly are at a lower risk of having their positions outsourced than tech writers (though I suppose that can be argued).

    So, I wouldn’t fault you for not having a solution to that problem; I don’t think anyone has one. Still, it’s something to aspire to, I suppose. Until such time as we have one, we’ll go ahead and stick with word count, thanks. I so miss the thump of the 500-page ref man on the desk 🙂

    Cheers,
    Paul

    Reply
    1. Mark Baker

      Thanks Paul.

      You raise a very important point about business validation, and about the difficulty of finding metrics to validate one’s contribution.

      Sometimes it is impossible to find a detailed metric of individual contribution. Sometimes you have to look instead at the gross metrics, the roll-up of all the individual data sets, which will often show a clear trend where the individual data shows only noise.

      This means that business decisions about tech comm may not always be based on the contribution of individual writers or even the entire tech comm department, but on general industry trends. And the general industry trend in tech comm, as in marketing, is toward the Web.

      Show how you can add to the effectiveness of the company’s Web presence, and you may find an argument that clicks.

      Reply
  2. Alex Knappe

    I’m nearly completely with you on this one, Mark. Our output is way too static. Recently, we put up a list of the documentation for one of our customers and the corresponding dates of the last changes. Most of them were not up to date regarding their corresponding product versions.
    One may ask why it is that way. The answer is simple: it’s too costly to update the documentation every time a new feature is integrated. And this is because those documents are publications. They are printed, released on the web and so on.
    We could do much better if we left the path of publication for every piece of information we release.
    Yet, I think we still need to do publications at some point.
    The first point to do a publication would be at “release” of the product. This is because we have a very clear and defined snapshot of the momentary state of the product.
    Afterwards it only makes sense to do that at major cumulative updates.
    The second point to do a publication would be for static products that won’t change throughout their lifetime, like heavy machinery.
    But a golden path? I don’t see one either.

    Reply
    1. Mark Baker Post author

      Thanks for the comment, Alex.

      There certainly needs to be a debut of some sort at product release. But maybe we need to think of it in different terms than publication. Perhaps “launch” would be a better word. The documentation is launched, that is, begins its public career at a certain point after meeting certain qualifications, but is fully expected to continue to change and grow. (This is, of course, much much easier in an Every Page is Page One environment.)

      Software already works this way. It is launched at a certain point (usually as an alpha or beta rather than a 1.0 release) and is regularly updated automatically for the rest of its active lifespan. We are at the point now where constant updates are regarded as a sign of life rather than a sign of immaturity. “No longer being updated” is essentially the kiss of death for a software project.

      I think we need to treat documentation the same way, not merely because we need to keep it in sync with software that is constantly being updated, but also because new information continues to be discovered and developed. Regular updates should be considered part of the natural life of a “launched” information set. “No longer being updated” should mean not that the information is ready to be published, but that the information is dead.

      Reply
      1. Alex Knappe

        While “living” documentation is a goal worth working for, there is still a need to pack all those bits of renewed information back together at some point.
        Speaking in terms of software, this would be something like a service pack. It would just be another “snapshot” of the momentary development state of the documentation, but still.

        Reply
        1. Mark Baker Post author

          Certainly there will be major and minor milestone releases, just as there are with software that is regularly updated. An interesting question on this is whether the doc release numbering follows the software numbering or has its own numbering, and potentially its own major and minor milestones.

          I am inclined to the latter, because software updates are only one of the triggers of documentation updates, and an update with the same version number as a software update tends to indicate that the only content changes relate to the current software changes, which will not be the case for a true living document.

          On “there is still a need to pack all those bits of renewed information back together at some point.” That may be true using most current tools, where full integration of a documentation set is a time-consuming human activity. But in an EPPO/SPFE paradigm, where integration is based on metadata and performed by algorithms, every change, no matter how minor, is fully integrated the moment the change is made, and there is no need for a periodic repacking of the content.
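
          To make that concrete, here is a minimal sketch of the principle (the names and data structures are hypothetical illustrations, not the actual SPFE toolchain): each topic carries metadata saying what it covers and what it mentions, and the links between topics are computed from that metadata at build time, so any new or changed topic is integrated the moment it enters the set.

          interface Topic {
            id: string;
            subjects: string[];   // what this topic is about
            mentions: string[];   // subjects referred to in the topic body
          }

          // Index: subject -> topics that cover it.
          function buildIndex(topics: Topic[]): Map<string, Topic[]> {
            const index = new Map<string, Topic[]>();
            for (const t of topics) {
              for (const s of t.subjects) {
                const list = index.get(s) ?? [];
                list.push(t);
                index.set(s, list);
              }
            }
            return index;
          }

          // Resolve every mention in every topic to link targets, with no
          // hand-maintained TOC. Re-running this after any edit re-integrates
          // the whole set, so there is no separate "repacking" step.
          function resolveLinks(topics: Topic[]): Map<string, string[]> {
            const index = buildIndex(topics);
            const links = new Map<string, string[]>();
            for (const t of topics) {
              const targets = t.mentions
                .flatMap((s) => index.get(s) ?? [])
                .filter((target) => target.id !== t.id)
                .map((target) => target.id);
              links.set(t.id, targets);
            }
            return links;
          }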

          Reply
          1. Alex Knappe

            I guess this is what the clash of cultures is all about (at least in tech comm).
            While documentation solely based on EPPO paradigms may be completely viable in the software/consumer product branch in North America, it is less viable in Europe, where legal authorities enforce restrictive paper output (even for the most obvious products or pure online applications). I won’t even start about China 🙂
            And here’s the problem. Paper outputs (I’m with you that a lot of them should be web only) destroy everything an automated process could do. While you can produce the raw data semi-automatically, I have yet to see a working automation process that doesn’t produce fugly (in terms of comprehensibility and readability) output.
            I think the release cycles of milestone documentation are all about certain triggers, which are most likely different for online/software documentation on the one side and printed documentation on the other side.

          2. Mark Baker Post author

            Well, you can certainly produce paper in the EPPO/SPFE paradigm. EPPO/SPFE is certainly an online-first approach, but it can produce paper as a secondary output, and it does a better job of producing paper than paper-oriented tools do of producing online versions of content.

            You are right, though, that paper output does not support all the useful things you can do online. And there are all sorts of 20th-century regulations that still mandate either paper or paper-world policies and procedures. There are seemingly more of these in Europe than in North America, but they are a problem everywhere. I have yet to encounter one that could not be accommodated by an EPPO/SPFE process, but I certainly haven’t seen them all, and can’t claim that an EPPO/SPFE process could meet all of them.

            I agree that update cycles can be defined in terms of triggers, though not all the triggers can be predicted in advance. But regulated doc processes do specify triggers, and often the triggers they specify are based on the economics of paper rather than the logic of information development.

            In the long run, though, I think it would be a mistake to let your process be driven by these regulations, not least because at some point the people responsible for them will wake up to the realities of the online world and rewrite the regulations. GOL initiatives are already forcing this to a certain extent.

  3. David Farbey

    Hi Mark, I’m flattered that you took my blog article as the starting point for this post. Where I would part company with you is where you assume that every product has a community of users who have constant access to the internet and who can enthusiastically contribute and share their experiences. This may well be the situation for some consumer goods, but I would argue that that is an unrealistic assumption when it comes to goods that are designed for engineering or industrial use. In those kinds of situations product users must rely directly on the information provided by manufacturers, and they do not have access to any sort of conversation with other users, however beneficial that might be. I would of course agree with you that a lot of product documentation is of very poor quality, and technical communicators should be working to improve it, and should be working with customers to do so wherever possible.

    Reply
    1. Mark Baker Post author

      Thanks for the comment, David.

      You are absolutely right that there are products which are either so specialized or so controlled that no effective community exists around them. There are also cases where the users of such products are direct competitors and so will not share information with each other. This certainly does not cover all commercial products. There are large communities around most of the infrastructure of the web and most programming tools, for instance. But there are very definitely substantial numbers of products for which it is true.

      And I would agree also that it is even more important for the people providing the documentation for those products to create living documents that are constantly being revised and updated. If the community is not there to fill in the holes and correct the errors in the published docs, it is incumbent on the documentation team to do it themselves. This is precisely why I feel that tech pubs needs to overcome its publications skew and begin to see itself as curating a living body of information rather than trying to finalize and publish an information set that should never be final.

      Reply
  4. Debbie M.

    Why must it be either/or? Publications still have their place. In a pub, we are saying: at this time, with this release of hardware or software, follow these proven instructions to achieve your objective.

    Sometimes there’s a need for conversations and sometimes there are just commands that work for this application. Writing shutdown -r now has no need for discussion.

    I hope you’re not suggesting we send our end users on a quest, such as: go to Blog A and read the comments on this entry then hop over to Blog B, and read this entry but not the comments, and when you have full command of everyone’s thoughts, opinions, and troll concerns, determine the best way to reboot your server.

    Reply
    1. Mark Baker Post author

      Thanks for the comment, Debbie

      Must it be either/or? No. I am suggesting that we need to correct for a skew in our view that makes us automatically turn to a publications model. Correcting that skew does not mean never using a publication model; it just means using it only when it is appropriate.

      Googling “shutdown -r now” shows that there is, in fact, a significant need for discussion, or, at least, a significant amount of discussion about it — which means people clearly felt the need to discuss it.

      I’m definitely not suggesting that we send our end users on a quest. I am suggesting that our end users go on a quest anyway, and that it makes more sense for us to make ourselves available to them as and when they seek help on that quest.

      I’m not the first to suggest this. John Carroll observed exactly the same thing back in 1980:

      “Learners also often skip over crucial material if it does not address their current task-oriented concerns or skip around among several manuals, composing their own ersatz instructions procedure on the fly.” (The Nurnberg Funnel, p 8)

      “Our learners made many kinds of errors in following seemingly clear instructions in the manuals. They typically created and responded to their own agenda of goals and concerns, not to the careful ordering of steps in a training procedure.” (p 26)

      “Learners, however, don’t seem to appreciate overviews, reviews, and previews. They want to do their work. They come to the learning task with a personal agenda of goals and concerns that can structure their use of training materials.” (p 26)

      In other words, users do not submit themselves to the curriculum we have prepared for them. They have their own short-term agenda that drives their use of the documentation.

      Here is a more recent study that looks at usage patterns between official docs and Stack Overflow, and the use of search in reaching those resources: http://blog.ninlabs.com/2013/03/api-documentation/.

      The behavior that Carroll observed back in 1980 is, of course, enormously enabled by the Web. If readers were self-directed when all they had was a paper manual, they are all the more self-directed today when they have Google at their disposal, and the ability to ask a question and have their specific question answered by someone who has actually done the work they are attempting to do.

      This is where people go for help today, and if we are not there, in a form that works on the Web, we are not going to be useful to our readers.

      On the Web, of course, we don’t own the whole of their attention or the whole of their information experience, but then again, as Carroll discovered, we never did.

      Reply
  5. Alex Knappe

    composing their own ersatz instructions procedure

    You, my dear Americans, are weird 🙂
    How the hell did THAT German word make it into your vocabulary?

    Reply
  6. Myron Porter

    Mark,
    I confess to ambivalence. I certainly agree with your points intellectually, but I also believe (vehemently) that there is something about a physical document (or at least a static file) that speaks to something deep within the core of what it is to be human. I believe this is true for reader and writer.

    A conflict? Hardly. Sometimes confusing? Definitely.

    Regarding your comment that we need to “validate ourselves in other ways”, I would respectfully change it to read “validate ourselves in additional ways.”

    Reply
    1. Mark Baker

      Myron, thanks for the comment. I understand your ambivalence. I don’t think I would go so far as to say that the idea of the book is something that speaks to the core of what it is to be human. Many cultures have existed without the book. But I think it would be very reasonable to say that it speaks to the core of what it means to be modern.

      Books are very old, of course, but I think their cultural dominance goes back at most to the Reformation. This is when, thanks to Gutenberg, individuals could afford to own a Bible, and many Protestants began to describe themselves as “people of the book”.

      Before then, I think, it was the symposium, the council, the thing, the moot, that was at the heart of a civilization. The books were valuable auxiliaries, but discussion and the advancement of knowledge took place where people met together to talk.

      In a very real sense, I think the Web is taking us back to that. I have described the Web before as a virtual colloquium of the whole world, and we can see that it is becoming the place where we meet and discuss and where knowledge advances. This is very clear in how sites like Stack Overflow are becoming the new locus of technical communication.

      I don’t think the book is of the essence of what it is to be human. I think that place belongs to the discussion. And the Web is restoring the discussion to its rightful place at the center of human interaction and knowledge making and keeping.

      But in doing so, the Web is disrupting some of the most basic conventions and values of the modern era, particularly the centrality of the book, and the comfort that it provided as a source of (apparent) certainty and stability of knowledge.

      The book had a way of settling the stomach. The Web gives us the collywobbles. But today, it is where the action is.

      Reply
      1. Myron Porter

        Mark, first thanks for taking the time to reply. Second, bear with me a bit. I swear I am not arguing for its own sake:

        Books precede the modern era (post-Renaissance?). Mircea Eliade argues that poetry is an outgrowth of early mysticism and that we evolved poetic techniques in order to assist in memorization of early sacred science ‘texts’. This goes back to pre-literate societies. Also, naming can be a way of exerting control, or of making things or actions understandable or in some way manageable. For instance, Adam naming the animals in Genesis. The term “people of the book” has a long religious heritage (particularly Islamic) and predates Gutenberg by at least a thousand years. The term indicates those people with a revealed “book” or code of conduct or revelation from God, such as Jews, Christians and Moslems.

        The point is that before there were books, the function itself existed, and that this function has deep roots in the essence of what we are.

        However, I also agree that discussion and community are critical aspects of humanity. They are not more important than the more rigid form of ‘books’, though each has areas of dominance.

        Note the progression from poetry to plays to novels to motion pictures: all co-exist today. Even though motion pictures are the currently dominant art form, perhaps (video) gaming or some other form will take the crown. But I doubt the successor will erase the other forms. Each has its place.

        By the same token, the Web is only disruptive because it is new and evolving, but in reality it is just another expression of our humanity. There will be other new expressions of humanity in the future. That is part of what I meant by attempting to amend your phrase to read “validate ourselves in additional ways.”

        Reply
        1. Mark Baker Post author

          Myron, I agree that sacred books go much further back than the reformation. But those sacred books played a very specific and limited role in those societies (vertically limited, I mean).

          What we saw in the modern era was a tendency to make all books sacred. (There are many who literally consider it sacrilege to burn a book, and some who consider it sacrilege even to dog-ear a book or scribble in the margins.)

          This was prefigured to no small extent by the Medieval reverence for the Greek texts. But while it did eventually become permissible to disagree with Aristotle, the modern era has continued to have a reverential regard for not just religious texts, or the great classics, but for all books. As David Weinberger argues in Too Big to Know, we came to think of knowledge as being shaped like a book.

          I see the Web as being far more fundamentally disruptive than movies or TV or radio. They were all media in the book mold: a one-way communication from the few to the many. The Web restores the primacy of the agora in human affairs. It is a many-to-many medium such as we have never had before on any large scale.

          This ability of the many to talk to the many is taking conventional tech comm practices and shaking them by the scruff of the neck. This is not a medium in which the production of a static one-way monolith has a place. And so I maintain that we have to learn to validate ourselves in other ways.

          Reply
          1. Myron Porter

            Burning books is a dangerous precedent, as is any attempt to erase the past, no matter how distasteful or reprehensible those books may be. (I confess to believing that folding book pages is barbaric. Marginalia can be pure nectar, though . . .)

            Agreed. This is a massive change far beyond radio or television. I fear that we are letting the change follow whatever course it is capable of creating by dint of force rather than taking a more active (planned) role in molding and enriching it as a part of our daily lives. For instance, we should be preparing all people to communicate effectively many-to-many. I’ll term that consultation. Here I am indicating productive communications, not the frequently pointless tweets and posts seen daily. The updating and streamlining of consultation to merge with new technology has many unexplored benefits.

          2. Mark Baker Post author

            Myron, yes, at the moment, the change is a Juggernaut that we at best half understand, and it is sweeping us along, shaping us rather than being shaped by us. We will need to turn that round and start to at least educate people better for this new mode of communication.

            The problem, when one is caught up in the rush of a Juggernaut, is that our first instinct is to try to cling to the old certainties rather than to start to get a handle on the Juggernaut. We need to get past that and start figuring out how to work, and how to educate our children to work, in this new environment.

            One other thing: on reflection I think you have a point about validating ourselves in additional ways. Books are not going to go away altogether (or, at least, I don’t believe they are); they are simply going to have a different role, a role which I don’t think we can begin to fully define yet. So those who in the future write books for good and valid reasons will be entitled to validate themselves on the good qualities of those books.

            But most day to day technical communications isn’t going to involve books, and most tech comm jobs will not involve writing books at all. So, technical communicators will have to learn to work on the Web and to validate themselves according to Web values, not book values.

  7. Dwight

    Thanks, Mark, for articulating the problem so well. I agree with how you’ve defined the problem, but I don’t agree with the direction towards solving it.

    The problem with publications is that, no matter how task-oriented, they are suited for academic study. As you know, studies show that most users turn to a guide only when they’ve run into trouble. Before that point, they’ve been exploring the software, trying this and that, and getting messy in the learning process. When they arrive at the guide, they have to abandon this exploratory mindset for a bookish one. They hate it. They find the experience jarring and frustrating. So they bail, and go on exploring by searching for answers on the web.

    But, as you and other readers here have pointed out, that doesn’t mean the guide should never have been written. People on forums who can answer questions have either read the publication or have taken courses whose material was derived from it (part of the conversation). These gurus approach their learning as academics: they slog through the material to learn how to use the tool. This “works” for them because their motive is entirely different from users. How much they wade through the material can be debated. But I’m sure they do considerably more self-study than most users.

    I agree that a change needs to happen in thinking that manuals are the end solution. But is it realistic to expect that the same person who writes the basic instructions for a tool (somebody has to) must also figure out a way to add more to the conversation? Yes the scope of technical communication goes beyond writing manuals. But in the work-a-day world, with its business requirements as noted earlier, this problem is a lot to place onto the shoulders of writers. It seems to me that solutions to many of the problems with technical communication, which you and others have pointed out, will involve software intervention and management support.

    Reply
    1. Mark Baker Post author

      Thanks for the comment, Dwight.

      I think you hit on something important with:

      When they arrive at the guide, they have to abandon this exploratory mindset for a bookish one. They hate it. They find the experience jarring and frustrating. So they bail, and go on exploring by searching for answers on the web.

      I think this is exactly what happens, and I think it is also important to note that while it may start with individuals looking to the manual, being frustrated and then turning to the web — a problem that could hypothetically be addressed by improving the manual — it progresses rapidly to users remembering their previous frustration with manuals, skipping them altogether and going straight to the Web.

      I agree too that there are gurus (I have used the word “mavens” in the past), who use information differently, and may be the main users of documentation (/2011/12/07/why-analytics-may-mislead/). But I’m not so sure that the maven’s approach is academic. That is, I don’t think they study and then do; I think they do and then study to understand what it is they have done. That may indeed call for a book, in some cases. But I don’t think that book is the “user manual” as we know it today.

      this problem is a lot to place onto the shoulders of writers

      Perhaps, but business does not work by starting with the employees it has and saying, what can we ask of these people? It looks at the problems and opportunities that the market presents and says, where can we find the people that can address these problems and seize these opportunities?

      I think people who are currently working in a traditional tech comm role are going to have a lot of work to do to qualify for the kind of technical communication roles that are going to exist in the future.

      Reply
  8. Steve Janoff

    It sounds like Myron is speaking of something akin to the symbol-making facility that Jung talked about. Same kind of thing that generated the cave paintings. Dialogue can be a form of sharing symbols, which can either be one-way or collaborative. It can be an enriching experience either way. Symbol-making is a personal experience and then sharing can amplify oneself when coming into contact with other people and their own symbols.

    Mark is talking about a kind of fluid communication that isn’t pinned down. Part of the problem may be that documentation has been treated like a product rather than fluid. But its long imprisonment in the book metaphor is now a hindrance, with the Web.

    Also, Tech Comms deals with a form of information that is less deep and meaningful than symbol-making — at least the goal is not to have the same resonance as a great work of art, although if it achieves that, that’s certainly a bonus. But that validates the author, which admittedly is not the goal of Tech Comms.

    It’s very hard to take away the paper mentality, as it’s embedded in so many of us, but I agree that it’s critical to abandon it with regard to the kind of information that should be (and already is) flowing on the Web.

    Richard Saul Wurman talked of the “Age of Also,” so yes, there will be digital downloads and there will also be CDs and DVDs, and yes, there will be books and there will also be ebooks and non-books.

    I think the goals of Tech Comms are much different than the goals of the deeply symbol-making arts, although they share some similarities. It’s the desire to make lasting, meaningful symbols — the validation that Mark is speaking of — that gets in the way of effective Tech Comms. Oftentimes the author is a frustrated creative writer who turns to Tech Comms to (a) make a living and (b) write the Great American User Manual (read: Novel). It’s just sublimation. It started out a little bit that way in my own case (though more emphasis on “a”).

    The day is waning where you can “curl up with a good manual.” You’ll still be able to curl up with a good book, but if you want technical information, you’ll probably have to curl up with your iPad or other portable device, unless you don’t mind sitting in front of a desktop or laptop.

    In terms of generating content, yes, the days of the “award-winning technical manual” are also waning. There won’t be an award for “writing the best Step 3 in a 7-step procedure.” Hopefully there won’t be any 7-step procedures left.

    Reply
    1. Mark Baker Post author

      Thanks for the comment, Steve.

      If you haven’t read it already, I think you would find David Weinberger’s “Too Big to Know” very rewarding. His thesis is essentially that the permanence of a physical book has distorted our idea of what knowledge is and what it means to know, and that the Web is now changing both in fundamental ways.

      Reply
      1. Steve Janoff

        Thanks for the replies, Mark. “Too Big to Know” is now on order – appreciate the reference!

        I can see too I’ll need to study your blog and your posts (and the contributors’ comments) before posting my own comments, as I seem to be reinventing the wheel in my attempts to “catch up” to what you are saying. 🙂

        You can ignore my comments on the TOC issue — that seems to be old news. And after reading your blog on Web vs. book organization, I can see that mind maps won’t work either, as they’re still an attempt to impose top-down organization on Web content. Based on that blog post, you seem to be suggesting that we’d benefit from crafting Web documentation into smaller, more localized versions of Wikipedia. Not a bad way to go. I will study more.

        Appreciate the interchange, keep up the good work!

        Reply
        1. Mark Baker Post author

          Steve, thanks for your kind words. But please feel free to comment whenever the mood strikes you. I don’t expect most of my readers take quite such a studious attitude to my blog (shame on them 🙂 ), so I have no problem addressing the same issue when it occurs in the context of a different post.

          After all, every page is page one.

          Reply
  9. Steve Janoff

    There is also the issue, and you may have dealt with this elsewhere (I’m only beginning to read these posts), that before the Web there was the PC and its desktop metaphor, courtesy of Xerox, “the document company.” Everything revolves around the document, the book (large document with many pages), the file, the folder. Even a web page is a page, and a help file is a document. (Every page is page one.)

    Mobile devices may be moving us away from this a little bit. But laptops and desktops are ever-present and many of us develop our content on these devices, using software and hardware tools and systems that embrace the desktop metaphor and the paper paradigm. Obviously most evident in PDF and print output, but even in online help systems and wikis.

    Microsoft “Office” — we’re still chained to an office inside an electronic box.

    So we produce our content using such tools, and we mostly consume our content using such tools.

    To get rid of the “publication” model we have to have someone develop new tools that we can use to create new kinds of information.

    The technologies came out of the publishing world too: SGML, DITA, and the like.

    Reply
    1. Mark Baker Post author

      Yes, I think there is a connection between the book model and the desktop metaphor of computing. Both are about a small personal environment that is small enough and simple enough to be managed by hand. I have suggested in a recent post that the lack of an economic model for authoring tool vendors to support anything other than the desktop model is really holding tech comm back. /2013/01/03/we-need-a-new-economic-model-for-tech-writing-tools/

      Reply
  10. Pingback: Writer, what’s your validating artifact? | The Smith Compound @ Wordpress

  11. Steve Janoff

    Mark, one other thought that suggests itself with this, and apologies if you’ve already dealt with this (I’ve tried to read your posts at least from this year, plus most of the comments — great stuff, all).

    First, I may have my history wrong, but I believe Tim Berners-Lee created the Web at least in part to allow physicists to share their papers and research over a network. It started as a publishing model, with pages and documents being the items presented. Of course it’s gone far beyond what a lot of people envisioned, and we see it as the Wild Wild Web because it’s so vast and untamed.

    But I doubt if that can go on forever. I’m sure the Web can grow to at least one or two orders of magnitude more than it is today, but eventually it will have to slow down and sort of congeal, which I think it has already begun to do. It is like the Big Bang of information. We’re still in the explosion part. Eventually (regardless of the fate of the universe), the Web will need to contract. It already seems to hinge around large, stable sites, like Google, Amazon, Yahoo — landing points or starting points like this. It’s like the U.S. Wild West of the 1800’s — vast and lawless, and then once it became settled, stable and law-abiding (in general). It blossomed into a number of significant cities west of the Mississippi.

    The random nature of a lot of Web searches, with their instant gratification, seems more a trend of the times, and might not necessarily be a permanent behavior. Ecommerce involves a lot of bouncing around but there’s also a lot of stability with a few select sites. Software vendors and others who develop technical documentation may not have caught up yet to the consumers who use them and are bouncing around the Web.

    It may be that as more and more sites develop and stabilize and become harbors of the kinds of information we’re all looking for, the random searches will slow down.

    There’s a limit to what the Web can do and be. There are only so many people on the planet, only so many companies, only so many (meaningful) web sites that can be created. After a while the law of competition will win out. You’ll have the same kinds of online consolidation and mergers/acquisitions that you have in the real world. Some of that has already happened, but it will happen in a much bigger way. Every online company has *some* brick-and-mortar presence, even if it’s just a bunch of people networked around the world. They are real. So M&A’s will involve real people and brick-and-mortar companies (brick-and-mortar in the sense of what is physical about the corporation, not necessarily a “store”).

    But getting back to the Web itself and its birth and life as a “publication” channel, it retains much of the “published” world of print and paper. You’re still consuming information in “paper” form, just online rather than on physical paper. Web pages are pages. They’re connected differently.

    So you have the presentation model, which still has echoes of the print world, and the “release model,” let’s call it, which is more of what you’re talking about. Online information should be updated as soon as the correct information is available, although there are issues of tracking, reviewing, and accountability that have been talked about. The writer often doesn’t have the authority to simply update a web page without getting some sort of approval from management as well as signoff from the technical people. The writer is often not qualified to make those decisions him- or herself.

    So the release model can use a lot of the kinds of changes you talk about. But the presentation model is troublingly familiar. That has not really changed.

    We need something as revolutionary as Horn’s Visual Language, with its tight integration of words, images, and shapes — although that is not enough and is not what I had in mind.

    Minimalism was a great contribution but doesn’t feel as revolutionary as Horn’s work.

    There needs to be a different way of presenting information than the publishing model, even on the Web. I know there are people working on such things, some of which appear similar to the “mind map” software with its intricate links. But whatever it ends up being, it will be very different from the publication model.

    So even our tools for publishing to the Web and our Web information products follow the publishing paradigm. I know you’re saying we should get rid of the “publication date” model, but that feels like only part of the story and I’m not seeing changes in the way information is presented online to match what users are looking for when they get on the Web.

    If this effectively says the same thing you’ve been saying, well, then we are in agreement!

    But I don’t see the publication model (or “publishing model”) going away for a long time. We have already all become accustomed to interacting with the Web via “pages.” As I say, mobile devices might change our views, but I don’t see any great paradigm shift as of yet. In a sense, the Web is just an advanced form of the papyrus, adding interconnectivity so your village can read papyruses from other villages on the other side of the world.

    We have new experiences on the Web, but our needs as human beings for information have not changed, I don’t think.

    Oh, we also have the fact that cyberspace is an illusion, not a real thing existing in the real world — it is a mental model that exists only in our minds, and we have collectively agreed on a basic structure of this mental model, or at least I believe that’s our experience with the Web. We’re all in agreement about what it means to “travel” on the Web. “Where do you want to go today?” Nowhere, it turns out. Or at least nowhere in real space. It’s kind of like when you’re in your car driving, you can drive for a thousand miles but you’re sitting in the same spot the whole time. Like watching TV from the sofa, except for a few adrenaline spikes here and there as drivers cut in front of you or depending on the road conditions.

    By the way, that could be the difference in holding a book in your hand vs. the Web: you have a connection with the physical world. But books won’t be going away any time soon, and that’s a good thing for those of us who love books.

    But electrons have another mission and I have yet to see a truly revolutionary presentation of information on the Web, although I’ve seen one or two examples that come close.

    In fact, I would say this: It is not necessarily *required* that we have a revolutionary presentation model on the Web, that deviates from what we’re familiar with from the publishing world. We could easily go forward the way things are, in terms of what we see on the Web. The release model needs to change as you indicate; but unless someone truly comes up with a revolutionary presentation format, and makes a good case for its adoption, we’re probably going to continue to see the Web be an online mapping of the publishing world. You yourself pointed out, in one of your comments I believe, that mind-mapping software, which is pretty cool and different from the typical publishing model, hasn’t taken hold. Perhaps, as you say, it’s because it is still too cumbersome.

    You know, software itself is going to have to move from the desktop metaphor-based GUI before we’ll begin to look at truly radical departures from what we’re seeing now on the Web. Web interfaces will need to change. *Then* perhaps we’ll see radical new kinds of documentation. But for now, it’s pretty much the same old same-old, with a few different twists.

    Reply
    1. Mark Baker Post author

      Steve, you are casting a wider net around “publication” than I was. That’s legitimate, because whenever you write something down and show it to others, you are, in a sense, publishing it (making it public). Every time you tweet, you publish.

      I was using publication in the more limited sense (the sense in which putting your short stories up on your website is not “publishing” and does not entitle you to call yourself a “published author”). Publishing, in this sense, is about a long period of preparation, usually of a lengthy text, followed by scrupulous vetting to arrive at a fixed and final form, which is then accepted by an independent authority (the publisher) and distributed widely with the intention that it will not be updated for some considerable time. This is what we mean when we talk about a “published book”. It is the model that tech pubs has worked under for a long time, and it is the skew toward this model that I am saying we must correct if we are to be effective on the Web.

      But yes, content on the web is still linear written text (plus videos, etc.). It is revolutionary in its connectivity, but not really revolutionary in its presentation of the connected parts. It is text, videos, graphics, animations — all things we have seen before.

      I don’t believe there is some new form of basic presentation brewing. I think written text, drawings, and pictures are all fixed technologies that have been with us as long as there has been civilization, and will be with us as long as civilization remains.

      What has been revolutionized many times, and may be revolutionized many times more, is the packaging and distribution of these fixed technologies. The particular limits of our packaging and distribution media profoundly affect how we use these fixed technologies, which is why the Web is working such a revolution in our lives, but it does not change the basic presentation forms, which are far older than paper or printing presses.

      Reply
  12. Steve Janoff

    I overlooked something critical.

    Let me cut to the chase: The problem is not with the topic but the TOC.

    The Web affords a different kind of navigation from the book due to (a) searching and (b) linking. “Search and click” is the “bouncing around the Web” that I was referring to.

    The TOC is a great organizing structure for books, but we’ve taken it and moved it wholesale to the Web, and it doesn’t work nearly as well there. It’s weak. It limits us and constrains us.

    Then we go the other way with HATs and we create a collection of topics held together by a book-world TOC, and we publish to PDF and now we have a book that’s not really a book. And we have online help or Web help that has the anachronistic TOC. So both formats are diluted, and less effective than they can be. (The PDF loses the narrative elan of the aftermarket software manual, for example.)

    It may be that a better organizing structure for Web topics is something like a mind map, where the nodes are either keywords or fragments from the topic title, and each one links directly to its topic. That way you could bounce around within the mind map something like the way you bounce around on the Web. The challenge is then how to make the mind map stay visible the way you could keep a TOC visible in a left navbar, for example. Maybe the mind map could be “always on top” and show or hide (fade, become clear) by a mouse movement or other control, as you need it.

    Adobe has tried to replicate Web navigation (search and link) in the print world via Acrobat, and that’s fine, but there you’re limited to Web-like behavior (and a poor substitute at that) within one or a collection of PDFs. That’s a far cry from that limitless feeling you have when you’re joyriding on the Web.

    This also solves the problem of the battle between book and Web. It’s not that a decision is being made between book and Web — both will prevail, it’s just that the organizing structure for one is not suited to the other and vice versa. They are different experiences. (You can’t “flip through a book” and “turn it over in your hands” on the Web, which is a source of endless frustration in dealing with ebooks.)

    Reply
  13. Steve Janoff

    Thanks for your replies on the various comments, Mark.

    Your posts spark a thought about a kind of documentation that I’ve seen in use in other contexts but that I don’t recall seeing used in typical web-based documentation, for example for consumer software products.

    You mention Wikipedia a lot as a prototype, although I realize you don’t intend it as a be-all/end-all and you don’t advocate slavishly following their own guidelines. But obviously it is a useful source. As a side note, Wikipedia reminds me of the great multi-volume print encyclopedias of old, specifically Britannica and the World Book (I had the latter as a kid), where you would have these rich articles with lots of text and plenty of dazzling pictures, and many references to other articles in the multi-volume set. (The editors at Britannica, under Mortimer Adler in the 1950s I believe, also came up with a scheme that bypassed the alphabetical arrangement, so I would make that one comment to the criticism you made elsewhere about encyclopedias — although EB didn’t make it obvious in the set itself, outside their Propaedia, and it was a difficult thing for the reader to figure out for themselves — and difficult to follow even with guidance. Mortimer Adler’s “A Guidebook to Learning: For the Lifelong Pursuit of Wisdom” is a phenomenal book outlining the story of how they came up with that structure.)

    Wikipedia does a nice thing with maps, for example, especially country maps. If I look up Germany, I find that within the main article there is a map with regional “hot spots.” If I click on the hot-spot for Bremen, up pops the main article for Bremen. It’s a simple link but a graphic link rather than a text one (although they do link the text name and the associated icon, in the actual graphic).

    So I can see this applied in a documentation context for example by someone documenting heavy machinery — a giant earth-mover, let’s say (something I’ve never documented so I could be way off). Suppose they have a schematic or graphic of the equipment, and as you roll your mouse over each section of the diagram, the corresponding area lights up as an animated hot spot, and you click on it and up pops a Wikipedia-like article on that part of the machine: “Engine Compartment,” for example. And that contains just about everything you want to know about what’s in that machine’s engine compartment, and about the engine itself: specs, parts, lots of text, additional diagrams, maybe a drill-down diagram, and then any procedures you’d want to perform would be within easy reach via links: how to change the oil, how to remove and replace the alternator, that kind of thing. And each sub-part would have its own Wikipedia-like page within this help system. Now I know this kind of contextual linking is already being done in industry but I haven’t seen anything where the result is a Wikipedia-like topic page (article).
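
    Just to sketch what I mean (the names, URLs, and pixel regions below are purely hypothetical, not any real product’s help system): the schematic would carry a list of hot-spot regions, each pointing at its article, and a click would resolve to whichever region contains it.

    interface HotSpot {
      articleUrl: string;   // Wikipedia-like article for this part of the machine
      x: number;            // top-left corner of the region, in pixels
      y: number;
      width: number;
      height: number;
    }

    const engineSchematicHotSpots: HotSpot[] = [
      { articleUrl: "/articles/engine-compartment", x: 0, y: 0, width: 200, height: 120 },
      { articleUrl: "/articles/hydraulics", x: 200, y: 0, width: 160, height: 120 },
    ];

    // Given a click on the schematic, return the article to pop up (if any).
    function articleForClick(clickX: number, clickY: number): string | undefined {
      const hit = engineSchematicHotSpots.find(
        (h) =>
          clickX >= h.x && clickX < h.x + h.width &&
          clickY >= h.y && clickY < h.y + h.height
      );
      return hit ? hit.articleUrl : undefined;
    }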

    I’m guessing that such a thing could be hosted locally as an internal web site or on the company’s (secure) intranet. (It’s questionable what’s “secure” anymore but you know what I’m getting at here.)

    But we’ve all already seen this kind of navigation in sci-fi movies, Star Trek, and such — you know, “intruder alert,” and a schematic of the ship comes up on the screen, with the affected area lit up and animated (hot spot), then you click on it and you drill down into that area with an encyclopedia-like entry on that compartment coming up on the screen next to the image. Of course in the movies they like to rotate everything in space for the dramatic “Wow” effect. If I remember, “Minority Report” had some similarly fancy navigation through an information space, but using gestures rather than point-and-click (wasn’t it using swipes before the iPhone and iPad?). There was also another movie a few years back with a human avatar guiding the user through a virtual library.

    This kind of idea has been around a long time.

    So in theory this approach would be straightforward to apply to hardware or system documentation: anything with a physical device where location within the device is significant. It would not be as easy to apply, however, to a software system, since that is such a nebulous thing.

    At any rate, here’s a kind of anti-climactic question: Do you feel that DITA would be able to handle such a thing? I’d think that in this “local Wikipedia” version of product help, you’d want to have the content stored in smaller chunks than an article, obviously, so you could assemble everything on demand, and then update the pieces as needed, but of course, store it once, and use it in multiple places. The design of the whole thing seems like it would be the biggest challenge, while filling it with content seems like it would be the secondary challenge.

    Those are some thoughts. I will say that this kind of documentation seems expensive, but I could be wrong. And I’m not convinced that Wikipedia has the lock on the ultimate Web-based documentation style, but it’s certainly a challenging notion to think of how such a thing could be applied to user assistance. I think the biggest challenge might be in figuring out where you would incorporate how-to, procedural topics within this system, especially to avoid just recreating a Table of Contents, which as you indicate is not a good unifying structure for Web-based material.

    I welcome your feedback on any of this. Thanks again for the interchange. You’re making us all think in a different way, which I think is great.

    Finally, if you can point us to any examples of Wikipedia-like documentation on the Web, especially for things like consumer software, and especially with how-to topics, that would be great. Maybe I’m missing something obvious. So far mostly what I see are the tri-pane help sets, ported to the Web, that you have decried (and with good reason).

    Reply
  14. Dwight

    But I’m not so sure that the maven’s approach is academic.

    Yes, I think yours is a more accurate description of their approach.

    That is, I don’t think they study and then do; I think they do and then study to understand what it is they have done. That may indeed call for a book, in some cases. But I don’t think that book is the “user manual” as we know it today.

    But the user manual works for mavens anyway because their needs are different. To draw a parallel, there are people (very few) who read dictionaries, cover to cover. They do it to gain an encyclopedic knowledge of the language. Dictionaries are wonderful reference guides, but I can’t imagine a worse format for committing words to memory. Yet, in spite of that, dictionaries work for these people because the essential elements are there: accuracy, consistency, logical organization, and clear language. A better manual for learning words could be written, one that would save these people some time, but it would appeal to only that minority.

    I think people who are currently working in a traditional tech comm role are going to have a lot of work to do to qualify for the kind of technical communication roles that are going to exist in the future.

    Yes, but the core content of manuals — the overviews, concepts, and step procedures — will be part of the conversation you mention for a while, and somebody has to write it. The core content is the baseline information that the rest of the conversation is built on. Manuals may not be useful to users, but they’re useful to mavens and course designers, and I don’t know that those people will ever be removed from the conversation.

    It may be that companies will look for tech communicators who can write the core content and extend it to the rest of the conversation. But I think the writing aspect will at least be the bigger part of that skill set.

    As for improving the user manual, I’m not sure any improvement will get users to use it. Users will have no truck with self-study. To draw them in would take a completely different approach in tech communication, as you’ve mentioned. I imagine it might involve various media, more visual representation of information, and a mix of structured and unstructured learning. Plus some necessary holes in the system so that users are barely aware they’ve engaged in self-study. Or maybe it will take something else entirely. In any case, at some point, users will wind up at Step 1 of a procedure that’s pretty much the same as it was written for the core content. The difference is that, when they get there, they’ll feel sure they’re in the right place (probably because somebody in the conversation told them). They’ll press on knowing why they’re doing that procedure and what they must do next. And if the system works, they’ll do this with a vague but inspired notion of their potential with that tool.

    Either that or applications would have to be radically changed from how they are today. And that would be expensive.

    Reply
