We Need a New Economic Model for Tech Writing Tools

By Mark Baker | 2013/01/03

Tom Johnson’s correspondent, Sam from Canada, asks whether tool vendors are more to blame for the slow pace of change in tech comm than tech writers themselves:

Hi Tom,

I’ve been enjoying your posts along with Mark Baker’s. You both have good points about technical writing trends. I could be totally wrong, but maybe it’s not the tech writers that are resisting change. Maybe it’s the companies making the tools/money that are resisting change.

I don’t think the problem is so much that the tool vendors are resisting change. Tool vendors need a certain amount of change in order to create a reason for people to buy upgrades. But vendors also need, and therefore support, changes that provide a viable economic model for creating and selling software. They won’t support a change if there is not a viable way for them to make money by supporting it.

This is a problem across the software world. To sell a product, there needs to be a degree of proportionality between the price the customer pays and the value they receive. This means that there has to be some correspondence between the price of a good and the user’s level of utilization. People who only drive 500 miles a year will likely not own a car because their level of utilization is not high enough to justify the cost. They will rent or take taxis instead. On the other hand, people who drive hundreds of miles a day will quickly wear out their car and require a new one. There is obviously a wide range of utilization for different car buyers, but there are definite limits, and within those limits, car companies can make a profit building cars people are willing to buy.

A piece of computer code, on the other hand, runs incredibly fast. And Moore’s law means that the same code will run twice as fast today as it did two years ago. That’s equivalent to a two-year-old car going twice as fast on half as much gas as it did when it was new. For many computing functions, a single server machine running a single piece of code can serve the needs of an entire corporation.

There is no proportionality here. If you sell that software to a company of 5 people and a company of 50,000 people, it will meet both their needs. But if you price it for the 50,000-person company, the 5-person companies won’t be able to afford it, and if you price it for the 5-person companies, the 50,000-person company will essentially get all the functionality it needs for 50,000 people at the cost of 5. Either the vendor prices themselves out of the market or they leave huge amounts of money on the table. Either way, they can’t make a living.

What the software industry needs, therefore, are ways to restore proportionality between price and value. There are many approaches to this, including straight-up per-head licensing, but each has its problems. A classic problem with per-head licensing, for example, is found in collaborative tech comm environments, in which many people contribute a little bit to the information set, but a few tech writers work on it all day. Per-head licensing in this situation means either that casual contributors are paying too much for their occasional use, or the full-time tech writers are getting it for a song.

That gets you into complex schemes with different licence levels for different levels of access, all of which are hard to figure out, frustrating to users, and subject to all sorts of hacks and workarounds by which the customer tries to circumvent some awkward or expensive part of the licensing scheme. (Designating one employee to do all rendering so you only need one copy of the expensive rendering software, for instance.)

This also, by the way, is one reason why so many companies want all document authoring to be done in Word. They have paid the corporate licensing fees for the Office suite and they want to get full utilization from it.

This issue of proportionality is why so many vendors prefer to sell software that is desktop intensive. If the user has to be performing complex operations on their individual desktop machines, then the problem of proportionality goes away. Or at least it is obscured. The user’s PC spends most of its time waiting for keystrokes. Most of its computing power is idle, even when the user is typing away furiously. The power of the machine to perform millions of operations per second is blunted by the inability of the user to type more than a few hundred characters a minute.

The economic model of tech pubs software demands that vendors create products that glue us to the keyboard. Image courtesy of stockimages / FreeDigitalPhotos.net

As long as the user is kept busy in an input-response loop, proportionality is maintained. A thousand users can’t take advantage of the reserve power of a single machine because they can’t attach a thousand keyboards and a thousand monitors. Not, at least, if the software is running on the desktop. Networks allow a thousand users with a thousand keyboards and a thousand monitors to use the power of a single server machine, as long as their operations are not too network-intensive. If a high degree of graphical interactivity is required, the software can’t run over the network and still be responsive on the desktop. Highly interactive GUIs therefore help maintain proportionality, at least for now. Ever-increasing bandwidth threatens this, leaving vendors with few options other than to sell software that only runs locally.

Not surprisingly, then, vendors love desktop publishing. Desktop publishing is the perfect example of the highly graphical, highly interactive kind of application that demands that each writer spend all day constantly interacting with an individual copy of the software running on an individual machine.

“Everybody will need a copy of X.” That is music to a vendor’s ears. That is what drives them to create tools such that everybody will need a copy of X. It is why, no matter how hard technical communications tries to move to structured writing, the vendors keep wrapping it back up in desktop publishing’s clothing. They have no choice. Real structured writing would destroy the proportionality inherent in the interactive, graphically intense, desktop-centric world of desktop publishing. We are not going to get vendor support for any other model until someone comes up with an economic model that makes it viable.

There are, of course, all kinds of software that do not work this way. There is software that runs in the background on servers and over networks, using every bit of computing power available to it. It is, among other things, the software that runs the Web. And there is a reason that so much of it is open source software. There is also a reason that many commercial software companies have now jumped on the open source bandwagon, and why they contribute so much to the specification, support, and development of open source software. They recognize that for many essential computing functions, there is no economic model for commercial software. Yet these functions create the infrastructure necessary for commercial applications to run. They recognize that it is in their interest to contribute to the creation of conditions in which it is possible to develop and sell commercial software, even if that means giving a great deal of software away for free.

This is not to say that all software that runs on a network has no proportionality between price and value. The key, once again, is interactivity. Thus there is a considerable commercial market for content management systems, though it exists side by side with an equally robust market of free content management solutions, such as WordPress, into which I am now typing this post. WordPress certainly has interactive features, but interactivity is not really essential to what it does. I could just as easily compose my posts offline and upload them to the server. It takes a more complex kind of interaction to create proportionality in content management systems.

To create that proportionality, a system needs to require the user to interact with multiple files or multiple objects at a time, and to be in constant contact with the server while they work. It requires the server, in other words, to project itself onto the desktop. Thus DITA is the answer to a CMS vendor’s prayer.

DITA breaks content up into hundreds of separate files, including topics, maps, and other assorted supporting files. That, in itself, would not create an economically viable model for a CMS vendor if the user only needed to interact with one of those files at a time. As with WordPress, you could easily create the files offline and upload them, destroying proportionality. But DITA doesn’t work that way. DITA demands that the user have access to many files at once:

  • The conref reuse mechanism allows one file to be brought into another by reference, so if the writer wants to see the referenced file in place as they write, they need both an editor that can request it from the CMS and a CMS that can respond in real time.
  • The map mechanism for organizing and assembling (and reusing) content demands that if the writer wants to see the document they are assembling, they need to have live access to the map file as well as all the files referenced by the map.
  • The linking system, both the direct hard linking and the indirection of linking through maps, demands access to both the maps that define link relationships and the files that are to be linked to.
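
To make that file fan-out concrete, here is a minimal sketch, in Python, of what a DITA editor or CMS client has to do before it can show a writer an assembled document: walk the map, follow every topicref, and then chase every conref in the topics it finds. The element and attribute names (map, topicref, href, conref) are standard DITA; the flat local-file layout and the function names are assumptions made purely for illustration.

    import os
    import xml.etree.ElementTree as ET

    def files_needed(map_path):
        """Collect every file a writer's tools must reach to render one map."""
        needed = set()

        def add_topic(topic_path):
            if topic_path in needed:
                return
            needed.add(topic_path)
            try:
                topic = ET.parse(topic_path).getroot()
            except (ET.ParseError, OSError):
                return  # broken or missing reference; a real tool would report it
            # Any element may carry a conref pointing into another file.
            for elem in topic.iter():
                conref = elem.get("conref")
                if conref and not conref.startswith("#"):
                    target = conref.split("#")[0]
                    add_topic(os.path.join(os.path.dirname(topic_path), target))

        base = os.path.dirname(map_path)
        for topicref in ET.parse(map_path).getroot().iter("topicref"):
            href = topicref.get("href")
            if href:
                add_topic(os.path.join(base, href))
        return needed

    # For example: print(sorted(files_needed("userguide.ditamap")))

Even this toy version shows why the writer’s environment has to be able to lay hands on the whole web of files at once, which is exactly the kind of continual, interactive access a CMS vendor can charge for.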

The more complex the set of relationships expressed and managed through maps, the greater the demand for a live and continual interaction with the CMS. The production of Frankenbooks, in particular, because of the number of files involved, and the complexity of their organization, requires the highest degree of live connection to the CMS. It is thus in the interest of DITA CMS vendors to encourage the production of Frankenbooks rather than Every Page is Page One topics. Little wonder then that, however much DITA advocates may (rightly) disclaim any necessary connection between DITA and Frankenbooks, DITA processes often produce Frankenbooks. Frankenbooks are in the economic interest of DITA CMS vendors.

(We should note that vendors have embraced DITA in a way they never did for DocBook. The reason, I believe, is that DocBook, with its more monolithic document structure, never presented the same kind of economic opportunity for vendors as DITA does. Some DTP tools have provided basic DocBook support, but they have never promoted it or advocated for it as they have with DITA.)

It is thus in the economic interest of all vendors in the tech comm space (and in many other spaces) to keep us glued to the desktop, to keep us working in highly graphical, highly interactive environments. The problem is, this is a very inefficient way to work. Content management systems, it should be noted, do not manage content. They facilitate human beings managing content. All the management work is actually done by human beings, interactively, through a desktop interface. Again, the desktop interactivity is necessary to maintain the proportionality that the commercial model demands.

Still less do CMSs support content automation. There is, to be sure, some support for layout automation, but that is not really a CMS function, and in many cases structured writing systems are constructed to offer the writer a palette of elements with different layouts attached, plus a WYSIWYG editing environment, which means that the writer is still effectively doing the layout, albeit with a restricted palette and few options to override. Some DITA authoring tools, like FrameMaker, essentially put the author back in the familiar desktop publishing environment, with all the familiar desktop publishing responsibilities, but with the added desktop responsibility of managing reuse and linking. The writer is actually performing more functions interactively and by hand, not fewer.

But real content automation is largely lacking. There is no support for automatic aggregation and organization of content, and no support for automated linking. We are still firmly in the desktop publishing mode, and there we shall remain as long as the current economic model of tech comm tools is maintained.

I want to emphasize that I am not accusing the vendors of malfeasance here. Vendors’ products must follow a viable economic model. Those who come up with products that don’t have a viable economic model will simply go out of business, and we will be left with the ones that do have a viable model. The only viable economic model that we seem to have for COTS tech comm software is the desktop publishing model, and so, by the logic of the markets, only vendors that use that model are available in the market.

We can’t look to the vendors, therefore, to break us out of this model and move to a more productive model for content development. If we want a new economic model, we have to change our buying behavior. Economic models are driven, in the end, by buying behavior. If we want different tools, we have to start buying differently. We have to start having a very different attitude to how we tool our technical writing processes.

This does not mean that we have to start building our own tools from scratch. But it almost certainly means that we will need to start taking more responsibility for designing and integrating our solutions. If we want automation, if we want to hand off processing from people to machines in a big way, we are not going to be able to buy a single pre-integrated solution from a single vendor, because there is no economic model that would allow a vendor to make money selling that kind of system. Vendors need proportionality. They need butts in seats, eyes glued to screens, fingers on the keyboard. Vendors will also go to great lengths to inspire you with fear at the very thought of integrating your own solution.

They are not without reason in these warnings, either. As Joe Gollner points out, implementing content technologies is hard because of the amount of integration involved. Integration is not easy. On the other hand, the pre-integrated systems that the CMS vendors will sell you provide only a trivial level of integration, one that is really focused on keeping you in desktop publishing mode, locked in a model of individual, artisan, desktop productivity. Trying to do any real integration behind, or on top of, such systems turns out to be genuinely difficult, because nothing about them is designed to support it.

The fundamental problem is that we don’t find in tech pubs the kind of automation culture, the kind of integration culture, the kind of tool making culture, that you will readily find in development or IT. (It is, incidentally, why we tend not to be able to frame effective rebuttals when IT waltzes in and declares that our content management needs can be met by their existing systems.) Until we get out of the tool mindset of people who write business documents, and into the tool mindset of people who manage and integrate large volumes of critical business data, we are going to get the vendors we deserve.

So yes, Sam, the tool vendors are resisting the changes we need, but it is fundamentally our fault because we continue to have a desktop attitude to process and a desktop attitude to tools. We create a market in which only the kind of tools we have now provide a viable economic model for vendors.

10 thoughts on “We Need a New Economic Model for Tech Writing Tools”

  1. Daniel D. Beck

    You’re absolutely right that the mindset needs to change. I’ve adopted the mindset that the tools I use are just as important as the content I write, and it has benefited me greatly. That said, I think you’re perhaps too dismissive of the importance of open source and free (as in freedom) software (and, by the way, there are business models for open source, such as support, consulting, multi-licensing, or development for hire). The documentation I work on day in and day out is managed using an open source tool, Sphinx, and tools I’ve developed in-house (either alone or with my developer colleagues). Open source software doesn’t really solve vendors’ problem of value proportionality, but it does solve it for the software’s consumers and creators, in that consumers can easily match their costs (specifically, time spent contributing to the software) to the value they get from the software. Plus, contributing to open source software often has non-linear benefits that don’t come with purchasing: your organization may put only a small amount of effort into the software, but those contributions may prompt additional contributions from the community (e.g., bug fixes) that your organization gets for no additional outlay of time or money.

  2. Mark Baker Post author

    Hi Daniel. Thanks for the comment. I agree entirely about open source and the viability of the open source business model — give the software away and live on service and support revenue (or add-ons). It works very well for companies like Automattic (WordPress) and Canonical (Ubuntu). I actually decided not to talk about it in the post in the interest of length, but it is certainly an important consideration.

    The question is whether it can work for a smaller market like tech pubs. Most users never pay you a cent, so you do need volume to bring in the revenues.

    But a tech pubs organization with a more integration-oriented attitude to tools can and should certainly make use of open source tools — just like every other integration-oriented culture does.

    What I would really like to see in tools like Sphinx is that they stop trying to be standalone end-to-end solutions. Be a parser that extracts information from source and outputs the results in XML. Then allow users to integrate authored content from other XML sources, combine the two, and generate output using a separate XML publishing tool chain.

    In other words, be a piece that integrates well with other pieces rather than trying to stand alone and encompass all the structured authoring and output options in a single tool. That is how a culture of integration should look at a problem.
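
    As a rough sketch of that division of labor, a small, standalone extractor (invented here, not something Sphinx provides in this form) might pull function names and docstrings out of Python source and emit them as plain XML for a downstream merge-and-publish chain to consume. The element names are made up for the example.

        import ast
        import sys
        import xml.etree.ElementTree as ET

        def extract_api(source_path):
            """Parse a Python module and emit its functions as a simple XML stream."""
            with open(source_path) as f:
                tree = ast.parse(f.read(), filename=source_path)
            module = ET.Element("module", name=source_path)
            for node in ast.walk(tree):
                if isinstance(node, ast.FunctionDef):
                    fn = ET.SubElement(module, "function", name=node.name)
                    ET.SubElement(fn, "signature").text = ", ".join(
                        arg.arg for arg in node.args.args)
                    doc = ast.get_docstring(node)
                    if doc:
                        ET.SubElement(fn, "docstring").text = doc
            return module

        if __name__ == "__main__":
            ET.dump(extract_api(sys.argv[1]))  # downstream tools merge and publish

    The point is not this particular format but the shape of the pipeline: one small program extracts, another merges the result with authored content, and a third publishes.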

    1. Daniel D. Beck

      What I would really like to see in tools like Sphinx is that they stop trying to be standalone end-to-end solutions.

      Oh, absolutely. We need tools that adhere to a Unix-like philosophy of composition, where one program’s output is easily consumed as another’s input.* With Sphinx, one thing I struggled with was consuming the documentation for use with an open-source search engine. Because the program that goes between Sphinx and the search tool is keyed to Sphinx’s monolithic, styled output, it’s not really of use to anyone else. There’s a lot of effort like that which wouldn’t need to be recreated again and again, if the outputs were more composable.

      * I don’t know that XML is the be-all and end-all format though. As a writer I get it, but as a programmer it makes me twitch. What we need is a common representation for units of content and their relationships, not necessarily a single serialization format. XML has limitations and drawbacks that need not be the representation’s drawbacks. Sphinx and docutils (an underlying Python library) provide an internal representation, but I don’t think it has a specification and nobody else is using it anyway.

      1. Mark Baker Post author

        We need tools that adhere to a Unix-like philosophy of composition, where one program’s output is easily consumed as another’s input.*

        Absolutely. This is why I created/curated the SPFE architecture (http://SPFE.info).

        As an old SGML hand, I also share your discomfort with XML, though in my case it is the writer in me that twitches. The programmer in me is comfortable enough with XML. An architecture of chained programs needs intermediate formats for one program to output and another to input, and being able to use one standard parser for multiple different intermediate formats makes it easier to create those tool chains. As a human readable format, XML also makes it easier to debug those tool chains by examining the messages between the parts.

        For authoring, though, its verbosity and insistence on closing every tag make it cumbersome. SGML was created to enable people to create tagging languages that would be used by people as tagging languages. (Markdown, for instance, could be described as an application of SGML.) XML was created with the assumption that people would never see the tags — it was supposed to be purely a machine format.

        It hasn’t worked out that way. People keep creating and using tagging languages (Markdown, WIKI markup), and people doing genuine structured writing in XML seem to prefer to look at the tags frequently, rather than sticking exclusively to the WYSIWYG view, particularly because in genuine structured authoring, a lot of the markup has no direct WYSIWYG equivalent.

        So, SGML was actually a superior technology for authoring applications. But that ship has sailed, and there seems little point in fighting that battle again on the authoring front. There are bigger fish to fry.

  3. Alex Knappe

    As reinventors of the wheel, we only deserve the tools of a wheel inventor, and we will only ever be catered to with such tools.
    Tech comm is standing in its own way when it comes to moving on from mere tech writers to information managers. As long as we are still writing content, we will only get the tools for writing content.
    How often do you think the sentence “to continue, press button x” has been written? Right, gazillions of times, and it will be written gazillions more times in the future.
    This simple example reveals a major problem we have in tech comm. We are working on our own, no matter how collaboratively we do it, or how many CMSes we have in use.
    We recreate matching content myriads of times, instead of automating and reusing it on a large scale. You may say “but we are using a CMS to reuse as much content as we can.” I say this is an illusion. You recreated thousands of already existing sentences just to feed a system that will one day be outscaled or outdated, just to recreate those sentences again.
    Where we need to head is toward a system that allows automatic generation of text snippets based on handling concepts, preferably stored on the web, fed by sophisticated developer software (CAD programs, SDKs, whatever).
    We need to step away from being text generators and head toward what we essentially should do: create handling concepts for specific audiences, or, simply put, communicate technical understanding to the ones that don’t understand it (yet).
    If we head that way, our tools will also need to change – and I can see lots of viable business models for such software.

    1. Mark Baker Post author

      Where we need to head is toward a system that allows automatic generation of text snippets based on handling concepts, preferably stored on the web, fed by sophisticated developer software (CAD programs, SDKs, whatever).

      I agree wholeheartedly. This falls into the realm of what I have called “narrated data” (see: http://thecontentwrangler.com/2012/10/06/its-time-to-start-separating-content-from-behavior/). Much of tech comm content is not really discursive narrative; it is narrated data. The steps through a GUI are a good example. The prose describing the steps of a procedure is entirely formulaic. The formula is often prescribed explicitly in the company style guide. But human beings should not be executing formulas. That is what computers are for. All the GUI procedures in a doc set, wherever they appear, can and should be generated from a single map of the interface.
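
      A crude sketch of what executing that formula could look like, with an entirely invented interface map and sentence formula, just to show that the prose is a mechanical function of the data:

          # The interface map, the task, and the style-guide formula are all
          # invented for illustration.
          INTERFACE = {
              "file":   {"kind": "menu",      "label": "File"},
              "export": {"kind": "menu item", "label": "Export as PDF"},
              "save":   {"kind": "button",    "label": "Save"},
          }

          FORMULA = {
              "menu":      "Open the {label} menu.",
              "menu item": "Choose {label}.",
              "button":    "Click the {label} button.",
          }

          def procedure(title, control_ids):
              steps = [
                  f"{i}. " + FORMULA[INTERFACE[c]["kind"]].format(**INTERFACE[c])
                  for i, c in enumerate(control_ids, start=1)
              ]
              return "\n".join([title] + steps)

          print(procedure("To export a document as PDF:", ["file", "export", "save"]))

      Change a label once in the map and every procedure that mentions that control regenerates, which is precisely the rewriting that writers currently redo by hand.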

      So, yes, generation of text, not reuse, is what we should be focusing on for all content that is essentially narrated data. But we are not going to get that from the vendors because it removes interactivity from the tool’s interface and thus destroys the proportionality of cost to value.

  4. John Tait

    That’s a really interesting post — thanks.

    I’d be interested in hearing more of your opinions on using developers’ tools rather than tech comm tools to produce technical content.

    Developers’ tools tend to be open source, or at least more likely to use standards and libre licences for components, meaning the tools are designed for productivity and not (just) to keep vendors in business.

    I don’t have access to a DITA CMS (I tried Componize), but I’m struggling to find any reason to want one. Using git and GitHub allows me to work with DITA files on a USB stick offline in the middle of nowhere and transfer the files around between Windows and Linux computers while maintaining complete version control and history.

    git is an open source tool, and GitHub is primarily aimed at savvy open source and commercial developers.

    I use oXygen Author, which is excellent, but it’s interesting how well it’s integrated with the DITA-OT and the DITA for Publishers plug-in, both open source.

    I’m a hobbyist and don’t work (with DITA) on other projects, but GitHub would allow other writers to get involved easily and work on the same project in the same way. (Please finish my book for me!)

    (Note I use Word and Documentum for technical writing in my occupation, which is as productive as it sounds.)

    GitHub and git (and other distributed version control technologies) sound like a DITA CMS vendor’s worst nightmare.

    I’ve also found Emacs org-mode to be the best technical writing and publishing tool there is, even though publishing is not the main focus of this mode of a niche developer’s text tool. It’s been designed for productivity by its own users, and it shows.

    John

  5. Mark Baker Post author

    GitHub and git (and other distributed version control technologies) sound like a DITA CMS vendor’s worst nightmare.

    They would be, except for one thing. Git, and other VCSs, work for programmers, even in large distributed projects, because programming languages (those used for the bones of big projects, at least) use a linker and loader to assemble, link, and organize the code of many different programmers. All the programmers need to know to use each other’s code is the list of symbols (global variables, constants, routine names) to call. They don’t have to know where they are or how to locate them. The linker takes care of that.

    In DITA, however, there is no linker or loader. The work of assembling, linking, and organizing content has to be done by hand. And while there is nothing preventing you from doing the linking and loading by hand on the file system, it gets more and more cumbersome as the size of the project and the number of contributors grow.

    A DITA CMS won’t do linking and loading for you, but it will give you content management tools that make it easier for you to do it yourself. That, essentially, is why people use expensive DITA CMSs rather than simple free VCSs.

    But I think that the content build system should work much more like a software build system. That is, it should have a linker/loader built into it. With a linker/loader in place, using a VCS for large content projects becomes much more viable. This is really the key thing that the SPFE architecture (http://SPFE.info) is designed to achieve.
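
    To make the analogy concrete, here is a toy content linker in Python. It is only an illustration of the idea, not how SPFE or any shipping tool actually implements it: topics declare the names they define, prose refers to those names, and the build resolves names to locations the way a software linker resolves symbols. The topic records and the {link:...} marker syntax are invented for the example.

        import re

        def build_symbol_table(topics):
            """Symbol table: name -> output location, built by scanning the topic set."""
            return {name: t["output_path"] for t in topics for name in t["defines"]}

        def link(text, table):
            """Resolve {link:name} markers the way a linker resolves symbols."""
            def resolve(match):
                name = match.group(1)
                if name not in table:
                    return f"[UNRESOLVED: {name}]"  # in effect, an undefined symbol
                return f'<a href="{table[name]}">{name}</a>'
            return re.sub(r"\{link:([^}]+)\}", resolve, text)

        topics = [
            {"defines": ["conref", "content reuse"], "output_path": "reuse.html"},
            {"defines": ["ditamap"],                 "output_path": "maps.html"},
        ]
        print(link("A {link:ditamap} can pull in content by {link:conref}.",
                   build_symbol_table(topics)))

    With something like that in the build, a plain VCS starts to look much more workable for large content projects, because no human has to keep track of where anything lives.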

  6. Laurie Nylund

    I, too, find your article to be spot on, and, as you point out, valid for many products beyond tech comm publishing software. Kind of depressing, actually. But I think the tool’s cost-versus-value analysis is only part of the story. The cost-benefit ratio of the writer’s time is a big factor in the overall environment as well.

    As long as companies can find writers in India willing to author content for pennies while using “cheap” tools like Word, businesses feel that they are practicing the 80-20 rule quite effectively. The writing is treated as a commodity, albeit a required one, for which no one need pay more than a tiny fraction of its true cost in human time and effort. Until and unless the global labor market levels out, nothing will change.

    1. Mark Baker Post author

      Thanks for the comment Laurie. You make a good point. The alternative to efficient process is cheap labor.

      How do you compete with cheap labor? You increase your productivity. We have got all the productivity out of the desktop publishing model that we are ever likely to get. If we want to increase our productivity further, we need a new model.
