Tom Johnson’s correspondent, Sam from Canada, asks if tool vendors are not more to blame for the slow pace of change in tech comm than tech writers themselves:
I’ve been enjoying your posts along with Mark Baker’s. You both have good points about technical writing trends. I could be totally wrong, but maybe it’s not the tech writers that are resisting change. Maybe it’s the companies making the tools/money that are resisting change.
I don’t think the problem is so much that the tool vendors are resisting change. Tool vendors need a certain amount of change in order to create a reason for people to buy upgrades. But vendors need, and therefore support, only those changes that preserve a viable economic model for creating and selling software. They won’t support a change if there is no viable way for them to make money by supporting it.
This is a problem across the software world. To sell a product, there needs to be a degree of proportionality between the price the customer pays and the value they receive. This means that there has to be some correspondence between the price of a good and the user’s level of utilization. People who only drive 500 miles a year will likely not own a car because their level of utilization is not high enough to justify the cost. They will rent or take taxis instead. On the other hand, people who drive hundreds of miles a day will quickly wear out their car and require a new one. There is obviously a wide range of utilization for different car buyers, but there are definite limits, and within those limits, car companies can make a profit building cars people are willing to buy.
A piece of computer code, on the other hand, runs incredibly fast. And Moore’s law means that the same code will run twice as fast today as it did two years ago. That’s equivalent to a two-year-old car going twice as fast on half as much gas as it did when it was new. For many computing functions, a single server machine running a single piece of code can serve the needs of an entire corporation.
There is no proportionality here. If you sell that software to a company of 5 people and a company of 50,000 people, it will meet both their needs. But if you price it for the 50,000-person company, no 5-person company will be able to afford it, and if you price it for the 5-person companies, the 50,000-person company will essentially get all the functionality it needs for 50,000 people at the cost of 5. Either the vendor prices themselves out of the market or they leave huge amounts of money on the table. Either way, they can’t make a living.
What the software industry needs, therefore, are ways to restore proportionality between price and value. There are many approaches to this, including straight-up per-head licensing, but each has its problems. A classic problem with licensing per head, for example, is found in collaborative tech comm environments, in which many people contribute a little bit to the information set, but a few tech writers work on it all day. Per-head licensing in this situation means either that casual contributors are paying too much for their occasional use, or the full-time tech writers are getting it for a song.
That gets you into complex schemes with different license levels for different levels of access, all of which are hard to figure out, frustrating to users, and subject to all sorts of hacks and workarounds by which the customer tries to circumvent some awkward or expensive part of the licensing scheme. (Designating one employee to do all rendering so you only need one copy of the expensive rendering software, for instance.)
This also, by the way, is one reason why so many companies want all document authoring to be done in Word. They have paid the corporate licensing fees for the Office suite and they want to get full utilization from it.
This issue of proportionality is why so many vendors prefer to sell software that is desktop-intensive. If the user has to perform complex operations on their individual desktop machine, then the problem of proportionality goes away. Or at least it is obscured. The user’s PC spends most of its time waiting for keystrokes. Most of its computing power is idle, even when the user is typing away furiously. The power of the machine to perform millions of operations per second is blunted by the inability of the user to type more than a few hundred characters a minute.
As long as the user is kept busy in an input-response loop, proportionality is maintained. A thousand users can’t take advantage of the reserve power of a single machine because they can’t attach a thousand keyboards and a thousand monitors to it. Not, at least, if the software is running on the desktop. Networks allow a thousand users with a thousand keyboards and a thousand monitors to use the power of a single server machine, as long as their operations are not too network-intensive. If a high degree of graphical interactivity is required, the software can’t run over the network and still be responsive on the desktop. Highly interactive GUIs thus help maintain proportionality, at least for now. Ever-increasing bandwidth threatens this, leaving vendors with few options other than to sell software that only runs locally.
Not surprisingly, then, vendors love desktop publishing. Desktop publishing is exactly the kind of highly graphical, highly interactive application that demands that each writer spend all day constantly interacting with an individual copy of the software running on an individual machine.
“Everybody will need a copy of X.” That is music to a vendor’s ears. That is what drives them to create tools such that everybody will need a copy of X. It is why, no matter how hard technical communication tries to move to structured writing, the vendors keep wrapping it back up in desktop publishing’s clothing. They have no choice. Real structured writing would destroy the proportionality inherent in the interactive, graphically intense, desktop-centric world of desktop publishing. We are not going to get vendor support for any other model until someone comes up with an economic model that makes it viable.
There are, of course, all kinds of software that do not work this way. There is software that runs in the background on servers and over networks, using every bit of computing power available to it. It is, among other things, the software that runs the Web. And there is a reason that so much of it is open source software. There is also a reason that many commercial software companies have now jumped on the open source bandwagon, and why they contribute so much to the specification, support, and development of open source software. They recognize that for many essential computing functions, there is no economic model for commercial software. Yet these functions create the infrastructure necessary for commercial applications to run. They recognize that it is in their interests to contribute to the creation of conditions in which it is possible to develop and sell commercial software, even if that means giving a great deal of software away for free.
This is not to say that all software that runs on a network has no proportionality between price and value. The key, once again, is interactivity. Thus there is a considerable commercial market for content management systems, though it exists side by side with an equally robust market of free content management solutions, such as WordPress, into which I am now typing this post. WordPress certainly has interactive features, but interactivity is not really essential to what it does. I could just as easily compose my posts offline and upload them to the server. It takes a more complex interaction to create proportionality in a content management system.
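To make that concrete, here is a minimal sketch of such an offline workflow, using WordPress’s XML-RPC interface and its wp.newPost method. The title, credentials, and body are placeholders, and exact parameters vary by WordPress version; the point is that nothing here requires an interactive session with the server while the writing is being done.

```xml
<?xml version="1.0"?>
<!-- A post composed entirely offline, then pushed to the server in
     one shot by POSTing this document to the blog's xmlrpc.php
     endpoint. All values below are placeholders. -->
<methodCall>
  <methodName>wp.newPost</methodName>
  <params>
    <param><value><int>1</int></value></param>             <!-- blog_id -->
    <param><value><string>author</string></value></param>  <!-- username -->
    <param><value><string>secret</string></value></param>  <!-- password -->
    <param><value><struct>
      <member><name>post_title</name>
        <value><string>A post written offline</string></value></member>
      <member><name>post_content</name>
        <value><string>Body text, composed with no connection to the server...</string></value></member>
      <member><name>post_status</name>
        <value><string>publish</string></value></member>
    </struct></value></param>
  </params>
</methodCall>
```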
What a CMS vendor needs is something that requires the user to interact with multiple files or multiple objects at a time, and to be in constant contact with the server while they work. It requires the server, in other words, to project itself onto the desktop. Thus DITA is the answer to a CMS vendor’s prayer.
DITA breaks content up into hundreds of separate files, including topics, maps, and other assorted supporting files. That, in itself, would not create an economically viable model for a CMS vendor if the user only needed to interact with one of those files at a time. As with WordPress, you could easily create the files offline and upload them, destroying proportionality. But DITA doesn’t work that way. DITA demands that the user have access to many files at once (a minimal sketch of these mechanisms follows the list):
- The conref reuse mechanism allows content from one file to be brought into another by reference, so if the writer wants to see the referenced content in place as they write, they need both an editor that can request it from the CMS and a CMS that can respond in real time.
- The map mechanism for organizing and assembling (and reusing) content means that if the writer wants to see the document they are assembling, they need live access to the map file as well as to all the files the map references.
- The linking system, both direct hard linking and indirect linking through maps, demands access both to the maps that define link relationships and to the files that are to be linked to.
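Here is that minimal sketch: a hypothetical topic that pulls in a shared note by conref, and a map that assembles topics and defines link relationships through a reltable. All file names and IDs are invented for illustration; what matters is how many files the editor must touch at once to show the writer an accurate picture of their work.

```xml
<!-- installing.dita: to preview the conref resolved in place, the
     editor must fetch warnings.dita from the CMS as the writer works. -->
<task id="installing">
  <title>Installing the widget</title>
  <taskbody>
    <prereq>
      <!-- Pulls in the note with id="power-off" from the topic
           with id="warnings" in warnings.dita -->
      <note conref="warnings.dita#warnings/power-off"/>
    </prereq>
  </taskbody>
</task>
```

```xml
<!-- product.ditamap: previewing the assembled document, or the links
     the reltable defines, requires live access to the map and to
     every file it references. -->
<map>
  <title>Widget User Guide</title>
  <topicref href="installing.dita"/>
  <topicref href="configuring.dita"/>
  <reltable>
    <relrow>
      <relcell><topicref href="installing.dita"/></relcell>
      <relcell><topicref href="troubleshooting.dita"/></relcell>
    </relrow>
  </reltable>
</map>
```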
The more complex the set of relationships expressed and managed through maps, the greater the demand for a live and continual interaction with the CMS. The production of Frankenbooks, in particular, because of the number of files involved, and the complexity of their organization, requires the highest degree of live connection to the CMS. It is thus in the interest of DITA CMS vendors to encourage the production of Frankenbooks rather than Every Page is Page One topics. Little wonder then that, however much DITA advocates may (rightly) disclaim any necessary connection between DITA and Frankenbooks, DITA processes often produce Frankenbooks. Frankenbooks are in the economic interest of DITA CMS vendors.
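To see how the demand escalates, consider a hypothetical map of maps, using mapref (DITA 1.2’s convenience element for referencing one map from another). The file names are invented; the point is that building or previewing the assembly means touching every file in the tree:

```xml
<!-- library.ditamap: each submap pulls in further maps and topics,
     so the writer's view of the whole depends on a live, continual
     conversation with the CMS across the entire tree of files. -->
<map>
  <title>Complete Product Library</title>
  <mapref href="install-guide.ditamap"/>
  <mapref href="admin-guide.ditamap"/>
  <mapref href="api-reference.ditamap"/>
</map>
```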
(We should note that vendors have embraced DITA in a way they never did for DocBook. The reason, I believe, is that DocBook, with its more monolithic document structure, never presented the same kind of economic opportunity for vendors as DITA does. Some DTP tools have provided basic DocBook support, but they have never promoted it or advocated for it as they have with DITA.)
It is thus in the economic interest of all vendors in the tech comm space (and in many other spaces) to keep us glued to the desktop, to keep us working in highly graphical, highly interactive environments. The problem is, this is a very inefficient way to work. Content management systems, it should be noted, do not manage content. They facilitate human beings managing content. All the management work is actually done by human beings, interactively, through a desktop interface. Again, the desktop interactivity is necessary to maintain the proportionality that the commercial model demands.
Still less do CMSs support content automation. There is, to be sure, some support for layout automation, but that is not really a CMS function, and in many cases structured writing systems are constructed to offer the writer a palette of elements with different layouts attached, plus a WYSIWYG editing environment, which means that the writer is still effectively doing the layout, albeit with a restricted palette and few options to override. Some DITA authoring tools, like FrameMaker, essentially put the author back in the familiar desktop publishing environment, with all the familiar desktop publishing responsibilities, but with the added desktop responsibility for managing reuse and linking. The writer is actually performing more functions interactively and by hand, not fewer.
But real content automation is largely lacking. There is no support for automatic aggregation and organization of content, and no support for automated linking. We are still firmly in the desktop publishing mode, and there we shall remain as long as the current economic model of tech comm tools is maintained.
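By way of contrast, here is a sketch of what even modest automated aggregation could look like: an XSLT stylesheet that generates a map from a hypothetical topic index, rather than having a writer assemble and order the map by hand. Nothing like this is standard in current CMS offerings, and every name here is invented for illustration.

```xml
<!-- build-map.xsl: generate a ditamap from a flat topic index,
     ordered by date, with no writer assembling anything by hand.
     Assumed input: <topics><topic href=".." title=".." date=".."/>...</topics> -->
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output indent="yes"/>
  <xsl:template match="/topics">
    <map>
      <title>Generated Guide</title>
      <xsl:for-each select="topic">
        <xsl:sort select="@date"/>
        <!-- Emit one topicref per indexed topic -->
        <topicref href="{@href}" navtitle="{@title}"/>
      </xsl:for-each>
    </map>
  </xsl:template>
</xsl:stylesheet>
```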
I want to emphasize that I am not accusing the vendors of malfeasance here. Vendors’ products must follow a viable economic model. Those who come up with products that don’t have a viable economic model will simply go out of business, and we will be left with the ones that do. The only viable economic model we seem to have for COTS tech comm software is the desktop publishing model, and so, by the logic of the markets, only those vendors that use that model are available in the market.
We can’t look to the vendors, therefore, to break us out of this model and move to a more productive model for content development. If we want a new economic model, we have to change our buying behavior. Economic models are driven, in the end, by buying behavior. If we want different tools, we have to start buying differently. We have to start having a very different attitude to how we tool our technical writing processes.
This does not mean that we have to start building our own tools from scratch. But it almost certainly means that we will need to start taking more responsibility for designing and integrating our solutions. If we want automation, if we want to hand off processing from people to machines in a big way, we are not going to be able to buy a single pre-integrated solution from a single vendor, because there is no economic model that would allow a vendor to make money selling that kind of system. Vendors need proportionality. They need butts in seats, eyes glued to screens, fingers on the keyboard. Vendors will also go to great lengths to inspire you with fear at the very thought of integrating your own solution.
They are not without reason in these warnings, either. As Joe Gollner points out, implementing content technologies is hard because of the amount of integration involved. Integration is not easy. On the other hand, the pre-integrated systems that the CMS vendors will sell you provide only a trivial level of integration, one focused on keeping you in desktop publishing mode, locked in a model of individual, artisan, desktop productivity. Trying to do any real integration behind, or on top of, such systems turns out to be genuinely difficult, because nothing about them is designed to support it.
The fundamental problem is that we don’t find in tech pubs the kind of automation culture, the kind of integration culture, the kind of tool making culture, that you will readily find in development or IT. (It is, incidentally, why we tend not to be able to frame effective rebuttals when IT waltzes in and declares that our content management needs can be met by their existing systems.) Until we get out of the tool mindset of people who write business documents, and into the tool mindset of people who manage and integrate large volumes of critical business data, we are going to get the vendors we deserve.
So yes, Sam, the tool vendors are resisting the changes we need, but it is fundamentally our fault because we continue to have a desktop attitude to process and a desktop attitude to tools. We create a market in which only the kind of tools we have now provide a viable economic model for vendors.