Improving First Run Quality


The enormous improvements in quality and productivity that have occurred in industry over the last several decades can, in large part, be attributed to a focus on improving first-run quality. In traditional production line environments, the golden rule was never to stop the production line. Any faults that might occur or be noticed while the product was on the line were to be allowed to pass on, to be found and fixed in post-production testing. Come hell or high water, the line must never stop.

Toyota turned this thinking on its head. Workers are encouraged to stop the production line whenever a problem occurs. No flaw is allowed to proceed through the production process. And when a flaw is discovered and the line is stopped, they don’t merely fix the flaw; they do a root cause analysis to discover why the flaw occurred. Was it a design flaw, a parts problem, a flaw in the design of the production line itself? Did it result from an incorrect compensation scheme for buyers in the parts department, leading them to buy inferior components? However far back the problem originated, it is found and fixed.

The result of this approach is consistently better first-run quality, little or no need for post-production testing, and significantly decreased production costs. Improving first-run quality pays off big. (And once the system is working, the production line rarely stops.)

Production Line

By Literary Digest 1928-01-07 Henry Ford Interview / Photographer unknown [Public domain], via Wikimedia Commons

In the software development world, the Agile development process similarly aims at improving first-run quality. Practices such as test-driven development, where you write the tests for a feature before you implement the feature, and pair programming, where two pairs of eyes are on the code as it is written and two heads work out the design and look for flaws, serve to significantly reduce the number of bugs in the first version of the software.
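As a minimal illustration of the test-first idea (the `slugify` function and its expected behavior here are invented for the example, not drawn from any particular project), the test is written before the code it exercises:

```python
# Test-driven development in miniature: the test is written first,
# describing the behavior we want before any implementation exists.

def test_slugify():
    # This assertion is the specification for the not-yet-written feature.
    assert slugify("First Run Quality") == "first-run-quality"

# Only now is the feature implemented: the simplest code that passes.
def slugify(title):
    return title.lower().replace(" ", "-")

test_slugify()  # passes once the implementation satisfies the test
print("ok")
```

The point is the ordering: the check on quality exists before the first run of the feature, rather than being applied afterward.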

However, improving first-run quality is not something you hear much about in content creation. Indeed, it is almost axiomatic in content creation that quality is the product of revision, and the more revision the better. As Craig Fehrman writes in the Boston Globe:

Perhaps the only belief that today’s writers share is that to produce good writing, you have to revise.

This cult of revision, however, is of recent origin. Fehrman, summarizing “The Work of Revision,” by Hannah Sullivan, an English professor at Oxford University, writes:

In the age of Shakespeare and Milton, paper was an expensive luxury; blotting out a few lines was one thing, but producing draft after draft would have been quite another. Writers didn’t get to revise during the publishing process, either. Printing was slow and messy, and in the rare case a writer got to see a proof of his work—that is, a printed sample of the text, laid out like a book—he had to travel in person to a publishing center like London.

Revision became part of literary culture with the typewriter and with the rise of modernism in literature, with its emphasis on paring back expression to the minimum. It is also, Sullivan suggests, a product of the writing instruction industry. Fehrman quotes Sullivan:

Writers need to look more like professors and to discuss their laborious processes. ‘We can’t teach you how to write, but we can teach you how to revise.’ And it’s a big business.

This same notion of the beneficial effects of revision is expressed in “DITA and The Return of the Editorial Process,” where Keith Schengili-Roberts argues that one of the benefits of content reuse is that when content gets reused, it gets looked at again by fresh eyes and can be edited and improved.

 I have noticed over the years that one of the other inadvertent bonuses of this approach is that topics that are looked at most – when being evaluated for reuse – become “edited topics” and are improved in the process of reuse. In DITA environments, I am seeing the return of the editorial process as technical writers review and inevitably revise content written by their peers.

The problem with this is that every book and article I have read on the benefits of reuse, and on the ROI of reuse, assumes that a topic is reused clean. If a topic is reused three times, that is supposed to cut the writing cost threefold, and the translation cost threefold. But if the topic is being revised each time it is reused, then it needs to be translated again each time it is revised, and the cost of revising and retranslating has to be figured into the ROI equation. The cost of revising and retranslating may well be less than the cost of separately writing and translating the content from scratch three times, but the savings are considerably less than if the content were reused without revision.
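The arithmetic can be made explicit with a back-of-the-envelope cost model. The figures below, including the assumption that each revision costs 30% of the original writing effort, are purely illustrative:

```python
# Hypothetical cost model contrasting clean reuse, revise-on-reuse,
# and writing from scratch. All figures are illustrative assumptions.

def cost_clean_reuse(write, translate, uses):
    """Write and translate once; every subsequent use is free."""
    return write + translate  # 'uses' beyond the first add no cost

def cost_revised_reuse(write, translate, uses, revise_frac=0.3):
    """Each reuse triggers a revision pass and a full re-translation."""
    revisions = uses - 1
    return write + translate + revisions * (revise_frac * write + translate)

def cost_from_scratch(write, translate, uses):
    """Write and translate the content independently every time."""
    return uses * (write + translate)

write, translate, uses = 100.0, 50.0, 3

print(cost_clean_reuse(write, translate, uses))    # 150.0
print(cost_revised_reuse(write, translate, uses))  # 310.0
print(cost_from_scratch(write, translate, uses))   # 450.0
```

On these assumed figures, revise-on-reuse (310) is still cheaper than writing from scratch three times (450), but the savings are roughly half those of clean reuse (150), which is exactly the gap the paragraph describes.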

Rather than welcoming the chance to revise the content, therefore, should we not be asking why the content was created imperfectly in the first place, and why it was still in need of revision when it came time to reuse it? Perhaps the cult of revision prevents us from asking the same question that every other process has learned to ask: why was the part defective, and how do we get to the root cause of the defect and make sure it does not happen again?

Relying on revision to produce content quality is a serious problem in a marketplace in which product cycles continue to shrink, and in which customers increasingly expect content to be instantly available and constantly up to date. There is simply no time for a revision cycle in modern technical communication.

Many of us, of course, are simply living without the revision cycle, and sometimes complaining about the lack of time to do revision. But I think we have to get past the idea that revision is the only way to produce quality content, and instead look on the need for revision as a sign of a flawed writing process. Since we are never going to have the time for revision restored to our schedules, it is time to start doing the root cause analysis on why we are producing flawed content in the first place, and looking for ways to remediate our content development processes to improve first-run quality.

I’ll have some suggestions on ways to approach this in future posts. If you are interested in the subject, you may also be interested in my upcoming talk at LavaCon 2013 entitled More Content in Less Time, which looks at the application of Lean and Agile techniques to the production of content. If you want to chat about how you can apply these methods in your organization, feel free to contact me.

Author: Mark Baker

Mark Baker is a content strategist and content engineer who helps organizations produce content that matches the way people seek and consume information on the Web today. He is the author of Every Page is Page One: Topic-based Writing for Technical Communication and the Web.

9 thoughts on “Improving First Run Quality”

  1. Interesting post, Mark.
    Will you be able to link a video of the LavaCon talk later on?


  2. This is an excellent piece of information. I struggle to hammer home this concept to some junior writers on my team, and they just don’t realize the importance of getting the first cut right.

    Thanks a lot. By the way, can you give me some tips on documenting and publishing SaaS or cloud-based products? Which medium is best: conventional authoring tools such as RoboHelp or Flare, or a wiki like Confluence?

    Thanks in advance.

    1. Thanks for the comment, Anthony.

      I think with SaaS and cloud-based products, the documentation is really living in the eternal now, rather than being tied to a periodic release cycle. I don’t think it makes sense to a reader that the documentation should be months old if the product itself is in the cloud. So I would be looking at authoring products that support continuous updating of individual topics without having to do a major production/publishing cycle.

      I would also, of course, be looking for products that provide good support for Every Page is Page One topics.

      On both scores, I think wikis win over HATs.

  3. I find these remarks inspiring and truly thought provoking.

    It’s true that revision is often a crutch for content producers.
    For example, knowing that a revision pass will be carried out can create a sort of dependence: “I don’t need to be as vigilant in my writing because the proofreader will catch it.”
    Or, if planned revisions fall by the wayside, the result may be something like: “If I had had a sufficient revision process, that error would have been avoided” …

    No one would ever actually utter those words, but the resulting content reflects certain attitudes, just as poor content and revision may reflect problems in the writing process. I think you are on to something in re-examining how content is revised and the role revision plays in the writing process.

    However, at the same time, proofreading by a different party does improve quality:
    1) Fresh eyes can see lingering typos
    2) A fresh mind can bring about new ideas on how to present content

    Not that it really matters, but as proof of the importance of proofreading, regardless of the process, I’d like to point out this small typo found at the end of the following sentence:
    “Rather than welcoming the chance to revise the content, therefore, should we not rather be asking why the content was created imperfectly in the first place, and why is was still in need of revision when it came time to reuse it?”

    Current revision techniques may be like a lingering bad habit, but is this a habit we may be unable to shake?

    1. Thanks for the comment, Sarah. I agree we will not remove the need for proofreading entirely, though there are a number of tools that can be used effectively to reduce errors of this class.

      The real productivity killing revisions are omissions of fact or erroneous explanations which are not caught until later in the process, if they are caught at all. These too can be significantly reduced using tools and techniques that I will talk about in later posts.

  4. You present some very thought-provoking ideas. I see this dilemma from two points of view. First, we are each faced with different challenges throughout the development process (speaking from a software development perspective). We learn more about the product or industry as our experience with it increases. We are challenged to learn more and more about our audiences and, in my opinion, organizations are only now beginning to see the benefit of truly knowing our audiences. With this newly gained knowledge, we are compelled to examine earlier documentation and improve it. In this case, as you suggest, it may be reused topics that need to be revised. I’m not talking about improving grammar or making corrections. I’m talking about revising it to make it easier to understand or to bring new insights to topics. Note that my perspective is a bit skewed since I’m working on a fairly new product that has not reached maturity, in an industry that is rapidly changing.

    The other challenge is the ROI on topic reuse. Whether you use DITA or simply use topic-oriented authoring, a reuse strategy is essential. I like your breakdown that the gains may not be as straightforward as we might be led to believe. Often we hear the other side of the argument that it’s all hype. However, I think that creating and managing a reuse strategy is essential. At a minimum I see this as deciding how to handle revisions to reused topics. I’m fortunate that we don’t have our user assistance translated…yet. But I think this is essential in balancing the cost in the long-term; and flexible, yet firm policies around this will be key.

    1. Thanks for the comment Suzette.

      You are really getting to the heart of the matter (and anticipating where I plan to go with this) when you talk about gaining a better understanding of how we learn and how information is generated through a project. Often, the need to revise, and particularly the need to revise late, results from a failure to plan for how information will be created through a project. When we start paying attention to that, we can start to reduce our error rates and the amount of rework we have to do. We can’t reduce them to zero — that was the fallacy of the waterfall approach to project planning. But we can exercise much better control over them.

      The ROI of topic reuse is indeed a challenge, and the projections are often very optimistic. My beef with the focus on reuse, though, is not that it is ineffective (it often is effective), but that it tends to be the only thing people focus on, and in doing so they miss many other opportunities to reduce costs, and to make reuse work more effectively.
