In Praise of Short Term Thinking

I was in Portland for LavaCon last week, and one night I had dinner with a bunch of content strategy types. As we do, we spent some of the conversation bemoaning the short-term thinking that many people and organizations have about content.

One of the first sessions I attended, however, was Edwina Lui’s “A Goldilocks Approach to Product Innovation: Finding the Right Team and the Right Project at the Right Size,” which contained a salutary warning about the dangers of too much long-term thinking, particularly long-term thinking that is aspirational rather than answering an immediate business problem.

Lui told of an enterprise-scale content management and content modeling initiative that foundered under growing complexity and a failure to meet current business needs, and then contrasted it with a much smaller, simpler initiative that succeeded in addressing immediate business needs. Clearly short-term thinking won the day in this case.

Short-term thinking is common throughout organizations, and it is sometimes seen as the cause of expensive failures. And yet, if short-term thinking invariably led to failure, then most organizations would fail most of the time — which they don’t. The fact is, short-term thinking has some big pluses going for it.

One of the principles of the Toyota Production System, and of the Lean movement it inspired, is: don’t build anything until it is needed. A lean manufacturing system will often forgo the efficiencies of large-batch manufacturing in favor of smaller, less efficient machines that produce parts only when the next process in the line needs them. While the machine involved may be less efficient in itself, producing things only when needed reduces a much greater inefficiency — work-in-progress inventory.

Work-in-progress inventory — stuff sitting around in warehouses waiting until it is needed for the next stage of production — turns out to be an enormous source of waste. Not only does it tie up capital and occupy floor space, it hides defects and robs the company of the ability to respond quickly to market changes.

Long-term thinking may say: we are going to need one million rear-view mirrors over the next two years, so let’s make a really big batch for a low unit cost. But what if there is a defect in the mirror design? What if regulations change? What if the design changes? What if everyone suddenly demands rear-view cameras instead of mirrors? And what will it cost to store all those mirrors until they are needed? How much will it cost to integrate them into the production line? How much capital will be tied up in a million mirrors that are just waiting to be used?

It turns out that just making a few mirrors at a time, even at a higher unit cost, is actually much more efficient and avoids all sorts of pitfalls. Score one for short term thinking.
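The arithmetic behind this is easy to sketch. The numbers below are invented purely for illustration, and the sketch ignores storage and carrying costs, which would only widen the gap:

```python
# Illustrative arithmetic only; all numbers are invented.
# Compare making one big batch of mirrors up front against making
# small batches on demand, when a design change scraps unused stock.

BIG_BATCH_UNIT_COST = 2.00     # low unit cost from batch efficiency
SMALL_BATCH_UNIT_COST = 2.50   # higher unit cost, made only as needed
TOTAL_FORECAST = 1_000_000
USED_BEFORE_DESIGN_CHANGE = 300_000  # a design change scraps the rest

# Big batch: pay for everything up front; unused stock is written off.
big_batch_cost = TOTAL_FORECAST * BIG_BATCH_UNIT_COST

# Small batches: pay only for what was actually used.
small_batch_cost = USED_BEFORE_DESIGN_CHANGE * SMALL_BATCH_UNIT_COST

print(big_batch_cost)    # → 2000000.0
print(small_batch_cost)  # → 750000.0
```

Even at a 25 percent unit-cost penalty, the small-batch approach comes out far ahead the moment the forecast turns out to be wrong — and forecasts are usually wrong.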

From a content perspective, this means that we should perhaps be more focused on getting content out the door and into the hands of consumers than on creating elaborate information systems or elaborate systems of reuse. Those systems may well be useful, but if they tie you into long-term publishing cycles or the creation and maintenance of complex models that interfere with getting content out the door today, they may be doing more harm than good.

There is a principle in Agile software development that says: do the simplest thing that works. Don’t build in features you think will be needed in the future. Just build what you have committed to build today, and do it the simplest way that will actually work. The reason is twofold:

  • If you build anything more complicated, it will take longer and cost more money. This will mean you have spent more resources before you deliver it to the customer and learn if it meets their needs. That lost learning opportunity is very expensive.
  • You don’t have enough information about what future needs are actually going to be (in part because you have delayed your learning opportunity) and what you are building today may not actually be useful in the future.

Long term thinking is thinking for the future, and the further you try to think into the future, the less certain your conclusions become. Short term thinking is thinking about the present. It is based on much more information.

In hindsight, when we get somewhere through a series of short-term expediencies, it is easy to see that we could have got to this place far more efficiently and at far less cost if we had anticipated future needs. Long-term thinking backwards is easy — it is based on certainties.

But at the start of the journey you did not have those certainties, nor any way to get them except by taking the journey. If you started out on another journey today, you would not have the information to make accurate long term predictions about your destination. If you tried, chances are you would waste a huge amount of resources to build things based on your wrong guess, and end up spending even more money than if you had taken a series of short term expediencies like you did the first time.

Don’t get ahead of yourself, in other words. Don’t build until you know what you are building for, and then don’t build more than you need right now.

But wait! Isn’t that really a form of long-term thinking? If you recognize that you lack sufficient information for a long term plan, isn’t that a form of long term thinking about what information you have, what you need, and where it is going to come from?

Yes it is. This is the long term thinking that underlies the lean and agile movements. We might sum it up by saying that the long term thinking is more about planning how to learn than planning what to do. Lean and agile stay lean and stay agile in order to be able to react quickly and efficiently to new information. And they plan their systems to maximize learning, so that they generate as much new knowledge as possible out of each short term build iteration they perform.

A journey of a thousand miles not only begins with a single step, it is steps all along the way.

We would do well to take a similar approach, both in the content that we create, and in the systems that we use to create it. Every Page is Page One is about creating small units of content that can work independently. It helps us get content out the door without elaborate publishing rituals.

But what about structured content? Isn’t that a form of long-term thinking, and doesn’t it have long term advantages? Sometimes, certainly. And sometimes it becomes an expensive disaster, multiplying in complexity and not delivering many of the long term benefits that were promised because short term needs have changed in the meantime.

But not every structured content initiative has to be enterprise scale. Not every project has to use a massive complex global standard 20 years in the making. Structured content techniques can bring short term benefits to short term projects. And along the way we might learn something that will help us avoid the pitfalls of large scale structured content if and when it is appropriate to go there.

34 Responses to In Praise of Short Term Thinking

  1. John O'Gorman 2014/10/20 at 16:23 #

    Good post, Mark – and like a lot of other problems in the digital space binary (either / or) doesn’t cut it.

    To be truly agile, one needs to be able to apply invariant (long-term) principles to short-term projects.

    • Mark Baker 2014/10/21 at 13:07 #

      Thanks for the comment, John. And well said. Unfortunately, we do seem to have a tendency to transpose principles into practices in our heads, and then apply invariant practices to our work.

  2. Marcia Riefer Johnston 2014/10/20 at 17:14 #

    This thought-provoking point reminds me of something I heard James McQuivey (a VP at Forrester) say a few weeks ago at the Delight conference: “If you want to build the future, build adjacent possibilities.” I understood him to mean, take one step then the next step then the next step rather than aiming for massive leaps.

    • Mark Baker 2014/10/21 at 13:11 #

      Thanks for the comment, Marcia.

      I love the phrase “adjacent possibilities”. I think it is very apropos of short-term thinking. The adjacent possibilities are precisely those things that you can get to in a short period of time. Short-term thinking does not have to be thinking with no thought to the future; it can be building the future by seizing the opportunities that are in front of you now.

  3. cud 2014/10/20 at 18:00 #

    Looking at the headline, my sociopolitical hackles went up. But reading the post I have to agree. We’ve all been in companies or teams that were paralyzed by the fear that any decision today will be wrong tomorrow.

    Where I work, the founder is always saying you can do anything if you choose the right level of abstraction. For our product it seems to bear out (not trying to sell anything — who here needs VM workload management software, anyway?) By making an abstraction layer that models the environment, we can add new technologies almost like adding plugins. The abstraction layer doesn’t need to change, just the mediation that feeds it data. (This is great for a product, but lots of work for the tech writer!)

    I think this is what makes a great chess player. By abstracting the principles of the game to the appropriate level, a chess master can respond to developments, while keeping an eye on the long-term goal — winning. The important thing is to not paint yourself into a corner. Nothing paints you into a corner more than assuming you have a 30-move plan to win, and sticking to it slavishly.

    • Mark Baker 2014/10/21 at 13:20 #

      Thanks for the comment, cud.

      Chess is a great analogy. Certainly, the best players are the ones who can see many moves ahead. But that all goes out the window as soon as the other person moves. What is really telling is that it requires a mind of staggering genius to see that far ahead on a chessboard — an environment infinitely more constrained than any business environment we are likely to find ourselves in in the real world.

  4. Michael Andrews 2014/10/21 at 03:37 #

    I can’t disagree with the notion that one needs to implement a solution that addresses current business requirements, and not blue sky thinking. But before we embrace the idea that agile unfailingly gets us where we need to go, we also need to understand and accept that complex and messy projects are inherently inefficient. Because we don’t know the future, we don’t understand the long-term cost of our current choices — we can only see the current benefits and costs.

    I still think there is a role for a long term vision, and even a long term high level plan. It’s fine to be lean when you already know what you are making (say a car) but if you are evolving a product you need to have a framework that is extensible. If you can’t evolve the work you’ve done already into something more integrated, you may need to be prepared to refactor previous work. Refactoring may or may not be expensive, but it is common.

    I don’t think we should be limited to short term thinking because complex projects are often executed poorly. Such projects may have been based on flawed premises, but they may also have been managed badly, because they were structured poorly, improperly scoped, didn’t get political buy-in from stakeholders or demonstrate their value in a clear way. It’s harder to do those things on bigger, longer term projects, and requires special skills. But I’ve seen it done.

    • Mark Baker 2014/10/21 at 13:33 #

      Thanks for the comment, Michael.

      I don’t think agile unfailingly gets us where we need to go. I don’t think anything does. The future is hard, and we are going to get it wrong and make messes and lose money and be inefficient.

      Nor do I think it is impossible for mega-projects to succeed and come in on time and budget. But it is very hard, and very expensive, and perhaps requires a lot of luck as well.

      It is more a question of determining what is the most likely approach to product success, and in this regard the agile and lean processes have shown consistently good performance. Their biggest challenge, perhaps, is their counterintuitive nature. People who don’t trust the system tend to hedge their bets with long term plans based on inadequate information and then blame the agile process when things go wrong.

      And that is part of the calculation you have to make. Projects are done by the real people available to do them, not by perfect people with perfect vision or perfect confidence. You will usually get a better result by setting your goals based on the abilities of the team you actually have.

  5. Alex Knappe 2014/10/21 at 07:47 #

    And once again we’re looking at content strategy and content tactics to sum it up.
    Content tactics is responding quickly to business problems you have today (like adapting to a new type of media).
    Content strategy is about laying out your long term goals.
    Tactics can help to achieve the goals of a strategy. A strategy in return helps to achieve your long term vision.
    But the strategy restricts the tactics you can use in order to keep your resources available, while one tactic may immediately rule out using another.
    In other words: Short term thinking is only beneficial, if you support it by a long term vision.
    It is an old wisdom that only he who cares about the winter in the summer lives to see another summer.
    If I were only thinking about short-term solutions I’d be running around in circles naked, yelling “mobile documentaaaation” – yet, I’m still sitting here dressed, amusing myself about the weak attempts to port hundreds of pages of text to fingernail-sized displays. Riding a horse might seem a good tactic, but avoiding riding a dead horse seems to be a better one in terms of long-term strategy.

    • Mark Baker 2014/10/21 at 13:48 #

      Thanks for the comment, Alex.

      I’m not sure I agree that content strategy is about long-term vision. I think it is about tying content to revenue. The purpose of a company is to make money. There are many examples of companies who had long-term visions of how they were going to make money that failed to take into consideration the needs of the current market, or that blinded them to market shifts which rendered their long-term strategy moot.

      It is not failure of long term vision that dooms companies these days, so much as missing the next sales cycle. Blackberry doubtless had a long term vision, but missed consumer smartphones, a market that appeared almost overnight.

      In The Lean Startup, Eric Ries talks about “the pivot” — the moment where a startup figures out that it needs to change how it is addressing the market — and the importance of making the pivot before you run out of runway — the willingness of your investors to continue financing you.

      If there is a long term vision in the life of a lean startup, it is the vision of continuous learning. Short term action to produce actionable learning in the short term. Rinse and repeat.

      You portray short term thinking as “running around in circles naked”. I think there is a strong tendency to equate short term thinking with chaotic or undisciplined thought or action. Correspondingly, there is a tendency to equate disciplined thought and action with long term thinking. But there is a trap here: it can lead us to believe that unless we are thinking long term, we are not being disciplined in how we think and act.

      I think the lean and agile movements have really illustrated the fallacy of this. They have shown that disciplined short-term thought and action, with a deliberate focus on learning, can be far more productive than long-term plans based on less information. It is a counterintuitive idea, but a vitally important one.

      • Alex Knappe 2014/10/22 at 03:55 #

        I think I need to clear up my statement about strategy a bit. Strategy and long term thinking need to be as agile and flexible as short term thinking and tactics.
        They need to be both in a constant flux, including recent developments and adapting to current needs. But strategy is very much about anticipation of the future. It is about observing trends and a greater whole. It is about having a plan B and aces up the sleeve, when unexpected things happen.
        Tactics are solving one problem at a time, while strategy needs to try to solve lots of problems at any time.
        If you’re trying to set up a strategy and think “this will do for the next 5 years”, you’re ultimately going to fail (history tells us). Only adaptive strategies are going to point your tactics into the right direction.
        In return, tactics without a strategy backing them up are also doomed to fail, as they solve one problem and (if you’re lucky) open up one or more problems in another spot. This is running around in circles naked. Running from hot to cold to hot to cold while not being able to stop and focus on what’s ahead.
        In business this makes the difference between big players and mere survivors.
        For example Google: their strategy to invest loads of money into solving business problems we may (or may not) have (yet) paid off far better than merely sticking to optimizing their search algorithms.
        Same goes for Amazon. Adaptive strategy took them from a mere book seller to what they are now.
        It is needless to say that the same principle works for tech comm.
        Instead of trying to get that Word document onto a smartphone (solving a business problem), strategy should take you to the point where your content is laid out to work in as many environments as may be needed (solving many – even future – problems at a time).

        • Mark Baker 2014/10/22 at 09:36 #

          Thanks for the clarification, Alex.

          I still can’t agree, though. Strategy is more about breadth of view than length of view. The general says, hold that hill, because strategically the hill protects the left flank. That is strategy, based on the general having knowledge of the whole battlefield. The captain on the hill has to dispose his company to hold the hill as best he can with the resources available. That’s tactics.

          So I definitely agree that tactics must be informed by strategy. But that is often about short-term tactics being informed by short-term strategy. Indeed, I would suggest that in business today, it is the lack of understanding of the company’s short-term goals among staff that causes the biggest problems.

          This is not to say that there is never a role for long term thinking. The purpose of the post was to praise short term thinking, not to condemn all forms of long term thinking. I do agree that one should have an adaptable strategy, but I would be more inclined to think of it as a strategy for adapting rather than a strategy that can be adapted.

          In the tech comm case, I think our present situation is particularly instructive, since many organizations have implemented a long-term vision of creating enterprise-wide systems for the sharing of content between manuals, only to wake into a world where manuals are passé.

      • Vinish Garg 2014/10/28 at 20:51 #

        ‘Rinse and Repeat’ you say for lean products.
        I recall Sarah (from Scriptorium) talking about MVC at the tcworld India conference earlier this year and I modeled one of my projects on Minimum Viable Content (MVC) Strategy. Rinse and repeat explains it so very accurately!

        • Mark Baker 2014/10/29 at 13:16 #

          Thanks for the comment, Vinish.

          I think minimal viable content is a great strategy, though it is easy to misunderstand what it means. I think there is a real danger of people mistaking it for simply creating as little content as possible. That is not what it means — or not what it should mean.

          Minimal viable product is the thing one initially takes to market to judge customer acceptance and increase learning. The point is to learn as much as you can about the market in the least time and for the least money. Based on what you learn, you then create more product, releasing it, and learning from the reaction.

          Similarly, minimal viable content should be about releasing content as quickly as possible to gauge reaction, and then releasing more/better content as long as it continues to yield positive revenue growth.

          But it is indeed the same principle: rinse and repeat.

  6. John Tait 2014/10/21 at 08:57 #

    “My long term strategy is to build in flexibility.”

    One day I will read an article praising the suitability of Microsoft Word for complex technical documentation. Easy to use! Adaptable! Ubiquitous! I might even write it.

    • Mark Baker 2014/10/21 at 13:50 #

      Thanks for the comment, John.

      Perhaps the focus should be more on the suitability of complex technical documentation itself, rather than on the tools that build it. Sometimes the difficulty of building or using tools for a task is an indication that the task itself is ill conceived.

  7. Edwina Lui 2014/10/21 at 17:56 #

    Thanks for the shoutout, Mark! I (obviously) agree that the flexibility that short-term thinking affords us is preferable to the rigidity that can, but shouldn’t be, the consequence of long-term thinking. The key problem, I think, is that when we engage in long-term thinking, we too often attach solutions, rather than just goals, locking ourselves in to a path that may not prove suitable 1 month, 3 months, or 1 year down the line. I like to think of it in terms of hiking a mountain with few established trails (of which I am, admittedly, a novice)—I know I want to get to the top, but starting at the bottom and walking straight up, while the most direct, is very unlikely to be the safest route, let alone one that ends in success. Instead, going short distances at a time, stopping to assess, even backtracking and taking a different route are far more likely to get me to the summit.

    That said, it’s equally important, when assessing for short-term solutions, not to equate current state with current need. In the case of the enterprise content model we attempted, existing content models were assumed to be accurate reflections of current content requirements, as opposed to the results of legacy decisions that they were.

    • Mark Baker 2014/10/21 at 19:49 #

      Thanks for the comment, Edwina. I really enjoyed your presentation at LavaCon. Quite a tour de force, in fact.

      And I think you are exactly right that it is vital not to equate current state with current need. Indeed, I have seen this over and over again in content projects, both long term and short term: the recreation of an outdated current state within the new system. A revolution in methods without the slightest change in models.

      Indeed, my feeling is that the biggest challenge we face in communications today is not anticipating the future but catching up with the present. In that sense, a short-term change can (and often must) be quite revolutionary, simply because the current state is so far behind where we need to be today.

      In fact, long-term thinking can be very dangerous today because it can look right over the chasm that has opened at our feet and aim at an imagined future that looks a lot more like the past than the present. The challenge to adapt to current conditions is short term because those conditions exist right now. But it is still radically different from what we were doing before.

  8. John O'Gorman 2014/10/22 at 11:00 #

    Start With Why by Simon Sinek…the “Why” gives you both the long-term strategy (hold that hill because…) and short-term tactics (concentrate firepower on the most likely ascent; monitor everywhere else because…), and even operations (aim your rifle at center mass because…).

    The challenge on most digital projects – like content management – is that there is no reliable line between strategy (long or short term) and tactics. In my humble opinion, this is largely due to the fact that, like every other information management initiative, the semantic component is mostly ignored or at best paid lip service.

    It is relatively easy to go through an enterprise content management project without ever asking why because the use of technology is thought to be self-evident in terms of success.

    Experience (project failures and lack of uptake) tells us otherwise.

    • Mark Baker 2014/10/22 at 12:59 #

      Thanks for the comment, John.

      The word strategy gets used to mean a lot of things these days. Sometimes it just means “goal”. Sometimes it is used to mean “long term plan” as opposed to “short term plan”. Sometimes it is used to mean wide view as opposed to the narrow view of tactics. Sometimes it is used to mean any plan to do anything at all, no matter how mundane. I’m honestly not sure, at this point, which of the many available meanings any of us is using in this conversation.

      And my point was not to debate strategy vs tactics, but short term vs. long term thinking, which I regard as an orthogonal question. I most definitely do not mean short term thinking to imply either lack of discipline or lack of strategy. I mean it to imply decision making and development based on what is known rather than what is guessed at or hoped for.

      That said, I think you are absolutely right about the semantic component being ignored or paid lip service in most content projects. In many cases, content management projects are sold to facilitate management by hand rather than to implement management by structure and metadata.

      The more semantically structured and the more consistent a content object is — the more cohesion it has — the easier it is to automate the management of that object. Conversely, the less structure, the less metadata, and the less cohesion, the more management overhead is required. This is very much what Edwina was talking about — modeling the present state.
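      To make that concrete, here is a minimal sketch (all names and routing rules below are invented for illustration, not taken from any real system) of how consistent metadata lets a content object be managed automatically, while an unstructured one falls back to management by hand:

```python
# Hypothetical sketch: content objects carrying consistent, structured
# metadata can be routed by a few lines of code; unstructured objects
# need a human to read them first.

def route_for_review(content):
    """Pick a review queue from metadata alone, with no human triage."""
    required = {"type", "product", "audience"}
    if required - content.keys():
        # Without structure, the only fallback is manual management.
        return "manual-triage"
    if content["type"] == "api-reference":
        return f"dev-docs-{content['product']}"
    if content["audience"] == "administrator":
        return "sysadmin-docs"
    return "general-docs"

structured = {"type": "api-reference", "product": "widget",
              "audience": "developer"}
unstructured = {"title": "Some notes I wrote"}

print(route_for_review(structured))    # → dev-docs-widget
print(route_for_review(unstructured))  # → manual-triage
```

      The structured object manages itself; the unstructured one incurs exactly the management overhead the paragraph above describes.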

      I think this is based on an underlying assumption that structure and management are inherently hard and that the only way you can do them successfully is to create one structure and one management application for the entire organization and design it for a lifespan of 10 or more years. In other words, it is assumed that this has to be done with long-term thinking because it is impossible to do it with short-term thinking.

      And the problem with that is that the information development landscape and the information consumption landscape are changing with unprecedented rapidity, rendering most of the assumptions on which these systems were based out of date.

      The big question is, can you be both structured and agile? I believe we can. Development does it. Indeed, they could not be agile without being structured. So why can’t we?

      • John O'Gorman 2014/10/22 at 13:23 #

        “The big question is, can you be both structured and agile? I believe we can. Development does it…”

        Actually, I would argue that developers only appear to be agile because they don’t worry about semantics. Structure and management (grammar and methods) in programming are only two legs in the race. Three-legged racing is an art form, and most developers I know are all about the speed, not about the validity.

        They have to answer to the semantics of the code and the ‘fit-for-purpose’ requirements of the application. If one of their requirements was to be semantically interoperable with the rest of the enterprise they would (in the short term) be extremely slow.

        As a long term strategy, however, getting the semantics of the whole thing right is (or would be!) a huge advantage.

        • Mark Baker 2014/10/23 at 07:41 #

          There seems to be a widespread assumption that semantics must be universal to be valid or useful. (It is a very common assumption in the content space today.) Actually, the very opposite is the truth. Semantics are more valid and more useful the more local they are.

          Actionable meaning, and the vocabulary that describes it, is very local. We can gather a large amount of actionable semantics quite easily at the local level because we can ask authors for the semantics we need in terms that they already understand and use.

          But if we try to establish enterprise-wide semantics and capture them in an enterprise wide taxonomy, bad things happen: We make the task of capturing semantics much more difficult, since every author now has to learn the taxonomy. We make the semantics we capture less actionable, because the global taxonomy misses important local distinctions that we could act on locally. We make the semantics we capture less reliable, because local authors will understand and apply the global taxonomy in different ways based on their differing local experience and knowledge, and because it is not a perfect fit for the domain they are working in.

          The irony in this is that an accurately captured semantic is inherently global in nature. That is, a semantic that accurately captures what something is has global applicability even if its vocabulary is local. To use it globally simply requires mapping its vocabulary to a global space. If you capture good semantics locally, therefore, you should not have to worry about their being globally useful. You get that for free.

          This means that the best way to get reliable semantics at the global level is not to enforce a single semantic standard across the organization (for the reasons listed above) but to map the semantics of local domain into the global space as and when necessary. Semantics captured at the local level will be more consistent, more reliable, more accurate, and less expensive to obtain.
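          As a rough sketch of that approach (the vocabularies and category names below are invented for illustration), authors tag in their own local terms, and a mapping translates into the global space only at the boundary:

```python
# Invented example: two teams tag content in their own local vocabulary.
# Each local term maps to a broader global category; local distinctions
# (e.g. "hotfix" vs "patch") survive locally but collapse globally.

SUPPORT_TO_GLOBAL = {
    "hotfix": "software-update",
    "patch": "software-update",
    "workaround": "troubleshooting",
}

TRAINING_TO_GLOBAL = {
    "lab-exercise": "tutorial",
    "walkthrough": "tutorial",
}

def to_global(local_term, mapping):
    # Terms with no global equivalent go into an explicit "unmapped"
    # bucket rather than forcing authors to learn the global taxonomy.
    return mapping.get(local_term, "unmapped")

print(to_global("hotfix", SUPPORT_TO_GLOBAL))        # → software-update
print(to_global("walkthrough", TRAINING_TO_GLOBAL))  # → tutorial
```

          Authors never see the global vocabulary; the mapping applies it consistently, which is exactly why the mapped result tends to be more reliable than tags entered directly against the global taxonomy.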

          Developers do worry about semantics. They worry about them a lot. They also encapsulate local semantics behind APIs and protocols so that they are at liberty to manage local semantics efficiently, and to modify them when necessary, while maintaining a consistent interface that the outside world can use. This semantic boundary is vital to creating agile systems — systems that are internally cohesive and externally loosely coupled.

          Content systems would benefit greatly from taking the same approach.
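          A minimal sketch of that encapsulation pattern, with invented names: the local vocabulary stays private and free to change, while outside callers see only a small, stable interface:

```python
# Hypothetical sketch of local semantics behind a stable interface.
# Internally the catalog uses its own local terms; externally it
# exposes only one query method, so the local vocabulary can change
# without breaking callers — cohesive inside, loosely coupled outside.

class ContentCatalog:
    def __init__(self):
        # Local vocabulary, private to this system.
        self._items = [
            {"local_kind": "hotfix-note", "title": "Fix for login timeout"},
            {"local_kind": "walkthrough", "title": "Getting started"},
        ]
        # The only place local terms meet public categories.
        self._kind_to_public = {
            "hotfix-note": "update",
            "walkthrough": "tutorial",
        }

    def find(self, public_kind):
        """Stable external interface: query by public category only."""
        return [
            item["title"]
            for item in self._items
            if self._kind_to_public.get(item["local_kind"]) == public_kind
        ]

catalog = ContentCatalog()
print(catalog.find("tutorial"))  # → ['Getting started']
```

          Renaming or splitting the local kinds only touches the private mapping; every caller of find() keeps working.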

          • John O'Gorman 2014/10/23 at 09:40 #

            I think I agree with most of what you said, Mark (except for your remark about taxonomy), until we get down to the last paragraph… If developers worried (a lot) about semantics the way you describe it (global semantics with local vocabulary) then all applications would be interoperable, but they are not. One new application equals one new silo. Agreed that the local semantics and vocabulary make it easy for users to engage that single app, but again the semantic boundary enables agility: they only have to learn a local dialect.

            An enterprise taxonomy should be exactly the same as a local taxonomy and just as easy to learn. That’s what I meant by invariant (long-range) thinking. The taxonomy should also be able to manage local dialects and local semantics elegantly.

            Toyota had a long-range strategy (make as many parts as possible reusable across models and years) that enabled a short-term inventory strategy. My approach (universal taxonomy and local autonomy) does exactly the same thing.

          • Mark Baker 2014/10/23 at 11:19 #

            Hi John,

            No, it’s not about global semantics with local vocabulary. It is about local semantics with local vocabulary.

            Local semantics are mappable to global semantics insofar as the global semantics cover the same categories, but local semantics typically make distinctions that global semantics don’t. On the other hand, local semantics are far more restricted, and don’t cover all kinds of areas that global semantics do.

            But there is a more subtle problem. Local domains do not always use categories that align with the boundaries of global semantics at all. It is not always a case therefore of four local terms always mapping to one global term. Sometimes the global taxonomy has no terms at all for areas that are vital to the local domain.

            But the thing is, if the mapping of global categories to the local domain is wrong, mapping approximately from the local to the global semantics does not produce worse global semantics than if they were entered directly. (In fact, the mapping will at very least be more consistent.) But attempting to use global semantics in the local domain does great harm in the local domain.
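A toy sketch of that asymmetry (every term here is invented for illustration): several local terms may collapse into one global term, and some locally vital terms may have no global home at all.

```python
# Hypothetical local (support-team) terms mapped to a global taxonomy.
# Three local distinctions collapse into one global term, and one local
# term has no global equivalent at all -- mapping outward loses detail,
# but imposing the global terms locally would lose even more.
LOCAL_TO_GLOBAL = {
    "crash-on-launch": "defect",
    "intermittent-hang": "defect",
    "data-corruption": "defect",   # three local terms -> one global term
    "billing-dispute": "account",
    "escalation-ritual": None,     # vital locally, absent globally
}

def to_global(local_term):
    """Map a local term to the global taxonomy, losing local detail."""
    return LOCAL_TO_GLOBAL.get(local_term)

print(to_global("intermittent-hang"))  # defect
print(to_global("escalation-ritual"))  # None
```

The mapping is lossy but consistent, which is the point of the paragraph above: going local-to-global loses nothing the global level could have captured anyway.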

            “An enterprise taxonomy should be exactly the same as a local taxonomy and just as easy to learn.”

            It simply isn’t possible to have an enterprise taxonomy that is exactly the same as a local taxonomy if you expect the local taxonomy to accurately reflect the language of the local domain. Local domains use terminology in specialized ways to describe distinctions and categories that are of interest, and that are understood, only within the domain.

            And an enterprise taxonomy cannot be as easy to learn as a local taxonomy, since the local taxonomy is based on the domain in which people in that domain live and work. They know the local taxonomy already, and think in the categories it describes. An enterprise taxonomy, on the other hand, is expressed in terms that are either not familiar, or are familiar but used in a different or less precise way than they are used to. The result is that people from different domains will use the enterprise taxonomy differently, both when they tag something and when they try to find something.

            Similarly, applications are not supposed to be globally interoperable; they are supposed to perform specific functions in specific domains. They may be partially interoperable insofar as there is a match between domains. But what I was describing was an API or a protocol, which provides global access to a local domain.

            Local semantics exist first and foremost to solve local problems within a domain. Progress in structured writing has been greatly hampered by the idea that structure and semantics must be global. This makes much of the local domain-specific automation, guidance, and validation which structured writing ought to bring impossible because the global structures and global semantics are too general to support it in a meaningful way. It also makes the deployment of structured writing systems slow and cumbersome because the large scale of global vocabularies makes them both hard to learn and hard to publish, and because of the amount of management overhead that is incurred because of the lack of automation and validation.

            Local semantics can be captured locally at low cost, and managed and processed locally at low cost and in an agile manner. Global semantics and global structure make that impossible. In many cases the aspirational imposition of global systems fails to solve any global problem, while making life significantly more difficult in each of the local domains it affects. This is why we are often better off solving known local problems in the short term — but doing so in a forward looking way.

          • John O'Gorman 2014/10/23 at 11:59 #

            Oxygen is oxygen regardless what they call it in Mumbai or London.

            There is a universal classification technique – one that Karen Lowe of LoweTechSolutions says enables “Extreme Reuse” – that I am currently using with great success. The ‘language’ that locals use versus the ‘language’ that the world uses may be different in form but not in function.

            So, if I call my city Cowtown and everyone else calls it Calgary but we all classify it as a Place…where is the disconnect?

          • Mark Baker 2014/10/23 at 12:58 #

            Do you mean Oxygen the gas? Oxygen the XML editor? Oxygen the TV channel? If you mean the gas, some uses of that gas will use the chemical term O2 — but if you use O2, be careful, because that term has a dozen other meanings as well.

            To bring this closer to home, it is clear that we understand different things by the word “global”. My use had nothing to do with Mumbai vs. London. I used it to mean a domain that encompasses a set of smaller domains, such as a corporation that encompasses an R&D domain and a finance domain that have completely different lexicons, but probably several “words” in common.

            But the language that locals use is different in both form and function from the language that other locales use. There really is no one language that the world uses. There are certain domains with a lexicon that is standardized across the terrestrial globe — such as air traffic control. But that lexicon is highly domain specific and much of its vocabulary uses “words” in special senses that don’t correspond to what those words mean in other domains.

            Other domains are geographically local. Or local in some other way. Even the same field may exist in multiple local domains, so that a cook in Mumbai and a cook in London probably don’t speak the same language.

            That is where the disconnect is. All language is local and domain specific. Most of us inhabit and can converse in multiple domains, understanding intuitively which domain we are in as we select the words we use. But the domains are separate. The meaning of words within those domains is specific to the domain. In some cases there are equivalent (if less precise) terms in other domains (myocardial infarction = (less precisely) heart attack). In many cases there simply is no equivalent in other domains. (Try explaining the difference between an XML element and an entity to a florist.)

            You cannot establish a domain free taxonomy. All meaningful taxonomies belong to specific domains. All corporations consist of multiple domains. Therefore there cannot be a meaningful corporate taxonomy that encompasses all the domains within a corporation. (There can, very usefully, be domain specific taxonomies within a corporation.)

            If you want to get meaningful actionable semantics out of authors in a domain, you must ask for it in terms of structures and vocabulary that make sense in the domain they are working in.

          • John O'Gorman 2014/10/23 at 14:42 #

            Two points:

            “All language is local and domain specific.” This is confusing to me…does this mean that there is no such thing as global (let’s say enterprise to contain things) or cross-domain language? Does the software engineering domain have no terms in common with chip manufacture or the users of their product? If not, how would the lack of overlap manifest itself in application performance or UI?

            “You cannot establish a domain free taxonomy.” This reminds me of the statement by aviation engineers that bumble bees can’t fly. I have one (a domain free taxonomy) and I use it every day, and my clients (all of them currently developing domain-specific content management solutions) love it… how can that be?

          • Mark Baker 2014/10/23 at 14:52 #

            Certainly domains can have terms in common with other domains. Sometimes they mean the same thing, though often they do not — sometimes in subtle ways not apparent outside their respective domains. But a language is not simply a collection of terms. It is the relationships between those terms and the experience of the practitioners in those domains.

            If local domains had no terms in common, a global taxonomy would simply be a compilation of local taxonomies. But it is because domains have terms in common, yet understand them in subtly different ways, that global taxonomies fail to say the same thing to people working in different domains.

            I haven’t seen it, of course, but I would guess your taxonomy is specific to the domain of content management solution builders. It probably does a good job of solving the problems of content management solution builders. But there is a reason why most users hate their CMS.

          • Edwina Lui 2014/10/23 at 14:53 #

            I think what Mark is saying (and this is just my interpretation) is that languages are developed and exist in the context of a specific domain; a global or enterprise language exists in its own domain, which happens to be one that encapsulates smaller local domains.

            However, that doesn’t mean that the enterprise language should be a superset that simply contains everything in the local language—if an enterprise taxonomy were to include every individual term and relationship defined in each of its component local domains, down to the same level of detail and specificity one would expect to find in a local taxonomy, it would very likely be unwieldy and practically unusable. (Shameless plug: this is the problem illustrated on slide 13 of my deck, which Mark linked in his article.) If, however, that enterprise taxonomy served as a sort of translation hub—a semantic exchange model, of sorts—it could serve to programmatically connect content from one domain to another.

    • Mark Baker 2014/10/23 at 15:15 #

      Absolutely, Edwina.

      Insofar as there is semantic equivalence between domains, it can indeed be mapped in this way. But that mapping is generally not a usable taxonomy for any of the domains concerned. Nor is there any advantage to trying to make it so. The point of domain taxonomies is to get the most reliable semantics from domain authors by using terms and structures they are familiar with.
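A minimal sketch of such a hub (all vocabularies invented): each domain keeps its own taxonomy, and the hub records only the equivalences that genuinely exist between them.

```python
# A hypothetical semantic exchange hub: each domain maps its own terms
# to shared hub concepts where an equivalence exists. Nobody authors in
# the hub vocabulary; it exists only to connect content across domains.
MEDICAL = {"myocardial-infarction": "heart-attack-event"}
MARKETING = {"heart attack": "heart-attack-event"}

def translate(term, source, target):
    """Translate a term from one domain to another via the hub concept."""
    concept = source.get(term)
    if concept is None:
        return None  # no recorded equivalence: the term stays local
    # Invert the target mapping to find the target domain's own term.
    for local_term, hub_concept in target.items():
        if hub_concept == concept:
            return local_term
    return None

print(translate("myocardial-infarction", MEDICAL, MARKETING))  # heart attack
```

Terms with no hub concept simply fail to translate, which is the correct behavior: the hub connects what is genuinely equivalent and stays silent about the rest.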

      The pity is that the experience of unusable enterprise taxonomies puts local domains off the idea of formally managing their semantics at all, which both robs them of automation and validation opportunities in the content they create, and makes their content inaccessible to any such translation hub should one be developed.

      And if forced to use one, their usage of the terms is often so eccentric as to be meaningless at the global level.

  9. Cruce Saunders 2014/10/22 at 17:59 #

    Enjoyed the article Mark, and embrace the approach that emphasises practical, rapid-shipping action ahead of ponderous pontificating.

    I’m seeing that identifying patterns and principles might be the answers to how one balances big picture structure and agile implementation.

    Re: “The big question is, can you be both structured and agile? I believe we can.” Yes. The chess player who thinks 30 moves ahead is a unicorn… but the one who sees the now and operates against a set of patterns and principles to steer towards an imagined end game is more mortal.

    So, if the principles are essential content structure and content relationships, authors can be taught to incorporate those patterns into production. The stated “why” provides a guideline, and intelligent humans make real-time decisions towards that end. The arc bends towards order around principles.

    If one builds a pattern for basic semantic markup into the CMS, the content is that much richer…but only if that’s important to that publisher. So patterns provide containers for actions, again moving the arc in the right direction.

    I do, however, caution against a rush to publish with neither patterns nor principles. In that case, things eventually get sorted out, but at the cost of unnecessary quality and maintenance pain and overhead. Like anything in life, it’s a balance, and the best solutions often integrate two poles of thinking.

    • Mark Baker 2014/10/23 at 08:01 #

      Thanks for the comment, Cruce.

      In a comment on LinkedIn, someone came up with the formulation “short term forward looking”. I think that is exactly what we need. Publishing without pattern or principle may be short term, but it is not forward looking.

      Trying to establish and teach a single global information model, on the other hand, seems forward looking (though I would argue this is an illusion — a bias to believe in a level of order and consistency that does not exist in the real world). But it is not actionable in the short term, and therefore proceeds without the benefit of incremental learning (and therefore tends to be based on old models and old practices, as Edwina notes).

      What is both short term and forward looking is to capture specific, actionable local semantics that improve content quality and process efficiency today. As I noted in my reply to John above, any accurately captured semantic is mappable to the global level as and when required, so acting locally on semantics does not compromise any more global use or initiative in the future, but it gets useful content out to consumers today.


  1. Authoring tools for startups — Guest post by Vinish Garg | I'd Rather Be Writing - 2014/12/13

    […] to dedicate exclusive time or resources for documentation. Mark Baker has an excellent post titled In Praise of Short Term Thinking that is quite relevant to startups’ documentation […]