Topics, Pages, Articles, and the Nature of Hypertext

What is the right word to describe a node of a hypertext?

What should we call the basic unit of information that we present to readers? Is it a page, a topic, or an article? (I’m going to take it as read that the answer is no longer “a book”. If you disagree, that’s what the comments are for.)

I raise this now because of Tom Johnson’s latest blog post, “DITA’s output does not require separation of tasks from concepts”, in which he makes the distinction between topics as building blocks and articles as finished output:

One reason so many people mistake the architecture of the source files with the architecture of the output files is because the term “topic” tends to get used for both situations. I prefer to call the output files “articles” rather than topics. An article might consist of several topics. Each of those topics might be of several different types: concept, task, or reference.

Both in my book and in this blog I have made the case for the word “topic” as the unit of output, and have criticized DITA for muddying the waters in just the way that Tom describes. During the development of the book, Tom was good enough to serve as a technical reviewer, and one of the questions he asked was, why don’t I just use the word article rather than topic? It’s a fair question, and I want to try to address it.

Certainly, Tom is not the only one to choose the word article for use in this context. In the book I quote Scott Nesbitt on the subject of Google’s article-based approach to documenting Chrome:

One of the first things that I noticed was the way in which the documentation was described. Help articles. Yes, articles and not documentation or user manual or online help. That’s a very subtle (or maybe not) distinction. But it’s a distinction that can be psychologically powerful.

I also make extensive references to Wikipedia in the book, and I use the word article frequently to describe Wikipedia entries, which I often use as examples of good Every Page is Page One topics.

So why not just adopt the word “article” and be done with it?

Or, for that matter, since my book title and my catchphrase are “Every Page is Page One”, why not “page”?

Why do I keep insisting on using “topic” as the right word to describe a unit of output for the reader?

If you are thinking “sheer cussedness” or “because you are a crank”, okay. I can live with that. I am a crank. Sometimes, at least. But whether or not the point is worth the struggle, there is a point here, and it has to do with the nature of hypertext.

First, why not “page”? The phrase “Every Page is Page One” was coined to describe a break with the past. Pages, at least in the paper sense, were inherently linear things, related to each other in linear ways. Common website designs, too, arrange pages hierarchically: there are home pages and “inside” pages, as if the site were a magazine.

What “Every Page is Page One” proclaims is that this idea of a linear and/or hierarchical relationship of pages, in which there is a particular page one at the head of the linear or hierarchical organization, is meaningless in the age of the Web, where people navigate by search or links and can land on any page regardless of its place in the sequence or hierarchy.

“Page” has too much of that sense of linearity or hierarchy about it for my ear. I feel the need for a word without those associations: thus “topic” and my frequent use of the phrase “Every Page is Page One topic”.

“Article” certainly has the sense of independence that I am looking for. Articles are independent items that (except by an accident of print technology, when printed in journals) have no linear relationship to other articles.

The problem, to my ear at least, is that “article” has too great a sense of independence. It implies a work that is entirely independent, without relationships to other works at all. When we use the words “collection of articles”, the implication is not that the articles were written to work together, but that they were written entirely independently and later collected by another hand.

That is not what we are talking about when we discuss something like a documentation set. There may indeed be articles written about a technology by many independent authors, but when we talk about “the documentation” we are talking about something more planned and organized than that.

The point of “Every Page is Page One” is not that the items of content have no relationship to each other, but that their relationships are neither linear nor hierarchical. They are organized bottom up, not top down. Every item is equally a place to start, and every item is a hub which connects to other items along lines of subject affinity. In short, they are a hypertext.

“Page” implies the wrong sort of relationships. “Article” implies too little relationship. Topic seems like the closest word for the thing I mean. I just can’t get comfortable with another.

And I think it is important to highlight this aspect of hypertext: the organization of topics in a hypertext, and the writing of hypertext topics. I am often accused of being too optimistic about search. The truth is quite different. I am not at all optimistic about the ability of search alone to deliver readers the content they need.

Unlike many others, however, I don’t believe that the solution to the inadequacies of search lies in creating other forms of organization or information retrieval. For reasons I explore in detail in the book, I believe search is here to stay as the dominant method by which people seek information.

Read the book for the full argument, but the essence is captured in a phrase coined by David Weinberger: people now prefer to “include it all and filter it afterwards”. The old model of information seeking was this: first find an authority, then ask your question. The new model is: first ask your question, then verify the authority of the answer. People search because they want to consult multiple sources with a single query. No navigation scheme that any one of those sources can come up with can do much to change that.

Readers are search dominant, and becoming more so. That is a fact from which we cannot escape. The right question to be asking is not “what can we offer them instead of search” but “what can we offer them after they have found us by search”.

This is why it matters that we write in an Every Page is Page One style, so that the page they land on when they search works for them. But because search is not always particularly accurate, it matters what happens when they are not on the right page, and what happens when they are on the right page, but they don’t have the right prerequisites.

Pages and articles will be found by search just as well as EPPO topics. But what if the reader lands on simply a page, or simply an article? If they are on the wrong article or don’t have the prerequisites to understand the current page, they have little recourse but to search again. If they have landed on a hypertext topic, however, they have other options. If the topic is a good hypertext node, a hub of its locale in subject space, full of rich links to topics on associated subjects, then the reader can proceed to find what they need by following accurate links, rather than submitting themselves to the vagaries of search once again.

Links and search are the bones and sinews of hypertext. Links are a hypertext tool for authors. Search is a hypertext tool for readers. Search alone is not enough. We need to create effective hypertexts that can deliver the reader the last mile to the content they really need.

So, the word for the unit we need to create (to the ear of this crank at least) is not page (too linear) nor article (too independent) but topic. But language is a democracy, and mine is just one vote. The comment box is below. Have at it.



29 Responses to Topics, Pages, Articles, and the Nature of Hypertext

  1. Jonatan Lundin 2014/01/07 at 12:49 #

    I find this interesting (I like it): “The old model of information seeking was this: first find an authority, then ask your question. The new model is: first ask your question, then verify the authority of the answer.”

    But I have two spontaneous doubts. Firstly, users are very often poor at expressing their information need (“ask your question”). A user might express their situation or question to an authority as: “I am looking for, what I think is called, an ‘icon’ which I shall click to do a task which I do not know the name of”. A user expressing a query to a search engine as “Product +icon” would get many many results.

    Secondly it is sometimes difficult to verify the authority (or judge the relevance) of the answer by examining the context the topic provides or by examining the hyperlinks.

    Sure, many users do start their information-seeking journey by searching on the web. But I argue that it is possible to design an information retrieval environment, as a complement to web searches, that helps the user in expressing the information need and at the same time judging the relevance (something I have described here:

    And a user assistance environment that mimics a human situation-assessment dialog can help the user express a query to search the web. A user who has defined the search situation (the situation assessment) should be able to ask the user assistance environment to perform a web search based on the current selection and return matching EPPO topics within the environment.

    • Mark Baker 2014/01/07 at 13:18 #

      Thanks for the comment, Jonatan,

      Yes, users are not particularly good at information seeking. At least, so it appears under laboratory observation by people who actually know where the information is. (Jakob Nielsen makes similar comments about the users he has observed.)

      I’m actually not so sure about this. I suspect that observer bias and the curse of knowledge is skewing the observations. Knowledge is incredibly hard to come by, and yet once the struggle is over for us, we seem to forget how hard it was, and become perplexed and frustrated when we see others struggle to attain what now seems so clear to us.

      Information foraging theory suggests that people’s information seeking behavior may indeed be optimal, however random it may look to the observer, because it is based on limited information.

      Similarly, the notion that people are poor at formulating their queries is probably the result of observer bias. What it really means is that the outsider does not express themselves like an insider does, which is certainly true. I believe that the user’s search is actually as well expressed as it could be given what they currently know. To search better they would have to know the information they are looking for. Everything is easier to find once you know what it looks like and where it lives.

      I think that the quest for more intelligent information systems continually shipwrecks on this very rock: the intelligence baked into the system is actually information that the insider has and the outsider does not. Classification-based schemes are a perfect example of this: they are useful only to the insiders who know the classification.

      The outsider is looking for the door that leads in, but without actually knowing what a door looks like. What seems like perfectly clear labeling of the door to those on the inside remains hard to decipher for those on the outside, because it is based on the insider’s knowledge.

      Search is the outsider’s tool, not because it is most accurate, but because it presents the broadest field in which to sniff for the scent of information.

      • Jonatan Lundin 2014/01/08 at 03:21 #

        Hi Mark,
        What you say certainly rings true, but I believe our views on the user’s ability to express their information need are somewhat different. That an outsider does not express themselves like an insider does is certainly very true. But you are implying that the outsider and the insider express the same subject differently while the expressions remain on the same “level” of relevance. This is not the case.

        Library and information science has for over 50 years discussed and debated many issues, among them the ability to express an information need. We know that humans are sometimes poor at expressing relevant queries representing an information need. This is known from many empirical studies that have used methods other than the laboratory observation Carroll and his colleagues used (thus I do not agree that observer bias accounts for what we know).

        The above statement is valid for subject searches but perhaps less so for known-item searches, like looking up the phone number of the nearby pizza restaurant. I would argue that users who have difficulty using a manual often find themselves performing a subject search, not a known-item search.

        Taylor described a four step model in 1962, which is still valid. The model says (simplified) that an information need arises out of an uncertainty and that a human can at some point compromise and express the information need as “rambling statements”. In the last step when an information system is used, the information need must be formalized to be suitable for the information system used. This is a complex process.

        Belkin formulated a hypothesis in 1982 which says that humans start to seek due to a recognized “anomalous state of knowledge”, and that humans are unable to precisely specify what is needed to resolve the anomaly. It is unrealistic to ask a user of an information retrieval system to say exactly what she needs to know, since it is just the lack of that knowledge that has brought him or her to the system in the first place.

        In other words, a user of a product who experiences a problem (uncertainty or anomaly) often does not know what information is needed; it is very difficult to put a label on something you do not know.

        A user tries to form a mental model that puts forward an explanation that makes sense. But the search terms a user must recall are (often) not relevant. The user searches Google for “the blue thing on top of the product”, where the expert user (the insider) calls it the “binary fault locator”.

        Search fails since the user is searching for something that is totally irrelevant. But the world is not black or white; sometimes (often) search is successful and sometimes not.

        What I propose as a solution is not to introduce a classification scheme that is logical and useful only to the ones who made it or who use it a lot. I agree that such schemes are doomed to fail.

        What I propose is an interface that mimics an expert-novice dialog. The manual assesses the situation through a dialog: it asks a number of questions, one at a time, and provides a number of options, from which the user selects the one that matches the current situation (thus the search-situation facets). The user becomes enlightened about the information need, since we know that recognition outperforms recall.
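
        A minimal sketch of such a facet-driven situation assessment. The facet names and options here are hypothetical; a real system would derive them from the product’s subject scheme:

```python
# Sketch of a situation-assessment dialog, offered as a complement to
# free-text search. All facet names and options below are invented.

FACETS = [
    ("product_part", ["control panel", "power unit", "display"]),
    ("user_goal", ["install", "configure", "troubleshoot"]),
    ("symptom", ["no power", "error message", "unexpected output"]),
]

def assess_situation(answers):
    """Walk the facets one at a time; `answers` maps a facet name to the
    chosen option. In an interactive system the user would pick from the
    listed options, relying on recognition rather than recall."""
    selection = {}
    for name, options in FACETS:
        choice = answers.get(name)
        if choice in options:  # accept only the options actually offered
            selection[name] = choice
    return selection

def to_query(selection):
    """Turn the assessed situation into a web-search query string."""
    return " ".join(selection.values())

# A user who cannot name the part can still recognize it among options.
picked = assess_situation({
    "product_part": "control panel",
    "user_goal": "troubleshoot",
    "symptom": "no power",
})
print(to_query(picked))  # control panel troubleshoot no power
```

        The point of the sketch is the recognition-over-recall design: the user never has to produce the insider’s vocabulary, only to recognize it.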

        • Mark Baker 2014/01/09 at 20:00 #

          “But you are implying that the outsider and the insider are expressing the same subject differently but yet the expressions are on the same “level” of relevance”

          That is not what I was intending to imply. Indeed, I think the insider and the outsider express the subject differently precisely because they are not on the same level of relevance. Each commonly has a different standard of relevance and is interested in different distinctions for different reasons.

          My comment about observer bias was not intended to question the impartiality of observation (though that is obviously a hugely difficult problem in these kinds of studies). Rather I was commenting on the scale — the decision about what constitutes “poor” performance.

          “Taylor described a four step model in 1962, which is still valid. The model says (simplified) that an information need arises out of an uncertainty and that a human can at some point compromise and express the information need as “rambling statements”. In the last step when an information system is used, the information need must be formalized to be suitable for the information system used. This is a complex process.”

          For the information systems of 1962, certainly. But what web search does is blow that complexity away. It allows the user to input “rambling statements” and it gives them immediate results, without the need to formalize their query for the information system. It is no longer a complex process. It is as simple a process as it is possible to imagine.

          And that is the key thing, I believe, which makes the search for “better” information systems futile. They all rely on a process of formalization for which users no longer have patience.

          The other element of this is scope. Web search has such a broad scope that you don’t even have to think about where you look for things. No matter how clever you make an individual information system, it will never have that scope, which means that not only is it harder to use, it is less likely to contain the information the user wants.

          Except in certain specific cases, this double problem of increased complexity and limited scope is going to make all such systems unattractive.

          If you could come up with a system that mimics the novice/expert dialog, that might indeed be attractive. But the thing about the novice/expert dialog is, first, that it often fails in real life (because of the curse of knowledge), and second, that it presupposes a very sophisticated level of AI that I don’t think exists anywhere today (and which would probably not require structured content if it did).

          Don’t get me wrong. I applaud the attempt. But I think web search is going to be the dominant form of information seeking for years to come.

  2. Marcia Riefer Johnston 2014/01/07 at 13:47 #

    I resonate with Tom’s point that it’s confusing to use “topic” to refer to both the building block and the thing assembled from the building blocks.

    I have no solution to offer, though, because building blocks can exist at many levels of granularity. An assembly of building blocks can itself become a building block, and on and on. Finding one generic term that covers all levels of assembled content in all contexts may be impossible. But I’m glad for the discussion.

    • Mark Baker 2014/01/07 at 14:01 #

      Thanks for the comment, Marcia.

      What you say about components of components is true of monoliths, but not of hypertexts. Relationships between nodes of a hypertext are all peer to peer. Part of the terminology problem arises because we are trying to comprehend two such radically different information structures with a single set of terminology.

      For components I prefer the term “component”, or the Information Mapping term, “block”. It occurs to me that the ambiguity that exists in DITA may come from trying to bridge the divide and create units that might be nodes in a hypertext or components in a monolith.

      Personally, though, I don’t think that is feasible. A hypertext node needs to do things that a book component does not, and vice-versa. At least, as Tom indicates, you would need to build a hypertext node from several components.

  3. Don Day 2014/01/07 at 15:16 #

    Fishes for dishes (or is that “horses for courses”?). I may prefer “artículos, páginas, y temas” (articles, pages, and topics). Or “stories, leaves, and essays.” Everyone has their preferences, and even within a language, one set of terms may not bridge subcultures or professions, or even time (vernacular usage is so in the immediate). In the mobile world, I suspect that many readers on mobile devices may even conflate the link with the content itself – after all, a link is subjectively not that different from an “expand” toggle or a swipe to see more of the same. And since we need to accommodate discussion about adaptive content as well, we may as well accept that all of these terms (page, article, or topic) are but expressions of what users will increasingly see as a spectrum of content, not just an endpoint.

    • Mark Baker 2014/01/09 at 22:14 #

      Thanks for the comment, Don.

      Indeed, and as I am fond of saying, all language is local. Local in space. Local in time. Too many concepts. Not enough words. About the best we can really hope for is to pin down some terms for purposes of an immediate discussion.

      A lot of the time we will end up talking at cross purposes because we understand something ever so slightly different by one term or another. (“Structured” is a prime example!)

      One might say of language what Churchill said of democracy: it’s the worst form of communication except for all the rest.

      • Don Day 2014/01/10 at 21:00 #

        It strikes me that the right word may be different between the author, the publishing practitioner, and the person reading it. In my web site publishing code, I refer to the node at the end of a reference as an endpoint, but the rendition of that resource (there’s a CS-rooted term) may differ depending on the content type (is it a calendar script, an API request, a link, or an editing request?). I use “endpoint()” consistently in the code because it is the one place from which all the various renditions are shunted into the pipeline for output. But the user will see a widget, a JSON or XML encoding of structured content, or the inline editor view of the content.

        The practice of using “topics” for DITA source chunks is deeply rooted and pervasive among authoring practitioners. Changing this practice won’t happen without much commotion, confusion, and the cost of redoing training materials, help docs, and the like.

        What I think Mark is asking is that we be discerning with our consuming audiences about how to refer to these various products as endpoints. To a PHP programmer, I might say “the rendered output of a calendar script,” but to an end user I’d simply say “the calendar widget.” When you are transforming a topic (be it DITA or EPPO) to a portion of HTML page content on the fly, again the right term depends on need to know.

        This is a particular issue for CSS names for the template spaces that embed database query results into a web page. In the massively popular WordPress world, these chunks are “posts” (or “pages” if written as static content), whereas newer, HTML5-based templates are using the new <main> or <a> (anchor) element to wrap the unique embedded content.

        On this, my point is that, no matter how you like or despise the term “article” for the essential content (and again, I differentiate the essence from the trappings that comprise the “page”), the popularity of HTML5 is likely to cause content engineers, at the least, to start using “article” more frequently to refer to that essence of content, the endpoint-become-deliverable. End users won’t see or care about that embedding distinction–to them, it’s all a web page that the URL points to (even though half of that adorning content might be ads and other dynamically selected material that will change over time or depending on reader).

        This discussion is probably most useful for making us aware of what TS Eliot called in “The Naming of Cats” that ineffable, effable, effanineffable, deep and inscrutable Singular Name.

        • Mark Baker 2014/01/11 at 01:01 #


          You make an excellent point. A thing tends to be called by a different name by everyone in the supply chain who handles it. This is natural because each person in the supply chain handles it in a different way, and groups it with other objects in a different way.

          So many of the attempts to standardize terminology come to grief over this very issue. Trying to force everyone to use one term — in the belief that it will eliminate confusion — often actually adds to the confusion, as the standardized term conflicts with the vocabulary of different groups who handle the object.

          Standardized vocabularies are highly valuable in certain circumstances. Standardized medical terminology saves lives. But doctors do not attempt to communicate with their patients in the same terms they use to communicate with their colleagues. To do so would probably cost lives by confusing patients.

          One place I worked had a policy that banned the use of “function” or “method” to describe a programming function. It always had to be “routine”. This caused no end of difficulty when the dev group started to introduce callback functions into the API. To use the term “callback routine” would have been confusing, and to use “routine” to mean “function” everywhere except where you said “callback function” would have been ridiculous.

          Vocabularies differ for a reason, and the job of professional communicators is to navigate the differences, not obliterate them. All language is local, and we all need to be multilingual, even in our native tongue.

          What this means, in effect, is that for the purposes of certain discussions you have no choice but to define certain terms specifically as you mean to use them for your immediate purpose, which is what I did in the book when I defined “Every Page is Page One topic” and specified in the glossary that “topic” in the book should be read as “Every Page is Page One topic” unless otherwise qualified (for instance, as a “building block topic”).

          DITA is, of course, entitled to use the word “topic” in its own way. Unfortunately, that way turns out to be ambiguous. But you are right that it is not likely to change now, which means the ambiguity will have to be highlighted at certain times, as Tom found it necessary to do in his blog post. Language tends to grow up around things as they develop, and terms with deep roots can often cause difficulties as the thing develops and diversifies. But terms with deep roots are also hard to root out. We live with ambiguity.

      • Don Day 2014/01/10 at 21:05 #

        And of course, “the new or element” would be “the new [main] or [anchor] element” if I had been mindful of angle bracket cleanup on comments.

  4. Alex Knappe 2014/01/08 at 05:52 #

    Why don’t you simply use synonyms here?
    Use the word ‘subject’.
    A subject can consist of one or more topics (and vice versa), while not being used in the structured terminology.
    A topic can also consist of one or more subjects, while the subject is still the smallest defined bit of information content.
    This makes it easier to distinguish one sense of “topic” from the other.
    While a subject can be as large as a book or as small as a single sentence, it can be self-contained or open – just as you need it.
    Every other expression, be it ‘page’ or ‘article’, already qualifies the content of the subject, while neither ‘topic’ nor ‘subject’ does that.

    • Mark Baker 2014/01/09 at 22:16 #

      Thanks for the comment, Alex.

      Trouble is, I need the word subject to refer to things in the real world that the content is about. I need it, for instance, to talk about creating subject affinities rather than creating links. I need it to make the key point that we should focus far less on the relationship between topics and far more on the relationship between subjects.

      • Alex Knappe 2014/01/10 at 05:03 #

        Hi Mark,
        the only other word I could think of is ‘yield’.
        The yield of a topic or subject matter is the essence of the content presented within. While ‘page’ or ‘article’ describe the shell around the content (at least for me they do), ‘yield’ describes the meat of the content.
        Thinking twice about it, you could also use ‘payload’.

        I think in tech comm it is essential to differentiate the outer shell of a piece of information from its contained subject matter. The discussions I had with my boss while trying to establish a new structure for the documents of one of our customers showed me that this is a major issue (in any language).
        The lesson was that, in contrast to DITA, “topic” should only be used for a general subject matter, and every other piece of information needs to be qualified by its type. So notes are defined as <note>, overviews are defined as <overview>, and so on; I think you get the idea.
        But topics are just that: containers for a specific topic each.

        • Mark Baker 2014/01/11 at 01:07 #

          Hmmm. Yield and payload sound rather explosive. They make sense in a technical context, for content engineers. They would mean nothing, though, for writers or readers.

          The problem, fundamentally, is that all the words that mean something to people already mean something.

  5. Mark Nathans 2014/01/09 at 00:29 #

    While this all may seem like semantic hair-splitting to some, I have also struggled to find the best way to describe the basic unit of information. It’s a worthy discussion because it is such a key concept when considering the best way to structure an organization’s content, yet we have no single universally accepted term for that unit.

    It’s worth remembering that the word article encompasses another more general meaning, as in an article of clothing for example. This is the sense in which it was used by the W3C to clarify the intended use of the new article tag in the HTML5 spec:

    “The article element represents a component of a page that consists of a self-contained composition in a document, page, application, or site and that is intended to be independently distributable or reusable, e.g. in syndication. This could be a forum post, a magazine or newspaper article, a blog entry, a user-submitted comment, an interactive widget or gadget, or any other independent item of content.”

    While this more general definition does emphasize an article’s autonomy, it doesn’t preclude its interrelatedness or reliance on other entities within the same information set. On the other hand, what we commonly refer to as articles have historically been found in magazines and newspapers, a context in which they can retain meaning and purpose within the larger whole (the publication) and share an overarching theme, unified style, or visual commonalities. So I am not sure “article” connotes quite the level of independence that you ascribe to it. Even so, we may need to broaden our notion of an “independent” article in an age where so much of the content we consume online is enhanced by numerous inline hyperlinks and almost always followed by a list of related stories.
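
    The W3C definition quoted above can be sketched in markup. This is only an illustration; the headings, link targets, and topic names below are invented:

```html
<!-- A self-contained composition that is nonetheless a hub of its
     locale: independently distributable, yet rich in links to
     related items. All titles and URLs here are hypothetical. -->
<article>
  <h1>Configuring the widget</h1>
  <p>To configure the widget, open the
     <a href="/topics/control-panel">control panel</a> and choose
     the profile you created during installation.</p>
  <aside>
    <h2>Related topics</h2>
    <ul>
      <li><a href="/topics/installing-the-widget">Installing the widget</a></li>
      <li><a href="/topics/widget-concepts">Widget concepts</a></li>
    </ul>
  </aside>
</article>
```

    On this reading, the element’s autonomy (“independently distributable or reusable”) coexists with the interrelatedness the links provide.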

    I too have encountered many people who are confused by the tech comm application of “topic” because they take that term to be a synonym for “subject”, a meaning that more commonly refers to a characteristic that a number of similar articles share, rather than the singular article itself. This misunderstanding is significant if we wish to broaden the notion of EPPO (as I believe we must) to structuring other non-linear, task-oriented content that lies outside the realm of technical information.

    • Mark Baker 2014/01/09 at 22:29 #

      Thanks for the comment, Mark.

      I actually see EPPO as something originating in the broader context, which I am trying to bring into tech comm. One of the complaints I get from tech comm folk is that many of the examples I use are not tech comm examples. Which is true, because so much of tech comm is stuck in the book paradigm.

      You are absolutely right about the lack of an agreed term for a unit of information. Part of that, I suspect, is that people coming from a book background often don’t see how different hypertext is. Books, and book-like things such as help systems and some web sites, are constructed as trees, with topics/articles hanging off them like leaves. Structure and relationships are established by the tree, not the leaves. Any linking in the leaves is essentially secondary, and is often discouraged.

      A hypertext, on the other hand, has no tree. It is a web formed by connections between topics/articles. Each topic/article is a node in the web, a hub of its local subject space. Structure and relationships are established by the links between topics/articles.

      While a content leaf and a content hub may be of similar length, they are very different beasts. Can we really have one word to describe them both? Page, I suppose, is one such word. But it is a word for the container, not the content.

      • Phonobarb Bambalam 2014/01/10 at 15:00 #

        Modelling topics (nodes) and hyperlinks and subject scheme maps (edges) using Graphviz:

        Books and DITA are dot. EPPO is neato.

        • Mark Baker 2014/01/11 at 01:11 #

          Thanks for the comment Phonobarb. It took me a minute to figure out that “dot” and “neato” are Graphviz terms, but I see what you mean.

          Are you currently using Graphviz to model content relationships?

          • PB 2014/01/11 at 03:03 #

            Yes. It scales up very well because you can keep adding nodes. I use different edge colours to signify different kinds of relationship (cross-reference for information/context, mandatory cross-reference, bibliographic reference, etc.) I recently modelled a large amount of my employer’s information set like this to (successfully) argue the case for a CMS.
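            For illustration, here is a minimal, hypothetical sketch of the kind of model I use, with topic names, link types, and colours invented for the example. The graph is emitted as Graphviz DOT text; rendering it with dot gives a ranked, book-like layout, while neato lays it out as an undirected web.

```python
# A minimal, hypothetical sketch of modelling topics as nodes and typed
# relationships as coloured edges, emitted as Graphviz DOT text.

# Edge colour signifies the kind of relationship.
EDGE_COLOURS = {
    "cross-reference": "blue",
    "mandatory": "red",
    "bibliographic": "grey",
}

def to_dot(topics, links):
    """Emit an undirected DOT graph from topic names and (src, dst, kind) links."""
    lines = ["graph content {"]
    lines += [f'    "{t}";' for t in topics]
    lines += [
        f'    "{src}" -- "{dst}" [color={EDGE_COLOURS[kind]}];'
        for src, dst, kind in links
    ]
    lines.append("}")
    return "\n".join(lines)

topics = ["Install", "Configure", "Troubleshoot", "Glossary"]
links = [
    ("Install", "Configure", "cross-reference"),
    ("Configure", "Troubleshoot", "mandatory"),
    ("Troubleshoot", "Glossary", "bibliographic"),
]
print(to_dot(topics, links))
```

            Saving the output to content.gv, the same graph can then be rendered under either layout engine, e.g. `dot -Tpng content.gv` or `neato -Tpng content.gv`.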

          • Mark Baker 2014/01/11 at 11:43 #

            Interesting. You should write that up somewhere. People are always interested in “how to convince the boss” stories. You would also have no trouble placing this as a conference presentation.

  6. Ray Gallon 2014/01/14 at 09:20 #

    Well – just to throw a monkey-wrench in the works, how about “document”? While this word certainly has print heritage, we got used to calling a video a document, a sound file a document, even a photo a document, as soon as multimedia on PCs got serious.

    So, given that we already have a heterogeneous, multi-media concept of “document” in existence, it seems to me to be the best alternative to the double use of “topic.”

    While we’re on the subject (oh, did I use that word?) of what we call things – I detest the use of “Map” in DITA. It’s highly inappropriate. A DITA “Map” is a “container” and that’s what it should be called.

    I think this is important because eventually the XML world is going to have to take output more seriously, and give us some easy to use publishing tools, and “Map” is a perfect way to talk about how we “map” DITA topics or documents to areas or fields of an output medium, whatever it might be.

    • Mark Baker 2014/01/14 at 11:45 #

      Thanks for the comment, Ray.

      Document, alas, has too wide a connotation, as it encompasses 500 page books as well as Every Page is Page One topics.

      The fundamental problem, of course, is that when you are looking for terminology to make a distinction that most people do not currently make, you are not going to find it. One of the things I wish I could convince the global taxonomy folks of is that vocabulary is highly fluid and is invariably shaped to fit the distinctions that people are interested in at a particular point in space and time. (A broad study of history and literature should be a firm prerequisite for anyone who wants to set up in the taxonomy business. It would teach them just how fluid words are.)

      You make an excellent point about map. I have spent so much time beefing about DITA’s use of topic that I have not given much thought to its use of map. But it is surely somewhat ironic that there is a longstanding XML standard called Topic Maps in which neither topic nor map means what it means in DITA.

      But then, my own argument above rebukes me. These are fluid terms and doubtless they seemed like the best fit for the distinctions that the DITA architects were trying to make when they invented it.

      As my father (a professor of English) told me once, the smallest unit of meaning is not the word but the sentence. In other words, it is only in a well constructed sentence that you can isolate the multitude of connotations of its nouns and verbs to accurately express the particular thought you have in mind.

      All language is local. (Which is why it is so important for an Every Page is Page One topic to establish its context. Only by doing so can it place its vocabulary in its proper locale.)

    • Don Day 2014/01/14 at 13:27 #

      Ray, I do not understand why you characterize a DITA map as a container, or why that is evil in all use cases. A DITA map is literally a formal tree graph (truly a “dot” in Graphviz as earlier noted) representing nodes and arcs and associated metadata. ‘map’ is a root node for XML well-formedness; an application’s use of that containing node does not need to interpret that processing event as a container. The specialized root of a map may represent the business rules for managing the use of that graph (e.g. bookmap), but even map specializations are fundamentally just pointers whose sequence, nesting, and meaning may or may not matter, depending on the query (for example, a selector widget in an editor showing an alphabetized list of all files referenced in a currently open bookmap). Rename ‘topicref’ to ‘a’ and change scope="external" to target="_new" and you will see how directly the renamed model applies to many kinds of resource or concept relationships on the Web and potentially beyond.
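      For readers unfamiliar with the format, a minimal DITA map looks something like this (the title and hrefs are hypothetical): each topicref is a pointer, and the nesting of the pointers forms the tree.

```xml
<map>
  <title>Widget Guide</title>
  <topicref href="installing.dita">
    <topicref href="prerequisites.dita"/>
    <topicref href="first-run.dita"/>
  </topicref>
  <topicref href="support.html" scope="external" format="html"/>
</map>
```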

      Mark has observed correctly that the chosen DITA terms are what they are: they are design points and usage that seemed right at the time. And he is aware that terminologists outside of the tech writing profession may well have issues with the choice of usage built into EPPO were it to be applied to their uses. I plead for arguments to stand on their respective merits wherever possible.

      • Mark Baker 2014/01/14 at 17:27 #

        Actually, Don, I think you are making Ray’s point for him. “formal tree graph” is language from mathematics and computer science. Unless they come from those domains, authors do not think that way. To them, describing a document as a container for building-block topics would be much more natural language. “formal tree graph” would make their eyes glaze over. Different vocabularies for different audiences.

        But, as always, it is more than a difference in vocabularies. The vocabularies are different because the two groups are interested in different sets of distinctions and relationships. There is no one-to-one mapping between their vocabularies.

        Indeed, it is something of a general problem with SGML and XML that they were designed by mathematicians and programmers, not by the people who were supposed to use them: writers. Their form and their terminology both come from the math/computer science domain and don’t make a lot of sense to most writers.

        This in no small part explains why getting people to write in XML is such a challenge, and why only full-time technical writers have proved capable of handling it in any numbers.

        Of course, this is a hard problem. SGML and XML exist to make content computable, so they need input from both domains: content and computation. But it was computation that dominated in their design and implementation, with the result that the writing side has never felt comfortable with them.

        The decision to model a document as a formal tree graph may have made perfect sense to mathematicians and computer scientists, but it is a model that has far more to do with solving a data representation problem than it has to do with how writers conceive of the documents they are creating. I promise you that writers never conceive of or plan their documents as formal tree graphs.

        That DITA further extends this concept of formal tree graph from the page to the document to the library is actually a significant part of my discomfort with it. One of the beauties of language is that its irregularities equip it so well to express the irregularities of the real world. One of the beauties of hypertext is that it allows writers to express irregular relationships between subjects that do not fit on a formal tree graph. (Graphing sentences never made much sense to most writers either.)

        A formal tree graph is a great data structure for reducing complex data to an easily computable form, but it is not a great structure for expressing the kinds of complex and multifaceted ideas and relationships that human language, and the documents written in it, have to express. It is not how most authors think. The tool is a poor fit for the task.

        • Don Day 2014/01/14 at 21:36 #

          I’m unconvinced by your argument, Mark. I agree that the nature of human expression does not align perfectly with the existing methods of computed data definition. In the HTML world, authors live with the imperfection of those content models, and get by with overlapping hierarchies thanks to browser lenience. The content engineers are the ones who get to figure out how to manage this squishy mess and wrap computational order back around it, as both DITA and EPPO do with their largely HTML-informed internal content. It’s about the only way to balance the tension given the tools we have to work with.

          And Ray said:
          A DITA “Map” is a “container” and that’s what it should be called.

          I completely disagree with that statement. The map element may be called anything but it is never just a “container” unless you need it to be. I think we are all highly defensive of the use cases we have in mind, like Humpty Dumpty (“When I use a word,” Humpty Dumpty said, in rather a scornful tone, “it means just what I choose it to mean — neither more nor less.”). You may call map anything – that’s just specialization. The problem, if any, is choosing a base name that does not imply only one named role. I feel that writers should not need to be exposed to architectural details anyway, which is why I also feel that the whole naming debate is sort of pointless. Let it go.

          Outside of academic research on SGML/XML and overlapping hierarchies, Web content practitioners and XML content practitioners are each limited to the tools in their respective toolboxes. Unless the great thinkers come up quickly with an intuitive and cuddly way to represent content that both authors and computers can embrace, we need to manage this problem space more by negotiation than by intimidation. Do you agree? I’m tired of feeling defensive all the time. I want to see us build something that will bridge the issues that we are all too aware of. When do we start looking ahead?

          • Mark Baker 2014/01/15 at 01:40 #

            Don, I’m quite sure that authors are not thinking in terms of overlapping hierarchies. In fact, authors generally think of documents as sequences, not hierarchies. This is how word processors and DTP applications have always treated text as well: as a sequence of objects. (Neither Word nor Frame even has such a simple bit of hierarchy as a list container.)

            Word does not say that you can only put a fourth-level heading “inside” a third-level heading. You can put one wherever you want. HTML is much the same. It really is very flat and very loose compared to any SGML/XML vocabulary I have ever seen, be it DocBook, DITA, or SPFE’s EPPO-simple schema. (EPPO itself is not a content model but a design pattern; it has no content model.)

            Markup mavens tend to see heading levels as steps in a hierarchy. Writers treat them more like road signs. Big cities get big signs because they are big. Towns get medium signs. Historical plaques get tiny signs. The signs are placed wherever on the road the cities, towns, or plaques occur. An historical plaque in a big city gets a tiny sign, even if there is no medium sized town sign between it and the big city sign. The size of the sign is proportional to the size of the thing it announces, not to its place in a hierarchy of signs.

            Authors will often choose an H4 heading because the amount of text under it is very short and they don’t want a page cluttered with big headings over short paragraphs. People who write HTML in browser-based editors think this way. They are creating sequences, not hierarchies.

            This was actually my first objection when SGML was first explained to me: documents are not hierarchies, they are sequences. I have learned to accommodate myself to hierarchies, but it’s an accommodation.

            Is there some hard and fast reason why we can’t attach semantics to sequences (and perhaps derive hierarchy behind the scenes were appropriate) rather than forcing writers to think hierarchically? Is there some necessary reason why the blank space between the end of a list and the start of a section heading has to contain five or six distinct and invisible container boundaries (as it does in XML), which the writer has to navigate correctly in order to continue the list or insert a paragraph, or maybe start a new section?
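            Deriving hierarchy behind the scenes from a flat sequence is, mechanically at least, straightforward. A minimal sketch (the heading levels and titles below are hypothetical) nests a sequence of (level, title) pairs using a stack, tolerating the “road-sign” sequences writers actually produce, including skipped levels:

```python
# A minimal sketch of deriving a hierarchy behind the scenes from the flat
# sequence of headings a writer actually produces. Levels and titles are
# hypothetical; a real system would attach body content to each node too.

def nest(headings):
    """Turn a flat sequence of (level, title) pairs into a tree."""
    root = {"title": None, "children": []}
    stack = [(0, root)]  # (level, node), deepest open section last
    for level, title in headings:
        node = {"title": title, "children": []}
        # Pop back to the nearest shallower heading; this tolerates
        # sequences that skip levels (e.g. an H1 followed by an H4).
        while stack[-1][0] >= level:
            stack.pop()
        stack[-1][1]["children"].append(node)
        stack.append((level, node))
    return root

doc = [(1, "Big City"), (4, "Historical Plaque"), (2, "Town")]
tree = nest(doc)
```

            With the input above, the H4 “Historical Plaque” becomes a child of “Big City” even though no intervening H2 or H3 exists, which is exactly how the road-sign intuition works.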

            You say, “I feel that writers should not need to be exposed to architectural details anyway,” and I agree with you wholeheartedly. The problem is, we have never found a way to implement that successfully in XML-based systems. You call a map a map because that is how it is implemented. Ray (I take it) wants to call it a container because that is the function it performs for him as a writer. The reason you are arguing about it is that the implementation sticks through into the application domain, just as the cascade of XML end tags sticks through into the application domain problem of adding something between a list and a section title.

            And when this occurs, the name that makes sense in the implementation domain does not make sense in what is (as you point out, in the case of maps) only one of the potential application domains. The name that makes sense in that particular application domain does not make sense in the implementation domain.

            (For anyone other than me, Ray, and Don who is still following this conversation: this is why technical communication is hard, because the same vocabulary does not work across the different domains, but the different domains are often not completely hidden from each other.)

            Structured writing has made great strides. DITA has tipped, and it will be the way many large and medium technology companies do technical documentation for quite a while to come. (And I am going to be spending a lot of time helping people do EPPO in DITA.) DITA has done far more to spread structured writing practice than anything that came before it, and you can take much of the credit for that.

            But you and I both feel, I think, that structured writing is still too hard for it to go where we both want it to go. The great thinkers behind every move forward come from the people who recognize and care about the problem. I think every part of this needs to be made simpler, and I think the history of technology shows that radical simplification is both possible and powerful. So let’s be the great thinkers we are waiting for. I have some ideas I am not ready to publish, but would love to discuss.

  7. Ray Gallon 2014/01/15 at 08:47 #

    @Don, @Mark, thanks for having so much to say in response to my little comment 😉

    I’d love to add my own response, but I don’t like redundancy, and Mark has said it all, from my POV.

    Mark, I especially appreciate your observation, “this is why technical communication is hard, because the same vocabulary does not work across the different domains, but the different domains are often not completely hidden from each other.”

    This is absolutely true, and much of our work consists of translating from one application domain to another in a way that remains lucid, and valid, in both.

    It would be a correct assessment to say that I come more from the editorial side than the technical side of our profession, and my bias is always toward the editorial. Joe Gollner actually remarked once in a workshop that the two of us together made up one compleat technical communicator 😉

    I do think a lot like a writer, even though I haven’t called myself that for some time. And I do think about the writers I have to support, when I’m creating an architecture. I probably don’t share the notion that all structure should be hidden from them (and as has been frequently observed, it’s probably not possible, anyway). I just think the structure needs to be expressed in terms they can understand, and they should not have to deal with details of structure that are not semantic – in the sense of the content they are developing.

    What makes DITA still over-complex for many to learn is that there’s a lot of detail that might be considered irrelevant for an author but that forms part of out-of-the-box DITA. This is why constrained or specialized DITA, and features such as the form-based authoring that oXygen permits, are so important – folks like us need to create those so that writers can just write. That doesn’t mean they don’t need to think about structure in order to write – just that the structure they need to think about is related to what they are writing, not an underlying data model.

    • Mark Baker 2014/01/15 at 11:37 #

      Thanks Ray,

      I very much agree with: “I just think the structure needs to be expressed in terms they can understand, and they should not have to deal with details of structure that are not semantic – in the sense of the content they are developing.”

      Indeed, this is one of the stated principles of the SPFE architecture: authors should not have to deal with semantics in the application domain.

      One of the reasons that it is taking so long for me to get the next beta of the SPFE Open Toolkit finished (other than the fact that I spent most of last year writing a book) is that in order to remove all application domain semantics from the author’s task, the build system has to be able to derive application domain semantics from the subject domain semantics provided by the author. And while that is easy enough to do in individual cases, generalizing it and making it easy to specify is a more complex problem.

      One of the things I have come to realize is that the problem is not just to make structured authoring easier. We also need to make it easier to define structures and to process them. If you want to derive application semantics from subject-domain semantics, you need the subject domain semantics to be highly specific, and that means custom schemas for specific subject domains.

      Today, creating custom schemas is too hard, so people turn to generic schemas instead. But generic schemas have to include application domain semantics because they don’t capture enough subject domain semantics to be able to derive the application semantics behind the scenes. This then makes the authoring task more difficult and thus limits the potential authoring pool.
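      As a deliberately toy illustration of the kind of derivation I mean (the element names and rendering rules below are hypothetical, not SPFE’s actual mechanism): if authors mark up what things are in the subject domain, the build system can decide how to present them in the application domain.

```python
# A toy sketch: subject-domain markup (what a thing IS) is translated by a
# rule table into application-domain presentation (how it should LOOK).
# Element names and rendering rules are hypothetical, chosen for illustration.

import re

# Rule table: subject-domain element -> application-domain rendering.
RULES = {
    "command": lambda text: f"<code><b>{text}</b></code>",
    "filename": lambda text: f"<i>{text}</i>",
}

def derive(source):
    """Replace subject-domain tags with derived application-domain markup."""
    def render(match):
        element, text = match.group(1), match.group(2)
        return RULES[element](text)
    return re.sub(r"<(command|filename)>(.*?)</\1>", render, source)

html = derive("Run <command>spfe build</command> on <filename>map.xml</filename>.")
```

      The author only ever says “this is a command” or “this is a filename”; the decision to render one bold and monospaced and the other in italics lives entirely in the build rules, where it can be changed without touching the content.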

      Making structured writing easier, therefore, means making the specification of structure and processing easier. That is where most of my focus is at the moment. I’m not ready to publish any of the stuff I am working on yet, but I would love to bounce it off anyone who is interested in the problem.