No “Right Place” for Content

By Mark Baker | 2014/03/31

Summary: Today there is no “right place” to put information that will ensure that readers find it. Instead, we have to focus on making sure our content gets filtered in to the reader’s search. Every Page is Page One information design and bottom-up information architectures are key to achieving this. 

The “right place” for content

Traditionally, one of the key concerns for both the writer and the information seeker was to determine the “right place” to put content and to look for content. The reader had to look in the “right place” if they expected to find relevant content. The writer had to put their content in the “right place” if they expected content to be found. And, of course, much frustration and disappointment occurred if the reader’s idea of the “right place” was not the same as the writer’s idea of the “right place”.

Defining the “right place”

In an attempt to work around these problems, several attempts have been made to standardize the definition of “the right place”. Library cataloging systems, such as the Dewey Decimal or Library of Congress systems, were devised to determine the “right place” to shelve a book in a library. But anyone with any experience of libraries that used these systems knows that books on a particular subject of interest are often scattered across different parts of the cataloging system. It was seldom possible to go to one shelf and find all the books you needed stacked side by side.

Content at the intersection of diverse subjects

This is partly because cataloging books is hard, and librarians sometimes make mistakes, but, more basically, it is because books generally treat multiple subjects, and the intersections between them, and those subjects may lie in very different parts of the cataloging scheme. The person cataloging the book then has to decide what the “primary” subject of a given book is and shelve it appropriately, breaking its connection to all of its secondary subjects. The subject index in a card catalog helped people find books according to their secondary subjects (to a limited extent), but it also sent them running all over the library looking for books that were in the “wrong place” according to the priority of the reader’s interests.

For instance, I have a book on my shelves by Lynn White Jr. entitled Medieval Religion and Technology. (Because I have an MA in the History of Technology, that’s why.) Where to shelve this title? Even I can’t decide in my modest collection. It is a book at an intersection point of major ideas. As the jacket blurb notes, “During the Middle Ages, values and the motivations springing from them—even those underlying many activities that to us today seem purely secular—were often expressed in religious presuppositions.” Thus part of the reason the book defies our normal classification schemes is that it deals with the intersection of ideas that we now consider entirely separate.

Doubtless a professional librarian could tell us how the difficulty should be resolved for cataloging purposes, but that hardly resolves the problem for the reader who is trying to locate information on the influence of religion on the development of technology. Without the training the librarian receives, they are not going to know where to look for books on this subject.

The manual is no longer the “right place”

For conventional tech comm, of course, the “right place” was the manual that shipped with the product. The “right place” problem was thus a local one — determining where in the manual, or where in the doc set or help system, was the “right place” for a particular piece of information. Attempts to standardize the titles of manuals, such as “User’s Guide”, “Administrator’s Guide”, and “Reference Manual”, were attempts to define the “right place” for both writers and readers. Today, the division of content into concept, task, and reference (with all its significant faults) is again an attempt at defining “right places”.

In traditional practice, a documentation plan would lay out a set of manuals and a table of contents for each that defined the “right place” for the writer. However, we know very well that this seldom worked as the “right place” for the reader. And today, for numerous reasons (which I lay out in detail in my book), people increasingly prefer to use the Web, or at least the company website, to look for information. Thus the problem of defining the “right place” for content moves to the Web sphere and into the realm of information architecture.

Information architecture and the “right place”

In contemporary information architecture, the definition of “right place” is generally done using taxonomies. Taxonomies, like card catalogs before them, define the “right place” to put, and to look for, information by tagging the content with attributes that specify its subject matter. Online, the digital environment enables us to offer the reader more ways to locate the right place for content. Faceted navigation, for instance, allows the reader (rather than the writer) to designate which subjects are primary and which secondary for defining the “right place” to look for content.
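To make the mechanics concrete, here is a minimal sketch of faceted filtering. Nothing in it is drawn from a real system; the catalog, facet names, and values are all invented for illustration:

```python
# A toy catalog. Each item carries facet tags; no facet is designated
# "primary" by the writer.
catalog = [
    {"title": "Medieval Religion and Technology",
     "facets": {"subject": {"religion", "technology"},
                "period": {"medieval"}}},
    {"title": "A History of the Steam Engine",
     "facets": {"subject": {"technology"},
                "period": {"industrial"}}},
]

def facet_filter(items, **selected):
    """Keep only items matching every facet value the reader selected."""
    return [item for item in items
            if all(value in item["facets"].get(facet, set())
                   for facet, value in selected.items())]

# The reader, not the writer, decides that "religion" is the primary facet.
hits = facet_filter(catalog, subject="religion")
```

The point of the sketch is that priority lives in the reader’s query, not in the writer’s arrangement: the same catalog answers a query on subject or on period equally well.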

Taxonomies not shared by readers and writers

But while such systems are often better than the chaos that they replaced, we are still a long way from solving the “right place” problem. Taxonomies imply a cultural understanding of a subject area which may not be shared by writers and readers. Even with the improved navigation they offer, therefore, they still often fail to locate content in what the reader considers to be the “right place” — something that is often highly individual to the reader’s immediate concerns as well as their general background.

Standard vs. local taxonomies

One of the questions that has to be addressed when designing the taxonomy of a site is whether to use a standard taxonomy or a local one. Local taxonomies can make more sense to both the writer and the reader, because they deal with the particular concerns of a particular product or trade. They can be smaller, more specific, and easier to manage. Standard taxonomies remove the need to design your own, and may already be familiar to your audience, but they may not describe your particular subject area or business issues as well as a local taxonomy. Particularly, they may not make distinctions that matter in your area, and may not use the terminology your readers are used to.
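As a rough illustration of what “local” means in practice, the sketch below validates tags against a small, product-specific vocabulary. The facets and terms are invented, not taken from any real product:

```python
# A hypothetical local taxonomy: a small controlled vocabulary specific
# to one product, plausible for both writers and readers to learn.
LOCAL_TAXONOMY = {
    "task": {"install", "configure", "troubleshoot"},
    "component": {"server", "client", "cli"},
}

def tag(page, facet, term):
    """Tag a page, rejecting terms outside the local vocabulary."""
    if term not in LOCAL_TAXONOMY.get(facet, set()):
        raise ValueError(f"{term!r} is not in the {facet!r} vocabulary")
    page.setdefault("tags", {}).setdefault(facet, set()).add(term)

page = {"title": "Setting the server timeout"}
tag(page, "task", "configure")
tag(page, "component", "server")
```

A standard taxonomy would replace the local vocabulary with a shared external one: less to design, and perhaps already familiar, but less likely to make the distinctions this product’s readers need.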

The collapse of the social contract of “right places”

Implicit in all of this is a kind of social contract between writers and readers. By this contract, it is the writer’s responsibility to put content in the “right place” and the reader’s responsibility to look for it in the “right place”. This implies that it is part of the reader’s responsibilities to educate themselves about the “right place” — to know the global taxonomy you are using, or learn your local one. This responsibility of the reader to know (or to learn) the “right place” to look for content is so culturally ingrained that we seldom question it, or even include it in our assumptions about our audience. We assume the responsibility exists, and we assume the reader accepts it.

The problem is, the social contract has never worked very well, and readers are constantly frustrated and constantly complain that writers have not put information in the right place. We know from the study of tech support call patterns that many of the things that people complain they cannot find in the manual are in fact in there. The reader was simply looking in the “wrong place”. The social contract of “right places” breaks down on a regular basis.

Is standardization the answer?

Who is to blame? Are readers not living up to their responsibilities to learn the “right place” to look? Are writers failing to put content in the “right place”? Will greater standardization help, as many believe? I don’t think so. As noted above, local taxonomies are smaller and more specific to individual needs, making them simpler and more precise. They fall down because they have to be learned to be used. Standardized taxonomies are generally more complex and less precise, which means that there is more opportunity for writer and reader to come to different conclusions about the “right place” to put something within the taxonomy. And the standard taxonomies also have to be learned — something which many readers will not have done, and which is a major chore that most will not be willing to undertake.

All definitions of the “right place” are artificial

I think the failure of the social contract about the “right place” for content is more fundamental than this. It fails, ultimately, because there is no right place for content. Any definition of a “right place” is artificial: its rightness comes not from nature, but from the fiat of the person who proposes it. Thus it works only within the tight circle of people who accept and learn it.

The general reading public, in the meantime, has largely abandoned the concept of a “right place” for most of the information they seek. (This is an ongoing transition, and different people are at different stages of abandonment.) Rather than ask what the right place is to look for information, they either simply search for it (via Google, Bing, Yahoo, etc.) or they try to find the right people to ask (via Facebook, Twitter, StackOverflow, etc.).

How to ask, not where to look

Search and social skills are thus replacing traditional research and cataloging skills; how to ask is replacing where to look as the primary definition of information-finding ability.

This is what it means to say (as David Weinberger does) that Everything is Miscellaneous. It is not that things can’t be organized — you can organize anything as long as you make up a suitable principle of organization. The problem is that, for many things, there is no natural “rightness” about such principles (or about how to apply them). There is no “right place” to put things that every writer and every reader can naturally and easily agree on. Thus, despite any overarching principle of organization that may be applied to the things organized, they are effectively miscellaneous to those unfamiliar with the principle of organization.

Every Page is Page One

In this world of effectively miscellaneous content, every page is page one. “Every page is page one” is first and foremost a statement of fact about how people access content today. Secondarily, it is a design pattern for creating content that works well when people find content via search and social curation, or any other method of random access.

In a world in which there is no “right place” for content, the reader’s approach to finding content shifts from trying to find the right place to (in David Weinberger’s phrase again) including it all and filtering it afterward. The writer’s concern therefore must shift from placing the content in the “right place” to making sure that it gets filtered in appropriately when the reader’s filter is applied.

Bottom-up information architecture

In this world also, we have to pay far greater attention to bottom-up information architecture. Top-down information architecture is all about defining the “right place” for content and ensuring content is put in the “right place”. But with readers increasingly ignoring the concept of “right place” and going straight to search, information architecture has to concern itself more with what happens next, after a reader lands on an individual page.

Bottom-up information architecture is all about constructing the individual page or topic so that the reader can recognize and follow the information scent they are seeking, and about providing local navigation along lines of subject affinity to the related content they may want to consult next.
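One simple way to generate that local navigation, sketched below with invented page data, is to rank other topics by how many subject terms they share with the current one — a crude stand-in for subject affinity:

```python
def related_links(page, all_pages, min_shared=1):
    """Rank other pages by the number of subject terms shared with `page`."""
    scored = []
    for other in all_pages:
        if other["title"] == page["title"]:
            continue  # don't link a page to itself
        shared = page["subjects"] & other["subjects"]
        if len(shared) >= min_shared:
            scored.append((len(shared), other["title"]))
    # strongest affinity first; ties keep their scan order (sorted is stable)
    return [title for _, title in sorted(scored, key=lambda s: -s[0])]

pages = [
    {"title": "Configuring TLS", "subjects": {"security", "configuration"}},
    {"title": "Hardening Checklist", "subjects": {"security", "deployment"}},
    {"title": "Release Notes", "subjects": {"deployment"}},
]
links = related_links(pages[0], pages)
```

A real system would weight terms and curate the results, but even this crude overlap gives each page a set of outbound links along subject lines, rather than relying on a single table of contents.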

In a world in which there is no “right place” for information, we have to start paying far more attention to these things than we do at present. To begin exploring what this means for your content and your content creation process, contact me and let’s talk.

21 thoughts on “No ‘Right Place’ for Content”

  1. Larry Kunz

    Good analysis, Mark. Would you agree that the very notion of “place” has become outmoded? I used to get a book’s Dewey Decimal number from the card catalog and then find it in the physical stacks. Now I go online and search. If the information architect has created a path to the content that aligns with my expectations, it doesn’t matter what server contains the content, where the content exists within the server’s file structure, or even where the server is physically located.

    So not only is there no “right place,” it seems like nonsense even to talk about “place.”

    1. Alex Knappe

      Hi Larry,
      I would like to object to that notion of not talking about places anymore.
      There are definitely places – even in the Google era.
      While our search behavior is usually somewhat erratic, we usually tend to hold our noses in the wind and sniff for information in the general surroundings (aka Google). Once we have picked up the information scent, we tend to find places where the information we seek is aggregated into larger densities.
      Those are the places where we will then continue to sniff around for a while, either finding what we seek or moving away to seek out another place where the information scent seems to be denser than in other areas of the web.
      Those aggregation sites are the bookshelves and libraries of former times.
      And in objection to Mark’s post, those sites are also clearly the right place to put information – in a macro-management sense.
      Micro management of content on the other hand is another beast to tame.

      1. Mark Baker Post author

        Thanks for the comment Alex.

        Certainly information clumps on the Web. But are these clumps really places? Yes, there may be a clump of related content that happens to reside on one site, but there are also clumps that reside on many different sites. The #techcomm Twitter hash tag clumps references to diverse blogs and articles on technical communication, but does that make Twitter a place or a filter?

        More importantly, these clumps are dynamic in nature. One page of one site may belong to many different clumps, and people searching Google often create clumps dynamically on the fly. Dynamic semantic clustering is one of the key operational features of the Web. Clumps are formed far more by readers summoning and filtering dynamically than by writers co-locating statically.

        Certainly there are places like StackOverflow that are great for asking certain kinds of questions. But these tend to be communities first and foremost. They are places to ask questions of people. If you want to find an existing answer to a programming problem, it makes far more sense to Google for the answer than to attempt to navigate the hierarchy of Stack Overflow, since Google will aggregate answers from many different sites.

        And StackOverflow itself works primarily as a filter, using voting to surface the most interesting questions and the most valuable answers.

        That said, of course, we, as information providers, have a professional and commercial interest in keeping readers in our content as long as possible, so we do indeed want them to keep sniffing around our content. This is why I think we have to focus far more on bottom-up information architecture. Because as long as people are arriving from Google or Twitter, it is Google or Twitter they are sniffing around, not our site.

    2. Mark Baker Post author

      Thanks for the comment, Larry. I do agree. For instance, we could note how different the notion of an address is online compared to the physical world. In the physical world, an address is a place, and you can go to it, and doing so takes time and energy. Online, an address is a summons which calls content to you, no matter where you are. To address content online is thus far more like addressing a person than addressing a building.

      As you say, we often don’t know where our content is physically — it’s in some data center. Nor do the people who run the data center. Indeed, our data is often spread across many drives, or even many data centers, to guard against loss. There is a reason the cloud is called a cloud.

      That said, there is still a kind of virtual location that we create on screen to help analog creatures navigate a digital world. That approach has dominated early attempts at online publishing. But even this virtualization of place is breaking down as we become digital natives (or, at least, digital landed immigrants), increasingly accustomed to summoning rather than seeking.

      Increasingly, therefore, what matters is not to put things where people might seek them, but to ensure that they come when someone summons them.

  2. Jonatan Lundin

    True; the way we organize the world is subjective. How I prefer to organize content differs from your preference. Thus, I may have difficulty understanding your way, since I interpret it from my own viewpoint. Thus, there is no right place for content.

    The ancient Greeks thought that the Oracle of Delphi spoke on behalf of the gods. If we were to organize content in that time, maybe we could have consulted the oracle to get the “right place”. Unfortunately, the oracle is not around anymore.

    But the design of a bottom-up information architecture, where taxonomies are used to provide local navigation along lines of subject affinity, is also a subjective organization. When designing, we cannot consciously ignore our preferences, experiences, beliefs, etc. These preferences, experiences, and beliefs “talk” to us from the artifacts we design.

    I agree with the underlying concept in an EPPO design (although we have different views on what constitutes a page – I say it is an answer to a user question). And that most users prefer and start from a keyword search. But to me, we need to give the user additional search aids besides the pure text-search algorithm.

    Filters are such additional aids. Taxonomies are a type of filter. The problem is that any taxonomy is a subjective organizational scheme which I – as a user – have to learn in some respect. So I believe that, to find an answer – the page – the user must sometimes interact with a filter, and thus try to interpret it.

    So my conclusion so far is that we need to find the most efficient way a human interprets a filter. To me, a human dialog is the most efficient way. When I ask you something (= I type keywords in Google), and you do not understand what I am asking (= I get millions of hits), you ask me questions back and you may provide examples such as “What type of car are you looking for – do you mean Mercedes, Volvo, Saab or Ford?”

    This type of dialog is full of subjective views on how the world is organized – thus full of “taxonomies” in some sense, which we can mimic in user assistance. Humans seem not to have much of a problem “filtering” their way through such a dialog. The context in which the dialog happens is probably of much help.

    1. Mark Baker Post author

      Thanks for the comment, Jonatan. I agree we need to give people something other than search. Thus the importance I place on linking, and linking in a disciplined and systematic way. (In some sense, I agree, all design is subjective, but there is a huge difference between individual authors linking on a whim and links generated based on agreed categories and terms defined by taxonomies.)

      I don’t agree that taxonomies are filters. Taxonomies are simply namespaces. As such, they are useful for defining filters, but filters are specific mechanisms that have to be engineered and used. As I have noted before, the Web can usefully be thought of not as a giant collection of content but as a giant collection of filters. Search is a filter. Links are filters. Twitter is a filter. Facebook is a filter. Amazon is a filter. When people use the Web, they are using the filters it provides to summon content that meets their needs.

      There is certainly a place for providing additional local filters. But local filters have two major hurdles to overcome. First, people have to be willing to learn to use them. Evidence shows that people are increasingly unwilling to navigate sites: they go straight to search. People use the tools they already know how to use — even if they know there are better tools — because they don’t want to bother learning the other tools.

      Secondly, local filters suffer from lack of scope. People prefer — in Weinberger’s words — to include it all, filter it afterward. I don’t want to spend time filtering one particular site if I can filter the whole Web.

      The idea of a filter that works like a conversation is certainly an intuitively appealing idea — in some sense, it is the Turing test itself — but such interfaces don’t seem to have ever gained much traction. (The attempt to get people to use natural language phone tree systems proved to be a bust.)

      For these reasons, I think it is far more valuable to work on creating content that gets filtered in by the filters that people already use, rather than trying to get them to use our local filters. And if you do create a local filter, I think it is imperative, in the general case at least, that the content it sits over should also be accessible to the Web’s filters, should work as page one for the reader, and should have a robust bottom-up information architecture so that the reader can follow an information scent when they pick one up.

  3. Chris Despopoulos

    What I like about these posts is that I’m slowly getting the EPPO idea. Not that I’m falling for it 100%, but I think I’m getting it. 🙂

    I think I *can* say there’s a concept of “right place” for information… As local as possible. Of course, locality is a hard one to pin down these days, but we have a few indicators:
    * Embedded documentation — document the knob on the knob itself
    * Mobile devices — Google Glass being the acme at the moment
    * Adaptive content — varies depending on context and/or device

    From here the arguments can ensue… Is EPPO the best approach to address the emerging notions of locality? Is there a hybrid approach? Does locality of this sort give us or yield to other “best place” indicators — content models? Are these necessarily static or mutable? Can people search for content models as easily as they can for content topics? Assemble content models on the fly? Save them for later? Publish them?

    Here’s what I’m not getting… Why can’t a page one page be a member of any number of content models, and carry that membership with it? Why would I not be as interested in searching for adequate content models as I am in searching for an adequate topic? Are people abandoning “right place” models because they don’t work at a fundamental level? Or is it because the current concept of “place” is paper-based?

    Finally, I reject the idea that “right-place” modeling never worked. That’s like saying a travois never worked because we have trucks now. Our technology has outstripped our capacity to apply it — nothing new there. Even today we bring the travois onto the super-highway in the form of 18-wheelers… What will we do with content?

    No question, we’re coming into interesting times for the history of text…

    1. Mark Baker Post author

      Thanks for the comment, Chris.

      Yes, in a sense, the right place model did work, because it was better than chaos — in fact, a lot better. Our idea of what “works” is often comparative rather than absolute. Things work until something better comes along, and then they don’t work anymore. Getting your water from a well by letting down a bucket on a rope works, but it does not “work” for most city dwellers today.

      By content models, I take it you mean systems for organizing groups of content objects. (I say this because to me, a content model is a model or template for an individual item of content — which is perhaps a reflection of the bottom-up vs. top-down approach to information architecture.)

      In that sense, I would say that a piece of content can be part of any number of what I call “semantic clusters”. Semantic clusters can be formed by authors and expressed top-down via TOCs or bottom up via linking. They can be created dynamically by readers when they do a search. They can be created consciously by content curators, or unconsciously by crowds of people by the way they tag content, tweet about it, or simply visit it.

      In the physical world, one form of clustering excludes all others, because clustering is done by putting things in the “right place” according to some organizational principle. In the digital world, one form of clustering does not exclude any other form of clustering, either static or dynamic. Clusters are virtual and often dynamic. They are created not by putting things in the “right place” but by calling them by their right name. As noted above, we do not seek, we summon.

      This notion of summoning content does, however, have a profound effect on how content can be delivered into specific real-world locations. All of the location based services that we enjoy on our mobile devices rely on the fact that users don’t have to go to the “right place” to find content, but rather, the content can be brought to them in their current location by summoning it with the right incantation.

      Thus “as local as possible” is actually not about content having a place (which we have to go to) but about content coming to us, to the place we are located, when we call for it.

      And yes, I believe EPPO is the best approach to create content that can be summoned in this manner. Content that is summoned cannot assume its context from the location in which it resides (as typical “located” content does) because it will not be viewed in that context. The reader does not come to the content, and does not learn its context on the journey. The content comes to the reader, naked and alone. It is always page one.

      1. Chris Despopoulos

        Awesome… I can easily go with semantic cluster. Now I’m starting to think what makes me scratch my head is that (perhaps) I’m considering a page to be a semantic cluster as well — one that should have as much potential for dynamic assembly as any other semantic cluster. Before anybody shrugs that off as academic clap-trap, I’ll say that I’m currently working with assembling pages with content that comes in live from the product state. It’s a short step to use that state information to trigger affinities between predefined page subsets (what DO we call these units of information?) and assemble a page out of that.

        I will say that there still is a static location when you’re documenting a specific knob. You put the content on or in that knob. Maybe you summon the content to the knob, but we can low-tech it and hard-code the content with the knob. That’s because the GUI is as much a page as any other “page” construct. So static embedded help, while social and technical barriers remain (as in, Dev doesn’t want to invest in making this dynamic), is a location to consider. Maybe the last static location we’ll have?

        Anyway, lots to think about…

        1. Mark Baker

          Oh, a page is definitely a semantic cluster, and definitely has a potential for dynamic assembly. Not in every case, certainly. Plain pages work very well for many purposes. There is no dynamic assembly of pages in Wikipedia, for instance.

          But for the right purpose, dynamic assembly of pages can be very powerful. Amazon is a great example.

          Dynamic assembly on the back end can also be very powerful. I have built a number of systems that extracted content from multiple sources and combined it with authored content to produce richly linked reference works that could never have been cost-effectively created by hand.

          The SPFE architecture (http://spfe.info) has a separate layer devoted to synthesis of topics. For ordinary written content, it mostly just normalizes names and references, but for other types of content it does dynamic assembly — so yes, I think dynamic assembly of topics is pretty important.

          I agree about the knobs as well. The right place for information of immediate effect about a knob is on or beside the knob. In this case, of course, the reader does not have to know the “right place” to find the content, just the right place to find the knob.

          And there are already many cases of dynamically binding content to the knob. Soft keys have used this technique for a long time.

          Of course, finding the right place to find the knob sometimes requires content, which can’t be attached to the knob, since the reader does not know where the knob is. And information that leads you from the business problem you know you have to the need to turn the knob you didn’t previously know about (that is, information about the knob that is not of immediate effect) cannot usefully be placed on the knob.

          So yes, embedded content is in the right place when it is of immediate effect in the interface, but much of our most important content is not of immediate effect. It is attached to business problems and planning issues and mental models — things to which it cannot be physically attached and on which it cannot be physically displayed.

          1. Chris Despopoulos

            Just a few things to add… First, in today’s virtual world the distinction between the knob and the documentation is blurring. So, for example, you could say that the GUI is page one. We’re getting ready for dynamically (or *more* dynamically) assembled GUIs, and I think rules and concepts for assembling docs are not much different than the same for a GUI. Which is just another way of reiterating the statement, the GUI is page one. (This applies to the rarefied world of computer software, of course).

            Second, call me old but I can’t let go of the idea that there’s a value proposition in assembling semantic clusters to lay over a topic domain… Based on the “author’s” expert knowledge. I believe that is still a viable way to address issues of content with no immediate effect. I also believe it’s what modern software products actually do — they lay a model over a data domain to compress that data into human-scale information.

            Combining the two means that I should be able to assemble a page that not only leads you to a description of the knob, but gives you the knob itself… And maybe other knobs on other machines, effectively combining authored content and GUI in a dynamically assembled dashboard.

            Would you want to sniff out your own assemblies of these things? Sure, in a domain that really interests you. But in other domains, why not just use the assemblies of somebody you trust?

          2. Mark Baker

            Agreed about the distinction between knob and documentation in the virtual world. But to me this simply means that there is less and less need to document GUIs as GUIs get better (thanks to better UX) and people become more accustomed to working with them. Embedded docs is a great idea, but they only address a small area of what docs can or should be doing, which is helping users address their business problems.

            And I agree that there is value in the author assembling semantic clusters to lay over a topic domain, with the following caveats:

            * Assembling semantic clusters does not have to mean linearizing them. Bottom up architectures that link along lines of subject affinity form rich semantic clusters that allow readers to navigate in the way that best suits their immediate information need.

            * Sometimes, a linear assembly does work, and is required, but only if the reader is willing to submit themselves to it. Sometimes readers are willing. Sometimes they demand that the author take the wheel and guide them. Not often, but sometimes, and sometimes in critical situations. No one information architecture is right for all situations and EPPO does not claim to be.

            * It does not matter whether an arrangement of content provides value unless real users are willing to use it in the arrangement provided. That requires the reader to pause and understand the arrangement, and most are not willing to do that. Most, indeed, probably won’t recognize that there is such an arrangement to understand or use. Good organization is no better than bad organization if readers ignore all forms of overt organization.

            * Information foraging is the best model we have of information seeking behavior. The way to add value by creating semantic clusters, therefore, is to optimize them for information foraging. If we do this well, I believe it can add a very large amount of value. The characteristics of a topic in a bottom-up architecture — working well as a search result and linking richly — are essentially covert forms of organization. The reader just sees familiar search boxes and links, not an overt structure. There is no “structure” that demands their attention, just information sets that work better than others.

            So yes, the “old” idea that there is value in authors creating semantic clusters is still valid, but the old way of doing it is largely invalid. The author is no longer the sole provider of structure — dynamic semantic clusters are formed in many ways — but the author can still play a crucial role, as long as they recognize that they are no longer designing a curriculum, but cooperating in the population and navigation of a dynamic information environment.

            Finally, yes, readers do not want to assemble things for themselves. That is not what information foraging is. Dynamic semantic clusters form on the Web as a result of reader actions, often the result of many readers’ actions, but the reader is not consciously constructing a semantic cluster. Build-your-own-documentation systems have no appeal in this environment. Readers would greatly prefer that we writers assemble navigable semantic clusters for them. Today, that means creating content that works as a search result and that links richly to related and ancillary subjects.

  4. Tom Johnson

    I liked the clarity and argument in this post. I realize it’s nothing that you haven’t already covered in earlier posts and your book, but you do a great job articulating it here.

    Overall, I would like to see the next step in this evolution of thought. If search replaces the TOC, then how do you enhance search so that the right content surfaces? Once the content surfaces, how do you create efficient links to all the other related content? If you do show articles in the sidebar, such as related articles by keyword, how do you construct the keyword algorithms? In other words, what’s the next step after deciding there is no right place?

    Re the library metaphor, I kind of agree with Alex. I was at the public library the other day searching for books on basketball. I got the Dewey Decimal number of one book and went to the shelf area. I knew that the shelf would contain lots of other basketball books — I just needed to know where that shelf was. I ended up getting books other than the one whose decimal number I initially wrote down.

    I think search works in a similar way. Sure, basketball might be in the 500s rather than the 700s, or grouped in reference, or fitness, or games — I could never really put basketball in a proper hierarchy of information about the world. But I also don’t want basketball books spread out throughout the library, with only little references inside books giving me clues to the whereabouts of other basketball books. It is nice to browse a collection of related materials.

    Granted, there are many items that don’t fit into the basketball section. For example, Bill Bradley’s basketball memoir — I’m sure it could fit in the essays, autobiographies, and basketball sections. But I’d say only about 20% of content is like this.

    In sum, aggregating like material helps increase findability, even if some of the material doesn’t fit.

    1. Mark Baker Post author

      Thanks for the comment, Tom.

      I think that the amount of content that is like this depends very much on what particular line through the subject your interest takes. The cluster that matters to you is different from the cluster that matters to the next person. The library’s single static cluster contains 80 percent of one person’s personal cluster (though probably with a very large number of items which don’t belong in the personal cluster) and omits 20%. For another person, the numbers could be the reverse.

      The point is, clustering content is extremely valuable, but it is also extremely individual. The single static cluster does not fail because clusters are bad, but because it is a really inadequate form of clustering.

      The key operational characteristic of the Web is dynamic semantic clustering. That is what makes it so wonderful and so popular. It is the reason that people are not interested in learning what static clusters you have chosen to deposit your content in (the “right place” for content): they want to form their own clusters dynamically, which will include content from you and from many others. Clustering is too important to be left to libraries.

      I don’t think “how do we enhance search” is the right question for content creators to be asking. It is an important question — lots of brilliant Google engineers and their rivals are working on it all the time — but it is not our question. Our question should be, how do we make content work better with search.

      For the most part, unfortunately, tech comm has gone on creating content the way it always has and complaining that search does not work well enough. The real problem is that we have not been creating content that works well with search. That is really what Every Page is Page One is about: creating content that works with search — content that has a strong information scent so it gets found, that functions independently when found, and that assists the reader in charting their own onward course.

      The other part of your question, though, about creating efficient links, is definitely a question we should be addressing. Links are a part of dynamic semantic clustering — they allow readers to cluster information of interest to them as they read. They should thus be created systematically, following lines of significant subject affinity. For one practical technique for achieving this, see this post: /2011/08/01/more-links-less-time/

      1. Vinish Garg

        “For the most part, unfortunately, tech comm has gone on creating content the way it always has and complaining that search does not work well enough. The real problem is that we have not been creating content that works well with search. That is really what Every Page is Page One is about: creating content that works with search — content that has a strong information scent so it gets found, that functions independently when found, and that assists the reader in charting their own onward course.”

        I absolutely loved it! Thanks for making it so clear.

  5. Alessandro Stazi

    Good evening, Mark. At the COM&TEC conference in Bologna (December 2013):
    http://artigianodibabele.blogspot.it/2013/12/10-anni-di-com-una-bella-festa.html

    … I spoke about an “old paradigm” (the user who seeks the information) and a “new paradigm” (the information that seeks the user, since it is focused on the user’s context and driven by the user’s tasks). I use “old paradigm” in a sense very close to your concept of a broken social contract between writer and reader, built around the “right place” where one person writes and another reads. But where is the “right place” today? Web 2.0/3.0, social networks, blogs, wikis, mobile devices, cloud, big data: everyone can create content; it is an N-to-N relation. My idea is very similar to your scheme of “bottom-up” architecture. Old, printable, 600-page manuals can still be a useful support for knowledge. But in the near future, the main activity of a technical communicator will no longer be based on the production of traditional, sequential, hierarchical manuals. I will speak about these issues at the TCEurope Colloquia conference in Aix-en-Provence on 25 April 2014:
    http://www.tceurope.org/colloquia/41-2014aix

    I hope to share more ideas on this question, which is becoming a real pivotal issue for the future of technical communication best practices.

    1. Mark Baker Post author

      Thanks for the comment, Alessandro.

      Indeed, it seems like we are looking at the same phenomena in very similar ways. Good luck with your presentation.

  7. Jeff Coatsworth

    Just curious Mark – where’d you do your MA? I was at IHPST in the early ’90s doing a thesis in a History of Medicine (and Technology) combo.

    1. Mark Baker Post author

      Hi Jeff,

      My MA is from the University of Western Ontario, in London, Ontario, Canada. I was very lucky, actually, because they really let me construct my own program and found a prof in the engineering department who was willing to be my supervisor. This was fortunate because there were very few history of technology programs around at the time, and I could not have afforded to go to any of the schools that offered them.
