21 Responses to No “Right Place” for Content

  1. Larry Kunz 2014/03/31 at 21:01 #

    Good analysis, Mark. Would you agree that the very notion of “place” has become outmoded? I used to get a book’s Dewey Decimal number from the card catalog and then find it in the physical stacks. Now I go online and search. If the information architect has created a path to the content that aligns with my expectations, it doesn’t matter what server contains the content, where the content exists within the server’s file structure, or even where the server is physically located.

    So not only is there no “right place,” it seems like nonsense even to talk about “place.”

    • Alex Knappe 2014/04/01 at 07:48 #

      Hi Larry,
      I would like to object to that notion of not talking about places anymore.
      There are definitely places – even in the Google era.
While our search behavior is usually somewhat erratic, we tend to hold our noses in the wind and sniff for information in the general surroundings (aka Google). Once we have picked up the information scent, we tend to find places where the information we seek is aggregated in greater density.
Those are the places where we will then continue to sniff around for a while, either finding what we seek or moving away to seek out another place, where the information scent seems denser than in other areas of the web.
      Those aggregation sites are the bookshelves and libraries of former times.
And in objection to Mark’s post, those sites are also clearly the right place to put information – at least as an attempt at macro management.
Micro management of content, on the other hand, is another beast to tame.

      • Mark Baker 2014/04/02 at 22:19 #

        Thanks for the comment Alex.

        Certainly information clumps on the Web. But are these clumps really places? Yes, there may be a clump of related content that happens to reside on one site, but there are also clumps that reside on many different sites. The #techcomm Twitter hash tag clumps references to diverse blogs and articles on technical communication, but does that make Twitter a place or a filter?

More importantly, these clumps are dynamic in nature. One page of one site may belong to many different clumps, and people searching Google often create clumps dynamically, on the fly. Dynamic semantic clustering is one of the key operational features of the Web. Clumps are formed far more by readers summoning and filtering dynamically than by writers co-locating statically.

Certainly there are places like StackOverflow that are great for asking certain kinds of questions. But these tend to be communities first and foremost. They are places to ask questions of people. If you want to find an existing answer to a programming problem, it makes far more sense to Google for the answer than to attempt to navigate the hierarchy of Stack Overflow, since Google will aggregate answers from many different sites.

        And StackOverflow itself works primarily as a filter, using voting to surface the most interesting questions and the most valuable answers.

That said, of course, we, as information providers, have a professional and commercial interest in keeping readers in our content as long as possible, so we do indeed want them to keep sniffing around our content. This is why I think we have to focus far more on bottom-up information architecture. Because as long as people are arriving from Google or Twitter, it is Google or Twitter they are sniffing around, not our site.

    • Mark Baker 2014/04/02 at 21:59 #

Thanks for the comment, Larry. I do agree. For instance, we could note how different the notion of an address is online compared to the physical world. In the physical world, an address is a place; you can go to it, and doing so takes time and energy. Online, an address is a summons which calls content to you, no matter where you are. To address content online is thus far more like addressing a person than addressing a building.

As you say, we often don’t know where our content is physically — it’s in some data center. Nor do the people who run the data center. Indeed, our data is often spread across many drives, or even many data centers, to guard against loss. There is a reason the cloud is called a cloud.

That said, there is still a kind of virtual location that we create on screen to help analog creatures navigate a digital world. That approach dominated early attempts at online publishing. But even this virtualization of place is breaking down as we become digital natives (or, at least, digital landed immigrants), increasingly accustomed to summoning rather than seeking.

Increasingly, therefore, what matters is not to put things where people might seek them, but to ensure that they come when someone summons them.

  2. Jonatan Lundin 2014/04/01 at 08:16 #

True; the way we organize the world is subjective. How I prefer to organize content differs from your preference. Thus, I may have difficulty understanding your way, since I interpret it from my own viewpoint. So there is no right place for content.

The ancient Greeks thought that the Oracle of Delphi spoke on behalf of the gods. If we were to organize content in that time, maybe we could have consulted the oracle to get the “right place”. Unfortunately, the oracle is not around anymore.

But the design of a bottom-up information architecture, where taxonomies are used to provide local navigation along lines of subject affinity, is also a subjective organization. When designing, we cannot consciously ignore our preferences, experiences, beliefs, and so on. These preferences, experiences, and beliefs “talk” to us from the artifacts we design.

I agree with the underlying concept of an EPPO design (although we have different views on what constitutes a page – I say it is an answer to a user question), and that most users prefer, and start from, a keyword search. But to me, we need to give the user additional search aids besides the pure text-search algorithm.

    Filters are such additional aids. Taxonomies are a type of filter. The problem is that any taxonomy is a subjective organizational scheme which I – as a user – have to learn in some respect. So I believe that, to find an answer – the page – the user must sometimes interact with a filter, and thus try to interpret it.

So my conclusion so far is that we need to find the most efficient way a human interprets a filter. To me, a human dialog is the most efficient way. When I ask you something (= I type keywords in Google), and you do not understand what I am asking (= I get millions of hits), you ask me questions back, and you may provide examples such as “What type of car are you looking for – do you mean Mercedes, Volvo, Saab or Ford?”

This type of dialog is full of subjective views on how the world is organized – thus full of “taxonomies” in some sense, which we can mimic in user assistance. Humans do not seem to have much of a problem “filtering” their way through such a dialog. The context in which the dialog happens is probably of much help.

    • Mark Baker 2014/04/02 at 22:38 #

      Thanks for the comment, Jonatan. I agree we need to give people something other than search. Thus the importance I place on linking, and linking in a disciplined and systematic way. (In some sense, I agree, all design is subjective, but there is a huge difference between individual authors linking on a whim and links generated based on agreed categories and terms defined by taxonomies.)

I don’t agree that taxonomies are filters. Taxonomies are simply namespaces. As such, they are useful for defining filters, but filters are specific mechanisms that have to be engineered and used. As I have noted before, the Web can usefully be thought of not as a giant collection of content but as a giant collection of filters. Search is a filter. Links are filters. Twitter is a filter. Facebook is a filter. Amazon is a filter. When people use the Web, they are using the filters it provides to summon content that meets their needs.
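      The namespace/filter distinction can be made concrete in a few lines of code. In this sketch (the taxonomy terms, topic titles, and function name are all invented for illustration, not drawn from any real system), the taxonomy is just a controlled set of terms, while the filter is a mechanism engineered on top of it:

```python
# A taxonomy is a namespace: a controlled vocabulary of terms.
TAXONOMY = {"networking", "security", "storage"}

# Content items are classified against that namespace.
# (These topics and their category assignments are hypothetical.)
topics = [
    {"title": "Configuring TLS", "categories": {"networking", "security"}},
    {"title": "Adding a disk volume", "categories": {"storage"}},
]

def filter_by_category(items, category):
    """A filter: a specific mechanism built on top of the taxonomy."""
    if category not in TAXONOMY:
        raise ValueError(f"{category!r} is not a term in the taxonomy")
    return [t for t in items if category in t["categories"]]

print([t["title"] for t in filter_by_category(topics, "security")])
# → ['Configuring TLS']
```

      The taxonomy itself does nothing until a mechanism like `filter_by_category` is engineered to use it; the same namespace could equally drive faceted search, link generation, or tag clouds.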

      There is certainly a place for providing additional local filters. But local filters have two major hurdles to overcome. First, people have to be willing to learn to use them. Evidence shows that people are increasingly unwilling to navigate sites: they go straight to search. People use the tools they already know how to use — even if they know there are better tools — because they don’t want to bother learning the other tools.

      Secondly, local filters suffer from lack of scope. People prefer — in Weinberger’s words — to include it all, filter it afterward. I don’t want to spend time filtering one particular site if I can filter the whole Web.

The idea of a filter that works like a conversation is certainly intuitively appealing — in some sense, it is the Turing test itself — but such interfaces don’t seem to have ever gained much traction. (The attempt to get people to use natural language phone tree systems proved to be a bust.)

For these reasons, I think it is far more valuable to work on creating content that gets filtered in by the filters people already use than to try to get them to use our local filters. And if you do create a local filter, I think it is imperative, in the general case at least, that the content it sits over should also be accessible to the Web’s filters, should work as page one for the reader, and should have a robust bottom-up information architecture so that the reader can follow an information scent when they pick one up.

  3. Chris Despopoulos 2014/04/01 at 08:23 #

    What I like about these posts is that I’m slowly getting the EPPO idea. Not that I’m falling for it 100%, but I think I’m getting it. 🙂

    I think I *can* say there’s a concept of “right place” for information… As local as possible. Of course, locality is a hard one to pin down these days, but we have a few indicators:
    * Embedded documentation — document the knob on the knob itself
    * Mobile devices — Google Glass being the acme at the moment
* Adaptive content — varies depending on context and/or device

    From here the arguments can ensue… Is EPPO the best approach to address the emerging notions of locality? Is there a hybrid approach? Does locality of this sort give us or yield to other “best place” indicators — content models? Are these necessarily static or mutable? Can people search for content models as easily as they can for content topics? Assemble content models on the fly? Save them for later? Publish them?

    Here’s what I’m not getting… Why can’t a page one page be a member of any number of content models, and carry that membership with it? Why would I not be as interested in searching for adequate content models as I am in searching for an adequate topic? Are people abandoning “right place” models because they don’t work at a fundamental level? Or is it because the current concept of “place” is paper-based?

    Finally, I reject the idea that “right-place” modeling never worked. That’s like saying a travois never worked because we have trucks now. Our technology has outstripped our capacity to apply it — nothing new there. Even today we bring the travois onto the super-highway in the form of 18-wheelers… What will we do with content?

    No question, we’re coming into interesting times for the history of text…

    • Mark Baker 2014/04/02 at 23:01 #

      Thanks for the comment, Chris.

      Yes, in a sense, the right place model did work, because it was better than chaos — in fact, a lot better. Our idea of what “works” is often comparative rather than absolute. Things work until something better comes along, and then they don’t work anymore. Getting your water from a well by letting down a bucket on a rope works, but it does not “work” for most city dwellers today.

      By content models, I take it you mean systems for organizing groups of content objects. (I say this because to me, a content model is a model or template for an individual item of content — which is perhaps a reflection of the bottom-up vs. top-down approach to information architecture.)

      In that sense, I would say that a piece of content can be part of any number of what I call “semantic clusters”. Semantic clusters can be formed by authors and expressed top-down via TOCs or bottom up via linking. They can be created dynamically by readers when they do a search. They can be created consciously by content curators, or unconsciously by crowds of people by the way they tag content, tweet about it, or simply visit it.

In the physical world, one form of clustering excludes all others, because clustering is done by putting things in the “right place” according to some organizational principle. In the digital world, one form of clustering does not exclude any other form of clustering, either static or dynamic. Clusters are virtual and often dynamic. They are created not by putting things in the “right place” but by calling them by their right name. As noted above, we do not seek, we summon.

      This notion of summoning content does, however, have a profound effect on how content can be delivered into specific real-world locations. All of the location based services that we enjoy on our mobile devices rely on the fact that users don’t have to go to the “right place” to find content, but rather, the content can be brought to them in their current location by summoning it with the right incantation.

      Thus “as local as possible” is actually not about content having a place (which we have to go to) but about content coming to us, to the place we are located, when we call for it.

      And yes, I believe EPPO is the best approach to create content that can be summoned in this manner. Content that is summoned cannot assume its context from the location in which it resides (as typical “located” content does) because it will not be viewed in that context. The reader does not come to the content, and does not learn its context on the journey. The content comes to the reader, naked and alone. It is always page one.

      • Chris Despopoulos 2014/04/03 at 09:33 #

        Awesome… I can easily go with semantic cluster. Now I’m starting to think what makes me scratch my head is that (perhaps) I’m considering a page to be a semantic cluster as well — one that should have as much potential for dynamic assembly as any other semantic cluster. Before anybody shrugs that off as academic clap-trap, I’ll say that I’m currently working with assembling pages with content that comes in live from the product state. It’s a short step to use that state information to trigger affinities between predefined page subsets (what DO we call these units of information?) and assemble a page out of that.

        I will say that there still is a static location when you’re documenting a specific knob. You put the content on or in that knob. Maybe you summon the content to the knob, but we can low-tech it and hard-code the content with the knob. That’s because the GUI is as much a page as any other “page” construct. So static embedded help, while social and technical barriers remain (as in, Dev doesn’t want to invest in making this dynamic), is a location to consider. Maybe the last static location we’ll have?

        Anyway, lots to think about…

        • Mark Baker 2014/04/04 at 03:23 #

          Oh, a page is definitely a semantic cluster, and definitely has a potential for dynamic assembly. Not in every case, certainly. Plain pages work very well for many purposes. There is no dynamic assembly of pages in Wikipedia, for instance.

          But for the right purpose, dynamic assembly of pages can be very powerful. Amazon is a great example.

Dynamic assembly on the back end can also be very powerful. I have built a number of systems that extracted content from multiple sources and combined it with authored content to produce richly linked reference works that could never have been cost-effectively created by hand.

          The SPFE architecture (http://spfe.info) has a separate layer devoted to synthesis of topics. For ordinary written content, it mostly just normalizes names and references, but for other types of content it does dynamic assembly — so yes, I think dynamic assembly of topics is pretty important.

          I agree about the knobs as well. The right place for information of immediate effect about a knob is on or beside the knob. In this case, of course, the reader does not have to know the “right place” to find the content, just the right place to find the knob.

          And there are already many cases of dynamically binding content to the knob. Soft keys have used this technique for a long time.

          Of course, finding the right place to find the knob sometimes requires content, which can’t be attached to the knob, since the reader does not know where the knob is. And information that leads you from the business problem you know you have to the need to turn the knob you didn’t previously know about (that is, information about the knob that is not of immediate effect) cannot usefully be placed on the knob.

So yes, embedded content is in the right place when it is of immediate effect in the interface, but much of our most important content is not of immediate effect. It is attached to business problems and planning issues and mental models — things to which it cannot be physically attached and on which it cannot be physically displayed.

          • Chris Despopoulos 2014/04/05 at 06:03 #

Just a few things to add… First, in today’s virtual world the distinction between the knob and the documentation is blurring. So, for example, you could say that the GUI is page one. We’re getting ready for dynamically (or *more* dynamically) assembled GUIs, and I think the rules and concepts for assembling docs are not much different from those for a GUI. Which is just another way of reiterating the statement that the GUI is page one. (This applies to the rarefied world of computer software, of course.)

Second, call me old, but I can’t let go of the idea that there’s a value proposition in assembling semantic clusters to lay over a topic domain, based on the “author’s” expert knowledge. I believe that is still a viable way to address issues of content with no immediate effect. I also believe it’s what modern software products actually do — they lay a model over a data domain to compress that data into human-scale information.

            Combining the two means that I should be able to assemble a page that not only leads you to a description of the knob, but gives you the knob itself… And maybe other knobs on other machines, effectively combining authored content and GUI in a dynamically assembled dashboard.

            Would you want to sniff out your own assemblies of these things? Sure, in a domain that really interests you. But in other domains, why not just use the assemblies of somebody you trust?

          • Mark Baker 2014/04/05 at 09:10 #

Agreed about the distinction between knob and documentation in the virtual world. But to me this simply means that there is less and less need to document GUIs as GUIs get better (thanks to better UX) and people become more accustomed to working with them. Embedded docs are a great idea, but they address only a small area of what docs can or should be doing, which is helping users address their business problems.

            And I agree that there is value in the author assembling semantic clusters to lay over a topic domain, with the following caveats:

            * Assembling semantic clusters does not have to mean linearizing them. Bottom up architectures that link along lines of subject affinity form rich semantic clusters that allow readers to navigate in the way that best suits their immediate information need.

            * Sometimes, a linear assembly does work, and is required, but only if the reader is willing to submit themselves to it. Sometimes readers are willing. Sometimes they demand that the author take the wheel and guide them. Not often, but sometimes, and sometimes in critical situations. No one information architecture is right for all situations and EPPO does not claim to be.

* An arrangement of content provides value only if real users are willing to use it in the arrangement provided. That requires the reader to pause to understand the arrangement, and most are not willing to do so. Most, indeed, probably won’t recognize that there is such an arrangement to understand or use. Good organization is no better than bad organization if readers ignore all forms of overt organization.

            * Information foraging is the best model we have of information seeking behavior. The way to add value by creating semantic clusters, therefore, is to optimize them for information foraging. If we do this well, I believe it can add a very large amount of value. The characteristics of a topic in a bottom-up architecture — working well as a search result and linking richly — are essentially covert forms of organization. The reader just sees familiar search boxes and links, not an overt structure. There is no “structure” that demands their attention, just information sets that work better than others.

So yes, the “old” idea that there is value in authors creating semantic clusters is still valid, but the old way of doing it is largely invalid. The author is no longer the sole provider of structure — dynamic semantic clusters are formed in many ways — but the author can still play a crucial role, as long as they recognize that they are no longer designing a curriculum, but cooperating in the population and navigation of a dynamic information environment.

            Finally, yes, readers do not want to assemble things for themselves. That is not what information foraging is. Dynamic semantic clusters form on the Web as a result of reader actions, often the result of many readers’ actions, but the reader is not consciously constructing a semantic cluster. Build-your-own-documentation systems have no appeal in this environment. Readers would greatly prefer that we writers assemble navigable semantic clusters for them. Today, that means creating content that works as a search result and that links richly to related and ancillary subjects.

  4. Tom Johnson 2014/04/01 at 16:24 #

    I liked the clarity and argument in this post. I realize it’s nothing that you haven’t already covered in earlier posts and your book, but you do a great job articulating it here.

    Overall, I would like to see the next step in this evolution of thought. If search replaces the TOC, then how do you enhance search so that the right content surfaces? Once the content surfaces, how do you create efficient links to all the other related content? If you do show articles in the sidebar, such as related articles by keyword, how do you construct the keyword algorithms? In other words, what’s the next step after deciding there is no right place?

Re the library metaphor, I kind of agree with Alex. I was at the public library the other day searching for books on basketball. I got the Dewey Decimal number of one book and went to the shelf area. I knew that the shelf would contain lots of other basketball books — I just needed to know where that shelf was. I ended up getting books other than the one whose decimal number I initially wrote down.

I think search works in a similar way. Sure, basketball might be in the 500s rather than the 700s, or grouped in reference, or fitness, or games — I could never really put basketball in a proper hierarchy of information about the world. But I also don’t want basketball books to be spread out throughout the library, with little references inside books that give me clues to the whereabouts of other basketball books. It is nice to browse a collection of related materials.

    Granted, there are many items that don’t fit into the basketball section. For example, Bill Bradley’s basketball memoir — I’m sure it could fit in the essays, autobiographies, and basketball sections. But I’d say only about 20% of content is like this.

    In sum, aggregating like material helps increase findability, even if some of the material doesn’t fit.

    • Mark Baker 2014/04/02 at 23:23 #

      Thanks for the comment, Tom.

I think that the amount of content that is like this depends very much on what particular line through the subject your interest takes. The cluster that matters to you is different from the cluster that matters to the next person. The library’s single static cluster contains 80 percent of one person’s personal cluster (though probably with a very large number of items which don’t belong in the personal cluster) and omits 20%. For another person, the numbers could be reversed.

      The point is, clustering content is extremely valuable, but it is also extremely individual. The single static cluster does not fail because clusters are bad, but because it is a really inadequate form of clustering.

      The key operational characteristic of the Web is dynamic semantic clustering. That is what makes it so wonderful and so popular. It is the reason that people are not interested in learning what static clusters you have chosen to deposit your content in (the “right place” for content): they want to form their own clusters dynamically, which will include content from you and from many others. Clustering is too important to be left to libraries.

      I don’t think “how do we enhance search” is the right question for content creators to be asking. It is an important question — lots of brilliant Google engineers and their rivals are working on it all the time — but it is not our question. Our question should be, how do we make content work better with search.

For the most part, unfortunately, tech comm has gone on creating content the way it always has and complaining that search does not work well enough. The real problem is that we have not been creating content that works well with search. That is really what Every Page is Page One is about: creating content that works with search — that has a strong information scent so it gets found, that functions independently when found, and that assists the reader in charting their own onward course.

The other part of your question, though, about creating efficient links, is definitely a question we should be addressing. Links are a part of dynamic semantic clustering — allowing readers to cluster information of interest to them as they read. They should thus be created systematically following lines of significant subject affinity. For one practical technique for achieving this, see this post: http://everypageispageone.com/2011/08/01/more-links-less-time/
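      One way such systematic linking can be sketched in code (a minimal illustration under assumed metadata; the field names `covers` and `mentions` and the topic ids are hypothetical, not taken from the linked post) is to generate links from declared subject affinities rather than authoring each link by hand:

```python
# Each topic declares the subject it covers and the subjects it mentions.
# (The metadata fields and subject names here are hypothetical.)
topics = [
    {"id": "install-widget", "covers": "installation", "mentions": ["licensing"]},
    {"id": "license-keys", "covers": "licensing", "mentions": ["installation"]},
    {"id": "tuning-widget", "covers": "performance", "mentions": ["installation"]},
]

def generate_links(topics):
    """Link every mention of a subject to the topic that covers that subject."""
    covers = {t["covers"]: t["id"] for t in topics}
    links = {}
    for t in topics:
        links[t["id"]] = [covers[s] for s in t["mentions"] if s in covers]
    return links

print(generate_links(topics))
# → {'install-widget': ['license-keys'], 'license-keys': ['install-widget'],
#    'tuning-widget': ['install-widget']}
```

      The point of the sketch is that authors declare subjects once, and links then follow lines of subject affinity automatically, rather than being placed one by one on individual whims.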

      • Vinish Garg 2014/04/03 at 12:28 #

“For the most part, unfortunately, tech comm has gone on creating content the way it always has and complaining that search does not work well enough. The real problem is that we have not been creating content that works well with search. That is really what Every Page is Page One is about: creating content that works with search — that has a strong information scent so it gets found, that functions independently when found, and that assists the reader in charting their own onward course.”

        I absolutely loved it! Thanks for making it so clear.

  5. Alessandro Stazi 2014/04/01 at 20:36 #

    Good evening Mark. In the COM&TEC conference of Bologna (December 2013):
    http://artigianodibabele.blogspot.it/2013/12/10-anni-di-com-una-bella-festa.html

… I spoke about an “old paradigm” (the users who seek the information) and a “new paradigm” (the information that seeks the users, since it is focused on the user context and task-driven). I say “old paradigm” in a sense very close to your concept of a broken social contract between writer and reader, based on the “right place” where someone writes and someone else reads. But where is the “right place” today? Web 2.0/3.0, social networks, blogs, wikis, mobile devices, cloud, big data: everyone can create content; it is an N-to-N relation. My idea is very much like your scheme of “bottom-up” architecture. Old printable manuals of 600 pages can still be a useful support for knowledge. But in the near future, the main activity of a tech communicator will not be based on the production of traditional, sequential, hierarchical manuals. I will speak about these issues at the TCEurope Colloquia conference in Aix-en-Provence on 25 April:
    http://www.tceurope.org/colloquia/41-2014aix

I hope to share other ideas on this question, which is becoming a real pivotal issue for the future of technical communication best practices.

  6. Alessandro Stazi 2014/04/02 at 06:36 #

    Good article Mark, as usual. In the COM&TEC conference of Bologna (December 2013):
    http://artigianodibabele.blogspot.it/2013/12/10-anni-di-com-una-bella-festa.html

… I spoke about an “old paradigm” (the users who seek the information) and a “new paradigm” (the information that seeks the users, since it is focused on the user context and task-driven). I say “old paradigm” in the sense in which you write about the broken social contract between writer and reader, based on the “right place” where someone writes and someone else reads. But where is the “right place” today? Web 2.0/3.0, social networks, blogs, wikis, mobile devices, cloud, big data repositories: everyone can create content; it is an N-to-N relation. My idea is very much like your scheme of “bottom-up” architecture. Old printable manuals of 600 pages can still be a useful support for knowledge. But in the near future, the main activity of a tech communicator will not be based on the production of traditional, sequential, hierarchical manuals. I will speak about these issues at the TCEurope Colloquia conference in Aix-en-Provence on 25 April.
    I’m glad to share and compare my vision with your approach.
The ideas of bottom-up information architecture and production of modular content will be the most interesting issues for tech comm in the coming years.

    • Mark Baker 2014/04/02 at 23:33 #

      Thanks for the comment, Alessandro.

      Indeed, it seems like we are looking at the same phenomena in very similar ways. Good luck with your presentation.

  7. Jeff Coatsworth 2014/04/02 at 18:09 #

    Just curious Mark – where’d you do your MA? I was at IHPST in the early ’90s doing a thesis in a History of Medicine (and Technology) combo.

    • Mark Baker 2014/04/02 at 23:41 #

      Hi Jeff,

      My MA is from the University of Western Ontario, in London, Ontario, Canada. I was very lucky, actually, because they really let me construct my own program and found a prof in the engineering department who was willing to be my supervisor. This was fortunate because there were very few history of technology programs around at the time, and I could not have afforded to go to any of the schools that offered them.

Trackbacks/Pingbacks

  1. No "Right Place" for Content | Techni... - 2014/04/04

    […] Summary: Today there is no "right place" to put information that will ensure that readers find it. Instead, we have to focus on making sure our content gets filtered in to the reader's search. Ever…  […]