The Web Does Minimalism

By Mark Baker | 2012/02/06

It struck me today that the Web does Minimalism. Not only does it do it, it does it naturally, and it does it well. Consider:

Here’s a common listing of the principal tenets of minimalism (found via Google):

  • Take An Action-Oriented Approach
  • Aim for Guided Exploration
  • Position the Documentation in the Task Domain
  • Support Error Recognition and Recovery
  • Design For Non-linear Reading
  • Embrace the Motto: Less Is More

Let’s look at how the Web does each of these:

Take An Action-Oriented Approach

Search for help on the web, and you will get instructions for the task you are trying to accomplish. Virtually every hit you get will be action-oriented. You could stumble into an uber-geek forum that will baffle you with baffle-gab in the hopes that you will go away, but so what? There is lots and lots of action-oriented help out there.

Aim for Guided Exploration

The Web is constantly inviting you to explore. People tweet about things they want you to check out. Blogs enthusiastically recommend trying out such and such a feature of a product. But the real beauty of the Web is that, for most tasks, you can begin in confidence knowing that if you get stuck, you can Google for help. And in the unlikely event that you don’t find any preexisting information that helps you, you can certainly find a forum where you can ask for help and where helpful people will guide you through your problem step by step.

Position the Documentation in the Task Domain

The Web is anchored firmly in the task domain. People ask how to do something. Other people answer with the necessary steps to do that thing. It is all about the task. The answers may assume more knowledge than you have, but you can usually find another answer that does not make the same assumption. Overall, though, the vast majority of technical communication on the web is rooted in the task domain. Not only that, it is written by people who have actually done the task.

Support Error Recognition and Recovery

As David Weinberger points out in Too Big To Know, if you get an error message from a system these days, all you have to do is Google the text of the error message and you will find an answer. No matter what kind of mess you get into, you can find resources on the Web that will help you recognize the error you have made and tell you how to fix it. Those resources may be existing content, or they may be forum members. The fact that you can communicate with real people who have already solved the very problem you are having means that the Web supports error recognition and recovery better than any manual ever could.

Design For Non-linear Reading

Check. No need to elaborate on this one.

Embrace the Motto: Less Is More

At first glance, this may not seem to fit. If the Web is about anything, it is about more, more, and more more. But in an important way, that more is less. My personal take on Minimalism is that it is not about less content, it is about the reader spending less time on the content. The problem with trying to reduce the amount of time that the reader spends on the content is that every reader is different. Cut back heavily to serve the quick, well-informed reader and you risk marooning the slow, clueless reader. Expand to support the slow, clueless reader and you risk boring the quick, well-informed reader. But the web has a cure for this: massive redundancy. The web does not choose which reader to serve; it serves them all. Every reader can find instructions that match their level of knowledge and their vocabulary. The obscene prolixity of the web, as sifted by fantastic relevance engines, results in less reading for the individual reader. (Not, of course, that it does this equally well every time, but what does?)

So there we have it. The web does Minimalism, and it does it well. Makes you want to ask if our role needs to be re-examined, doesn’t it?


9 thoughts on “The Web Does Minimalism”

  1. Larry Kunz

    I’m not totally buying this, Mark. The web is just…there. The biggest repository of content in human history. Through deliberate curation and judicious use of search engines, people have found ways to tease out information that’s germane to doing tasks. But saying the web is minimalist is like saying a block of marble is beautiful while ignoring the sculptor’s role in forming something beautiful from it.

    You say “the Web is anchored firmly in the task domain.” Yet I constantly turn to it for conceptual and reference information. What are the basic concepts behind macroeconomics? What will the weather be like in Chicago this week? Again, the web is just a mass of content, and each of us makes of it whatever we want to make of it.

    1. Mark Baker Post author

      Thanks for the reply, Larry.

      You’ve forced me to think through what I was trying to express a little more. I think it’s this: when I say that the web does minimalism, I don’t mean that it produces an artifact that looks anything like what an individual writer would produce if they were writing in a minimalist way. The Web, as a shared space, does not work that way. My point is that the Web, as it is, performs the functions that minimalism recommends. That is, it delivers the kind of information that adult learners need in the way, and at the time, that the adult learner needs it.

      A writer would attempt to be minimalist by careful selection and organization of content. The web does minimalism by supplying vast amounts of miscellaneous content and then organizing it on the fly for a particular user via search, interaction, and social curation. It isn’t a block of marble; it’s a huge collection of pebbles. The beauty of it is, very often the perfect pebble is out there, and very often the combination of Google, forums, and social networks turns up that perfect pebble.

      The essential difference between writer minimalism and web minimalism is that the writer can only produce a small, select amount of content, and can only draw on a small amount of information and experience to produce that content. The writer’s content may be brilliantly curated, organized, and presented, but it cannot always hit the mark for every user and every user problem. The web, on the other hand, draws on vastly more information and vastly more experience to create vastly more content, so that the chances that it has the content that will hit the mark for every user and every user problem are vastly better.

      Yes, the user’s road to the web’s content may be a little longer and a little bumpier than with the writer’s work, but search and social curation are good enough that they can usually get there. More to the point, the exact answer you need is more likely to be there. And I have noticed myself that I am more patient with the web than with docs, because if the docs don’t give me an answer immediately, I have little faith that the answer is in there somewhere, whereas on the web, I am more confident that the answer is available and therefore more willing to dig for it.

      As for being anchored in the task domain, I do think that there is a huge preponderance of task-based material on the web, but you are right that there is also a large amount of conceptual and reference material. Perhaps anchored is not quite the right word for the web. It might be truer to say that the web turns whatever face to you that you ask of it. If you ask something in the task domain, you get responses in the task domain. It isn’t anchored in any domain, but if you address it in that domain, it returns answers anchored in that domain.

      Which, I guess, points to the essence of the difference: writer minimalism is statically minimalist, while the web is dynamically minimalist.

  2. Larry Kunz

    Thanks, Mark. That helps me better understand what you were saying. I like your comparing the web to a collection of pebbles rather than a block of marble – that’s spot on. I also like your insight in saying that we’re more willing to trust the web than we’re willing to trust documentation to contain that just-right piece of content.

    It seems like you’re saying — and I don’t want to put words into your mouth — that search engines and social curation evolved out of necessity, as a direct outgrowth of the nature of the web. They had to evolve so that people would have a way to find the pebbles they wanted. Would you say that as the web continues to develop in the future, these tools will continue to evolve (and perhaps new ones will emerge) as a natural outgrowth of that development?

    You’ve given us a lot to think about, and I thank you.

    1. Mark Baker Post author

      Hi Larry,

      Well, I wasn’t saying that the web evolved that way out of necessity, but now that you say it, I think you are right. Early on, Yahoo tried to do it the old way, by hiring editors to catalog the web, but it soon outgrew any possibility of that being practical. At that point the web was open for Google and for Facebook and Twitter to figure out a new paradigm.

      I’m beginning to see that what sets the Web apart from the world of books is that books are designed around the problem of scarcity. Information was hard to find and hard to navigate, and so selection and organization were valuable. The author selected and organized material optimally for a cross section of readers, and that service was of value to them.

      On the web, everything revolves around the problem of abundance. Selection and organization turn abundance into scarcity, and we don’t want that: we want to benefit from the abundance, and that requires entirely different mechanisms. So the web provides the mechanisms, through search and social curation, to find the perfect morsel in the vast abundance.

      These mechanisms don’t require an intermediary, because no one intermediary could ever keep up. In David Weinberger’s words, the Web is too big to know. The crowd and the machine are the intermediaries, because only they have the bandwidth. Whatever the web evolves into, this will be the fundamental fact to which everything must adapt: you can’t curate it; it curates itself.

      These mechanisms may not be as refined or as precise as the mechanisms of scarcity, but the refinement of the mechanism is not the point. The point is the value of what is found. Searching the web may have the messiness and the uncertainty of panning for gold, but the point is, when you find the gold, it’s gold.

      Will it continue to evolve and improve? I’m certain it will. Will it ever look or behave like the old mechanisms of scarcity? I’m certain it will not.

  3. Ray Gallon

    Mark and Larry both, thanks for this stimulating article and exchange.

    The web is a body of knowledge. Some of that knowledge, by the way, is false, as we well know.

    People have been saying that the challenge is to find, as Mark puts it, the perfect pebble, amid the mass of pebbles rolling around in the ebb and flow of the tides. Information Architect Ian Barker says, “Finding is the New Doing.”

    If Google, social nets, etc. are useful tools for doing this, we need to be conscious that they are also creating a scarcity model by limiting what we see on the basis of “personal preference” algorithms that are being used by more than just Amazon. Check out the TED talk by Eli Pariser if you haven’t already seen it.

    Curating also creates a scarcity model.

    This is not always negative. If all our consumption of knowledge and entertainment follows the “on demand” model (YouTube, Spotify, etc.), we only repeat what we already know, based on memory, taste and judgement. We lose the element of surprise and discovery. If Google and Facebook and all the rest narrow their search algorithms to our memory, taste and judgement, we also lose.

    I come from the world of radio, originally, and one of the nice things about radio, when it’s not governed solely by audience numbers, is that someone’s intelligence is making the selection, and if we go to a radio station because we mostly like that selection, we are almost certain to also find something new and surprising chosen by that intelligence.

    Of course, we need lots of stations with lots of different intelligences programming them to avoid another narrowing, scarcity model, but it’s perhaps paradoxical that selection can sometimes lead to opening, and overabundance can also sometimes be paralyzing.

    If what I’ve just written seems to contradict itself – hey – welcome to our complex world. There is no simple explanation.

    To come back to Mark’s original premise, however, that the web meets the requirements of minimalism, in a certain fashion, I’d have to agree – but there is one objective of minimalism that is not met by the web: lowering the cost of information creation. Unless, of course, we just abandon our users to the web…

    1. Mark Baker

      Ray, thanks for your thoughtful comment. You raise a number of important issues.

      In Everything is Miscellaneous, Weinberger says that the Web transfers the power to select and organize content from the producer to the consumer. Since selecting and organizing content is a source of power, the producers naturally fight back. SOPA/PIPA is just one skirmish in that war. This war is not generally about ideology, but about money. The Facebook behavior Eli Pariser describes is not ideological in nature. Facebook is not filtering conservative voices from his feed because Mark Zuckerberg is a liberal and wants everyone to read liberal ideas. Facebook will filter liberal voices from the feeds of conservative members just the same way. It is doing it because Mark Zuckerberg is greedy and wants to keep people on his site for as long as possible so he can show them more ads. He is building a honey-trap, nothing more, nothing less.

      But it is not as though you could not isolate yourself from conflicting voices before the Internet. You could read partisan periodicals, join partisan clubs, attend partisan parties. Dickens pilloried this behavior in the Pickwick Papers, with the story of the editor of the Eatanswill Gazette. One could argue that the leakiness of the net’s filters actually increases the chances that those who want to isolate themselves politically will still be exposed to opposing points of view. The fact is, some people like to expose themselves to opposing voices, and some do not. Eli Pariser actually identified himself as someone who prefers not to expose himself to opposing views, by not clicking on conservative links.

      Filtering the vastness of the Web has to have a strong algorithmic component. Only algorithms can sift so much content, and only algorithms can coordinate the conscious and unconscious curation of billions of users. But we still get to pick which algorithms to trust. As long as we have competing commercial algorithms to choose from, I don’t think the danger is too great.

      And, of course, we have always been subject to competing commercial algorithms. Your radio station programmer was executing an algorithm intended to keep listeners tuning in by giving them just enough novelty to keep them tuned in without so distressing their sensibilities that they tuned to a different station. Computerizing that algorithm simply makes it more efficient.

      The great difference in the commercially driven filtering that occurs on the web is that, unlike the old human/paper filtering, it does not begin by immediately filtering out the long tail. In the human/paper world, the cost of publishing any item means that it has to meet a certain sales projection before it is made publicly available at all. In the algorithm/web model, those input costs drop to zero, so everything is available to be matched by the filter. Thus the web has been a huge boon to independent musicians who can’t sell enough to interest a label, but can sell enough to live on, or at least finance their hobby. The long tail will remain a tail, of course. But at least the net does not dock that tail the way the paper world did.

      This is really important for technical communications, because almost every individual task problem that users of most products face is part of the long tail. Not enough people have that exact problem to make it worthwhile for a publisher or a manufacturer to include it in formal documentation, but for the few that do have it, it is really important information. I estimate that I make two or three such long tail inquiries every day.

      So, I don’t think it is a matter of technical writers deciding whether or not to abandon readers to the web. The shoe is on the other foot. In droves, readers are abandoning us for the web (just as writers, in droves, are abandoning STC). They are doing so not because we did a bad job by paper-world standards, necessarily, but because the information they want is in the long tail. I myself hardly ever consult the help for any program or gadget I use. I Google for help as my *first* option.

      This does not mean that there is no role for technical communications. Sometimes, the information I find does come from docs written by professional technical writers. At other times, it comes from fellow users. The point is, on the web, I expect to find it all, and thus the Web is the first and only place I look. The only way in which technical writers can be said to be abandoning their users to the web (as opposed to vice versa) is if they try to set up a rival channel and don’t put their material on the web where Google can find it.

      1. Ray Gallon

        Mark, no argument about the need for algorithms. But I do have an argument about not having a choice of which algorithm, or whether the algorithm, is applied to my searches. If I’m on Amazon, I expect them to try to sell to me by passing more stuff my way like what I looked at last time. I don’t expect that from Facebook. I’m not paranoid about this stuff, I just want it to be up front and to have the right to switch it off without digging around in some very obscure text (and FB’s so-called “help” is a wonderful example of what not to do).

        Your comment about radio programming, which says, “Your radio station programmer was executing an algorithm intended to keep listeners tuning in by giving them just enough novelty to keep them tuned in without so distressing their sensibilities that they tuned to a different station. Computerizing that algorithm simply makes it more efficient,” is off the mark. Radio stations programmed by computer generally don’t work, and don’t last. The point is that the human intelligence making crazy associations the computer won’t make provides the element of surprise.

        So, according to you, there is still a role for technical communications. Is there a role for technical communicators? Professional ones, I mean?

        1. Mark Baker

          Hi Ray,

          Well, you may not expect it from Amazon and not from Facebook, but Facebook is selling stuff as well. The difference is that Amazon is selling stuff to you, whereas Facebook is selling you — or a portion of your attention — to advertisers. But both are attempting essentially the same thing: to tune your experience so that you will stick around and/or come back often. Sometimes their tactics will strike people as dirty pool, at which point they will have the opposite effect and drive people away. If enough people react this way, the system self-corrects because the dirty-pool algorithm stops working.

          There is nothing unique to the Web about these tactics. Grocery stores putting the milk at the very end of the store are practicing a similar sort of retain-and-up-sell tactic. If there is anything different on the web, it is that the dirty pool tactics get exposed sooner. So your concerns may be justified, but I don’t think they are concerns about the web so much as concerns about commerce generally. Of course, commerce itself is regulated by the algorithm of free market economics. In a real sense, people don’t create these algorithms — the market does by rewarding those algorithms that work and punishing those that don’t.

          You are right that programming a radio station by algorithm does not seem to work. But programming iTunes or YouTube by algorithm does. Does it work in the same way that a successful radio programmer works? Maybe not. But people vote with their feet. One of the things that annoys me about much of the discussion in technical communications today is that people persist in arguing with success. If the algorithm works, it works. If it attracts people and holds their attention, then the algorithm is successful.

          Is there still a role for professional technical communicators? I think so, yes. It may even turn out to be an expanded one. But we have to stop arguing with success. The Web works as a source of technical content. Community content works. People are flocking to the web as a source of technical content, and as long as we keep arguing against it, we will become more and more irrelevant.

          Where are the opportunities to continue getting paid to do technical communications? I think they will exist in a number of areas including, but not limited to:

          * The stuff that happens off the net — and that will always be a significant niche with major military and industrial concerns needing to develop content in secure environments.

          * The stuff that the Web does not do well — the boring stuff like comprehensive reference material.

          * Content marketing — helping companies to keep the consumer’s eyes on their site and on their content. This means not trying to play in a closed world that nobody visits, but playing on the Web and providing superior value that attracts eyeballs in the open information market. And also in doing what FaceBook and Amazon do — focus on retention, on keeping people on the site and keeping them coming back.

          But we have to stop arguing with success, and instead acknowledge it and emulate it, however much it violates our old sense of propriety. To put it another way, the technical communications profession in the future will consist of those technical writers who fit the new information marketplace and its algorithms. We do not get to choose; the market will choose what pleases it. We can only attempt to make ourselves more attractive to the market.

  4. Pingback: Why Isn’t It? | Rant of a Humanist Nerd