Why You Hate Your CMS

By Mark Baker | 2011/08/11

Today, Alan Houser (@arh) tweeted:

Before I die, I want to hear somebody speak well of their CMS. Especially in #techcomm. Surely somebody must be happy with theirs.

To which I (@mbakeranalecta) replied:

Indeed, but the CMS model is wrong. Can’t manage large data sets on desktop model. Can’t have good implementation of a broken model.

Which needs more explanation than 140 characters allows. So here goes. The problem with CMSs generally is that they apply one scale of solution to a different scale of problem.

We can distinguish four scales of interest here: tiny, small, large, and enormous. To set the scale, I’ll define the extremes first, and work toward the middle.

  • Tiny: At the tiny end of the scale are small pieces that you can keep in your head as you write. Topics are tiny. Blog posts are tiny. Memos are tiny. You generally don’t need to plan or outline a topic, blog post, or memo. You can keep the whole shape of it in your head as you write.
  • Enormous: The web is enormous. No one can hold the web in their head, no one could read the web in a lifetime, or a thousand lifetimes. You can only sample the web. One experiences the web as an enormous sea of tiny pieces.
  • Small: Compared to the scale of the web, books are small. War and Peace is small. The Widget Programmer’s Guide, all 600 pages of it, is small. Unlike tiny things, you can’t hold an entire small thing in your head at once. (We have tiny minds.) To write a book, you need to plan, to outline, to create a scaffolding to guide you as you write.
  • Large: Between the enormousness of the web and the smallness of War and Peace lies the large. Amazon.com is large. Wikipedia is large. A good-sized documentation set for a major product is large. The large is never the work of one person. But it has boundaries. You may not be able to read it all, but you can at least comprehend it all.

As I have hinted in the descriptions, different management techniques apply to content on different scales.

By themselves, tiny items of content require little or no management. Large and enormous collections may impose standards on the tiny items they include. But even when it must meet such standards, an individual tiny item doesn’t require any significant degree of management to create.

The enormous, on the other hand, is essentially unmanageable. To borrow David Weinberger’s word, the enormous is miscellaneous. The power of the web, as Weinberger argues, lies in its miscellaneous character. No one can manage the enormous; everyone can search it, tag it, add to it, like it, link it, and even curate it, but it defies all attempts to manage it.

The small can be managed at the desktop level. Many desktop tools allow you to organize small pieces of content. Word has its outline mode. FrameMaker allows you to divide a book into separate chapter files and combine them with a book file. Desktop tools provide the means to search the small and its parts. They let you browse the structure. They let you create links and cross references, and generate indexes and tables of contents.

The wealth of organizing methods that desktop tools provide allows you to efficiently navigate the small, examine its parts, and create and manage connections between them. In short, desktop tools support management by inspection.

The large, on the other hand, is too big to be managed by inspection. The large falls between the ungovernable miscellany of the enormous and the human management by inspection that is possible with the small. The large is managed by structure. It is the realm of the database, the query, and the report.

Amazon.com is large. The number of pages on Amazon.com is practically infinite, since they include not only details of each of the books in Amazon’s collection, but suggestions for the particular reader, based on everything Amazon knows about that reader, as well as data about other readers who have read the same or similar books. At the same time, almost every page on Amazon has the exact same structure. All the different parts (the book description, the reviews, the recommendations) are always there, in the same order.

Amazon pages look like this because none of them is written or organized by a human writer. Every page is created dynamically by a series of database queries. Every page is what in database terms would be called a report.
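
To make the idea of a page as a report concrete, here is a minimal sketch in Python of how such a page might be assembled. The schema, table names, and function are my own invention for illustration; they say nothing about how Amazon actually builds its pages, only what it means for every page to be the output of a fixed set of queries:

```python
import sqlite3

def build_book_page(conn: sqlite3.Connection, book_id: int) -> dict:
    """Assemble a product page entirely from queries: a report, not a document."""
    cur = conn.cursor()

    # The description section: one row from the catalogue.
    cur.execute("SELECT title, author, blurb FROM books WHERE id = ?", (book_id,))
    description = cur.fetchone()

    # The reviews section: every review of this book, newest first.
    cur.execute(
        "SELECT reviewer, rating, body FROM reviews "
        "WHERE book_id = ? ORDER BY created DESC",
        (book_id,),
    )
    reviews = cur.fetchall()

    # The recommendations section: other books bought by readers of this one.
    cur.execute(
        "SELECT DISTINCT b.title "
        "FROM purchases p1 "
        "JOIN purchases p2 ON p1.reader_id = p2.reader_id "
        "JOIN books b ON b.id = p2.book_id "
        "WHERE p1.book_id = ? AND p2.book_id <> ? "
        "LIMIT 5",
        (book_id, book_id),
    )
    recommendations = [row[0] for row in cur.fetchall()]

    # No human wrote or arranged this page. The same sections appear in the
    # same order every time; the structure of the report does the organizing.
    return {
        "description": description,
        "reviews": reviews,
        "recommendations": recommendations,
    }
```

Every page is the same report run against different rows. That is what makes the large manageable without anyone ever inspecting it.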

Wikipedia is similarly large. Though its pages are not quite so dynamic as Amazon’s, Wikipedia is not organized by hand. If you enter through the portal, you are browsing query results. If you arrive by search, each article stands alone, though it is linked to countless other articles. But no one organized Wikipedia. It is a database, not a book. (This is not new with Wikipedia. Early encyclopedias attempted to create a hierarchical taxonomy of all human knowledge. Later print encyclopedias abandoned this attempt at organization as hopeless and simply listed articles alphabetically and provided an index volume for querying the collection.)

In a perfect world, these different scales of content would be nicely aligned with each other. It would always be clear when you were dealing with the tiny, the small, the large, and the enormous. You would clearly see which type of management technique to apply to each, and all would be well.

In the imperfect world we live and write in, unfortunately, there is an uncomfortable gap between the small and the large. There are content sets that are getting too big to be managed the way small things are managed, by human inspection, but that are not regular enough, not structured enough, to be managed as a database, where queries replace human inspection as the means of selecting, ordering, and linking content.

It is this uncomfortable gap that content management systems try to fill. Most CMSs have a database as a back end, but they accept content that is not regular enough to be managed purely by queries and report generators. So the CMS attempts to provide a desktop interface to the content, to allow human beings to organize it the way they are used to organizing small content in a word processor or DTP application.

The CMS, in other words, applies the desktop model to large content sets. But the desktop model only works for small content sets. However sophisticated the CMS is at letting writers browse the content collection, the human brain is still overwhelmed by the attempt to organize that much content by inspection.

CMSs also attempt to help by supporting collaboration, allowing the management chores to be distributed among several writers (or information architects, as people are called when their entire job has become to organize the large). But management by inspection does not really work in a collaborative environment, because there are not enough constraints to keep the collaborators from working at cross purposes. (One of the main reasons that structure is required for managing the large is to constrain the actions of many contributors so that they all work to the same end without constantly running into each other.)

The consequence of trying to manage the large using the management model of the small is that the content management overhead grows as the content set gets larger. This is why so many content management systems that seemed to work well at first slowly grind to a halt. It is why so many DITA users are starting to complain that they spend more time maintaining maps than they do creating content.

And that is why Alan may never get his wish to hear someone speak well of their CMS, especially in #techcomm. The model is wrong for the scale of the problem, and the result must always be frustration.

(PS: I retitled this post from “Content Management and the Problem of Scale” to “Why You Hate Your CMS.” Here’s why.)

7 thoughts on “Why You Hate Your CMS”

  1. Paul Korir

    Your argument, quite articulate and well reasoned, gives credence to the notion of object-oriented methods and patterning in software development – the growing need to retain a handle on data structure without sacrificing manipulability. The consequences are increasingly slower code (hence the need for faster machines) albeit with greater functionality (how high can high-level programming languages go?). Perhaps it’s time for computer scientists to come up with abstractions that lend themselves well to both needs: preservation of structure and functional flexibility.
    PK

  2. Chris Atherton

    Nice. I’m reminded of the observation that practicing gynaecology is akin to trying to paint a hallway through the letterbox; managing content through a CMS seems at least partly analogous.

    1. Mark Baker Post author

      Hi Chris. Thanks for the comment. That is a great metaphor! I agree it seems apt. Sometimes it is not only the size of the data set that is the problem, but the angles from which you can reach it.

      1. Chris Atherton

        Absolutely. Had a nice moment of this the other day, when we took a bunch of website abstractions and functional requirements off the computer and organised it all over the wall. Went from “gah, I can’t apprehend this” to “I see exactly how this all fits together” in about two hours.

        1. Mark Baker Post author

          Yes, it’s great when you can do that. Beyond a certain scale of data, however, it is impossible to spread it out on the wall. There is just too much. It is then that you need to turn to analytic techniques to query, summarize, and re-order the data so that you can comprehend it.

          This is where content management systems tend to fall down. They contain too much data to comprehend by simply viewing it spread out, but they don’t make the data accessible enough to apply effective analytic techniques to it.

  3. Pingback: Time for Content Management to Come out of the Closet - Every Page is Page One
