Transclusion is the practice of pulling content dynamically from one page into another. Rather than cutting and pasting text between pages, you create a pointer to the page you are borrowing from. That pointer is resolved at run time, so the borrowed content is fetched from the other page when your page is loaded. Transclusion was a fundamental part of Ted Nelson’s original concept of hypertext. Yet it has never caught on, except in a few confined circumstances, and despite continued interest, it isn’t going to catch on.
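To make the mechanism concrete, here is a minimal sketch, in browser-style TypeScript, of what a client-side transclusion resolver might look like. It is an illustration only: the data-transclude attribute is invented, and no such mechanism is part of HTML or of any current standard.

```typescript
// A hypothetical client-side transclusion resolver, for illustration only.
// Nothing like this exists in HTML; the data-transclude attribute is invented.
//
// Usage in a page:
//   <div data-transclude="https://example.com/other-page.html#section-2"></div>

async function resolveTransclusions(): Promise<void> {
  const hosts = document.querySelectorAll<HTMLElement>("[data-transclude]");
  for (const host of Array.from(hosts)) {
    const src = host.dataset.transclude ?? "";
    const [url, fragmentId] = src.split("#");

    // Pull the borrowed page over the network at load time.
    // (In practice, a cross-origin fetch like this would be blocked by CORS
    // unless the other site explicitly allowed it.)
    const response = await fetch(url);
    const html = await response.text();

    // Parse the borrowed page and copy just the addressed fragment
    // into our own page.
    const doc = new DOMParser().parseFromString(html, "text/html");
    const fragment = fragmentId ? doc.getElementById(fragmentId) : doc.body;
    if (fragment) {
      host.replaceChildren(fragment.cloneNode(true));
    }
  }
}

document.addEventListener("DOMContentLoaded", () => {
  void resolveTransclusions();
});
```

Even in this toy version, notice how much the transcluding page has to assume about the page it borrows from: that the URL stays valid, that the fragment id still exists, and that the markup inside it still makes sense out of context.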
Rick Yagodich has an interesting post today, On third-party transclusion, in which he discusses some of the problems inherent in one current proposal for implementing transclusion in HTML. Rick’s analysis of that proposal strikes me as sound, but I think there are two even more fundamental reasons why not just the proposal he discusses, but all proposals for implementing transclusion are doomed to fail.
Transclusion violates the fundamental nature of the Web
David Weinberger described the fundamental nature of the Web as small pieces loosely joined. That is, the Web works because all of its billions of pages are joined together loosely. If they were more tightly joined, it would be too hard to insert a new page into the Web and there would not be billions of Web pages. If they were not joined at all, you could not move through the Web, and thus those billions of Web pages would not be found or read (and therefore, again, most of them would not exist). The specific level of looseness of the connections between pages is essential to making the Web work.
But transclusion creates a tighter join between pages. When one page transcludes another, it is tightly joined to that page, and that page (whether the author knows it or not) is tightly joined to all the pages that transclude from it. Transclusion, therefore, is contrary to the nature of the Web. While Web protocols could easily be adapted to implement transclusion, the very notion of transclusion breaks the model of the Web at a much more fundamental level.
The more formal name for the practice of creating small pieces loosely joined is the software design principle of high cohesion and loose coupling. High cohesion and loose coupling are essential properties of an adaptable software system (and, not coincidentally, of Web-based software architectures). A software module has high cohesion if it is self-contained and self-sufficient. A system composed of such modules is loosely coupled if the individual modules do not depend on any knowledge of how the other modules are implemented.
Generally such architectures are implemented by having the modules communicate by passing messages to each other. You can then replace any module in the system without having to rewrite any other modules, or change the basic architecture, as long as the new module sends and receives the same message formats. This creates robust systems that are easy to change as new needs arise.
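Here is a minimal TypeScript sketch of the idea, with all the names invented for illustration: two modules joined by nothing more than an agreed message format.

```typescript
// Loosely coupled modules agree only on a message format, never on each
// other's internals. All names here are invented for illustration.

interface PriceQuery { kind: "price-query"; symbol: string }
interface PriceReply { kind: "price-reply"; symbol: string; price: number }

// Any module that answers a PriceQuery with a PriceReply will do.
type QuoteProvider = (msg: PriceQuery) => PriceReply;

// One implementation of the provider module...
const staticQuotes: QuoteProvider = (msg) => ({
  kind: "price-reply",
  symbol: msg.symbol,
  price: 42.0,
});

// ...can be swapped for any other without touching the consumer,
// because the consumer depends only on the message contract.
function renderQuote(provider: QuoteProvider): string {
  const reply = provider({ kind: "price-query", symbol: "EXMP" });
  return `${reply.symbol}: ${reply.price}`;
}

console.log(renderQuote(staticQuotes)); // EXMP: 42
```

The consumer never knows, or needs to know, how the provider produces its answer. That ignorance is precisely what makes the system easy to change.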
Transclusion, on the other hand, creates a content object that lacks cohesion in the most fundamental way: the implementation of the content object relies on the internal implementation of another content object. It is therefore low in cohesion and tightly coupled, the very antithesis of the principle that makes the Web work.
You do, of course, find transclusion in many current publishing systems, DITA being a notable example. But DITA, like most current publishing systems, is low in cohesion and tightly coupled. This is what makes such systems fragile, and what makes them fundamentally incompatible with each other, so that exchange between systems is difficult. Tight coupling is not without its advantages. It can make certain kinds of functionality, such as reuse by transclusion, easier to implement and understand. But it is fundamentally un-Web-like. (Examples of publishing systems that feature high cohesion and loose coupling are wikis and SPFE.)
Transclusion is based on an outdated model of publishing
The idea behind transclusion is to create a new publication by borrowing content dynamically from an existing publication. This rests on the idea that publication is a permanent state: that publication creates a stable object with a stable address. This too is fundamentally un-Web-like. The Web is a dynamic medium in which publication is a dynamic event that comes with no guarantee of permanence of content, place, or time.
With paper publishing, the writer controls the right to make copies (copyright), but once a copy is made and sold, the copy (not the work, but the copy of the work) becomes the property of the buyer. The sale is then irrevocable. The author has no right to demand the copy back. Once published, a work belongs to the public forever, or at least for as long as a legible copy remains in existence.
Electronic publishing does not work this way. Works are not so much copied as cached, and the cache of a work on your Kindle, for instance, can be removed. You buy a license to view the content, and that license is (or certainly can be) revocable.
This is one of the fundamental objections many people have to the electronic publishing model, with some vowing, for this very reason, never to buy ebooks but always to buy paper. They see the ebook model as violating the fundamental rights of the buyer. Really, though, it just reflects how a different technology makes a different kind of relationship possible.
There is nothing inherently wrong with the idea of renting content, after all. Libraries do it. Video services do it. Indeed, for a great deal of the content we consume, there is very little likelihood of our wanting to consume it again. It makes more economic sense to rent a one-time use of the content than to purchase a long-term or permanent right to consume it many times.
There is nothing wrong, either, with the idea of being able to unpublish something. The recent European right-to-be-forgotten laws enact some version of this. It is a misguided version, because it grants the subject of a publication, rather than the author, the right to revoke it. But the idea that if I have published something in the past that now embarrasses me, I should not be allowed to take it down makes no sense. It is equivalent to saying that once you have opened your curtains, you may never close them again. If I want to make a certain work public for a time, and then make it private again, why should I not be able to do so?
Yes, unpublishing breaks all kinds of reference and citation mechanisms that we have built up on the presumption that publishing is irrevocable, but that is not sufficient reason to deny the right to unpublish. (Spell check objects to the word unpublish, an indication that we have not, until now, considered it possible. It is a word we need to add to our lexicon.)
Equally important is the right to amend a publication as and when we see fit. This is so fundamental to the way the Web works that it has changed readers’ expectations about what a publication should be. Rather than expecting a URL to point permanently to the content originally published there, we expect it to point to the latest version of that content. We expect it to be current, not permanent.
Such an expectation, though, is fundamentally incompatible with transclusion. The act of publication provides no guarantee that the material you transclude today will be the material that shows up in your transcluding page tomorrow.
Yes, in some sense you could deliberately transclude the current state of a page element, such as when you include a live stock ticker on your page. But this is really a call to a Web service, relying on the specific promise that service makes about the content it will deliver in response to a particular query. It is not transclusion as a general mechanism for ad hoc content reuse. An ordinary Web page makes no such promises about what it will contain in the future or what address will fetch that content.
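The difference is easy to see in code. In this hedged TypeScript sketch, the endpoint and the response shape are invented, but the point stands: the caller relies on a published contract, not on the internal structure of someone else’s page.

```typescript
// Calling a Web service means relying on an explicit, published contract.
// The endpoint and response shape below are invented for illustration.

interface TickerQuote {
  symbol: string;
  price: number;
  asOf: string; // ISO 8601 timestamp
}

async function fetchQuote(symbol: string): Promise<TickerQuote> {
  // The (hypothetical) service promises that GET /quote?symbol=X returns
  // JSON in the TickerQuote shape. That promise, not the internal markup
  // of anyone's page, is what the caller depends on.
  const res = await fetch(
    `https://quotes.example.com/quote?symbol=${encodeURIComponent(symbol)}`
  );
  if (!res.ok) {
    throw new Error(`Quote service returned ${res.status}`);
  }
  return (await res.json()) as TickerQuote;
}

fetchQuote("EXMP").then((q) => console.log(`${q.symbol}: ${q.price}`));
```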
Transclusion, as a general mechanism, therefore, will never catch on.
Web services, as a specific mechanism, providing specific promises in specific formats, have, of course, caught on big time. There is nothing more that needs to be invented here. Rather, we need to change our view of how we create, manage, and deliver content to take greater advantage of the Web services model.