Decoupling your CMS creates new opportunities for collaboration, but also new possibilities to scale, more flexibility in implementation, and the chance to realize the dream of multi-channel content authoring.
How would the communication between the web editing tool and the backend work, then?
First of all, the web editing tool has to understand the contents of the page: what parts of the page should be editable, and how they connect together. If there is a list of news items, for instance, the tool needs to understand it well enough to let users add new items. The easy way to accomplish this is to add semantic annotations to the HTML pages. Such annotations could be made with Microformats or HTML5 microdata, but the most expressive option is RDFa.
RDFa is a way to describe the meaning of particular HTML elements using simple attributes. For example, using the Dublin Core and SIOC vocabularies to mark up a blog post:

```html
<div xmlns:dcterms="http://purl.org/dc/terms/"
     xmlns:sioc="http://rdfs.org/sioc/ns#"
     typeof="sioc:Post" about="http://example.net/blog/news_item">
  <h2 property="dcterms:title">News item title</h2>
  <div property="sioc:content">News item contents</div>
</div>
```
Here we get all the information necessary for making a blog entry editable: the type of the content, the identifier of the content item, and which elements of the page are editable properties of that item.
As a side effect, we also make our pages more understandable to search engines and other semantic tools, so the annotations serve not only the editing UI but also SEO.
Having the contents of a page described via RDFa makes it very easy to extract the content model into JavaScript. We can have a common utility library for doing this, but we also need a common way of keeping track of these content objects. Enter Backbone.js:
Backbone supplies structure to JavaScript-heavy applications by providing models with key-value binding and custom events, collections with a rich API of enumerable functions, and views with declarative event handling, and it connects all of this to your existing application over a RESTful JSON interface. With Backbone, the content extracted from the RDFa-annotated HTML page becomes easily manageable via JavaScript.
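To make the extraction idea concrete, here is a deliberately naive, self-contained sketch of pulling a content model out of an annotated page. The function name is hypothetical and the regular expressions are a stand-in for a real RDFa parser; the resulting plain object is exactly the kind of data one would hand to a Backbone.Model:

```javascript
// Illustrative sketch only: a real utility library would use a proper RDFa
// parser rather than regular expressions. All names here are hypothetical.
function extractContentModel(html) {
  var model = { id: null, type: null, fields: {} };

  var about = html.match(/about="([^"]+)"/);
  if (about) { model.id = about[1]; }

  var typeAttr = html.match(/typeof="([^"]+)"/);
  if (typeAttr) { model.type = typeAttr[1]; }

  // Collect every element carrying a "property" attribute as an editable field
  var re = /property="([^"]+)"[^>]*>([^<]*)</g;
  var match;
  while ((match = re.exec(html)) !== null) {
    model.fields[match[1]] = match[2];
  }
  return model;
}

var page =
  '<div typeof="sioc:Post" about="http://example.net/blog/news_item">' +
  '<h2 property="dcterms:title">News item title</h2>' +
  '<div property="sioc:content">News item contents</div>' +
  '</div>';

var model = extractContentModel(page);
console.log(model.id);                      // http://example.net/blog/news_item
console.log(model.fields['dcterms:title']); // News item title
```

In a real setup, an object like this would become the attributes of a Backbone.Model instance, with one collection per content type on the page.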
This JavaScript would work across all the different CMS implementations. Backbone.js ships with a quite nice RESTful JSON implementation for communicating with the server, but it can easily be replaced with a CMS-specific one simply by overriding Backbone.sync. See, for example, the localStorage Backbone.js sync implementation.
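As a rough sketch of such an override, the following follows Backbone's sync(method, model, options) contract but runs stand-alone here; the in-memory store and the fake model are stand-ins for a CMS-specific transport and a real Backbone model:

```javascript
// Sketch of a custom sync adapter following Backbone's
// sync(method, model, options) contract. The in-memory "store" is a
// stand-in for whatever transport a real CMS adapter would use.
var store = {};

function cmsSync(method, model, options) {
  var id = model.id;
  switch (method) {
    case 'create':
    case 'update':
      store[id] = model.toJSON();
      options.success(store[id]);
      break;
    case 'read':
      options.success(store[id]);
      break;
    case 'delete':
      delete store[id];
      options.success({});
      break;
  }
}

// With Backbone loaded, this would be wired up as: Backbone.sync = cmsSync;

// Minimal stand-in for a Backbone model so the sketch runs on its own
var fakeModel = {
  id: 'http://example.net/blog/news_item',
  toJSON: function () {
    return { 'dcterms:title': 'News item title' };
  }
};

cmsSync('create', fakeModel, { success: function () {} });
console.log(store[fakeModel.id]['dcterms:title']); // News item title
```

The point is that the rest of the application keeps calling save() and fetch() on its models; only the sync layer knows which backend it is talking to.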
The purpose of the content repository layer is to separate the server-side business logic from the actual storage. This makes it possible to reuse the same business logic with radically different storage implementations. For example, a CMS like Drupal, which is used for everything from very small sites to some of the biggest CMS-driven sites in the world, faces a difficult challenge in trying to find a storage solution that is optimal for all user groups. In the end the storage layer becomes a compromise rather than an optimal solution for each target group. If Drupal leveraged a clearly defined storage API, ideally based on an independent standard like PHPCR, users could choose the implementation that best fits their scalability requirements and their available hardware and software infrastructure.
For example, smaller sites might choose SQLite for persistence, while larger sites might prefer a solution like Jackrabbit. Yet other sites might persist to the file system but hook in a full-text search indexing solution like Solr or Elasticsearch. The key point is that any of these choices should require only configuration changes, not changes to the actual business logic. Via a feature-discovery API, the business logic can automatically adjust itself to leverage optional features.
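As an illustration of the "configuration only" idea, a hypothetical configuration fragment might look like the following. Every key and value here is invented for this sketch; no real CMS uses exactly this format:

```yaml
# Hypothetical configuration sketch: switching persistence backends
# without touching business logic. All keys and values are illustrative.
content_repository:
  backend: jackrabbit        # alternatives: sqlite, filesystem
  jackrabbit:
    url: http://localhost:8080/server
  search:
    indexer: elasticsearch   # optional full-text indexing hook
```

Swapping `jackrabbit` for `sqlite` would then be the entire migration effort, provided the business logic only ever talks to the repository API.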
Once the different content management systems describe their content with RDFa and provide a unified JavaScript API to it, lots of things become possible. While most systems will probably want to keep their own look and feel, many features could still be shared between them. Such shared features would be quite hard for an individual CMS project to implement alone, but with a common JavaScript layer available, the effort can be shared by all of the CMS projects implementing these ideas.
In the same way, once a CMS uses a content repository, it suddenly becomes possible to collaborate on the repository implementation with other projects, increasing the choices available to users and reducing the development resources required from each project.
Obviously, when decoupling the content authoring experience it is critical to ensure that content is actually managed as content rather than as “final pages” with a specific representation tied to a single page (on a single device). It is important to realize that just because one is using JavaScript rather than native HTML forms for editing, a WYSIWYG approach is not required. Even when using tools like create.js it might still make sense to explicitly avoid inline editing, or to use inline editing without WYSIWYG. The concepts described on this page therefore apply not only to editing in the frontend but also to editing in a backend system. A key advantage of inline editing over form-based editing, specifically textarea form fields, is that many users find it needlessly constraining to edit in a fixed-size field when the content on the final pages is allowed to flow freely. Furthermore, browsers still lack many widgets, for example for maps, dates, and other application-specific content. However, the key here is decoupling, not the specific representation of the editing tools in the client. Even when using RDFa or other similar semantic markup to describe the content, it can at times still make sense to render the editing UI using standard HTML form elements.
There have been prior efforts to do something similar. In the early 2000s, OSCOM built the Twingle tool, which was able to edit and save content across multiple CMSs. Then there were the Atom Publishing Protocol and Neutron protocol efforts, and also CMIS. But all of these required the systems to implement a particular server-side protocol. The advantage of the approach promoted here is that the only server-side change needed is adding RDFa annotations to the HTML templates; the rest happens on the JavaScript level.