In summary, I don't like writing more code than I have to...
- This post first appeared on the Talis Consulting blog.
I opened my mailbox the other morning to a question from David Norris at BBC. They've been doing a lot of Linked Data work and we've been helping them on projects for a good while now.
The question stems from an ongoing debate within their development community, and it is a very fine question indeed:
We are looking at our architecture for the Olympics. Currently, we have:
- a data layer comprised of our Triple Store and Content store.
- a service layer exposing a set of APIs returning RDF.
- a presentation layer (PHP) to render the RDF into the HTML.
All fairly conventional - but we have two schools of thought:
Do the presentation developers take the RDF and walk the graph (say, using something like easyRDF) and pull out the properties they need?
Or:
Do we add a domain model in PHP on top of the easyRDF objects, such that developers are abstracted from the RDF and can work with higher-level domain objects instead, like athlete, race, etc.?
One group is adamant that we should work only with the RDF, because that is the domain model, and adding another domain model on top is a performance hit (especially in PHP) and just not the "Semantic Web way".
Others advocate that adding a domain model is a standard OO approach and is the 'M' in 'MVC': the fact that the data is RDF is irrelevant.
My opinion is that it comes down to the RDF data, and therefore the ontology: if the RDF coming through to the presentation layer is large and generic, it may benefit from having a model on top to provide more relevant, higher-level domain objects. But if the RDF is already fairly specific, like an athlete, then walking through the RDF that describes that athlete is probably easy enough and wouldn't require another model on top of it. So I think it depends on whether the ontology is modelled closely enough to what the presentation layer needs.
What do you think? I'd be really interested in your view.

Having received it, I figured a public answer would be useful for people to consider and chime in on in the comments; David kindly agreed.
First up, the architecture in use here is nicely conventional: simple and effective. The triple store holds the metadata and the XML content store holds the documents. We would tend to put everything into the triple store, either by re-modelling the XML in RDF or by using XML Literals, but this group needs very fast document querying using XPath and the like, so keeping their existing XML content store is a very sensible move. Keep PHP, or replace it with a web scripting language of your choice, and you have a typical setup for building web apps based on RDF.
The question is entirely about what code to write in that presentation layer, how much of it, and why; the data storage and data access layers are sorted and give back RDF. Having built a number of substantial applications on top of RDF stores, I have some experience in this space, and I've taken both of the approaches discussed above: converting incoming RDF to objects, and working with the RDF directly.
Let's get one thing out of the way - RDF, when modelled well, is domain-modelled data. With SQL databases there are a number of compromises required to fit within tables that create friction between a domain model and the underlying SQL schema (think many-to-many). Attempting to hide this is the life's work of frameworks like Hibernate and much of Rails. If we model RDF as we would a SQL schema then we'll have the same problems, but the IAs and developers in this group know how to model RDF well, so that shouldn't be a problem.
With RDF being domain-modelled data, and a graph, it can be far simpler to map incoming RDF to objects in your code than it is with SQL databases. That makes the approach seem attractive. There are, however, some differences too. By looking at the differences we can get a feel for the problem.
Cardinality & Type
When consuming RDF we generally don't want to make assumptions about cardinality: how many values of a given property there will be. For ordinary properties we can cope, either by making every member variable an array or by keeping only the first value we find when we only ever expect one. Neither is ideal, but both approaches work to map RDF into object properties.

Types, the classes of things, are a harder problem. It is common, and useful, in RDF to make type statements about resources, and very often a resource will have several types. Types are not a special case in RDF; just as with other properties, there can be many of them. This presents a problem when mapping to an OOP object model, where an object is of one type (with supertypes, admittedly). Many OOP languages let you specify multiple types, often through interfaces, but you do that at the class level, so it is consistent across all instances. In RDF we make type statements at the instance level, so each individual resource can have its own set of types. Maintaining that mapping in your OOP code will either a) be really hard or b) constrain what you can say in the data. Option b is not ideal, as it can prevent others from doing more with the data.
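To make the cardinality point concrete, here is a minimal sketch in plain PHP, with no framework assumed; the URIs and values are invented for illustration. It shows both coping strategies: keeping every value as an array, and keeping only the first value found.

```php
<?php
// One resource's description, as an array of property URI => value arrays.
// Everything here is illustrative, not taken from any real dataset.
$athlete = array(
    'http://www.w3.org/1999/02/22-rdf-syntax-ns#type' => array(
        'http://example.org/ontology/Athlete',   // several types on one
        'http://example.org/ontology/Swimmer',   // instance: perfectly
        'http://xmlns.com/foaf/0.1/Person',      // normal in RDF
    ),
    'http://xmlns.com/foaf/0.1/name' => array('Jane Doe'),
);

// Option 1: keep every value. Safe, but every member becomes an array.
$types = $athlete['http://www.w3.org/1999/02/22-rdf-syntax-ns#type'];

// Option 2: keep only the first value found. Convenient where you only
// ever expect one, but it silently drops data when that assumption fails.
$name = $athlete['http://xmlns.com/foaf/0.1/name'][0];

echo $name . ' has ' . count($types) . " types\n";
```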
Part of this mismatch on type systems comes from the OOP approach of combining data and behaviour into objects together. Over time this has been constrained and adapted in a number of ways (no multiple inheritance, introduction of interfaces) in order to make a codebase more manageable and hopefully prevent coders getting themselves too tied up in knots. RDF carries no behaviour, it's a description of something, so the same constraints aren't necessary. This is the main issue you face mapping RDF to an OOP object model.
Programming Style
What we have ended up with, in libraries like [Moriarty](http://code.google.com/p/moriarty/), are tools that help us work with the graph quickly and easily. SimpleGraph has functions like get_subjects_of_type($t), which returns a simple array of all the resource URIs of that type. You can then use those in get_subject_subgraph($s) to extract part of the graph to hand off to something else, say a render function.

Moriarty's SimpleGraph has quite a number of these kinds of functions for making sense of the graph without ever having to work with the underlying nested arrays directly. This pairs up very nicely with functions to do whatever it is you want to do:
```php
$events = $graph->get_subjects_of_type(Ontologies::Sport . 'Event');
foreach ($events as $event) {
    render_sporting_event($event);
}
```
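The same pattern extends naturally to the get_subject_subgraph($s) call mentioned above: pull out just the triples describing each event and hand that smaller graph to the presentation code. A sketch, with render_sporting_event() remaining a hypothetical function:

```php
$events = $graph->get_subjects_of_type(Ontologies::Sport . 'Event');
foreach ($events as $event) {
    // Extract the part of the graph describing this event...
    $subgraph = $graph->get_subject_subgraph($event);
    // ...and hand the smaller graph off to a (hypothetical) render function.
    render_sporting_event($event, $subgraph);
}
```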
Of course, functions in PHP and other scripting languages are global, and that's really not nice, so we often want to scope them, and that's where objects tend to come back into play.
Say we're rendering information about a sporting event; the pseudocode might look something like this:

```php
$events = $graph->get_subjects_of_type(Ontologies::Sport . 'Event');
foreach ($events as $event) {
    SportingEvent::render($event);
}
```
This approach differs from an MVC approach because the graph isn't routinely and completely converted into domain model objects; that conversion is very constraining. Instead it combines graph handling using SimpleGraph with objects for code scoping, and because the graph parts are bound to the objects that present them only at the last moment, the graph is not constrained by the OOP approach.

If you're using a more templated approach, so you don't want a render() function, then simple objects that give access to the values for display are a good option. They can make the code more readable than using graph-centric functions throughout, and they offer components that can be easily unit-tested.
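As a sketch of that templated option: a thin display object wrapping one resource's description. It assumes Moriarty's SimpleGraph plus a first-literal helper along the lines of get_first_literal(); the class name, property choice, and method names here are illustrative, not a fixed API.

```php
// A thin display object over one resource's description. The SimpleGraph
// call follows Moriarty's style; treat exact names/signatures as assumptions.
class SportingEventView
{
    private $graph; // SimpleGraph holding this event's description
    private $uri;   // the event resource's URI

    public function __construct($graph, $uri)
    {
        $this->graph = $graph;
        $this->uri   = $uri;
    }

    // Expose only what the template needs, with a sensible default.
    public function label()
    {
        return $this->graph->get_first_literal(
            $this->uri,
            'http://www.w3.org/2000/01/rdf-schema#label',
            'Untitled event'
        );
    }
}

// Usage in a template (sketch):
//   echo '<h1>' . htmlspecialchars($event_view->label()) . '</h1>';
```

The win here is readability and testability: a template only ever calls label() and friends, and in a unit test you can construct the view object around a small hand-built graph.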