For the next three days, I’ll be reporting from the 8th International Semantic Web Conference (ISWC), taking place near Washington DC. A lot of what’s going on here is very technical, so rather than repeat everything I’m hearing, I’m going to talk about the broader themes that I see emerging. After this conference, I may try to tie them together into one comprehensive post.
This is my first theme. It’s about ontology alignment but is nevertheless very interesting. Yes, actually, it really is.
An ontology is basically a taxonomy of concepts and categories and the relationships between them – it’s sort of like a network but includes inheritance (if I specify a property of some group, like “dogs can bark,” it carries down to everything within that group, so we know that Shih Tzus can bark). Ontologies are pretty key to the Semantic Web because expressing relationships between concepts is essentially defining those concepts – I could turn philosopher and argue that the meaning of something can only be found in the way it relates to other things. Or I could not, and just argue that defining things in terms of their relationships is a really useful way to do it, especially if the point is to make machines understand those things and be able to reason about them. That’s why a large percentage of the people here are obsessed with building ontologies about certain things (like jet engines).
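To make that inheritance idea concrete, here’s a toy sketch in plain Python – no Semantic Web machinery at all, and the categories and properties are invented for illustration. A property asserted on “dog” is visible on “shih_tzu” just by walking up the taxonomy:

```python
# Toy ontology-style inheritance over a tiny taxonomy.
# The categories and properties here are made up for illustration.

taxonomy = {            # child -> parent
    "shih_tzu": "dog",
    "dog": "mammal",
    "mammal": None,
}

properties = {          # properties asserted directly on a category
    "dog": {"can_bark"},
    "mammal": {"has_fur"},
}

def inherited_properties(category):
    """Collect the properties of a category and all its ancestors."""
    props = set()
    while category is not None:
        props |= properties.get(category, set())
        category = taxonomy.get(category)
    return props

print(inherited_properties("shih_tzu"))  # contains both "can_bark" and "has_fur"
```

Nothing was ever said about Shih Tzus directly; the facts flow down from “dog” and “mammal,” which is exactly the reasoning an ontology is supposed to license.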
But ontologies are personal. What if I think of “Shih Tzu” as a sub-category of “pets” but you think it belongs under “dinner proteins?” Or how about if a liberal defines a homosexual relationship as a type of family and a conservative thinks it belongs under sexual perversion? There’s no way the world would ever be able to agree on one definitive ontology. Nor should it. The way we categorize things, the way we cut up and connect up everything in the world is key to who we are, how we think, and what we do. I – an atheist and cognitive psychology nerd – would go so far as to say that the human soul exists in our subjective, idiosyncratic ways of linking up information. So to impose a single ontology on the whole world – no matter how well thought out and exhaustive it is – would be tantamount to mind control or soul stealing.
To their credit, most semantic technologists I’ve talked to think this way also. That’s why they’re encouraging ontologies to be fruitful and multiply and represent as many worldviews as there are ontology-builders (though ideally there would be more than 15; I’m joking, I’m sure there are over 22 people who can build ontologies). But having a bunch of rival ontologies out there that define and categorize things in unique ways doesn’t sound like much of an organized system of data, right? That’s true, and that’s why a lot of other people are involved in aligning ontologies – matching up instances of the same concept as it appears in different ontologies.
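One of the simplest alignment heuristics is just comparing labels: if two ontologies use nearly identical names for a concept, propose a match. Here’s a minimal sketch of that idea using Python’s standard library; the two toy ontologies and the 0.8 threshold are invented for illustration, and real alignment systems use far more than string similarity:

```python
# A minimal sketch of ontology alignment by label similarity.
# The toy concept lists and the threshold are assumptions for illustration.
from difflib import SequenceMatcher

ontology_a = ["ShihTzu", "GoldenRetriever", "Cat"]
ontology_b = ["shih tzu", "golden retriever", "jet engine"]

def align(concepts_a, concepts_b, threshold=0.8):
    """Propose matches between concepts whose labels are nearly identical."""
    matches = []
    for a in concepts_a:
        for b in concepts_b:
            score = SequenceMatcher(None, a.lower(), b.lower()).ratio()
            if score >= threshold:
                matches.append((a, b, round(score, 2)))
    return matches

print(align(ontology_a, ontology_b))  # pairs ShihTzu/shih tzu, GoldenRetriever/golden retriever
```

Of course, label similarity is exactly where this gets hard: “Cat” and “jet engine” correctly fail to match, but so would two differently-named labels for the same concept, and two identically-named labels for different concepts would match when they shouldn’t.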
But…they’re still not doing it that well. That’s something Pat Hayes brought up during his keynote this morning. His topic was “blogic”: the new form of formal logic that’s required for the web. One of his problems with using traditional logic for the web is that people are mapping instances between different ontologies using the relationship “SameAs” (owl:sameAs, in OWL terms) – even though the fact that they come from different ontologies means they’re clearly not the same as each other. People are usually aware of that, but there’s still not much they can do, because there’s no “SortOfSameAs” or “SameAsInThisOneParticularWay” relationship in traditional logic that they can use instead.
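Here’s a hypothetical sketch of why a blanket “SameAs” over-commits. Reusing the Shih Tzu example from earlier (the two toy ontologies below are invented): if we declare the two concepts fully identical, everything asserted about one must hold of the other, and the merged concept ends up claiming both parent categories at once:

```python
# Hypothetical sketch of what full "SameAs" identity forces on us.
# The two toy ontologies are invented for illustration.

ontology_a = {"ShihTzu": {"is_a": "pet"}}
ontology_b = {"ShihTzu": {"is_a": "dinner_protein"}}

def merge_same_as(a, b, term):
    """Treat the two terms as identical: union all of their assertions."""
    merged = dict(a[term])
    for key, value in b[term].items():
        if key in merged and merged[key] != value:
            # Full identity forces us to assert both values at once.
            merged[key] = {merged[key], value}
        else:
            merged[key] = value
    return merged

print(merge_same_as(ontology_a, ontology_b, "ShihTzu"))
```

A weaker link that says “these correspond, but don’t propagate every assertion” avoids that collapse – SKOS, for instance, distinguishes skos:closeMatch from skos:exactMatch for roughly this reason – but traditional logic itself doesn’t hand you that middle ground.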
Ontology alignment is still a Big Problem, and it’s acknowledged as such by much of the Semantic Web community. If anyone knows of good solutions in the works, I’d love to hear about them, so please add some comments to this post.