Semantic Embed: Part 3

Librarians are all about categorizing things. So is RDF. Are they a good match? That’s what I was thinking about at my third New York Semantic Web Meetup event…

The Librarian and RDF (Barbara McGlamery)

Barbara McGlamery, the first librarian of the evening, is actually an ex-librarian and, as of recently, an ex-ontologist for Time Inc (she just left Time for Martha Stewart). She talked about applying the Semantic Web to Time’s online content, which follows an ontology-lite approach: 1) set up ontologies to define rules and properties, 2) import them into taxonomies, where resources (like ‘Will Smith’) are described, and 3) create ‘navigational taxonomies’ so that editors and other people can access the information however they want (for example, by using alternate names). Whenever an editor publishes a new article, he or she manually tags it with all the relevant resources, which lets machines make basic inferences – say, noticing that you were reading an article about Will Smith and recommending articles about movies he starred in, based on the RDF triple ‘Will Smith leadPerformerIn Hancock’. Which sounds great, except that the inferencing part didn’t work that well. McGlamery explained that the data simply got too heavy, so inferencing was slow and couldn’t be very complex.
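To make the idea concrete, here is a minimal sketch of how that kind of triple-driven recommendation might work. This is not Time’s actual system; the triples, predicate names, and article IDs are all invented for illustration.

```python
# Toy triple store: (subject, predicate, object). All data is invented.
triples = [
    ("WillSmith", "leadPerformerIn", "Hancock"),
    ("WillSmith", "leadPerformerIn", "IAmLegend"),
    ("article:42", "about", "WillSmith"),
    ("article:99", "about", "Hancock"),
    ("article:77", "about", "IAmLegend"),
]

def recommend(reading_about):
    """Given the resource you're reading about, find articles about
    movies that resource (a performer) starred in."""
    movies = {o for s, p, o in triples
              if s == reading_about and p == "leadPerformerIn"}
    return sorted(s for s, p, o in triples if p == "about" and o in movies)

print(recommend("WillSmith"))  # → ['article:77', 'article:99']
```

Even this tiny example hints at the scaling problem McGlamery described: each inference step is a scan over the whole triple set, and chained inferences multiply that cost.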

I thought Time’s attempt to plunge into the Semantic Web was admirable (they were apparently very early adopters of the technology), but I couldn’t quite understand their reasons for it until it became clear that it was just-another-Old-Media story. Sure, Time was adopting innovative technology, but it was for decidedly non-innovative ends: as another means of control over their content. After her talk, I asked McGlamery why Time had even bothered with all this Semantic Web inferencing for their article-recommendation feature – why not just recommend articles that were popular with other readers like you? That’s how most recommendation engines work. McGlamery’s reply was that Time is a hundred-year-old company and therefore favors the ‘curatorial’ approach over the crowdsourcing one, which I think explains why ontologies looked so good to them. I’ve talked before about how ontologies are in some sense a form of control – though I think they can be used for great things, especially in the news business. The question is just whether Time is going about using them in the right way…

…which is something I won’t answer, but instead briefly describe:

Jon Phipps’ Rant about RDA

Actually, this story isn’t much different from the first one. Jon Phipps’ rant was also about old control-systems adjusting (or failing to adjust) to the new landscape of data and metadata. RDA stands for ‘Resource Description and Access’ and at this point consists of 1300+ pages intended to represent the collective wisdom of generations of catalogers. Phipps still thinks cataloging is worth doing (especially the informal kind that everyone does when they tag a photo in Flickr or bookmark a site on Delicious), but was mostly frustrated by the inflexibility of legions of catalogers in transitioning from their old rules to new ones.

Quote of the evening (from an audience member): “You can’t even get people to use Excel in the public library system. RDF? Forget about it.”

A Call for Data Visualization

This is my second post about the themes/problems with the Semantic Web, inspired by the three-day International Semantic Web Conference, which I’m attending. My first post was about Ontology Alignment. This post is about the critical need for ways to visualize the Semantic Web. I have no idea what my next post will be about.

Until people have some visual way of understanding the Semantic Web, it’s going to have an extremely difficult time breaking out of its self-selected, academic bubble.

That’s a problem, because even though the technologists here understand it and have some great ideas about applying it to various industries, there’s no chance of that coveted aha! moment – where the Semantic Web suddenly rises to prominence – until random people with unrelated, unforeseen problems really get it. (Whether that moment is going to happen, or should, is a matter of dispute.) I don’t mean that people need to be able to squeeze their eyes shut and imagine some tangled, shifting blob-thing that is the Semantic Web, though that would be nice. I’ve tried asking around for good metaphors: it’s just a giant database – well, more like a multi-dimensional database, if you can imagine that – or like that big computer they use in Minority Report – a ‘web of data,’ sure – but what does data look like? It isn’t easy. What we can see, though, is semantic data, and if it can be presented in a clear, flexible, and – importantly – intuitive way (no SPARQL querying required for interaction!), then I think people would start to get what it’s all about.

I’ve seen lots of applications of Semantic Web data, but what I keep looking for are visualizations. The difference is that an application shows you data for some particular problem or in some particular context, whereas a good visualization would provide a much broader look at many different kinds of data that can be easily manipulated and viewed in as many creative, useful ways as possible. Tools like AllegroGraph and SIMILE are much closer to what I’m imagining, but still not as broad and flexible as I’d like. I want something that I can point a friend to and say, “You’re looking at the Semantic Web! This is the data of the web, and these are the ways we have of linking it up so far. If it looks like a mess to you, try playing around until you’re looking at something important to you in a way that makes sense to you.”

Why aren’t applications good enough for this? Because:

1) they don’t look that different from regular mashups, so it’s hard for users to grasp the significance of the technology. From then on, they’ll think of the Semantic Web as something that solved some minor data integration problem, and won’t be able to imagine it in different contexts or solving different sorts of problems.

2) the people here aren’t going to come up with every potential application of Semantic Web data and – quite possibly – they won’t come up with the best applications. Someone else – less tech-savvy but more plugged into marketing or social networking or whatever – might be able to leverage the technology in a much better way, if only they understood its full power.

I’ve heard rumors (won’t say where) that a panel/workshop focusing on data visualization was turned down by the ISWC (apparently, this topic was also brought up at the Town Hall meeting, which I didn’t attend). At any rate, “interface” was among the terms most frequently associated with papers that were turned down, as they told us during Tuesday’s opening ceremony. I accept that, as a primarily academic conference, ISWC is catering more toward Semantic technologists/scholars than industry-oriented people. But I feel strongly enough about the need for better visualization of the Semantic Web to argue that this is a mistake. It reflects the internally-oriented nature of the Semantic Web academic community, which could benefit greatly from outside perspectives (Tom Mitchell – who’s not a traditional Semantic Web guy – gave such an intriguing keynote this morning in part because he’s able to bring foreign ideas and solutions to the community). The Semantic Web movement is past ready to open itself up to the rest of the world, and making the Semantic Web into something everyone can see and understand is the first step.

Ontology Alignment (is not the SameAs but is CloselyRelatedTo) Reconciling Worldviews

For the next three days, I’ll be reporting from the 8th International Semantic Web Conference (ISWC), taking place near Washington DC. A lot of what’s going on here is very technical, so rather than repeat everything I’m hearing, I’m going to talk about the broader themes that I see emerging. After this conference, I may try to tie them together into one comprehensive post.

This is my first theme. It’s about ontology alignment but is nevertheless very interesting. Yes, actually, it really is.

An ontology is basically a taxonomy of concepts and categories and the relationships between them – it’s sort of like a network but includes inheritance (if I specify properties about some group, like “dogs can bark,” then they carry down to things within that group, so we know that Shih Tzus can bark). Ontologies are pretty key to the Semantic Web because expressing relationships between concepts is essentially defining those concepts – I could turn philosopher and argue that the meaning of something can only be found in the way it relates to other things. Or I could not, and just argue that defining things in terms of their relationships is a really useful way to do it, especially if the point is to make machines understand those things and be able to reason about them. That’s why a large percentage of the people here are obsessed with building ontologies about certain things (like jet engines).
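That inheritance mechanism is easy to sketch. Here’s a toy version in Python (the class names and properties are made up, and real ontology languages like OWL are far richer than this):

```python
# Toy class hierarchy with inherited properties. All data is invented.
subclass_of = {"ShihTzu": "Dog", "Dog": "Mammal"}   # child -> parent class
instance_of = {"Fido": "ShihTzu"}                   # individual -> class
can = {"Dog": {"bark"}, "Mammal": {"breathe"}}      # class -> abilities

def abilities(thing):
    """Collect properties asserted on the thing's class and on every
    ancestor class, walking up the subclass chain."""
    cls = instance_of.get(thing, thing)  # accept an individual or a class
    found = set()
    while cls is not None:
        found |= can.get(cls, set())
        cls = subclass_of.get(cls)
    return found

print(abilities("Fido"))  # Fido inherits 'bark' from Dog, 'breathe' from Mammal
```

Nothing ever said Shih Tzus can bark directly; the reasoner derives it from the chain ShihTzu → Dog.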

But ontologies are personal. What if I think of “Shih Tzu” as a sub-category of “pets” but you think it belongs under “dinner proteins?” Or how about if a liberal defines a homosexual relationship as a type of family and a conservative thinks it belongs under sexual perversion? There’s no way the world would ever be able to agree on one definitive ontology. Nor should it. The way we categorize things, the way we cut up and connect up everything in the world is key to who we are, how we think, and what we do. I – an atheist and cognitive psychology nerd – would go so far as to say that the human soul exists in our subjective, idiosyncratic ways of linking up information. So to impose a single ontology on the whole world – no matter how well thought out and exhaustive it is – would be tantamount to mind control or soul stealing.

To their credit, most semantic technologists I’ve talked to think this way also. That’s why they’re encouraging ontologies to be fruitful and multiply and represent as many worldviews as there are ontology-builders (though ideally there would be more than 15; I’m joking, I’m sure there are over 22 people who can build ontologies). But having a bunch of rival ontologies out there that define and categorize things in unique ways doesn’t sound like much of an organized system of data, right? That’s true, and that’s why a lot of other people are involved in aligning ontologies – matching up the instances of some concept that shows up in different ontologies.

But…they’re still not doing it that well. That’s something Pat Hayes brought up during his keynote this morning. His topic was “blogic” – the new form of formal logic that the web requires. One of his problems with using traditional logic on the web is that people map instances between different ontologies using the relationship “SameAs” – even though the fact that they come from different ontologies means they’re clearly not the same as each other. People are usually aware of that, but there’s still not much they can do, because traditional logic offers no “SortOfSameAs” or “SameAsInThisOneParticularWay” relationship to use instead.
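Hayes’s complaint becomes obvious in a toy example. A blanket SameAs link merges *all* the properties of two resources, including ones that only make sense in their home ontology. The namespaces and properties below are invented:

```python
# Two ontologies describe "the same" concept with very different intent.
props = {
    "zoology:ShihTzu":  {"species": "Canis lupus familiaris"},
    "petstore:ShihTzu": {"price": "800 USD"},
}
same_as = [("zoology:ShihTzu", "petstore:ShihTzu")]  # a blanket SameAs link

def merged(resource):
    """Naive SameAs reasoning: union every property of every linked resource."""
    out = dict(props.get(resource, {}))
    for a, b in same_as:
        if resource in (a, b):
            other = b if resource == a else a
            out.update(props.get(other, {}))
    return out

# Now the zoology concept carries a retail price -- the kind of unintended
# consequence a weaker "SortOfSameAs" relationship would avoid.
print(merged("zoology:ShihTzu"))
```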

Ontology alignment is still a Big Problem and it’s acknowledged as such by much of the Semantic Web community. If anyone knows of good solutions in the works, I’d love to hear about them in the comments and add them to this post.

Semantic Embed: Part 2

This is my second posting on an event by the New York Semantic Web Meetup, which covers all aspects of the W3C-recommended Semantic Web from technology to business. An offshoot Meetup, which will focus more on natural language processing, computational linguistics, and machine learning, is supposed to start having meetings in January, and I plan to be there. See my first Meetup post here.

Semantic Web Programming – the book (John Hebeler)
The first slide in John Hebeler’s presentation last night had just one sentence: “Our ability to create information far exceeds our ability to manage it,” which is actually the best and most succinct argument for the Semantic Web that I’ve heard thus far. Hebeler made his point more visceral by asking us to guess how many files there were on his MacBook (the answer is over a million, about twice as many as most of us guessed). Imagining that many files on every computer hooked up to the Internet (there were over 1.5 billion Internet users as of June 30) is already overwhelming. And the bigger this mass of information gets, the stronger its pull toward entropy and the more we lose control. It’s something that should scare us, Hebeler said, because all that information is only as useful to us as our tools to sort through it; if we can’t find what we want, it’s the same as having lost it.

Luckily, Hebeler sees our salvation in the Semantic Web – or more specifically, in a highly flexible knowledge base that can handle both complex and simple types of data – and he’s co-authored the book to guide us there. It looks like it’s pretty easy to use: I’m not much of a programmer, but even I could follow the examples, all of which are demonstrated using Java code in the book. In trying to integrate data from, for example, Facebook and Gmail, which represent it in totally different formats, Hebeler gave us seven basic steps, or areas of code:

1) Knowledge-base creation

2) How to query it – just a simple search

3) Setting up your ontologies

4) Ontology/instance alignment – combine two ontologies, for example by teaching your program that what one ontology calls an “individual” is the same thing as what the other calls a “person,” or that “Kathryn” is equivalent to “Kate”

5) Reasoner – your program won’t incorporate its new understanding of equivalencies until you apply the reasoner

6) OWL restriction – allows you to apply constraints

7) Rules – allows you to apply rules
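The first five steps above can be sketched in miniature. This is Python rather than the book’s Java, and every name here is invented for illustration, not taken from Hebeler’s code:

```python
kb = set()                                   # 1) knowledge-base creation

def query(pred=None, obj=None):              # 2) querying -- a simple search
    return {t for t in kb if (pred is None or t[1] == pred)
                         and (obj is None or t[2] == obj)}

# 3) two "ontologies" describing the same person in different vocabularies
kb.add(("facebook:kate",   "facebook:type", "individual"))
kb.add(("gmail:kathryn",   "gmail:type",    "person"))

# 4) alignment: declare cross-ontology equivalences
equivalent = {"individual": "person", "facebook:kate": "gmail:kathryn"}

def reason():                                # 5) the reasoner applies them
    """Rewrite each triple through the equivalence table; until this runs,
    the knowledge base doesn't 'know' that kate and kathryn are one person."""
    for s, p, o in list(kb):
        kb.add((equivalent.get(s, s), p, equivalent.get(o, o)))

reason()
print(query(obj="person"))  # now finds the Facebook record too
```

Note how step 5 matters: before `reason()` runs, a query for “person” would miss the Facebook data entirely, which is exactly the point Hebeler made about the reasoner.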

He and the other co-authors also maintain a website where they field questions and add updates about the book.

Lucene (Otis Gospodnetic)

The Lucene presentation by Otis Gospodnetic was aimed primarily at programmers who might want to use the Lucene software for indexing and searching of text. Lucene is actually just one piece of Apache Lucene, an Apache Software Foundation open-source project that includes other sub-projects like Nutch (a framework for building web-crawlers) and Solr (a search server). All of it, of course, is free, and since I’m not expert enough to vouch for any of it, I’d suggest checking out the Apache Lucene website where everything is available for download.

Semantic Embed: Part 1

In her quest to bring you the most authentic, up-to-date news about the evolution of the web, this reporter is venturing where few go: straight to the heart of NYC’s little-known Semantic Web community. It is there, buried in rule interchange formats and Unicode, hidden behind coke-bottle glasses and tablet PCs, that she hopes to find the people who are actually building the web.

Last night was my second New York Semantic Web Meetup event, so I knew a little more about what to expect (free pizza and liberal use of PowerPoint, unusually high Y to X chromosome ratio). The night was divided between two speakers: Mike Cataldo (CEO of Cambridge Semantics, which uses semantic web technology to solve businesses’ problems) and Lee Feigenbaum (a VP at Cambridge Semantics and co-chair of W3C‘s SPARQL working group – which I’ll explain later). It alternated between pretty heavy business-talk (“…and that’s game-changing!”) and tech-talk (“Supplant the mystifying OPTIONAL/!bound method of negation with a dedicated construct”), but here’s what I was able to scrape together:

Cambridge Semantics
So Cambridge Semantics provides “practical solutions for today’s business problems using the most advanced semantic technology” – what does that mean? Essentially, they make it easier to get the data a company needs out of the applications that keep it locked up. A cool feature is that they have a plug-in to use Microsoft Excel as both a source for the data and as an interface for looking at it.

Apparently, there are a couple companies using Cambridge Semantics technology now, including a biopharmaceutical firm in Belgium and a startup called Book Of Odds that calculates the odds of various everyday activities.


SPARQL (Lee Feigenbaum)
As Lee Feigenbaum told me later, if SPARQL is working the way it should, most people shouldn’t even know that it’s there. That said, it’s probably useful to know a little about this core Semantic Web technology, if only to get a better idea of what the Semantic Web might be capable of.

SPARQL is a query language – it’s built for asking questions (and getting back the right answers). It’s got to be able to ask questions that pull together data from a lot of different sources in new, complex ways. The example query Feigenbaum gave in his talk was: “What are the names of all landlocked countries with a population greater than 15 million?” To answer that question, SPARQL first has to know about words like “country” and “population” (that “population” is a property of “country,” for example, and that “population” should refer to a number) and then combine information from different databases to get the right answer. What SPARQL does, then, is a whole lot more powerful than what Google does (Google just matches words in your question to popular pages where the same words show up). Try typing the example question into Google: when I did, the first hit was an entreaty for the world to help “the landlocked heart of Africa” and the second was actually a reference to Feigenbaum’s lecture. I’d tell you how long it took to get the right answer, except that I got sick of looking through the irrelevant documents somewhere on the sixth page of results.
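For a sense of what that question looks like in practice, here’s a sketch. The SPARQL in the comment is my own guess at the shape of such a query (the `geo:` vocabulary is invented, not from Feigenbaum’s talk), followed by the same pattern-match run in plain Python over toy data:

```python
# A query like Feigenbaum's example might look roughly like this in SPARQL
# (prefixes and property names are invented for illustration):
#
#   SELECT ?name WHERE {
#     ?country a geo:LandlockedCountry ;
#              geo:name ?name ;
#              geo:population ?pop .
#     FILTER (?pop > 15000000)
#   }
#
# The same graph-pattern match over toy triples, in Python:
triples = [
    ("Ethiopia", "a", "LandlockedCountry"),
    ("Ethiopia", "population", 85_000_000),
    ("Austria",  "a", "LandlockedCountry"),
    ("Austria",  "population", 8_300_000),
    ("Brazil",   "a", "Country"),
    ("Brazil",   "population", 190_000_000),
]

def landlocked_over(threshold):
    """Names of landlocked countries with population above the threshold."""
    landlocked = {s for s, p, o in triples
                  if p == "a" and o == "LandlockedCountry"}
    return sorted(s for s, p, o in triples
                  if p == "population" and o > threshold and s in landlocked)

print(landlocked_over(15_000_000))  # → ['Ethiopia']
```

The point is that the query matches *structured facts* (a country’s type and population as typed properties), not keywords on a page, which is exactly what keyword search can’t do.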

So it’s easy to see how SPARQL can be a great piece of technology. It’s also easy to see why SPARQL is a semantic web technology – it can only come up with answers if the information it’s looking at is written in a computer-understandable language – RDF in this case. One of the main things that gets people excited about the Semantic Web is its question-answering ability, and SPARQL is what’s going to make that possible.*

*Note: actually what Feigenbaum was talking about last night was SPARQL 2 – the next version of SPARQL that he’s helping to develop at the W3C. In the interests of space and your waning interest, I’m not going to outline the differences between SPARQL and SPARQL 2 – if you’re really concerned about it, take a look at Feigenbaum’s presentation slides yourself.