Watson Puts the Current State of Search in Jeopardy

Does anyone have a map?

Is the robot apocalypse closer than originally thought?  After watching IBM’s Watson computer crush “Jeopardy!” all-star contestants this week, one must surely wonder.

More to the point, what are the various practical applications of Watson now that its dominance, at least as a quiz-show contestant on “Jeopardy!”, is unquestioned?

Watson was designed by IBM researchers, who suggested the idea of a “Jeopardy!” challenge because they considered the game show to be an excellent test of a computer’s ability to understand information expressed using natural language.

Watson could neither see nor hear, nor was it actually on the stage; an avatar that resembled a giant iPad represented it there.  Watson received all of its information electronically, in the form of text, at the same moment the human contestants saw the answer.

Not surprisingly, the computer power behind Watson took up most of a large room and, due to the heat generated by the IBM servers, required extensive cooling as well.

Competing on “Jeopardy!” is a huge challenge, even for the most intelligent and well-read individual.  The provided answers are sometimes vague, make use of clever wordplay, and require extensive knowledge of a vast range of topics, from history and current events to art and pop culture.  Entering the clues into a standard search engine would result in complete failure.

Far from being a simple search tool, Watson is designed to use natural language processing and thousands of algorithms to understand the phrasing, grammar, and intent of a question, and then find the answer that has the highest probability of being correct.  Just as human players must rely on their own knowledge, Watson uses a content library of thousands of documents, reference materials, encyclopedias, books, and plays and is not connected to the Internet or other online databases.
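
To make that concrete, here is a minimal sketch, in Python, of the general approach: generate candidate answers, score each one against several independent sources of evidence, and keep the candidate with the highest combined confidence.  The corpus, scorers, and weights below are hypothetical stand-ins of our own devising; IBM has not published Watson’s actual algorithms.

    # Toy sketch of evidence-based question answering. Everything here
    # (corpus, scorers, weights) is an illustrative assumption, not
    # IBM's actual implementation.

    CORPUS = {
        "Chicago": ("O'Hare airport named for a World War II hero; "
                    "Midway airport named for a World War II battle."),
        "Toronto": "Largest city in Canada; home to an American League baseball team.",
    }

    def keyword_overlap(clue, passage):
        """Fraction of clue words that also appear in a candidate's passage."""
        clue_words = set(clue.lower().split())
        return len(clue_words & set(passage.lower().split())) / len(clue_words)

    def category_match(category, passage):
        """Weak signal: does the category text appear in the passage?"""
        return 1.0 if category.lower() in passage.lower() else 0.0

    def rank_candidates(clue, category):
        """Blend the evidence scores into one confidence per candidate."""
        # The category gets only a small weight: "Jeopardy!" categories
        # often relate loosely to their clues, as Watson itself learned.
        scored = [(name,
                   0.8 * keyword_overlap(clue, passage)
                   + 0.2 * category_match(category, passage))
                  for name, passage in CORPUS.items()]
        return sorted(scored, key=lambda pair: pair[1], reverse=True)

Watson’s real pipeline reportedly ran many such scorers in parallel over a far larger corpus, but the principle is the same: many weak signals combined into a single confidence figure.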

Let’s look at a few of the questions that Watson had difficulty answering.

On Monday, the first day of the match, Watson answered incorrectly in the category “Olympic Oddities”, where the provided answer was “It was the anatomical oddity of U.S. gymnast George Eyser, who won a gold medal on the parallel bars in 1904.”

Watson responded “leg”, which was ruled incorrect because Watson did not specify that it was the lack of Eyser’s leg that was the oddity.  David Ferrucci, the manager of the Watson project at IBM Research, explained that Watson might not have understood “oddity,” and “The computer wouldn’t know that a missing leg is odder than anything else.”  He also noted that given enough time to absorb more material, Watson would probably make that connection.

Another interesting incorrect response (in the form of a question) occurred Tuesday night.

In the category “U.S. Cities”, the provided answer was “Its largest airport was named for a World War II hero; its second largest, for a World War II battle.”  Watson incorrectly wrote “What is Toronto”, while both humans correctly wrote “What is Chicago,” referencing O’Hare and Midway airports.  The problem for Watson, according to Ferrucci, was that Watson had learned that the categories in the show often have weak relationships to the questions, and thus it placed only weak emphasis on “U.S. Cities”.  In addition, he noted that there are cities in the U.S. named Toronto, and that Toronto has an American League baseball team, all snippets of information that may have led Watson astray.  Watson gave “Toronto” only a 32% chance of being correct and bet just $947, a tiny sum that did not expose it to the risk of losing its lead in the game.
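
Ferrucci’s explanation hints at a simple risk calculation.  The sketch below, with a confidence floor and wager cap of our own invention rather than IBM’s actual strategy model, shows how a bet might shrink as confidence falls while never risking the lead.

    # Hypothetical sketch of confidence-aware wagering. The floor and
    # cap are illustrative assumptions, not Watson's real betting model.

    def choose_wager(confidence, my_score, rival_score):
        """Shrink the bet as confidence falls; never risk losing the lead."""
        lead = my_score - rival_score
        if lead <= 0:
            # Trailing is a different problem; wager everything as a placeholder.
            return my_score
        safe_cap = lead - 1          # a wrong answer still leaves us in front
        confidence_floor = 0.25      # below this, bet next to nothing
        return int(safe_cap * max(0.0, confidence - confidence_floor))

    print(choose_wager(0.32, 25000, 14000))   # a token bet, roughly $770

With roughly 32% confidence and a comfortable lead, a rule of this shape produces a token bet in the hundreds of dollars, the same spirit, if not the same arithmetic, as Watson’s $947.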

Despite a few other missteps and miscues, Watson dominated the matches it played, displaying an amazing ability to understand what was being asked and to respond quickly and correctly.

How did Watson reach a final response?  Watson considered thousands of possibilities and ranked them; the top three were displayed, along with the probability it assigned to each.  That would be a welcome addition to a Google search: an honest appraisal of how likely each result is to be correct or, better still, how useful it is.
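
As a rough illustration, confidences like these could be produced by normalizing raw candidate scores, such as those returned by rank_candidates in the earlier sketch, into probabilities and keeping the best three.  The softmax normalization here is our assumption; IBM’s actual confidence calibration has not been disclosed.

    import math

    def top_three(scored):
        """Normalize raw scores into probabilities; return the best three."""
        weights = [math.exp(score) for _, score in scored]
        total = sum(weights)
        probs = [(name, weight / total)
                 for (name, _), weight in zip(scored, weights)]
        return sorted(probs, key=lambda pair: pair[1], reverse=True)[:3]

    print(top_three([("Chicago", 2.1), ("Toronto", 0.9),
                     ("Omaha", 0.4), ("Boston", 0.1)]))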

With Watson, at least on “Jeopardy!”, we moved one step closer to the type of search envisaged in science fiction (“Computer.  Does the king of the Watusis drive an automobile?”).  We won’t have the computer from Star Trek here tomorrow, or even next year, but the technology developed for Watson does bring us that much closer to search tools that resolve search’s Achilles’ heel: the fact that it delivers results as opposed to answers.

Jonathan B. Spira is CEO and Chief Analyst at Basex.  Cody Burke is a senior analyst at Basex.
