IBM’s supercomputer “Watson” competes on Jeopardy, ties for top spot (1st night)

Posted on 14/02/2011


Earlier tonight, IBM’s supercomputer “Watson” went head to head against the two most successful players in Jeopardy history, Ken Jennings and Brad Rutter. The system, which has been in the making for 3 years and was designed specifically for the Jeopardy competition, tied with Brad at $5,000 at the end of the first night’s play.

The Jeopardy crew headed to the IBM Thomas J. Watson Research Center near New York, where the Watson supercomputer is housed. Getting into the specs of the system: it is split into two sections of 5 racks each, with 10 IBM Power7 750 Express units per rack, and combined the system packs 2,880 POWER7 processor cores. The Power7 series was unveiled in early 2010 to handle some of the world’s most demanding analytical and computational workloads. One of the biggest users of Power7 systems is eMeter, whose grid computing technology is used mostly in the data aggregation and energy sectors.

Getting back on track: this tuned system was developed by some of the most ingenious scientists on the globe, and it can interpret natural-language input (a.k.a. questions) and determine answers based on the knowledge it has stored. It is not connected to the Internet and has to physically trigger a button to buzz in when it knows the answer. The system essentially applies its stored knowledge to each question, using word associations as one of its main techniques.
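To make the word-association idea concrete, here is a toy sketch (nothing like IBM’s actual DeepQA pipeline, whose internals are not described here): candidate answers from a stored knowledge base are ranked by how many of the clue’s words their supporting evidence shares. The knowledge entries and clue below are invented for illustration.

```python
# Toy word-association scorer: rank stored candidate answers by the
# fraction of clue words their evidence text shares. This is only one
# of the many signals a real system like Watson would combine.

def tokenize(text):
    """Lowercase a string and return its words as a set."""
    return set(text.lower().replace(",", "").replace(".", "").split())

def score_candidates(clue, knowledge):
    """Return (answer, score) pairs sorted from best to worst match."""
    clue_words = tokenize(clue)
    scores = {}
    for answer, evidence in knowledge.items():
        overlap = clue_words & tokenize(evidence)
        scores[answer] = len(overlap) / len(clue_words)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Invented mini knowledge base for the sketch.
knowledge = {
    "Thomas J. Watson": "longtime chairman of IBM, namesake of its research center",
    "Alan Turing": "British mathematician who proposed a test of machine intelligence",
}

ranked = score_candidates(
    "This longtime IBM chairman gave his name to a famous research center",
    knowledge,
)
print(ranked[0][0])  # prints "Thomas J. Watson"
```

Real systems would of course use far richer evidence than raw word overlap, but the shape of the computation, scoring many candidates and ranking them, is the same.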

Some of the interesting features include its avatar, which shows how much thinking goes into a question and how confident it is in an answer it has derived. There is also a confidence display showing the top 3 answers it came up with, along with the confidence rating for each. You’ll notice a minimum threshold line: if all of the candidate answers fall below that level, Watson will not buzz in.

Unlike the other two contestants, Watson received the clues via text input (someone typing them in) rather than by interpreting voice, even though I believe it could hear what the other contestants said when responding to clues.

Brad started the game, but Watson quickly took charge, racking up $5,200. However, it started to falter along the way, at one point even repeating a wrong answer already given by Ken Jennings. The competition came down to Brad and Watson, but with Watson still chiming in with a couple of wrong answers, Brad was able to catch up and tie at $5,000. Ken Jennings was at the bottom with $2,000.

At the end of the night, pretty much everyone was impressed with how well Watson performed in a contest of this kind.

Does this mark the first serious step toward AI that could eventually think and learn on its own, or will computers always be limited by their programming? It’s hard to give a definite answer, since it really depends on the viewpoint taken, but on the whole Watson performed quite remarkably, and it’s a sign that the technology is definitely progressing.

Check out full coverage in the videos below.


