
What is… Watson?

Just a year ago, a supercomputer called Watson changed forever how we imagine machine intelligence.

(via Sean Kelly Studio)

Book excerpt: Educating Watson - Stephen Baker


FEBRUARY 2011 • Stephen Baker

In 2007, IBM computer scientist David Ferrucci and his team embarked on the challenge of building a computer that could take on—and beat—the two best players of the popular US TV quiz show Jeopardy!, a trivia game in which contestants are given clues in categories ranging from academic subjects to pop culture and must ring in with responses that are in the form of questions. The show, a ratings stalwart, was created in 1964 and has aired for more than 25 years. But this would be the first time the program would pit man against machine.

In some sense, the project was a follow-up to Deep Blue, the IBM computer that defeated chess champion Garry Kasparov in 1997. Although a TV quiz show may seem to lack the gravitas of the classic game of chess, the task was in many ways much harder. It wasn’t just that the computer had to master straightforward language, it had to master humor, nuance, puns, allusions, and slang—a verbal complexity well beyond the reach of most computer processors. Meeting that challenge was about much more than just a Jeopardy! championship. The work of Ferrucci and his team illuminates both the great potential and the severe limitations of current computer intelligence—as well as the capacities of the human mind. Although the machine they created was ultimately dubbed “Watson” (in honor of IBM’s founder, Thomas J. Watson), to the team that painstakingly constructed it, the game-playing computer was known as Blue J.

The following article is adapted from Final Jeopardy: Man vs. Machine and the Quest to Know Everything (Houghton Mifflin Harcourt, February 2011), by Stephen Baker, an account of Blue J’s creation.


It was possible, Ferrucci thought, that someday a machine would replicate the complexity and nuance of the human mind. In fact, in IBM’s Almaden Research Center, on a hilltop high above Silicon Valley, a scientist named Dharmendra Modha was building a simulated brain equipped with 700 million electronic neurons. Within years, he hoped to map the brain of a cat, and then a monkey, and, eventually, a human. But mapping the human brain, with its 100 billion neurons and trillions or quadrillions of connections among them, was a long-term project. With time, it might result in a bold new architecture for computing, one that could lead to a new level of computer intelligence. Perhaps then, machines would come up with their own ideas, wrestle with concepts, appreciate irony, and think more like humans.

But such machines, if they ever came, would not be ready on Ferrucci’s schedule. As he saw it, his team had to produce a functional Jeopardy!-playing machine in just two years. If Jeopardy!’s executive producer, Harry Friedman, didn’t see a viable machine by 2009, he would never green-light the man–machine match for late 2010 or early 2011.



How Watson’s $1 Million Jeopardy Win Helps IBM’s Other Supercomputer

Source: Fast Company

After crushing the humanoids on Jeopardy this week, IBM’s Watson computer took home $1 million in prize money. Instead of throwing it away on Tiffany diamond-encrusted circuit boards or a Lamborghini to show up other supercomputers, Watson is showing that its silicon heart is in the right place. Half the money will go to World Vision, a nonprofit that helps children in poverty; the other half to World Community Grid, IBM’s humanitarian supercomputer.

As we chronicled last year, World Community Grid, or WCG, is an enormous volunteer computer network dedicated to scientific research. Ordinary citizens donate their idle laptops and desktops to be used for crunching algorithms and conducting mathematical experiments that accelerate research on clean energy and high-yield rice crops as well as cures for cancer, AIDS, muscular dystrophy, and other diseases. IBM started the free, open-source lab in 2004 to make a virtual supercomputer available to researchers who couldn’t otherwise afford one.

Half of Watson’s winnings, $500,000, will be given to scientists who apply for grants to use the WCG. The publicity should only help the grid grow beyond the 535,000 participants (and 1.7 million computers) in more than 80 countries. Yesterday, as word spread of the grid’s windfall, 1,300 people signed up, seven times more than on a typical day. (The Daily Septuple perhaps?)


Final Jeopardy: How can Watson conclude that Toronto is a U.S. city? - Stephen Baker

When I met yesterday with IBM’s chief scientist behind Jeopardy, David Ferrucci, he was wearing a Toronto Blue Jays jacket. It had to do with Watson’s only significant blooper in an otherwise dominant performance in the second half of its first game.

The Final Jeopardy category was U.S. Cities. The clue: “Its largest airport is named for a World War II hero, its second largest for a World War II battle.” Watson, strangely, came up with the response: “What is Toronto??????” It was programmed to add all those question marks to show the audience that it had very low confidence in the response. But still, how could it choose Toronto in a category for U.S. cities?

After the game, Ferrucci and his team were eager to explain Watson’s thinking process. Strangely, from a PR point of view, they seemed determined to focus on one moment of weakness in a session that exhibited Watson’s strengths. But they have poured four years of research into this machine, and they like to clear up doubts.

A few key issues:

1)  Watson can never be sure of anything. Is it possible that the old rock star Alice Cooper is a man? If Watson finds enough evidence, it will bet on it—even though the name “Alice” is sure to create a lot of doubt. This flexibility in its thinking can save Watson from gaffes—but also lead to a few.

2) Category titles cannot be trusted. I blogged about this earlier, in a post called “How Watson Thinks.” It has learned through exhaustive statistical analysis that many clues do not jibe with categories. A category about US novelists, for example, can ask about J.D. Salinger’s masterpiece. Catcher in the Rye is a novel, not a novelist! These things happen time and again, and Watson notices. So it pays scant attention to the categories.

3) If this had been a normal Jeopardy clue, Watson would not have buzzed. It had only 14% confidence in Toronto (whose Pearson airport is named for a man who was active in World War One), and 11% in Chicago. Watson simply did not come up with the answer, and Toronto was its guess.
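The buzz-or-pass behavior Ferrucci describes can be illustrated with a toy sketch. The threshold value and the candidate list here are illustrative assumptions, not Watson’s actual parameters — its real pipeline weighed hundreds of evidence features:

```python
# Toy illustration of confidence-gated answering.
# The 50% buzz threshold is a hypothetical stand-in for Watson's
# learned betting strategy.

def respond(candidates, final_jeopardy=False, buzz_threshold=0.50):
    """candidates: dict mapping answer -> confidence in [0, 1]."""
    best, confidence = max(candidates.items(), key=lambda kv: kv[1])
    if final_jeopardy:
        # In Final Jeopardy every player must write an answer,
        # however weak the evidence.
        return best, confidence
    if confidence >= buzz_threshold:
        return best, confidence   # confident enough to ring in
    return None, confidence       # stay silent on a normal clue

# The confidences reported for the famous clue:
clue_candidates = {"Toronto": 0.14, "Chicago": 0.11}

print(respond(clue_candidates))                       # normal clue: pass
print(respond(clue_candidates, final_jeopardy=True))  # forced guess: Toronto
```

On a normal clue the 14% top confidence falls well below the threshold and the machine stays silent; in Final Jeopardy it is forced to commit to its weak best guess.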

Even so, how could it guess that Toronto was an American city? Here we come to the weakness of statistical analysis. While searching through data, it notices that the United States is often called America. Toronto is a North American city. Its baseball team, the Blue Jays, plays in the American League. (That’s why Ferrucci was wearing a Blue Jays jacket.) If Watson happened to study the itinerary of my book tour for The Numerati, it included a host of American cities, from Philadelphia and Pittsburgh, to Seattle, San Francisco, and Toronto. In documents like that, people often don’t stop to note for inquiring computers that Toronto actually shouldn’t be placed in the group.

IBM Watson: Final Jeopardy! and the Future of Watson (via ibm)

After defeating the two greatest Jeopardy! champions of all time, the technology behind Watson will now be applied to some of the world’s most enticing challenges. Watch a breakdown of the match from Ken Jennings, Brad Rutter and the IBM team members as they look toward the future.

IBM Watson: The Face of Watson 

Preparing Watson for the Jeopardy! stage posed a unique challenge to the team: how to represent a system of 90 servers and hundreds of custom algorithms for the viewing public. IBM, in collaboration with a team of partners, created a representation of this computing system for the viewing audience — from its stage presence to its voice.


Watson Supercomputer Terminates Humans in First Jeopardy Round

Source: Wired

IBM supercomputer Watson closed the pod-bay doors on its human competition Tuesday night in the first round of a two-game Jeopardy match designed to showcase the latest advances in artificial intelligence. The contest concludes Wednesday.

By the end of Tuesday’s shellacking, Jeopardy’s greatest champions, Ken Jennings and Brad Rutter, were sporting decidedly sour looks.

Watson had a near-miss at the end of the game, when it incorrectly answered the Final Jeopardy clue, but when the dust settled, the supercomputer had earned $35,734, blowing out Rutter and Jennings, who had earned $10,400 and $4,800, respectively.

That final missed clue puzzled IBM scientists. The category was US Cities, and the clue was: “Its largest airport was named for a World War II hero; its second largest, for a World War II battle.”

Rutter and Jennings both correctly wrote “What is Chicago?” for O’Hare and Midway, but Watson’s response was a baffling “What is Toronto???” complete with the additional question marks.

How could the machine have been so wrong? David Ferrucci, the manager of the Watson project at IBM Research, explained on the company’s blog that several things probably confused Watson, as reported by Steve Hamm:


Engineering Intelligence: Why IBM’s Jeopardy-Playing Computer Is So Important

Source: Mashable

Language is arguably what makes us most human. Even the smartest and chattiest of the animal kingdom have nothing on our lingual cognition.

In computer science, the Holy Grail has long been to build software that understands — and can interact with — natural human language. But dreams of a real-life Johnny 5 or C-3PO have always been dashed on the great gulf between raw processing power and the architecture of the human mind. Computers are great at crunching large sets of numbers. The mind excels at assumption and nuance.

Enter Watson, an artificial intelligence project from IBM that’s over five years in the making and about to prove itself to the world next week. The supercomputer, named for the technology company’s founder, will be competing with championship-level contestants on the quiz show Jeopardy!. The episodes will air on February 14, 15 and 16, and if recent practice rounds are any indication, Watson is in it to win it.

At first blush, building a computer with vast amounts of knowledge at its disposal seems mundane in our age. Google has already indexed a wide swath of the world’s codified information, and can surface almost anything with a handful of keywords. The difference is that Google doesn’t understand a question like, “What type of weapon is also the name of a Beatles record?” It may yield some information about The Beatles, or perhaps an article that mentions weapons and The Beatles, but it’s not conceptualizing that the weapon and recording in question have the same name: Revolver.
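The distinction can be made concrete with a toy sketch: keyword retrieval surfaces anything that mentions the query terms, while actually answering the question requires finding the name shared by two concept sets. The two lists below are small hand-built stand-ins for a knowledge base, not anything Google or Watson really uses:

```python
# Toy contrast between keyword matching and concept intersection.

weapons = {"Revolver", "Pistol", "Rifle", "Dagger"}
beatles_records = {"Revolver", "Abbey Road", "Rubber Soul", "Help!"}

# Keyword search: returns any document mentioning the query terms,
# with no notion that the answer must belong to BOTH categories.
documents = [
    "The Beatles released many records in the 1960s.",
    "A revolver is a repeating handgun.",
]
hits = [d for d in documents if "Beatles" in d or "revolver" in d.lower()]

# Understanding the question: the answer is the name both sets share.
answer = weapons & beatles_records
print(answer)  # {'Revolver'}
```

The keyword search returns both documents without connecting them; the set intersection models the conceptual step — a single entity that is at once a weapon and a Beatles record — that Watson’s question-answering pipeline had to perform over language.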

Achieving this is what makes Watson a contender on Jeopardy!, a quiz known for nuance, puns, double entendres and complex language designed to mislead human contestants. Google Search, or any common semantic software, wouldn’t stand a chance against these lingual acrobatics.


Turns out IBM’s Watson is not a supercomputer! - Stephen Baker

Talking to a friend at IBM last night, I learned that Watson, technically speaking, is not a supercomputer. This was a bit disconcerting to me, since I refer to it as one a couple times in the book (which comes out as an ebook on Wednesday).


Without getting into a long discussion of MIPS and petaflops, two points: As I understand it, broadly speaking there are two types of supercomputers. The traditional kind, which is important for jobs like modeling the folding of proteins and the blasts of atomic bombs, requires insane amounts of mathematical calculations. Watson is definitely not one of those.

The second kind is more like the Google computer. It’s called “data-intensive supercomputing.” In Google’s case, it involves clusters of commodity computers working in concert to process the chaos of unstructured data that you find on the Web. Watson is closer to this model. Its specialty is words. (Here’s a 2007 paper on it by Randall Bryant, head of the computer science dept at Carnegie Mellon. That paper started me on a path that led to a BusinessWeek cover story I wrote on Google’s cloud.)
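The data-intensive model Bryant describes can be sketched in miniature as a word count split across shards of text — the pattern Google generalized into MapReduce. The corpus and the sequential loop here are illustrative; on a real cluster each shard would go to a different commodity machine:

```python
# Minimal sketch of data-intensive computing: split unstructured text
# into shards, tally each shard independently (the "map" step, which a
# cluster would farm out to many cheap machines), then merge the
# partial tallies (the "reduce" step).
from collections import Counter

def count_words(shard):
    """Map step: tally the words in one shard of text."""
    return Counter(shard.lower().split())

def merge(partials):
    """Reduce step: combine the per-shard tallies into one total."""
    total = Counter()
    for p in partials:
        total += p
    return total

corpus = ["Watson plays Jeopardy", "Jeopardy clues are words",
          "words words words"]
totals = merge(count_words(shard) for shard in corpus)
print(totals["words"])  # 4
```

The point of the pattern is that no single step needs heavy arithmetic — the work is dominated by moving and merging large amounts of messy text, which is exactly the regime Watson’s language processing lives in.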

The Watson you’ll see playing Jeopardy on Feb 14, 15, and 16 runs on a cluster of IBM Power 7 servers, and it features 2,880 processing cores. I guess by today’s standards, that unit can’t handle enough calculations per second to qualify as a supercomputer. (Here’s a piece with more of the technical specs.)


Final Jeopardy: Can a Machine Think?

Source: ReadWriteWeb

In early April of 1990, I was a contestant on Jeopardy. If you were watching back then, I was the “Supercomputer Programmer from Aloha, Oregon” who won three games and $38,000 and then lost - badly - in the fourth. So there’s quite a bit of personal history tied in with the news last week that a supercomputer from IBM, called Watson, had beaten two all-time Jeopardy! winners, Brad Rutter and Ken Jennings, in a practice round for the three-day charity competition on Feb. 14, 15 and 16.

A few weeks ago, I predicted that Jennings would win, Watson would place a close second and Rutter would place third in the overall contest, and I’m sticking with that prediction in spite of Watson’s first-place finish in the practice round last week. When I put on my handicapper’s hat, the scores of the practice round - $4,400 for Watson, $3,400 for Jennings and $1,200 for Rutter - are consistent with my assessment that Jennings and Watson are evenly matched and that Rutter is unlikely to win.


inothernews:

WISE OF THE MACHINE   In its first public demonstration, the computer system built by IBM defeated two “Jeopardy!” champions, including 74-consecutive-game-winner Ken Jennings, above, in a practice match ahead of a formal competition that will air on TV in mid-February. Afterwards, the computer, known as Watson, also took Ken Jennings’s lunch money. (Photo: AP via the Wall St. Journal; caption via the Journal. Except that last part.)