One of the latest artificial intelligence systems from MIT is as smart as a 4-year-old

When kids eat glue, they’re exhibiting a lack of common sense. Computers equipped with artificial intelligence, it turns out, suffer from a similar problem.

While computers can tell you the chemical composition of glue, most can’t tell you whether it’s a gross choice for a snack. They lack the common sense that is ingrained in adult humans.

For the last decade, MIT researchers have been building a system called ConceptNet that can equip computers with common-sense associations. It can represent, for example, that a person may desire a dessert such as cake, which has the quality of being sweet. The system is structured as a graph, with connections between related concepts and terms.

The University of Illinois at Chicago announced today that its researchers put ConceptNet to the test with an IQ assessment developed for young children. ConceptNet 4, the second-most recent iteration from MIT, earned a score equivalent to that of an average 4-year-old. It did well at vocabulary and at recognizing similarities, but did poorly at answering “why” questions. Children would normally earn similar scores across all of the categories.
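The graph structure described above can be sketched in a few lines: nodes are concepts, and labeled edges carry relations between them. The relation names below (IsA, HasProperty, Desires) mirror ConceptNet’s conventions, but the class and the tiny data set are hypothetical, for illustration only.

```python
from collections import defaultdict

class CommonSenseGraph:
    """A minimal ConceptNet-style graph: concepts linked by labeled relations."""

    def __init__(self):
        # edges[concept] -> list of (relation, other_concept) pairs
        self.edges = defaultdict(list)

    def add(self, start, relation, end):
        self.edges[start].append((relation, end))

    def related(self, concept, relation):
        """Return all concepts linked from `concept` by `relation`."""
        return [end for rel, end in self.edges[concept] if rel == relation]

g = CommonSenseGraph()
g.add("person", "Desires", "dessert")
g.add("cake", "IsA", "dessert")
g.add("cake", "HasProperty", "sweet")

print(g.related("cake", "HasProperty"))  # → ['sweet']
```

A system built this way can follow chains of assertions (cake is a dessert, desserts are desired), which is the kind of association the article says ConceptNet encodes.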

2045: A New Era for Humanity

In February 2012 the first Global Future 2045 Congress was held in Moscow. There, more than 50 of the world’s leading scientists from multiple disciplines met to develop a strategy for the future development of humankind. One of the main goals of the Congress was to construct a global network of scientists to further research on the development of cybernetic technology, with the ultimate goal of transferring an individual human consciousness to an artificial carrier.

[N.B. Some of this is way out there, and breathlessly speculative. But from everything we know about exponential technological change, the world 10, 20 or 30 years from now is likely to be far more radically different than we can even imagine.]

New AI Can Learn a Game By Watching You Play, Develop Its Own Strategies to Beat You

As it watches, [the computer] uses standard image-processing tools to recognise changes in the separate board squares and pieces of a game, while ignoring extra details like human hands. The videos allow the system to learn the rules by logging what the board looks like when a game has been won, and what counts as a legal move. Having mastered the rules, the software plays the game by examining all possible moves and choosing those it deems most likely to lead to a win.

As you would expect, its performance depends on the complexity of the game. Connect 4 has relatively few possible positions, so the trained computer is very hard to beat.

(via Computer watches you play a game, then beats you at it - tech - 10 July 2012 - New Scientist)
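The search strategy the excerpt describes, examining every possible move and picking one that leads to a win, can be sketched with a toy game. The code below plays Nim (take 1 to 3 stones; whoever takes the last stone wins), chosen for brevity rather than taken from the article; the function names are my own, not the researchers’.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def can_win(stones):
    """True if the player to move can force a win from this position."""
    if stones == 0:
        return False  # the previous player took the last stone and already won
    # Examine all possible moves: a position is winning if any move
    # leaves the opponent in a losing position.
    return any(not can_win(stones - take) for take in (1, 2, 3) if take <= stones)

def best_move(stones):
    """Return a move that forces a win, or None if every move loses."""
    for take in (1, 2, 3):
        if take <= stones and not can_win(stones - take):
            return take
    return None

print(best_move(5))  # → 1 (leaving 4 stones, a losing position for the opponent)
```

For a small game like this, the full game tree fits in memory, which is why low-complexity games such as Connect 4 favor the exhaustive-search computer.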

The Stanford Education Experiment Could Change Higher Learning Forever | Wired Science | Wired.com

I’m enrolled in CS221: Introduction to Artificial Intelligence, a graduate-level course taught by Stanford professors Sebastian Thrun and Peter Norvig.

Last fall, the university in the heart of Silicon Valley did something it had never done before: It opened up three classes, including CS221, to anyone with a web connection. Lectures and assignments—the same ones administered in the regular on-campus class—would be posted and auto-graded online each week. Midterms and finals would have strict deadlines. Stanford wouldn’t issue course credit to the non-matriculated students. But at the end of the term, students who completed a course would be awarded an official Statement of Accomplishment.

People around the world have gone crazy for this opportunity. Fully two-thirds of my 160,000 classmates live outside the US. There are students in 190 countries—from India and South Korea to New Zealand and the Republic of Azerbaijan. More than 100 volunteers have signed up to translate the lectures into 44 languages, including Bengali. In Iran, where YouTube is blocked, one student cloned the CS221 class website and—with the professors’ permission—began reposting the video files for 1,000 students.

What is… Watson?

Just a year ago, a supercomputer called Watson changed forever how we imagine machine intelligence.

(via Sean Kelly Studio)

Watson’s New Job: IBM Salesman - Technology Review

IBM’s Watson supercomputer reached a milestone in artificial intelligence last February when it beat two Jeopardy! champions. Millions watched, and while some experts dismissed it as a publicity stunt, IBM said Watson would soon be helping doctors diagnose illness, and hinted at talks with gadget companies about Watson helping consumers with questions.

As IBM prepares to celebrate the first anniversary of the televised contest on February 16, though, it is not yet offering the question-answering system for sale. Although limited trials using Watson technology are underway in health and financial services businesses, the AI prodigy is having its biggest impact by pulling in new customers for existing business products—as IBM persuades them to organize their data into formats that an AI like Watson can better understand. IBM has created a slogan, “Ready for Watson,” to help sell its products that way.

IBM hasn’t disclosed how much it spent developing Watson, but the lengthy research and development process is believed to have cost in the tens of millions of dollars. To play Jeopardy, the system needed to understand the meaning of the answers posed as clues, and to rapidly apply general knowledge—distilled from the Internet and other sources—to identify possible answers. That required novel software and an expensive supercomputer.

“Customers are coming to us and saying, ‘I’d like a Watson,’” says Stephen Gold, IBM’s director of worldwide marketing for Watson. Eventually, that might be possible, but first customers need to have the right data sets for Watson to operate on.

What does Watson know about birds? - Stephen Baker

Stephen Baker is author of Final Jeopardy, the definitive book on the IBM Watson computer that competed successfully on Jeopardy in early 2011.

I happened to see this heron in a Montclair pond a week ago, and it led me to wonder what my old friend Watson would make of that lovely water bird. Specifically, if Watson’s analysis indicates that a heron is a bird and it also has strong evidence that birds fly, would Watson be able to infer that a heron can fly?

The answer is no. The reason is that Watson, for all of its achievements on Jeopardy, is incapable of generalizing, much less coming up with theories. Humans do this all the time. You see toddlers who mess up their irregular verbs, saying that Johnny “falled” or that Timmy “eated.” They’re creating a theory of language based on the patterns they’ve picked up. And when toddlers see that robins and bluebirds and cardinals fly, they quickly generalize about birds.

That’s a key aspect of human intelligence, and Watson doesn’t have it. Watson could find many references to herons, both as birds and as flying animals, and it could correctly answer a question about what herons do with their wings. But that conclusion doesn’t inform any broader thinking on the subject. That’s not what it’s built for.

So, as we look to the job market, humans who make a living by synthesizing information and coming up with theories are not likely to be displaced by computers anytime soon. Those who comb through data to find answers, by contrast, will face increasingly stiff (and tireless) competition.

To be fair to Watson, I should add that its inability to generalize sometimes pays dividends. This is because generalizations usually have exceptions. If we come up with a theory that birds fly, penguins and ostriches can confound us. Watson, by contrast, would come across very little evidence of flying ostriches or penguins and—unburdened by theory—would sidestep that trap.

It’s odd that I’m writing here about flying, because that’s the very metaphor that occurs to me. Our minds soar—and sometimes we lose sight of the reality on the ground. Watson never leaves the ground. It sifts ceaselessly, analyzing and crunching. Never distracted by ego, desire, theory, or hundreds of other human qualities, it just churns out the statistically most likely answers. In the cognitive realm, Watson’s our beast of burden.
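The idea of churning out the statistically most likely answer can be illustrated with a toy ranker: collect evidence scores for each candidate answer and return the highest-scoring one. Everything here (the function, the candidates, the scores) is a made-up sketch for illustration, not Watson’s actual pipeline.

```python
def most_likely_answer(candidates):
    """Pick the candidate answer whose evidence scores sum highest.

    `candidates` maps each candidate answer to a list of per-passage
    evidence scores (hypothetical numbers, for illustration).
    """
    return max(candidates, key=lambda c: sum(candidates[c]))

# Hypothetical evidence for the heron question: can a heron fly?
evidence = {
    "yes": [0.8, 0.7, 0.9],  # many passages mention herons flying
    "no": [0.1],             # little evidence to the contrary
}
print(most_likely_answer(evidence))  # → yes
```

Note that a ranker like this answers each question from its own pile of evidence; nothing carries over into a general theory that birds fly, which is exactly the limitation Baker describes.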

100,000 Sign Up For Stanford’s Open Class on Artificial Intelligence. Classes With 1 Million Next? | Singularity Hub

A groundbreaking change has struck academia, and its reverberations may be felt for years to come. One of Stanford’s first full courses ever to be made openly available online has gone viral. In a matter of weeks it has signed up more than 100,000 students from around the world! Even as I wrote this article, another 5,000 joined! As news of the course continues to spread, the class could reach even more epic proportions – we could easily see interest skyrocket to 200,000 or even 300,000 or more. Classes of 1 million or tens of millions may be in our future. If Stanford can succeed in teaching classes of 100,000+ students at a time, what will it mean for education in general?

Artificial Intelligence without Human Intervention from ai-one - semanticweb.com

A recent article reports, “A new technology enables almost any application to learn like a human. The Topic-Mapper software development kit (SDK) by ai-one inc. reads and understands unstructured data without any human intervention. It allows developers to build artificial intelligence into almost any software program. This is a major step towards what Ray Kurzweil calls the technological singularity – where superhuman intelligence will transform history.”

ai-one, a SemTech Silver Sponsor, has developed a machine learning approach that is unlike any other: “ai-one’s technology extracts the inherent meaning of data without the need for any external references. A team of researchers spent more than eight years and $6.5 million building what they call ‘biologically inspired intelligence’ that works like a brain. It learns patterns by reading data at the bit level. ‘It has no preconceived notions about anything,’ explains founder Walt Diggelmann, ‘so it works in any language and with any data set. It simply learns what you feed it. The more it reads, the more it learns, the better it gets at recognizing patterns and answering questions.’”

IBM Watson: Final Jeopardy! and the Future of Watson (via ibm)

After defeating the two greatest Jeopardy! champions of all time, the technology behind Watson will now be applied to some of the world’s most enticing challenges. Watch a breakdown of the match from Ken Jennings, Brad Rutter and the IBM team members as they look toward the future.

Watson Supercomputer Terminates Humans in First Jeopardy Round

Source: Wired

IBM supercomputer Watson closed the pod-bay doors on its human competition Tuesday night in the first round of a two-game Jeopardy match designed to showcase the latest advances in artificial intelligence. The contest concludes Wednesday.

By the end of Tuesday’s shellacking, Jeopardy’s greatest champions, Ken Jennings and Brad Rutter, were sporting decidedly sour looks.

Watson had a near-miss at the end of the game, when it incorrectly answered the Final Jeopardy clue, but when the dust settled, the supercomputer had earned $35,734, blowing out Rutter and Jennings, who had earned $10,400 and $4,800, respectively.

That final missed clue puzzled IBM scientists. The category was US Cities, and the clue was: “Its largest airport was named for a World War II hero; its second largest, for a World War II battle.”

Rutter and Jennings both correctly wrote “What is Chicago?” for O’Hare and Midway, but Watson’s response was a baffling “What is Toronto???” complete with the additional question marks.

How could the machine have been so wrong? David Ferrucci, the manager of the Watson project at IBM Research, explained on the company’s blog that several things probably confused Watson, as reported by Steve Hamm:
