How New IBM Fellow Pours Analytics into Real-World Problems | A Smarter Planet Blog

As a teenager parking cars at a Fort Lauderdale country club, IBM customer analytics consulting leader Mike Haydock picked up much more than just tips.
Take the life lesson he received one day from Academy Award-winning actor George C. Scott. “He gave me a tremendous insight on how he got into the role of Patton,” Haydock said. “He told me he became that role. He became Patton. That’s how he was able to pull that performance off.”
Haydock says he applies that same philosophy to his own work with clients. “I start to think like them,” he said. “So I know everything about the problem they’re trying to solve and probably more.”
That immersive approach has made Haydock, known as the ‘Math Maestro,’ one of IBM’s most sought-after analytics experts, and demand for his expertise is only likely to grow now that he has been named an IBM Fellow. The Fellow designation recognizes an employee’s important contributions as well as their industry-leading innovations in developing some of the world’s most important technologies.
From designing the most efficient way to butcher cattle to creating an original dynamic pricing model for airline fares, Haydock has applied deep analytics to client problems across a broad set of industries.

explore-blog:

Massive visualization uses Google’s Ngram Viewer – a remarkable big-data tool for tracking changes in culture through word usage in more than 4 billion books – to depict political, scientific, cultural, and philosophical themes.

One of the most prominent patterns is the fall of “God” over the course of the 20th century, as well as the rise of utopias – a concept that has always enchanted us – in the aftermath of WWII.
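
For readers who want to poke at the underlying data themselves, here is a minimal sketch of tallying one word’s yearly usage, assuming the tab-separated 1-gram export layout (ngram, year, match_count, volume_count) that accompanies the Ngram Viewer; the file path is a placeholder, and a real analysis would also normalize by the corpus’s total word counts per year.

```python
# Minimal sketch: tally one word's yearly usage from a Google Books Ngram export.
# Assumes the tab-separated 1-gram layout (ngram, year, match_count, volume_count);
# the file path below is a placeholder.
import csv
from collections import defaultdict

def yearly_counts(path, word):
    counts = defaultdict(int)
    with open(path, encoding="utf-8") as f:
        for ngram, year, match_count, _volumes in csv.reader(f, delimiter="\t"):
            if ngram == word:
                counts[int(year)] += int(match_count)
    return counts

if __name__ == "__main__":
    # e.g., chart the trajectory of "God" across the 20th century
    counts = yearly_counts("eng-1gram-sample.tsv", "God")
    for year in sorted(y for y in counts if 1900 <= y <= 2000):
        print(year, counts[year])
```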

IBM to start crunching connected car data for Peugeot — GigaOM


IBM is putting its data analytics to work on information collected from Peugeot’s in-car sensors, ostensibly combining it with data from traffic infrastructure and smartphones to create better car apps and more network-aware vehicles.

IBM Invests $100 Million To Expand Design Business | Co.Design | business + design
In response to growing demand, IBM is spending $100 million to acquire talent and open Interactive Experience labs around the world.

The Future of Retailing

In response to the rising tide of online shopping, Macy’s is overhauling its flagship New York City store—with the goal of making it the most technologically advanced and compelling shopping destination anywhere. Macy’s and IBM teamed up this year to give a tour to retail industry influencers and reveal some of the insights from IBM’s annual retail consumer survey.

IBM’s more powerful Watson supercomputer is opening up for public use | The Verge

IBM’s Watson supercomputer is taking a big step towards public use. Today, the company announced plans to open Watson up to developers in 2014, establishing an open platform and API that would let coders build apps on top of the supercomputer’s database and natural language skills. It’s not the first time the project’s been used by outside groups, but the new platform will give developers complete control of the front end, and require only minimal input from the Watson team at IBM. Companies will still have to contract an instance of Watson from IBM, but once that’s done, their programs will be able to pull questions and answers from the supercomputer in real time.

IBM says the API itself is unusually simple, providing programs with a direct path to ask Watson natural language questions and get answers back with links to the relevant content from Watson’s database. The question is what the rest of the world might use it for. “We believe that this is such a significant development in the future of computing that we want other people involved in it,” said IBM’s chief technology officer Rob High. “We want to let other partners have a much deeper say in how cognitive computing evolves.” The program is launching with three partners, including a Fluid Retail deployment that plans to bring a Watson-powered personal-shopper feature to North Face’s e-commerce shop in 2014.
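
For a rough sense of the shape of such a question-in, answers-out service, here is a hypothetical sketch. The endpoint URL, authentication scheme, and JSON field names are illustrative assumptions, not IBM’s actual Watson API; contracting a Watson instance, per the article, is what would supply the real endpoint and credentials.

```python
# Hypothetical sketch of a natural-language Q&A call of the kind described above.
# The endpoint, auth scheme, and response fields are assumptions for illustration,
# not IBM's actual Watson API.
import requests

WATSON_URL = "https://example.com/watson/v1/question"  # placeholder endpoint

def ask(question, api_key):
    resp = requests.post(
        WATSON_URL,
        headers={"Authorization": f"Bearer {api_key}"},
        json={"question": question},
        timeout=30,
    )
    resp.raise_for_status()
    # Assume ranked answers, each with a confidence score and supporting links.
    return [(a["text"], a["confidence"], a.get("evidence", [])) for a in resp.json()["answers"]]

if __name__ == "__main__":
    for text, score, links in ask("Which jacket suits a winter hike in Vermont?", "API_KEY"):
        print(f"{score:.2f}  {text}  {links}")
```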

Scientific Data Has Become So Complex, We Have to Invent New Math to Deal With It - Wired Science

Simon DeDeo, a research fellow in applied mathematics and complex systems at the Santa Fe Institute, had a problem. He was collaborating on a new project analyzing 300 years’ worth of data from the archives of London’s Old Bailey, the central criminal court of England and Wales. Granted, there was clean data in the usual straightforward Excel spreadsheet format, including such variables as indictment, verdict, and sentence for each case. But there were also full court transcripts, containing some 10 million words recorded during just under 200,000 trials.

“How the hell do you analyze that data?” DeDeo wondered. It wasn’t the size of the data set that was daunting; by big data standards, the size was quite manageable. It was the sheer complexity and lack of formal structure that posed a problem. This “big data” looked nothing like the kinds of traditional data sets the former physicist would have encountered earlier in his career, when the research paradigm involved forming a hypothesis, deciding precisely what one wished to measure, then building an apparatus to make that measurement as accurately as possible.
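
To make the contrast concrete, here is a minimal sketch of the two data shapes DeDeo describes: the tidy per-case variables load trivially, while the transcripts arrive as raw text with no schema at all. The file names and column names below are illustrative assumptions, not the actual Old Bailey files.

```python
# Minimal sketch of the two data shapes: structured case records vs. raw transcripts.
# File and column names are illustrative assumptions, not the actual Old Bailey archive.
import csv
import re
from collections import Counter

# Structured side: one row per trial with indictment, verdict, sentence.
with open("old_bailey_cases.csv", newline="", encoding="utf-8") as f:
    cases = list(csv.DictReader(f))
verdicts = Counter(row["verdict"] for row in cases)  # straightforward to summarize

# Unstructured side: millions of words of courtroom speech with no fixed fields.
with open("old_bailey_transcripts.txt", encoding="utf-8") as f:
    tokens = re.findall(r"[a-z']+", f.read().lower())

print(len(cases), "cases;", len(tokens), "transcript tokens")
print(Counter(tokens).most_common(10))  # a crude first look; the hard analysis starts here
```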

Since the Affordable Care Act was signed three years ago, more than 370 innovative medical practices, called accountable care organizations, have sprung up across the country, with 150 more in the works. At these centers, Medicare or private insurers reward doctors financially when their patients require fewer hospital stays, emergency room visits and surgeries — exactly the opposite of what doctors have traditionally been paid to do. The more money the organization saves, the more money its participating providers share. And the best way to save costs (which is, happily, also the best way to keep patients alive) is to catch problems before they explode into emergencies.

Thus the accountable care organizations have become the Silicon Valley of preventive care, laboratories of invention driven by the entrepreneurial energy of start-ups.

These organizations have invested heavily in information technology so they can crunch patient records to identify those most at risk, those who are overdue for checkups, those who have not been filling their prescriptions and presumably have not been taking their meds. They then deploy new medical SWAT teams — including not just doctors but health coaches, care coordinators, nurse practitioners — to intervene and encourage patients to live healthier lives.
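
For a flavor of what that record-crunching looks like in practice, here is a minimal sketch that flags patients who are overdue for a checkup or have stopped filling prescriptions; the column names and thresholds are hypothetical, not any particular organization’s schema.

```python
# Minimal sketch of outreach flagging from patient records. Column names and
# thresholds are hypothetical illustrations, not a real organization's schema.
from datetime import date, timedelta
import pandas as pd

def flag_for_outreach(patients: pd.DataFrame, today: date) -> pd.DataFrame:
    overdue_checkup = patients["last_checkup"] < pd.Timestamp(today - timedelta(days=365))
    lapsed_refills = patients["days_since_last_refill"] > 45
    flagged = patients.loc[overdue_checkup | lapsed_refills].copy()
    flagged["overdue_checkup"] = overdue_checkup[flagged.index]
    flagged["lapsed_refills"] = lapsed_refills[flagged.index]
    # Highest-risk patients first, so care coordinators know whom to call today.
    return flagged.sort_values("risk_score", ascending=False)

if __name__ == "__main__":
    patients = pd.DataFrame({
        "patient_id": [1, 2, 3],
        "last_checkup": pd.to_datetime(["2013-01-10", "2013-11-02", "2012-06-30"]),
        "days_since_last_refill": [12, 60, 200],
        "risk_score": [0.4, 0.7, 0.9],
    })
    print(flag_for_outreach(patients, date(2013, 12, 15)))
```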

Winning World Wide | The IBM Research & GBS Data Visualization Prototype

What does a year of business wins — thousands of client projects from across every industry and around the world — look like for IBM’s consulting organization?

Based on actual data from GBS Sales Operations over the past 12 months, this big data visualization prototype by IBM Research and GBS Communications brings the velocity, volume and variety of all this real-world work to life, in 3D geography and a global mapping mode.

IBM research stakes its future on cognitive computing | ZDNet

IBM Senior Vice President John E. Kelly / Photo: Audrey Quinn

YORKTOWN HEIGHTS, NY – IBM began its colloquium on cognitive computing today with a jewel in the company’s crown. Senior Vice President John E. Kelly took the stage following a video from January 14th, 2011 – the day when IBM’s Watson machine handily beat Jeopardy champs Ken Jennings and Brad Rutter.

“I remember saying to the audience at that time,” recalled Kelly, “I don’t know if we’re going to win today. But it’s only a matter of when, not if, a system like Watson is going to surpass human beings at this task. People asked, ‘When did you realize how important this was?’ I think I realized in the year coming up to this that this was really special. Something was really changing in the way that computer systems interacted with people – something very big beyond just a game show is occurring here.”

So what is going on here? The world of data is now exploding, Kelly said, and machines like Watson have arisen to provide us with better ways of harnessing this information.

“We are literally creating a digital universe,” he said. “And the way we have to process that is different than we’ve ever experienced before. What we were creating was a system that would be able to deal with portions of this tsunami of data coming at us. If we try to use first generation computing against this wave, it can’t be done. So we need a whole different set of systems, extracting information from noisy data sources in order to come up with rational answers.”

Kelly broke down the history of computing into three eras. First, there was the tabulating era, with early calculators and tabulating machines made of mechanical systems and later, vacuum tubes. “In the first era of computing we basically fed data in on punch cards,” he said. “There was really no extraction of the data itself, the data was just going along for the ride.”

Next came the programmable era of computing, which ranged in form from vacuum tubes to microprocessors. “It was about taking processes and putting them into the machine,” Kelly explained. “It’s completely controlled by the programming we inflict on the system.”

And now, Kelly said, we are entering the era of cognitive computing, where computers can help us to unlock the insights that this new wealth of data holds. “If we don’t make this transition,” Kelly argued, “the data will be too big for us to have any impact on it. I think that this era of computing is going to be about scaling human capability. The separation between human and machine is going to blur in a very fundamental way.”

Vending machine offers free samples, provides rich user data to retailers

Free samples are a common way for retailers to promote new products, but supermarkets often use staff to hand them out and don’t collect data about the success of those campaigns. New startup Freeosk has now created a vending machine that automatically dispenses samples to loyalty card holders.

IBM’s massive bet on Watson

Dr. Mark Kris is among the top lung cancer specialists in the world. As chief of thoracic oncology at Memorial Sloan-Kettering (MSK) Cancer Center in New York City, he has been diagnosing and treating patients for more than 30 years. But even he is overwhelmed by the massive amount of information that goes into figuring out which drugs to give his patients — and the relatively crude tools he has to decipher that data. “This is the standard for treatment today,” he says, passing me a well-worn printout of the 2013 treatment guidelines in his office. We choose a cancer type. A paragraph of instructions says to pair two drugs from a list of 16. “Do the math,” he says. It means more than 100 possible combinations. “How do you figure out which ones are the best?”
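
The arithmetic behind “more than 100 possible combinations” is simply the number of unordered pairs that can be drawn from a list of 16 drugs, as the quick check below shows.

```python
# Number of ways to pair two drugs from a list of 16.
from math import comb

print(comb(16, 2))  # 120 unordered pairs, i.e. "more than 100 possible combinations"
```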