At some point you’ll need to filter information from your organization’s social media systems to avoid information overload. This article discusses considerations in using "metadata" for filtering, whether implemented by algorithm or by human trial and error.
If someone defines their filters too narrowly, they reduce the opportunity for serendipity; but if they define their filters too widely, they are back to information overload.
- Knowing how many people have read an item is a big clue to its value.
- When you look at content ratings, consider that people are more comfortable giving positive ratings than negative ones, though cultural differences exist between Europe and the US. [The article doesn’t say which way this difference goes… anybody have any ideas on that?]
- Comments indicate how interesting something is: the number of commenters suggests breadth of interest and the number of comments its depth.
- While the most valued content does not always come from the most senior employees, high ratings from highly ranked employees usually carry more weight.
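As a rough illustration, the signals above could be combined into a single relevance score. Everything here (field names, weights, the log scaling, the seniority bonus) is a hypothetical sketch, not any product's actual algorithm:

```python
import math

def score_item(reads, avg_rating, num_commenters, num_comments,
               max_rater_seniority, rating_midpoint=3.5):
    """Score a social-network item on the metadata signals discussed above.

    reads               -- how many people have read the item
    avg_rating          -- mean rating on a 1-5 scale
    num_commenters      -- distinct commenters (breadth of interest)
    num_comments        -- total comments (depth of interest)
    max_rater_seniority -- seniority (0-10) of the highest-ranked rater
    rating_midpoint     -- set above the scale's true middle (3.0) to
                           discount the bias toward positive ratings
    """
    popularity = math.log1p(reads)              # diminishing returns on raw reads
    sentiment = avg_rating - rating_midpoint    # positive-rating bias discounted
    breadth = math.log1p(num_commenters)
    depth = math.log1p(num_comments)
    authority = 1.0 + 0.1 * max_rater_seniority  # senior ratings weigh more
    return authority * (popularity + sentiment + breadth + depth)

# A widely read, well-rated, much-discussed item outranks a quiet one.
busy = score_item(reads=500, avg_rating=4.5, num_commenters=12,
                  num_comments=40, max_rater_seniority=8)
quiet = score_item(reads=10, avg_rating=4.5, num_commenters=1,
                   num_comments=1, max_rater_seniority=2)
```

Tuning the weights is exactly the narrow-versus-wide trade-off mentioned earlier: aggressive thresholds kill serendipity, lax ones bring back the overload.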
Implement an enterprise social network without adequate filtering and you risk subjecting employees to information overload. And if they cope by ignoring the social network’s content altogether, they end up with too little information.
Only by embracing the rich vein of content metadata that a social network provides will employees be able to find the information they need. Via InfoManagement Direct
Big Data and online video analytics deliver extremely personalized media experiences that benefit both viewers and content publishers. Rather than “killing television,” the shift to mobile, multiscreen video viewing offers entertainment and technology companies a tremendous opportunity to create new and profitable digital distribution models. The key is for those companies to collaborate within a media universe that is changing dramatically, quarter by quarter.
How quickly is this space changing? Three years ago, only 2 percent of video consumption in the U.S. occurred online. Today, that number has jumped to 10 percent. And if current growth trends continue, it will reach nearly one-third by 2015.
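The arithmetic behind that projection can be checked quickly. The 2% and 10% figures come from the article; smoothing them into a constant compound annual rate is an assumption for illustration:

```python
# Online video's share of U.S. video consumption: 2% three years ago,
# 10% today (figures from the article).
start_share, today_share, years = 0.02, 0.10, 3

# Implied compound annual growth rate over those three years.
annual_growth = (today_share / start_share) ** (1 / years)   # about 1.71x/year

# At that constant rate the share would hit 50% within three more years,
# so "nearly one-third by 2015" is actually the conservative projection:
# it assumes growth slows from the current pace.
share_2015 = today_share * annual_growth ** 3
```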
“Currently being developed by DARPA … are contact lenses that enhance normal vision by allowing a wearer to view virtual and augmented reality images without the need for bulky apparatus. Instead of oversized virtual reality helmets, digital images are projected onto tiny full-color displays that are very near the eye. These novel contact lenses allow users to focus simultaneously on objects that are close up and far away. This could improve ability to use tiny portable displays while still interacting with the surrounding environment.”—
(N.B. IBM Research is also well along on various aspects of AI, such as Watson, and a new generation of cognitive computing chips and other frontiers of neuroscience, supercomputing and nanotechnology)
John Markoff of the New York Times reports, “Inside Google’s secretive X laboratory, known for inventing self-driving cars and augmented reality glasses, a small group of researchers began working several years ago on a simulation of the human brain. There Google scientists created one of the largest neural networks for machine learning by connecting 16,000 computer processors, which they turned loose on the Internet to learn on its own. Presented with 10 million digital images found in YouTube videos, what did Google’s brain do? What millions of humans do with YouTube: looked for cats.”
He continues, “The neural network taught itself to recognize cats, which is actually no frivolous activity. This week the researchers will present the results of their work at a conference in Edinburgh, Scotland. The Google scientists and programmers will note that while it is hardly news that the Internet is full of cat videos, the simulation nevertheless surprised them. It performed far better than any previous effort by roughly doubling its accuracy in recognizing objects in a challenging list of 20,000 distinct items. The research is representative of a new generation of computer science that is exploiting the falling cost of computing and the availability of huge clusters of computers in giant data centers. It is leading to significant advances in areas as diverse as machine vision and perception, speech recognition and language translation.”
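As a toy sketch of the unsupervised principle described above (a network learning structure from unlabeled data, with no one telling it what to look for), here is a minimal NumPy autoencoder. The layer sizes, learning rate, and random data are all illustrative assumptions; this is nothing like Google's 16,000-processor system, just the same idea in miniature:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((200, 16))              # 200 unlabeled "images", 16 pixels each

n_hidden = 4                           # compress 16 pixels down to 4 features
W1 = rng.normal(scale=0.1, size=(16, n_hidden))   # encoder weights
W2 = rng.normal(scale=0.1, size=(n_hidden, 16))   # decoder weights
lr = 0.05

def forward(X, W1, W2):
    H = np.tanh(X @ W1)                # hidden features the network learns itself
    return H, H @ W2                   # reconstruction of the input

_, recon = forward(X, W1, W2)
err_before = np.mean((X - recon) ** 2)

for _ in range(1000):                  # plain gradient descent on squared error
    H, recon = forward(X, W1, W2)
    G = 2 * (recon - X) / len(X)       # gradient of the loss w.r.t. recon
    gW2 = H.T @ G
    gW1 = X.T @ ((G @ W2.T) * (1 - H ** 2))   # backprop through tanh
    W1 -= lr * gW1
    W2 -= lr * gW2

_, recon = forward(X, W1, W2)
err_after = np.mean((X - recon) ** 2)  # reconstruction error drops with training
```

No labels were used anywhere: the network discovers a compressed representation of the data on its own, which is the same principle that let the Google network converge on cat-like features in YouTube frames.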
“Smartphones are helping make the densest cities the best places to live, as witnessed by property prices in Hong Kong, New York, Paris and London. By contrast, sprawling cities that rely heavily on cars – Moscow, Istanbul, Beijing – are becoming dysfunctional as roads clog up.”—The app of life (FT)
Fortune 500 companies are using Hadoop and big data technologies to transform their use of data for financial analysis, retail customer intelligence, IT operational insight, environmental and biomedical research, energy management, and even national security. But what results are they seeing? What have the early adopters of big data systems learned, and what business benefits have been realized from these investments?
Now that companies are starting to capture all of this data, translating it from the raw data source into information that makes sense to business users is no small task. This webinar will examine this issue and the techniques companies are using to generate meaningful business insights from big data.
We will discuss these topics and more:
- Customer examples of big data analytics
- What business benefits are customers seeing from big data implementations today?
- What were the challenges, and how were they overcome?
A mathematical model that has been used for more than 80 years to determine the hunting range of animals in the wild holds promise for mapping the territories of street gangs, a UCLA-led team of social scientists reports in a new study.
A 10-year-old Swedish girl has had a potentially life-threatening condition alleviated by receiving a vein grown from her own stem cells. Her condition, called extrahepatic portal vein obstruction, blocks the blood flow between the intestines and the liver.
Doctors at the Sahlgrenska University Hospital and the University of Gothenburg, Sweden, removed cells from a deceased donor’s vein with detergents and enzymes, leaving a scaffold upon which the girl’s own stem cells could attach and differentiate into the endothelial and smooth muscles cells that make up a healthy vein. This vein was then transplanted into the patient, and normal blood flow was reestablished.
Technology that connects people and objects with online identities is only as good or bad as the way we choose to use it.
Sometimes trying to predict the digital revolution seems very much like that old cliché of waiting for a bus. You know, you spend 12 years waiting for mobile to come along and then social and cloud arrive at the same time. That perfect, ahem, storm of connectivity – combined with the rapid adoption of smartphones – is changing society and shaking up business faster than any of us can imagine.
So imagine something even more disruptive: the social, cloud and mobile connected identity of everyday objects, or the “internet of things” as it is often called. Just two years ago, the internet of things was widely framed by examples such as a washing machine telling its owner and the manufacturer when it needed a service. Or the fridge having a chat with Waitrose (other fine supermarkets are available) when you’re running low on milk.
We’ll be freed from the rigidity of conventional input devices (e.g., keyboard, mouse, screen, remotes) and able to interact with the digital world anywhere—and any way—using a combination of gesture, touch, verbal commands, and targeted use of traditional interfaces.
“Imagine drinking orange juice out of an orange-flavored container that you can chew after. Or ice cream in a non-melting chocolate envelope. WikiCells, unveiled last week, could change how we store our food.”—
IBM UK are at Base Cities London and have been collecting Big Ideas for sustainable cities from delegates and our Twitter and Tumblr followers. We’ve passed your Big Ideas to Scriberia to animate and here are five of the best, along with Scriberia’s pictures.
We want your help choosing a winner, so take a look at the ideas below and cast your vote!
One Switch - how about one switch near your front door that turned off all non-critical electrical equipment when you left the house? How much energy could we save?
Solar and wind power on every roof - should urban rooftops be used to power cities?
Smart water meters - can we encourage sustainable behavior through providing more information on the resource we consume?
Building blocks from mixed plastic waste - Recycling plastic can be complex - what if we could turn mixed plastic into lightweight and affordable building materials?
Smart controls for decentralized infrastructure - can we optimize waste streams by monitoring decentralized infrastructure?
Whatever you think of Hollywood’s current preoccupation with all things three dimensional, in the world of mechanical engineering, 3D is definitely the way to go. But sharing those CAD-generated renders with clients is more often done Old Skool using print screens, drawings or the dreaded PowerPoint presentation. That’s a pretty static way of doing things, since those renders can’t be interacted with and the only way to level-up is a full physical model or 3D print-out, adding unnecessary expense and friction at various stages of the design process.
Therefore, it seems rather remiss that mechanical engineering community GrabCAD doesn’t support a 3D view.
Until now, that is.
GrabCAD’s new-fangled 3D functionality, currently in beta and supported only in Firefox and Chrome (like, seriously – see example), brings the startup closer to realising its vision of making it easier for mechanical engineers to collaborate: amongst themselves, by sharing models in an ‘open source’ way that reduces duplication, and with clients, by making it infinitely quicker to get feedback.
ST. PAUL, Minn. — Remember Watson, the IBM supercomputer that made headlines last year by trouncing the top two contestants on the TV game show Jeopardy?
Watson’s million-dollar prize went to charity and now Big Blue is seeking gainful employment for Watson other than as a professional game show contestant.
Today, IBM’s chief medical scientist visited a Minneapolis hospital to talk about how Watson’s artificial intelligence could help doctors wade through loads of research data and apply that knowledge to treating patients.
Dr. Martin Kohn, chief medical scientist at IBM Research, talks about how supercomputer Watson might be used in the health care industry Wednesday, June 13, 2012 at Abbott Northwestern Hospital in Minneapolis. Watson is IBM’s artificial intelligence computer system most famous for winning the quiz show Jeopardy. (MPR Photo/Jennifer Simonson)