These days, many of us in consumer and enterprise tech companies are working on predictive systems that provide modest but valuable augmentation of human intelligence and business processes. I think this scale of ambition is a good fit for the current state of the art in machine learning and probabilistic inference. Think personal assistants like Siri or Google Now, predictive analytics in the enterprise for churn detection and ad campaign targeting, and personalized news apps like Prismatic.
But I think that the long-term story is much more exciting, and much further from our experience with synthetic intelligence to date. I believe that we are on the path to building the equivalent of global-scale nervous systems. I'm thinking Gaia's brain: distributed but unified intelligences that gather data from sensors all over the world, and that synthesize those data streams to perceive the overall state of the planet as naturally as we perceive with our own sensory systems. This isn't just big data; this is big inference.
The world as brain
To make this idea of a global intelligence more concrete, consider the startup Premise. As a first step toward the kind of perceptual systems that I am talking about, Premise is using various signals from the public internet as a set of massively distributed sensory organs, and then leveraging this information to develop more informative economic indexes.
Now consider what other problems such systems could solve in the coming decades. We could gain a true understanding of the climate system on a granular but global level. We could track and coordinate every vehicle on the planet, to improve energy efficiency and optimize scheduling to all but eliminate traffic jams. Or moving from vehicles to parts and materials, we could create and manage truly robust supply chains that maintain efficiency and resilience in the face of unexpected events. The possibilities go on, and are truly awesome.
Key problems in the way
To get there, though, we'll have to confront a number of hurdles:
We need to gather the data. Emerging, massively distributed and networked sensors will be the equivalent of human sensory transducers like rods and cones. The rise of the Internet of Things also means that every device will be able to contribute its own data stream to a collective understanding of the current state of the world.
Much of the content of big data these days is exhaust: data originally collected for transactional or other purposes, for which mining and analysis are afterthoughts, and whose characteristics are often ill-suited to further analysis. This will certainly change, as data collection matures into a process explicitly designed to improve our perceptual and decision-making capabilities.
We need the processing power to interpret the data. While it has become fashionable to note how cheap compute cycles have become, it's certainly not the case that we can process billions or trillions of input streams in real time, especially when we need to find patterns that are distributed across many noisy and possibly contradictory sensor inputs (i.e., we can't just process each stream in isolation). We may need to develop new processor technologies to handle these kinds of astronomically parallel and heterogeneous inputs.
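To make the "no stream in isolation" point concrete, here is a minimal sketch in plain Python. The sensors, readings, and noise levels are made up for illustration: three streams report the same underlying quantity, and a precision-weighted fusion across streams gives a better estimate than trusting any single stream or a naive average.

```python
# Hypothetical illustration: combining noisy, possibly contradictory streams.
# Three "sensors" report the same underlying quantity (say, local temperature);
# one is far less reliable than the others.

readings  = [21.0, 19.5, 27.0]   # raw sensor values (the last one is an outlier)
variances = [1.0,  1.5,  25.0]   # assumed noise level for each sensor

# Precision-weighted average: trust precise sensors more.
precisions = [1.0 / v for v in variances]
fused = sum(r * p for r, p in zip(readings, precisions)) / sum(precisions)

naive = sum(readings) / len(readings)
print(f"naive per-stream average: {naive:.2f}")   # pulled toward the outlier
print(f"fused estimate:           {fused:.2f}")   # closer to the reliable sensors
```

Even this toy case requires looking across streams jointly; doing the same for trillions of heterogeneous inputs is where the processing challenge lies.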
We need the algorithms. To actually make sense of the data and decide what actions and responses to take, we have to figure out how to extract high-level patterns and concepts from the raw inputs. There is an ongoing debate over the right approach: Most researchers will say that we need something more "brain-like" than current systems, but there are many different (and opposing) theories about which aspects of our brain's computational architecture are actually important. My own bet is on probabilistic programming methods, which are closely aligned with an emerging body of theory that views the brain as a Bayesian inference and decision engine.
But there are other important research threads. Google is backing so-called deep learning methods, a fundamental advance in the artificial neural networks (ANNs) that promised so much in previous decades before falling short of expectations. And Jeff Hawkins' cortical learning algorithm (CLA) claims to replicate the human brain's ability to capture spatiotemporal patterns in arbitrary sensory inputs.
While exciting, all three of these approaches currently fall well short. More research is needed, as they say.
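To give a flavor of the Bayesian-perception idea behind probabilistic programming, here is a toy sketch in plain Python rather than any particular probabilistic programming library. The model, prior, noise level, and readings are all assumptions made up for illustration: we write down a generative story (a prior over the world's state plus a noise model for sensors), condition on incoming data, and read off the most probable explanation.

```python
# Toy grid-based Bayesian inference: prior belief + noisy sensor data -> posterior.
import math

# Hypothesis space: the true "state of the world" (e.g., regional rainfall in mm).
states = [s / 10 for s in range(0, 501)]          # 0.0 .. 50.0

# Prior: what we believed before looking at today's sensors (illustrative).
def prior(s):
    return math.exp(-((s - 20.0) ** 2) / (2 * 8.0 ** 2))

# Likelihood: how probable a noisy sensor reading is, given a candidate state.
def likelihood(reading, s, noise=4.0):
    return math.exp(-((reading - s) ** 2) / (2 * noise ** 2))

readings = [26.0, 31.0, 24.5]                      # incoming sensor data (made up)

# Posterior is proportional to prior times the likelihood of all observations.
weights = [prior(s) * math.prod(likelihood(r, s) for r in readings) for s in states]
total = sum(weights)
posterior = [w / total for w in weights]

best = states[max(range(len(states)), key=posterior.__getitem__)]
print(f"most probable state given prior + data: {best:.1f}")
```

Real probabilistic programming systems automate exactly this step, inference, over far richer models than a one-dimensional grid; the sketch only shows the shape of the computation.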
Scaling isn't enough
One approach that won't work is just scaling up the current state of the art in machine learning. The brain must constantly merge its previous experience with new and diverse sensory data to quickly interpret the current situation and decide how to act. The brain doesn't start from scratch every time it encounters a new set of observations. Instead, it leverages all of its previous inputs, in the form of a sophisticated model of "how the world works," to quickly discover the most likely explanation(s) for new information. This is why phenomena such as priming, expectations, and framing matter so much in how we perceive our physical and social environments.
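As a toy illustration of this reuse of experience (a sketch with made-up counts, not any particular system's method), here is a sequential Bayesian update in Python in which each day's conclusions become the next day's starting point, rather than re-learning from scratch.

```python
# Sketch of "don't start from scratch": yesterday's posterior is today's prior.
# A Beta-Bernoulli model estimating how often some event occurs; the daily
# observation batches below are invented for illustration.

def update(prior_a, prior_b, observations):
    """Fold a batch of 0/1 observations into a Beta(a, b) belief."""
    successes = sum(observations)
    return prior_a + successes, prior_b + len(observations) - successes

a, b = 1, 1                          # flat prior: no experience yet
for day, obs in enumerate([[1, 0, 1], [1, 1, 1, 0], [0, 1]], start=1):
    a, b = update(a, b, obs)         # previous posterior becomes the new prior
    print(f"day {day}: estimated rate = {a / (a + b):.2f} (from {a + b - 2} observations)")
```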
Of course, all of this coming power could be used to control and oppress just as easily as it could be used to improve the human condition. I think that world-spanning intelligence can help us to overcome some of the most fundamental challenges that we face as a civilization, but recent events demonstrate that technologies developed with one set of intentions are often put to other uses. As scientists and engineers, we need to take responsibility for our creations. This will only become more important as we create global-scale intelligent systems.
Note: This article represents the author's own opinion and not that of his employer, Salesforce.com.
Beau Cronin is a product manager at Salesforce. Previously he co-founded Prior Knowledge, which was acquired by Salesforce in 2012, and Navia Systems. He has a PhD in computational neuroscience from MIT. Follow him on Twitter @beaucronin.
Photo courtesy Zsschreiner/Shutterstock.com.
Source: http://gigaom.com/2013/06/30/what-happens-when-the-world-turns-into-one-giant-brain/