::: nBlog :::
“In from three to eight years, we will have a machine with the general intelligence of an average human being. I mean a machine that will be able to read Shakespeare, grease a car, play office politics, tell a joke, have a fight. At that point the machine will begin to educate itself with fantastic speed. In a few months it will be at genius level, and a few months after that, its powers will be incalculable.”
This might sound like a recent Google or IBM statement, but it was in fact written by Marvin Minsky, ‘the father of artificial intelligence’, in 1970. Yes, a year before I was born. These over-optimistic predictions caused considerable harm to AI research and funding, in the form of diminished budgets and reputation; the period that followed is often called ‘the AI winter’.
Since the 1970s we have seen promising steps, such as IBM’s Deep Blue defeating world chess champion Garry Kasparov in 1997 and, more recently, IBM Watson defeating the Jeopardy! game show champions in 2011. Automatic speech and facial recognition are becoming household features in our computers and gadgets; we might even see a computer passing the Turing test soon.
AI is usually split into two areas: the original rule-based (symbolic) approach, which showed great promise in 1970s labs but quickly ran into complexity problems, and neural networks, which emulate the synaptic structure of the human brain with modular computing elements. Combinations of the two are called machine learning or deep learning, or, more haphazardly, cognitive computing. Thanks to Moore’s law, there is now enough sheer computing power available for these concepts to become useful, so they are becoming scientifically reputable again.
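To make the contrast concrete, here is a minimal sketch of the two approaches. Everything in it is illustrative: the weather rules, the weights and the bias are invented for this example, not taken from any real system.

```python
import math

# 1. Rule-based (symbolic) approach: knowledge lives in explicit,
#    hand-written if-then rules. Transparent, but brittle as the
#    number of rules grows -- the complexity problem of the 70s labs.
def rule_based_classify(temp_c, humidity):
    """Classify weather with hand-written rules (illustrative only)."""
    if temp_c > 25 and humidity < 0.4:
        return "dry heat"
    if humidity > 0.8:
        return "humid"
    return "mild"

# 2. Connectionist approach: a single artificial neuron. Its
#    behaviour comes from numeric weights (normally learned from
#    data), not from rules anyone wrote down.
def neuron(inputs, weights, bias):
    """Weighted sum passed through a sigmoid activation."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

print(rule_based_classify(30, 0.3))          # a rule fires directly
print(neuron([30, 0.3], [0.1, -2.0], -2.5))  # output between 0 and 1
```

The interesting difference is where the knowledge sits: in the first function you can read it; in the second it is smeared across the weights, which is what makes neural networks both powerful and opaque.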
Our spime containers will inevitably provide rule-based and neural network backends for practical AI applications, such as self-driving cars, adaptive security systems or real-time language translation. However, I expect that ubiquitous and affordable connectivity between the spimes will also introduce new kinds of opportunities – creating open-ended, self-organizing ‘meta neural networks’ with parts developed by different people and organizations.
This requires a new, non-deterministic programming approach in which resources are dynamic, evolving and unpredictable. Sustainability is king. This paradigm change is difficult for many programmers and electronics engineers who were trained to squeeze every uncertainty out of their systems. In the end, those very uncertainties are likely the key to real cognition.
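What might such non-deterministic code look like in practice? A hypothetical sketch: a node in a ‘meta network’ that treats its peers as dynamic, unreliable resources. The `PeerEstimate` class and the peers themselves are my invention for illustration; the point is that the node aggregates whatever answers it can get and degrades gracefully instead of assuming a fixed, fully reliable topology.

```python
class PeerEstimate:
    """Aggregate opinions from whichever peers happen to respond."""

    def __init__(self, peers):
        # peers: callables that may fail at any moment, reflecting
        # resources that are dynamic, evolving and unpredictable.
        self.peers = peers

    def query(self, value):
        answers = []
        for peer in self.peers:
            try:
                answers.append(peer(value))
            except ConnectionError:
                continue  # this peer vanished; carry on with the rest
        if not answers:
            return None  # degrade gracefully rather than fail hard
        return sum(answers) / len(answers)


def flaky_peer(value):
    """Stand-in for a peer that is currently unreachable."""
    raise ConnectionError("peer offline")


node = PeerEstimate([lambda v: v + 1.0, flaky_peer])
print(node.query(1.0))  # answers from the surviving peer only
```

The design choice is the inversion the paragraph describes: instead of engineering uncertainty out (retrying until every peer answers), the program accepts a shifting, partial set of contributors as its normal operating condition.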