Real-Valued Recurrent Neural Networks More Powerful than Turing Machines
May 30, 2008. Posted by olvidadorio in Machine Learning.
Tags: computability theory, neural networks, turing machine
How significant this result is remains debatable. On the one hand, it seems relatively obvious that a machine using exact real-valued parameters can be more precise when it comes to computing, e.g., non-rational values. So, no big deal? I wouldn't be too sure of that. Generally speaking, an exact, ideal machine (of whatever type) cannot be found in reality: neither an infinite-tape Turing Machine nor a Real-Valued Recurrent Neural Network.
However, the neural networks Ms. Siegelmann speaks of do not require anything infinitely large; instead, they only require something infinitely precise. I would argue that this requirement is much nearer to the basic assumptions made in the physics that informs our world-view. [To avoid confusion: infinite precision is only required in terms of what is actually there, not in terms of what can be measured! There may be some quantum-mechanical pitfalls; however, to my knowledge it should still be possible to reduce a real-valued neural net to the basic laws of contemporary physics.]
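To make the "infinitely precise, not infinitely large" point concrete, here is a minimal sketch of the underlying trick: an entire bit string can be packed into the binary expansion of a single state value, and recovered one bit at a time by a shift-and-threshold step of the kind a simple neuron can perform. This is only an illustration with names of my own choosing (`encode`, `pop_bit`), not Siegelmann's actual construction, which uses a more robust base-4 "Cantor set" encoding; exact rationals stand in here for ideal real-valued states.

```python
from fractions import Fraction

def encode(bits):
    """Pack a finite bit string into one exact rational state value,
    as the digits of its base-2 expansion. With a genuinely real-valued
    state, the same scheme could hold an infinite string."""
    x = Fraction(0)
    for b in reversed(bits):
        x = (x + b) / 2
    return x

def pop_bit(x):
    """Read the leading bit via shift (multiply by 2) and threshold,
    then return the remaining encoded tail."""
    y = 2 * x
    bit = 1 if y >= 1 else 0
    return bit, y - bit

state = encode([1, 0, 1, 1])   # 11/16, i.e. 0.1011 in binary
out = []
for _ in range(4):
    b, state = pop_bit(state)
    out.append(b)
# With exact arithmetic every bit is recovered: out == [1, 0, 1, 1]
```

The machine never grows: one bounded state value plays the role that an unbounded tape plays for a Turing Machine, which is exactly the trade the post describes.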
It is not a great leap to suggest that our brain might be much better modeled by a real-valued, recurrent NN. But if this is a much better model for what our brain is, and this model has been shown to be more powerful in principle than a Turing Machine, then we might actually be dealing with quite an interesting result:
The basic assumption of “Good old-fashioned AI” (GofAI), which has long dominated the philosophical discourse in this area, is that a Universal Turing Machine is all that's needed to model human-level intelligence. This assumption must be re-examined.
For all the constructive and technological advantages that a simplified, discrete, digital world-view offers, it may be of critical importance to start using different models to capture the movement towards non-symbolic, analog-ish computation that has already been under way for some time now. So this is simply about moving to a different model and stating why it should be more useful.
It should be clear that our brain is better modeled by some sort of truly real-valued recurrent neural network. It is my personal intuition that evolution has played a crucial role in fleshing out those fine little differences that actually distinguish biological living beings from human-made machines.
Conversely, I do not believe it is impossible to achieve something “intelligent” on a substrate different from our organic-chemistry-based form of life. It's a hard task to rebuild intelligence, and you won't get there by wildly applying back-propagation to 3-layered feed-forward neural nets, nor by training Aibo robots to walk in straight lines. I believe that the many tiny, evolutionarily implanted nuances that form us as biological beings have a lot to do with what was once known as the relevance problem.
Does this mean it’s impossible to create AI? I don’t think so. I would even go further and say that, much like one doesn’t actually need infinite memory to do many interesting things with computers as substrates for Turing machines, one also doesn’t need analog computers such as the brain to do lots of interesting analog-style computing. I still believe that many interesting things can be done with simulations of real-valued subsymbolic computers, realized on the digital machines we have today.
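The limits of such digital simulation can also be sketched directly. Running the same pack-and-shift encoding with ordinary 64-bit floats (which carry a 53-bit mantissa) instead of exact values shows both halves of the claim above: the leading bits survive just fine, so plenty of "analog-style" work is possible, while the deeper bits are rounded away, so the super-Turing precision is lost. Again a hand-rolled illustration with assumed helper names, not any standard API:

```python
def encode_f(bits):
    """Same base-2 packing as before, but in 64-bit floating point."""
    x = 0.0
    for b in reversed(bits):
        x = (x + b) / 2.0
    return x

def pop_bit_f(x):
    """Shift-and-threshold decoding step in floating point."""
    y = 2.0 * x
    bit = 1 if y >= 1.0 else 0
    return bit, y - bit

bits = [1, 0] * 40   # 80 bits: more than a float's 53-bit mantissa can hold
state = encode_f(bits)
decoded = []
for _ in range(len(bits)):
    b, state = pop_bit_f(state)
    decoded.append(b)

# The leading bits come back intact; the tail was rounded away at encode time.
print(decoded[:40] == bits[:40])  # the first 40 bits survive
print(decoded == bits)            # but the full 80-bit string does not
```

In other words, a digital machine simulates a truncated real-valued net perfectly well; what it cannot do is keep the infinite tail that the idealized model's power depends on.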
What I would much rather do is point out why the paradigm shift that so-called artificial-intelligence research has been undergoing for some time is reasonable and fundamentally correct, and why the sub-symbolic models being used should in principle be preferred when talking about human-style intelligence: simply because our intelligence arises from a body with a brain, which is a very non-discrete environment.
Hence it is not the sub-symbolicist's task to argue why digital computers aren't the right paradigm; rather, the symbolicist / GofAIite will have to come up with reasons why his formal models should ever be able to do the same as a brain can. And, of course, I do not mean exactly the same here, but rather something principally indistinguishable, e.g. passing the Turing Test.
As one of my major fields of interest currently involves neuro-symbolic integration, I’m glad to take up the challenge from the other side.