Work In Progress
August 11, 2008. Posted by olvidadorio in Programming.
Tags: neural networks, obama, thesis, xmpp
I am still hacking away at my thesis. That’s why I haven’t posted in a while. I would have loved to make some pithy comments about Obama’s evolving policy positions as a presidential candidate and how seamlessly he’s getting into the mood. Oh, and I’d have loved to talk about a few more issues: there were some further thoughts on Browser as Platform vs. other ways of doing that (remember, I had opted for IMs). It’s interesting that XMPP has been getting more and more attention (as well as contention) as it becomes clear that some sort of push mechanism is really kinda necessary…
Most of the other topics are fermenting! My thesis, as I said, has taken the front seat. I still haven’t really started writing the text (that could worry me). My deadline is the end of September, and I should be done and ready for review by mid-September – which doesn’t leave a heck of a lot of time. Oh no. Next step: wrapping up the programming part. Since a straightforward way of learning the task hasn’t worked, I’m opting for an augmentative strategy. Much like heuristics help you in search problems, I’ll be helping the learning algorithm along by providing a (hopefully) well-picked set of more easily learnable sub-problems and sub-networks that are merged and integrated step by step. Let’s hope it’ll fly.
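In case the strategy above sounds abstract, here is a minimal toy sketch of the idea: train tiny networks on easier sub-problems first, then merge them into one model for the full task. All the names (`SubNet`, `merge`) and the averaging merge are illustrative assumptions, not the actual thesis code, which presumably splices real weight matrices.

```python
# Toy sketch of the step-by-step strategy: train small networks on
# easier sub-problems, then merge them. Names are hypothetical.
import random

class SubNet:
    """A toy one-weight linear 'network' trained on one sub-problem."""
    def __init__(self):
        self.w = random.uniform(-1.0, 1.0)

    def train(self, samples, lr=0.1, epochs=200):
        # samples: list of (x, target) pairs for one easy sub-problem
        for _ in range(epochs):
            for x, t in samples:
                y = self.w * x
                self.w += lr * (t - y) * x  # simple delta rule

    def __call__(self, x):
        return self.w * x

def merge(subnets):
    """Combine trained sub-networks by averaging their outputs.
    A real merge would integrate weights; this just stands in for it."""
    def merged(x):
        return sum(net(x) for net in subnets) / len(subnets)
    return merged

# Two easy sub-problems: learn y = 2x and y = 4x on a few samples.
sub_a = SubNet()
sub_a.train([(x / 10, 2.0 * x / 10) for x in range(1, 10)])
sub_b = SubNet()
sub_b.train([(x / 10, 4.0 * x / 10) for x in range(1, 10)])

# The merged model approximates y = 3x without ever training on it.
full = merge([sub_a, sub_b])
```

The point of the sketch is just the shape of the pipeline: easy pieces first, integration afterwards.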
Back to hack.
Tags: computability theory, neural networks, turing machine
How significant this result is, well, that’s debatable. On the one hand it seems relatively obvious that a machine using exact real-valued parameters can be more precise when it comes to computing, e.g., non-rational values. So no big deal? I wouldn’t be too sure of that. Generally speaking, an exact, ideal machine (of whatever type) cannot be found in reality: neither an infinite-tape Turing Machine nor a real-valued recurrent neural network.
However, the neural networks Ms. Siegelmann speaks of do not require anything infinitely large; all they require is something infinitely precise. I would argue that this requirement is much nearer to the basic assumptions made in the physics that informs our world-view. [To avoid confusion: infinite precision is only required in terms of what is actually there, not in terms of what can be measured! There may be some quantum-mechanical pitfalls, but to my knowledge it should still be possible to reduce a real-valued neural net to the basic laws of contemporary physics.]
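A small sketch (mine, not from the result itself) of the closure property behind this distinction: with rational weights and exact arithmetic, a recurrent update can never leave the rationals, so whatever extra power the real-valued networks have must come entirely from an infinitely precise (irrational) parameter. The saturated-linear activation here is the standard one for this style of network; the specific weights are arbitrary.

```python
# With rational weights and a rational start, every reachable state
# of the recurrent update stays exactly rational -- exact arithmetic
# never leaves Q. A single irrational weight (e.g. sqrt(2)) breaks
# this closure, which is where the extra power would have to enter.
from fractions import Fraction

def sigma(x):
    # saturated-linear activation: clamp to [0, 1]
    return min(max(x, 0), 1)

def step(state, w, b):
    # one recurrent update: x <- sigma(w*x + b)
    return sigma(w * state + b)

x = Fraction(1, 3)
w, b = Fraction(1, 2), Fraction(1, 4)
for _ in range(10):
    x = step(x, w, b)
# x is still an exact Fraction, converging toward the fixed
# point b / (1 - w) = 1/2.
```

Nothing here proves the super-Turing claim, of course; it only illustrates why exactness of the parameters, rather than any infinite structure, is the load-bearing assumption.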
It is not a great leap to suggest that our brain might be much better modeled by a real-valued, recurrent NN. But if this is a much better model of what our brain is, and as this model has been shown to be more powerful in principle than a Turing Machine, we might actually be dealing with quite an interesting result.