Strategies 4 AI
November 6, 2008 · Posted by olvidadorio in Machine Learning, Money.
Had another cherished meeting with the good gang this weekend. Topics ranged from pubic hair to l33tspeak. In between I gave an overview of some of the ideas that had been haunting my subconscious, but that I couldn't work on while entrenched behind my thesis. Luckily the fortress is bust, so we got to talk our way through strategies for AI at length (grab the slides/handout). Most of it's rather impractical, since it requires command of a slightly larger workforce than the 0.5 person I'd consider myself, but the main take-home messages were:
- Numerically adaptable ("soft") computing suffers from the curse of dimensionality with respect to the number of parameters. Also, many easily trainable models (e.g. Hidden Markov Models, Support Vector Machines) are computationally weak (i.e. far from Turing-equivalent). That will not do for semantic and pragmatic generative models. Rule-based, crisp approaches can't adapt to model our (largely continuous-valued) world, and are hence inadequate, at least as stand-alones.
- So we need numerically adaptable methods that can really be used to tackle high-level problems. And we need ways to adapt those parameters.
- Idea 1: Integrate whatever you're doing into a big, adaptable AI environment. Let lots of people work on it. Hope that this gives you lots of computational resources and eyeballs to adapt it to very specific problems. Caveat: you need a system that basically works before people will even start using it, and then they still have to see some benefit in it. So you pretty much need a working system to begin with, possibly on a restricted domain.
- Idea 2: Dream up learning heuristics and other methods that either reduce the number of parameters to be learned or make them learnable faster, while staying computationally powerful. I propose a predator-prey learning model, in which a generator has to outsmart a classifier (and vice versa), giving good learning even when you only have positive and no negative training samples. I also suggest ways to spread the parameter butter (the weights of recurrent neural nets) across more bread (memory capacity) by placing these neural nets, like robot controllers, into an artificial world to which they have read-write access.
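To make the curse of dimensionality from the first point concrete, here's a toy calculation (the choice of 10 bins per axis is an arbitrary illustration, not anything from the handout): covering a parameter space at a fixed resolution requires exponentially many cells as the number of dimensions grows.

```python
# Toy illustration of the curse of dimensionality: covering the unit
# hypercube at a fixed resolution needs exponentially many grid cells
# as the number of dimensions (parameters) grows.
resolution = 10  # arbitrary choice: 10 bins per axis

for dim in (1, 2, 5, 10):
    cells = resolution ** dim
    print(f"{dim:2d} dimension(s): {cells:,} cells at 1/{resolution} resolution")
```

At 10 dimensions you already need ten billion cells; any method whose training data or compute has to fill that space is doomed, which is why the number of free parameters matters so much.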
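The predator-prey idea in the second point can be sketched as an adversarial loop: the classifier only ever sees the positive samples plus whatever the generator fabricates, so the generator supplies the missing negatives. Below is a minimal 1-D sketch that is purely my own illustration (a single real data point, a logistic classifier, alternating gradient steps; none of the names or learning rates come from the handout):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

REAL = 3.0          # the lone "positive" training sample (a 1-D point)
w, b = 0.1, 0.0     # classifier D(x) = sigmoid(w*x + b), the "predator"
g = 0.0             # the generator's single output value, the "prey"
lr_d, lr_g = 0.1, 0.05

for _ in range(2000):
    # Predator step: classifier descends its loss
    # -log D(REAL) - log(1 - D(g)), i.e. score real high and fake low.
    s_real = sigmoid(w * REAL + b)
    s_fake = sigmoid(w * g + b)
    w -= lr_d * (-(1 - s_real) * REAL + s_fake * g)
    b -= lr_d * (-(1 - s_real) + s_fake)
    # Prey step: generator descends -log D(g), i.e. shifts its output
    # to raise its own classifier score.
    s_fake = sigmoid(w * g + b)
    g -= lr_g * (-(1 - s_fake) * w)

print(f"generator output after co-training: {g:.2f} (real sample at {REAL})")
```

The generator starts far from the real data; as the classifier learns to tell the two apart, the generator gets pushed toward the real sample until the classifier can no longer separate them. No explicit negative examples were ever provided.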
Some of this runs under crazy dreams; some of it is more like a potential Master's thesis.
…And if you actually read this far and are still following the text, I congratulate you. While reading it back I was slightly flummoxed at the reader-unfriendliness some of the constructions exhibit. Maybe it's just l33t.