
Strategies 4 AI November 6, 2008

Posted by olvidadorio in Machine Learning, Money.

Had another cherished meeting with the good gang this weekend. Topics ranged from pubic hair to l33tspeak. In between I gave an overview of some of the ideas that had been haunting my subconscious, but that I couldn’t work on while entrenched before my thesis. Luckily that fortress is bust, and so we got to talk our way through strategies for AI, at length (grab the slides/handout). Most of it’s rather impractical, since it requires command of a slightly larger workforce than the 0.5 portion I’d consider myself, but the main take-home messages were:

  • Numerically adaptable (soft) computing suffers from the curse of dimensionality w.r.t. the number of parameters. Also, many easily trainable models (e.g. Hidden Markov Models, Support Vector Machines) are computationally low-grade (i.e. far from Turing equivalence). That will not do for semantic & pragmatic generative models. Rule-based, crisp stuff is not adaptable enough to model our (kinda continuous-valued) world, hence kinda inadequate (at least as a stand-alone).
  • So we need numerically adaptable methods that can really be used to compute high-level problems. And we need ways to adapt those parameters.
  • Idea 1: Integrate whatever you’re doing into a big, adaptable AI environment. Let lots of people work on it. Hope that this will give you lots of computational resources and eyeballs to adapt it to very specific problems. Caveat: You need a system that basically works, so that people will even start using it, and then they still have to see some benefit in it. So you kinda need a working system to start with, possibly on a restricted domain.
  • Idea 2: Dream up some learning heuristics and other methods that either reduce your parameters-to-be-learned or make them faster to learn, while still being computationally powerful. I propose a predator-prey learning model, where a generator has to outsmart a classifier (and vice versa) to get good learning even if you only have positive and no negative training samples (see the sketch after this list). Also, I suggest ways to spread the parameter-butter (weights of recurrent neural nets) across more bread (memory capacity) by placing these neural nets, like robot controllers, into an artificial world to which they have read-write access.
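
To make the predator-prey idea a bit more concrete, here is a toy sketch of what such a training loop could look like: a tiny linear generator tries to fool a logistic classifier into accepting its fakes as positive samples, so the generator never needs explicit negative examples. Everything in it (the 1-D data, the linear/logistic models, the learning rates) is just an illustrative assumption to show the shape of the loop, not a worked-out method.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Prey": real (positive) training samples are drawn from N(mean=4, std=1).
def sample_real(n):
    return rng.normal(4.0, 1.0, n)

# Generator ("predator"): g(z) = a*z + b maps noise z into fake samples.
a, b = 1.0, 0.0
# Classifier: D(x) = sigmoid(w*x + c) should output 1 for real, 0 for fake.
w, c = 0.1, 0.0

lr_d, lr_g = 0.05, 0.05  # learning rates for classifier and generator

for step in range(5000):
    x_real = sample_real(32)
    z = rng.normal(0.0, 1.0, 32)
    x_fake = a * z + b

    # Classifier step: gradient ascent on log D(real) + log(1 - D(fake)).
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    w += lr_d * (np.mean((1 - d_real) * x_real) - np.mean(d_fake * x_fake))
    c += lr_d * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator step: gradient ascent on log D(fake), i.e. try to fool the classifier.
    d_fake = sigmoid(w * x_fake + c)
    a += lr_g * np.mean((1 - d_fake) * w * z)
    b += lr_g * np.mean((1 - d_fake) * w)

# After training, the fake distribution should have drifted towards the real one.
print(f"generator: x_fake = {a:.2f} * z + {b:.2f}  (real data ~ N(4, 1))")
```

The point of the setup is that the classifier only ever sees the positive samples plus whatever the generator currently produces, so the “negative” examples are manufactured on the fly by the adversary.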

Some of this runs under crazy dreams; some of it is more like a potential Master’s thesis.

And if you actually read this far and are still following the text, I congratulate you. While reading through, I was slightly flummoxed at how reader-unfriendly some of the constructions are. Maybe it’s just l33t.


Comments»

1. Nic - November 6, 2008

Hey, I thought you were in the US…

So your thesis says: “Memorizing with neural nets is difficult”

Did you prove/show it?

Anyway, here at the VU they are working on evolving the metadata of evolutionary algorithms along with them. I thought of that while reading your second point in this post. Maybe a place for you 😉

