
Well, here’s my actual thesis.. November 11, 2008

Posted by olvidadorio in Machine Learning.
2 comments

.. as handed in on Oct. 27th: Subsymbolic String Representations Using Simple Recurrent Neural Nets (Bsc_Pickard.pdf)

@Nic:

  • Actually, I’m not flying until Monday.
  • Yes, one of the outcomes is that it’s rather difficult to memorize fairly large sequences using feedforward nets and backpropagation. Did I finally prove it? No. But I produced material to support that claim, I guess. It’s also my strong intuition that regardless of the learning paradigm used, fairly small (i.e. non-huge) nets just run into big problems: they don’t generalize. Most probably this has to do with the fact that neural activation (with a non-linear activation function) can at most encode ordinal information, not scalar data, because of the non-linear distortion. One way or another this makes it hard to encode arbitrarily long sequences, since (and this I haven’t proven) all linearly-decodable encoding schemes that press arbitrarily long sequences into fixed-size vectors rely heavily on scaling.
    So the problem is that ffNNs apply a non-linear distortion that mixes up the signals, but offer only linear means to decode with. (There’s a little toy probe of this after the list.)
  • So about that “evolving metadata along with evolution” approach: what do they work on concretely? Like, for what kind of domain is that useful? I can see that being a fairly interesting area… I think I read something on that once. One of the issues one tends to run into is that the problem ends up being solved already within the evolutionary parameters. But no idea. Just keep giving me reasons to move my ass to Amsti! ;}
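
Since that hand-wavy point about squashed fixed-size vectors deserves something concrete, here is a tiny numpy probe of the flavour of experiment I mean. To be clear, this is not a re-run of anything from the thesis: it uses a fixed, untrained tanh recurrence instead of a trained SRN, and all the concrete numbers (20 hidden units, 60 steps, 2000 sequences) are arbitrary choices of mine.

    # A toy probe (my own construction, not from the thesis): run random bit
    # sequences through a fixed, untrained tanh recurrence and ask how far
    # back a purely *linear* readout can still recover a bit from the final
    # fixed-size hidden state.
    import numpy as np

    rng = np.random.default_rng(0)
    HIDDEN, STEPS, TRIALS = 20, 60, 2000          # arbitrary toy sizes

    W_in = rng.normal(0.0, 1.0, size=HIDDEN)                               # input weights
    W_rec = rng.normal(0.0, 1.0 / np.sqrt(HIDDEN), size=(HIDDEN, HIDDEN))  # recurrent weights

    def final_state(bits):
        """Squash the whole sequence into one fixed-size tanh hidden state."""
        h = np.zeros(HIDDEN)
        for b in bits:
            h = np.tanh(W_rec @ h + W_in * (2 * b - 1))   # feed each bit as +/-1
        return h

    # Dataset: random bit sequences and the hidden state they end up in.
    X = rng.integers(0, 2, size=(TRIALS, STEPS))
    H = np.array([final_state(x) for x in X])
    H1 = np.hstack([H, np.ones((TRIALS, 1))])             # bias column for the readout

    for lag in (1, 2, 4, 8, 16, 32):
        y = X[:, -lag]                                    # the bit seen `lag` steps before the end
        w, *_ = np.linalg.lstsq(H1, y, rcond=None)        # least-squares linear readout
        acc = np.mean((H1 @ w > 0.5) == y)
        print(f"lag {lag:2d}: linear-readout accuracy {acc:.2f}")

If the intuition above is right, the short lags should come back fine and the longer ones should drop toward chance; that fall-off is exactly the “fixed-size squashed vector plus linear decoder” bottleneck I mean.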

Strategies 4 AI November 6, 2008

Posted by olvidadorio in Machine Learning, Money.
1 comment so far

Had another cherished meeting with the good gang this weekend. Topics ranged from pubic hair to l33tspeak. In between I gave an overview of some of the ideas that had been haunting my subconscious but that I couldn’t work on while entrenched before my thesis. Luckily the fortress is bust, and so we got to talk our way thru strategies for AI at length (grab the slides/handout). Most of it’s rather impractical, since it requires command of a slightly larger workforce than the 0.5 portion I’d consider myself, but the main take-home messages were:

  • Numerically adaptable (soft) computing suffers from the curse of dimensionality w.r.t. the number of parameters. Also, many easily-trainable models (e.g. Hidden Markov Models, Support Vector Machines) are computationally low-grade (i.e. far from Turing-equivalence). That will not do for semantic & pragmatic generative models. Rule-based, crisp stuff is not adaptable enough to model our (kinda continuous-valued) world, hence kinda inadequate (at least as a stand-alone).
  • So we need numerically adaptable methods that are actually powerful enough to compute high-level problems, and we need ways to adapt their parameters.
  • Idea 1: Integrate whatever you’re doing into a big, adaptable AI environment. Let lots of people work on it. Hope that this gives you lots of computational resources and eyeballs to adapt it to very specific problems. Caveat: you need a system that basically works, so that people will even start using it, and then they still have to see some benefit in it. So you kinda need a working system to start with, possibly on a restricted domain.
  • Idea 2: Dream up learning heuristics and other methods that either reduce the number of parameters to be learned or make them faster to learn, while still being computationally powerful. I propose a predator-prey learning model, where a generator has to outsmart a classifier (and vice versa), so that you get good learning even if you only have positive and no negative training samples (there’s a toy sketch of that loop right after this list). I also suggest ways to spread the parameter-butter (the weights of recurrent neural nets) across more bread (memory capacity) by placing these neural nets, like robot controllers, into an artificial world to which they have read-write access.
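
To make the predator-prey point a bit less hand-wavy, here is a toy sketch of the loop I have in mind. Everything concrete in it is a stand-in of mine rather than a worked-out method: the “world” is a 1-D Gaussian we only ever get positive samples from, the classifier is a little logistic model on (1, x, x²), and the generator is a parameterized Gaussian nudged by a crude (1+1)-evolution step whenever a perturbation fools the current classifier better.

    # Toy predator-prey loop: we only ever see *positive* samples of the
    # "world"; the generator manufactures the negatives, the classifier
    # learns to tell real from generated, and the generator adapts to fool
    # it. All concrete choices (1-D Gaussian world, logistic classifier,
    # (1+1)-evolution generator update) are my stand-ins, not a fixed recipe.
    import numpy as np

    rng = np.random.default_rng(1)
    REAL_MU, REAL_SIGMA = 3.0, 0.5        # the "world" we only get positive samples from

    def real_batch(n):
        return rng.normal(REAL_MU, REAL_SIGMA, n)

    def gen_batch(params, n):
        mu, log_sigma = params
        return mu + np.exp(log_sigma) * rng.normal(0.0, 1.0, n)

    def features(x):
        return np.stack([np.ones_like(x), x, x * x], axis=1)

    def classify(w, x):
        return 1.0 / (1.0 + np.exp(-(features(x) @ w)))   # P(real | x)

    w = np.zeros(3)                   # classifier weights
    params = np.array([0.0, 0.0])     # generator starts as N(0, 1)

    for _ in range(500):
        real, fake = real_batch(64), gen_batch(params, 64)

        # Predator step: one logistic-regression gradient step on
        # "real -> 1, generated -> 0".
        x = np.concatenate([real, fake])
        y = np.concatenate([np.ones(64), np.zeros(64)])
        w += 0.05 * features(x).T @ (y - classify(w, x)) / len(x)

        # Prey step: crude (1+1) evolution -- keep a perturbation of the
        # generator parameters if its samples look more "real" to the
        # current classifier.
        candidate = params + rng.normal(0.0, 0.1, size=2)
        if classify(w, gen_batch(candidate, 64)).mean() > classify(w, gen_batch(params, 64)).mean():
            params = candidate

    print(f"generator ended up around mu={params[0]:.2f}, sigma={np.exp(params[1]):.2f} "
          f"(real data: {REAL_MU:.2f}, {REAL_SIGMA:.2f})")

The point is only the division of labour: the generator manufactures the negative examples the classifier is missing, and the classifier in turn supplies the generator with a training signal, so neither side needs hand-labelled negatives. Don’t expect crisp convergence from something this crude; it wanders toward the real distribution rather than nailing it.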

Some of this runs under crazy dreams; some of it is more like potential Master’s-thesis material.

And if you actually read this far and are still following the text, I congratulate you. While reading through it I was slightly flummoxed by the degree of reader-unfriendliness some of the constructions exhibit. Maybe it’s just l33t.