
Well, here’s my actual thesis… November 11, 2008

Posted by olvidadorio in Machine Learning.

… as was handed in on the 27th of October: Subsymbolic String Representations Using Simple Recurrent Neural Nets (Bsc_Pickard.pdf)

@Nic:

  • Actually, I’m not flying until Monday.
  • Yes, one of the outcomes is that it’s rather difficult to memorize fairly large sequences using feedforward nets and backpropagation. Did I finally prove it? No. But I produced material to support that claim, I guess. It’s also my strong intuition that regardless of the learning paradigm used, fairly small (i.e. non-huge) nets just run into big problems: they don’t generalize. Most probably this has to do with the fact that neural activation (with a non-linear activation function) can at most encode ordinal information, not scalar data, due to the non-linear distortion. One way or another this makes it hard to encode arbitrarily long sequences, as (and this I haven’t proven) all linearly decodable encoding schemes that press arbitrarily long sequences into fixed-size vectors rely heavily on scaling. There’s a toy sketch of what I mean by scaling after this list.
    So the problem is that ffNNs suffer non-linear distortion, which mixes up the signal, but have only linear means to decode with.
  • So about that “evolving metadata along with evolution” approach: what do they work on concretely? Like, for what kind of domain is that useful? I can see that being a fairly interesting area… I think I read something on that once. One of the issues one tends to run into is solving the problem already within the evolutionary parameters. But no idea. Just keep giving me reasons to move my ass to Amsti! ;}
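To make the scaling point concrete, here’s a toy sketch in Python (my own illustration, nothing from the thesis; the tanh gain of 4 is made up). It encodes a binary sequence into a single fixed-size scalar by repeated halving, decodes it exactly by inverting the scaling, and then shows how a saturating non-linearity wrecks that decoding:

    import math

    def encode(seq):
        # Scaling-based encoding: push each symbol into ever finer
        # fractions of [0, 1), like a binary "fraction stack".
        x = 0.0
        for s in reversed(seq):
            x = (x + s) / 2.0
        return x

    def decode(x, n):
        # Linear decoding: invert the scaling one step at a time.
        seq = []
        for _ in range(n):
            x *= 2.0
            bit = int(x >= 1.0)
            seq.append(bit)
            x -= bit
        return seq

    seq = [1, 0, 1, 1, 0, 0, 1]
    x = encode(seq)
    print(decode(x, len(seq)))                   # [1, 0, 1, 1, 0, 0, 1], exact

    # Squash the code through a saturating unit (the gain of 4 is
    # arbitrary) and undo it with the obvious linear rescale: the
    # fine-scale symbols deep in the sequence are no longer recoverable.
    y = math.tanh(4.0 * x)
    print(decode(y / math.tanh(4.0), len(seq)))  # mostly wrong bits

The point being: the linear decode only works because the scale information survives intact; after the tanh, no fixed linear map gets the deep symbols back.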

Comments»

1. Nic - November 12, 2008

This guy gave a good lecture, and I guess you would find a Master’s topic with him, were you here:

http://www.cs.vu.nl/~gusz/#Research

I was thinking of the “self-calibrating algorithms” part he does (there is obviously a new paper; I haven’t read it).

I am working with this guy (who, to my surprise, also co-authored the self-calibrating algorithms paper):

http://www.cs.vu.nl/~schut/

Are you in Europe in February? Then try to enlist in this workshop: http://decoi2009.collectivae.net/ (It’s in Leiden, but some people will get paid for accommodation. Tell them about your BSc when you enlist.) I attended it in 2007 and it was superb.

2. olvidadorio - November 12, 2008

Yeah, sadly enough I won’t be in Europe in February, so I guess decoi won’t be for me!

Thanks for the links — will follow.

