Two-stage Learning Step for Backpropagation in a Relational Autoencoder


I'm working on gradient descent / backpropagation for a relational autoencoder. I'm not really sure it's needed yet, so first I have to build the testing framework for it all. Separately, I'm implementing an RL agent that learns via Least Squares Policy Iteration.

Let X and Y be two sets of training examples, of sizes N and M respectively.

Select x \in X and y \in Y randomly, then apply the following two-stage step (a code sketch follows the list).
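
The exact cost isn't settled yet; for concreteness, assume the usual squared-error form, which splits into an x side and a y side:

J(x, y) = \frac{1}{2} \| x' - x \|^2 + \frac{1}{2} \| y' - y \|^2

where x' and y' are the network's reconstructions of x and y.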

  1. Feed (x, y) forward.
  2. Step 1 – Train X
    1. Hold W and x constant and minimize the cost by treating y' as a free parameter.
    2. Calculate the error on the x side of the cost function.
    3. Backprop the error.
    4. Update W (and b) on the X side only.
  3. Step 2 – Train Y
    1. Hold W and y constant and minimize the cost by treating x' as a free parameter.
    2. Calculate the error on the y side of the cost function.
    3. Backprop the error.
    4. Update W (and b) on the Y side only.
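
Below is a minimal NumPy sketch of one two-stage step. The architecture is an assumption made for illustration: a shared hidden code h computed from both x and y, with a separate decoder on each side. All the names (Wx, Ux, bx, and so on) are made up for the sketch, and "treating y' as a free parameter" is read as letting y' sit at its optimum (y' = y), which zeroes the y-side term so only the x-side error gets backpropped.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class RelationalAutoencoder:
    """Toy relational autoencoder: a shared hidden code h is computed
    from both x and y, and each side has its own decoder. Architecture
    and parameter names here are illustrative assumptions."""

    def __init__(self, nx, ny, nh, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.lr = lr
        # X-side parameters
        self.Wx = rng.normal(0.0, 0.1, (nh, nx))  # encoder weights, x -> h
        self.bx = np.zeros(nh)                    # X-side hidden bias
        self.Ux = rng.normal(0.0, 0.1, (nx, nh))  # decoder weights, h -> x'
        self.cx = np.zeros(nx)                    # X-side decoder bias
        # Y-side parameters
        self.Wy = rng.normal(0.0, 0.1, (nh, ny))  # encoder weights, y -> h
        self.by = np.zeros(nh)                    # Y-side hidden bias
        self.Uy = rng.normal(0.0, 0.1, (ny, nh))  # decoder weights, h -> y'
        self.cy = np.zeros(ny)                    # Y-side decoder bias

    def forward(self, x, y):
        """Feed (x, y) forward to the shared code and both reconstructions."""
        h = sigmoid(self.Wx @ x + self.bx + self.Wy @ y + self.by)
        x_hat = sigmoid(self.Ux @ h + self.cx)  # x'
        y_hat = sigmoid(self.Uy @ h + self.cy)  # y'
        return h, x_hat, y_hat

    def train_x_side(self, x, y):
        """Step 1. With y' treated as a free parameter its optimum is
        y' = y, so the y-side term vanishes; only the x-side error is
        backpropagated, and only X-side weights and biases are updated."""
        h, x_hat, _ = self.forward(x, y)
        dx = (x_hat - x) * x_hat * (1.0 - x_hat)  # x-side output delta
        dh = (self.Ux.T @ dx) * h * (1.0 - h)     # backprop into hidden code
        self.Ux -= self.lr * np.outer(dx, h)
        self.cx -= self.lr * dx
        self.Wx -= self.lr * np.outer(dh, x)      # X-side encoder only
        self.bx -= self.lr * dh

    def train_y_side(self, x, y):
        """Step 2: the mirror image, updating only Y-side parameters."""
        h, _, y_hat = self.forward(x, y)
        dy = (y_hat - y) * y_hat * (1.0 - y_hat)
        dh = (self.Uy.T @ dy) * h * (1.0 - h)
        self.Uy -= self.lr * np.outer(dy, h)
        self.cy -= self.lr * dy
        self.Wy -= self.lr * np.outer(dh, y)
        self.by -= self.lr * dh

    def two_stage_step(self, x, y):
        self.train_x_side(x, y)   # Step 1 - Train X
        self.train_y_side(x, y)   # Step 2 - Train Y
```

Wiring it up to the setup above (the sizes N = 100, M = 120 and the dimensions are arbitrary choices for the example):

```python
rng = np.random.default_rng(1)
X = rng.random((100, 8))  # N examples of x
Y = rng.random((120, 5))  # M examples of y
rae = RelationalAutoencoder(nx=8, ny=5, nh=16)
for _ in range(5000):
    x = X[rng.integers(len(X))]  # select x in X randomly
    y = Y[rng.integers(len(Y))]  # select y in Y randomly
    rae.two_stage_step(x, y)
```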
