
Two-stage Learning Step for Backpropagation in a Relational Autoencoder

I'm working on gradient descent / backpropagation for a relational autoencoder. I'm not really sure it's needed yet, so first I have to build a testing framework for the whole thing. Separately, I'm implementing an RL agent that learns via Least Squares Policy Iteration.

Let X and Y be two sets of training examples, of sizes N and M respectively.

Select x \in X and y \in Y at random.
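
I haven't written the cost down anywhere yet, so for concreteness assume a squared joint reconstruction error (an assumption, not a settled choice), where x' and y' are the reconstructions:

C(x, y) = ||x - x'||^2 + ||y - y'||^2

With this cost, "treating y' as a free parameter" in Step 1 means minimizing over y' with everything else held fixed, which drives the second term to zero and leaves only the x-side error to backprop; Step 2 mirrors this with x'. A code sketch of the full step follows the list.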

  1. Feed (x, y) forward
  2. Step 1 – Train X
    1. Hold W and x constant and minimize the cost by treating y' as a free parameter
    2. Calculate the error on the x side of the cost function
    3. Backprop the error
    4. Update W (and b) only on the X side
  3. Step 2 – Train Y
    1. Hold W and y constant and minimize the cost by treating x' as a free parameter
    2. Calculate the error on the y side of the cost function
    3. Backprop the error
    4. Update W (and b) only on the Y side
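
Here's a minimal numpy sketch of one two-stage step, under the squared cost above. The architecture details are my assumptions for illustration, not settled design choices: a single shared hidden layer with sigmoid activation, linear decoders, and a shared hidden bias b (which ends up updated in both stages, since with a shared layer there isn't a per-side b).

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class RelationalAE:
    """Assumed architecture: shared hidden layer h = sigmoid(Wx x + Wy y + b),
    linear decoders x' = Vx h + cx and y' = Vy h + cy."""

    def __init__(self, dx, dy, nh, scale=0.1):
        self.Wx = rng.normal(0.0, scale, (nh, dx))
        self.Wy = rng.normal(0.0, scale, (nh, dy))
        self.b = np.zeros(nh)
        self.Vx = rng.normal(0.0, scale, (dx, nh))
        self.cx = np.zeros(dx)
        self.Vy = rng.normal(0.0, scale, (dy, nh))
        self.cy = np.zeros(dy)

    def forward(self, x, y):
        h = sigmoid(self.Wx @ x + self.Wy @ y + self.b)
        return h, self.Vx @ h + self.cx, self.Vy @ h + self.cy

    def two_stage_step(self, x, y, lr=0.1):
        # Step 1 -- Train X: with y' treated as a free parameter its cost term
        # vanishes, so only the x-side error is backpropped and only the
        # X-side weights (plus the shared bias b, an assumption) are updated.
        h, x_rec, _ = self.forward(x, y)
        err_x = x_rec - x                      # gradient of the x-side term, up to a factor of 2
        grad_h = (self.Vx.T @ err_x) * h * (1.0 - h)
        self.Vx -= lr * np.outer(err_x, h)
        self.cx -= lr * err_x
        self.Wx -= lr * np.outer(grad_h, x)    # X side only
        self.b -= lr * grad_h

        # Step 2 -- Train Y: the mirror image, with x' treated as free.
        h, _, y_rec = self.forward(x, y)
        err_y = y_rec - y
        grad_h = (self.Vy.T @ err_y) * h * (1.0 - h)
        self.Vy -= lr * np.outer(err_y, h)
        self.cy -= lr * err_y
        self.Wy -= lr * np.outer(grad_h, y)    # Y side only
        self.b -= lr * grad_h
        return float(err_x @ err_x + err_y @ err_y)

if __name__ == "__main__":
    ae = RelationalAE(dx=5, dy=3, nh=8)
    x, y = rng.random(5), rng.random(3)
    for step in range(500):
        cost = ae.two_stage_step(x, y, lr=0.1)
    print(f"cost after 500 steps: {cost:.6f}")  # should shrink toward zero
```

On a single fixed (x, y) pair the cost should shrink steadily, which is the kind of check the testing framework would run over random pairs drawn from X and Y.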
