
# Two-stage Learning Step for Backpropagation in a Relational Autoencoder

Working on gradient descent / backpropagation for a relational autoencoder.  I'm not really sure this is needed yet, so I have to build the testing framework for it all.  Separately, I'm implementing an RL agent that uses Least Squares Policy Iteration to learn.

Let X and Y be two sets of training examples of sizes N and M respectively.

- Select $latex x \in X$ and $latex y \in Y$ randomly
- Feed $latex (x,y)$ forward

Step 1 - Train X

- Hold w and x constant and minimize cost by treating y' as a parameter
- Calculate error on x side of cost function
- Backprop error
- Update W (and b) only on the X side

Step 2 - Train Y

- Hold w and y constant and minimize cost by treating x' as a…
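The two-stage step above can be sketched in code. This is a minimal sketch, assuming a shared tanh hidden layer with untied decoders (`U_x`, `U_y`) and squared-error cost; the post doesn't give the exact architecture or cost split, so those details, the dimensions, and the learning rate are all illustrative assumptions. Step 1 backprops only the x-side reconstruction error and updates only the X-side weights (and b); Step 2 does the mirror image.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical dimensions -- not from the post.
n_x, n_y, n_h = 4, 3, 6
W_x = rng.normal(0, 0.1, (n_h, n_x)); U_x = rng.normal(0, 0.1, (n_x, n_h))
W_y = rng.normal(0, 0.1, (n_h, n_y)); U_y = rng.normal(0, 0.1, (n_y, n_h))
b = np.zeros(n_h)

def forward(x, y):
    """Feed (x, y) forward through the shared hidden layer."""
    h = np.tanh(W_x @ x + W_y @ y + b)
    return h, U_x @ h, U_y @ h

def two_stage_step(x, y, lr=0.05):
    global W_x, U_x, W_y, U_y, b
    # Step 1 -- train X: backprop only the x-reconstruction error,
    # update only the X-side weights (and b).
    h, x_hat, _ = forward(x, y)
    err = x_hat - x                     # dL/dx_hat for squared error
    dh = (U_x.T @ err) * (1 - h**2)     # backprop through tanh
    U_x -= lr * np.outer(err, h)
    W_x -= lr * np.outer(dh, x)
    b   -= lr * dh
    # Step 2 -- train Y symmetrically on the y-reconstruction error.
    h, _, y_hat = forward(x, y)
    err = y_hat - y
    dh = (U_y.T @ err) * (1 - h**2)
    U_y -= lr * np.outer(err, h)
    W_y -= lr * np.outer(dh, y)
    b   -= lr * dh

# Alternating the two stages on a fixed (x, y) pair drives both
# reconstruction errors down.
x, y = rng.normal(size=n_x), rng.normal(size=n_y)
losses = []
for _ in range(200):
    _, x_hat, y_hat = forward(x, y)
    losses.append(float(np.sum((x_hat - x)**2) + np.sum((y_hat - y)**2)))
    two_stage_step(x, y)
```

The point of the split is that each stage touches only one side's parameters, so the two domains can be trained on alternating passes without interfering gradients.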

# Preview Problem – Share the beer

Recently, I shared a problem on Facebook.  It's typically called the "sharing wine" problem, but my friend, Cyrus, thought it'd be better with beer.  I agree, so I've modified it slightly. It went something like this: we have three containers of different sizes, 30L / 11L / 7L.  The 30L is filled with beer.  Empty exactly half of the 30L using only the 11L and 7L containers. I'll introduce some notation here so that we can talk about the answer.  Let's say that we create a triple (a, b, c) indicating the amount of beer in each container at any one time, ordered largest to smallest for convenience.  Let a = the 30L container, b = the 11L container, c = the 7L container. So, starting out we have (30,0,0).  Cyrus…
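Using that (a, b, c) notation, the puzzle can also be solved mechanically: each move pours one container into another until the source is empty or the destination is full, and a breadth-first search over those states finds a shortest pour sequence. The capacities and start state are from the problem; the search framing is my own sketch, not the answer given in the post.

```python
from collections import deque

CAPS = (30, 11, 7)     # (a, b, c) container capacities
START = (30, 0, 0)     # the 30L container starts full

def pours(state):
    """All states reachable by one pour from `state`."""
    for i in range(3):
        for j in range(3):
            if i == j or state[i] == 0:
                continue
            amt = min(state[i], CAPS[j] - state[j])  # pour until empty or full
            if amt > 0:
                nxt = list(state)
                nxt[i] -= amt
                nxt[j] += amt
                yield tuple(nxt)

def solve(goal=lambda s: s[0] == 15):
    """Breadth-first search for the shortest sequence of pours."""
    frontier = deque([(START, [START])])
    seen = {START}
    while frontier:
        state, path = frontier.popleft()
        if goal(state):
            return path
        for nxt in pours(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return None

path = solve()
```

The goal test is simply a == 15, i.e. exactly half the beer remains in the 30L container; BFS guarantees the path it returns uses the fewest pours.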

# Analogy is a core concept in cognition

We begin with a couple of simple queries about familiar phenomena: “Why do babies not remember events that happen to them?” and “Why does each new year seem to pass faster than the one before?” I wouldn’t swear that I have the final answer to either one of these queries, but I do have a hunch, and I will here speculate on the basis of that hunch. And thus: the answer to both is basically the same, I would argue, and it has to do with the relentless, lifelong process of chunking — taking “small” concepts and putting them together into bigger and bigger ones, thus recursively building up a giant repertoire of concepts in the mind. How, then, might chunking provide the clue to these riddles? Well, babies’ concepts…

# C2FO/nools – A rete-based rules engine for Node

I was reminiscing over the rules-based interaction system I worked with on the Zeno robot back in the day.  In particular, I was thinking about how the deterministic, one-shot state transitions sometimes led to undesirable and aberrant behavior.  If you weren't careful, the Personality could slam back and forth between different "mental" states and even seem somewhat ADD.  Even though that behavior was mitigated by writing soft transition language into the resulting state's verbal actions, the Personality sometimes seemed to have no sense of attention or immediate autobiography. So, today I was thinking about a state-transition system where the transition actions were less one-shot.  My thinking goes like this: mental states prime us towards the following mental behaviors:

- accentuate thinking about some things
- suppress thinking about other things
- thinking about things in a certain way…
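One way to sketch that priming idea: instead of a hard transition picking the next rule, the current mental state adds a bias to each rule's salience, accentuating some candidates and suppressing others. This is a toy illustration in Python, not nools syntax; the state names, rules, and weights are all made-up assumptions.

```python
# Each mental state biases rule salience up (accentuate) or down (suppress).
# All names and numbers here are illustrative.
PRIMING = {
    "curious":  {"explore": +2.0, "withdraw": -1.0},
    "startled": {"withdraw": +2.0, "explore": -1.5},
}

RULES = [
    # (name, base_salience, condition on the current percept)
    ("explore",  1.0, lambda p: p["novelty"] > 0.5),
    ("withdraw", 1.0, lambda p: p["threat"] > 0.5),
]

def fire(mental_state, percept):
    """Pick the matching rule with the highest primed salience."""
    bias = PRIMING.get(mental_state, {})
    matches = [(base + bias.get(name, 0.0), name)
               for name, base, cond in RULES if cond(percept)]
    return max(matches)[1] if matches else None
```

With the same percept, a "curious" Personality keeps exploring while a "startled" one withdraws; because priming only tilts the competition rather than dictating the next state, the behavior degrades gracefully instead of slamming between states.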

# I hate Verdana too

Typography can be fun and zany, but there's always a purpose behind it.  Often, you select fonts for readability.  On the web, the challenge is even bigger because the delivery is so variable, with differently sized screens ranging from huge monitors to small mobile devices. So, for most applications, you need a font that helps the eyes grasp what they are seeing.  Verdana is just not one of those.  In fact, it leans towards making reading difficult.  This article, by Pamela Wilson, explains some of the problems and has a good set of examples. Go see the article here: Why I Hate Verdana - Big Brand System.

# The Ultimate Software Development Office Layout

This is a pretty decent article on office layouts for software development. I would make a few adjustments. These are based on my own observations and experience. Your mileage may vary.

1) War rooms are good for rapid product development -> driving to version 1.0 quickly. They are less productive for long-term software development, or for multiple project development.

7) Emphasize #7.

8) In fact, people should move to the space most conducive to the type of work they are doing at that time.

11) With Agile development, you should be able to break teams into no more than 4 people each.  I can't imagine a room with 12 people working in it.

Add #23) A project status board should be placed conspicuously.  It's best if this is electronic and automatically updated.…

# Depth of Tech Screen

I can't tell if I'm expecting too much from candidate developers or not. I do tech screens for various projects, and we've had quite a few candidates come through lately.  I realize I need to expand my tactics to get a better feel for expertise in the general programming area. Working on this custom product has caused me to learn some technologies in depth so I could create some of the custom UI elements that we've needed.  I do realize that not all developers have such challenging work.  However, when we've got a requirement for a developer with two years' experience in technology X, I try to compare what I've learned about that technology in the same amount of time.  I ask about scenarios that I've faced. It's the most available experience I can discuss with…

# Notes from Ammar/Mocanu – Automatically Mapped Transfer Between Reinforcement Learning Tasks via Three-Way Restricted Boltzmann Machines

Citation

H. Ammar and D. Mocanu, "Automatically Mapped Transfer Between Reinforcement Learning Tasks via Three-Way Restricted Boltzmann Machines," Mach. Learn. …, 2013.

Abstract

Existing reinforcement learning approaches are often hampered by learning tabula rasa.  Transfer for reinforcement learning tackles this problem by enabling the reuse of previously learned results, but may require an inter-task mapping to encode how the previously learned task and the new task are related.  This paper presents an autonomous framework for learning inter-task mappings based on an adaptation of restricted Boltzmann machines.  Both a full model and a computationally efficient factored model are introduced and shown to be effective in multiple transfer learning scenarios.

Quotes & Notes

Re: Random or not

> Unfortunately, learning in this model cannot be done with normal CD. The main reason is that if…
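For reference on what "normal CD" means here: one step of contrastive divergence (CD-1) on a standard binary RBM is a positive phase, one Gibbs step, and a negative phase. The sketch below is that baseline, which the paper says does not carry over to the three-way model; the dimensions and learning rate are arbitrary choices of mine.

```python
import numpy as np

rng = np.random.default_rng(0)
n_v, n_h = 6, 4
W = rng.normal(0, 0.1, (n_v, n_h))
a = np.zeros(n_v)   # visible biases
b = np.zeros(n_h)   # hidden biases

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cd1_step(v0, lr=0.1):
    """One CD-1 update: positive phase, one Gibbs step, negative phase."""
    global W, a, b
    ph0 = sigmoid(v0 @ W + b)                    # P(h = 1 | v0)
    h0 = (rng.random(n_h) < ph0).astype(float)   # sample hidden units
    pv1 = sigmoid(h0 @ W.T + a)                  # reconstruct visibles
    ph1 = sigmoid(pv1 @ W + b)                   # hidden probs for the reconstruction
    W += lr * (np.outer(v0, ph0) - np.outer(pv1, ph1))
    a += lr * (v0 - pv1)
    b += lr * (ph0 - ph1)
    return pv1

# Train on a single fixed binary pattern; reconstruction error should fall.
v = (rng.random(n_v) < 0.5).astype(float)
errors = [float(np.mean((cd1_step(v) - v) ** 2)) for _ in range(100)]
```

The trouble the paper points at is the three-way (factored) energy function: its conditionals no longer reduce to these simple per-unit sigmoids, so this sampling scheme can't be reused directly.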

# Random or Not?

One of the basic questions that needs to be answered about using the autoencoder architecture to learn a mapping function between two domains is a question of randomness, and of what model the autoencoder is learning. Do I have to pair correlated SARS samples together for input, or can I, as with a probabilistic model (see Ammar's TrRBM), introduce pairs randomly?
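To make the two options concrete, here is a small sketch of both pairing schemes; the sample placeholders are stand-ins for SARS tuples, not anything from the post.

```python
import random

# Placeholder samples standing in for SARS tuples from each task.
X = [("x_sample", i) for i in range(5)]   # source-task samples
Y = [("y_sample", i) for i in range(5)]   # target-task samples

def correlated_pairs(X, Y):
    """Pair the i-th sample of each domain together (aligned pairing)."""
    return list(zip(X, Y))

def random_pairs(X, Y, n, seed=0):
    """Draw n pairs with each side chosen independently at random."""
    rng = random.Random(seed)
    return [(rng.choice(X), rng.choice(Y)) for _ in range(n)]
```

The question, then, is whether the autoencoder needs the aligned stream to learn the inter-task mapping, or whether, like the TrRBM's probabilistic training, the randomly crossed stream carries enough signal.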

# How Artificial Intelligence And Robots Will Impact Jobs And How We Think About Work – The Diane Rehm Show

Let me coin a new term. We're talking here about the gig economy. It used to be called moonlighting, meaning that you had a daytime job. This is moonlighting without a daytime job. Maybe we should call it daylighting because in daylighting, you just piece together a life out of a series of different activities that you enjoy or you're involved in that -- each of which gives you either some personal pleasure or some kind of economic result. I think that very well could be the future, and Dean's got a very good point. Basic health care is an important pillar in being able to build a society around that concept. via How Artificial Intelligence And Robots Will Impact Jobs And How We Think About Work - The Diane…
