Notes from Ammar/Mocanu – Automatically Mapped Transfer Between Reinforcement Learning Tasks via Three-Way Restricted Boltzmann Machines

Annotations
Citation
H. Ammar and D. Mocanu, “Automatically Mapped Transfer Between Reinforcement Learning Tasks via Three-Way Restricted Boltzmann Machines,” Mach. Learn. …, 2013.

Abstract
Existing reinforcement learning approaches are often hampered by learning tabula rasa. Transfer for reinforcement learning tackles this problem by enabling the reuse of previously learned results, but may require an inter-task mapping to encode how the previously learned task and the new task are related. This paper presents an autonomous framework for learning inter-task mappings based on an adaptation of restricted Boltzmann machines. Both a full model and a computationally efficient factored model are introduced and shown to be effective in multiple transfer learning scenarios.

Quotes & Notes
Re: Random or not
Unfortunately, learning in this model cannot be done with normal CD. The main reason is that if…
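For orientation, a minimal sketch of the kind of energy function a factored three-way RBM uses (following the gated formulation common in this literature; the paper's exact parameterization may differ), with x and y read here as source-task and target-task samples and h the hidden units that mediate the inter-task mapping:

```latex
E(x, y, h) = -\sum_{f}\Big(\sum_i w^{x}_{if}\, x_i\Big)\Big(\sum_j w^{y}_{jf}\, y_j\Big)\Big(\sum_k w^{h}_{kf}\, h_k\Big)
             - \sum_i b^{x}_i\, x_i - \sum_j b^{y}_j\, y_j - \sum_k b^{h}_k\, h_k
```

The factored form replaces a full three-way weight tensor over (x, y, h) with three factor matrices, which is where the computational saving of the factored model comes from.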
Read More
Notes from Memisevic – Learning to Relate Images

Annotations, Dissertation, General, Research
Citation
Memisevic, Roland. “Learning to Relate Images.” IEEE Transactions on Pattern Analysis and Machine Intelligence 35, no. 8 (2013): 1829–1846. doi:10.1109/TPAMI.2013.53.

Abstract
A fundamental operation in many vision tasks, including motion understanding, stereopsis, visual odometry, or invariant recognition, is establishing correspondences between images or between images and data from other modalities. Recently, there has been increasing interest in learning to infer correspondences from data using relational, spatiotemporal, and bilinear variants of deep learning methods. These methods use multiplicative interactions between pixels or between features to represent correlation patterns across multiple images. In this paper, we review the recent work on relational feature learning, and we provide an analysis of the role that multiplicative interactions play in learning to encode relations. We also discuss how square-pooling and complex cell models can…
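As a worked restatement (mine, not a quote from the paper) of the square-pooling connection mentioned at the end of the excerpt: a mapping unit that multiplies filter responses from the two images can be rewritten as a difference of squared responses over sums and differences of those filters, which is exactly the form of energy/complex-cell models:

```latex
m_f \;=\; \big(w_f^{\top} x\big)\big(v_f^{\top} y\big)
    \;=\; \tfrac{1}{4}\Big[\big(w_f^{\top} x + v_f^{\top} y\big)^{2} \;-\; \big(w_f^{\top} x - v_f^{\top} y\big)^{2}\Big]
```

So multiplicative interactions between two images can be emulated with squaring nonlinearities applied to summed filter responses, which is how square-pooling and complex cell models enter the discussion.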
Read More
Notes from Memisevic – Gradient-based learning of higher-order image features

Annotations, Dissertation, Research
Citation
Memisevic, Roland. “Gradient-Based Learning of Higher-Order Image Features.” Proceedings of the IEEE International Conference on Computer Vision (November 2011): 1591–1598. doi:10.1109/ICCV.2011.6126419.

Abstract
Recent work on unsupervised feature learning has shown that learning on polynomial expansions of input patches, such as on pair-wise products of pixel intensities, can improve the performance of feature learners and extend their applicability to spatio-temporal problems, such as human action recognition or learning of image transformations. Learning of such higher-order features, however, has been much more difficult than standard dictionary learning, because of the high dimensionality and because standard learning criteria are not applicable. Here, we show how one can cast the problem of learning higher-order features as the problem of learning a parametric family of manifolds. This allows us to apply a variant…
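To make the pair-wise-product idea concrete, here is a minimal NumPy sketch of a factored gated autoencoder of the kind this line of work trains with gradient descent; the dimensions, variable names, and weight choices are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Dimensions are illustrative, not taken from the paper.
n_x, n_y, n_factors, n_maps = 64, 64, 32, 16

# Factor loadings for the two images and for the mapping units.
U = rng.normal(scale=0.1, size=(n_factors, n_x))    # filters on image x
V = rng.normal(scale=0.1, size=(n_factors, n_y))    # filters on image y
W = rng.normal(scale=0.1, size=(n_maps, n_factors)) # pools factors into mapping units

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def encode(x, y):
    """Mapping units respond to how x and y relate, via element-wise
    products of factor responses (the higher-order interaction)."""
    f = (U @ x) * (V @ y)          # multiplicative factor interactions
    return sigmoid(W @ f)          # mapping-unit activations

def reconstruct_y(x, m):
    """Reconstruct y conditioned on x and the inferred mapping m."""
    f = (U @ x) * (W.T @ m)
    return V.T @ f

# Toy usage: infer the mapping between two random patches and
# reconstruct the second patch from the first.
x = rng.normal(size=n_x)
y = rng.normal(size=n_y)
m = encode(x, y)
y_hat = reconstruct_y(x, m)
loss = np.mean((y - y_hat) ** 2)   # reconstruction error to back-propagate
```

Training would back-propagate the reconstruction error through the multiplicative interaction, which is broadly the gradient-based route the title points to.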
Read More
Notes from Lazaric – Transfer in Reinforcement Learning

Annotations, Dissertation
See more bibliography here

Abstract
Transfer in reinforcement learning is a novel research area that focuses on the development of methods to transfer knowledge from a set of source tasks to a target task. Whenever the tasks are similar, the transferred knowledge can be used by a learning algorithm to solve the target task and significantly improve its performance (e.g., by reducing the number of samples needed to achieve a nearly optimal performance). In this chapter we provide a formalization of the general transfer problem, we identify the main settings which have been investigated so far, and we review the most important approaches to transfer in reinforcement learning.

Finds
This paper presents a formal framework for transfer in reinforcement learning, differentiating algorithmic approaches by the type of knowledge transferred: instances, representation, or parameters.

Goal: "identify the characteristics…
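As a purely illustrative reading of the "instances" category in that taxonomy (a toy sketch, not an algorithm from the chapter; the function name and weighting scheme are assumptions), instance transfer can be pictured as pooling source-task transitions with the scarcer target-task transitions before fitting a value function:

```python
import numpy as np

def pool_transitions(source_batches, target_batch, source_weight=0.5):
    """Combine source and target (s, a, r, s') transitions,
    down-weighting the source samples."""
    pooled, weights = [], []
    for batch in source_batches:
        pooled.extend(batch)
        weights.extend([source_weight] * len(batch))
    pooled.extend(target_batch)
    weights.extend([1.0] * len(target_batch))
    return pooled, np.asarray(weights)

# The pooled, weighted transitions would then feed a sample-based solver
# (e.g., a weighted regression step inside fitted Q-iteration) for the
# target task, which is where the hoped-for reduction in the number of
# target samples comes from.
```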
Read More