Dissertation Bibliography

Here’s a list of interesting papers broken down by topic:

Transfer Learning

Transfer learning is a central theme of this research. It is the general idea that knowledge acquired on one task can be used to improve the process of learning something new. A toy illustration of this idea follows the references below.

L. Torrey and J. Shavlik, “Transfer Learning,” in Handbook of Research on Machine Learning Applications, IGI Global, 2009, pp. 1–22.

W. Dai, O. Jin, G. R. Xue, Q. Yang, and Y. Yu, “Eigentransfer: a unified framework for transfer learning,” in Proceedings of the 26th Annual International Conference on Machine Learning, 2009, pp. 193–200.

K. D. Feuz, “Transfer Learning across Feature-Rich Heterogeneous Feature Spaces via Feature-Space Remapping (FSR),” ACM Trans. Intell. Syst. Technol., 2014.

M. T. Rosenstein, Z. Marx, L. P. Kaelbling, and T. G. Dietterich, “To Transfer or Not To Transfer,” in NIPS 2005 Workshop on Inductive Transfer, 2005, pp. 1–4.
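
As a toy illustration of the idea above (my own example, not drawn from any of the papers listed): weights learned on a source task can warm-start learning on a related target task. Everything here is an assumption made for the sketch, including the linear model, the squared-error objective, and the synthetic data.

```python
# A toy sketch of transfer learning: reuse source-task weights to
# warm-start gradient descent on a related target task.
import numpy as np

rng = np.random.default_rng(0)

def train(X, y, w0, lr=0.1, steps=50):
    """Plain gradient descent on mean squared error, starting from w0."""
    w = w0.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# Source task: plenty of data, learned from scratch.
w_true_src = np.array([1.0, -2.0, 0.5])
X_src = rng.normal(size=(200, 3))
y_src = X_src @ w_true_src
w_src = train(X_src, y_src, w0=np.zeros(3))

# Target task: related weights, but far less data and a tiny training budget.
w_true_tgt = w_true_src + 0.2
X_tgt = rng.normal(size=(20, 3))
y_tgt = X_tgt @ w_true_tgt

w_cold = train(X_tgt, y_tgt, w0=np.zeros(3), steps=5)   # learn from scratch
w_warm = train(X_tgt, y_tgt, w0=w_src, steps=5)         # transfer source weights

print("cold-start error:", np.linalg.norm(w_cold - w_true_tgt))
print("warm-start error:", np.linalg.norm(w_warm - w_true_tgt))
```

With only a handful of target samples and a few gradient steps, the warm start lands much closer to the target solution than the cold start, which is the basic payoff transfer learning is after.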

Reinforcement Transfer Learning [key]

This research approaches knowledge transfer by creating a mapping between the spaces that define one task (the source) and those that define another (the target). To do this, it learns a common subspace between the two tasks and uses that subspace as a model for transforming samples from one space into samples in the other. This was shown in the following two papers by Ammar et al.; a minimal sketch of the subspace idea follows the references below.

H. B. Ammar, M. E. Taylor, K. Tuyls, and G. Weiss, “Common subspace transfer for reinforcement learning tasks,” Belgian/Netherlands Artif. Intell. Conf., 2011.

H. Ammar and D. Mocanu, “Automatically Mapped Transfer Between Reinforcement Learning Tasks via Three-Way Restricted Boltzmann Machines,” Mach. Learn. …, 2013.

A. Lazaric, “Transfer in Reinforcement Learning: a Framework and a Survey,” in Reinforcement Learning, vol. 12, M. Wiering and M. van Otterlo, Eds. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012, pp. 143–173.

M. E. Taylor and P. Stone, “Cross-domain transfer for reinforcement learning,” in Proceedings of the 24th international conference on Machine learning, 2007, pp. 879–886.
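
To make the common-subspace idea concrete, here is a minimal sketch under strong simplifying assumptions: the paired source/target samples, the linear reduced-rank map, and all dimensions are invented for the example. Ammar et al. learn the inter-task mapping itself (via sparse coding or three-way RBMs) rather than assuming paired data, so treat this as an illustration of the goal, not their method.

```python
# A minimal common-subspace sketch: map source-task samples into the
# target task's space through a shared low-dimensional subspace.
import numpy as np

rng = np.random.default_rng(1)

d_src, d_tgt, k, n = 4, 6, 2, 300  # source/target state dims, subspace dim, samples

# Synthetic paired samples: both spaces are driven by a shared latent factor,
# which is exactly the structure a common subspace is meant to capture.
Z = rng.normal(size=(n, k))                                               # hidden common factor
S = Z @ rng.normal(size=(k, d_src)) + 0.01 * rng.normal(size=(n, d_src))  # source samples
T = Z @ rng.normal(size=(k, d_tgt)) + 0.01 * rng.normal(size=(n, d_tgt))  # target samples

# Reduced-rank regression: fit a full linear map S -> T, then truncate it to
# rank k, forcing the mapping to pass through a k-dimensional subspace.
W, *_ = np.linalg.lstsq(S, T, rcond=None)
_, _, Vt = np.linalg.svd(S @ W, full_matrices=False)
W_k = W @ Vt[:k].T @ Vt[:k]

# Translate a new source-task sample into the target task's space.
s_new = rng.normal(size=(1, d_src))
print("mapped target-space sample:", (s_new @ W_k).round(2))
```

In the actual transfer setting, the mapped samples would seed the target task's learner (for example, as initial experience) rather than being used directly.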

Building [Common] Feature Spaces [key]

R. Memisevic, “Gradient-based learning of higher-order image features,” Proc. IEEE Int. Conf. Comput. Vis., pp. 1591–1598, Nov. 2011.

R. Memisevic, “Learning to relate images,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 35, no. 8, pp. 1829–1846, 2013.

P. Vincent, H. Larochelle, Y. Bengio, and P.-A. Manzagol, “Extracting and composing robust features with denoising autoencoders,” Proc. 25th Int. Conf. Mach. Learn. – ICML ’08, pp. 1096–1103, 2008.

Common Feature Space [supporting]

T. Van De Cruys, L. Rimell, T. Poibeau, and A. Korhonen, “Multi-way Tensor Factorization for Unsupervised Lexical Acquisition,” Proc. COLING, vol. 2, pp. 2703–2720, 2012.

L. Deng, “Three Classes of Deep Learning Architectures and Their Applications: A Tutorial Survey,” Microsoft Research, 2013.

B. Hutchinson, L. Deng, and D. Yu, “Tensor deep stacking networks,” Pattern Anal. Mach. …, pp. 1–15, 2013.

H. Lee, R. Grosse, R. Ranganath, and A. Y. Ng, “Convolutional Deep Belief Networks for Scalable Unsupervised Learning of Hierarchical Representations,” in Proceedings of the 26th International Conference on Machine Learning, 2009.

M. Nickel, V. Tresp, and H.-P. Kriegel, “A Three-Way Model for Collective Learning on Multi-Relational Data,” in Proceedings of the 28th International Conference on Machine Learning, 2011, pp. 809–816.

J. P. Royer, N. Thirion-Moreau, and P. Comon, “Nonnegative 3-way tensor factorization taking into account possible missing data,” in European Signal Processing Conference, 2012, pp. 71–75.

D. Yu, L. Deng, and F. Seide, “The deep tensor neural network with applications to large vocabulary speech recognition,” Audio, Speech, Lang. …, 2013.

Reinforcement Transfer Learning [supporting]

H. Ammar, M. Taylor, and K. Tuyls, “Reinforcement Learning Transfer using a Sparse Coded Inter-Task Mapping,” 2011.

H. Ammar, K. Tuyls, and M. Taylor, “Reinforcement learning transfer via sparse coding,” Proc. 11th Int. Conf. Auton. Agents Multiagent Syst., vol. 1, pp. 383–390, 2012.

M. Asadi and M. Huber, “Effective control knowledge transfer through learning skill and representation hierarchies,” in Proceedings of the 20th International Joint Conference on Artificial Intelligence, 2007, pp. 2054–2059.

M. Asadi, “Learning State and Action Space Hierarchies for Reinforcement Learning using Action-Dependent Partitioning,” PhD thesis, University of Texas at Arlington, 2006.

S. Barrett, M. E. Taylor, and P. Stone, “Transfer learning for reinforcement learning on a physical robot,” in Ninth International Conference on Autonomous Agents and Multiagent Systems-Adaptive Learning Agents Workshop (AAMAS-ALA), 2010.

L. A. Celiberto Jr., J. P. Matsuura, R. L. de Mantaras, and R. A. C. Bianchi, “Using Transfer Learning to Speed-Up Reinforcement Learning: A Case-Based Approach,” 2010 Lat. Am. Robot. Symp. Intell. Robot. Meet., pp. 55–60, Oct. 2010.

G. D. Konidaris, “Autonomous robot skill acquisition,” PhD thesis, University of Massachusetts Amherst, 2011.

G. Konidaris and A. Barto, “Autonomous shaping: Knowledge transfer in reinforcement learning,” in Proceedings of the 23rd international conference on Machine learning, 2006, pp. 489–496.

A. Lazaric, “Transfer from Multiple MDPs,” arXiv preprint arXiv:1108.6211, pp. 1–20, Aug. 2011.

M. E. Taylor and P. Stone, “An Introduction to Inter-task Transfer for Reinforcement Learning,” AI Mag., vol. 32, no. 1, p. 15, 2011.

M. E. Taylor and P. Stone, “Representation transfer for reinforcement learning,” in AAAI 2007 Fall Symposium on Computational Approaches to Representation Change during Learning and Development, 2007, pp. 78–85.

M. E. Taylor and P. Stone, “Transfer Learning for Reinforcement Learning Domains: A Survey,” J. Mach. Learn. Res., vol. 10, pp. 1633–1685, 2009.

M. Taylor, N. Jong, and P. Stone, “Transferring Instances for Model-Based Reinforcement Learning,” in Machine Learning and Knowledge Discovery in Databases, vol. 5212, W. Daelemans, B. Goethals, and K. Morik, Eds. Springer Berlin / Heidelberg, 2008, pp. 488–505.

L. Torrey, “Lightweight Adaptation in Model-Based Reinforcement Learning,” in Workshops at the Twenty-Fifth AAAI Conference on Artificial Intelligence, 2011, pp. 44–49.

L. Torrey, J. Shavlik, and T. Walker, “Relational macros for transfer in reinforcement learning,” in Inductive Logic Programming, H. Blockeel, J. Ramon, J. Shavlik, and P. Tadepalli, Eds. Berlin: Springer Berlin / Heidelberg, 2008, pp. 254–268.

L. Torrey, J. Shavlik, T. Walker, and R. Maclin, “Skill acquisition via transfer learning and advice taking,” in Machine Learning: ECML 2006, 2006, pp. 425–436.
