Interesting papers (potentially annotated)


Here are a number of interesting papers that apply to my current research. I’ll try to keep them in some sort of order. As I annotate them, I’ll add them to the annotated bibliography category of this blog.

Deep and/or Wide?

The term Deep Learning seems to imply the ability to learn generalizations that can then be applied to new observations. Wide Learning, to me, seems to be more about applying that learning to new learning scenarios, i.e. transfer learning.
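As a toy illustration of the depth-versus-width tradeoff (my own sketch, not from the paper below — the layer sizes are arbitrary examples), two fully connected nets can have parameter budgets of the same order while one spends them on extra layers and the other on a single wide hidden layer:

```python
def mlp_param_count(layer_sizes):
    """Weights + biases for a fully connected net with the given layer widths."""
    return sum(n_in * n_out + n_out
               for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

deep = [784, 256, 256, 256, 10]  # four weight layers, modest width
wide = [784, 512, 10]            # two weight layers, one wide hidden layer

print("deep:", mlp_param_count(deep))  # 335114 parameters
print("wide:", mlp_param_count(wide))  # 407050 parameters
```

Depth buys composition (features of features) at roughly the same cost; width buys more parallel features at one level — which is part of what makes the deep-or-wide question interesting.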

Pandey, Gaurav, and Ambedkar Dukkipati. “To go deep or wide in learning?” Proceedings of the Seventeenth International Conference on Artificial Intelligence and Statistics, 2014.

Considered for 3-Way Autoencoders

[1] R. Memisevic, “Gradient-based learning of higher-order image features,” Proc. IEEE Int. Conf. Comput. Vis., pp. 1591–1598, Nov. 2011.

[2] R. Memisevic, K. Konda, and D. Krueger, “Zero-bias autoencoders and the benefits of co-adapting features,” arXiv preprint arXiv:1402.3337, 2014.

[3] R. Memisevic, “Learning to relate images,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 35, pp. 1829–1846, 2013.

[4] B. Hutchinson, L. Deng, and D. Yu, “Tensor deep stacking networks,” Pattern Anal. Mach. …, pp. 1–15, 2013.

[5] P. Vincent, H. Larochelle, Y. Bengio, and P.-A. Manzagol, “Extracting and composing robust features with denoising autoencoders,” Proc. 25th Int. Conf. Mach. Learn. – ICML ’08, pp. 1096–1103, 2008.

[6] Y. Bengio, A. Courville, and P. Vincent, “Representation learning: A review and new perspectives,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 35, no. 8, pp. 1798–1828, 2013.

[7] M. A. Ranzato and G. E. Hinton, “Factored 3-way restricted Boltzmann machines for modeling natural images,” Proc. Thirteenth Int. Conf. Artif. Intell. Stat., vol. 9, pp. 621–628, 2010.

[8] G. Pandey and A. Dukkipati, “To go deep or wide in learning?,” Proc. Seventeenth Int. Conf. Artif. Intell. Stat., vol. 33, p. 9, 2014.


[10] H. B. Ammar, M. E. Taylor, K. Tuyls, and G. Weiss, “Common sub-space transfer for reinforcement learning tasks,” Belgian/Netherlands Artif. Intell. Conf., 2011.

[11] K. Feuz and D. Cook, “Transfer Learning across Feature-Rich Heterogeneous Feature Spaces via Feature-Space Remapping (FSR),” ACM Trans. Intell. Syst. Technol., vol. V, no. 212, 2014.

[12] H. Ammar and D. Mocanu, “Automatically Mapped Transfer Between Reinforcement Learning Tasks via Three-Way Restricted Boltzmann Machines,” Mach. Learn. …, 2013.

[13] R. Memisevic and G. E. Hinton, “Learning to represent spatial transformations with factored higher-order Boltzmann machines,” Neural Comput., vol. 22, no. 6, pp. 1473–1492, 2010.


