Deep learning

2015 Chronicle of Higher Education article: "The Believers"

http://chronicle.com/article/The-Believers/190147/?cid=cr&utm_source=cr&utm_medium=en

Search: supervised + deep learning

Jason Weston, Frederic Ratle, and Ronan Collobert. 2008. Deep Learning via Semi-Supervised Embedding. Proceedings of the Twenty-Fifth International Conference on Machine Learning (ICML 2008). Google Research group. (pdf of slides.)

UCSD

Youngmin Cho and Lawrence K. Saul. 2009. Kernel Methods for Deep Learning. Advances in Neural Information Processing Systems 22 (NIPS 2009). Cho (http://cseweb.ucsd.edu/~yoc002/, now at Google) was a student of Saul (http://cseweb.ucsd.edu/~saul/).

Julian Mairal, Francis Bach, and Jean Ponce. ... Task-Driven Dictionary Learning. (pdf of slides.) Students of http://cseweb.ucsd.edu/~dasgupta/.

NYT

"Scientists See Promise in Deep-Learning Programs." The New York Times, November 23, 2012. By John Markoff.

Geoffrey E. Hinton. Home page

http://www.cs.toronto.edu/~hinton/absps/tics.pdf

David E. Rumelhart, Geoffrey E. Hinton & Ronald J. Williams. 1986. Learning representations by back-propagating errors. Nature 323, 533-536 (9 October 1986). doi:10.1038/323533a0. Accepted 31 July 1986.

Affiliations: Rumelhart and Williams, Institute for Cognitive Science, C-015, University of California, San Diego, La Jolla, California 92093, USA; Hinton, Department of Computer Science, Carnegie-Mellon University, Pittsburgh, Pennsylvania 15213, USA (to whom correspondence should be addressed).

Abstract: We describe a new learning procedure, back-propagation, for networks of neurone-like units. The procedure repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector. As a result of the weight adjustments, internal 'hidden' units which are not part of the input or output come to represent important features of the task domain, and the regularities in the task are captured by the interactions of these units. The ability to create useful new features distinguishes back-propagation from earlier, simpler methods such as the perceptron-convergence procedure [1].
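The procedure the abstract describes can be illustrated with a short numerical sketch: run the inputs forward through the network, measure the squared difference between the actual and desired output vectors, and move every weight a small step down the gradient of that error. The Python/NumPy sketch below is only an illustration under assumed choices (a single hidden layer of three logistic units, a learning rate of 0.5, and XOR as the toy training set); it is not the paper's original implementation.

 import numpy as np
 
 # Minimal back-propagation sketch (not the authors' code): one hidden layer,
 # logistic units, squared-error measure E = 0.5 * sum((output - target)^2).
 rng = np.random.default_rng(0)
 
 # Assumed toy task: XOR, which cannot be solved without hidden units.
 X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # input vectors
 T = np.array([[0], [1], [1], [0]], dtype=float)               # desired outputs
 
 def sigmoid(z):
     return 1.0 / (1.0 + np.exp(-z))
 
 # Connection weights and biases: 2 inputs -> 3 hidden units -> 1 output unit.
 W1 = rng.normal(scale=0.5, size=(2, 3)); b1 = np.zeros(3)
 W2 = rng.normal(scale=0.5, size=(3, 1)); b2 = np.zeros(1)
 eta = 0.5   # learning rate (assumed value)
 
 for epoch in range(20000):
     # Forward pass: compute hidden and output activations.
     H = sigmoid(X @ W1 + b1)        # internal 'hidden' units
     Y = sigmoid(H @ W2 + b2)        # actual output vector
 
     # Backward pass: propagate the output error back toward the inputs.
     dY = (Y - T) * Y * (1 - Y)      # dE / d(net input of output units)
     dH = (dY @ W2.T) * H * (1 - H)  # dE / d(net input of hidden units)
 
     # Repeatedly adjust the weights down the error gradient.
     W2 -= eta * H.T @ dY;  b2 -= eta * dY.sum(axis=0)
     W1 -= eta * X.T @ dH;  b1 -= eta * dH.sum(axis=0)
 
 # Outputs typically approach the desired vector [0, 1, 1, 0] after training.
 print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2).ravel(), 2))

After training, the hidden activations H play the role of the 'hidden' units in the abstract: they are not specified in the data, but the repeated weight adjustments shape them into features that make the input-output mapping learnable.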