Course introduction:

Video tutorials on deep learning for big data, recorded by an overseas professor, with English subtitles.

Detailed contents:
├─00_Neural Networks for Machine Learning
│ └─00_Neural Networks for Machine Learning
│ ├─hinton-ml
│ │ 1.Why do we need machine learning
│ │ 1.Why do we need machine learning.mp4
│ │ 10.What perceptrons can't do [15 min].mp4
│ │ 10.What perceptrons can't do [15 min].srt
│ │ 11.Learning the weights of a linear neuron [12 min].mp4
│ │ 11.Learning the weights of a linear neuron [12 min].srt
│ │ 12.The error surface for a linear neuron [5 min].mp4
│ │ 12.The error surface for a linear neuron [5 min].srt
│ │ 13.Learning the weights of a logistic output neuron [4 min].mp4
│ │ 13.Learning the weights of a logistic output neuron [4 min].srt
│ │ 14.The backpropagation algorithm [12 min].mp4
│ │ 14.The backpropagation algorithm [12 min].srt
│ │ 15.Using the derivatives computed by backpropagation [10 min].mp4
│ │ 15.Using the derivatives computed by backpropagation [10 min].srt
│ │ 16.Learning to predict the next word [13 min].mp4
│ │ 16.Learning to predict the next word [13 min].srt
│ │ 17.A brief diversion into cognitive science [4 min].mp4
│ │ 17.A brief diversion into cognitive science [4 min].srt
│ │ 19.Neuro-probabilistic language models [8 min].mp4
│ │ 19.Neuro-probabilistic language models [8 min].srt
│ │ 2.What are neural networks
│ │ 2.What are neural networks.mp4
│ │ 20.Ways to deal with the large number of possible outputs [15 min].mp4
│ │ 20.Ways to deal with the large number of possible outputs [15 min].srt
│ │ 21.Why object recognition is difficult [5 min].mp4
│ │ 21.Why object recognition is difficult [5 min].srt
│ │ 22.Achieving viewpoint invariance [6 min].mp4
│ │ 22.Achieving viewpoint invariance [6 min].srt
│ │ 23.Convolutional nets for digit recognition [16 min].mp4
│ │ 23.Convolutional nets for digit recognition [16 min].srt
│ │ 24.Convolutional nets for object recognition [17min].mp4
│ │ 24.Convolutional nets for object recognition [17min].srt
│ │ 25.Overview of mini-batch gradient descent.mp4
│ │ 25.Overview of mini-batch gradient descent.srt
│ │ 26.A bag of tricks for mini-batch gradient descent.mp4
│ │ 26.A bag of tricks for mini-batch gradient descent.srt
│ │ 27.The momentum method.mp4
│ │ 27.The momentum method.srt
│ │ 28.Adaptive learning rates for each connection.mp4
│ │ 28.Adaptive learning rates for each connection.srt
│ │ 3.Some simple models of neurons [8 min].mp4
│ │ 3.Some simple models of neurons [8 min].srt
│ │ 31.Training RNNs with back propagation.mp4
│ │ 31.Training RNNs with back propagation.srt
│ │ 32.A toy example of training an RNN.mp4
│ │ 32.A toy example of training an RNN.srt
│ │ 33.Why it is difficult to train an RNN.mp4
│ │ 33.Why it is difficult to train an RNN.srt
│ │ 34.Long-term Short-term-memory.mp4
│ │ 34.Long-term Short-term-memory.srt
│ │ 35.A brief overview of Hessian Free optimization.mp4
│ │ 35.A brief overview of Hessian Free optimization.srt
│ │ 37.Learning to predict the next character using HF [12 mins].mp4
│ │ 37.Learning to predict the next character using HF [12 mins].srt
│ │ 38.Echo State Networks [9 min].mp4
│ │ 38.Echo State Networks [9 min].srt
│ │ 39.Overview of ways to improve generalization [12 min].mp4
│ │ 39.Overview of ways to improve generalization [12 min].srt
│ │ 4.A simple example of learning [6 min].mp4
│ │ 4.A simple example of learning [6 min].srt
│ │ 40.Limiting the size of the weights [6 min].mp4
│ │ 40.Limiting the size of the weights [6 min].srt
│ │ 41.Using noise as a regularizer [7 min].mp4
│ │ 41.Using noise as a regularizer [7 min].srt
│ │ 42.Introduction to the full Bayesian approach [12 min].mp4
│ │ 42.Introduction to the full Bayesian approach [12 min].srt
│ │ 43.The Bayesian interpretation of weight decay [11 min].mp4
│ │ 43.The Bayesian interpretation of weight decay [11 min].srt
│ │ 44.MacKay's quick and dirty method of setting weight costs [4 min].mp4
│ │ 44.MacKay's quick and dirty method of setting weight costs [4 min].srt
│ │ 45.Why it helps to combine models [13 min].mp4
│ │ 45.Why it helps to combine models [13 min].srt
│ │ 46.Mixtures of Experts [13 min].mp4
│ │ 46.Mixtures of Experts [13 min].srt
│ │ 47.The idea of full Bayesian learning [7 min].mp4
│ │ 47.The idea of full Bayesian learning [7 min].srt
│ │ 48.Making full Bayesian learning practical [7 min].mp4
│ │ 48.Making full Bayesian learning practical [7 min].srt
│ │ 49.Dropout [9 min].mp4
│ │ 49.Dropout [9 min].srt
│ │ 5.Three types of learning [8 min].mp4
│ │ 5.Three types of learning [8 min].srt
│ │ 50.Hopfield Nets [13 min].mp4
│ │ 50.Hopfield Nets [13 min].srt
│ │ 51.Dealing with spurious minima [11 min].mp4
│ │ 51.Dealing with spurious minima [11 min].srt
│ │ 52.Hopfield nets with hidden units [10 min].mp4
│ │ 52.Hopfield nets with hidden units [10 min].srt
│ │ 53.Using stochastic units to improv search [11 min].mp4
│ │ 53.Using stochastic units to improv search [11 min].srt
│ │ 54.How a Boltzmann machine models data [12 min].mp4
│ │ 54.How a Boltzmann machine models data [12 min].srt
│ │ 55.Boltzmann machine learning [12 min].mp4
│ │ 55.Boltzmann machine learning [12 min].srt
│ │ 57.Restricted Boltzmann Machines [11 min].mp4
│ │ 57.Restricted Boltzmann Machines [11 min].srt
│ │ 58.An example of RBM learning [7 mins].mp4
│ │ 58.An example of RBM learning [7 mins].srt
│ │ 59.RBMs for collaborative filtering [8 mins].mp4
│ │ 59.RBMs for collaborative filtering [8 mins].srt
│ │ 6.Types of neural network architectures [7 min].mp4
│ │ 6.Types of neural network architectures [7 min].srt
│ │ 60.The ups and downs of back propagation [10 min].mp4
│ │ 60.The ups and downs of back propagation [10 min].srt
│ │ 61.Belief Nets [13 min].mp4
│ │ 61.Belief Nets [13 min].srt
│ │ 62.Learning sigmoid belief nets [12 min].mp4
│ │ 62.Learning sigmoid belief nets [12 min].srt
│ │ 63.The wake-sleep algorithm [13 min].mp4
│ │ 63.The wake-sleep algorithm [13 min].srt
│ │ 64.Learning layers of features by stacking RBMs [17 min].mp4
│ │ 64.Learning layers of features by stacking RBMs [17 min].srt
│ │ 65.Discriminative learning for DBNs [9 mins].mp4
│ │ 65.Discriminative learning for DBNs [9 mins].srt
│ │ 66(1).What happens during discriminative fine-tuning
│ │ 66.What happens during discriminative fine-tuning
│ │ 67.Modeling real-valued data with an RBM [10 mins].mp4
│ │ 67.Modeling real-valued data with an RBM [10 mins].srt
│ │ 69.From PCA to autoencoders [5 mins].mp4
│ │ 69.From PCA to autoencoders [5 mins].srt
│ │ 70.Deep auto encoders [4 mins].mp4
│ │ 70.Deep auto encoders [4 mins].srt
│ │ 71.Deep auto encoders for document retrieval [8 mins].mp4
│ │ 71.Deep auto encoders for document retrieval [8 mins].srt
│ │ 72.Semantic Hashing [9 mins].mp4
│ │ 72.Semantic Hashing [9 mins].srt
│ │ 73.Learning binary codes for image retrieval [9 mins].mp4
│ │ 73.Learning binary codes for image retrieval [9 mins].srt
│ │ 74.Shallow autoencoders for pre-training [7 mins].mp4
│ │ 74.Shallow autoencoders for pre-training [7 mins].srt
│ │ 8.A geometrical view of perceptrons [6 min].mp4
│ │ 8.A geometrical view of perceptrons [6 min].srt
│ │ 9.Why the learning works [5 min].mp4
│ │ 9.Why the learning works [5 min].srt
│ │
│ └─neuralnets-2012-001
│ ├─01_Lecture1
│ │ 01_Why_do_we_need_machine_learning_13_min.mp4
Related resources