
Unsupervised Learning Multi-Layer Networks Summary
CS4619 Artificial Intelligence II
Today: Unsupervised Learning
Multi-Layer Networks
Thomas Jansen
11th January 2012
Announcement
Unfortunately no lecture next Friday
13 January 2012
Next meeting next Wednesday
18 January 2012
10am, WGB G21
2pm, WGB G15
Plans for Today
1 Unsupervised Learning
Reinforcement Learning
2 Multi-Layer Networks
Introduction and Bipolar Encoding
Learning Algorithms
3 Summary
Summary & Take Home Message
Unsupervised Reinforcement Learning
Consider Principal Component Analysis:
Goal: Reduce the dimensionality of (empirical) input data x_1, …, x_m ∈ R^n.
The first principal component w maximizes (1/m) · Σ_{i=1}^{m} |w·x_i|².
Reduce dimensionality by replacing each x_i by its distance to its projection on the f.p.c.
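As a sanity check, the first principal component can also be computed directly by eigendecomposition of the empirical covariance matrix. A minimal numpy sketch; the data set is synthetic and all names are illustrative:

```python
import numpy as np

# Hypothetical centred data: 200 points in R^3, strongly stretched
# along the first coordinate axis.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3)) * np.array([5.0, 1.0, 0.5])
X -= X.mean(axis=0)

# The first principal component is the top eigenvector of (1/m) X^T X;
# over unit vectors w it maximises (1/m) * sum_i |w . x_i|^2.
cov = X.T @ X / len(X)
eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
w = eigvecs[:, -1]                       # first principal component

# One-dimensional reduction: the coordinate of each x_i along w.
reduced = X @ w
```

On this data w lies close to the first coordinate axis, since that is the direction of largest variance.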
Linear Associators
new type of neuron: linear associator
inputs: x = (x_1, …, x_n) ∈ R^n
weights: w = (w_1, …, w_n) ∈ R^n
output: Σ_{i=1}^{n} w_i·x_i = w·x
Assume w.l.o.g. that the data are centered at the origin;
otherwise compute the mean and translate.
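The linear associator is just a dot product of weights and inputs; a one-line numpy sketch with illustrative values:

```python
import numpy as np

x = np.array([1.0, -2.0, 0.5])   # inputs x in R^n (illustrative values)
w = np.array([0.2, 0.4, -1.0])   # weights w in R^n (illustrative values)
output = w @ x                   # sum_i w_i * x_i = w . x
print(output)
```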
Oja’s Algorithm (1982)
Algorithm for computing the first principal component.
Input: X = {x_1, …, x_m} with x_1, …, x_m ∈ R^n
1. Initialisation
Select w ∈ R^n \ {0} randomly. Choose a learning constant γ with 0 < γ ≤ 1.
2. Update
Select x ∈ X uniformly at random.
Replace w by w + γ(x·w)(x − (x·w)w).
Reduce γ.
Continue at step 2.
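The update loop above can be sketched in numpy. The synthetic data, step count, and cooling schedule for γ are illustrative choices, not part of the slide:

```python
import numpy as np

def oja_first_pc(X, steps=20000, gamma0=0.001, seed=1):
    """Approximate the first principal component with Oja's rule."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=X.shape[1])
    w /= np.linalg.norm(w)                  # start normalised
    for t in range(steps):
        x = X[rng.integers(len(X))]         # select x in X uniformly at random
        y = x @ w                           # linear associator output w.x
        gamma = gamma0 / (1 + t / 2000)     # reduce gamma over time
        w += gamma * y * (x - y * w)        # Oja's update
    return w

# Hypothetical centred data stretched along the first coordinate axis.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3)) * np.array([5.0, 1.0, 0.5])
X -= X.mean(axis=0)
w = oja_first_pc(X)
```

On such data w stays close to normalised and converges towards the dominant eigendirection of the empirical covariance, as the two analysis slides that follow argue.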
Analysing Oja’s Algorithm
Claim: Oja’s algorithm keeps w close to normalised.
Case 1: |w| > 1
[Figure: vectors x, (w·x)w, −(w·x)w, and x − (w·x)w illustrating the update on w]
w is pulled in the correct direction
and, on average, reduced in length.
Analysing Oja’s Algorithm (cont.)
Claim: Oja’s algorithm keeps w close to normalised.
Case 2: |w| < 1
[Figure: vectors x, (w·x)w, −(w·x)w, and x − (w·x)w illustrating the update on w]
w is pulled in the correct direction
and, on average, increased in length.
Multi-Layer Networks
Directed weighted graph,
most often with a layered architecture:
[Figure: input layer → hidden layers → output layer]
layered ⇒ no cycles
Input units do nothing, really.
Output units are just marked as output.
Computing XOR
Remember: Single Perceptron cannot compute XOR.
A three-layered network can compute XOR.
[Figure: inputs x_1 and x_2 feed two hidden threshold units via weights 1 and −1; both hidden units have threshold 0.5 and feed an output unit with weights 1 and threshold 0.5.]
Is this the only possible net with three neurons
computing XOR?
How does this net work?
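One way to see how such a net works is to simulate it. A plain-Python sketch of the standard three-threshold-unit construction (weights ±1, thresholds 0.5; the exact wiring is an assumption consistent with the figure):

```python
def step(z, theta=0.5):
    # Threshold unit: fires iff its weighted input reaches the threshold.
    return 1 if z >= theta else 0

def xor_net(x1, x2):
    """Three threshold neurons computing XOR (weights +/-1, thresholds 0.5)."""
    h1 = step(x1 - x2)    # fires only for x1 = 1, x2 = 0
    h2 = step(x2 - x1)    # fires only for x1 = 0, x2 = 1
    return step(h1 + h2)  # OR of the two hidden units

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))
```

Each hidden unit detects one of the two asymmetric input patterns, and the output unit ORs them, which is exactly XOR.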