A model of visual motion perception / by Pierre Bayerl
129 pages
English



[Seal of Universität Ulm: SCIENDO · DOCENDO · CURANDO]
A model of visual motion perception
Pierre Bayerl
Dissertation submitted for the degree of Dr. rer. nat.
to the Faculty of Computer Science (Fakultät für Informatik) of Universität Ulm
by Pierre Bayerl from Würzburg
2005
Universität Ulm
Department of Neural Information Processing (Abteilung Neuroinformatik)
Acting Dean: Prof. Dr. Helmuth Partsch
Referee: Prof. Dr. Heiko Neumann
Referee: Prof. Dr. Günther Palm
Referee: Prof. Dr. Reinhard Eckhorn
Date of the doctoral examination: 21.12.2005

Abstract
The neural mechanisms underlying the segregation and integration of detected motion still remain unclear to a large extent. The motion of an extended boundary can be measured locally by neurons only orthogonal to its orientation (the aperture problem), whereas this ambiguity is resolved at localized image features, e.g. corners or non-occlusion junctions. In this thesis, a novel neural model of visual motion processing is developed that involves early stages of the cortical dorsal and ventral pathways of the primate brain to integrate and segregate visual motion and, in particular, to solve the motion aperture problem. Our model makes predictions concerning the time course of cells in areas MT and V1 and serves as a means to link physiological mechanisms with perceptual behavior. We further demonstrate that our model also successfully processes natural image sequences. Moreover, we present several extensions of the neural model to investigate the influence of form information, the effects of attention, and the perception of transparent motion. The major computational bottleneck of the presented neural model is the amount of memory necessary for the representation of neural activity. In order to derive a computational mechanism for large-scale simulations, we propose a sparse coding framework for neural motion activity patterns and suggest a means by which initial activities are detected efficiently. We define algorithmic operations to implement the proposed neural mechanisms and thus realize a computationally efficient algorithmic version of our model of motion segmentation in areas V1 and MT. The presented work combines concepts and findings from computational neuroscience, neurophysiological and psychophysical observations, and computer science. The outcome of our investigations is a biologically plausible model of motion segmentation, together with a fast algorithmic implementation, which explains and predicts perceptual and neural effects of motion perception and allows optic flow to be extracted from given image sequences.
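
As a point of reference for the aperture problem mentioned above, the standard brightness-constancy formulation shows why a single local measurement is ambiguous. This is the conventional textbook derivation, included here only for illustration; it is not the neural mechanism developed in this thesis, and the symbols below are introduced here rather than taken from the text. For an image intensity I(x, y, t) that is assumed constant along a trajectory with velocity (u, v),

\[
\frac{dI}{dt} = I_x u + I_y v + I_t = 0 ,
\]

so a local detector only constrains the velocity component along the intensity gradient, i.e. normal to the local contour,

\[
v_\perp = -\frac{I_t}{\lVert \nabla I \rVert} ,
\]

while the tangential component remains unconstrained. It can only be recovered by combining measurements across different gradient orientations, e.g. at corners or junctions, which is the integration task addressed by the model.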
Acknowledgements
First of all, I would like to thank the supervisor of my thesis, Prof. Dr. Heiko Neumann.
Heiko introduced me to the field of computational neuroscience and aroused my interest
in the interdisciplinary nature of this special field. I am grateful for his ideas and
his inspiration, and I particularly acknowledge his guidance, which at the same time did not restrict my freedom to develop my own ideas and to choose my own way of carrying out my work. Importantly, I appreciate that Heiko always had time for discussions and that he encouraged me in many ways. He made it possible for me to travel to many conferences and other labs, to present my work in different countries, and to discuss it with other scientists. Thus, I owe my knowledge and experience to Heiko and
to many other scientists I met during my time as his student. Second, I would like to
thank Prof. Dr. Günther Palm for his valuable comments and for kindly undertaking
the second expert’s report. Third, I would like to thank Prof. Dr. Reinhard Eckhorn
for his commitment to act as third referee for this thesis. I am especially grateful
for his invitations to some meetings with him and his research group which gave me
opportunities for discussions and inspiration for further work.
Furthermore, I thank all my colleagues from the Department of Neural Information Processing and from our Computer and Biological Vision group for the pleasant working environment and for the social events.
Finally, I thank my wife Judith for her love. I am also deeply thankful for the support I received from my parents, which enabled my studies of computer science and thus laid the foundation for this work.
Contents
1 Introduction
1.1 Motivation
1.2 Related work
1.2.1 Motion detection and integration
1.2.2 Biological background
1.2.3 Neural modeling
1.3 Outline and style of this thesis

2 Neural model of motion perception
2.1 Introduction
2.2 Model
2.2.1 Input stage of model V1 (initial motion detection)
2.2.2 Motion processing in model areas V1 and MT (motion integration)
2.2.3 Interpretation of population code
2.3 Results
2.3.1 Empirical data
2.3.2 Quality and properties of estimated motion
2.3.3 Feedback vs. hierarchical feedforward processing
2.3.4 Feedback vs. lateral information processing
2.3.5 Interpolation properties
2.4 Discussion
2.4.1 Relevance and biological plausibility
2.4.2 Comparison with existing models of motion processing
2.5 Conclusion

3 Extension of the neural model: influence of form information
3.1 Introduction
3.2 Model extension
3.3 Results
3.4 Discussion
3.5 Conclusion

4 Extension of the neural model: influence of attention
4.1 Introduction
4.2 Model extension
4.3 Results
4.3.1 Task-specific attention
4.3.2 Experimental data and simulations: gain of feature similarity effects
4.4 Discussion and conclusion

5 Extension of the neural model: the perception of transparent motion
5.1 Introduction
5.2 Model extensions
5.3 Results
5.4 Discussion and conclusion

6 Neuromorphic algorithm for motion integration and segregation
6.1 Introduction
6.2 Neuromorphic algorithm of recurrent motion segmentation
6.2.1 Motion representation and data access
6.2.2 Algorithmic realization of initial motion estimation
6.2.3 Neuromorphic algorithm for motion integration and segregation
6.3 Results
6.3.1 Results with artificial motion sequences
6.3.2 Results with real-world motion sequences
6.3.3 Interpretation of algorithmic population codes
6.4 Discussion
6.4.1 Comparison of the neuromorphic algorithm with the full neural model
6.4.2 Comparison to the approach of (Stein, 2004)
6.4.3 Comparison to other approaches
6.5 Conclusion

7 Summary
7.1 General nature of this work
7.2 A survey of major results
7.3 Relevant publications

References

Summary (German)
1 Introduction
1.1 Motivation
The visual input we process contains motion almost all the time. Motion may be caused
by m
