
Neural architectures for unifying brightness perception and image processing [Elektronische Ressource] / Matthias Sven Keil


Neural Architectures for Unifying
Brightness Perception and Image
Processing
Instituto de Optica (CSIC)
Image and Vision Department
Serrano 121, E-28006 Madrid (Spain)
and
Abteilung Neuroinformatik
Fakultät für Informatik
Universität Ulm
Albert-Einstein-Allee, D-89069 Ulm (Germany)
Dissertation submitted for the degree of Dr. rer. nat.
at the Fakultät für Informatik, Universität Ulm
Matthias Sven Keil, from Hof an der Saale
(published 2002)

Acting Dean: Prof. Dr. F. W. von Henke
Reviewer 1: Prof. Dr. Heiko Neumann
Reviewer 2: Prof. Dr. Günther Palm
Reviewer 3: Dr. Gabriel Cristóbal
Date of doctoral examination: 16 June 2003

Contents
1 Biophysical Principles 6
1.1 Biological neurons and the equivalent circuit . . . . . . . . . . . . . . 6
1.2 The membrane equation of a passive neuron . . . . . . . . . . . . . . 7
1.2.1 Synaptic input . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.3 Realistic vs. abstract modeling of biological neurons . . . . . . . . . 13
1.3.1 Spike rate vs. mean firing rate . . . . . . . . . . . . . . . . . 13
1.3.2 Driving Potential vs. Potential-Independent Synaptic Input . 14
1.4 Dendrites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
1.5 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2 An Introduction to Brightness Perception 20
2.1 Luminance, brightness, and the visual pathway . . . . . . . . . . . . 21
2.2 Retinal Ganglion cells constitute the retinal output . . . . . . . . . . 22
2.2.1 Retinal ganglion cells respond to luminance contrasts . . . . 22
2.2.2 The difference-of-Gaussian (DOG) model . . . . . . . . . . . 22
2.2.3 Nonlinearly summing ganglion cells . . . . . . . . . . . . . . . 23
2.2.4 Ganglion cells in the primate retina . . . . . . . . . . . . . . 23
2.3 Beyond the retina - cortical representations of surfaces . . . . . . . . 26
2.3.1 Viewing brightness perception as a coding problem . . . . . . 26
2.3.2 Cortical surface representations . . . . . . . . . . . . . . . . . 26
2.3.3 Creating surface representations - the filling-in hypothesis . . 27
2.3.4 Neurophysiological correlate for filling-in . . . . . . . . . . . . 29
2.3.5 Filling-in models of brightness perception . . . . . . . . . . . 30
2.3.6 Formal description of standard filling-in . . . . . . . . . . . . 32
2.3.7 Filling-in and inverse problems . . . . . . . . . . . . . . . . . 32
2.3.8 Standard filling-in is a special case of confidence-based filling-in 34
2.4 Models for brightness perception and the anchoring problem . . . . . 34
2.4.1 An extra luminance-driven or low-pass channel . . . . . . . . 35
2.4.2 Superimposing band-pass filters . . . . . . . . . . . . . . . . . 37
2.4.3 Directional filling-in (1-D) . . . . . . . . . . . . . . . . . . . . 39
2.4.4 The multiplexed retinal code - a novel approach . . . . . . . 39
2.5 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
3 Novel Retinal Models 41
3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
3.2 The luminance correlation coefficient (LCC) . . . . . . . . . . . . . . 42
3.3 The standard model of retinal ganglion cells . . . . . . . . . . . . . . 43
3.3.1 Differential equations for the membrane potential . . . . . . . 43
3.3.2 Steady-state solutions . . . . . . . . . . . . . . . . . . . . . . 44
3.3.3 Choice of receptive field parameter . . . . . . . . . . . . . . . 45
3.3.4 Positions of retinal responses relative to luminance discontinuities . . . 46
3.3.5 Luminance correlation coefficient with the standard model . . 48
3.4 How luminance information may be passed into the cortex . . . . . . 49
3.4.1 Neurophysiological evidence - the extensive disinhibitory surround (DIR) . . . 49
3.4.2 Incorporating the three-Gaussian model into the standard retinal model . . . 50
3.5 Novel retinal models - multiplexing contrast and luminance in parallel channels . . . 52
3.5.1 A novel model for retinal ganglion cells . . . . . . . . . . . . 53
3.5.2 Model I - divisive gain control . . . . . . . . . . . . . . . . . . 56
3.5.3 Model II - multiplicative gain control . . . . . . . . . . . . . . 59
3.5.4 Improving the modulation depth of the multiplicative gain control . . . 61
3.5.5 Model III - saturating multiplicative gain control . . . . . . . 62
3.5.5 Model III - saturating multiplicative gain control . . . . . . . 62
3.6 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
4 A New Role for Even Simple Cells 65
4.1 A brief overview of the proposed architecture . . . . . . . . . . . . . 65
4.2 Formal description of the texture system . . . . . . . . . . . . . . . . 67
4.2.1 Orientation selectivity as quasi one dimensional framework . 67
4.2.2 Detecting even symmetric features ("texture") . . . . . . . . 70
4.3 Results and discussion . . . . . . . . . . . . . . . . . . . . . . . . . . 75
5 A Novel Nonlinear Filling-in Framework 79
5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
5.1.1 Limitations of Filling-in . . . . . . . . . . . . . . . . . . . . . 80
5.2 Detecting odd symmetric features . . . . . . . . . . . . . . . . . . . . 84
5.2.1 Excitatory input into an odd symmetric simple cell . . . . . . 85
5.2.2 Inhibitory input into an odd symmetric simple cell . . . . . . 85
5.2.3 Odd symmetric cell . . . . . . . . . . . . . . . . . . . . . . . . 86
5.3 Gating of multiplexed activity by odd-cell activity . . . . . . . . . . 86
5.3.1 Combination of orientation channels . . . . . . . . . . . . . . 88
5.4 Generalized diffusion operators . . . . . . . . . . . . . . . . . . . . . 89
5.5 BEATS filling-in diffusion layer . . . . . . . . . . . . . . . . . . . . . 90
5.6 Exploring the parameter space for surface syncytia . . . . . . . . . . 93
5.7 Simulations of brightness illusions . . . . . . . . . . . . . . . . . . . . 97
5.7.1 Craik-O'Brien-Cornsweet effect (COCE) . . . . . . . . . . . . 97
5.7.2 Grating induction . . . . . . . . . . . . . . . . . . . . . . . . 99
5.7.3 Chevreul's illusion . . . . . . . . . . . . . . . . . . . . . . . . 106
5.7.4 A modified Chevreul illusion . . . . . . . . . . . . . . . . . . 108
5.7.5 Simultaneous brightness contrast . . . . . . . . . . . . . . . . 109
5.7.6 White's effect (Munker-White effect) . . . . . . . . . . . . . . 111
5.7.7 Benary cross . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
5.7.8 The Hermann/Hering grid . . . . . . . . . . . . . . . . . . . . 116
5.7.9 The scintillating grid illusion . . . . . . . . . . . . . . . . . . 118
5.8 Surface representations with real-world images . . . . . . . . . . . . 118
5.9 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
6 Recovering luminance gradients 132
6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
6.2 Formal description of the gradient system . . . . . . . . . . . . . . . 133
6.2.1 Detecting linear and nonlinear luminance gradients . . . . . . 133
6.2.2 Recovering luminance gradients . . . . . . . . . . . . . . . . . 135
6.3 Simulations of brightness illusions . . . . . . . . . . . . . . . . . . . . 139
6.3.1 Mach bands . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
6.3.2 Sine wave gratings and Gabor patches . . . . . . . . . . . . . 147
6.4 Simulations with real-world images . . . . . . . . . . . . . . . . . . . 147
6.4.1 Varying the feature inhibition weight . . . . . . . . . . . . . . 149
6.5 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
7 Binding of surfaces, texture, and gradients 161
7.1 Combining maps computationally . . . . . . . . . . . . . . . . . . . . 161
7.2 Combining maps in the brain . . . . . . . . . . . . . . . . . . . . . . 162
8 Zusammenfassung (in German) 164
A Nonlinear Diffusion 166
A.0.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
A.0.2 Formal Description . . . . . . . . . . . . . . . . . . . . . . . . 167
A.0.3 Global normalization by local interactions (dynamic normalization) . . . 173
A.0.4 Nonlinear contrast extraction . . . . . . . . . . . . . . . . . . 176
A.0.5 Could dynamic normalization account for brightness phenomena? . . . 180
A.0.6 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189

An overview
The architecture for brightness processing proposed in the present work aims
to unify two seemingly diverging goals: image processing and brightness
perception. A successful unification has not been achieved so far, since models
which predict brightness phenomena only rarely produce meaningful results when
processing real-world images (although some results have been demonstrated, e.g.
[Sepp & Neumann, 1999]). On the other hand, models for image processing tasks
(typically coding or denoising), which often claim to provide some account of early
vision, fail to predict phenomena associated with brightness perception. Usually,
both model classes compute their output by superimposing processed filter outputs
over various scales and orientations, whereby filter outputs are processed in order
to fulfill a certain pre-defined goal (coding, denoising, predicting psychophysical
results, etc.). None of these models has achieved any segregation of the visual input
in a way compatible with object recognition; rather, these models create only an
internal (or cortical) representation of the visual input, thus deferring segregation
mechanisms to higher-level cortical processing.
Furthermore, there is no model available for processing two-dimensional lu-
minance patterns which comes up with a neurophysiologically plausible so-
lution to the anchoring problem (although a one-dimensional solution was
suggested by [Arrington, 1996]). This problem is commonly solved by employ-
ing an additional "luminance channel" in the form of a low-pass filtered
(or large-scale band-passed, e.g. [Sepp & Neumann, 1999]) version of the vi-
sual input, e.g. [Pessoa et al., 1995, du Buf & Fischer, 1995, Neumann, 1996,
McArthur & Moulden, 1999, Blakeslee & McCourt, 1999]. Yet, evidence support-
ing the existence of such a channel is still lacking.
In this thesis a novel architecture for foveal brightness perception is presented, in
agreement with both neurophysiological and psychophysical data (see figure 4.1 on
page 65). It is proposed that cortical simple cells of different symmetries (even,
odd) and sizes extract different aspects from the visual input, which are (i) texture
(here defined as small-scale even symmetric features, such as lines and points,
chapter 4), (ii) surfaces (corresponding to small-scale odd symmetric features for
building cortical surface representations, chapter 5), and (iii) luminance gradients
(corresponding to large-scale even and odd symmetric features, for example
out-of-focus lines or edges, chapter 6). Simulations show how this segregation
process renders cortical representations of object surfaces invariant to noise and
illumination gradients.
Also, a neurophysiologically plausible solution to the anchoring problem is sug-
gested by proposing a "multiplexed" retinal code which at the same time represents
information about contrast and brightness (ON-cell) and contrast and darkness
(OFF-cell) of a visual input (chapter 3).
The architecture builds upon filling-in theory [Gerrits & Vendrik, 1970,
Grossberg & Todorovic, 1988]. In the present work, however, filling-in is im-
plemented with a novel nonlinear diffusion paradigm ("BEATS" filling-in), instead
of the linear diffusion mechanism that is normally used (nonlinear diffusion
is presented and analyzed in appendix A).
Chapter 1
Biophysical Principles
This chapter gives a concise introduction to the biophysical concepts of neurons.
The equation describing a neuron's membrane potential is derived, which
serves as a "workhorse" for the computational modeling level used in
subsequent chapters. Thus, this chapter examines the interplay between the bio-
physical level of description (low and detailed) and the computational level of
description (higher, but more abstract). Above all, the following questions will
be addressed: (i) What computations can be carried out by biological neurons?
This endows us with a set of biophysically plausible mathematical operations on the
computational description level. (ii) Which description level should be chosen for
our purposes? It certainly makes no sense to model ionic channels, dendritic com-
partments, etc. Rather, on a computational level, these details are approximated
by corresponding mathematical operations. (iii) Under which conditions does a
simplified description of a neuron yield biologically plausible results? This question
addresses the interplay between biophysical details on the one hand and an ade-
quate formulation on the computational level of description on the other. As it will
turn out, these questions cannot be answered independently. This chapter builds
to a large extent on the books of [Koch, 1999] and [Kandel et al., 2000].
1.1 Biological neurons and the equivalent circuit
In a crude picture, a biological neuron consists of a soma endowed with neurites
(dendrites and axons). The dendrite usually collects information from within a
volume, where it makes connections to axons of other neurons. Axons may be seen
as the output channel of a neuron. Connections between different neurons are made
through special structures, called synapses (see below).
The quantity which describes the state of our model neurons corresponds to the
biophysical membrane potential. The membrane potential of a cortical neuron is
defined as the difference between the intracellular and the extracellular potential
at a certain time. This difference exists because a cell membrane separates the
cytoplasm from ionic solutions in extracellular space. Usually, these solutions differ
in their respective concentrations of ionic charges.
If no signaling takes place, and the neuron's membrane potential remains constant
over time, then the membrane potential is said to be at its resting value. Because
the extracellular potential is, by convention, defined as zero, the neuron's inside
resting potential usually falls into the range between -70 mV and -60 mV.
If we had a passive membrane, the resting potential of an ionic species would be
determined by two counterbalancing forces. On the one hand, a chemical driving
force (which depends on the concentration gradient across the membrane) would
act to make ion concentrations on either side of the cell membrane homogeneous
by means of diffusion. Diffusion takes place through ion-specific resting membrane
channels. These channels are always open, irrespective of the current value of
the membrane potential. On the other hand, ions "see" an electrical driving force
which would counteract diffusion. This electrical force depends on the degree of
polarization of the cell membrane. Summarizing:

ion flux = membrane conductance x (electrical driving force + chemical driving force)
The neuronal resting potential actually derives from a dynamical equilibrium, since
active (i.e. energy-consuming) processes keep the ionic gradients from reaching their
thermodynamical equilibrium. For example, the sodium-potassium pump maintains
the electrochemical gradients of Na+ and K+ by hydrolysis of one ATP molecule[1]
to export three Na+ cations from, and to import two K+ cations into, the neuron.
The effect is a decrease of the membrane potential compared to its value at ther-
modynamical equilibrium (i.e. the absence of ionic gradients).
In a dynamic scenario a neuron can be modeled with an electrical circuit, called the
equivalent circuit. In the equivalent circuit, ion channels are represented by resistors
and conductors, respectively, corresponding to electrical driving forces. Concentra-
tion gradients (i.e. the chemical driving forces) are represented by fictive batteries.
An ion channel is described by a conductor in series with a battery. Finally, the
polarization of the cell membrane (which is made up of two layers of phospholipid[2]
molecules) is described by a capacitor.
1.2 The membrane equation of a passive neuron
In the absence of any concentration gradient (i.e. no diffusion takes place), the
current through a channel is described by Ohm's law: I = g Vm, where g is
the total conductance (= 1/resistance R) of all channels of a specific type (g =
number of channels x conductance of an individual channel), and Vm is the membrane
potential. In the presence of a concentration gradient, however, the corresponding
chemical driving force must be taken into account in addition. In the equivalent
circuit, the chemical driving force for each ion is represented by a battery, whose
electromotive force is independent of the number of ion channels, but depends
only on the concentration gradient. Thus, the battery's electromotive force[3] is
given by the Nernst potential (i.e. the equilibrium potential) E for that ion[4], which
is proportional to the logarithm of its inside-to-outside concentration ratio.
Electrical and chemical driving forces act in opposed directions, i.e. I = g (Vm - E).
The negative sign is defined by convention. Consequently, the electrochemical
driving force, or driving potential, across the channel is represented by the term
Vm - E.
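As a numerical illustration, the Nernst potential and the resulting driving force can be evaluated directly. The concentrations below are the classic squid-axon values for K+ at about 20 degrees C, and the conductance and membrane potential are arbitrary assumptions; none of these numbers is taken from the thesis itself.

```python
import math

# Nernst (equilibrium) potential: E = (R*T)/(z*F) * ln([ion]_out / [ion]_in)
R = 8.314        # gas constant [J/(mol*K)]
F = 96485.0      # Faraday constant [C/mol]
T = 293.0        # temperature [K] (~20 deg C)
z = 1            # valence of K+
K_out, K_in = 20.0, 400.0    # K+ concentrations [mM], squid-axon values

E_K = (R * T) / (z * F) * math.log(K_out / K_in)   # ~ -0.076 V, i.e. ~ -75 mV

# electrochemical driving force and channel current: I = g * (Vm - E)
g = 10e-9        # lumped K+ conductance [S], assumed for illustration
Vm = -0.065      # membrane potential [V]
I_K = g * (Vm - E_K)   # Vm lies above E_K, so the net K+ current is outward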
The ionic charges Q which accumulate on the inside and the outside of the cell
membrane give rise to a potential difference Vm over the cell membrane: Q = C Vm.
If we assume the capacitance C to be constant over the cell membrane, then we obtain
for some resting channel (by calculating I = dQ/dt)
[1] Adenosine triphosphate (ATP) is an important energy source for biological processes.
[2] Any of numerous lipids (such as lecithins and phosphatidylethanolamines) in which phos-
phoric acid as well as a fatty acid is esterified to glycerol, and which are found in all living
cells, and in the bilayers of plasma membranes.
[3] Something that moves or tends to move electricity: the potential difference derived
from an electrical source per unit quantity of electricity passing through the source (such
as a cell or generator).
[4] e.g. E_K+ = -75 mV, E_Na+ = +55 mV, E_Cl- = -60 mV
Figure 1.1: Equivalent circuit. (A) A sketch of the lipid bilayer membrane of a
passive neuron with embedded resting ion channels. (B) Corresponding effective equiv-
alent circuit, as described by equation 1.2. (Reprinted without any permission from
[Hille, 1992, Koch, 1999])
C dVm(t)/dt = g (E - Vm)     (1.1)
In reality, no (capacitive) current flows over the lipid bilayer of the
membrane, since it is an insulator. Rather, the last equation should be interpreted
as a redistribution of ionic charges over time, as a consequence of ion flux through the
channels embedded in the membrane[5]: rapid changes in voltage are associated with
large capacitive currents.
The complete equivalent circuit consists of various channel types (e.g. potassium,
sodium, and chloride), two extra current sources for the Na+-K+ pump, and a
capacitor. However, we can lump together these channels and the ionic pump into an
effective conductance, called the leakage conductance or decay constant g_leak. This
follows from the fact that the conductance of the individual resting channels does
not change as a function of membrane potential. Furthermore, the electrogenesis[6]
due to the Na+-K+ pump is small, so it may be neglected. Then, we can express
the passive properties of the neuron as a deviation from its resting potential V_rest:

C dVm(t)/dt = g_leak (V_rest - Vm)     (1.2)
Equation 1.2 describes the effective equivalent circuit shown in figure 1.1. The
"reaction time" of the passive neuron is characterized by the time constant tau =
C/g_leak, which describes the time to charge an isopotential patch of membrane to
the 1/e part of its steady-state value. Imagine that we initialize Vm with some
value and subsequently monitor Vm over time. If g_leak is big, then Vm will quickly
approach V_rest. This endows the neuron with a high temporal resolution (tau from
1 to 2 msec as typical values). For relatively slow neurons, tau may be as big as
100 msec. Moreover, it is known that the conductivity of the potassium resting
channel is not fixed, but may be modulated by so-called neuromodulators, such as
acetylcholine (ACh).
[5] The presence of resting ion channels increases a neuron's membrane conductance by
a factor of about 40000 compared to the case without channels in the cell membrane.
[6] The production of electrical activity in living tissue.
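Equation 1.2 and the time-constant behavior can be checked with a few lines of forward-Euler integration. All parameter values here are arbitrary illustrations, not values used in the thesis; the point is only that the deviation from rest decays to 1/e of its initial value after one tau.

```python
import math

# Forward-Euler integration of the passive membrane equation (1.2):
#   C * dVm/dt = g_leak * (V_rest - Vm)
C = 1.0          # membrane capacitance [nF], assumed
g_leak = 0.5     # leak conductance [uS], assumed -> tau = C/g_leak = 2 ms
V_rest = -70.0   # resting potential [mV]
tau = C / g_leak

dt = 0.001                       # time step [ms]
Vm = V_rest + 10.0               # start 10 mV above rest
for _ in range(int(tau / dt)):   # integrate for exactly one time constant
    Vm += dt * (g_leak / C) * (V_rest - Vm)

# after one tau the deviation from rest has decayed to 1/e of its
# initial value: 10 mV * exp(-1) ~ 3.68 mV
deviation = Vm - V_rest
```

Doubling g_leak halves tau, which is the sense in which a large leak conductance gives the neuron a high temporal resolution.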
1.2.1 Synaptic input
Neurons are interconnected via synapses, of which two general types are known.
Chemical synapses separate the cytoplasm of two neurons by a synaptic cleft, and
transmission of information proceeds in one distinct direction. Hence, we have an
input or presynaptic site, and an output or postsynaptic site. Electrical synapses
(or gap junctions) on the other hand establish cytoplasmic continuity between pre-
and postsynaptic cells. They usually operate bidirectionally. Gap junctions pro-
vide a "high-conductance pathway" for the instantaneous exchange of ionic cur-
rents between adjacent neurons. Additionally, the strength of coupling between
gap junction channels can be modulated by, for example, the intracellular con-
centration of calcium ions, or the cytoplasmic pH value [Bennett & Spray, 1987].
Most gap-junction channels close whenever the pH is low or the Ca2+ concen-
tration is high. Another source for modulating the conductivity of gap-junction
channels is neurotransmitters released by nearby chemical synapses. Moreover,
there exist specialized gap junctions endowed with voltage-dependent channels
[Edwards et al., 1991, Edwards et al., 1998]. These so-called rectifying gap junc-
tions conduct depolarizing currents only in one direction.
Transmission of signals by chemical synapses involves a significant delay (called la-
tency) of at least 0.3 msec from the pre- to the postsynaptic site, but longer delays
which range from 1 to 5 msec are more typical. The reason for the latency is that
an electrical signal which arrives at a chemical synapse must be converted into a
chemical messenger, consisting of a neurotransmitter (transmitting step). The neu-
rotransmitter subsequently diffuses 20 to 40 nm across the synaptic cleft. On the
postsynaptic site, neurotransmitters act like a "key" which must fit into the "lock"
(the receptor molecule) in order to open or close the associated "gate" (the ionic
channel). The resulting ion flux alters the membrane conductance and potential of
the postsynaptic cell. The value by which the membrane potential changes is called
the postsynaptic potential (PSP).
Postsynaptic potentials (PSPs) come in two different flavors: excitatory (EPSPs)
and inhibitory (IPSPs). IPSPs increase the polarization relative to rest. This
so-called hyperpolarization thus leads to a more negative membrane potential. Con-
versely, depolarization refers to a more positive membrane potential as a consequence
of EPSPs impinging on the neuron.
If depolarizing current pulses manage to increase the membrane potential over
some threshold value, then voltage-gated ion channels will open. This triggers a
cascade of biophysical events in a somatic zone called the axon hillock, culminat-
ing in the generation of binary events called action potentials or spikes. The spikes
then travel down the axon, and their dissipation is prevented by active regeneration
mechanisms embedded in the axon. This corresponds to the "output" information
or response delivered to other neurons. A confirmation signal for active dendritic
synapses is provided by means of backpropagating action potentials, which may
induce long-term potentiation if pre- and postsynaptic cells fire within some time
window[7], or long-term depression if pre- and postsynaptic cells fire asynchronously
[Holmgren & Zilberter, 2001] (see also footnote 1 on page 162).
Below a neuron's firing threshold, the membrane responds only passively, with
so-called electrotonic potentials[8].
A presynaptic cell has to fire before information transmission with chemical synapses
can take place. This stands in contrast to most electrical synapses, which can trans-

[7] Long-term potentiation refers to a strengthening of the synaptic weight, whereas long-
term depression leads to a reduction in the synapse's effectivity. This essentially corre-
sponds to Hebbian and anti-Hebbian learning, respectively [Hebb, 1949].
[8] The spread of electrical activity through living tissue or cells in the absence of repeated
action potentials.
