Analyse Master Cours de Francis Clarke
16 pages
English

Description

Level: University, Master's.
Chapter 7 (Hilbert spaces) from Analyse Master 1 : Cours de Francis Clarke (2011).

Extract

Chapter 7 Hilbert spaces
Analyse Master 1 : Cours de Francis Clarke (2011)
An inner product on $X$ refers to a bilinear mapping $\langle\,\cdot\,,\,\cdot\,\rangle_X : X \times X \to \mathbb{R}$ (that is, linear separately in each variable) such that
$$\langle x, y\rangle_X = \langle y, x\rangle_X \quad \forall\, x, y \in X.$$
A Banach space $X$ is said to be a Hilbert space if it admits an inner product satisfying
$$\|x\|^2 = \langle x, x\rangle_X \quad \forall\, x \in X.$$
Canonical cases of Hilbert spaces include $\mathbb{R}^n$, $L^2(\Omega)$, and $\ell^2$. We have, for example,
$$\langle u, v\rangle_{\mathbb{R}^n} = u \cdot v, \qquad \langle f, g\rangle_{L^2(\Omega)} = \int_\Omega f(x)\,g(x)\,dx.$$
Some rather remarkable consequences follow just from the existence of this scalar product. We suspect that the reader has seen the more immediate ones; we review them now, rather expeditiously.
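As a quick concrete check of the definition (an added illustration, not part of the original text), take $u = (1,2)$ and $v = (3,-1)$ in $\mathbb{R}^2$:
$$\langle u, v\rangle_{\mathbb{R}^2} = (1)(3) + (2)(-1) = 1, \qquad \langle u, u\rangle_{\mathbb{R}^2} = 1^2 + 2^2 = 5 = \|u\|^2,$$
so the Euclidean norm is indeed the one induced by the inner product, as the definition of a Hilbert space requires.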
7.1 Basic properties
The first conclusion below is called the Cauchy-Schwarz inequality, and the second is known as the parallelogram identity.
7.1 Proposition. Let $X$ be a Hilbert space, and let $x, y$ be points in $X$. Then
$$\langle x, y\rangle_X \le \|x\|\,\|y\| \quad\text{and}\quad \Big\|\frac{x+y}{2}\Big\|^2 + \Big\|\frac{x-y}{2}\Big\|^2 = \frac{1}{2}\Big(\|x\|^2 + \|y\|^2\Big).$$

Proof. We may suppose $x, y \ne 0$. For any $\lambda > 0$, we have
$$0 \le \|x - \lambda y\|^2 = \langle x - \lambda y,\, x - \lambda y\rangle_X = \|x\|^2 - 2\lambda\,\langle x, y\rangle_X + \lambda^2\|y\|^2.$$
This yields $2\langle x, y\rangle_X \le \|x\|^2/\lambda + \lambda\|y\|^2$. Putting $\lambda = \|x\|/\|y\|$ gives the required inequality. The identity is proved by writing the norms in terms of the inner product and expanding. $\square$

It follows that the square of the norm is strictly convex, in the sense that
$$\Big\|\frac{x+y}{2}\Big\|^2 < \frac{1}{2}\Big(\|x\|^2 + \|y\|^2\Big) \quad\text{whenever } x \ne y.$$

7.2 Theorem. Any Hilbert space is uniformly convex, and therefore reflexive. To every element $\zeta$ in $X^*$ there corresponds a unique $u \in X$ such that
$$\zeta(x) = \langle u, x\rangle_X \ \ \forall\, x \in X, \qquad \|\zeta\|_* = \|u\|.$$
The mapping $\zeta \mapsto u$ is an isometry from $X^*$ to $X$. It is natural, therefore, to identify the dual $X^*$ of a Hilbert space $X$ with the space itself, which we usually do.

Proof. Let $\varepsilon > 0$ and $x, y \in B$ satisfy $\|x - y\| > \varepsilon$. By the parallelogram identity we have
$$\Big\|\frac{x+y}{2}\Big\|^2 = \frac{1}{2}\Big(\|x\|^2 + \|y\|^2\Big) - \Big\|\frac{x-y}{2}\Big\|^2 < 1 - \frac{\varepsilon^2}{4},$$
so that $\|(x+y)/2\| < 1 - \delta$, where $\delta := 1 - \big[\,1 - \varepsilon^2/4\,\big]^{1/2}$. This confirms that $X$ is uniformly convex.

Let us now consider the linear operator $T : X \to X^*$ defined by
$$\langle Tx, y\rangle = \langle x, y\rangle_X \quad \forall\, y \in X.$$
We deduce $\|Tx\|_* = \|x\|$, by the Cauchy-Schwarz inequality. Thus $T$ is an isometry, and $T(X)$ is a closed subspace of $X^*$. To conclude the proof of the theorem, it suffices to prove that $T(X)$ is dense in $X^*$. Assume the contrary; then there exists $\theta \in X^{**}$ different from $0$ such that $\langle \theta, T(X)\rangle = 0$. Because $X$ is reflexive, we may write $\theta = J\bar{x}$ for some point $\bar{x} \in X$. Then
$$0 = \langle \theta, Tx\rangle = \langle J\bar{x}, Tx\rangle = \langle Tx, \bar{x}\rangle = \langle x, \bar{x}\rangle_X \quad \forall\, x \in X,$$
whence $\bar{x} = 0$ and $\theta = 0$, a contradiction. $\square$
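To see Prop. 7.1 at work with concrete numbers (an added example), take $x = (3,4)$ and $y = (4,3)$ in $\mathbb{R}^2$. Then $\langle x, y\rangle_{\mathbb{R}^2} = 24 \le 25 = \|x\|\,\|y\|$, while
$$\Big\|\frac{x+y}{2}\Big\|^2 + \Big\|\frac{x-y}{2}\Big\|^2 = \Big\|\Big(\tfrac{7}{2},\tfrac{7}{2}\Big)\Big\|^2 + \Big\|\Big(-\tfrac{1}{2},\tfrac{1}{2}\Big)\Big\|^2 = \frac{49}{2} + \frac{1}{2} = 25 = \frac{1}{2}\Big(\|x\|^2 + \|y\|^2\Big).$$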
7.3 Proposition. Let $C$ be a closed, convex, nonempty subset of a Hilbert space $X$, and let $x \in X$. Then there exists a unique $u \in C$ satisfying $d_C(x) = \|x - u\|$.

Proof. The existence of a closest point $u$ is known (Exer. 5.51). The uniqueness follows easily from the strict convexity of the square norm. $\square$

The point $u$ is called the projection of $x$ onto $C$, denoted $\operatorname{proj}_C(x)$. We proceed to characterize it geometrically. (Since we identify $X^*$ with $X$, the normal cone $N_C(u)$ is viewed as lying in the space $X$ itself.)

7.4 Proposition. Let $C$ be a closed, convex, nonempty subset of a Hilbert space $X$, and let $x \in X$. Then
$$u = \operatorname{proj}_C(x) \iff x - u \in N_C(u) \iff \langle x - u,\, y - u\rangle_X \le 0 \ \ \forall\, y \in C.$$

Proof. Let us first consider $u = \operatorname{proj}_C(x)$. Let $y \in C$ and $0 < t < 1$. Since $C$ is convex, we have
$$\|x - u\|^2 \le \|x - (1-t)u - ty\|^2 = \|(x - u) + t(u - y)\|^2.$$
We expand the right side in order to obtain
$$2t\,\langle x - u,\, y - u\rangle_X \le t^2\,\|y - u\|^2.$$
Now divide by $t$, and then let $t$ decrease to $0$; we arrive at $x - u \in N_C(u)$.

Conversely, let $x - u \in N_C(u)$. Then for all $y \in C$,
$$\|x - u\|^2 - \|y - x\|^2 = 2\,\langle x - u,\, y - u\rangle_X - \|u - y\|^2 \le 0,$$
whence $u = \operatorname{proj}_C(x)$. The last condition in the three-way equivalence is simply a restatement of the fact that $x - u$ is a normal vector (Prop. 2.9). $\square$

7.5 Exercise. Let $C$ be as in Prop. 7.4. Then
$$\|\operatorname{proj}_C(x) - \operatorname{proj}_C(y)\| \le \|x - y\| \quad \forall\, x, y \in X. \qquad\square$$
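As an added illustration of Prop. 7.4, let $C$ be the closed unit ball of $X$ and let $\|x\| > 1$; we claim that $\operatorname{proj}_C(x) = u := x/\|x\|$. Indeed, for every $y \in C$,
$$\langle x - u,\, y - u\rangle_X = \Big(1 - \frac{1}{\|x\|}\Big)\Big(\langle x, y\rangle_X - \|x\|\Big) \le \Big(1 - \frac{1}{\|x\|}\Big)\big(\|x\|\,\|y\| - \|x\|\big) \le 0,$$
by the Cauchy-Schwarz inequality and $\|y\| \le 1$; this is precisely the characterization of the projection.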
Another special feature of Hilbert spaces is that subspaces admit complements. For a subset $A$ of $X$, we define
$$A^\perp = \big\{\, \zeta \in X : \langle \zeta, x\rangle_X = 0 \ \ \forall\, x \in A \,\big\}.$$
The closed subspace $A^\perp$ (often pronounced "$A$ perp") is the orthogonal to $A$.

7.6 Proposition. Let $M$ be a closed subspace of a Hilbert space $X$. Then every point $x \in X$ admits a unique representation $x = m + \mu$ where $m \in M$ and $\mu \in M^\perp$; the point $m$ coincides with $\operatorname{proj}_M(x)$.

Proof. Let $x \in X$, and set $m = \operatorname{proj}_M(x)$. By Prop. 7.4, we have
$$\langle x - m,\, y - m\rangle_X \le 0 \quad \forall\, y \in M.$$
Since $M$ is a subspace, we obtain $\langle x - m, y\rangle_X = 0\ \ \forall\, y \in M$; that is, $x - m \in M^\perp$. We have written $x$ in the desired fashion. If $x = m' + \mu'$ is another such decomposition, then $m - m' = \mu' - \mu \in M \cap M^\perp = \{0\}$, whence $m = m'$ and $\mu = \mu'$. The representation is thereby unique. $\square$

7.7 Exercise. In the context of Prop. 7.6, show that the mapping $\operatorname{proj}_M : X \to X$ belongs to $L_C(X, X)$. $\square$

Orthonormal sets. Two points $u$ and $v$ in a Hilbert space $X$ are orthogonal if $\langle u, v\rangle_X = 0$. A set $\{u_\alpha : \alpha \in A\}$ in $X$ is said to be orthonormal if
$$\|u_\alpha\| = 1 \ \ \forall\,\alpha, \qquad \langle u_\alpha, u_\beta\rangle_X = 0 \ \text{ when } \alpha \ne \beta.$$
The following result characterizes projection onto finite-dimensional subspaces.

7.8 Proposition. Let $X$ be a Hilbert space, and let $M$ be a subspace of $X$ generated by a finite orthonormal set $\{u_i : i = 1, \dots, n\}$. Then
$$\operatorname{proj}_M(x) = \sum_{i=1}^{n} \langle x, u_i\rangle_X\, u_i, \qquad x - \operatorname{proj}_M(x) \in M^\perp,$$
and
$$d_M^2(x) = \|x\|^2 - \sum_{i=1}^{n} \langle x, u_i\rangle_X^2.$$

Proof. By expressing the norm in terms of the inner product and expanding, we find
$$\Big\| x - \sum_{i=1}^{n} \lambda_i u_i \Big\|^2 = \|x\|^2 - 2\sum_{i=1}^{n} \lambda_i\,\langle x, u_i\rangle_X + \sum_{i=1}^{n} \lambda_i^2.$$
The right side is a convex function of $\lambda = (\lambda_1, \lambda_2, \dots, \lambda_n)$ which attains a global minimum where its gradient vanishes, that is, for $\lambda_i = \langle x, u_i\rangle_X$ $(i = 1, \dots, n)$. The corresponding linear combination of the $u_i$ is therefore $\operatorname{proj}_M(x)$. Explicit calculation yields $\langle x - \operatorname{proj}_M(x),\, u_i\rangle_X = 0\ \ \forall\, i$, so that $x - \operatorname{proj}_M(x) \in M^\perp$. Finally, the expression for $d_M^2(x)$ follows from expanding $\|x - \operatorname{proj}_M(x)\|^2$. $\square$
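A concrete instance of Prop. 7.8 (added for illustration): in $\mathbb{R}^3$, let $M$ be generated by the orthonormal set $u_1 = (1,0,0)$, $u_2 = (0,1,0)$, and let $x = (3,4,12)$. Then
$$\operatorname{proj}_M(x) = 3\,u_1 + 4\,u_2 = (3,4,0), \qquad d_M^2(x) = \|x\|^2 - (3^2 + 4^2) = 169 - 25 = 144 = \|(0,0,12)\|^2,$$
in agreement with both formulas of the proposition.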
Given an orthonormal set $\{u_\alpha : \alpha \in A\}$ in a Hilbert space $X$, and a point $x \in X$, we define $\hat{x}_\alpha = \langle x, u_\alpha\rangle_X$. These are the Fourier coefficients of $x$ with respect to the orthonormal set.

7.9 Theorem. (Bessel) Let $\{u_\alpha : \alpha \in A\}$ be an orthonormal set. Then the set $A(x) = \{\alpha \in A : \hat{x}_\alpha \ne 0\}$ is finite or countable, and we have
$$\sum_{\alpha \in A} \hat{x}_\alpha^2 = \sum_{\alpha \in A(x)} \hat{x}_\alpha^2 \le \|x\|^2 \quad \forall\, x \in X.$$

Proof. The sum on the left is defined to be
$$\sup\Big\{ \sum_{\alpha \in F} \hat{x}_\alpha^2 : F \subset A,\ F \text{ finite} \Big\},$$
so the inequality follows from Prop. 7.8. We also deduce that for each $i$, the set of indices $\alpha$ for which $|\hat{x}_\alpha| > 1/i$ is finite, which implies that $A(x)$ is countable. $\square$

A maximal orthonormal set $\{u_\alpha : \alpha \in A\}$ is called a Hilbert basis for $X$. One may use Zorn's lemma to prove that a Hilbert basis exists. Theorem 7.9 implies that the expression $\sum_{\alpha \in A} \hat{x}_\alpha u_\alpha$ is absolutely convergent; it defines therefore an element of $X$, by completeness. The next result shows that when $\{u_\alpha : \alpha \in A\}$ is a Hilbert basis, we can recover $x$ from its Fourier coefficients.

A Hilbert space isomorphism between $X$ and $Y$ refers to an isometry $T$ that also preserves the inner product: $\langle Tx, Tu\rangle_Y = \langle x, u\rangle_X\ \ \forall\, x, u \in X$.

7.10 Theorem. (Parseval) Let $\{u_\alpha : \alpha \in A\}$ be a Hilbert basis for the Hilbert space $X$. The set of (finite) linear combinations of the $u_\alpha$ is a dense subset of $X$. We have
$$x = \sum_{\alpha \in A} \hat{x}_\alpha u_\alpha, \qquad \|x\|^2 = \sum_{\alpha \in A} \hat{x}_\alpha^2, \qquad \langle x, y\rangle_X = \sum_{\alpha \in A} \hat{x}_\alpha\, \hat{y}_\alpha \quad \forall\, x, y \in X.$$
$X$ is finite dimensional if and only if $X$ admits a finite Hilbert basis, in which case $X$ is isomorphic as a Hilbert space to $\mathbb{R}^n$ (for some $n$). When $X$ is infinite dimensional, then $X$ is separable if and only if $X$ admits a countable Hilbert basis; in this case, $X$ is isomorphic as a Hilbert space to $\ell^2$.

Proof. We merely sketch the proof of the theorem. The density assertion is that the closed subspace generated by the $u_\alpha$ coincides with $X$. This is easily proved by contradiction, using Prop. 7.6. As mentioned above, the sum $\sum_{\alpha \in A} \hat{x}_\alpha u_\alpha$ defines an element $u \in X$; it satisfies $\langle u - x,\, u_\beta\rangle_X = 0\ \ \forall\, \beta \in A$, since this holds when $u$ is replaced by any of the finite sums $\sum_{\alpha \in F} \hat{x}_\alpha u_\alpha$ that converge to it. It follows that $u = x$. Similarly, the expression for $\langle x, y\rangle_X$ is obtained by passing to the limit in the appropriate finite sums that converge respectively to $x$ and $y$.

The assertion in the finite dimensional case follows from Theorem 1.21. If $X$ admits a countable Hilbert basis, then the set of (finite) linear combinations of the $u_\alpha$ with rational coefficients defines a countable dense subset of $X$, which is therefore separable. Finally, suppose that $X$ is separable. Then any orthonormal set is countable, since the distance between any two distinct elements of the set is $\sqrt{2}$. Let $\{u_i\}$ be a sequence constituting a Hilbert basis. The preceding conclusions then show that
$$x \mapsto \big(\langle x, u_1\rangle_X,\ \langle x, u_2\rangle_X,\ \dots\big)$$
defines an isomorphism from $X$ to $\ell^2$. $\square$
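As a simple added illustration of Parseval's identity, take $X = \ell^2$ with its standard Hilbert basis $\{e_i\}_{i \ge 1}$ and the point $x = (1, \tfrac{1}{2}, \tfrac{1}{4}, \dots)$, so that $\hat{x}_i = 2^{-(i-1)}$. Then
$$\|x\|^2 = \sum_{i=1}^{\infty} \hat{x}_i^2 = \sum_{i=1}^{\infty} 4^{-(i-1)} = \frac{1}{1 - 1/4} = \frac{4}{3}.$$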
7.11 Exercise. The goal is to prove the following result, the Lax-Milgram theorem. (It is a tool designed to solve certain linear equations in Hilbert space.)

Theorem. Let $b(u, v)$ be a bilinear form on a Hilbert space $X$. We suppose that $b$ is continuous, and coercive in the following sense: there exist $c > 0$, $C > 0$ such that
$$|b(u, v)| \le C\,\|u\|\,\|v\|, \qquad b(u, u) \ge c\,\|u\|^2 \quad \forall\, u, v \in X.$$
Then for any $\varphi \in X$, there exists a unique $u_\varphi \in X$ such that
$$b(u_\varphi, v) = \langle \varphi, v\rangle_X \quad \forall\, v \in X.$$
If $b$ is symmetric (that is, if $b(u, v) = b(v, u)\ \ \forall\, u, v \in X$), then $u_\varphi$ may be characterized as the unique point in $X$ minimizing the function $u \mapsto (1/2)\,b(u, u) - \langle \varphi, u\rangle_X$.

a) Show that the map $u \mapsto Tu := b(u, \cdot)$ defines an element of $L_C(X, X)$ satisfying $\|Tu\| \ge c\,\|u\|$.

b) Prove that $TX$ is closed.

c) Prove that $T$ is onto, and then deduce the existence and uniqueness of $u_\varphi$.

d) Now let $b$ be symmetric, and define $f(u) = b(u, u)/2$. Show that $f$ is strictly convex, and that $f'(u; v) = b(u, v)\ \ \forall\, u, v$.

e) Prove that the function $u \mapsto (1/2)\,b(u, u) - \langle \varphi, u\rangle_X$ attains a unique minimum over $X$ at a point $u_\varphi$. Write Fermat's Rule to conclude. $\square$

A crucial property of Hilbert spaces is that their norms are smooth.

7.12 Exercise. Let $X$ be a Hilbert space. Prove that the squared norm function
$$\theta : X \to \mathbb{R}, \qquad x \mapsto \theta(x) = \|x\|^2$$
is continuously differentiable, with $\theta'(x) = 2x$. Deduce that $X$ admits a bump function belonging to $C_b^{1,1}(X)$ (see Exer. 5.23). $\square$

It follows from the above that, in a Hilbert space, the norm is differentiable. (One must hear, sotto voce, the words "except at the origin" in such a sentence; a norm can never be differentiable at 0, because of positive homogeneity.) In a general Banach space, however, this is not necessarily the case. In fact, a space may admit no equivalent norm which is differentiable.
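To make the parenthetical remark concrete (an added computation, using the conclusion of Exer. 7.12): for $x \ne 0$, writing $\|x\| = \theta(x)^{1/2}$ and applying the chain rule with $\theta'(x) = 2x$ gives, under the identification of $X^*$ with $X$,
$$\big(\|\cdot\|\big)'(x) = \frac{\theta'(x)}{2\,\theta(x)^{1/2}} = \frac{x}{\|x\|},$$
a unit vector; at the origin, positive homogeneity rules out differentiability, since $t \mapsto \|tv\| = |t|\,\|v\|$ is not differentiable at $t = 0$ when $v \ne 0$.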
7.2 The proximal subdifferential
We proceed to extend in a local fashion, to functions that are not necessarily convex, the notion of subgradient. Let $X$ be a normed space.

7.13 Definition. Let $f : X \to \mathbb{R} \cup \{+\infty\}$ be given, with $x \in \operatorname{dom} f$. We say that $\zeta \in X^*$ is a proximal subgradient of $f$ at $x$ if, for some $\sigma = \sigma(x, \zeta) \ge 0$ and some neighborhood $V = V(x, \zeta)$ of $x$, we have
$$f(y) - f(x) + \sigma\,\|y - x\|^2 \ge \langle \zeta,\, y - x\rangle \quad \forall\, y \in V.$$
The proximal subdifferential of $f$ at $x$, denoted $\partial_P f(x)$, is the set of all such $\zeta$.
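As a simple added illustration of the definition, take $X = \mathbb{R}$ and $f(x) = |x|$ at $x = 0$. Any $\zeta \in [-1,1]$ satisfies
$$f(y) - f(0) + \sigma\,|y|^2 = |y| + \sigma\,|y|^2 \ge \zeta\,y \quad \text{for all } y,$$
even with $\sigma = 0$; conversely, if $|\zeta| > 1$, taking $y = t\,\operatorname{sign}(\zeta)$ with $t > 0$ reduces the required inequality to $1 + \sigma t \ge |\zeta|$, which fails for all $t$ sufficiently small. Hence $\partial_P f(0) = [-1,1]$, in agreement with Prop. 7.14 below.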
The above is very reminiscent of the subdifferential $\partial f(x)$ of convex analysis: when $\zeta \in \partial f(x)$, the inequality above holds globally, and for $\sigma = 0$. It is decidedly not the case that $f$ is assumed convex here, but let us note that if it happens to be, we obtain the same construct as before:

7.14 Proposition. Let $f$ be convex. Then $\partial_P f(x) = \partial f(x)$.

Proof. Clearly it suffices to show that an element $\zeta \in \partial_P f(x)$ belongs to $\partial f(x)$. To do so, note that for such a $\zeta$, the convex function
$$y \mapsto g(y) = f(y) + \sigma\,\|y - x\|^2 - \langle \zeta, y\rangle$$
attains a local minimum at $y = x$ (by definition of proximal subgradient). Thus $0 \in \partial g(x)$, which, by the Sum Rule (Theorem 4.11) and Exer. 7.12, yields $\zeta \in \partial f(x)$, as required. $\square$

Let us turn now to the relation between proximal subgradients and derivatives. Suppose that $f$ is Gâteaux differentiable at $x$. We claim that the only possible element of $\partial_P f(x)$ is $f'_G(x)$. To see this, fix any $v \in X$. Observe that we may set $y = x + tv$ in the proximal subgradient inequality to obtain
$$\frac{f(x + tv) - f(x)}{t} \ge \langle \zeta, v\rangle - \sigma\, t\,\|v\|^2 \quad \text{for all } t > 0 \text{ sufficiently small}.$$
Passing to the limit as $t \downarrow 0$, this yields $\langle f'_G(x), v\rangle \ge \langle \zeta, v\rangle$. Since $v$ is arbitrary, we must have $\zeta = f'_G(x)$. We have proved:

7.15 Proposition. If $f$ is Gâteaux differentiable at $x$, then $\partial_P f(x) \subset \{ f'_G(x) \}$.

7.16 Example. The last proposition may fail to hold with equality; in general, $\partial_P f(x)$ may be empty even when $f'_G(x)$ exists. To develop more insight into this question, bear in mind that the proximal subdifferential is philosophically linked to (local) convexity, as mentioned above. At points where $f$ has a "concave corner", there will be no proximal subgradients. A simple example is provided by the function $f(x) = -\|x\|$. If $\zeta \in \partial_P f(0)$, then, by definition,
$$-\|y\| - 0 + \sigma\,\|y\|^2 \ge \langle \zeta, y\rangle \quad \text{for all } y \text{ near } 0.$$
Fix any point $v \in X$, and substitute $y = tv$ in the inequality above, for $t > 0$ sufficiently small. Dividing across by $t$ and then letting $t \downarrow 0$ leads to
$$\langle \zeta, v\rangle \le -\|v\| \quad \forall\, v \in X,$$
a condition that no $\zeta$ can satisfy. Thus, we have $\partial_P f(0) = \emptyset$.

The proximal subdifferential $\partial_P f(x)$ can be empty even when $f$ is continuously differentiable. Consider the function $f(x) = -\|x\|^{3/2}$, which is $C^1$, with derivative $0$ at $0$. Yet, we claim, $\partial_P f(0) = \emptyset$. To see this, let $\zeta \in \partial_P f(0)$. By Prop. 7.15, $\zeta$ must be $0$. But then the proximal subgradient inequality becomes
$$-\|y\|^{3/2} - 0 + \sigma\,\|y\|^2 \ge 0 \quad \text{for all } y \text{ near } 0.$$
We let the reader verify that this cannot hold; thus, $\partial_P f(0) = \emptyset$ once again. $\square$

It might be thought that a subdifferential which can be empty is not going to be of much use; for example, its calculus might be very poor. In fact, the possible emptiness of $\partial_P f$ is a positive feature in some contexts, as in characterizing certain properties (we shall see this in connection with viscosity solutions later). And the calculus of $\partial_P f$ is complete and rich (but fuzzy, in a way that will be made clear).

It is evident that if $f$ has a (finite) local minimum at $x$, then $\partial_P f(x)$ is nonempty, since we have $0 \in \partial_P f(x)$ (Fermat's Rule). This simple observation will be the key to proving the existence (for certain Banach spaces) of a dense set of points at which $\partial_P f$ is nonempty. First, we require a simple rule in proximal calculus:

7.17 Proposition. Let $x \in \operatorname{dom} f$, and let $g : X \to \mathbb{R}$ be differentiable in a neighborhood of $x$, with $g'$ Lipschitz near $x$. Then
$$\partial_P (f + g)(x) = \partial_P f(x) + g'(x).$$

Proof. We begin with a lemma.

Lemma. There exist $\delta > 0$ and $M$ such that
$$u \in B(x, \delta) \implies \big| g(u) - g(x) - \langle g'(x),\, u - x\rangle \big| \le M\,\|u - x\|^2.$$

To see this, we invoke the Lipschitz hypothesis on $g'$ to find $\delta > 0$ and $M$ such that
$$y, z \in B(x, \delta) \implies \|g'(y) - g'(z)\| \le M\,\|y - z\|.$$
For any $u \in B(x, \delta)$, by the Mean Value theorem, there exists $z \in B(x, \delta)$ such that $g(u) = g(x) + \langle g'(z),\, u - x\rangle$. Then, by the Lipschitz condition for $g'$,
$$\big| g(u) - g(x) - \langle g'(x),\, u - x\rangle \big| = \big| \langle g'(z) - g'(x),\, u - x\rangle \big| \le M\,\|z - x\|\,\|u - x\| \le M\,\|u - x\|^2,$$
which proves the lemma.

Now let $\zeta \in \partial_P (f + g)(x)$. Then, for some $\sigma \ge 0$ and neighborhood $V$ of $x$, we have
$$f(y) + g(y) - f(x) - g(x) + \sigma\,\|y - x\|^2 \ge \langle \zeta,\, y - x\rangle \quad \forall\, y \in V.$$
It follows from the lemma that
$$f(y) - f(x) + (\sigma + M)\,\|y - x\|^2 \ge \langle \zeta - g'(x),\, y - x\rangle \quad \forall\, y \in V_\delta := V \cap B(x, \delta),$$
whence $\zeta - g'(x) \in \partial_P f(x)$. Conversely, if $\psi \in \partial_P f(x)$, then, for some $\sigma \ge 0$ and neighborhood $V$ of $x$,