Normal approximation for quasi associated random fields
Alexander Bulinski a,1,3   Charles Suquet b,2

a Department of Mathematics and Mechanics, Moscow State University, Moscow 119 899, Russia
b Statistique et Probabilités, FRE CNRS 2222, Bât. M2, U.F.R. de Mathématiques, University Lille 1, F-59655 Villeneuve d'Ascq Cedex, France

1 Corresponding author, e-mail: bulinski@mech.math.msu.su (A. V. Bulinski)
2 e-mail: Charles.Suquet@univ-lille1.fr
3 Partially supported by INTAS grant 99-01317 and RFBR grant 99-01-00112

AMS classification: Primary: 60F05
Abstract
For quasi associated random fields (comprising negatively and positively dependent fields) on $\mathbb{Z}^d$ we use Stein's method to establish the rate of normal approximation for partial sums taken over arbitrary finite subsets $U$ of $\mathbb{Z}^d$.
Key words: Random fields; Dependence conditions; Positive and negative association; Lindeberg function; CLT; Convergence rates; Maximum of partial sums.
1 Introduction

There are a number of interesting stochastic models described by means of families of random variables possessing properties of positive or negative dependence or their modifications. One can refer to the pioneering papers by Harris (1960), Lehmann (1966), Esary et al. (1967), Fortuin et al. (1971), Joag-Dev and Proschan (1983).

Definition 1 (Esary et al. (1967)) A finite collection $Y = (Y_1, \ldots, Y_n)$ of real valued random variables $Y_k$, $k = 1, \ldots, n$, is called associated or positively dependent if
$$\mathrm{Cov}\big(f(Y), g(Y)\big) \ge 0$$
for any coordinate-wise nondecreasing functions $f, g : \mathbb{R}^n \to \mathbb{R}$, whenever the covariance exists. A family of random variables is associated if this is valid for every finite sub-family.

The association and related concepts (e.g. positive quadrant dependence, etc.) were initially connected with reliability theory and mathematical statistics only. Percolation
theory and statistical mechanics, where one considers random variables "satisfying the FKG inequalities" (Fortuin et al. (1971)) implying the association, provide a different domain of applications of this notion.

Definition 2 (Joag-Dev and Proschan (1983)) Real valued random variables $Y_k$, $k = 1, \ldots, n$, and a family thereof, are called negatively dependent if, each time the covariance exists,
$$\mathrm{Cov}\big(f(Y_i,\, i \in I),\, g(Y_j,\, j \in J)\big) \le 0, \qquad (1)$$
for every pair of disjoint subsets $I$, $J$ of $\{1, \ldots, n\}$ and for any coordinate-wise nondecreasing functions $f : \mathbb{R}^I \to \mathbb{R}$, $g : \mathbb{R}^J \to \mathbb{R}$.

Sampling without replacement provides an example of negatively dependent random variables (see Joag-Dev and Proschan (1983) for this and other examples; a small numerical check is sketched after Definition 3 below). Newman (1984) calls a family of random variables weakly associated if it satisfies the requirement of Definition 2, but with reversed sign in the inequality (1). Evidently, associated families of random variables are weakly associated. Note also that any family of independent random variables is automatically associated and negatively dependent. Instead of the terms negative dependence and weak association one also uses negative association (NA) and positive association (PA).

It is worth mentioning that the concepts of mixing or positive (negative) dependence offer complementary approaches to the analysis of dependent random variables. The main advantage of dealing with positively or negatively dependent random fields is due to the fact that most of their properties are determined by the covariance structure, whereas the calculation of mixing coefficients is in general a nontrivial problem (we refer to the book on mixing by Doukhan (1994)).

Starting from the seminal paper by Newman (1980), during the last two decades various classical limit theorems of probability theory (CLT, SLLN, weak and strong invariance principles, LIL and FLIL, Glivenko-Cantelli type theorems, etc.) were established for stochastic processes and random fields under the positive or negative dependence conditions mentioned above. In this paper, we prove more general variants of the CLT for random fields involving wider classes of random variables (Theorems 4-13 of Section 2). Section 3 is devoted to an important new result by Shao (2000) showing that the expectations of convex increasing functions of maxima of partial sums of negatively dependent random variables can be estimated by means of the independent copies of the summands. We demonstrate that it is impossible to get an exact analogue of this result for negatively dependent random fields $\{X_j,\ j \in \mathbb{Z}^d\}$ with $d > 1$.

Definition 3 Call a collection of real valued random variables $Y = \{Y_t,\ t \in T\}$ with $\mathbb{E} Y_t^2 < \infty$ ($t \in T$) quasi associated if, for all finite disjoint subsets $I$, $J$ of $T$ and any Lipschitz functions $f : \mathbb{R}^I \to \mathbb{R}$, $g : \mathbb{R}^J \to \mathbb{R}$, one has
$$\big|\mathrm{Cov}\big(f(Y_i,\, i \in I),\, g(Y_j,\, j \in J)\big)\big| \le \sum_{i \in I} \sum_{j \in J} L_i(f)\, L_j(g)\, |\mathrm{Cov}(Y_i, Y_j)|, \qquad (2)$$
where the coordinate-wise Lipschitz constants $L_i(f)$ are such that, for all $x = (x_i,\, i \in I)$, $y = (y_i,\, i \in I)$ in $\mathbb{R}^I$,
$$|f(x) - f(y)| \le \sum_{i \in I} L_i(f)\, |x_i - y_i|.$$
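As a small numerical check of Definition 2, the sampling-without-replacement example mentioned above can be simulated directly. The sketch below is illustrative only and not part of the paper; the population, the index sets and the test functions are arbitrary choices. It estimates the covariance in (1) for two coordinate-wise nondecreasing functions of disjoint groups of draws; the estimate should be nonpositive up to Monte Carlo error.

```python
# Hedged illustration (not from the paper): Monte Carlo check of the negative
# dependence inequality (1) for sampling without replacement.
import numpy as np

rng = np.random.default_rng(0)
population = np.arange(10.0)            # finite population {0, 1, ..., 9}
n_draws, n_sims = 4, 100_000            # draw Y_1, ..., Y_4 without replacement

samples = np.array([rng.choice(population, size=n_draws, replace=False)
                    for _ in range(n_sims)])

# Disjoint index sets I = {0, 1}, J = {2, 3} and coordinate-wise nondecreasing
# functions f, g (sums of coordinates are nondecreasing in each coordinate).
f_vals = samples[:, 0] + samples[:, 1]
g_vals = samples[:, 2] + samples[:, 3]

cov_fg = np.cov(f_vals, g_vals)[0, 1]
print(f"estimated Cov(f, g) = {cov_fg:.3f}   (nonpositive up to Monte Carlo error)")
```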
Obviously, if a random field $\{Y_t,\ t \in T\}$ is quasi associated, the same is true for the centered field $\{Y_t - \mathbb{E} Y_t,\ t \in T\}$. Inequality (2) is satisfied for PA or NA random fields, see Bulinski and Shabanovich (1998). An analogue of (2) for smooth functions $f$ and $g$ was first proved in Birkel (1988) for associated random variables (related results appeared in Newman (1984), Roussas (1994), Peligrad and Shao (1995) and Bulinski (1996)). Note that covariance inequalities are powerful tools in establishing moment inequalities and limit theorems for sums of dependent r.v.'s. We refer, e.g., to Ibragimov and Linnik (1971), Withers (1981), Bradley and Bryc (1985), Doukhan (1994), Bakhtin and Bulinski (1997), Louhichi (1998), Rio (2000). Interesting examples of applications of covariance inequalities to statistical problems are provided by Doukhan and Louhichi (1999).
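The inequality (2) can also be probed numerically for a simple associated family. In the sketch below (illustrative only; the construction, index sets and test functions are arbitrary choices), the vector $Y$ is obtained by applying coordinate-wise nondecreasing maps to independent Gaussian innovations, hence it is associated (independent variables are associated, and nondecreasing functions of associated variables remain associated, Esary et al. (1967)) and therefore satisfies (2) by the Bulinski and Shabanovich (1998) result mentioned above. Both sides of (2) are compared for two functions whose coordinate-wise Lipschitz constants equal 1.

```python
# Hedged illustration: Monte Carlo comparison of both sides of inequality (2)
# for a small associated Gaussian vector. All concrete choices are arbitrary.
import numpy as np

rng = np.random.default_rng(1)
n_sims = 500_000

# Y_i = eps_i + 0.5 * eps_{i+1} with i.i.d. N(0,1) innovations, so the exact
# covariances are Cov(Y_i, Y_j) = 1.25 (i = j), 0.5 (|i - j| = 1), 0 otherwise.
eps = rng.standard_normal((n_sims, 5))
Y = eps[:, :4] + 0.5 * eps[:, 1:]

I, J = [0, 1], [2, 3]                      # disjoint index sets
f = np.tanh(Y[:, I]).sum(axis=1)           # coordinate-wise Lipschitz constants L_i(f) = 1
g = np.tanh(Y[:, J]).sum(axis=1)           # coordinate-wise Lipschitz constants L_j(g) = 1

Sigma = 1.25 * np.eye(4) + 0.5 * (np.eye(4, k=1) + np.eye(4, k=-1))   # exact Cov(Y_i, Y_j)
lhs = abs(np.cov(f, g)[0, 1])
rhs = sum(abs(Sigma[i, j]) for i in I for j in J)
print(f"|Cov(f, g)| ~ {lhs:.4f}   <=   sum of L_i(f) L_j(g) |Cov(Y_i, Y_j)| = {rhs:.4f}")
```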
2 Normal Approximation

Let $X = \{X_j,\ j \in \mathbb{Z}^d\}$, $d \ge 1$, be a centered random field such that $\mathbb{E}|X_j|^2 < \infty$ for all $j \in \mathbb{Z}^d$. For a finite subset $U$ of $\mathbb{Z}^d$ denote
$$W = B^{-1} \sum_{j \in U} X_j, \qquad B^2 = \sum_{j \in U} \mathbb{E} X_j^2, \qquad (3)$$
where the trivial case $B^2 = 0$ is excluded. Introduce further
$$R = B^{-2} \sum_{j,q \in U,\ j \ne q} |\mathrm{Cov}(X_j, X_q)|$$
and, for $\varepsilon > 0$, the Lindeberg function
$$L_\varepsilon = B^{-2} \sum_{j \in U} \mathbb{E}\, X_j^2\, \mathbf{1}\{|X_j| > \varepsilon B\}.$$
Evidently, $B^2$, $W$, $R$ and $L_\varepsilon$ are functions of $X_j$, $j \in U$, and we use also the notations $B^2(X, U)$, $W(X, U)$, $R(X, U)$ and $L_\varepsilon(X, U)$.

Theorem 4 If $X = \{X_j,\ j \in \mathbb{Z}^d\}$, $d \ge 1$, is a quasi associated centered random field, then, for any finite subset $U$ of $\mathbb{Z}^d$, every $x \in \mathbb{R}$ and arbitrary positive $\varepsilon$, $\gamma$,
$$|P(W \le x) - P(Z \le x)| \le P(x - \gamma \le Z \le x + \gamma) + C\big\{(3/2)\varepsilon + (4 + \varepsilon) L_\varepsilon + (1 + 2\varepsilon) R\big\}, \qquad (4)$$
where $Z$ is a standard normal random variable, and one can take
$$C = C(\gamma) = 2 + 2/\gamma. \qquad (5)$$

PROOF. It is well known (see Stein (1986)) that, for any bounded continuous function $g : \mathbb{R} \to \mathbb{R}$, the unique bounded solution of the equation
$$f'(w) - w f(w) = g(w) - \mathbb{E}\, g(Z) \qquad (6)$$
is determined by the formula
$$f(w) = -\exp(w^2/2) \int_w^{\infty} \big(g(t) - \mathbb{E}\, g(Z)\big) \exp(-t^2/2)\, dt. \qquad (7)$$
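As a quick numerical sanity check of (6) and (7) (purely illustrative and not part of the paper's argument), one can evaluate the integral in (7) for a fixed smooth bounded $g$ and verify that the resulting $f$ solves the Stein equation; the particular $g$ below, the evaluation points and the tolerances are arbitrary choices.

```python
# Hedged sketch: check that f from (7) solves f'(w) - w f(w) = g(w) - E g(Z)
# for one arbitrary smooth bounded g, at a few points w.
import numpy as np
from scipy.integrate import quad

def g(t):
    return 1.0 / (1.0 + np.exp(-3.0 * t))   # arbitrary smooth function with values in (0, 1)

# E g(Z) for a standard normal Z.
Eg, _ = quad(lambda t: g(t) * np.exp(-t**2 / 2) / np.sqrt(2 * np.pi),
             -np.inf, np.inf, epsabs=1e-12, epsrel=1e-12)

def f(w):
    # Formula (7): f(w) = -exp(w^2/2) * integral_w^inf (g(t) - E g(Z)) exp(-t^2/2) dt
    integral, _ = quad(lambda t: (g(t) - Eg) * np.exp(-t**2 / 2),
                       w, np.inf, epsabs=1e-12, epsrel=1e-12)
    return -np.exp(w**2 / 2) * integral

for w in (-1.5, 0.0, 0.7, 2.0):
    dw = 1e-4
    f_prime = (f(w + dw) - f(w - dw)) / (2 * dw)       # central finite difference
    residual = f_prime - w * f(w) - (g(w) - Eg)
    print(f"w = {w:5.2f}   Stein equation residual = {residual:.1e}")
```

The residuals should be close to zero (limited only by quadrature and finite-difference error), giving a direct check of (6) and (7) for this particular $g$.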
For fixed $x \in \mathbb{R}$ and $\gamma > 0$ define a smooth nondecreasing function $g(w) = g_{x,\gamma}(w)$ in such a way that $g(w) = 0$ if $w < x$, $g(w) = 1$ if $w > x + \gamma$ and $0 \le g'(w) \le 2/\gamma$ for all $w \in \mathbb{R}$. Hence, for any $x, t \in \mathbb{R}$, $\gamma > 0$,
$$\mathbf{1}_{[x,\infty)}(t) \ge g(t) \ge \mathbf{1}_{[x+\gamma,\infty)}(t). \qquad (8)$$
Then one can prove that for the function $f(w)$ (depending on $x$ and $\gamma$) given by (7) there exists $f''(w)$ for all $w \in \mathbb{R}$ and, for every $x \in \mathbb{R}$, $\gamma > 0$,
$$\|f\|_\infty \le \sqrt{\pi/2}, \qquad \|f'\|_\infty \le 2, \qquad \|f''\|_\infty \le \sqrt{\pi/2} + \|g'\|_\infty.$$
Consequently $f$, $f'$ are Lipschitz functions and, for the corresponding Lipschitz constants,
$$\max\{\mathrm{Lip}(f), \mathrm{Lip}(f')\} \le C, \qquad (9)$$
the bound $C$ being given in (5). Stein's ingenious idea is that substituting in (6) a random variable $W$ instead of $w$ and taking the expectation one gets
$$\mathbb{E} f'(W) - \mathbb{E}\, W f(W) = \mathbb{E}\, g(W) - \mathbb{E}\, g(Z). \qquad (10)$$
So the left hand side of (10) gives an accuracy of normal approximation for $\mathbb{E}\, g(W)$. Note that we cannot use here the discontinuous indicator function $g(w) = \mathbf{1}_{(-\infty, x]}(w)$ to measure the Kolmogorov distance between the distribution functions of $W$ and $Z$.

Our further steps consist of evaluating $\mathbb{E} f'(W) - \mathbb{E}\, W f(W)$. For a given $\varepsilon > 0$, define the Lipschitz function
$$h(t) = h_\varepsilon(t) = \begin{cases} -\varepsilon & \text{if } t < -\varepsilon, \\ t & \text{if } -\varepsilon \le t \le \varepsilon, \\ \varepsilon & \text{if } t > \varepsilon. \end{cases} \qquad (11)$$
Set for $j \in U$
$$\xi_j = X_j / B, \qquad \xi_{j,1} = h(\xi_j), \qquad \xi_{j,2} = \xi_j - \xi_{j,1}, \qquad W^{(j)} = W - \xi_j. \qquad (12)$$
Clearly $\xi_{j,1}$ and $\xi_{j,2}$ depend on $\varepsilon$. Using the obvious relations $\xi_j = \xi_{j,1} + \xi_{j,2}$ and $W = W^{(j)} + \xi_j = W^{(j)} + \xi_{j,1} + \xi_{j,2}$, one can write
$$\mathbb{E}\, W f(W) = \sum_{j \in U} \mathbb{E}\, \xi_j f(W) = R_1 + R_2 + R_3 + R_4, \qquad (13)$$
where
$$R_1 = \sum_{j \in U} \mathbb{E}\, \xi_j f(W^{(j)}),$$
$$R_2 = \sum_{j \in U} \mathbb{E}\big\{\xi_{j,2}\big[f(W) - f(W^{(j)})\big]\big\},$$
$$R_3 = \sum_{j \in U} \mathbb{E}\big\{\xi_{j,1}\big[f(W^{(j)} + \xi_{j,1} + \xi_{j,2}) - f(W^{(j)} + \xi_{j,1})\big]\big\},$$
$$R_4 = \sum_{j \in U} \mathbb{E}\big\{\xi_{j,1}\big[f(W^{(j)} + \xi_{j,1}) - f(W^{(j)})\big]\big\}.$$
Here $R_2$, $R_3$ and $R_4$ depend also on $\varepsilon$. Note that if $F : \mathbb{R} \to \mathbb{R}$, $G : \mathbb{R}^I \to \mathbb{R}$ are Lipschitz functions, then $F(G(\cdot))$ is Lipschitz as well and its coordinate-wise Lipschitz constants $L_i$ can be chosen so that $L_i\big(F(G(\cdot))\big) \le \mathrm{Lip}(F)\, L_i(G)$, $i \in I$. Therefore, (2) and (9) imply
$$|R_1| \le \sum_{j \in U} \big|\mathrm{Cov}\big(\xi_j, f(W^{(j)})\big)\big| \le \frac{C}{B^2} \sum_{j,q \in U,\ j \ne q} |\mathrm{Cov}(X_j, X_q)| = C R. \qquad (14)$$
Taking into account (9), we observe that
$$|R_2| \le \sum_{j \in U} \mathbb{E}\big|\xi_{j,2}\big[f(W^{(j)} + \xi_{j,1} + \xi_{j,2}) - f(W^{(j)} + \xi_{j,1})\big]\big| + \sum_{j \in U} \mathbb{E}\big|\xi_{j,2}\big[f(W^{(j)} + \xi_{j,1}) - f(W^{(j)})\big]\big|$$
$$\le C \sum_{j \in U} \big(\mathbb{E}|\xi_{j,2}|^2 + \mathbb{E}|\xi_{j,2}||\xi_{j,1}|\big) \le C \Big(\sum_{j \in U} \mathbb{E}\,|\xi_j|^2\, \mathbf{1}\{|\xi_j| > \varepsilon\} + \varepsilon \sum_{j \in U} \mathbb{E}\,|\xi_j|\, \mathbf{1}\{|\xi_j| > \varepsilon\}\Big) \le 2 C L_\varepsilon, \qquad (15)$$
in view of the following estimates:
$$|\xi_{j,1}| \le \varepsilon, \qquad |\xi_{j,2}| \le |\xi_j|\, \mathbf{1}\{|\xi_j| > \varepsilon\} \le (\xi_j^2/\varepsilon)\, \mathbf{1}\{|\xi_j| > \varepsilon\}. \qquad (16)$$
Analogously, we get
$$|R_3| \le C \sum_{j \in U} \mathbb{E}\,|\xi_{j,1}||\xi_{j,2}| \le C L_\varepsilon. \qquad (17)$$
The Taylor formula yields
$$f(W^{(j)} + \xi_{j,1}) - f(W^{(j)}) = f'(W^{(j)})\, \xi_{j,1} + \tfrac{1}{2} f''(\eta_j)\, \xi_{j,1}^2, \qquad (18)$$
where $\eta_j = \eta_j(\omega)$ is a point between $W^{(j)}(\omega)$ and $W^{(j)}(\omega) + \xi_{j,1}(\omega)$ ($\omega \in \Omega$, all random fields under consideration being defined on the same probability space $(\Omega, \mathcal{F}, P)$). Thus,
$$R_4 = \sum_{j \in U} \mathbb{E}\, \xi_{j,1}^2 f'(W^{(j)}) + \Delta_1,$$
where, because of the relations $|\xi_{j,1}| \le |\xi_j|$ for $j \in U$ and $\sum_{j \in U} \mathbb{E}\, \xi_j^2 = 1$, we have
$$|\Delta_1| \le \tfrac{1}{2} \sum_{j \in U} \mathbb{E}\, |\xi_{j,1}|^3 |f''(\eta_j)| \le \tfrac{1}{2} \|f''\|_\infty \sum_{j \in U} \mathbb{E}\, |\xi_{j,1}|^3 \le \tfrac{1}{2} C \varepsilon. \qquad (19)$$
Further on,
$$\sum_{j \in U} \mathbb{E}\big\{\xi_{j,1}^2 f'(W^{(j)})\big\} = \sum_{j \in U} \mathrm{Cov}\big(\xi_{j,1}^2, f'(W^{(j)})\big) + \sum_{j \in U} \mathbb{E}\, \xi_{j,1}^2\, \mathbb{E} f'(W^{(j)}). \qquad (20)$$
Note that $h^2$ is a Lipschitz function with $\mathrm{Lip}(h^2) = 2\varepsilon$. By (2) and (9),
$$\Big|\sum_{j \in U} \mathrm{Cov}\big(\xi_{j,1}^2, f'(W^{(j)})\big)\Big| \le \frac{2 C \varepsilon}{B^2} \sum_{j,q \in U,\ j \ne q} |\mathrm{Cov}(X_j, X_q)| = 2 C \varepsilon R. \qquad (21)$$
Now,
$$\sum_{j \in U} \mathbb{E}\, \xi_{j,1}^2\, \mathbb{E} f'(W^{(j)}) = \sum_{j \in U} \mathbb{E}\, \xi_{j,1}^2\, \mathbb{E} f'(W) + \sum_{j \in U} \mathbb{E}\, \xi_{j,1}^2 \big(\mathbb{E} f'(W^{(j)}) - \mathbb{E} f'(W)\big) \qquad (22)$$
and
$$\Big|\sum_{j \in U} \mathbb{E}\, \xi_{j,1}^2 \big(\mathbb{E} f'(W^{(j)}) - \mathbb{E} f'(W)\big)\Big| \le \Delta_2 + \Delta_3, \qquad (23)$$
where
$$\Delta_2 = \sum_{j \in U} \mathbb{E}\, \xi_{j,1}^2\, \big|\mathbb{E}\big\{f'(W^{(j)} + \xi_{j,1} + \xi_{j,2}) - f'(W^{(j)} + \xi_{j,1})\big\}\big|,$$
$$\Delta_3 = \sum_{j \in U} \mathbb{E}\, \xi_{j,1}^2\, \big|\mathbb{E}\big\{f'(W^{(j)} + \xi_{j,1}) - f'(W^{(j)})\big\}\big|.$$
Relations (9) and (16) yield
$$\Delta_2 \le C \sum_{j \in U} \mathbb{E}\, \xi_{j,1}^2\, \mathbb{E}|\xi_{j,2}| \le C \varepsilon^2 \sum_{j \in U} \mathbb{E}|\xi_{j,2}| \le C \varepsilon L_\varepsilon, \qquad (24)$$
$$\Delta_3 \le C \sum_{j \in U} \mathbb{E}\, \xi_{j,1}^2\, \mathbb{E}|\xi_{j,1}| \le C \varepsilon. \qquad (25)$$
Using again the relation $\sum_{j \in U} \mathbb{E}\, \xi_j^2 = 1$, we have
$$\mathbb{E} f'(W) \sum_{j \in U} \mathbb{E}\, \xi_{j,1}^2 = \mathbb{E} f'(W) + \mathbb{E} f'(W) \sum_{j \in U} \big(\mathbb{E}\, \xi_{j,1}^2 - \mathbb{E}\, \xi_j^2\big). \qquad (26)$$
On account of (9) and (12) we obtain
$$\Big|\mathbb{E} f'(W) \sum_{j \in U} \big(\mathbb{E}\, \xi_{j,1}^2 - \mathbb{E}\, \xi_j^2\big)\Big| \le \|f'\|_\infty \sum_{j \in U} \mathbb{E}\, |\xi_j^2 - \xi_{j,1}^2| \le C \sum_{j \in U} \mathbb{E}\, \xi_j^2\, \mathbf{1}\{|\xi_j| > \varepsilon\} = C L_\varepsilon. \qquad (27)$$
Hence (18)-(27) imply that
$$R_4 = \mathbb{E} f'(W) + \Delta_4, \qquad (28)$$
where
$$|\Delta_4| \le (3/2)\, C \varepsilon + C (1 + \varepsilon) L_\varepsilon + 2 C \varepsilon R$$
(collecting the bounds (19), (21), (24), (25) and (27)). Finally, due to (8), for any $x \in \mathbb{R}$, $\gamma > 0$,
$$P(W \le x) - P(Z \le x + \gamma) \le -\big(\mathbb{E}\, g(W) - \mathbb{E}\, g(Z)\big) = -\big(\mathbb{E} f'(W) - \mathbb{E}\, W f(W)\big)$$
$$= -\big(\mathbb{E} f'(W) - R_1 - R_2 - R_3 - \mathbb{E} f'(W) - \Delta_4\big) = R_1 + R_2 + R_3 + \Delta_4. \qquad (29)$$
According to (14)-(17) and (28),
$$|R_1 + R_2 + R_3 + \Delta_4| \le (3/2)\, C \varepsilon + C (4 + \varepsilon) L_\varepsilon + C (1 + 2\varepsilon) R. \qquad (30)$$
For the function $\tilde g(t) = g_{x-\gamma,\gamma}(t)$, $t \in \mathbb{R}$, we get in a similar way
$$P(W \le x) - P(Z \le x - \gamma) \ge -\big(\mathbb{E}\, \tilde g(W) - \mathbb{E}\, \tilde g(Z)\big). \qquad (31)$$
So we come to the same bounds as in (29) and (30). Thus, the estimate (4) is established.

Remark 5 To prove Theorem 4 and other results concerning quasi associated random fields we actually need only the property (2) where the cardinality of the set $I$ is equal to 1.

Corollary 6 For a family of quasi associated centered random fields $X^{(n)} = \{X_j^{(n)},\ j \in \mathbb{Z}^d\}$, $n \in \mathbb{N}$, and a family of finite subsets $U_n$ of $\mathbb{Z}^d$, the CLT holds, i.e.
$$W\big(X^{(n)}; U_n\big) \xrightarrow{\ law\ } Z \quad \text{as } n \to \infty,$$
whenever, for every $\varepsilon > 0$,
$$L_\varepsilon\big(X^{(n)}; U_n\big) \to 0 \quad \text{and} \quad R\big(X^{(n)}; U_n\big) \to 0 \quad \text{as } n \to \infty. \qquad (32)$$

Remark 7 If the $X^{(n)}$ are positively or negatively dependent random fields, the condition on $R$ in (32) means that asymptotically the behaviour of the sums is similar to the case of independent r.v.'s, since
$$\sum_{j \in U_n} \mathrm{Var}\, X_j^{(n)} \Big/ \mathrm{Var}\Big(\sum_{j \in U_n} X_j^{(n)}\Big) \to 1, \quad \text{as } n \to \infty.$$

Remark 8 Our Theorem 4 comprises Theorem 1 by Bulinski and Vronski (1996), where a strictly stationary associated random field $X = \{X_j,\ j \in \mathbb{Z}^d\}$ was studied under the condition
$$\sigma^2 = \sum_{j \in \mathbb{Z}^d} \mathrm{Cov}(X_0, X_j) < \infty$$
and summation was carried over finite sets $U_n \subset \mathbb{Z}^d$ growing in the Van Hove sense (appropriate to the discrete case). The latter result generalized the classical Newman CLT, where partial sums were taken over blocks. We use the renorm group approach, considering for $m = (m_1, \ldots, m_d) \in \mathbb{N}^d$ the auxiliary random fields
$$Y_q^{(m)} = \sum_{j \in \Pi_q(m)} X_j, \qquad q = (q_1, \ldots, q_d) \in \mathbb{Z}^d,$$
where $\Pi_q(m) = \{i = (i_1, \ldots, i_d) \in \mathbb{Z}^d : (q_k - 1) m_k < i_k \le q_k m_k,\ k = 1, \ldots, d\}$. Note that here we consider more general dependence conditions and neither stationarity nor the existence of absolute moments of summands of order higher than two is required. Moreover, no conditions are imposed on the growth of the sets $U_n$ used to form the partial sums, and there are no hypotheses concerning the rates of decrease of the covariance functions of the random fields under consideration (cf. Cox and Grimmett (1984), Roussas (1994), Bulinski and Keane (1996), Bulinski and Vronski (1996)).

Let now $X = \{X_j,\ j \in \mathbb{Z}^d\}$, $d \ge 1$, be a random field such that
$$\mathbb{E} X_j = 0, \qquad \mathbb{E}|X_j|^s < \infty \ \text{ for some } s \in (2, 3] \text{ and all } j \in \mathbb{Z}^d. \qquad (33)$$
For a finite subset $U$ of $\mathbb{Z}^d$, denote by $L_s$ the Lyapounov fraction
$$L_s = B^{-s} \sum_{j \in U} \mathbb{E}|X_j|^s,$$
where $B^2 > 0$ is defined in (3).

Theorem 9 If $X$ is a quasi associated centered random field satisfying (33), then, for any finite subset $U$ of $\mathbb{Z}^d$, every $x \in \mathbb{R}$ and arbitrary positive $\gamma$, one has
$$|P(W \le x) - P(Z \le x)| \le P(x - \gamma \le Z \le x + \gamma) + 3 C R + (13/2)\, C L_s, \qquad (34)$$
where $W$, $Z$ and $C$ are the same as in (3) and (5).
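Before turning to the proof, here is a hedged numerical sketch (not from the paper) of the quantities appearing in Theorems 4 and 9. It uses a toy associated field, the moving average $X_j = \varepsilon_j + a\,\varepsilon_{j+1}$ with a nonnegative weight $a$ and i.i.d. centered innovations (coordinate-wise nondecreasing functions of independent variables are associated), grouped into block sums in the spirit of Remark 8 so that $R$ becomes small. The coefficient, block length, moment order $s = 3$, sample sizes and the whole Monte Carlo design are arbitrary choices; $R$ is computed from the exact covariance structure of the blocks, while $L_s$ and the Kolmogorov distance are Monte Carlo estimates.

```python
# Hedged illustration: R, L_s and an empirical Kolmogorov distance for block sums
# of the associated moving-average field X_j = eps_j + a * eps_{j+1}.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
a, m, n_sims, s = 0.5, 15, 10_000, 3     # MA weight, block length, replications, moment order

def summary(K):
    """R (exact), L_s (Monte Carlo) and an empirical Kolmogorov distance for K blocks."""
    N = m * K
    eps = rng.exponential(1.0, size=(n_sims, N + 1)) - 1.0   # i.i.d. centered innovations, Var = 1
    X = eps[:, :N] + a * eps[:, 1:]                          # nonnegative weights => associated field
    Y = X.reshape(n_sims, K, m).sum(axis=2)                  # block sums; here U has K sites

    var_block = 1.0 + (1.0 + a) ** 2 * (m - 1) + a ** 2      # exact Var Y_q for this construction
    B2 = K * var_block                                       # B^2 of (3) for the block field
    R = 2.0 * (K - 1) * a / B2                               # only adjacent blocks are correlated, Cov = a

    Ls = (np.abs(Y) ** s).mean(axis=0).sum() / B2 ** (s / 2) # Lyapounov fraction of Theorem 9 (s = 3)

    W = Y.sum(axis=1) / np.sqrt(B2)
    grid = np.linspace(-3.0, 3.0, 601)
    dist = np.abs((W[:, None] <= grid).mean(axis=0) - norm.cdf(grid)).max()
    return R, Ls, dist

for K in (4, 16, 64):
    R, Ls, dist = summary(K)
    print(f"K = {K:3d}   R = {R:.4f}   L_s = {Ls:.4f}   sup_x |P(W<=x) - Phi(x)| ~ {dist:.4f}")
```

As the number of blocks grows, $R$ and $L_s$ decrease and the empirical distance shrinks accordingly, in line with Corollary 6 and Theorem 9; for independent summands one would have $R = 0$ exactly.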
PROOF. The scheme of the proof follows that of Theorem 4. Take $\varepsilon = 1$ in the definition of the function $h$ in (11) and note that $L_1 \le L_s$. Instead of (25) one now has
$$\Delta_3 \le C \sum_{j \in U} \mathbb{E}\, \xi_{j,1}^2\, \mathbb{E}|\xi_{j,1}| \le C \sum_{j \in U} \mathbb{E}\, |\xi_{j,1}|^3 \le C L_s,$$
where the Lyapunov inequality and the bounds $|\xi_{j,1}| \le 1$, $|\xi_{j,1}| \le |\xi_j|$ ($j \in U$) are used. To estimate $|\Delta_1|$ we modify (19) in the same way.

Remark 10 The bound (34) is similar to Lemma 3 in Shao and Su (1999). Unfortunately, in that paper there is a gap in the proof (see p. 143) due to the application of the Hoeffding formula to discontinuous functions. Although in both papers Stein's method is used with a similar splitting ($R_1$, $R_2$, $R_3$, $R_4$), our treatment of $R_4$ is different.

Now we turn to the general dependence conditions for random fields proposed initially by Doukhan and Louhichi (1999) for stochastic processes. Note that for random fields there are no "future" and "past", but it is natural to measure the dependence between a single random variable $X_j$ and the other $X_q$, $q \in \mathbb{Z}^d$, when $\|q - j\|$ is "large" ($\|j\| = \max_{1 \le k \le d} |j_k|$).