Strain Transformation and Rosette Gage Theory
28 pages
English

Description

AE3145 Strain Transformation and Rosette Gage Theory: It is often desired to measure the full state of strain on the surface of a part, that is, to measure not only the two extensional strains, εx and εy, but also the shear strain, γxy, with respect to some given xy axis system. It should be clear from the previous discussion of the electrical resistance strain gage that a single gage is capable only of measuring the extensional strain in the direction in which the gage is oriented.


Excerpt

Lecture 7: The Metropolis-Hastings Algorithm
Nick Whiteley
What we have seen last time: the Gibbs sampler

Key idea: Generate a Markov chain by updating each component of (X_1, ..., X_p) in turn by drawing from the full conditionals:

X_j^{(t)} ~ f_{X_j | X_{-j}}(· | X_1^{(t)}, ..., X_{j-1}^{(t)}, X_{j+1}^{(t-1)}, ..., X_p^{(t-1)})

Two drawbacks:
- Requires that it is possible / easy to sample from the full conditionals.
- Can yield a slowly mixing chain if (some of) the components of (X_1, ..., X_p) are highly correlated.

What we will see today: the Metropolis-Hastings algorithm

Key idea: Use a rejection mechanism, with a "local proposal": we let the newly proposed X depend on the previous state of the chain, X^{(t-1)}.

Samples (X^{(0)}, X^{(1)}, ...) form a Markov chain (like the Gibbs sampler).
Outline
5.1 Algorithm
5.2 Convergence properties
5.3 & 5.4 RW Metropolis & Choice of proposal
The Metropolis-Hastings algorithm
Algorithm 5.1: Metropolis-Hastings
Starting with X^{(0)} := (X_1^{(0)}, ..., X_p^{(0)}), iterate for t = 1, 2, ...
1. Draw X ~ q(· | X^{(t-1)}).
2. Compute
   α(X | X^{(t-1)}) = min{1, [f(X) q(X^{(t-1)} | X)] / [f(X^{(t-1)}) q(X | X^{(t-1)})]}.
3. With probability α(X | X^{(t-1)}), set X^{(t)} = X; otherwise set X^{(t)} = X^{(t-1)}.
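As a concrete sketch, the three steps above fit in a few lines of Python. The standard normal target and the Gaussian random-walk proposal are illustrative choices, not part of the slides; with a symmetric proposal the q terms cancel in α.

```python
import math
import random

def f(x):
    """Target density, known up to a constant: a standard normal (illustrative choice)."""
    return math.exp(-0.5 * x * x)

def metropolis_hastings(n_iter, x0=0.0, sigma=1.0):
    """Algorithm 5.1 with a Gaussian random-walk proposal q(.|x) = N(x, sigma^2).
    The proposal is symmetric, so q(x_prev|x_new)/q(x_new|x_prev) = 1 in alpha."""
    chain = [x0]
    for _ in range(n_iter):
        x_prev = chain[-1]
        x_new = random.gauss(x_prev, sigma)       # 1. draw X ~ q(.|X^(t-1))
        alpha = min(1.0, f(x_new) / f(x_prev))    # 2. acceptance probability
        chain.append(x_new if random.random() < alpha else x_prev)  # 3. accept/reject
    return chain
```

For example, `metropolis_hastings(10000)` returns a chain whose empirical mean and variance approximate those of the target.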
Illustration of the Metropolis-Hastings method
Basic properties of the Metropolis-Hastings algorithm

The probability that a newly proposed value is accepted given X^{(t-1)} = x^{(t-1)} is
a(x^{(t-1)}) = ∫ α(x | x^{(t-1)}) q(x | x^{(t-1)}) dx.

The probability of remaining in state X^{(t-1)} is
P(X^{(t)} = X^{(t-1)} | X^{(t-1)} = x^{(t-1)}) = 1 − a(x^{(t-1)}).

The probability of acceptance does not depend on the normalisation constant: if f(x) = C·π(x), then
α(X | X^{(t-1)}) = min{1, [π(X) q(X^{(t-1)} | X)] / [π(X^{(t-1)}) q(X | X^{(t-1)})]}.
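That the acceptance probability is unchanged by the normalisation constant is easy to confirm numerically. In the sketch below, π is a hypothetical unnormalised (triangular) density and C an arbitrary constant; neither comes from the slides.

```python
import math

def alpha(x_new, x_prev, dens, q_ratio=1.0):
    """Acceptance probability min(1, dens(x_new) q(x_prev|x_new) / (dens(x_prev) q(x_new|x_prev))).
    q_ratio is q(x_prev|x_new)/q(x_new|x_prev); 1.0 for a symmetric proposal."""
    return min(1.0, dens(x_new) / dens(x_prev) * q_ratio)

pi = lambda x: max(0.0, 1.0 - abs(x))   # hypothetical unnormalised density
C = 7.3                                  # arbitrary normalisation constant
f = lambda x: C * pi(x)                  # f(x) = C * pi(x)

# The acceptance probability is the same whether computed from f or from pi,
# because C cancels in the ratio:
print(math.isclose(alpha(0.5, 0.2, f), alpha(0.5, 0.2, pi)))  # True
```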
The Metropolis-Hastings Transition Kernel

Lemma 5.1: The transition kernel of the Metropolis-Hastings algorithm is
K(x^{(t-1)}, x^{(t)}) = α(x^{(t)} | x^{(t-1)}) q(x^{(t)} | x^{(t-1)}) + (1 − a(x^{(t-1)})) δ_{x^{(t-1)}}(x^{(t)}),
where δ_{x^{(t-1)}}(·) denotes the Dirac mass on {x^{(t-1)}}.
Theoretical properties
Proposition 5.1: The Metropolis-Hastings kernel satisfies the detailed balance condition
K(x^{(t-1)}, x^{(t)}) f(x^{(t-1)}) = K(x^{(t)}, x^{(t-1)}) f(x^{(t)}).
Thus f(x) is the invariant distribution of the Markov chain (X^{(0)}, X^{(1)}, ...). Furthermore, the Markov chain is reversible.
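Detailed balance can be verified numerically on a small discrete analogue of the kernel. The sketch below uses three states with illustrative target probabilities and a uniform (symmetric) proposal; these choices are assumptions, not from the slides.

```python
# Discrete analogue of the M-H kernel on states {0, 1, 2}.
states = [0, 1, 2]
f = [0.2, 0.5, 0.3]       # illustrative target probabilities
q = 1.0 / 3.0             # uniform proposal: q(y|x) = 1/3 for every y

def alpha(x, y):
    """Acceptance probability; the symmetric q cancels in the ratio."""
    return min(1.0, f[y] / f[x])

def a(x):
    """Overall acceptance probability a(x) = sum_y alpha(y|x) q(y|x)."""
    return sum(alpha(x, y) * q for y in states)

def K(x, y):
    """M-H kernel: alpha(y|x) q(y|x) + (1 - a(x)) delta_x(y)."""
    return alpha(x, y) * q + (1.0 - a(x)) * (1.0 if x == y else 0.0)

# Detailed balance: K(x,y) f(x) = K(y,x) f(y) for every pair of states.
print(all(abs(K(x, y) * f[x] - K(y, x) * f[y]) < 1e-12
          for x in states for y in states))  # True
```

Each row of K also sums to 1, confirming that K is a valid transition kernel.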
Example 5.1: Reducible Metropolis-Hastings

Consider the target distribution
f(x) = (I_[0,1](x) + I_[2,3](x)) / 2
and the proposal distribution q(· | x^{(t-1)}):
X | X^{(t-1)} = x^{(t-1)} ~ U[x^{(t-1)} − δ, x^{(t-1)} + δ].

[Figure: the densities f(·), of height 1/2 on [0,1] and [2,3], and q(· | x^{(t-1)}), of height 1/(2δ) around x^{(t-1)}, plotted over x from 0 to 3.]

Reducible if δ ≤ 1: the chain stays either in [0,1] or in [2,3].
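The reducibility can be observed in simulation. A minimal sketch, assuming the target and proposal of Example 5.1 with δ = 1 (the helper names are hypothetical):

```python
import random

def f(x):
    """Target of Example 5.1: uniform mixture on [0,1] and [2,3]."""
    return 0.5 if (0.0 <= x <= 1.0) or (2.0 <= x <= 3.0) else 0.0

def run_chain(x0, delta, n_iter, seed=0):
    """Run M-H with proposal U[x - delta, x + delta]; report which components were visited."""
    random.seed(seed)
    x = x0
    visited_low = visited_high = False
    for _ in range(n_iter):
        prop = random.uniform(x - delta, x + delta)
        if random.random() < min(1.0, f(prop) / f(x)):  # f(x) > 0 along the chain
            x = prop
        visited_low |= 0.0 <= x <= 1.0
        visited_high |= 2.0 <= x <= 3.0
    return visited_low, visited_high

# With delta = 1 the chain started in [0,1] never reaches [2,3]:
print(run_chain(0.5, delta=1.0, n_iter=20000))  # (True, False)
```

With a larger δ (say δ = 2) the proposal can bridge the gap between the two intervals, and both components become reachable.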
Further theoretical properties
The Markov chain (X^{(0)}, X^{(1)}, ...) is irreducible if q(x | x^{(t-1)}) > 0 for all x, x^{(t-1)} ∈ supp(f): every state can then be reached in a single step. (Less strict conditions can be obtained; see e.g. Roberts & Tweedie, 1996.) The chain is aperiodic if there is positive probability that the chain remains in the current state, i.e. P(X^{(t)} = X^{(t-1)}) > 0.