DNS and LES of wall turbulence subjected to a pressure gradient

J.-P. Laval (1), M. Marquillie (1)
(1) Laboratoire de Mécanique de Lille (LML), CNRS, Blv Paul Langevin, 59655 Villeneuve d'Ascq, FRANCE

Journée Scientifique ONERA, 9 October 2009, Châtillon

Published: Thursday 1 October 2009
Source: onera.fr
Number of pages: 34
  • Experiments on flat wall (Poland) and curved wall (Surrey & LML)
  • DNS of APG boundary layer on flat wall (Madrid)
  • DNS of APG channel flow, both flat & curved wall [DEISA] (performed by LML in collaboration with TU Munich (Germany), University of Rome (Italy), Chalmers University (Sweden), University of Surrey (UK))
Objectives:

  • Generating and analysing new data on near-wall turbulence
  • Extracting physical understanding from these data
  • Putting more physics into the near-wall RANS models
  • Developing better LES models near the wall
  • Investigating models based on Low Order Dynamical Systems

WALLTURB databases on adverse pressure gradient flows
WALLTURB: A European Synergy for the Assessment of Wall Turbulence (STREP FP6 2005-2009, http://wallturb.univ-lille1.fr)
Spatial discretisation:
  • 4th-order finite differences for the Laplacian (streamwise)
  • Collocation-Chebyshev (normal)
  • Fourier (spanwise)

Temporal discretisation:
  • 2nd-order backward Euler
  • 2nd-order Adams-Bashforth

Projection method for incompressibility.
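To illustrate the streamwise operator, here is a minimal sketch of a 4th-order second-derivative approximation; the 5-point central stencil is an assumption (the slides do not spell out the coefficients), and periodic boundaries are used only to keep the demo short:

```python
import numpy as np

# 4th-order 5-point central stencil for d2f/dx2 (streamwise Laplacian),
# periodic boundaries via np.roll, verified on f = sin(x) where f'' = -sin(x).
def second_derivative_4th(f, h):
    return (-np.roll(f, 2) + 16*np.roll(f, 1) - 30*f
            + 16*np.roll(f, -1) - np.roll(f, -2)) / (12*h**2)

def max_error(n):
    x = np.linspace(0, 2*np.pi, n, endpoint=False)
    h = x[1] - x[0]
    return np.max(np.abs(second_derivative_4th(np.sin(x), h) + np.sin(x)))

err, err2 = max_error(64), max_error(128)
# Halving h divides the error by ~2**4, confirming 4th-order accuracy.
print(err / err2)  # ≈ 16
```

The ratio of errors on successively refined grids confirms the formal order of the scheme.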
Incompressible Navier-Stokes equations:

∂ū/∂t + (ū·∇)ū = −∇p̄ + (1/Re) Δū,
∇·ū = 0.
[Figures: mesh in the physical domain and mesh in the computational domain]
y = (1 − γ(x̄)) ȳ + γ(x̄),   γ(x̄) = η̄(x̄) / (L + η̄(x̄))
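A minimal sketch of this kind of wall-fitted mapping: the physical channel with a lower-wall bump η(x) is pulled back to a rectangular computational domain ȳ ∈ [−1, 1] by an affine blend. The bump profile and the choice γ = η/2 (picked so that both walls map exactly) are assumptions for illustration; the exact γ used in the slides may differ:

```python
import numpy as np

# Affine wall-fitted mapping y = (1 - gamma(x))*ybar + gamma(x):
# ybar = +1 maps to the flat upper wall y = 1, ybar = -1 maps to the
# bumped lower wall y = -1 + eta(x).  gamma = eta/2 is an assumption.
def eta(x):
    """Illustrative smooth bump on the lower wall."""
    return 0.3 * np.exp(-((x - np.pi) / 0.5)**2)

def to_physical(x, ybar):
    gamma = eta(x) / 2.0
    return (1.0 - gamma) * ybar + gamma

x = np.linspace(0, 2*np.pi, 65)
j = np.arange(33)
ybar = np.cos(np.pi * j / 32)             # Chebyshev collocation points
X, Yb = np.meshgrid(x, ybar, indexing="ij")
Y = to_physical(X, Yb)

print(np.allclose(Y[:, 0], 1.0))          # top row on the flat upper wall
print(np.allclose(Y[:, -1], -1.0 + eta(x)))  # bottom row follows the bump
```

Because the mapping is smooth in x, the transformed operators only pick up extra terms proportional to derivatives of η(x), which is why small wall slopes are required.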
Navier-Stokes system in the computational coordinates:
∂ū/∂t + (ū·∇η)ū + (ū·Gη)ū = −∇η p̄ − Gη p̄ + (1/Re) Δη ū + (1/Re) Lη ū
Direct resolution ([Nz × Ny] pentadiagonal (5-band) matrices of size Nx). Any smooth geometry in the XY plane (small derivatives of η(x)).
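Each of the Nz × Ny lines thus reduces to a pentadiagonal system of size Nx that can be solved directly in O(Nx). A sketch with a generic diagonally dominant 5-band matrix (the real bands would come from the 4th-order Helmholtz/Poisson operators; the random fill is purely illustrative):

```python
import numpy as np
from scipy.linalg import solve_banded

# Direct solve of a pentadiagonal system in LAPACK banded storage:
# ab has 5 rows = 2nd super-, 1st super-, main, 1st sub-, 2nd sub-diagonal.
rng = np.random.default_rng(1)
n = 200
ab = rng.standard_normal((5, n))
ab[2] += 10.0                      # boost the main diagonal: well conditioned
b = rng.standard_normal(n)

x = solve_banded((2, 2), ab, b)    # O(n) banded LU solve per line

# Verify against the equivalent dense matrix
A = (np.diag(ab[0, 2:], 2) + np.diag(ab[1, 1:], 1) + np.diag(ab[2])
     + np.diag(ab[3, :-1], -1) + np.diag(ab[4, :-2], -2))
print(np.allclose(A @ x, b))       # True
```

The banded storage keeps memory at 5·Nx per line instead of Nx², which is what makes the per-line direct solve cheap.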
∇η·ū = −Gη·ū
MPI Parallelization in spanwise direction (Fourier).
Possibility to simulate a moving wall η(t, x)
MPI performance (optimization of transfers; 2 nodes of the NEC SX-8 at IDRIS)
Performance of the code

PERFORMANCES ON SCALAR PROCESSORS (without mapping)

Same physical problem:
Processor       Procs  Grid resolution  Mem./proc.  CPU/iter  time/point/iter  Perf.
IBM P4 P655       64   1024×193×512     1.2 Go      35.8 s    22.59 µs
IBM P4 P655      128   1024×193×512     0.6 Go      19.0 s    15.87 µs
IBM P4 P655      256   1024×193×512     0.3 Go       7.2 s    18.17 µs
IBM P6 P575      128   2048×193×1024    2.4 Go      33.9 s    10.72 µs
IBM P6 P575      256   2048×193×1024    1.2 Go      16.5 s    10.43 µs
IBM P6 P575      512   2048×193×1024    0.6 Go      11.5 s    14.59 µs

Same CPU cost per processor:
IBM P4 P655       64   1024×193×512     1.2 Go      35.8 s    22.59 µs         1.2 GFlops
IBM P4 P655      128   1024×193×1024    1.2 Go      38.8 s    17.92 µs         1.2 GFlops
IBM P4 P655      256   1024×193×2048    1.2 Go      34.5 s    21.76 µs         1.2 GFlops
IBM P6 P575      128   2048×193×1024    2.4 Go      33.9 s    10.72 µs
IBM P6 P575      256   2048×193×2048    2.4 Go      32.4 s    10.25 µs
IBM P6 P575      512   2048×193×4096    2.4 Go      50.4 s    15.93 µs
IBM BlueGene P   256   4096×33×1024     0.4 Go      12.2 s    22.51 µs
IBM BlueGene P   512   4096×33×2048     0.4 Go      14.0 s    25.91 µs
IBM BlueGene P  1024   4096×33×4096     0.4 Go      17.3 s    31.64 µs

Increase of memory and transfer:
IBM P4 P655      256   1024×193×512     0.3 Go       7.2 s    18.17 µs
IBM P4 P655      256   1024×193×1024    0.6 Go      14.0 s    17.66 µs
IBM P4 P655      256   1024×193×2048    1.2 Go      34.5 s    21.76 µs

Same problem on different processors:
IBM P4 P655       64   1024×193×512     1.2 Go      35.8 s    22.59 µs         1.2 GFlops
IBM P6 P575       64   1024×193×512     1.2 Go      16.3 s    10.30 µs         2.7 GFlops
IBM BlueGene P   256   1024×193×512     0.3 Go      15.7 s    39.77 µs
IBM P4 P655      256   1024×193×512     0.3 Go       7.2 s    18.17 µs

PERFORMANCES ON VECTORIAL PROCESSORS (with mapping)
NEC SX8           64   2304×385×576     6 Go        18.19 s    2.35 µs         10 GFlops

Peak performance (GFlops): BlueGene P (850 MHz): 3.4; Power4+ (1.7 GHz): 6.8; Power6 (4.7 GHz): 18.8; NEC SX-8 (2 GHz): 16.
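The "time/point/iter" column appears to be CPU time per iteration scaled by processor count and grid size; that relation is an assumption, checked here against the first IBM P4 row (64 procs, 1024×193×512, 35.8 s/iter):

```python
# Assumed metric: time/point/iter = cpu_time_per_iter * n_procs / n_points.
def time_per_point_us(cpu_s, procs, nx, ny, nz):
    return cpu_s * procs / (nx * ny * nz) * 1e6

t = time_per_point_us(35.8, 64, 1024, 193, 512)
print(round(t, 2))  # 22.64, close to the quoted 22.59 µs
```

The small residual difference is consistent with rounding of the quoted CPU times.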
Computation at HLRS (DECI 2006 & 2007): 64 processors NEC-SX8

  • Reynolds: Reτ = 617 at the inlet
  • Domain: 4π × 2 × π
  • Inlet: from precursor DNS of flat channel flow (same Reτ)
  • Resolution: 2304 × 384 × 576 (510 million grid points)
  • Integration: 450 000 time steps (160 000 CPU hours)
  • Memory: 400 Gb
  • Storage: >7 Tb for 932 3D fields (u, v, w, p) [NetCDF]; >3 Tb for 3D fields for visualization of vortices (every other 16 meshes in each direction); >800 Gb for 60×48×31 time evolutions
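A rough consistency check of the multi-terabyte storage figure: 932 snapshots of (u, v, w, p) on the full grid, assuming 4-byte (single-precision) reals; the precision is an assumption, since the slides only give the totals:

```python
# 932 fields x 4 variables x 4 bytes per value on the 2304 x 384 x 576 grid.
points = 2304 * 384 * 576           # ~510 million grid points
bytes_total = 932 * 4 * points * 4  # snapshots x variables x bytes
print(round(bytes_total / 1e12, 1)) # 7.6 (TB), consistent with ">7 Tb"
```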
Need to optimize MPI transfer routines (rewritten versions of MPI_GATHER, MPI_SCATTER). Performance with 64 processors: 10 GFlops/proc. (>60% of peak performance).
  • Helmholtz and Poisson equations: 55 % (14 GFlops)
  • FFT (VFFTPACK): 20 % (7-8 GFlops)
  • Communications (not overlapped with computations): 13 %
(Δx+)inlet = 5.1, (Δz+)inlet = 3.4; (Δx+)max = 10.7, (Δz+)max = 7.4 (resolution after dealiasing)
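The quoted inlet spacings follow from Δ+ = Reτ·L/N with half-height 1, if the streamwise count is reduced by the 2/3 dealiasing rule; both the formula and the "dealiased in x but not in z" reading are assumptions that happen to reproduce the slide's numbers:

```python
import math

# Grid spacing in wall units: delta+ = Re_tau * L / N (channel half-height 1).
re_tau, lx, lz = 617, 4*math.pi, math.pi
nx_dealiased = 2304 * 2 // 3      # 1536 streamwise modes after the 2/3 rule
dx_plus = re_tau * lx / nx_dealiased
dz_plus = re_tau * lz / 576
print(round(dx_plus, 2), round(dz_plus, 2))  # 5.05 3.37 (quoted: 5.1, 3.4)
```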
Maximum mesh size: 4η in the region of maximum of k (η: Kolmogorov scale)