
Development of an Adaptive Grid Code for Particle-in-Cell Simulation in Plasma Physics

Subject Area Optics, Quantum Optics and Physics of Atoms, Molecules and Plasmas
Funding Funded from 2006 to 2010
Project Identifier Deutsche Forschungsgemeinschaft (DFG) - Project number 32807477
 
Year of Creation 2011

Summary of the Project Results

A new adaptive hybrid code, A.I.K.E.F. (Adaptive Ion Kinetic Electron Fluid), was developed. The code operates on Cartesian meshes that adapt to the physical structures in both space and time. Adaptivity is implemented by means of hybrid block AMR: individual octs, i.e. eighths of a block, are refined rather than entire blocks. To maintain a reasonable number of particles in each cell, the particle population is adapted as well, by splitting and coalescence. The code is efficiently parallelized for distributed-memory systems using the Message Passing Interface (MPI); an early serial version already had a code structure designed with the later parallelization in mind. The calculation of the electromagnetic fields on the adaptive grids follows standard techniques such as interpolation and projection at grid boundaries. The implementation of adaptive particle systems, however, is considerably more complex and computationally expensive; it was therefore tested extensively and optimized for performance.

To capture the kinetic nature of the ions, an adequate number N of particles has to populate each grid cell. Motion from coarse to fine grids reduces N, and vice versa, so particles must be split or merged. Both particle splitting and particle merging are implemented such that mass, momentum and kinetic energy are conserved. Using sorted particle lists reduces the numerical cost of these operations from N × N to N. Since further steps such as the particle push and particle acceleration also require N operations, this reduction was a significant step towards an efficient work balance in view of the subsequent parallelization. An important measure for precise physical modeling is to trigger particle splitting already in the coarse grids, just before the particles reach the fine grids; this drastically reduces numerical noise at the grid boundaries. Particle splitting in grids of maximal refinement is avoided completely, so that any unrealistic modification of physical structures by particle splitting is excluded. This is of great advantage, since the structures of interest are preferentially located in the maximally refined grids.
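As an illustration of the conservation requirement stated above, the following is a minimal sketch of one possible splitting step, not taken from A.I.K.E.F. itself: a macro-particle is replaced by two children of half the statistical weight that keep the parent velocity and are displaced symmetrically about the parent position, so that mass, momentum and kinetic energy are preserved exactly. The Particle struct, the function name splitParticle and the offset dx are illustrative assumptions; the merging step must satisfy the same conservation constraints, but its realization in A.I.K.E.F. is not detailed here.

    #include <array>
    #include <utility>

    // Illustrative macro-particle: statistical weight, position, velocity.
    struct Particle {
        double weight;
        std::array<double, 3> x;
        std::array<double, 3> v;
    };

    // Split one macro-particle into two children of half the weight each.
    // Both children keep the parent velocity and are displaced symmetrically
    // about the parent position (dx is a small offset, e.g. a fraction of the
    // local cell size), so total mass, momentum and kinetic energy are
    // conserved exactly and the centre of mass is unchanged.
    std::pair<Particle, Particle> splitParticle(const Particle& p,
                                                const std::array<double, 3>& dx)
    {
        Particle a = p, b = p;
        a.weight = 0.5 * p.weight;
        b.weight = 0.5 * p.weight;
        for (int i = 0; i < 3; ++i) {
            a.x[i] = p.x[i] + dx[i];
            b.x[i] = p.x[i] - dx[i];
        }
        return {a, b};
    }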
A.I.K.E.F. was tested extensively. Several analytical dispersion relations of plasma waves were reproduced, and it was confirmed that the waves propagate through the boundaries between grids of different refinement levels without relevant change; the associated repeated particle splitting and coalescence do not affect mass, momentum or kinetic energy. Intensive cache optimization, improved random number generators, and sophisticated particle administration accelerated A.I.K.E.F. by a factor of six compared to the precursor code, even on static unrefined grids, without loss of precision. The gain from the AMR was determined by comparing a high-resolution simulation on a static grid with an adaptive simulation under identical physical conditions; a speedup by a factor of 71 was achieved.

A.I.K.E.F. was parallelized with the Message Passing Interface (MPI) in cooperation with the University of Edinburgh. To optimize the work balance and to minimize interprocessor communication, the blocks of the adaptive grid are sorted along a so-called space-filling curve (SFC), which requires the number of root blocks to be a power of two in each dimension. The curve is then subdivided into as many intervals as CPUs are available, such that each interval comprises approximately the average workload, and each CPU is assigned one interval. The shape of the curve ensures that the blocks of a given interval are close in coordinate space. This procedure provides nearly ideal scaling of the code; a sketch of such an ordering and subdivision is given below, after the application list. The overall speedup of A.I.K.E.F. compared to the precursor code is estimated at a factor of 17000 when optimization, adaptivity and parallelization on 128 CPUs are taken into account.

A.I.K.E.F. has already been applied very successfully to several scenarios:

  1. Real-time simulation of a magnetopause crossing of Saturn’s moon Titan: for the first time, the so-called “fossil fields” inside Titan’s ionosphere and the complex tail reconfiguration were simulated. Without adaptivity, simulating both effects is impossible because of the very different spatial scales involved.
  2. Simulation of Mercury’s plasma environment and reproduction of MESSENGER observations with high precision, owing to the adaptivity.
  3. Analysis of Enceladus’ plasma environment with particular regard to the generation of Alfvén waves.
  4. Real-time simulation of the Moon’s plasma wake during highly dynamic solar wind conditions and interpretation of ARTEMIS observations.
  5. Study of the impact of the stellar magnetic field orientation on the magnetospheres of exoplanets.
  6. Study of the perihelion approach of comet 67P/Churyumov-Gerasimenko.
  7. Simulation of cometary jets and their interaction with the solar wind.

The application of A.I.K.E.F. to further interesting space plasma topics will be pursued in cooperation with High Performance Computing Centers in Germany, Spain, Scotland, and Taiwan.
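The sketch announced above illustrates one possible form of the block ordering and workload subdivision: blocks are keyed along a Morton (Z-order) curve, a common choice of space-filling curve, the ordered list is cut into as many contiguous intervals of roughly average workload as there are MPI ranks, and each interval is assigned to one rank. The Block struct, the Morton key and the per-block work estimate are assumptions for illustration; the report does not specify which SFC or workload measure A.I.K.E.F. actually uses.

    #include <algorithm>
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Illustrative block descriptor: integer indices of the block in the
    // root-block grid plus an estimate of its workload (e.g. particle count).
    struct Block {
        std::uint32_t ix, iy, iz;
        double work;
    };

    // Interleave the bits of (ix, iy, iz) into a Morton (Z-order) key.
    std::uint64_t mortonKey(std::uint32_t ix, std::uint32_t iy, std::uint32_t iz)
    {
        std::uint64_t key = 0;
        for (int b = 0; b < 21; ++b) {
            key |= static_cast<std::uint64_t>((ix >> b) & 1u) << (3 * b);
            key |= static_cast<std::uint64_t>((iy >> b) & 1u) << (3 * b + 1);
            key |= static_cast<std::uint64_t>((iz >> b) & 1u) << (3 * b + 2);
        }
        return key;
    }

    // Sort the blocks along the curve and cut the ordered list into nRanks
    // contiguous intervals, each holding roughly the average workload.
    // Returns the MPI rank assigned to each block (in the sorted order).
    // Assumes nRanks > 0 and a positive total workload.
    std::vector<int> assignBlocks(std::vector<Block>& blocks, int nRanks)
    {
        std::sort(blocks.begin(), blocks.end(),
                  [](const Block& a, const Block& b) {
                      return mortonKey(a.ix, a.iy, a.iz) <
                             mortonKey(b.ix, b.iy, b.iz);
                  });

        double total = 0.0;
        for (const Block& blk : blocks) total += blk.work;
        const double perRank = total / nRanks;

        std::vector<int> owner(blocks.size());
        double done = 0.0;  // workload assigned so far
        for (std::size_t i = 0; i < blocks.size(); ++i) {
            const int rank = static_cast<int>(done / perRank);
            owner[i] = std::min(rank, nRanks - 1);
            done += blocks[i].work;
        }
        return owner;
    }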

Project-Related Publications (Selection)

  • The plasma interaction of Enceladus: 3D hybrid simulations and comparison with Cassini MAG data. Planetary and Space Science 57, 2113–2122, 2009
    H. Kriegel, S. Simon, J. Mueller, U. Motschmann, J. Saur, K.-H. Glassmeier, M. K. Dougherty
  • A hybrid simulation of Mercury’s magnetosphere for the MESSENGER encounters in year 2008. Icarus 209, 46–52, 2010
    Y.-C. Wang, J. Mueller, U. Motschmann, W.-H. Ip
  • Global plasma-parameter simulation of Comet 67P/Churyumov-Gerasimenko approaching the Sun. Astronomy & Astrophysics 520, A92, 2010
    N. Gortsas, U. Motschmann, E. Kührt, K.-H. Glassmeier, K. C. Hansen, J. Mueller, A. Schmidt
    (Available online at https://doi.org/10.1051/0004-6361/201014761)
  • Magnetic field fossilization and tail reconfiguration at Titan during a magnetopause passage: 3D adaptive hybrid code simulations. Planetary and Space Science 58, 1526–1546, 2010
    J. Mueller, S. Simon, U. Motschmann, K.-H. Glassmeier, J. Saur, J. Schuele, G. Pringle
  • Interplanetary magnetic field orientation and the magnetospheres of close-in exoplanets. Astronomy & Astrophysics 525, A117, 2011
    E. P. G. Johansson, J. Mueller, U. Motschmann
    (Available online at https://doi.org/10.1051/0004-6361/201014802)
 
 
