Title Optimizing performance of Vlasov simulations using sparse velocity grids
Authors S. von Alfthan, I. Honkonen, A. Sandroos, M. Palmroth
Conference EGU General Assembly 2012
Media type Article
Language English
Digital document PDF
Published In: GRA - Volume 14 (2012)
Record number 250066658
 
Abstract
Global magnetohydrodynamic (MHD) codes successfully model the Earth's magnetosphere when the plasma has a well-defined temperature and when the important spatial scales are larger than the ion gyroradii. To describe a multi-temperature, multi-component plasma, one has to turn to a more accurate physical description. The exponential growth of supercomputing power has enabled us to develop a new finite volume method (FVM) code (Vlasiator) based on a six-dimensional Vlasov-hybrid approach, where ions are represented by six-dimensional distribution functions and electrons are a massless charge-neutralizing fluid. The six-dimensional distribution function is split into three-dimensional spatial and velocity spaces, i.e., every spatial cell stores a three-dimensional velocity grid. To make the approach usable, we have to achieve good computational efficiency: the total computing time required to simulate an event has to be on the order of tens of hours, and the code has to scale to tens of thousands of cores. Recently we have developed two techniques that help to enable this level of performance: 1) the velocity grid is sparse, so only the "important" velocity blocks of every spatial cell are simulated, greatly reducing the memory, CPU, and memory-bandwidth requirements of the simulation; 2) all levels of parallelism are extracted using a hybrid OpenMP-MPI approach, which reduces memory-bandwidth requirements and improves load balance. We will present the implementation details of a parallel FVM simulation with independently adaptive velocity grids in real-space cells using hybrid OpenMP-MPI parallelism. We will also present results from various test cases showing the impact of these approaches.
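
To make the sparse-velocity-grid idea concrete, the following C++ fragment is a minimal sketch of how a spatial cell might store only its populated velocity blocks in a hash map keyed by a linearized block index, creating blocks on demand and pruning those whose content falls below a sparsity threshold. All names here (SpatialCell, VelocityBlock, SPARSE_MIN_VALUE, BLOCK_WIDTH) are illustrative assumptions, not Vlasiator's actual data structures or parameters.

```cpp
// Hypothetical sketch of a sparse velocity grid: each spatial cell keeps only
// the velocity blocks whose phase-space density is significant, instead of a
// full dense 3-D velocity grid.
#include <algorithm>
#include <array>
#include <cstdint>
#include <iterator>
#include <unordered_map>

constexpr int BLOCK_WIDTH = 4;               // assumed velocity cells per block edge
constexpr double SPARSE_MIN_VALUE = 1.0e-15; // assumed density threshold for keeping a block

struct VelocityBlock {
    // Samples of the distribution function f(v) inside this block, zero-initialized.
    std::array<double, BLOCK_WIDTH * BLOCK_WIDTH * BLOCK_WIDTH> f{};
};

class SpatialCell {
public:
    // Linearize 3-D block indices (bi, bj, bk) into a single 64-bit key.
    static uint64_t blockKey(uint32_t bi, uint32_t bj, uint32_t bk) {
        return (static_cast<uint64_t>(bk) << 42) |
               (static_cast<uint64_t>(bj) << 21) |
               static_cast<uint64_t>(bi);
    }

    // Create the block on demand; regions of velocity space that are never
    // touched cost no memory at all.
    VelocityBlock& getOrCreateBlock(uint32_t bi, uint32_t bj, uint32_t bk) {
        return blocks_[blockKey(bi, bj, bk)];
    }

    // Drop blocks whose maximum value has fallen below the threshold, so memory
    // and bandwidth stay proportional to the "important" part of velocity space.
    void pruneEmptyBlocks() {
        for (auto it = blocks_.begin(); it != blocks_.end(); ) {
            double maxVal = 0.0;
            for (double v : it->second.f) maxVal = std::max(maxVal, v);
            it = (maxVal < SPARSE_MIN_VALUE) ? blocks_.erase(it) : std::next(it);
        }
    }

private:
    std::unordered_map<uint64_t, VelocityBlock> blocks_; // sparse velocity grid
};
```

In such a design, the cost of storing and updating velocity space scales with the number of retained blocks rather than with the full extent of the velocity grid, which is the effect described in the abstract; the hybrid OpenMP-MPI parallelization discussed there is not covered by this sketch.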