The following guidelines give a quick overview of the code.
The first section describes the main algorithm of the Notus program, which drives the overall execution. The second section presents the domain and some related points concerning the partitioning of the domain and the boundary conditions. Finally, we briefly present the PDE discretization principles and the associated linear systems.
The main Notus algorithm is located in the file notus.f90:
Read test case file and initialization
Each block of the NTS file is read, and the associated variables and fields are set.
Time loop
The following steps are executed at each time iteration: navier, energy, species_transport, and phase_advection.
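A minimal sketch of this driver loop is given below. Apart from the routine names navier, energy, species_transport, and phase_advection, which are quoted from the description above, every name, the stub bodies, and the fixed-step loop structure are assumptions and do not reproduce the actual content of notus.f90.

    ! Hedged sketch of the Notus driver: read the test case, then loop in time.
    program notus_driver_sketch
       implicit none
       integer :: it
       integer, parameter :: nb_iterations = 10          ! illustrative value
       double precision, parameter :: time_step = 1.0d-3 ! illustrative value
       double precision :: time

       ! 1) read the test case (NTS) file: each block sets variables and fields
       ! 2) time loop: one call per physical model at each iteration
       time = 0.0d0
       do it = 1, nb_iterations
          call navier()             ! Navier-Stokes equations
          call energy()             ! energy equation
          call species_transport()  ! species transport equations
          call phase_advection()    ! phase advection
          time = time + time_step
       end do

    contains
       ! empty stubs so the sketch compiles; the real routines live in Notus modules
       subroutine navier()
       end subroutine navier
       subroutine energy()
       end subroutine energy
       subroutine species_transport()
       end subroutine species_transport
       subroutine phase_advection()
       end subroutine phase_advection
    end program notus_driver_sketch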
Notus solves fluid flows in a global physical domain that is a Cartesian part of the 2D or 3D real space delimited by global boundaries. Figure 1 illustrates this definition: the global physical domain is outlined in gray and the global boundaries are drawn with black lines.
When the physics requires non-Cartesian boundaries, Notus uses immersed boundaries: the global domain is split into two sub-domains; the inner sub-domain contains the physics of interest, while the outer sub-domain adapts the physical problem to the Cartesian nature of Notus. Figure 1 also illustrates immersed boundaries, drawn with blue lines; the inner sub-domain is shaded in white, while the outer sub-domain is shaded in blue.
Notus can handle periodic domains. When the domain is periodic in a given direction, the domain extent in that direction must match the period.
The module variables_domain defines a handful of variables that describe the global domain.
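As a purely illustrative sketch (the variable names below are hypothetical and do not reproduce the actual content of variables_domain), such a module typically gathers the global extent and periodicity of the domain:

    module variables_domain_sketch
       implicit none
       ! hypothetical examples of global-domain descriptors
       double precision :: domain_length_x = 1.0d0   ! extent in the x direction
       double precision :: domain_length_y = 1.0d0   ! extent in the y direction
       double precision :: domain_length_z = 1.0d0   ! extent in the z direction
       logical :: is_periodic_x = .false.            ! periodicity in the x direction
       logical :: is_periodic_y = .false.            ! periodicity in the y direction
       logical :: is_periodic_z = .false.            ! periodicity in the z direction
    end module variables_domain_sketch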
When the code is executed on more than one processor, the global physical domain is partitioned into local physical domains, each of which is solved by a processing unit. Notus automatically finds an optimal partitioning that balances the processor loads and minimises the data exchanged between processors. Similarly to the global physical domain, each local domain has its own local boundaries. A local boundary may match a part of a physical boundary, but most of them are boundaries between processors, where special treatment is required to solve the equations in parallel. Figure 2 illustrates a partition of the global physical domain with local boundaries drawn with black lines.
Parts of the code that deal with parallelism can be found in several directories.
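A hedged sketch of how such a balanced partition can be obtained with standard MPI calls is given below; it only illustrates the idea, and the actual Notus partitioning code may differ (for instance, the periodicity flags would be set from the domain periodicity).

    program partition_sketch
       use mpi
       implicit none
       integer :: ierr, nb_processes, rank, comm_cart
       integer :: dims(3)
       logical :: periods(3)

       call MPI_Init(ierr)
       call MPI_Comm_size(MPI_COMM_WORLD, nb_processes, ierr)
       call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)

       ! let MPI propose a process grid with dimensions as balanced as possible,
       ! which limits the size of the inter-processor boundaries
       dims = 0
       call MPI_Dims_create(nb_processes, 3, dims, ierr)

       ! Cartesian communicator describing the partition of the global domain
       periods = .false.   ! would reflect the periodicity of the physical domain
       call MPI_Cart_create(MPI_COMM_WORLD, 3, dims, periods, .true., comm_cart, ierr)

       if (rank == 0) print '(a,3i4)', 'process grid: ', dims

       call MPI_Finalize(ierr)
    end program partition_sketch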
The global physical domain is represented by a Cartesian grid composed of cells. The local physical domain is exactly a subset of these cells, as shown in Figure 3.
For different numerical reasons, the local physical domain is actually extended across each local boundary by nghost rows of cells; the extended domain is called the numerical domain. It has nx, ny, and nz cells in the x, y, and z directions, respectively.
The Fortran arrays are based on the numerical domain. Some index variables help to locate the start or the end of the numerical domain (for instance is, ie and js, je in Figure 4).
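For illustration, the sketch below allocates a cell field on the numerical domain and loops over the physical cells only. The relations between nghost, nx, ny, nz and the index variables is, ie, js, je (extended here to ks, ke in the z direction) are assumptions about the convention, not a transcription of the Notus code.

    program numerical_domain_sketch
       implicit none
       integer, parameter :: nghost = 2                 ! ghost layers per side
       integer, parameter :: nx = 16 + 2*nghost         ! numerical domain size in x
       integer, parameter :: ny = 16 + 2*nghost         ! numerical domain size in y
       integer, parameter :: nz = 16 + 2*nghost         ! numerical domain size in z
       integer, parameter :: is = nghost + 1, ie = nx - nghost
       integer, parameter :: js = nghost + 1, je = ny - nghost
       integer, parameter :: ks = nghost + 1, ke = nz - nghost
       double precision :: temperature(nx, ny, nz)
       integer :: i, j, k

       temperature = 0.0d0
       ! update the physical cells only; the ghost cells are filled elsewhere by
       ! the boundary conditions or by the MPI exchanges described below
       do k = ks, ke
          do j = js, je
             do i = is, ie
                temperature(i, j, k) = 1.0d0
             end do
          end do
       end do
    end program numerical_domain_sketch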
The ghost boundary cells that are located outside the physical domain are used to apply the boundary conditions. The ghost cells that are located inside the physical domain create overlapping regions between processors, which are used to exchange data explicitly with the MPI library. This way, each processor has the necessary information to compute the solution on its local physical domain. The number of ghost cells is automatically adapted to the schemes used. Whatever the number of processors, the solution is the same up to machine precision (or nearly so, depending on the problem solved and the solver residual obtained).
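The sketch below shows, for a 2D field and the x direction only, how such an overlap exchange can be written with MPI_Sendrecv. The interface and variable names are illustrative and do not correspond to the actual Notus exchange routines; at a physical boundary, the neighbour rank would be MPI_PROC_NULL.

    ! Hedged sketch of a ghost-cell exchange in the x direction (2D field).
    subroutine exchange_ghosts_x(field, nx, ny, nghost, left, right, comm)
       use mpi
       implicit none
       integer, intent(in) :: nx, ny, nghost, left, right, comm
       double precision, intent(inout) :: field(nx, ny)
       double precision :: send_buf(nghost*ny), recv_buf(nghost*ny)
       integer :: ierr, status(MPI_STATUS_SIZE)

       ! send the leftmost physical columns to the left neighbour and receive the
       ! right ghost columns from the right neighbour
       send_buf = reshape(field(nghost+1:2*nghost, :), [nghost*ny])
       call MPI_Sendrecv(send_buf, nghost*ny, MPI_DOUBLE_PRECISION, left, 0,  &
                         recv_buf, nghost*ny, MPI_DOUBLE_PRECISION, right, 0, &
                         comm, status, ierr)
       field(nx-nghost+1:nx, :) = reshape(recv_buf, [nghost, ny])

       ! symmetric exchange: rightmost physical columns to the right neighbour,
       ! left ghost columns received from the left neighbour
       send_buf = reshape(field(nx-2*nghost+1:nx-nghost, :), [nghost*ny])
       call MPI_Sendrecv(send_buf, nghost*ny, MPI_DOUBLE_PRECISION, right, 1, &
                         recv_buf, nghost*ny, MPI_DOUBLE_PRECISION, left, 1,  &
                         comm, status, ierr)
       field(1:nghost, :) = reshape(recv_buf, [nghost, ny])
    end subroutine exchange_ghosts_x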
Part of the code that deals with grid generation can be found in the geometry/grid directory. In particular, one can find the grid coordinate arrays and various grid indices.
Within a cell, different nodes are defined:
The sets of cell nodes, x-face nodes, etc. form a staggered arrangement of Cartesian grids named the cell grid, the x-face grid, etc.
Figure 5 illustrates (in 2D) the node types: cell nodes are represented by circles, x-face nodes are represented by rightward triangles, and y-face nodes are represented by upward triangles.
More generally, the adjectives cell and face are used to designate things related to the corresponding grid. The term node used without these adjectives designates a cell node or a face node indistinctly. For example, a cell field is a scalar field discretized on cell nodes, an x-face field is a field discretized on x-face nodes, etc. Additionally, face field designates a vector field whose components are discretized on x-face, y-face, and z-face nodes.
Staggered grids are used to solve the velocity/pressure coupling. The components u, v, and w of the velocity vector are respectively associated with the x-face, y-face, and z-face grids. Pressure and other scalar fields are defined at the centre of the cells.
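For example, assuming one extra face node in the staggered direction (the exact extents and the ghost layers used in Notus are not reproduced here), cell and face fields could be declared as follows:

    program staggered_fields_sketch
       implicit none
       integer, parameter :: nx = 16, ny = 16, nz = 16
       double precision, allocatable :: pressure(:,:,:)               ! cell field
       double precision, allocatable :: u(:,:,:), v(:,:,:), w(:,:,:)  ! face fields

       allocate(pressure(nx, ny, nz))    ! scalar unknowns at cell centres
       allocate(u(nx+1, ny, nz))         ! x-face field: u on x-face nodes
       allocate(v(nx, ny+1, nz))         ! y-face field: v on y-face nodes
       allocate(w(nx, ny, nz+1))         ! z-face field: w on z-face nodes
    end program staggered_fields_sketch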
The section Modeling provides the routines that discretize and solve the linear systems associated with the following PDEs:
In Notus, the terms of a PDE are discretized independently using building blocks provided by the section Discretization of PDE. The discretization is implicit in time, except for the advection equation, which is explicit, and possibly for the advection term of the other equations. The implicit discretization of a PDE leads to a linear system Ax = b to solve, where A is the matrix, b the right-hand side, and x the solution (velocity, temperature, etc.). The numerical solution of the linear system is an approximation of the solution of the PDE.
If the advection term of an equation is discretized explicitly, the right-hand side of the linear system is modified.
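As an illustration only (not the exact schemes used in Notus), an implicit Euler discretization of the energy equation, written in a simplified form with a constant thermal diffusivity \alpha and an explicitly discretized advection term, reads

    \frac{T^{n+1} - T^n}{\Delta t} - \nabla \cdot \left( \alpha \, \nabla T^{n+1} \right) = - \left( \mathbf{u} \cdot \nabla T \right)^n

so that the matrix A collects the time and diffusion terms acting on the unknown T^{n+1}, while the right-hand side b gathers T^n / \Delta t and the explicit advection contribution.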
Once the linear system is created, it is solved using an iterative or a direct solver (see Linear system solvers). The iterative solvers and preconditioners used are those of the Hypre library. They are designed for 2D and 3D problems, some of them being massively parallel and efficient on thousands of processors. The direct solver linked to the Notus code is MUMPS. It has the advantage of being more robust and of computing the solution up to machine precision, but it is limited to 2D problems (or very small 3D ones).
Some specific points apply to the PDE-related linear systems:
the 3D grid index (i, j, k) is transformed into a column-matrix index (l), as sketched below;
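A common choice for such a mapping is sketched below; the actual ordering convention used in Notus (and its treatment of ghost cells) is not reproduced here, and the helper name is hypothetical.

    ! hypothetical helper illustrating a column-major (i fastest) 3D-to-1D mapping
    integer function flat_index(i, j, k, nx, ny)
       implicit none
       integer, intent(in) :: i, j, k, nx, ny
       flat_index = i + nx*(j - 1) + nx*ny*(k - 1)
    end function flat_index

With this convention, the unknown located at cell (i, j, k) corresponds to row l = flat_index(i, j, k, nx, ny) of the matrix A and of the vectors x and b.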