version 0.6.0
Code overview

The following is a quick overview of the code.

The first section describes the main algorithm of the Notus program, which pilots the overall execution. The second section presents the domain and some points related to the partitioning of the domain and to the boundary conditions. Finally, we briefly present the PDE discretization principles and the associated linear systems.

Principal simulation algorithm

The principal Notus algorithm is located in the file notus.f90 (a schematic sketch is given after the list):

  1. Setup
    1. Setup MPI communications.
    2. Read the command line.
    3. Write initialization information.
  2. Reading of the test case file and initialization

    Each block of the NTS file is read, and the associated variables and fields are set:

    1. System block
    2. Domain and grid block
    3. Modeling block
    4. Numerical methods block
    5. Post processing block
  3. Finalization of the initialization stage
    1. Finalize the initialization process
    2. Read the restart file (if requested in the input file)
    3. Write initial data for post-processing visualization
  4. Time loop

    The following steps are executed at each time iteration:

    1. If requested, compute the time step as a multiple of the CFL time step.
    2. If requested, prepare the test case for the next iteration (for instance, to set time-dependent boundary conditions).
    3. Solve the selected equations among navier, energy, species_transport, phase_advection.
    4. Compute physical properties (density, viscosity, etc.) depending on temperature, volume fraction, etc.
    5. If requested, execute a special post-processing tool (user post-processing or tools specific to some test cases).
    6. If requested (turbulent flows), compute statistics.
    7. Execute some diagnostics (such as the computation of the Nusselt number, mean velocity magnitude, etc.).
    8. Error measurement.
    9. Stop tests.
    10. Time step switch.
    11. Write the state for post-processing visualization.
  5. Finalization of the execution
    1. If requested, check and compare the solution to a reference solution for validation purposes.
    2. If requested, write grid convergence data.
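
Schematically, this control flow can be sketched as follows. The routine names below are hypothetical placeholders standing for the stages listed above, not the actual Notus API:

    program notus_sketch
       ! Schematic sketch of the control flow described above;
       ! all routine names are hypothetical placeholders.
       implicit none
       logical :: stop_requested

       call setup()               ! step 1: MPI, command line, init info
       call read_test_case()      ! step 2: NTS file blocks
       call finalize_init()       ! step 3: restart file, initial output

       stop_requested = .false.
       do while (.not. stop_requested)   ! step 4: time loop
          call do_time_iteration(stop_requested)
       end do

       call finalize_execution()  ! step 5: validation, grid convergence

    contains

       subroutine setup()
       end subroutine setup

       subroutine read_test_case()
       end subroutine read_test_case

       subroutine finalize_init()
       end subroutine finalize_init

       subroutine do_time_iteration(stop_requested)
          logical, intent(inout) :: stop_requested
          ! solve the selected equations, update physical properties,
          ! run diagnostics and stop tests, write post-processing output
          stop_requested = .true.   ! placeholder so the sketch terminates
       end subroutine do_time_iteration

       subroutine finalize_execution()
       end subroutine finalize_execution

    end program notus_sketch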

Domain definitions

Physical domain and immersed boundaries

Notus solves fluid flows in a global physical domain that is a Cartesian part of the 2D or 3D real space delimited by global boundaries. Figure 1 illustrates this definition: the global physical domain is outlined in gray and the global boundaries are drawn with black lines.

When the physics requires non-Cartesian boundaries, Notus uses immersed boundaries: the global domain is split into two sub-domains, where the inner sub-domain contains the physics of interest while the outer sub-domain adapts the physical problem to the Cartesian nature of Notus. Figure 1 also illustrates immersed boundaries, drawn with blue lines; the inner sub-domain is shaded in white, while the outer sub-domain is shaded in blue.

Figure 1: Global physical domain and immersed boundaries.

Notus can handle periodic domains. When the domain is periodic in a direction, the domain extent in that direction must match the period.

The module variables_domain defines a handful of variables that describe the global domain.
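
For illustration, such a module might look like the following sketch; the variable names here are hypothetical, see variables_domain for the actual definitions:

    module variables_domain_sketch
       ! Hypothetical illustration of global-domain descriptors;
       ! the real names live in the variables_domain module.
       implicit none
       integer          :: spatial_dimension    ! 2 or 3
       double precision :: domain_origin(3)     ! lower corner coordinates
       double precision :: domain_extent(3)     ! length in each direction
       logical          :: is_periodic(3)       ! periodic directions
    end module variables_domain_sketch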

Local domains and process boundaries

When the code is executed on more than one processor, the global physical domain is partitioned into local physical domains, each one handled by a process. Notus automatically finds an optimal partitioning that balances the processor loads and minimizes the data exchange between processors. Similarly to the global physical domain, each local domain has its own local boundaries. A local boundary may match a part of a physical boundary, but most local boundaries lie between processors, where a special treatment is required to solve the equations in parallel. Figure 2 illustrates a partition of the global physical domain with local boundaries drawn with black lines.

Figure 2: Domain decomposition with local boundaries.
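
Notus' own partitioner is not reproduced here, but the idea can be illustrated with MPI's Cartesian topology routines, which compute a balanced process grid for a given number of processes:

    program partition_demo
       use mpi
       implicit none
       integer :: ierr, nprocs, rank, comm_cart
       integer :: dims(2), coords(2)
       logical :: periods(2)

       call MPI_Init(ierr)
       call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)

       ! Let MPI choose a balanced 2D process grid (0 means "free").
       dims = 0
       call MPI_Dims_create(nprocs, 2, dims, ierr)

       ! Build a Cartesian communicator; periods would be .true. in
       ! periodic directions.
       periods = .false.
       call MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, .true., &
                            comm_cart, ierr)

       call MPI_Comm_rank(comm_cart, rank, ierr)
       call MPI_Cart_coords(comm_cart, rank, 2, coords, ierr)
       print '(a,i4,a,2i4)', 'rank', rank, ' owns local domain', coords

       call MPI_Finalize(ierr)
    end program partition_demo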

The parts of the code that deal with parallelism can be found in several directories.

Mesh definitions

Numerical domain and process ghost cells

The global physical domain is represented by a Cartesian grid composed of cells. The local physical domain is exactly a subset of these cells, as shown in Figure 3.

Figure 3: Global mesh.

For various numerical reasons, the local physical domain is actually extended across each local boundary by nghost rows of cells; the extended domain is called the numerical domain. It has nx, ny, and nz cells in the x, y, and z directions, respectively.

Figure 4: Local mesh and numerical domain.

The Fortran arrays are based on the numerical domain. Some index variables help to locate the start or the end of the numerical domain (for instance is, ie and js, je in Figure 4).

The ghost cells located outside the physical domain are used to apply the boundary conditions. The ghost cells located inside the physical domain create overlapping regions between processors, which are used to exchange data explicitly with the MPI library. This way, each processor has the information necessary to compute the solution on its local physical domain.

The number of ghost cells is automatically adapted to the schemes used. Whatever the number of processors, the solution is the same up to computer precision (or nearly so, depending on the problem solved and the solver residual obtained).
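
As an illustration, a cell field can be allocated directly on the numerical domain. The index convention below (physical cells numbered from 1) is an assumption made for the sketch, the actual indices being provided by the grid module:

    program ghost_demo
       implicit none
       ! Assumed for the sketch: 16x8 local physical cells, numbered
       ! from 1, extended by nghost ghost layers on each side.
       integer, parameter :: nghost = 2
       integer, parameter :: is = 1 - nghost, ie = 16 + nghost
       integer, parameter :: js = 1 - nghost, je = 8 + nghost
       double precision, allocatable :: temperature(:,:)

       ! The array spans the whole numerical domain (cf. Figure 4).
       allocate(temperature(is:ie, js:je))
       temperature = 0.0d0

       ! Physical cells are temperature(1:16, 1:8); the remaining entries
       ! are ghost cells used for boundary conditions or MPI exchanges.
       print *, 'numerical domain size:', shape(temperature)
    end program ghost_demo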

The part of the code that deals with grid generation can be found in the geometry/grid directory. In particular, one can find the grid coordinate arrays and various grid indices there.

Node types

Within a cell, different nodes are defined:

  • the cell node, at the cell center,
  • the x-face nodes, at the centers of the left and right faces,
  • the y-face nodes, at the centers of the bottom and top faces,
  • the z-face nodes, at the centers of the rear and front faces,
  • the vertex nodes, at the cell corners.

The sets of cell nodes, x-face nodes, etc. form a staggered arrangement of Cartesian grids, named the cell grid, the x-face grid, etc.

Figure 5 illustrates (in 2D) the node types: cell nodes are represented by circles, x-face nodes are represented by rightward triangles, and y-face nodes are represented by upward triangles.

Figure 5: Node types.

More generally, the adjectives cell and face are used to designate things related to the corresponding grid. The term node used without these adjectives designates a cell node or a face node indistinctly. For example, a cell field is a scalar field discretized on cell nodes, an x-face field is a field discretized on x-face nodes, etc. Additionally, the term face field designates a vector field whose components are discretized on x-face, y-face, and z-face nodes.

Staggered grids are used to solve the velocity/pressure coupling. The components u, v, and w of the velocity vector are associated with the x-face, y-face, and z-face grids, respectively. Pressure and other scalar fields are defined at the center of the cells.
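
The staggered arrangement translates directly into array extents. The sketch below assumes a simple 2D indexing (ghost cells omitted) in which each face grid carries one extra node in its direction:

    program staggered_demo
       implicit none
       integer, parameter :: nx = 8, ny = 4   ! cells (2D, no ghosts)
       double precision, allocatable :: p(:,:), u(:,:), v(:,:)

       allocate(p(1:nx, 1:ny))   ! pressure on cell nodes
       allocate(u(0:nx, 1:ny))   ! u on x-face nodes: nx+1 nodes in x
       allocate(v(1:nx, 0:ny))   ! v on y-face nodes: ny+1 nodes in y
       p = 0.0d0; u = 0.0d0; v = 0.0d0

       print *, 'cell, x-face, y-face node counts:', &
                size(p), size(u), size(v)
    end program staggered_demo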

Discretization and linear systems

The section Modeling provides the routines that discretize and solve the linear systems associated with the model PDEs (Navier-Stokes, energy, species transport, phase advection).

In Notus, the terms of a PDE are discretized independently using building blocks provided by the section Discretization of PDE. The discretization is implicit in time, except for the advection equation, which is explicit, and possibly for the advection term of the other equations. The implicit discretization of a PDE leads to a linear system Ax=b to solve, where A is the matrix, b the right-hand side, and x the solution (velocity, temperature, etc.). The numerical solution of the linear system is an approximation of the solution of the PDE.
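
As a generic illustration (not the exact Notus discretization), an implicit Euler discretization of the heat equation $\partial_t T = \alpha \nabla^2 T$ reads

    \frac{T^{n+1} - T^n}{\Delta t} = \alpha \, L_h T^{n+1}
    \quad \Longrightarrow \quad
    \left( I - \alpha \, \Delta t \, L_h \right) T^{n+1} = T^n,

where $L_h$ denotes the discrete Laplacian: the matrix is $A = I - \alpha \, \Delta t \, L_h$, the unknown is $x = T^{n+1}$, and the right-hand side is $b = T^n$.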

If the advection term of an equation is discretized explicitly, the right-hand-side of the linear system is modified.

Once the linear system is created, it is solved using an iterative or a direct solver (see Linear system solvers). The iterative solvers and preconditioners are those of the Hypre library. They are designed for 2D and 3D problems, and some of them are massively parallel and efficient on thousands of processors. The direct solver linked to Notus is MUMPS. It has the advantage of being more robust and of computing the solution up to computer precision, but it is limited to 2D problems (or very small 3D ones).

Some specific points concern the PDE-related linear systems:

  1. the vectorization of the fields, which maps each node index (i,j,k) to a column-vector index (l);
  2. the structure of the matrices, which are also vectorized.
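
For illustration, a common lexicographic mapping (assumed here; the actual Notus convention may differ) turns the node index (i,j,k) of an nx by ny by nz grid into a single row index l:

    program vectorization_demo
       implicit none
       integer, parameter :: nx = 4, ny = 3, nz = 2
       integer :: i, j, k, l

       ! Lexicographic mapping, i varying fastest (Fortran order):
       ! l runs from 1 to nx*ny*nz.
       i = 2; j = 3; k = 1
       l = i + (j - 1)*nx + (k - 1)*nx*ny
       print '(a,3i3,a,i4)', 'node (i,j,k) =', i, j, k, ' -> l =', l
    end program vectorization_demo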