version 0.6.0
Partitioning

Namespaces

module  mod_mpi_grid_info
 Print grid and partitioning information, along with some partitioning statistics.
 
module  mod_mpi_localization
 Localization of each processor with respect to the physical boundaries.
 
module  mod_mpi_repartitioning
 Manage the partitioning of the domain.
 

Functions

subroutine mod_modify_neighbor_repartitioning::modify_neighbor_repartitioning ()
 Modify the points to exchange between processors in case of refinement. More...
 
subroutine mod_local_domain_size::local_domain_size ()
 Set the local domain size and associated global indices. More...
 
subroutine mod_partitioning::partitioning ()
 Perform the partitioning of the domain. More...
 
subroutine mod_local_refined_domain_size::local_refined_domain_size ()
 Set the local domain size and associated global indices for the refined domain. More...
 

Detailed Description

In this directory, the partitioning of the domain is done. It minimizes communication between processors and ensures a balanced computational load. For example, on a 240×240 grid with 16 processes, a 4×4 layout of 60×60 blocks exchanges half as many interface points as a 16×1 layout of 15×240 strips, while both keep exactly 3600 cells per process. A set of variables defining each physical and numerical subdomain is also computed.

Function Documentation

◆ local_domain_size()

subroutine mod_local_domain_size::local_domain_size

This routine sets the size of each local domain according to the number of processors in each spatial direction computed by partitioning.f90.

It then sets the start and end global indices of each physical and numerical subdomain.
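
As an illustration, one common way to realize such a distribution is the block decomposition sketched below. This is a minimal sketch, not the actual Notus routine: n_global, np_dir and rank_dir stand for the global cell count, the process count and the process coordinate in one direction, as produced by partitioning.f90; all names are illustrative. The remainder cells are spread over the first processes so the load never differs by more than one cell.

    ! Minimal sketch of a 1D block distribution (illustrative names,
    ! not the actual Notus routine).
    subroutine block_distribution(n_global, np_dir, rank_dir, n_local, i_start, i_end)
       integer, intent(in)  :: n_global  ! global number of cells in this direction
       integer, intent(in)  :: np_dir    ! number of processes in this direction
       integer, intent(in)  :: rank_dir  ! coordinate of this process (0-based)
       integer, intent(out) :: n_local   ! local number of cells
       integer, intent(out) :: i_start   ! first owned global index (1-based)
       integer, intent(out) :: i_end     ! last owned global index

       integer :: base, remainder

       base      = n_global / np_dir
       remainder = mod(n_global, np_dir)

       ! The first 'remainder' processes get one extra cell, so the load
       ! never differs by more than one cell between processes.
       if (rank_dir < remainder) then
          n_local = base + 1
          i_start = rank_dir*(base + 1) + 1
       else
          n_local = base
          i_start = remainder*(base + 1) + (rank_dir - remainder)*base + 1
       end if
       i_end = i_start + n_local - 1
    end subroutine block_distribution

For instance, 10 cells over 3 processes yields local sizes 4, 3, 3 with global index ranges 1-4, 5-7 and 8-10.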

◆ local_refined_domain_size()

subroutine mod_local_refined_domain_size::local_refined_domain_size

This routine sets the size of each local refined domain according to the number of processors in each spatial direction computed by partitioning.f90.

It then sets the start and end global indices of each physical and numerical subdomain.
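
For the refined counterpart, a minimal sketch under the assumption of a uniform refinement factor per direction (the actual routine may proceed differently): once the coarse distribution above is known, the refined indices follow directly.

    ! Hypothetical sketch, assuming each coarse cell is split into
    ! 'ratio' fine cells in this direction; all names are illustrative.
    n_local_fine = ratio*n_local              ! refined local size
    i_start_fine = ratio*(i_start - 1) + 1    ! first fine index of the first owned coarse cell
    i_end_fine   = i_start_fine + n_local_fine - 1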

◆ modify_neighbor_repartitioning()

subroutine mod_modify_neighbor_repartitioning::modify_neighbor_repartitioning

This routine updates the list of points at the boundaries between processors for MPI exchanges.
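
A minimal, hypothetical sketch of rebuilding an exchange list with one neighbor after refinement: each side sends the global indices of the points it now exposes on the shared face. comm_notus and the neighbor rank would come from the Cartesian topology set up by partitioning(); all other names are illustrative.

    ! Hypothetical sketch of swapping boundary-point lists with a
    ! neighbor process (illustrative names, mpi_f08 bindings).
    subroutine exchange_boundary_list(comm_notus, neighbor, my_points, their_points)
       use mpi_f08
       type(MPI_Comm), intent(in) :: comm_notus
       integer, intent(in)  :: neighbor       ! rank of the neighbor process
       integer, intent(in)  :: my_points(:)   ! global indices exposed to it
       integer, allocatable, intent(out) :: their_points(:)

       integer :: n_mine, n_theirs

       n_mine = size(my_points)

       ! First agree on the counts, then swap the index lists themselves.
       call MPI_Sendrecv(n_mine, 1, MPI_INTEGER, neighbor, 0, &
                         n_theirs, 1, MPI_INTEGER, neighbor, 0, &
                         comm_notus, MPI_STATUS_IGNORE)

       allocate(their_points(n_theirs))
       call MPI_Sendrecv(my_points, n_mine, MPI_INTEGER, neighbor, 1, &
                         their_points, n_theirs, MPI_INTEGER, neighbor, 1, &
                         comm_notus, MPI_STATUS_IGNORE)
    end subroutine exchange_boundary_list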

◆ partitioning()

subroutine mod_partitioning::partitioning

This routine performs the initial partitioning of the grid, ensuring load balancing and minimizing MPI data exchange between processes.

A Cartesian grid of processes is created (the MPI communicator is changed from mpi_comm_world to mpi_comm_notus). The coordinates of each process are set, as well as the ranks of the neighboring processes.

MPI processes lying on a periodic boundary are also detected.

Finally, the routine computes, for each process, the number of processes to exchange data with.
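
A condensed sketch of the kind of Cartesian setup described above (illustrative, not the Notus source): MPI_Dims_create balances the process grid, MPI_Cart_create derives a new communicator (here named comm_notus, standing in for mpi_comm_notus), and MPI_Cart_shift yields the neighbor ranks. A neighbor equal to MPI_PROC_NULL marks a non-periodic physical boundary; the periodicity flags below are assumed values.

    ! Illustrative sketch of a Cartesian partitioning setup (mpi_f08).
    program cartesian_partitioning
       use mpi_f08
       implicit none

       integer, parameter :: ndims = 3
       integer :: nprocs, rank, d, n_exchange
       integer :: dims(ndims), coords(ndims)
       integer :: lower(ndims), upper(ndims)
       logical :: periodic(ndims)
       type(MPI_Comm) :: comm_notus

       call MPI_Init()
       call MPI_Comm_size(MPI_COMM_WORLD, nprocs)

       ! Let MPI pick a balanced factorization of nprocs over ndims,
       ! which keeps subdomains close to cubic and interfaces small.
       dims = 0
       call MPI_Dims_create(nprocs, ndims, dims)

       periodic = [.false., .false., .true.]   ! assumed periodicity flags
       call MPI_Cart_create(MPI_COMM_WORLD, ndims, dims, periodic, &
                            .true., comm_notus)

       call MPI_Comm_rank(comm_notus, rank)
       call MPI_Cart_coords(comm_notus, rank, ndims, coords)

       ! Neighbor ranks in each direction; MPI_PROC_NULL means the
       ! subdomain touches a non-periodic physical boundary there.
       do d = 1, ndims
          call MPI_Cart_shift(comm_notus, d - 1, 1, lower(d), upper(d))
       end do

       ! Count the non-null neighbor ranks, i.e. the faces across which
       ! this process has a partner to exchange data with.
       n_exchange = count(lower /= MPI_PROC_NULL) + count(upper /= MPI_PROC_NULL)

       call MPI_Finalize()
    end program cartesian_partitioning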