5. Special case (MATR_DISTRIBUEE)#
When a distributed calculation is performed with the MUMPS solver and the option MATR_DISTRIBUEE is activated, the blocks of the MATR_ASSE specific to each processor are resized so as not to store useless zeros (REFA(11) is then tagged MATR_DISTR). As a consequence, handling these « thin » distributed assembled matrices is more complex. Some global treatments (e.g. the matrix-vector product) are not yet supported for them and stop with an ERREUR_FATALE. The scope of use of this feature will have to be enriched case by case.
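The idea behind these « thin » matrices can be sketched in plain Python, without MPI. The function names below are illustrative, not Code_Aster API: each « processor » keeps only its own row block of the assembled matrix, so a global treatment such as the full matrix-vector product only exists after recombining every local contribution.

```python
# Illustrative sketch (plain Python, no MPI; names are hypothetical,
# not Code_Aster API). Each "processor" stores only its own row block,
# trimmed like a MATR_DISTRIBUEE matrix; a global matrix-vector product
# then requires the contribution of every block.

def distribute_rows(matrix, n_procs):
    """Split the global matrix into contiguous per-processor row blocks."""
    chunk = (len(matrix) + n_procs - 1) // n_procs
    return [matrix[i:i + chunk] for i in range(0, len(matrix), chunk)]

def local_matvec(block, x):
    """A processor can only compute the slice of A@x for the rows it owns."""
    return [sum(a * b for a, b in zip(row, x)) for row in block]

def full_matvec(blocks, x):
    """The global product only exists after recombining every local slice."""
    return [v for block in blocks for v in local_matvec(block, x)]

A = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [7.0, 8.0]]
x = [1.0, 1.0]
blocks = distribute_rows(A, 2)
print(full_matvec(blocks, x))  # [3.0, 7.0, 11.0, 15.0]
```

This also shows why an unsupported global treatment must be detected and stopped: operating on a single local block as if it were the whole matrix would silently return only a slice of the result.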
Notes:
To distinguish the data associated with each subdomain, each one has its own SD_SOLVEUR, NUME_DDL, MATR_ASSE and CHAM_NOs, sized to its dimensions. In addition, each processor is responsible for a master SD_SOLVEUR, NUME_DDL, MATR_ASSE and CHAM_NOs, which are almost empty and whose only function is to point to the slave SDs of the subdomains that the processor manages. This is another form of distributed SD, which therefore involves this two-level recursion.
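The two-level pointing described in this note can be sketched as follows. The class names are hypothetical stand-ins, not actual Code_Aster data structures: an almost-empty master object per processor holds references to the per-subdomain slave objects, which carry the actual sized data.

```python
# Hypothetical sketch of the master/slave SD recursion (class names are
# illustrative, not Code_Aster data structures).
class SlaveSD:
    """Per-subdomain data: structures sized to the subdomain's dimensions."""
    def __init__(self, subdomain_id, ndof):
        self.subdomain_id = subdomain_id
        self.ndof = ndof  # sized to the subdomain, not to the global problem

class MasterSD:
    """Almost-empty per-processor object whose only function is to point
    to the slave SDs of the subdomains this processor manages."""
    def __init__(self, rank):
        self.rank = rank
        self.slaves = []  # references to SlaveSD instances

    def manage(self, slave):
        self.slaves.append(slave)

master = MasterSD(rank=0)
master.manage(SlaveSD(subdomain_id=0, ndof=120))
master.manage(SlaveSD(subdomain_id=1, ndof=95))
print([s.subdomain_id for s in master.slaves])  # [0, 1]
```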
5.1. Rule to be respected when programming a data flow/parallel processing#
At the end of an Aster operator, the global bases of all processors must be identical, because it is not known whether the operator that follows in the command file anticipates an incomplete flow of input data. It is therefore necessary to organize the completion of the ad hoc fields, for example in the archiving routines (preferably relying on MPI_ALLREDUCE, which is simpler and more effective to implement). To offer more effective parallelism, this lock will one day have to be broken. But until then, the rule stands!
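The completion rule can be emulated in plain Python, with a list standing in for the processor ranks (a hedged sketch of MPI_ALLREDUCE with a sum reduction, not real MPI code). Each rank fills only the entries of a field it owns; after the reduction, every rank holds the same completed field, which is the invariant required at the end of the operator.

```python
# Hedged sketch: emulate MPI_ALLREDUCE(MPI_SUM) over a list that stands
# in for the processor ranks. Each rank contributes only the entries of
# the field it owns; afterwards every rank holds the same completed field.
def allreduce_sum(per_rank_fields):
    """Sum the contributions entry by entry and hand the same completed
    result back to every rank, as MPI_ALLREDUCE with MPI_SUM would."""
    completed = [sum(vals) for vals in zip(*per_rank_fields)]
    return [list(completed) for _ in per_rank_fields]

# Rank 0 and rank 1 each filled only the entries they own.
rank0 = [1.0, 2.0, 0.0, 0.0]
rank1 = [0.0, 0.0, 3.0, 4.0]
bases = allreduce_sum([rank0, rank1])
print(bases[0])  # [1.0, 2.0, 3.0, 4.0] -- identical on every rank
```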