### Abstract

- Abstract
- I. INTRODUCTION
- II. THE BVP AND THE NONOVERLAPPING NODES
- III. THE “DERIVED VECTOR-SPACE (DVS)”
- IV. THE NONOVERLAPPING DISCRETIZATION
- V. MATRIX NOTATIONS
- VI. GENERAL SCHUR-COMPLEMENT DECOMPOSITIONS
- VII. SCHUR COMPLEMENT FORMULATION WITHOUT CONSTRAINTS
- VIII. SCHUR COMPLEMENT FORMULATION WITH CONSTRAINTS
- IX. THE PRECONDITIONED DVS-ALGORITHMS WITH CONSTRAINTS
- X. THE GEOMETRIC SUMMARY OF DVS-ALGORITHMS
- XI. HOW TO ACHIEVE THE DDM-PARADIGM
- XII. NUMERICAL RESULTS
- XIII. CONCLUSIONS
- APPENDIX
- REFERENCES

Ideally, domain decomposition methods (DDMs) seek what we call the DDM-paradigm: “constructing the ‘global’ solution by solving ‘local’ problems, exclusively.” To achieve it, it is essential to disconnect the subdomain problems, which explains in part the success of nonoverlapping DDMs. However, in methods of this kind, different subdomains are linked by interface nodes that are shared by several subdomains. Here, discretization procedures of a new kind for partial differential equations, in which each node belongs to one and only one coarse-mesh subdomain, are introduced and analyzed. A discretization method of this type was used very successfully to develop the derived-vector-space framework, with which it is possible to develop algorithms that satisfy the DDM-paradigm. Other enhanced numerical and computational properties of these algorithms are also discussed. © 2014 The Authors. Numerical Methods for Partial Differential Equations Published by Wiley Periodicals, Inc. 30: 1427–1454, 2014

### I. INTRODUCTION

Mathematical models of many systems of interest, including very important continuous systems of engineering and science, are constituted by a great variety of boundary-value problems (BVPs) for partial differential equations [1], or systems of such equations, whose solution methods are based on the computational processing of large-scale algebraic systems. Furthermore, the extraordinary expansion of computational hardware and software has made problems of ever-increasing diversity and complexity, posed by engineering and scientific applications, amenable to effective treatment [2].

Parallel computing stands out among the new computational tools, especially at present, when further increases in hardware speed appear to have reached insurmountable barriers. As is well known, the main difficulties of parallel computing are associated with coordinating the many processors that carry out the different tasks and with the information transmission between them. Ideally, these difficulties disappear when a “task is carried out with the processors working independently of each other.” We refer to this latter condition as the “paradigm of parallel-computing software.”

The emergence of parallel computing prompted a continued and systematic effort on the part of the computational-modeling community to harness it for solving the mathematical models of scientific and engineering systems [3]. Very early in this effort, it was recognized that domain decomposition methods (DDMs) were the most effective technique for applying parallel computing to the solution of partial differential equations, because such an approach drastically simplifies the coordination of the many processors that carry out the different tasks and also greatly reduces the information-transmission requirements between them. When a DDM is applied, a discretization of the mathematical model is first carried out on a fine-mesh and, afterwards, a coarse-mesh is introduced, which properly constitutes the domain-decomposition. The “DDM-paradigm,” a paradigm for domain decomposition methods concomitant with the paradigm of parallel-computing software, consists in “obtaining the ‘global’ solution by solving ‘local’ problems exclusively” (a “local” problem is one defined separately in a subdomain of the coarse-mesh). Stated simply, the basic idea is that, when the DDM-paradigm is satisfied, full parallelization can be achieved by assigning each subdomain to a different processor.

When intensive DDM research began, much attention was given to overlapping DDMs, but attention soon shifted to nonoverlapping DDMs. When the DDM-paradigm is taken into account, this evolution seems natural because it is easier to uncouple the “local” problems when the subdomains do not overlap. However, as is further discussed in the next section, in methods of this kind different subdomains are linked by interface nodes that are shared by several subdomains; therefore, even nonoverlapping DDMs are actually overlapping when seen from the perspective of the nodes used in the discretization. One would thus expect that a more thorough uncoupling of the “local” problems could be achieved if it were possible to carry out the discretization of the BVP to be solved using a “nonoverlapping system of nodes”; that is, a set of nodes with the property that each one of them belongs to one and only one subdomain of the coarse-mesh. Accordingly, in what follows a discretization procedure is said to be a “nonoverlapping discretization method” when the system of nodes applied in it is nonoverlapping. In this article, one such method, the derived-vector-space (DVS) discretization method, is presented and discussed. To our knowledge, this is the first nonoverlapping discretization method reported in the literature. Furthermore, this discretization method has a very general character and is equally applicable to symmetric, nonsymmetric, and indefinite (neither positive nor negative definite) matrices.

Actually, the DVS-algorithms introduced by Herrera et al. in [4, 5] apply a similar kind of discretization, but its use went unnoticed in those papers, in spite of the fact that the novelty of the DVS-approach is to a large extent due to its use. Therefore, this article is devoted to presenting and discussing the new discretization methodology on its own merits. In Section 'THE BVP AND THE NONOVERLAPPING NODES', the generic boundary-value problem (BVP) considered is introduced and discretized by means of any “standard” method, with “overlapping” nodes. A great diversity of BVPs can be incorporated in this scheme, as very little is assumed about the nature of the problem; the BVP may be associated with a single differential equation or a system of such equations, and the corresponding differential operator may be formally symmetric or nonsymmetric, and indefinite. Also, the “standard” discretization method may be any whatsoever, albeit the matrix of the discretized system so obtained is required to satisfy Eq. (4.3). A procedure for constructing a nonoverlapping system of nodes is also explained in Section 'THE BVP AND THE NONOVERLAPPING NODES'. Using such nodes, an enlarged vector-space containing “discontinuous-vectors” is introduced in Section 'THE “DERIVED VECTOR-SPACE (DVS)”', and the nonoverlapping discretized problem is obtained in Section 'THE NONOVERLAPPING DISCRETIZATION'. Schur-complement formulations are given in Sections 'SCHUR COMPLEMENT FORMULATION WITHOUT CONSTRAINTS' and 'SCHUR COMPLEMENT FORMULATION WITH CONSTRAINTS'; they are based on generalized Schur-complement formulas of wide applicability that are derived in Section 'GENERAL SCHUR-COMPLEMENT DECOMPOSITIONS'. Achieving this generality was possible thanks to the very convenient matrix notations defined in Section 'MATRIX NOTATIONS'.
Section 'THE PRECONDITIONED DVS-ALGORITHMS WITH CONSTRAINTS' applies the new discretization methods to develop DDM-algorithms; in particular, the DVS-algorithms of Herrera et al. mentioned before are derived there, and Section 'THE GEOMETRIC SUMMARY OF DVS-ALGORITHMS' supplies a geometric summary of them. Altogether, there are four DVS-algorithms, two of which are new versions of the well-known balancing domain decomposition with constraints (BDDC) [6-8] and dual-primal finite-element tearing and interconnecting (FETI-DP) [9-12]. As for the other two, nothing similar had been reported in the literature prior to the publication of [4, 5]. An important advantage of nonoverlapping discretizations of BVPs is that they permit achieving the DDM-paradigm, as is shown in Section 'HOW TO ACHIEVE THE DDM-PARADIGM'. Section 'NUMERICAL RESULTS' is devoted to numerical and computational experiments, while Section 'CONCLUSIONS' presents the conclusions.

### II. THE BVP AND THE NONOVERLAPPING NODES

Consider a well-posed BVP (“the BVP”), defined by the partial differential equation (or system of such equations)

- (2.1)

and suitable boundary conditions. To treat it, we first introduce a mesh (“the fine-mesh”) and apply a standard (“overlapping”) method of discretization to obtain the following discrete version of it (“the standard discretization” of the BVP):

- (2.2)

Next, as is usually done in nonoverlapping domain decomposition methods [4], we introduce another mesh (the coarse-mesh); properly speaking, this latter mesh constitutes the domain-decomposition from which this kind of method receives its name. However, when the coarse-mesh is introduced, generally some of the nodes of the fine-mesh belong to the closure of more than one subdomain of the coarse-mesh (this situation is illustrated in Figs. 1 and 2); due to this fact, we say that the system of nodes of the fine-mesh is overlapping (in spite of the fact that the method is nonoverlapping). We therefore proceed to introduce a nonoverlapping system of nodes. To this end, we divide each node into a number of pieces equal to the number of subdomains it belongs to (Fig. 3), and then one, and only one, of these pieces is allocated to each of those subdomains. Clearly, in this manner a nonoverlapping set of nodes is obtained, in the sense that each node of the set belongs to one and only one subdomain of the coarse-mesh (Fig. 4); the nodes so obtained will be referred to as derived-nodes.

In our developments, the following notation is adopted: the labels *p*, *q*, etc. will be used to denote original-nodes. Therefore, under the assumption that the total number of nodes of the fine-mesh is *N*, the labels *p*, *q*, etc. may be any number of the set {1, …, *N*}. On the other hand, the labels *α*, *β*, etc. will be used to denote the subdomains of the coarse-mesh; therefore, if the total number of subdomains of the coarse-mesh is *E*, the labels *α*, *β*, etc. may be any number of the set {1, …, *E*}. As for the nomenclature and notation for derived-nodes, they will be labeled by pairs (*p*, *α*), where *p* is the original-node they derive from and *α* the subdomain to which they belong. In domain decomposition methods, it is customary to classify the original-nodes into internal and interface-nodes; a node is internal if it belongs to only one subdomain closure and is an interface-node otherwise. The same classification applies to any derived-node (*p*, *α*), its class being determined by that of *p*. Furthermore, two complementary classes of interface-nodes are frequently considered: primal and dual. The subsets corresponding to internal, interface, primal, and dual derived-nodes are denoted by I, Γ, *π*, and Δ, respectively. Furthermore, the set Π is defined by

- (2.3)

Therefore, the family of sets *X*^{α}, *α* = 1, …, *E*, constitutes a nonoverlapping decomposition of *X*. Given a node *p* of the fine-mesh, the derived-nodes that originate from it constitute the set:

- (2.5)

The multiplicity *m*(*p*) of a node *p* is the “cardinality” (i.e., the number of elements) of the set *Z*(*p*). Some relevant properties of the derived-node sets introduced so far are:

- (2.6)

and

- (2.7)
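The construction above can be sketched in a few lines of code. The following Python fragment is a minimal illustration under a hypothetical toy decomposition (the subdomain closures shown are invented for illustration and are not taken from the article): each original node shared by several subdomain closures is split into derived-nodes (*p*, *α*), and the multiplicity *m*(*p*) is the cardinality of *Z*(*p*).

```python
# Toy illustration: original nodes shared by several subdomain closures
# are split into derived-nodes (p, alpha), one per subdomain.

# Hypothetical coarse-mesh: subdomain label -> original nodes in its closure.
closure = {
    1: {0, 1, 2},
    2: {2, 3, 4},
    3: {4, 5, 2},
}

# Derived-nodes: each pair (p, alpha) belongs to exactly one subdomain alpha.
derived_nodes = [(p, alpha) for alpha, nodes in closure.items() for p in nodes]

def Z(p):
    """Set of derived-nodes originating from original node p."""
    return {(q, alpha) for (q, alpha) in derived_nodes if q == p}

def m(p):
    """Multiplicity m(p): number of subdomain closures containing p."""
    return len(Z(p))

# Node 2 lies in all three closures, so it is an interface node with m = 3;
# node 0 lies only in subdomain 1, so it is internal with m = 1.
print(m(2), m(0))  # -> 3 1
```

Classifying a node as internal (*m* = 1) or interface (*m* > 1) then follows directly from the multiplicity, as in the text.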

### III. THE “DERIVED VECTOR-SPACE (DVS)”

This is an important property. In particular, it implies that any derived-vector can be written uniquely as

- (3.2)

When *n* = 1, this reduces to

- (3.5)

The derived-vector space, *W*, constitutes a finite-dimensional Hilbert-space with respect to the Euclidean inner product. We observe that the Euclidean inner product depends on the fine-mesh that is used, but it is independent of the BVP considered.

A derived-vector is said to be “continuous” when its value at (*p*, *α*) is independent of *α*, for every node *p*. The subset of continuous vectors constitutes a linear subspace of *W*. The natural injection of the original vector-space into *W* is defined by the condition that, for every original vector, one has

- (3.6)
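The natural injection can be illustrated concretely. The sketch below (a hypothetical toy decomposition, not the article's code) copies the value of an original vector at node *p* to every derived-node (*p*, *α*); by construction, the resulting derived-vector is continuous in the sense just defined.

```python
# Sketch of the natural injection: an original vector u, defined on the
# original nodes, is mapped to the derived-vector whose value at every
# derived-node (p, alpha) equals u[p]. The decomposition is illustrative.

closure = {1: {0, 1, 2}, 2: {2, 3, 4}}          # hypothetical decomposition
derived_nodes = [(p, a) for a, ns in closure.items() for p in ns]

def inject(u):
    """Natural injection: copy u[p] to every derived-node (p, alpha)."""
    return {(p, a): u[p] for (p, a) in derived_nodes}

def is_continuous(w):
    """A derived-vector is continuous when w[(p, alpha)] is independent of alpha."""
    vals = {}
    for (p, a), v in w.items():
        if p in vals and vals[p] != v:
            return False
        vals[p] = v
    return True

u = {0: 1.0, 1: 2.0, 2: 3.0, 3: 4.0, 4: 5.0}
w = inject(u)
print(is_continuous(w))  # -> True
```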

Furthermore,

- (3.13)

and

- (3.14)

More explicitly, these relations are

- (3.15)

and

- (3.16)

Next, we introduce the subspace *W* _{11}, which is defined to be the orthogonal complement of *W* _{12} with respect to the Euclidean inner product. In this manner, the space *W* is decomposed into two orthogonal complementary subspaces, *W* _{11} and *W* _{12}, which fulfill

- (3.17)

Two matrices are now introduced: the orthogonal-projection operators, with respect to the Euclidean inner product, on *W* _{12} and *W* _{11}, respectively. The first one will be referred to as the “average operator” and the second one as the “jump operator.” The average operator annihilates *W* _{11}; that is, vectors of *W* _{11} are “zero-average vectors.” Similarly, the jump operator annihilates *W* _{12}; that is, vectors of *W* _{12} are “zero-jump vectors.” We observe that, in view of Eq. (3.17), every derived-vector can be written in a unique manner as the sum of a zero-average vector plus a continuous vector; indeed:

- (3.18)

It can be seen that , from which it follows that

- (3.19)

An explicit expression for is:

- (3.20)
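The average and jump operators can be sketched numerically. In the fragment below, the averaging formula (the arithmetic mean over the derived-nodes sharing the same original node, i.e., over *Z*(*p*)) is an assumption consistent with the text's description of orthogonal projections onto the continuous and zero-average subspaces; the jump operator is its complement. The decomposition is hypothetical.

```python
# Sketch of the average and jump projections on a derived-vector space.
# Averaging over Z(p) is an assumed explicit form of the projection onto
# the continuous (zero-jump) subspace; j = I - a gives the zero-average part.

closure = {1: {0, 1, 2}, 2: {2, 3}}
derived_nodes = [(p, a) for a, ns in closure.items() for p in ns]

def average(w):
    """Projection a: replace each value by the mean over Z(p)."""
    sums, counts = {}, {}
    for (p, _), v in w.items():
        sums[p] = sums.get(p, 0.0) + v
        counts[p] = counts.get(p, 0) + 1
    return {(p, a): sums[p] / counts[p] for (p, a) in w}

def jump(w):
    """Projection j = I - a: the zero-average component of w."""
    aw = average(w)
    return {k: w[k] - aw[k] for k in w}

w = {(0, 1): 1.0, (1, 1): 2.0, (2, 1): 3.0, (2, 2): 5.0, (3, 2): 7.0}
aw, jw = average(w), jump(w)

# a(w) + j(w) reconstructs w, and the two components are Euclidean-orthogonal.
dot = sum(aw[k] * jw[k] for k in w)
print(round(dot, 12))  # -> 0.0
```

This makes concrete the decomposition of every derived-vector into a continuous part plus a zero-average part.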

### IV. THE NONOVERLAPPING DISCRETIZATION

In this section, we address the main subject of the present article; to this end, we present a nonoverlapping discretization method, namely, the DVS-discretization method.

To start with, we define

- (4.1)

together with

- (4.2)

The function *m*(*p*, *q*) is the “multiplicity” of the pair (*p*, *q*), which can be zero when the nodes *p* and *q* do not occur simultaneously in any subdomain-closure. The DVS-discretization method, to which the present paper is devoted, has a wide range of applicability; it can be applied whenever the following basic assumption (or axiom) is fulfilled:

- (4.3)

The fact that *m*(*p*, *q*) may take the value zero is inconvenient for the developments that follow; therefore, we replace it by a function *s*(*p*, *q*), which is essentially the same except that it is never zero. It is defined by

- (4.4)

For , we define the matrices

- (4.5)

It can be verified that

- (4.6)

Eq. (4.6) implies that

- (4.7)

Next, we define the matrices:

- (4.8)

and

- (4.9)

Applying Eq. (4.7) it is seen that

- (4.11)

Definition 4.1. When Eq. (2.2) is a standard discretization of the BVP of Eq. (2.1), then Eq. (4.14) is a “DVS-discretization” of the same BVP.

In connection with Definition 4.1, and taking into account Theorem 4.1, we observe that any DVS-discretization of the BVP of Eq. (2.1) is a nonoverlapping discretization of the same BVP. Furthermore, Eq. (4.14) can in turn be replaced by

- (4.18)
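The pair-multiplicity *m*(*p*, *q*) and its never-zero replacement *s*(*p*, *q*) can be sketched directly. The decomposition below is hypothetical; the point is only the relation *s* = *m* except where *m* vanishes.

```python
# Sketch of the pair-multiplicity m(p, q) and the function s(p, q):
# s equals m except that it is replaced by 1 whenever m(p, q) = 0.

closure = {1: {0, 1, 2}, 2: {2, 3, 4}, 3: {4, 5}}   # hypothetical decomposition

def m(p, q):
    """Number of subdomain closures containing both p and q."""
    return sum(1 for nodes in closure.values() if p in nodes and q in nodes)

def s(p, q):
    """As m(p, q), but never zero."""
    return m(p, q) or 1

# Nodes 0 and 5 never share a subdomain closure: m = 0 but s = 1,
# whereas node 2 lies in two closures, so m(2, 2) = 2.
print(m(0, 5), s(0, 5), m(2, 2))  # -> 0 1 2
```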

### X. THE GEOMETRIC SUMMARY OF DVS-ALGORITHMS

They satisfy:

- (10.6)

In view of the above, a very simple geometrical summary of the DVS-algorithms can be given, which is presented in Figs. 5 and 6.

For each of them, every iteration consists of a succession of two projections: the first sends a trial vector from the space where the sought information is known to lie to a different space, whereas the second returns it to the original space.

It can be seen that, according to Eqs. (9.2), (9.3), (9.5), and (9.7), for DVS-BDDC, DVS-Primal, DVS-FETI-DP, and DVS-Dual the trial vectors are taken from *W* _{12}(Δ), *W* _{22}(Δ), *W* _{11}(Δ), and *W* _{31}(Δ), respectively. The processes occurring in each iteration of every DVS-algorithm are illustrated in Figs. 7-10.

### XI. HOW TO ACHIEVE THE DDM-PARADIGM

The algorithms presented in Section 'THE PRECONDITIONED DVS-ALGORITHMS WITH CONSTRAINTS' permit developing codes that achieve the DDM-paradigm. How that can be done was explained in [4], from which we draw here. All the algorithms of Section 'THE PRECONDITIONED DVS-ALGORITHMS WITH CONSTRAINTS' are iterative and can be implemented with recourse to the conjugate gradient method (CGM), when the matrix is symmetric and definite, or to some other iterative procedure such as GMRES, when that is not the case. At each iteration step, depending on the DVS-algorithm that is applied, one has to compute the action of one of the corresponding preconditioned DVS-operators on a derived-vector. These operators, in turn, are different permutations of a small set of basic matrices. Thus, to develop codes that achieve the DDM-paradigm, one only needs to separately develop codes with that property which compute the action of each of these basic matrices on an arbitrary derived-vector, as was explained in [4].
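The structure that makes the DDM-paradigm attainable can be illustrated in a few lines: when the global operator acts on a derived-vector subdomain by subdomain, each local action is wholly independent and can be assigned to its own processor. The decomposition and matrices below are hypothetical, and a serial Python loop stands in for the parallel dispatch that an MPI code would perform.

```python
import numpy as np

# Hypothetical: two subdomains, each with its own local matrix acting only
# on its own derived-nodes. The global action is the independent sum of
# local actions -- the property that lets each subdomain run on its own
# processor with minimal communication.

rng = np.random.default_rng(0)
local_dofs = {1: [0, 1, 2], 2: [3, 4, 5]}     # derived-node indices per subdomain
local_mats = {a: rng.standard_normal((3, 3)) for a in local_dofs}

def global_action(w):
    """Action of the block-diagonal global operator, computed subdomain by
    subdomain; in a parallel code each loop iteration runs on its own core."""
    out = np.zeros_like(w)
    for a, idx in local_dofs.items():
        out[idx] = local_mats[a] @ w[idx]     # purely local work
    return out

# The same result assembled as one global matrix, for comparison:
A = np.zeros((6, 6))
for a, idx in local_dofs.items():
    A[np.ix_(idx, idx)] = local_mats[a]

w = rng.standard_normal(6)
print(np.allclose(global_action(w), A @ w))  # -> True
```

In an actual DVS code the local actions are tied together by the average/jump projections, which require only the exchange of interface values between neighboring subdomains.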

### XII. NUMERICAL RESULTS

In previous publications, it was verified through a number of numerical examples that the DVS-algorithms possess state-of-the-art numerical efficiency [5, 13, 14]. Some additional numerical work is presented in this article, and some of the parallelization properties of the algorithms are also exhibited.

All the BVPs treated have the form:

- (12.1)

where *a* > 0 and *c* ≥ 0 are real-valued constants, whereas the advection coefficient is a constant vector. Details of the discretizations used are given in previous publications [5, 13]. Both the fine and coarse-meshes are constituted by orthogonal parallelepipeds. When the advection vector vanishes, these differential equations are symmetric; otherwise, they are nonsymmetric. In [14], 2D and 3D examples were treated, and Tables 1 and 2 reproduce such results for 3D problems only. Table 3 shows the results obtained here for the case when the differential operator is Laplace's (the Poisson equation). The Helmholtz equation was also treated here, corresponding to *a* = 1, *c* = −1, and a vanishing advection vector. In spite of the fact that this is an indefinite equation, generally considered a complicated problem, the algorithms work as well as for the other equations; the results are shown in Table 4. All the DVS-algorithms are preconditioned and constrained; the constraints used are continuity at primal nodes. Such nodes were chosen according to Algorithm “D” of Toselli and Widlund (p. 173 of [15]). The CGM algorithm [16] was used for the iterative solution of the symmetric positive-definite cases; in all other cases, the DQGMRES algorithm was used. All the codes developed to treat the examples were written in C++, using the MPI library for communications under the Master-Slave scheme adopted.
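As noted, symmetric positive-definite systems are iterated with CGM, while nonsymmetric or indefinite ones call for a GMRES-type solver. A minimal, textbook conjugate-gradient loop (not the article's C++/MPI code) applied to a small SPD system illustrates the symmetric case:

```python
import numpy as np

def cg(A, b, tol=1e-10, maxit=200):
    """Textbook conjugate-gradient iteration for SPD systems."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(maxit):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# 1-D discrete Laplacian: symmetric and positive definite.
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = cg(A, b)
print(np.linalg.norm(A @ x - b) < 1e-8)  # -> True
```

For the indefinite Helmholtz case, the same Krylov framework applies but with a GMRES-type iteration in place of CG, since CG's convergence theory requires positive definiteness.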

Table 1. Symmetric and positive-definite example, 3D.

| Partition | Subdomains | Dof | Primals | DVS-BDDC | DVS-Primal | DVS-FETI-DP | DVS-Dual |
|---|---|---|---|---|---|---|---|
| (2 × 2 × 2) × (2 × 2 × 2) | 8 | 27 | 7 | 2 | 2 | 2 | 2 |
| (3 × 3 × 3) × (3 × 3 × 3) | 27 | 512 | 80 | 4 | 4 | 3 | 3 |
| (4 × 4 × 4) × (4 × 4 × 4) | 64 | 3375 | 351 | 5 | 5 | 4 | 3 |
| (5 × 5 × 5) × (5 × 5 × 5) | 125 | 13824 | 1024 | 6 | 5 | 4 | 3 |
| (6 × 6 × 6) × (6 × 6 × 6) | 216 | 42875 | 2375 | 6 | 6 | 4 | 4 |
| (7 × 7 × 7) × (7 × 7 × 7) | 343 | 110592 | 4752 | 7 | 6 | 4 | 4 |
| (8 × 8 × 8) × (8 × 8 × 8) | 512 | 250047 | 8575 | 8 | 7 | 5 | 6 |
| (9 × 9 × 9) × (9 × 9 × 9) | 729 | 512000 | 14336 | 8 | 8 | 7 | 7 |
| (10 × 10 × 10) × (10 × 10 × 10) | 1000 | 970299 | 22599 | 8 | 8 | 8 | 8 |

Table 2. Nonsymmetric example, 3D.

| Partition | Subdomains | Dof | Primals | DVS-BDDC | DVS-Primal | DVS-FETI-DP | DVS-Dual |
|---|---|---|---|---|---|---|---|
| (2 × 2 × 2) × (2 × 2 × 2) | 8 | 27 | 7 | 3 | 2 | 2 | 2 |
| (3 × 3 × 3) × (3 × 3 × 3) | 27 | 512 | 80 | 6 | 4 | 4 | 4 |
| (4 × 4 × 4) × (4 × 4 × 4) | 64 | 3375 | 351 | 7 | 6 | 5 | 5 |
| (5 × 5 × 5) × (5 × 5 × 5) | 125 | 13824 | 1024 | 8 | 7 | 5 | 5 |
| (6 × 6 × 6) × (6 × 6 × 6) | 216 | 42875 | 2375 | 10 | 7 | 6 | 6 |
| (7 × 7 × 7) × (7 × 7 × 7) | 343 | 110592 | 4752 | 11 | 8 | 6 | 6 |
| (8 × 8 × 8) × (8 × 8 × 8) | 512 | 250047 | 8575 | 11 | 9 | 7 | 7 |
| (9 × 9 × 9) × (9 × 9 × 9) | 729 | 512000 | 14336 | 12 | 10 | 8 | 8 |
| (10 × 10 × 10) × (10 × 10 × 10) | 1000 | 970299 | 22599 | 13 | 11 | 9 | 9 |

Table 3. Poisson equation example, 3D.

| Partition | Subdomains | Dof | Primals | DVS-BDDC | DVS-Primal | DVS-FETI-DP | DVS-Dual |
|---|---|---|---|---|---|---|---|
| (2 × 2 × 2) × (2 × 2 × 2) | 8 | 27 | 7 | 1 | 1 | 1 | 1 |
| (3 × 3 × 3) × (3 × 3 × 3) | 27 | 512 | 80 | 3 | 2 | 2 | 2 |
| (4 × 4 × 4) × (4 × 4 × 4) | 64 | 3375 | 351 | 1 | 1 | 2 | 2 |
| (5 × 5 × 5) × (5 × 5 × 5) | 125 | 13824 | 1024 | 5 | 4 | 5 | 4 |
| (6 × 6 × 6) × (6 × 6 × 6) | 216 | 42875 | 2375 | 6 | 6 | 5 | 6 |
| (7 × 7 × 7) × (7 × 7 × 7) | 343 | 110592 | 4752 | 6 | 6 | 6 | 6 |
| (8 × 8 × 8) × (8 × 8 × 8) | 512 | 250047 | 8575 | 7 | 7 | 8 | 8 |
| (9 × 9 × 9) × (9 × 9 × 9) | 729 | 512000 | 14336 | 8 | 9 | 9 | 9 |
| (10 × 10 × 10) × (10 × 10 × 10) | 1000 | 970299 | 22599 | 10 | 10 | 10 | 10 |

Table 4. Indefinite example, 3D (Helmholtz equation).

| Partition | Subdomains | Dof | Primals | DVS-BDDC | DVS-Primal | DVS-FETI-DP | DVS-Dual |
|---|---|---|---|---|---|---|---|
| (4 × 4 × 4) × (4 × 4 × 4) | 64 | 3375 | 351 | 5 | 4 | 4 | 3 |
| (5 × 5 × 5) × (5 × 5 × 5) | 125 | 13824 | 1024 | 6 | 6 | 5 | 5 |
| (6 × 6 × 6) × (6 × 6 × 6) | 216 | 42875 | 2375 | 7 | 7 | 6 | 5 |
| (7 × 7 × 7) × (7 × 7 × 7) | 343 | 110592 | 4752 | 7 | 7 | 6 | 5 |
| (8 × 8 × 8) × (8 × 8 × 8) | 512 | 250047 | 8575 | 8 | 8 | 6 | 5 |
| (9 × 9 × 9) × (9 × 9 × 9) | 729 | 512000 | 14336 | 8 | 8 | 6 | 6 |
| (10 × 10 × 10) × (10 × 10 × 10) | 1000 | 970299 | 22599 | 9 | 6 | 6 | 6 |

In Tables 1-4, each row corresponds to a different run of the software we developed. The first column indicates the coarse-mesh decomposition and the local fine-mesh used in each run, and the second column gives the resulting number of subdomains. The third column gives the total number of degrees of freedom and the fourth the total number of primal nodes. The fifth column gives the number of iterations required for convergence when DVS-BDDC was applied; the remaining columns give, successively, that number when DVS-Primal, DVS-FETI-DP, and DVS-Dual were used.

To exhibit the parallelization efficiency of the DVS-algorithms, Eq. (12.1) with a vanishing advection vector, *c* = 0, and *a* = 1 (the Poisson equation) was treated. The analytical solution of this example can be seen in Fig. 11.

The parallelization properties of the DVS-PRIMAL algorithm are illustrated in Table 5. This example was run on the Kan-Balam cluster facility of the National University of Mexico (UNAM) using up to 512 cores. To measure the parallelization efficiency, the metric used was based on the relative speed-up, defined as the ratio of the runtime on the baseline core count to the runtime on the larger core count; the relative efficiency is then obtained by normalizing this speed-up by the corresponding ratio of core counts. We can see from Table 5 that the DVS-PRIMAL algorithm becomes more efficient as the number of subdomains and the number of degrees of freedom (dof) increase from 23,017,500 to 63,937,500. For this latter number of degrees of freedom, the relative efficiency attains the value one, which is characteristic of 100% parallelization. This is due to the increased load on the cores, while the communication-time remains small because the DVS-algorithms achieve the DDM-paradigm; essentially, the ratio of communication-time to processing-time is negligible.
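The speed-up and efficiency metric can be made concrete. The formulas below are the standard relative definitions (a baseline run on a smaller core count compared against a run on a larger one), which we take to be what the text intends; the numbers in the usage example are illustrative only, not the Kan-Balam measurements.

```python
def relative_speedup(t_base, t):
    """Relative speed-up: ratio of the baseline runtime to the runtime
    measured on the larger core count."""
    return t_base / t

def relative_efficiency(t_base, p_base, t, p):
    """Relative efficiency: speed-up normalized by the core-count ratio;
    a value of 1.0 corresponds to 100% parallelization."""
    return relative_speedup(t_base, t) * p_base / p

# Illustrative numbers (not the article's measurements): a run taking
# 800 s on 64 cores versus 100 s on 512 cores scales perfectly.
s = relative_speedup(800.0, 100.0)
e = relative_efficiency(800.0, 64, 100.0, 512)
print(s, e)  # -> 8.0 1.0
```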

Table 5. Speed-up and efficiency attained by the DVS-PRIMAL algorithm. The primal nodes were located at the vertices of subdomains.

| Decomposition | dof | Time *T*_{p'}/*T*_{p} | Speed Up | Efficiency |
|---|---|---|---|---|
| 31 × 33 and 150 × 150 | 23,017,500 | 7315/1541 | | |
| 31 × 33 and 200 × 200 | 40,920,000 | 16,037/2688 | | |
| 31 × 33 and 250 × 250 | 63,937,500 | 26,587/6388 | | |