This article is concerned with the efficient numerical solution of a large system of partial differential equations (PDEs), ordinary differential equations (ODEs), and algebraic equations (AEs), as they occur in the modeling of transport, chemical reactions, and biodegradation below the Earth's surface. We propose a method to reformulate the given system of equations in such a way that the resulting system is easier to solve. The approach is rather general and covers a wide class of such biogeochemical problems. In fact, this article is a generalization of the work by Kräutle and Knabner [2005], where equilibrium reactions between mobile and immobile species were excluded.
 The chemical species under consideration are divided into two classes: mobile (dissolved) species and immobile ones (sorbed species, minerals, immobile bacteria). The system of equations for the concentrations consists of PDEs for the mobile species and ODEs for the immobile ones, all of them coupled through the reaction terms. The characteristic timescales of the different reactions may cover a large range, making it desirable to model some reactions as equilibrium-controlled and others as kinetically controlled, which leads to a system of differential algebraic equations (DAE).
 Basically there are two different concepts to treat these kinds of problems numerically: global implicit approaches (GIA) and sequential iterative or sequential noniterative approaches (SIA/SNIA). The GIA requires the most resources per time step, but is usually considered to be the most stable solution method. SIA and SNIA, depending on the specific problem to solve, may suffer from heavy restrictions on the time step size to attain convergence, or from the introduction of large splitting errors, respectively [see, e.g., Steefel and MacQuarrie, 1996; Valocchi and Malmstead, 1992; Saaltink et al., 2000; Berkvens et al., 2002].
 In this article, our goal is to avoid such problems by focusing on the GIA. In order to keep the computational effort limited with respect to memory requirements and CPU time, we reformulate the given system of PDEs/ODEs or DAE in such a way that some of the equations decouple, leading to a smaller nonlinear system to which we apply the GIA. The reformulation of the given system is performed by (1) taking linear combinations of the given equations and (2) introducing new variables which are linear combinations of the unknown concentrations. This leads to a decoupling of some scalar linear transport equations and a smaller remaining nonlinear system of PDEs, ODEs, and AEs. In another step, the local equations, i.e., the ODEs and the AEs, are solved for certain variables, and these variables are eliminated from the remaining PDEs, which reduces the size of the coupled system again and resembles the so-called direct substitution approach (DSA).
 There are many papers dealing with the efficient solution of transport-reaction problems in porous media [e.g., Lichtner, 1985; Yeh and Tripathi, 1989; Friedly, 1991; Friedly and Rubin, 1992; Saaltink et al., 1998; Chilakapati et al., 1998, 2000; Robinson et al., 2000; Holstad, 2000; Fang et al., 2003] (also, recently, the very advanced paradigm system by Molins et al. [2004]). A main difference between the method proposed in this article and other reformulations [e.g., Saaltink et al., 1998, 2000; Molins et al., 2004] is that when we introduce linear combinations of concentrations or equations, we lay special emphasis on the distinction between mobile species and immobile species, never mixing mobile and immobile species during the transformation. A benefit of this approach compared to other methods is that it enables a decoupling of some equations without posing additional assumptions on the stoichiometry of the problem and without “enforcing” a decoupling by splitting techniques. Another advantage is that the DSA-like treatment of the local equations in our resulting system preserves a very sparse population of the Jacobian [see Kräutle and Knabner, 2005, section 3.4], which can be exploited if the linear solver is an iterative method.
 The article is structured as follows: In section 2, the equations for the coupled reactive transport are given. In section 3, which is the main part of this article, the general reduction algorithm, including the case of heterogeneous equilibrium reactions, is derived. The algorithm presented in section 3 requires a certain condition on the stoichiometric matrix. In section 4 it is demonstrated that every stoichiometric system can be written in such a form that the required condition is met. Section 5 demonstrates the application of the method to an example problem. A comparison with other methods is included in order to motivate our approach.
 A mathematical proof that the local equations can always be solved for certain variables, if the mass action law is assumed for the equilibrium reactions, is given in Appendix A.
2. Problem Formulation
 Let us consider I mobile species X1, …, XI and Ī immobile species X̄I+1, …, X̄I+Ī. Let us denote their time- and space-dependent concentrations by c = (c1, …, cI)t, c̄ = (c̄I+1, …, c̄I+Ī)t; the mobile ci are given in moles per fluid volume, the immobile c̄i in moles per total volume. The mobile species are convected by a given Darcy flow field and are subject to dispersion. Let us assume that the underlying transport operator L is linear and that it is the same for all mobile species, i.e., L(c1, …, cI)t = (L1c1, …, LIcI)t with L1 = … = LI. The last assumption is justified if the species-dependent diffusion is negligible compared to species-independent dispersion. A typical transport operator would be Lici = −∇ · (D∇ci − qci), i = 1, …, I, with dispersion tensor D and flow field q.
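To fix ideas, the transport operator L can be discretized in one space dimension as follows. This is a hypothetical minimal sketch, not the discretization used in this article: central differences for the dispersion term, first-order upwinding for the advection term (assuming scalar D and q ≥ 0), with boundary values entering through the right-hand side.

```python
import numpy as np

def transport_matrix(n, dx, D, q):
    """Assemble a 1D finite-difference matrix for
    L c = -d/dx (D dc/dx - q c)
    on n interior nodes with grid spacing dx: central differences for
    dispersion, first-order upwinding for advection (q >= 0). Boundary
    values are assumed to enter through the right-hand side."""
    A = np.zeros((n, n))
    for i in range(n):
        # dispersion: -D * (c[i-1] - 2 c[i] + c[i+1]) / dx**2
        A[i, i] += 2.0 * D / dx**2
        if i > 0:
            A[i, i - 1] -= D / dx**2
        if i < n - 1:
            A[i, i + 1] -= D / dx**2
        # advection (upwind for q >= 0): q * (c[i] - c[i-1]) / dx
        A[i, i] += q / dx
        if i > 0:
            A[i, i - 1] -= q / dx
    return A
```

Since L is the same for all mobile species, a single such matrix serves all I transport equations after discretization.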
 The J chemical reactions can be written as
where the stoichiometric coefficients sij form an (I + Ī) × J matrix S with entries in ℤ or in ℚ, and Rj = Rj(c, c̄) is the rate expression for reaction j.
 The I + Ī mass balance equations are
θ > 0 denotes the volume fraction of the mobile fluid phase. We assume that the sij are constant in space and time. The rates Rj and θ may depend on space and time; however, for the sake of simplicity we assume that θ is constant.
 The stoichiometric matrix S consists of an I × J block S1 of stoichiometric coefficients for the mobile species and an Ī × J block S2 of coefficients for the immobile species:
Using the vector notation R = (R1, …, RJ)t we get the compact notation
 If a rate Rj in (1)/(2) is kinetic, then we prescribe a rate function Rj(c, c̄). The forward-backward rate formulation may serve as an example:
with rate constants kjf, kjb ≥ 0. For the sake of simplicity we have omitted the bar atop the immobile concentrations cI+1, …, cI+Ī in (3). Besides (3), other rate laws can be considered.
 A reaction Rj that is assumed to be at local equilibrium is described by a local algebraic equation
that holds at every point of the computational domain. The local equilibrium described by the law of mass action
here formulated with ideal activities for all species, is a typical assumption, at least for low concentrations. However, we will not need this specific form (5) unless we are interested in proving the mathematical theorem of section 3.4. See also section 3.6 for a discussion of the equilibrium conditions. Note that if an equilibrium condition (4) is assumed, then the corresponding rate term Rj(c, c̄) in (1) is not an explicit function of the local concentration vector [see, e.g., Lichtner, 1996, equation (121); De Simoni et al., 2005]. The common way to handle the DAE is to eliminate the rates Rj corresponding to equilibrium reactions from the system (1)/(2). This can be achieved by taking linear combinations of the differential equations. This process leads to the introduction of what is often called components [see, e.g., Friedly and Rubin, 1992; Saaltink et al., 1998; Molins et al., 2004].
 Let us assume reactions R1, …, RJeq to be at equilibrium and reactions RJeq+1, …, RJ to be kinetic, 0 ≤ Jeq ≤ J. We can split the vector R into a vector of equilibrium reaction rates Req of size Jeq, and a vector of kinetic reaction rates Rneq of size Jneq = J − Jeq:
Similar to vector R, we split the matrices S, S1, S2 into a block of Jeq columns belonging to the equilibrium reactions, and a block of Jneq columns for the kinetic reactions:
With this notation, (2) reads
and the AEs (5) for j = 1, …, Jeq can be expressed by the equivalent formulation
where the vector of equilibrium constants K has the entries ln(kjf/kjb), and ln (the natural logarithm) applied to a vector with entries ci > 0 is short for the vector with the entries ln ci.
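In this logarithmic form, evaluating the equilibrium conditions reduces to a single matrix-vector product. A minimal sketch (assuming the conditions take the form Seqt ln c = K, with `S_eq`, `conc`, and `log_K` as hypothetical names for the equilibrium stoichiometric block, the stacked concentration vector, and the vector K):

```python
import numpy as np

def equilibrium_residual(S_eq, conc, log_K):
    """Residual of the mass action conditions in logarithmic form:
    S_eq^t ln(conc) - log_K, with conc the stacked vector of mobile
    and immobile concentrations (all assumed strictly positive)."""
    return S_eq.T @ np.log(conc) - log_K
```

For example, for a single reaction X1 + X2 ⇌ X3 (stoichiometric column (−1, −1, 1)t), the residual vanishes exactly when c3/(c1 c2) equals kf/kb.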
 The well-known way to handle system (7) together with (4) is to form linear combinations of the equations in (7) in such a way that the unknown reaction terms Req vanish. The forming of linear combinations of equations can be expressed by multiplying (2) with certain matrices [e.g., Lichtner, 1996; Saaltink et al., 1998; Molins et al., 2004; Kräutle and Knabner, 2005]. A main difference between the various reduction methods lies in the choice of these matrices. This choice strongly influences whether (or under which conditions) the aim of a partial decoupling of equations can be reached in the resulting system.
3. Decoupling Algorithm
3.1. Transformation of the Mobile and Immobile Blocks
 Let us recall the basic idea of the transformation by Kräutle and Knabner [2005, section 4.1].
 The first step of the decoupling method is to transform the two blocks of (2) separately: Let J1, J2 be the number of linearly independent columns in the matrices S1, S2, respectively. For Si, i = 1, 2, we define S*i as a matrix consisting of a maximal system of linearly independent columns of Si, and Ai such that
holds. A1 is a matrix of size J1 × J and A2 of size J2 × J. For S*i we define Si⊥, consisting of a maximal set of linearly independent columns that are orthogonal to each column of S*i. By using (9) in (2) and then multiplying each block of (2) by
we derive the following four blocks:
This manipulation corresponds to the forming of linear combinations within each of the two blocks of equations in (2). The number of equations in (11) is the same as the number of equations in (2) or (7). Hence the reduction is still to come.
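The construction of S*i, Ai, and the matrix whose columns span the orthogonal complement of the range of S*i can be sketched numerically as follows. This is a hedged illustration with our own function and variable names; we use a greedy selection of linearly independent columns and an SVD for the complement, while a rank-revealing QR would be an equally valid choice.

```python
import numpy as np

def decompose(S, tol=1e-10):
    """Split S into S_star (a maximal set of linearly independent
    columns), A with S = S_star @ A, and S_perp whose columns span the
    orthogonal complement of range(S_star)."""
    # greedily select a maximal set of linearly independent columns
    cols = []
    for j in range(S.shape[1]):
        if np.linalg.matrix_rank(S[:, cols + [j]], tol=tol) > len(cols):
            cols.append(j)
    S_star = S[:, cols]
    # A solves S = S_star @ A (least squares is exact here, since every
    # column of S lies in the range of S_star)
    A, *_ = np.linalg.lstsq(S_star, S, rcond=None)
    # orthogonal complement of range(S_star) from the full SVD
    U, _, _ = np.linalg.svd(S_star)
    S_perp = U[:, len(cols):]
    return S_star, A, S_perp
```

Applied to S1 and S2 separately, this yields exactly the ingredients of (9) and (10); note that A then automatically contains unit-vector columns at the selected positions.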
 Since the matrices and the differential operators in system (11) commute, we can substitute
The inversion of relation (12) reads
 The vectors (ξ, η) ∈ ℝI, (ξ̄, η̄) ∈ ℝĪ are representations of the vectors c, c̄, respectively, with respect to another basis. ξ and η are linear combinations of only mobile species, and ξ̄, η̄ are linear combinations of only immobile species.
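Since the columns of S*1 and S1⊥ are mutually orthogonal and together span the mobile concentration space, the change of variables (12) and its inversion (13) reduce to two small linear solves. A sketch with hypothetical names, assuming the transformation matrices of (10) are the blocks (S*1t S*1)−1 S*1t and (S1⊥t S1⊥)−1 S1⊥t; the same proceeding applies to the immobile block:

```python
import numpy as np

def to_components(c, S_star, S_perp):
    """Invert the change of basis c = S_star @ xi + S_perp @ eta.
    Because S_star^t S_perp = 0, multiplying by S_star^t and S_perp^t
    decouples the two unknowns:
    xi  = (S_star^t S_star)^{-1} S_star^t c
    eta = (S_perp^t S_perp)^{-1} S_perp^t c."""
    xi = np.linalg.solve(S_star.T @ S_star, S_star.T @ c)
    eta = np.linalg.solve(S_perp.T @ S_perp, S_perp.T @ c)
    return xi, eta
```

The round trip c → (ξ, η) → c is exact, which is what makes (12)/(13) a genuine change of basis rather than a projection.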
 System (11) becomes
 The advantage of (14) compared to other possible formulations is that the “reaction invariants,” i.e., the equations for the components η and η̄, are decoupled from the equations for ξ, ξ̄. This is caused by the fact that our components η, η̄ do not consist of mixed linear combinations of mobile and immobile species. Note that the decoupled equations for η are linear and scalar, which makes their solution rather fast. The components η̄ are even constant throughout the whole simulation. The evolution of both of these quantities is completely determined by the initial and boundary values for c, c̄ and is independent of the reactions. The ξ, ξ̄ differ from the classical definition of “reaction extents” [Prigogine and Defay, 1954; Friedly, 1991; Friedly and Rubin, 1992], since on the right-hand sides of the corresponding equations, linear combinations of reaction rates occur.
 The number of remaining coupled nonlinear equations for representation (14) is J1 + J2. If all reactions are kinetic, then formulation (14) could directly be used for simulation. However, it is possible to further reduce the system size: After discretization in time, the last block in (14) consists of local algebraic equations for ξ̄. These equations can be solved for ξ̄ and substituted into the right-hand side of the third block in (14). This reduces the number of remaining coupled nonlinear PDEs to J1.
 If some of the reactions are assumed to be at equilibrium (formulation (7) together with (4)), then formulation (14) cannot be used directly. It is necessary to eliminate the equilibrium rates Req from the system (14) and use the remaining equations together with the equilibrium conditions (4). The way to eliminate the equilibrium reactions from the system (14) such that the component equations for η, η̄ remain decoupled will be motivated by an example in the following section.
3.2. Further Treatment for an Example Problem Including Equilibrium Sorption
 Let us consider the chemical reaction network
consisting of 3 mobile species X1, X2, X3, 2 immobile species X̄4, X̄5, and 3 chemical reactions R1, R2, R3. The corresponding nonlinear system of PDEs/ODEs reads
 Let us apply the algorithm of section 3.1 to (15)/(16): The stoichiometric matrix reads
The extraction of the linearly independent columns yields
and for the orthogonal complement of S*1 we choose
and S2⊥ = () is the empty matrix. Hence the algorithm of section 3.1 yields the system
Equation (17) corresponds to the general form (14).
 Let us analyze (17) with respect to the case that reactions can be at equilibrium. In this case, our goal is a formulation in which every equilibrium reaction occurs in only one equation, and in which each equation contains at most one equilibrium reaction term. After that, we can replace the corresponding PDE or ODE by the corresponding algebraic equilibrium condition (4). If R1 or R3 is at equilibrium in (17), then we can proceed in this way immediately. If R2 is at equilibrium, then we first have to replace the third or the fourth line in (17) by the difference of these two equations:
In this formulation, an arbitrary subset of the three reactions can be assumed to be at equilibrium, since we can replace the specific PDEs/ODEs from (18) by the corresponding algebraic equilibrium conditions Qj = 0.
 Analysis of this example problem shows under which assumptions we can generalize the procedure: Reactions R1 and R3 are homogeneous; each occurs in exactly one of the two blocks of (16). Therefore they occur only in one of the blocks of the system (17). Because of the special structure of A1, A2, which consist of unit vector columns and zero columns (a consequence of the fact that S1, S2 consist of linearly independent and zero vectors), they occur only once in the whole system. However, reaction R2 is heterogeneous; hence it occurs in both blocks of (16) and therefore in both blocks of the system (17). Again, thanks to the structure of A1, A2, the heterogeneous reaction occurs in exactly one equation of each block of the system (17). By taking the difference of these two equations, as was done in (18), one occurrence of R2 can be eliminated.
 What we need in order to apply the method of this example is basically that those columns of A1 and A2 that correspond to equilibrium reactions are certain unit vectors or zero columns. For this, we need that Seq1 and Seq2 consist of linearly independent columns plus possibly some zero columns; see section 3.3 for the details.
3.3. General Algorithm Capable of Handling Heterogeneous Equilibrium Reactions
 Let us sort the vector of equilibrium reactions in the following way: We start with those equilibrium reactions whose participants are all mobile, then we take the heterogeneous equilibrium reactions (i.e., those having participants both in the mobile and in the immobile phase) and at last those equilibrium reactions in the immobile phase:
Note the slight abuse of notation, since Rsorp not only contains equilibrium sorption reactions, but arbitrary equilibrium reactions between mobile and immobile species. For the kinetic reactions Rneq, no special order is required. The size of the subvectors is Jmob, Jsorp, Jimmo, Jneq; Jmob + Jsorp + Jimmo = Jeq, Jeq + Jneq = J. Because of this order, the stoichiometric matrix has the following shape:
 Note that for representation (19) we made no assumptions on the stoichiometry; every reactive system has such a representation. A usual assumption on the stoichiometric matrix is that all the columns of S are linearly independent. What we require is that at least the columns of the equilibrium part Seq are linearly independent. Hence also the columns in Smob1, in Simmo2, and in Ssorp are linearly independent.
 As an additional requirement, motivated by the example in section 3.2, we postulate that both the columns in (Smob1∣Ssorp1) and in (Ssorp2∣Simmo2) are linearly independent, i.e.,
This condition is not met for arbitrary stoichiometric matrices (19), but in section 4 we will show that without loss of generality, each chemical system can be formulated in a way fulfilling condition (20).
 As a consequence of (20), we can choose a maximum set of linearly independent columns from S1 containing (Smob1∣Ssorp1), and a maximum set of linearly independent columns from S2 containing (Ssorp2∣Simmo2). As in section 3.1, we denote these matrices by S*1, S*2 again. We get the block structure
where Sneq1′, Sneq2′ consist of columns taken from Sneq1, Sneq2, respectively, such that S*1, S*2 consist of a maximal set of linearly independent columns of S1, S2. Let Jneq1′, Jneq2′ be the number of columns of Sneq1′, Sneq2′.
 The selection process is described by (9), where A1, A2 have, thanks to (20) and (21), the block structure
and where Idn is the n × n identity matrix. Exactly as in (9)–(14), we define S1⊥, S2⊥, multiply the two blocks of the given system (2) by the matrices (10), and substitute c, c̄ by the new variables η, ξ, η̄, ξ̄. Additionally we introduce the splitting into subvectors
of size Jmob, Jsorp, Jneq1′; Jsorp, Jimmo, Jneq2′, and we make use of the block structure (22), (23). We get
This formulation corresponds completely to the representation (14); the only difference is that we used the block structure of S, A1, A2. Formulation (24) also corresponds to formulation (17) of the example problem: Those equilibrium rates which involve only mobile species or only immobile species (Rmob, Rimmo) occur exactly once, and the heterogeneous equilibrium reactions Rsorp occur exactly twice. The assumption (20) was necessary to guarantee these properties. The kinetic reaction rates Rneq can occur multiple times.
 As in the example of section 3.2, we have to eliminate one occurrence of Rsorp by taking the difference of (24d) and (24f):
where we have set Asorp := Asorp1 − Asorp2. Now, since all equilibrium reactions (Rmob, Rsorp, Rimmo) = Req occur exactly once, we can replace (25c), (25f), and (25g) by the AEs (4) describing the equilibrium.
The system is closed (see (13)) by
 The system (26) consists of the decoupled linear problems (26a) for η, a block of “local” problems (26c) and (26d), and a system of PDEs (26e) and (26f). The character of equation (26e) is that of a “generalized equilibrium sorption process.” As the next section shows, the blocks of local equations (26c) and (26d) can be solved for the unknowns ξmob, ξ̄sorp, ξ̄immo, and ξ̄neq (or for ξmob, ξsorp, ξ̄immo, ξ̄neq), and substituted into the remaining PDEs in the sense of a DSA. Note that (different from the scheme arising from the method by Friedly and Rubin [1992], if considered as a GIA with DSA technique), the introduction of the resolution functions for ξmob, ξ̄sorp, ξ̄immo, ξ̄neq in (26e) and (26f) does not take place under the transport operator. Hence the sparsity pattern of the linear problems arising from the discretization of (26) is more convenient for efficient numerical solution with iterative linear solvers. See Kräutle and Knabner [2005, section 3.4] for a more detailed discussion of the sparsity pattern.
 After the local equations (26c) and (26d) are substituted into (26e) and (26f), the number of remaining coupled nonlinear equations (26e) and (26f) is Jsorp + Jneq1′, which is less than or equal to Jsorp + Jneq, which is again less than or equal to the total number of reactions J, and J is less than or equal to I + Ī, the number of unknowns in the original problem (1)/(2). The number of homogeneous equilibrium reactions (Jmob, Jimmo) does not influence the size of the resulting system. See also section 5 for a comparison of our reduction scheme to classical formulations and to Molins et al. [2004].
3.4. Implicit Elimination Process
 In this section, the solvability of the blocks of local equations in scheme (26) is discussed under rather general assumptions.
 Theorem 1. Let assumption (20) hold, and let there be a positive lower bound for all concentrations ci, c̄i, uniform for all points x of the computational domain.
 1. If the equilibrium reactions (26c) are governed by the mass action law (5)/(8), then there is a local resolution function for ξmob, ξ̄sorp, ξ̄immo (depending on ξsorp, ξneq, ξ̄neq).
 2. Let the derivatives ∂Rj/∂ci, ∂Rj/∂c̄i exist and be bounded all over the computational domain for all species ci, c̄i and all kinetic reactions Rj, j = Jeq + 1, …, J. If (26d) is discretized with the implicit Euler method and if the time step size Δt is sufficiently small (or if the explicit Euler method is used and Δt > 0 is arbitrary), then there is a local resolution function of this block for ξ̄neq (depending on ξmob, ξsorp, ξneq, ξ̄sorp, ξ̄immo).
 3. Under the assumptions of 1 and 2, there is a local resolution function of (26c) and (26d) for ξmob, ξ̄sorp, ξ̄immo, ξ̄neq (depending on ξsorp, ξneq).
 4. Assertions 1–3 also hold if we exchange ξsorp and ξ̄sorp.
 For the proof, see Appendix A.
 Note that the choice to solve (26c) for ξmob, ξ̄sorp, ξ̄immo (or for ξmob, ξsorp, ξ̄immo) corresponds to the segregation of the so-called secondary dependent variables from the primary ones. For our method, this segregation is not done for the original unknowns ci, c̄i, as it is done, for example, by Lichtner and by most of the studies cited in section 1, but for the new unknowns ξi, ξ̄i.
 Let us mention that it is possible to show that under the assumptions of the theorem, the system of differential equations we get after substitution of the local variables is parabolic and does not degenerate.
3.5. Newton's Method for the Reduced System
 Each time step for our reduced system (26) consists of the following steps.
 First, we perform the time step for the linear scalar decoupled equations for η. Then we perform a Newton or Newton-like iteration for the nonlinear problem consisting of equations (26e) and (26f) for the variables ξsorp and ξneq. We refer to these variables as “global” variables, since they are coupled through a system of PDEs. The variables ξmob, ξ̄sorp, ξ̄immo, ξ̄neq are considered to be substituted in (26e) and (26f) by solving (26c) and (26d) for these variables. It is not necessary (and also not realistic) that we find the resolution function of (26c) and (26d) explicitly; it is sufficient that we can evaluate the residuals of (26e) and (26f) and compute the Jacobian for the reduced problem. Notice that the Newton step for (26e) and (26f) only gives an update for the global variables. In order to get an update for the remaining so-called “local” variables, which is necessary to evaluate the residuals and the Jacobian, we have to perform a nested Newton iteration for problem (26c) and (26d) with fixed values of the global variables. This nested problem consists of small local decoupled problems at the different nodes or control volumes of the computational domain. Numerical test runs confirm that the cost of solving these local problems is negligible compared to the CPU time for the global problem.
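The two-level structure of this iteration can be illustrated on a scalar toy problem: one “global” unknown whose residual depends on a “local” unknown that is only available through an inner Newton iteration, mimicking the resolution function of (26c) and (26d). This is a schematic sketch with hypothetical residual functions, not the discretized system (26); the outer derivative of the reduced residual is approximated here by a finite difference.

```python
import numpy as np

def solve_local(g, dg_dloc, xi, loc0, tol=1e-12, maxit=50):
    """Inner Newton: resolve the local variable from g(xi, loc) = 0
    for a frozen value of the global variable xi."""
    loc = loc0
    for _ in range(maxit):
        r = g(xi, loc)
        if abs(r) < tol:
            break
        loc -= r / dg_dloc(xi, loc)
    return loc

def nested_newton(F, g, dg_dloc, xi0, loc0, tol=1e-12, maxit=50, h=1e-7):
    """Outer Newton for the reduced residual F(xi, loc(xi)) = 0, where
    loc(xi) is only available through the inner iteration (a resolution
    function in the sense of Theorem 1)."""
    xi, loc = xi0, loc0
    for _ in range(maxit):
        loc = solve_local(g, dg_dloc, xi, loc)
        r = F(xi, loc)
        if abs(r) < tol:
            break
        # finite-difference derivative of the reduced residual
        loc_h = solve_local(g, dg_dloc, xi + h, loc)
        drdxi = (F(xi + h, loc_h) - r) / h
        xi -= r / drdxi
    return xi, loc
```

In the actual scheme the global unknowns are vectors coupled by PDEs and the local problems decouple node by node, but the nesting of the two Newton loops is exactly as sketched.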
3.6. Nonideal Activities and Minerals
 The assumption of ideal activities in (5) was only required for the proof of the theorem in section 3.4. Even if nonideal activities are assumed, the resolution function most likely exists; only the proof does not cover this case. If the assertion of the theorem is desired also in the case of nonideal activities of aqueous species, one could make the approximation that during one time step, the activity coefficients remain constant. This leads to a formulation of the equilibrium conditions that again depends on the concentrations instead of the activities, like equation (8), but now with different equilibrium constants K incorporating the activity coefficients of the species. With this approximation, the assertion of the theorem remains valid.
 If reactions with minerals are considered, a constant activity is usually assumed for the mineral species. For species with constant activity, the remarks made by Kräutle and Knabner [2005, section 4.4] apply, i.e., these mineral species can be eliminated a priori, and the reduction algorithm is applicable, as long as no mineral reaches zero concentration (no complete dissolution).
 If a system is considered where complete dissolution of minerals is possible, then the transformation of the problem into the decoupled formulation (26) is still valid. However, the further reduction of section 3.4 cannot be guaranteed, since the equilibrium conditions with minerals have a more complicated structure than (5). For example, the equilibrium condition corresponding to reaction R2 in (15), with X̄4 assumed to be a mineral, can be written as
which is called a complementarity condition, and which cannot be solved for one of the variables. One well-known possibility to handle the case of vanishing minerals is to formulate the given problem as a moving boundary value problem (MBVP) [Lichtner, 1985], the moving boundaries separating the saturated and the undersaturated subdomains. The MBVP formalism assumes that on subdomains where a certain mineral concentration c̄i is zero, the corresponding mineral reaction rate Rj is zero, i.e., the corresponding column and row of the stoichiometric matrix can be dropped on that subdomain. On subdomains where the mineral concentration c̄i is greater than zero, a simple algebraic condition describes the equilibrium. This condition, as well as all the other reactions, is independent of the mineral concentration, so the ODE for the mineral concentration is decoupled from the computation of the nonmineral concentrations, and the update of the mineral concentration can be computed a posteriori from the update of the nonmineral concentrations. This procedure leads to slightly different stoichiometric matrices describing the nonlinear problem on the different subdomains, all being submatrices of S. It is possible to perform the transformation of section 3.3 on each subdomain separately, using on each subdomain the specific stoichiometric matrix. A different, rather new approach would be to use the complementarity representation (28) of the mineral reactions, which is valid on the whole domain, instead of the MBVP; to perform the transformation of section 3.3 on the whole domain with the complete given stoichiometric matrix S; and to apply numerical solution methods to the resulting system of differential equations coupled to the complementarity conditions, without applying the substitution of section 3.4 for the mineral reactions.
Such numerical solution methods for complementarity problems are known from mathematical optimization theory, for example the interior point method, which was applied by Saaf to a problem in porous media.
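As a schematic illustration of such methods, consider a toy two-unknown system: a total concentration balance together with a complementarity condition of the type (28), solved by a semismooth Newton method that selects the active branch of the min function. All names and the specific condition min(m, K − c) = 0 are hypothetical simplifications, not the article's system; an interior point method as mentioned above would be an alternative.

```python
import numpy as np

def solve_mineral(T, K, x0=None, tol=1e-12, maxit=50):
    """Semismooth Newton for the 2x2 system
        c + m - T     = 0   (given total concentration T)
        min(m, K - c) = 0   (complementarity: either the mineral is
                             absent, m = 0, or the water is saturated,
                             c = K)."""
    c, m = (x0 if x0 is not None else (T, 0.0))
    for _ in range(maxit):
        F = np.array([c + m - T, min(m, K - c)])
        if np.linalg.norm(F) < tol:
            break
        # generalized Jacobian: pick the active branch of min(., .)
        if m <= K - c:
            J = np.array([[1.0, 1.0], [0.0, 1.0]])   # branch min = m
        else:
            J = np.array([[1.0, 1.0], [-1.0, 0.0]])  # branch min = K - c
        dc, dm = np.linalg.solve(J, -F)
        c, m = c + dc, m + dm
    return c, m
```

For T above the solubility threshold the iteration lands on the saturated branch (c = K, m = T − K), otherwise on the fully dissolved branch (c = T, m = 0); the method never needs to track the moving boundary explicitly.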
4. Reduction for Arbitrary Stoichiometric Systems
 In section 3.3 the assumption (20) on the equilibrium reactions was made in order to guarantee the applicability of the reduction algorithm. The condition was used (1) to derive representation (24) such that the generalized sorption formulation (25)/(26) could be derived and (2) to guarantee the solvability of the equilibrium conditions (26c) (see proof of the theorem parts 1–3 in Appendix A). In this section we will show that every reactive system can be formulated in such a way that condition (20) is met. We will see that two simple preprocessing steps for the given problem are sufficient.
 For the following, let us consider an arbitrarily given stoichiometric matrix. Without loss of generality we can assume that at least the equilibrium reactions (i.e., the columns of Seq) are linearly independent.
4.1. Preprocessing Step I
 Let us consider an example problem with 6 species X1, X2, X3, X̄4, X̄5, X̄6 and the following Jeq = Jsorp = 4 heterogeneous equilibrium reactions R1–R4:
The corresponding stoichiometric matrix is
Clearly, both the columns of (Smob1∣Ssorp1) = Seq1 and the columns of (Ssorp2∣Simmo2) = Seq2 are linearly dependent, though the columns of Seq are linearly independent; condition (20) is not met.
 The preprocessing step I now consists of a Gaussian (row-based) elimination for Seqt (or, equivalently, a column-based Gaussian elimination for Seq):
 After transposition we get the following matrix, which will, to keep the notation simple, be denoted by Seq again:
The Gaussian elimination does not modify the stoichiometry: An addition of two columns of Seq corresponds, for example, to the multiplication of one equilibrium condition in (5) by another. Note that the same effect is used when stoichiometric matrices are transformed to the so-called canonical form [Lichtner, 1985]. However, the reason for such a transformation when using the canonical form is, different from ours, the desire to write the local equations as explicit resolution functions for the secondary variables.
 Since all Jeq columns of the given matrix (29) are linearly independent, the resulting matrix (30) contains Jeq rows which equal the Jeq unit vectors in ℝJeq. Each column in matrix (30) contains exactly one of the unities (printed in boldface). Each column in the upper part Seq1 of matrix (30) either contains a unity or is completely zero. From this representation it is clear that we can write (30) as
such that all columns in submatrix (Smob1∣Ssorp1) are linearly independent, i.e.,
(in the example (30), Jmob = 0, Jsorp = 2, Jimmo = 2). The columns in Simmo2 are also linearly independent, since all columns in Seq are linearly independent.
 Note that for the example of this section (matrix (30)) the stronger condition (20) is still not met after the preprocessing.
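Preprocessing step I is an exact column-based Gaussian elimination and can be sketched with rational arithmetic via a row reduction of Seqt. This is a hedged illustration; the matrix below is a hypothetical stand-in, since the elimination works for any Seq with linearly independent columns, and the article's matrix (29) is not reproduced here.

```python
import sympy as sp

def preprocess_step_I(Seq):
    """Preprocessing step I: column-based Gaussian elimination of Seq,
    realized as a row reduction (rref) of Seq^t. The column space of
    Seq -- and hence the stoichiometry and the reaction invariants --
    is unchanged, since rref only takes linear combinations of the
    rows of Seq^t, i.e., of the columns of Seq."""
    R, _ = Seq.T.rref()
    return R.T

# hypothetical 4-reaction example with 3 mobile (upper) and
# 2 immobile (lower) species
Seq = sp.Matrix([
    [1, 1, 0, 0],
    [0, 1, 1, 0],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
    [0, 1, 1, 1],
])
Seq_new = preprocess_step_I(Seq)
```

Because the columns of Seq are linearly independent, the rref of Seqt has a pivot in every row, which is exactly why the transformed Seq contains the Jeq unit rows described above.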
4.2. Preprocessing Step II
 Let us consider a matrix of type (31) with linearly independent columns which fulfills (32). We will transform it into a matrix fulfilling (20) using column operations:
 Let us denote the columns of Seq, Seq1, Seq2 by sj, sj1, sj2, j = 1, …, Jeq, respectively. Hence sj = (sj1, sj2)t. Suppose the columns in (Ssorp2∣Simmo2) are linearly dependent. Then there is a linear combination ∑j αjsj2 = 0 where at least one αj0 ≠ 0. Since the columns of Simmo2 are linearly independent, the column j0 can be chosen among the Ssorp2 columns, i.e., Jmob + 1 ≤ j0 ≤ Jmob + Jsorp. Since
the addition of
to column sj0 cancels the vector sj0,2, leaving us with a matrix with Jmob increased by one and Jsorp decreased by one. Obviously, property (32) remains unaffected by this addition.
 The process is repeated until the columns of (Ssorp2∣Simmo2) are linearly independent.
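This repetition can be sketched as follows: as long as the nonzero immobile parts of the equilibrium columns are linearly dependent, a null vector of those parts yields the column operation described above. This is a simplified illustration with hypothetical names; unlike the text, the sketch picks any admissible column j0 rather than restricting the choice to the Ssorp columns.

```python
import sympy as sp

def preprocess_step_II(Seq, n_mobile):
    """Preprocessing step II: while the nonzero lower (immobile) parts
    of the equilibrium columns are linearly dependent, use such a
    dependence to zero out the lower part of one more column. Each pass
    turns one 'sorption' column into a 'mobile' column (Jmob increases,
    Jsorp decreases); column operations leave the column space of Seq,
    and hence the reaction invariants, unchanged."""
    Seq = Seq.copy()
    while True:
        lower = Seq[n_mobile:, :]
        # columns whose lower part is not identically zero
        active = [j for j in range(Seq.cols) if any(lower[:, j])]
        null = lower[:, active].nullspace()
        if not null:
            return Seq
        alpha = null[0]
        j0_local = next(j for j in range(len(alpha)) if alpha[j] != 0)
        j0 = active[j0_local]
        # adding this combination cancels the lower part of column j0
        combo = sum(((alpha[k] / alpha[j0_local]) * Seq[:, active[k]]
                     for k in range(len(alpha)) if k != j0_local),
                    sp.zeros(Seq.rows, 1))
        Seq[:, j0] = Seq[:, j0] + combo
```

Each pass removes one column from the set with nonzero lower part, so the loop terminates after at most Jsorp steps.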
 Returning to our example (30) with Jmob = 0, Jsorp = 2, Jimmo = 2, we may choose j0 = 1 and add −1/2 times column 2 to column 1, and get the new stoichiometric matrix, which is again denoted by Seq,
with Jmob = 1, Jsorp = 1, Jimmo = 2.
 Note that the preprocessing does not affect the reaction invariants, since the orthogonal complement of the space generated by the columns of the upper/lower parts of the representations (29), (30), (33) is clearly the same.
5. Application of the Reduction Scheme and Comparison to Previous Methods
 In this section we demonstrate the application of the reduction scheme of section 3 to an example problem, and we compare the procedure and the number of resulting equations with other methods. Since the concept of this paper is to investigate decoupling techniques which do not rely on operator splitting, we consider other available methods only insofar as they do not apply operator splitting. Furthermore, we consider problems with mixed kinetic/equilibrium reactions and mobile/immobile species. We consider an example without mineral reactions, i.e., all heterogeneous reactions are sorption reactions. The reasons for this choice are twofold: Problems with minerals where full dissolution of minerals never occurs would not be very challenging for our decoupling method, since the mineral amounts do not affect the mobile species concentrations, i.e., the ODEs can be dropped and the structure of the system simplifies strongly. If problems with minerals are considered where complete dissolution takes place, some extension of the solution algorithm is necessary (see section 3.6), and the analysis of the efficiency of the method becomes more complicated.
 As an example, consider the problem with I + Ī = 10 species c := (c1, c2, c3, c4, c5, c6, c̄7, c8, c9, c̄10)t, three equilibrium reactions Req = (R1, R2, R3)t and two kinetic reactions Rneq = (R4, R5)t with the stoichiometric matrix
Note that for the moment we do not separate mobile from immobile species. The system of PDEs/ODEs can be written
where M1 is a diagonal matrix with diagonal entries θ for mobile and 1 for immobile species, and M2 is a diagonal matrix with entries 1 for mobile and 0 for immobile species. Besides these differential equations, the equilibrium conditions Seqt ln c = K hold (see (8)). Note that this example corresponds, concerning stoichiometry and mobility of species, to the example by Molins et al. ; however, as mentioned, we consider the heterogeneous reactions to be sorption reactions instead of mineral precipitation/dissolution. Let us start by demonstrating “standard” methods on the example.
5.1. Reformulation by Classical Approaches
 The basic idea for classical methods is to eliminate the unknown equilibrium rates Req from the system. There are several ways to achieve this. One possibility is [see, e.g., Lichtner, 1996] to solve the Jeq algebraic equations for certain variables (the so-called secondary variables), e.g.,
and then to solve Jeq = 3 differential equations from the system (35) for the Jeq equilibrium rates R1, R2, R3. In our case, we have to take the three differential equations for the secondary species c8, c9, c̄10 from (35) and get
The choice of secondary species is rather arbitrary; however, the corresponding part of Seq has to be invertible.
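 This invertibility condition is easy to check numerically. The sketch below uses a hypothetical equilibrium matrix (the actual matrix (34) is not reproduced in this text), and `valid_secondary_choice` is our name for the check, not established terminology.

```python
import numpy as np

def valid_secondary_choice(S_eq, secondary):
    """A set of Jeq species may serve as secondary species iff the
    Jeq x Jeq block of rows of S_eq belonging to these species is
    invertible, so the AEs can be solved for exactly these species."""
    block = np.asarray(S_eq, float)[list(secondary), :]
    return (block.shape[0] == block.shape[1]
            and np.linalg.matrix_rank(block) == block.shape[0])

# hypothetical equilibrium matrix (10 species x 3 reactions); rows 7-9
# (0-based) play the role of secondary species
S_eq = np.zeros((10, 3))
S_eq[[0, 1, 7], 0] = [1, 1, -1]   # R1 forms species 8 from species 1, 2
S_eq[[2, 3, 8], 1] = [1, 1, -1]   # R2 forms species 9 from species 3, 4
S_eq[[0, 9], 2] = [1, -1]         # R3: heterogeneous reaction of species 1
```

 With this matrix, the species 8, 9, 10 are a valid choice, while any set containing species 5 (which participates in no equilibrium reaction) is not.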
 The expressions for R1, R2, R3 can be substituted in the remaining I + Ī − Jeq = 7 differential equations from (35). We get
The remaining system consists of I + Ī − Jeq = 7 PDEs/ODEs and the Jeq = 3 AEs (36). The AEs could be substituted into the differential equations, leaving us with 7 coupled PDEs.
 Another possibility to derive such a system where Req is eliminated is to multiply (35) by a matrix Ueqt where Ueq is a (I + Ī) × (I + Ī − Jeq) matrix consisting of a maximal system of columns which are orthogonal to the columns of Seq, i.e., Ueq = Seq⊥ in the notation of the previous sections. This technique is applied in Molins et al.  as a first step of the transformation, and this matrix is called the component matrix. Since UeqtSeq = 0, the multiplication of (35) by Ueqt eliminates the equilibrium reaction term in (35). If we choose for example
(Ueq is not unique) then the multiplication yields (37). A third (equivalent) way to achieve this structure is to take linear combinations of the equations of (35) in such a way that R1, R2, R3 are eliminated from 7 equations [Steefel and MacQuarrie, 1996].
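 Computing such a component matrix is a plain null-space computation: the columns of Ueq span the null space of Seqt. A minimal sketch (the concrete Seq of (34) is not reproduced here, so a random integer matrix stands in for it):

```python
import numpy as np

def component_matrix(S_eq, tol=1e-10):
    """A (non-unique) U_eq whose columns span the orthogonal complement of
    the columns of S_eq, i.e., U_eq^t S_eq = 0: the right-singular vectors
    of S_eq^t beyond its rank span its null space."""
    u, s, vt = np.linalg.svd(S_eq.T)
    rank = int(np.sum(s > tol * max(s[0], 1.0)))
    return vt[rank:].T

# stand-in for the 10 x 3 equilibrium part of the stoichiometric matrix
rng = np.random.default_rng(1)
S_eq = rng.integers(-2, 3, size=(10, 3)).astype(float)
U_eq = component_matrix(S_eq)
```

 Multiplying (35) by Ueqt then removes Req; the same construction applied to the columns of UeqtSneq yields the matrix Uneq used below.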
 Note that these classical transformations only eliminate Req; they do not lead to any decoupling of differential equations. Several authors note that it is possible to remove also the kinetic rate terms from I + Ī − J = 5 of the I + Ī − Jeq = 7 equations. There are again several possibilities to achieve this goal. One possibility is to take linear combinations among the equations of (37) such that R4, R5 are eliminated from I + Ī − J = 5 differential equations. Since the right-hand side of (37) reads UeqtSneqRneq(c), a more instructive way to achieve the goal is to multiply (37) by a matrix Uneqt, where Uneq consists of I + Ī − J columns which are orthogonal to the Jneq columns of the matrix UeqtSneq. For the choice
we get the first I + Ī − J = 5 equations of the system
Another way to find the conservative equations free of any reaction term (called component equations) is to multiply the given system (35) directly by a matrix Et where E consists of I + Ī − J = 5 columns that are orthogonal to all of the J = 5 columns of S. The resulting system consists of a coupled system of I + Ī − J component equations (PDEs), Jneq PDEs with kinetic rates (38), and Jeq AEs (36). Such a structure is also derived by Steefel and MacQuarrie .
 One can either use (38) together with (36) for numerical simulation, or one can substitute (36) into (38) for simulation. Note that still no decoupling is achieved. A decoupling is possible if all species occurring in the component equations are either mobile or immobile. Then, the introduction of new variables η1, …, ηI+Ī−J in the first I + Ī − J equations of (38) leads to I + Ī − J linear scalar transport equations for the ηi. After a time step for the ηi is done, the linear algebraic equations which were used to define the ηi can be solved for certain ci, and these ci can be eliminated in the remaining problem consisting of the last Jneq = 2 equations of (38) and the nonlinear algebraic equations (36). However, in most situations, as in our example, the component equations will contain a combination of mobile and immobile species, which results in different linear combinations of species in the time derivatives and in the flux terms, and this seems to prevent the introduction of the ηi and the decoupling.
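 Whether a component equation admits such a scalar variable ηi can be read directly off the sparsity pattern of the corresponding component vector. A small sketch of this check (the mobility pattern below is hypothetical):

```python
import numpy as np

def mixes_mobile_immobile(u, mobile_mask, tol=1e-12):
    """True if the component vector u (a column of E or U_eq) combines
    mobile and immobile species.  In that case the component equation has
    different linear combinations under the storage and the flux term and
    cannot be replaced by a single scalar transport variable eta_i."""
    used = np.abs(np.asarray(u, float)) > tol
    mobile_mask = np.asarray(mobile_mask, bool)
    return bool((used & mobile_mask).any() and (used & ~mobile_mask).any())

# hypothetical 10-species setting: species 1-8 mobile, 9-10 immobile
mobile = np.array([True] * 8 + [False] * 2)
u_mobile_only = np.eye(10)[0]               # involves only mobile species 1
u_mixed = np.eye(10)[0] + np.eye(10)[9]     # mobile species 1 + immobile 10
```

 Only component vectors for which this check returns False give rise to decoupled scalar transport equations.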
5.2. Reformulation by a Newer Approach
 Let us consider the method by Molins et al. . The authors require that the stoichiometric matrix has the shape
and that all immobile species correspond to rows in the second and in the third block of rows of S, and that, if immobile species correspond to rows in the second block of rows, these rows have only zero entries in S21; i.e., each reaction contains at most one immobile species. Our example problem (34) already has this structure. The authors also perform the multiplication of (35) by Ueqt and then by Uneqt and get formulation (38). Then, they eliminate those immobile species which correspond to the second block of rows in (39) (this is only c̄7 in our example (34)) from the component equations in (38). This is done by multiplication of (38) by a certain matrix Ft which reads
The entries in the seventh column are chosen such that c̄7 vanishes from the component equations; the resulting system reads
Furthermore, Molins et al. provide a possibility to eliminate immobile constant activity species from the component equations in (40) by another multiplication step. This is a useful step if all the immobile species are constant activity species, since then component equations with only mobile species are left, which provides the possibility of introducing new variables ηi and a decoupling of the component equations. However, this is not the case for c̄10 in our example, so linear combinations of mobile and immobile species remain in the component equations, and a decoupling only occurs by splitting techniques. For those immobile species that correspond to rows within the second block in (39), the so-called immobile kinetic species (c̄7 in our example), ODEs occur in (40). These can, after a time discretization, be solved for the immobile kinetic species concentrations, so that these species can be eliminated from the right-hand sides of the PDEs of (40), effectively reducing the number of coupled PDEs by the number of immobile kinetic species Nki (= 1). Since we want to treat the problem without splitting, we can substitute the Jeq = 3 AEs (36) into the remaining I + Ī − Jeq − Nki = 6 differential equations. Hence the number of remaining PDEs is smaller by one than for the previously discussed methods. Note that the method by Molins et al. leads to a decoupling of the linear component equations if all immobile species with nonconstant activity correspond to rows within the second block of (39). This is usually the case if there is no equilibrium sorption.
5.3. New Decoupling Technique of Section 3
 Now let us apply the new reduction scheme of section 3 to the example. One main difference of our transformation is that, from the beginning, we distinguish thoroughly between mobile and immobile species. Hence we start by sorting the rows of S with respect to mobile and immobile species and by renumbering the species correspondingly. We get the system in the shape (2) with species vector c = (c1, …, c8)t, c̄ = (c̄9, c̄10)t and stoichiometric matrix
All nonzero columns in Seq1 and in Seq2 are linearly independent; hence the preprocessing of section 4 is not required. We note that all of the J = 5 = J1 columns of S1 are linearly independent; hence S1* = S1, A1 = Id5. Only J2 = 2 columns of S2 are linearly independent; we get S2* = −Id2, A2 = −S2. We choose I − J1 = 3 columns orthogonal to the columns of S1, for example
Before we look at the final system of shape (26), let us consider the intermediate state (14)/(24), where only within the block of mobile species equations and within the block of immobile species equations linear combinations were taken. For our example this formulation reads
where the relations between old and new variables, according to (12), (13), are given in the transformation below. Also the three AEs corresponding to the first three columns in S have to be expressed in terms of the new variables.
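 The choices S1*, A1 and S2*, A2 above amount to selecting a maximal set of linearly independent columns and expressing the remaining columns in terms of them. A generic sketch of this factorization (the matrix S2 below is hypothetical, chosen only to have rank 2 as in the example):

```python
import numpy as np

def independent_columns(S, tol=1e-10):
    """Indices of a maximal set of linearly independent columns of S."""
    idx = []
    for j in range(S.shape[1]):
        if np.linalg.matrix_rank(S[:, idx + [j]], tol=tol) > len(idx):
            idx.append(j)
    return idx

def factorize(S):
    """S_star, A with S = S_star @ A (cf. S1 = S1* A1, S2 = S2* A2)."""
    idx = independent_columns(S)
    S_star = S[:, idx]
    A, *_ = np.linalg.lstsq(S_star, S, rcond=None)
    return S_star, A

# hypothetical immobile block S2 (2 immobile species x 5 reactions), rank 2
S2 = np.array([[1., 0., 2., 1., 0.],
               [0., 1., 1., 0., 3.]])
S2_star, A2 = factorize(S2)
```

 The factor S_star has as many columns as the rank of the block (J1 = 5 for S1, J2 = 2 for S2 in the example), and A collects the coefficients of all columns with respect to the selected ones.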
 If we compare (41) to (38) or (40), we note that our number of component equations (I − J1 = 3 for the ηi plus Ī − J2 = 0 for the η̄i) is smaller than the number of component equations (I + Ī − J = 5) of the previous methods. More important, however, is the fact that our component equations decouple from the rest of the system, which is due to the fact that our component equations do not combine mobile and immobile species concentrations.
 As explained at the end of section 3.1, the formulation (41) is not yet suitable for numerical simulations, since the equilibrium rates R1, R2, R3 are still present. That was the reason to perform the transformation of (24) to (25). Formulation (41) corresponds to (24), with Req = (R1, R2, R3)t split into Rmob = (R1, R2)t, Rsorp = R3, and ξ = (ξ1, …, ξ5)t split into ξmob = (ξ1, ξ2)t, ξsorp = ξ3, ξneq = (ξ4, ξ5)t, and with ξ̄ = (ξ̄1, ξ̄2)t split into ξ̄sorp = ξ̄1 and ξ̄neq = ξ̄2. The reactions Rmob occur exactly once in the system, and the reactions Rsorp occur once in each of the two blocks. Now we take differences between mobile and immobile equations in (41) such that in the resulting system, also the Rsorp reactions occur only once:
This formulation corresponds to (25). By this, we have gained one more component equation; however, this equation will not decouple from the system due to the combination of mobile and immobile species. The equations with equilibrium rates R1, R2, R3 in (42) are now replaced by the three algebraic equilibrium conditions.
 From the theorem in section 3.4 we know that the three AEs, expressed in terms of the new variables, together with the ODE for ξ̄neq = ξ̄2 in (42), can be solved for the variables ξmob, ξ̄sorp, ξ̄neq, i.e., for ξ1, ξ2, ξ̄1, ξ̄2 (our secondary species). So the equations for the local variables ξmob, ξ̄sorp, ξ̄neq vanish from our global system (the corresponding equations can be used a posteriori to determine the equilibrium rates). The resulting formulation consists of the I − J1 = 3 decoupled linear component equations
and the Jsorp + Jneq1′ = J1 − Jmob = 3 coupled nonlinear PDEs
where c, c̄ are expressed in terms of the new variables according to the transformation equations in section 5.3 and where ξ1, ξ2, ξ̄1, ξ̄2 are the resolution functions of the Jeq + Jneq2′ = 4 local equations. Comparing this to the formulations by Lichtner , Steefel and MacQuarrie , and Molins et al. , we have about half the number of coupled nonlinear differential equations (3 instead of 6 or 7). To emphasize the difference to the canonical formulation or Molins' formulation, note that three of our four component equations in (43)–(44) completely lack immobile and secondary species and thus decouple.
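 The resolution functions themselves are obtained by a small local Newton solve in each grid point. The following generic sketch illustrates the idea on a hypothetical single mass-action equation; the actual local system couples the Jeq = 3 AEs with the discretized local ODE, but has the same structure of a small nonlinear solve with the global variables frozen.

```python
import numpy as np

def resolve_local(f, x0, tol=1e-12, maxit=50):
    """Newton iteration with a finite-difference Jacobian for the small
    local system f(x) = 0; evaluating it realizes the resolution function:
    the local (secondary) variables as a function of the frozen global
    variables hidden inside f."""
    x = np.array(x0, dtype=float)
    for _ in range(maxit):
        fx = f(x)
        J = np.empty((x.size, x.size))
        h = 1e-7
        for j in range(x.size):       # finite-difference Jacobian column j
            xp = x.copy()
            xp[j] += h
            J[:, j] = (f(xp) - fx) / h
        dx = np.linalg.solve(J, -fx)
        x += dx
        if np.linalg.norm(dx) < tol:
            return x
    raise RuntimeError("local Newton iteration did not converge")

# hypothetical local equation: mass action ln(c_sec) - ln(K c1 c2) = 0,
# i.e., the secondary concentration c_sec = K*c1*c2 for frozen c1, c2
K, c1, c2 = 2.0, 0.5, 3.0
c_sec = resolve_local(lambda x: np.log(x) - np.log(K * c1 * c2), [1.0])
```

 In a simulation, such a solve is carried out (or its result differentiated) inside every evaluation of the right-hand sides of the remaining coupled PDEs.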
 Besides the smaller number of remaining coupled PDEs, our formulation (43)–(44) obviously has the property that the only coupling terms between the equations occur on the right-hand sides and under the time derivative, but not under the spatial derivatives. Note that the formulations (38), (40) do contain many coupling terms under the transport operator after (36) is substituted. As a result, the number of nonzero entries in the Jacobian for (44) is far smaller than for (38) or (40), and this reduces the computation time for the linear solver, at least if iterative methods are used. This amplifies the cpu time reduction which was achieved by the smaller number of coupled PDEs. The factor by which the number of nonzero entries of the Jacobian for (44) is reduced compared to the other formulations depends on the spatial discretization and the number of space dimensions (1-D, 2-D, 3-D); a discussion of this question is given by Knabner and Kräutle [2005, section 3.4].
 The reduction scheme was implemented using the software basis M++ by Wieners , which provides the possibility of running the code on parallel computers. For the discretization we chose bilinear finite elements with mass lumping for the mass term and for the reactive terms, and an implicit Euler stepping in time. The nonlinear time steps are linearized by a Newton-Armijo method, and the linear solver is GMRES(k) with a Jacobi preconditioner. Several scenarios including kinetic/equilibrium sorption, aqueous complexation, transport, and biodegradation, e.g., the transformation of EDTA considered by Fang et al. , section 6.1, were simulated. The observations concerning cpu time made for the preliminary version of the code by Kräutle and Knabner  in essence also hold for our generalized code. This includes the observation of a cpu time reduction by factors between two and seven compared to simulations using the initial formulation (2).
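 For reference, the Newton-Armijo linearization of a time step can be sketched as follows. This is a generic, minimal sketch on a toy system: the actual implementation is the parallel M++-based code, and its linear solves use a preconditioned Krylov method rather than the dense solve below.

```python
import numpy as np

def newton_armijo(F, J, x0, tol=1e-10, maxit=50, sigma=1e-4):
    """Newton's method with Armijo backtracking on the residual norm:
    accept x + t*dx only if ||F(x + t*dx)|| <= (1 - sigma*t) ||F(x)||."""
    x = np.array(x0, dtype=float)
    for _ in range(maxit):
        Fx = F(x)
        nrm = np.linalg.norm(Fx)
        if nrm < tol:
            return x
        dx = np.linalg.solve(J(x), -Fx)   # Krylov solver in practice
        t = 1.0
        while np.linalg.norm(F(x + t * dx)) > (1.0 - sigma * t) * nrm:
            t *= 0.5                      # backtrack until sufficient decrease
            if t < 1e-12:
                raise RuntimeError("Armijo line search failed")
        x = x + t * dx
    raise RuntimeError("Newton-Armijo did not converge")

# toy nonlinear system standing in for one implicit Euler step
F = lambda x: np.array([x[0]**3 - 8.0])
J = lambda x: np.array([[3.0 * x[0]**2]])
root = newton_armijo(F, J, [3.0])
```

 The damping parameter t safeguards the Newton steps far from the solution; close to the solution the full step t = 1 is accepted and the quadratic convergence of Newton's method is retained.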
6. Summary and Outlook
 We have proposed a new reduction scheme for multicomponent reactive transport problems, which is able to handle coupled mobile and immobile species and mixed equilibrium and kinetic reactions. The basic philosophy is to determine how far the size of the nonlinear system of differential equations can be reduced without using splitting techniques (SNIA/SIA) or imposing restrictions on the stoichiometry.
 The number of coupled PDEs for the canonical formulation [e.g., Lichtner, 1996] is the number of species minus the number of equilibrium reactions, I + Ī − Jeq. For Molins et al.  it is I + Ī − Jeq − Nki if no constant activity species participate. A further decoupling is only achieved by splitting techniques.
 The number of coupled nonlinear PDEs for our reduction scheme lies between Jsorp and Jsorp + Jneq, where Jsorp is the number of heterogeneous equilibrium reactions and Jneq is the number of kinetic reactions. For most stoichiometric examples, the number of remaining PDEs seems to be smaller than that for the other mentioned formulations; however, there might be counterexamples. Comparisons of our GIA in terms of efficiency to methods using splitting techniques were not the subject of this article; results can be expected to depend strongly on the specific problem under consideration.
 A special property of the new scheme is that even within those equations which remain coupled, the coupling terms do not occur under the transport operator, which would cause many nonzero entries in the Jacobian. The sparsity of the system matrix can be exploited if iterative linear solvers are used, which seems reasonable at least for problems with fine discretization. Under rather general conditions, assuming the mass action law for the equilibrium reactions, the applicability of the reduction method and the existence of the resolution functions can be guaranteed. Applications to problems including minerals are possible as long as the mineral concentrations do not vanish, or if techniques like the formulation in terms of a moving boundary value problem are applied first.
 Future work could consider the full generalization of the reduction scheme to situations where there are constant activity species (minerals) with total dissolution and precipitation in parts of the computational domain. The transformation into a formulation corresponding to (26) is still valid in this situation. Generalizations of our reduction scheme to this case could be achieved either by the formulation of a moving boundary problem or by the formulation as a complementarity system (see section 3.6). The algorithmic details of these generalizations require detailed investigation in the future.