Algebra preconditionings for 2D Riesz distributed-order space-fractional diffusion equations on convex domains

When dealing with the discretization of differential equations on non-rectangular domains, a careful treatment of the boundary is mandatory and may result in implementation difficulties and in coefficient matrices without a prescribed structure. Here we examine the numerical solution of a two-dimensional constant-coefficient distributed-order space-fractional diffusion equation with a nonlinear term on a convex domain. To avoid the aforementioned inconvenience, we resort to the volume-penalization method, which consists of embedding the domain into a rectangle and adding a reaction penalization term to the original equation that dominates in the region outside the original domain and annihilates the solution there. Thanks to the volume penalization, methods designed for problems on rectangular domains become available for those on convex domains, and by applying an implicit finite difference scheme we obtain coefficient matrices with a 2-level Toeplitz structure plus a diagonal matrix which arises from the penalty term. As a consequence of this structure, we can describe the asymptotic eigenvalue distribution as the matrix size diverges, as well as estimate the intrinsic asymptotic ill-conditioning of the involved matrices. On these bases, we discuss the performance of the conjugate gradient with circulant and 𝜏-preconditioners and of the generalized minimal residual with split circulant and 𝜏-preconditioners, and conduct related numerical experiments.

order varies continuously in the interior of the domain. This results in a sum of the contributions of the different orders, leading to an integral over the order parameter.1,2 Applications of the fractional distributed-order paradigm in porous media and groundwater flow can be found in the literature.5,6 Since analytical solutions for FDEs are rarely available, the investigation of numerical methods for the computation of approximate solutions constitutes an active and promising field of research.
In the present work, we focus on a semi-linear two-dimensional Riesz distributed-order space-FDE defined on a convex domain. A linear version of this problem was discretized by means of a finite element method over unstructured/non-Cartesian meshes.7 The versatility of this method comes at a price, as the boundary treatment asks for some extra care and the resulting coefficient matrices do not have a distinct structure. To avoid these drawbacks, we adopt a different strategy. Rather than adjusting the discretization, we act on the continuous problem and alter it in a way that allows us to employ uniform grids and therefore preserve the good structure of the matrices. More precisely, we resort to the volume-penalization method,8 applied by Huang and Sun to solve a semi-linear Riesz space FDE.9 This technique consists of embedding the domain into a rectangle and considering an extended problem with an additional reaction penalization term, which dominates outside of the original domain and serves the purpose of annihilating the solution there. The latter purpose is achieved by introducing a penalization parameter that, as it tends to zero, magnifies the reaction term over the fractional differential operator and forces the solution to be zero outside of the original domain. The idea of embedding the domain in a rectangle and then using a suitable reformulation of the problem is the common denominator of the large class of immersed methods (see, e.g., References 10,11 and references therein), whose starting point is quite old; indeed, these methods intersect with those called fictitious domain methods.
12,13 Thus, we obtain a problem on a rectangular domain that can be discretized on uniform grids. In particular, we adopt a second-order finite difference method for the time derivative and a weighted and shifted Grünwald–Letnikov difference scheme for the fractional derivatives, combined with a quadrature rule for the integral and an approximation of the nonlinear term derived from Taylor expansions. The overall numerical scheme has second-order convergence in both the time and space directions. In addition, the use of uniform grids results in 2-level Toeplitz matrices, due to the shift-invariant nature of the operators, plus a diagonal matrix arising from the penalty term. This is crucial for overcoming the disadvantages of full coefficient matrices, which in general require O(n³) computational cost and O(n²) storage, where n is the number of grid points. In fact, the storage requirement for a Toeplitz structure amounts only to O(n), and the complexity of the matrix-vector product is reduced to O(n log n) thanks to the use of the fast Fourier transform (FFT).
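The O(n log n) Toeplitz matrix-vector product via the FFT mentioned above can be sketched as follows; this is a generic illustration of the classical circulant-embedding technique (in Python with NumPy), not the authors' implementation:

```python
import numpy as np

def toeplitz_matvec(first_col, first_row, x):
    """Multiply an n x n Toeplitz matrix by a vector in O(n log n).

    The Toeplitz matrix is defined by its first column and first row
    (first_col[0] must equal first_row[0]).  We embed it into a
    circulant matrix of size 2n and diagonalize the circulant by FFT.
    """
    n = len(x)
    # First column of the 2n x 2n circulant embedding.
    c = np.concatenate([first_col, [0.0], first_row[:0:-1]])
    # A circulant matrix is diagonalized by the Fourier matrix, so the
    # product reduces to componentwise multiplication in Fourier space.
    y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(np.concatenate([x, np.zeros(n)])))
    return y[:n].real
```

The same idea applies blockwise to 2-level Toeplitz (BTTB) matrices, with a two-dimensional FFT.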
Moreover, putting together the matrices obtained for each discretization step, we obtain a real symmetric structured matrix-sequence, which can be associated with a function, called the (spectral) symbol, asymptotically representing the distribution of the eigenvalues as the matrix size increases. We recall that the evaluation of the symbol over an equispaced grid leads to a reasonable approximation of the eigenvalues when the size of the matrix is sufficiently large. Applying the well-known theory of Toeplitz and generalized locally Toeplitz sequences,14 and building on the information available for the one-dimensional case,15 we are able to explicitly compute the symbol of the (properly scaled) coefficient matrix-sequence and to exploit the spectral information in order to design preconditioners that counterbalance the asymptotic ill-conditioning.
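The idea that an equispaced sampling of the symbol approximates the eigenvalues can be illustrated on the simplest classical example, f(θ) = 2 − 2cos(θ), whose Toeplitz matrix is tridiag(−1, 2, −1); the fractional symbols of the paper play the same role (a toy check, not the paper's matrices):

```python
import numpy as np

# T_n(f) for f(theta) = 2 - 2*cos(theta) is tridiag(-1, 2, -1).
n = 200
T = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
eigs = np.sort(np.linalg.eigvalsh(T))

# Equispaced sampling of the symbol over (0, pi).
theta = np.pi * np.arange(1, n + 1) / (n + 1)
samples = np.sort(2 - 2 * np.cos(theta))

# For this example the sampling reproduces the eigenvalues essentially
# exactly; for general symbols the match is asymptotic in n.
print(np.max(np.abs(eigs - samples)))
```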
The first preconditioner we investigate is a two-level version of the τ-preconditioners described in detail in the one-dimensional setting.15 As a basis for comparison, we consider a two-level Strang-type circulant preconditioner. Aware of the negative results regarding the convergence rate of matrix-algebra preconditioning in a multilevel setting,16,17 we cannot expect either superlinear or mesh-independent convergence in general. However, the τ-preconditioner is a preferable option in the real symmetric setting,18 especially because of a substantially better matching of the small eigenvalues (an application to fractional problems has been investigated19). Nevertheless, both preconditioners omit the diagonal matrix arising from the penalty term, and this could cause poor performance of the preconditioned conjugate gradient (PCG). Therefore, as a second preconditioner we study an adaptation to the distributed case of a split τ-preconditioner9 proposed for a non-distributed problem. A circulant version of the split preconditioner is also taken into account. These preconditioners are non-symmetric, therefore the associated linear systems need to be solved via the generalized minimal residual (GMRES) method. Moreover, they require two extra fast transforms compared with the non-split circulant and τ-preconditioners. For all the aforementioned preconditioners, we compute the symbol and exploit the spectral information to assess which preconditioner should be the preferable choice under different circumstances. The numerical tests emphasize that both τ-preconditioners need fewer iterations than their circulant versions, in line with the theoretical results. Moreover, if the penalization parameter is small enough, the split preconditioners typically provide a lower number of iterations, which balances the higher cost of the GMRES method compared with the PCG and the two extra fast transforms.
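The Strang-type circulant preconditioner admits a simple sketch: under the usual definition, its first column wraps the central diagonals of the Toeplitz matrix around, and its inverse is applied with two FFTs. The following is an illustrative implementation for a symmetric 1D Toeplitz matrix, not the authors' two-level code:

```python
import numpy as np

def strang_circulant_solve(t, b):
    """Apply the inverse of the Strang circulant preconditioner s(T)
    to b, where t is the first column of a symmetric Toeplitz T.

    The first column of s(T) copies the central diagonals of T and
    wraps them around (standard Strang construction)."""
    n = len(t)
    c = np.empty(n)
    m = n // 2
    c[: m + 1] = t[: m + 1]
    for k in range(m + 1, n):
        c[k] = t[n - k]
    # s(T) = F^{-1} diag(fft(c)) F, so solving amounts to one FFT,
    # a diagonal scaling, and one inverse FFT.
    return np.fft.ifft(np.fft.fft(b) / np.fft.fft(c)).real
```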
The current paper is organized as follows. Section 2 introduces the continuous problem, its extension via the volume-penalization method, and the discretization, including the basic features of the resulting coefficient matrices. Section 3 is devoted to the calculation of the symbol and to the spectral analysis. Section 4 contains the preconditioning proposals and the spectral study of the preconditioners, followed in Section 5 by the discussion of several numerical experiments. Finally, in Section 6 we draw conclusions and propose a few open problems.

PROBLEM SETTING AND DISCRETIZATION
Consider the following two-dimensional distributed-order space-FDE

  ∂u(x,y,t)/∂t = ∫₁² ϱ(α) ( ∂^α u(x,y,t)/∂|x|^α + ∂^α u(x,y,t)/∂|y|^α ) dα + f(u,x,y,t),  (x,y) ∈ Ω, t ∈ (0,T],  (1)

where Ω is a convex region, f(u,x,y,t) is the source term, satisfying the Lipschitz condition with respect to u, and ϱ(⋅) is a nonnegative kernel function with 0 < ∫₁² ϱ(α) dα < ∞. The Riesz space fractional derivatives ∂^α u(x,y,t)/∂|x|^α and ∂^α u(x,y,t)/∂|y|^α are defined as

  ∂^α u(x,y,t)/∂|x|^α = c(α) ( _aD_x^α u(x,y,t) + _xD_b^α u(x,y,t) ),

and analogously in the y direction, where c(α) = −1/(2 cos(απ/2)) > 0 for α ∈ (1,2). Here the left-sided and right-sided Riemann–Liouville fractional derivative operators with respect to x are defined as

  _aD_x^α u(x,y,t) = 1/Γ(2−α) ∂²/∂x² ∫_a^x u(ξ,y,t) (x−ξ)^{1−α} dξ,
  _xD_b^α u(x,y,t) = 1/Γ(2−α) ∂²/∂x² ∫_x^b u(ξ,y,t) (ξ−x)^{1−α} dξ,

and similarly with respect to y. In order to seek the numerical solution without resorting to non-Cartesian meshes, we exploit the volume-penalization method.9 Before discretizing the equations, we extend the domain Ω to a rectangle Ω̃ = [ã, b̃] × [c̃, d̃] ⊇ Ω and reformulate the problem as the following one:

  ∂u_ε(x,y,t)/∂t = ∫₁² ϱ(α) ( ∂^α u_ε/∂|x|^α + ∂^α u_ε/∂|y|^α ) dα − (1/ε)(1 − 1_Ω(x,y)) u_ε + f̃(u_ε,x,y,t),  (x,y) ∈ Ω̃, t ∈ (0,T],  (2)

where ε is the penalty parameter, 1_Ω is the indicator function of Ω (equal to 1 on Ω and to 0 elsewhere), and f̃(u_ε,x,y,t), ũ₀(x,y) are the zero extensions of f(u,x,y,t) and u₀(x,y), respectively, meaning that f̃(u_ε,x,y,t) = 0 and ũ₀(x,y) = 0 when (x,y) ∈ Ω̃ ∖ Ω. It is expected that, as ε → 0⁺, u_ε(x,y,t) → u(x,y,t).

Now, as done in the one-dimensional setting,20 we adopt a second-order finite difference method to discretize the transformed equation (2). Let M, n₁, and n₂ be positive integers and discretize the domain Ω̃ × [0,T] with a uniform grid of spatial steps h_x, h_y and time step Δt = T/M. In order to discretize the left and right Riemann–Liouville fractional derivatives in space, we exploit the weighted and shifted Grünwald–Letnikov difference scheme.21 Concerning the discretization in time, as done by Fan and Liu,7 we take the approximation centered at t_{m+1/2}. In order to approximate the integral in (1), we first decompose it on the subintervals arising from a partition of the integration interval (1,2). Specifically, we divide the interval (1,2) into l uniform subintervals, denote by Δ the length of such subintervals, and take the mid-point α_ℓ of each subinterval as the node of the mid-point quadrature rule.

Due to the nature of the nonlinearity, we can avoid employing a two-level iteration with a nonlinear solver and rather work towards the construction of a two-step implicit scheme. We then exploit the Lipschitz condition on f̃(u_ε,x,y,t) with respect to u_ε, which provides a constant C bounding the corresponding difference quotients. With the additional hypothesis that the second derivative of u_ε remains bounded, Taylor expansions in t_m and t_{m−1}, combined with backward and forward finite difference formulas, yield a second-order approximation of the nonlinear term at each time step. Note that this approximation requires the knowledge of u_ε(x_i,y_j,t_m) and u_ε(x_i,y_j,t_{m−1}), which means that we need to calculate the solution at the first time step in order to start the computation. For this purpose, we apply one step of the explicit midpoint method.

Combining the discretizations above and omitting the small remainder terms R^m_{i,j}, we arrive at a finite difference scheme for solving (2), with the initial condition ũ⁰_{i,j} = ũ₀(x_i,y_j), for i = 0,1,…,n₁+1, j = 0,1,…,n₂+1, and homogeneous boundary conditions. The numerical scheme can be written in matrix form, where ⊗ denotes the usual Kronecker product; for convenience, we rewrite the resulting linear system as in (12), with a coefficient matrix M_N assembled from the identity, the 2-level Toeplitz matrices A^x_N and A^y_N, and the diagonal penalization matrix D_N. We stress that, as A_n(α_ℓ) is a symmetric Toeplitz matrix, the matrices A^x_N and A^y_N possess a symmetric block-Toeplitz with Toeplitz-blocks (BTTB) structure or, in a more precise terminology (see Definition 1), they are 2-level Toeplitz matrices. Since the matrices A^x_N and A^y_N are negative definite,22 M_N is symmetric positive definite.
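The structure just described (identity plus 2-level Toeplitz plus diagonal penalty) can be sketched in a few lines. The Grünwald–Letnikov weights below are computed with the standard recurrence; the assembly uses the plain shifted GL discretization and illustrative scalings rather than the paper's exact WSGD coefficients, so it only mimics the structure of M_N:

```python
import numpy as np

def gl_weights(alpha, n):
    """Grunwald-Letnikov weights g_k = (-1)^k * binom(alpha, k),
    computed by the standard recurrence."""
    g = np.empty(n)
    g[0] = 1.0
    for k in range(1, n):
        g[k] = (1.0 - (alpha + 1.0) / k) * g[k - 1]
    return g

def assemble_penalized_matrix(n1, n2, alpha, dt, eps, inside):
    """Schematic assembly of a matrix with the structure in the text:
    identity + 2-level Toeplitz (fractional part) + diagonal penalty.
    `inside` is a boolean (n1*n2,) mask marking grid points in Omega;
    n1 = n2 is assumed in this sketch."""
    g = gl_weights(alpha, n1 + 1)
    # Shifted GL Toeplitz: entry (i, j) = g[i - j + 1] when i - j + 1 >= 0.
    T = np.zeros((n1, n1))
    for i in range(n1):
        for j in range(n1):
            if i - j + 1 >= 0:
                T[i, j] = g[i - j + 1]
    A1 = T + T.T                      # symmetric 1D fractional operator
    Ax = np.kron(np.eye(n2), A1)      # action along x
    Ay = np.kron(A1, np.eye(n1))      # action along y
    D = np.diag((~inside).astype(float))  # penalty outside Omega
    return np.eye(n1 * n2) - dt * (Ax + Ay) + (dt / eps) * D
```

As in the paper, the Toeplitz part is negative definite, so the assembled matrix is symmetric positive definite.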

SPECTRAL ANALYSIS OF THE COEFFICIENT MATRICES
In this section, we first introduce some basic definitions and results (Section 3.1) and then perform a spectral analysis of the (properly scaled) coefficient matrix-sequence {M_N}_N (Section 3.2).

Preliminaries
Definition 1. Let f ∈ L¹([−π, π]^d) and let {f_k}_{k∈Z^d} be the sequence of its Fourier coefficients, defined as

  f_k = 1/(2π)^d ∫_{[−π,π]^d} f(θ) e^{−i⟨k,θ⟩} dθ,  k ∈ Z^d.

The d-level Toeplitz matrix associated with f is T^(d)_N(f) = [f_{i−j}], where i, j are d-indices and N = Π^d_{i=1} n_i is the order of the matrix. The function f is called the generating function of the matrix-sequence {T^(d)_N(f)}_N. To clarify the notation in the case d = 2 of interest, the BTTB matrix of order N associated with f is

  T^(2)_N(f) = Σ_{|j₁|<n₁} Σ_{|j₂|<n₂} f_{(j₁,j₂)} J^(j₁)_{n₁} ⊗ J^(j₂)_{n₂},

where J^(j)_n is the matrix whose (s,t)th entry equals 1 if s − t = j and 0 elsewhere. When d = 1, that is, for Toeplitz matrices, we simplify the notation to T_N(f) := T^(1)_N(f).

Definition 2. The Wiener class is the set of functions f(θ) = Σ_{k∈Z^d} f_k e^{i⟨k,θ⟩} such that Σ_{k∈Z^d} |f_k| < ∞. The Wiener class forms a sub-algebra of the continuous 2π-periodic functions defined on [−π, π]^d.
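As a concrete instance of Definition 1 for d = 1, the following sketch builds T_n(f) from a generating function by approximating its Fourier coefficients with a uniform quadrature rule over the period (an illustration; the resolution m is an assumed parameter):

```python
import numpy as np

def toeplitz_from_symbol(f, n, m=4096):
    """Build T_n(f) (d = 1) from a generating function f, computing
    f_k = (1/2pi) \\int f(theta) e^{-ik theta} d theta with a uniform
    rule on [-pi, pi] (exact for trigonometric polynomials of low
    degree when m is large)."""
    theta = 2 * np.pi * np.arange(m) / m - np.pi
    vals = f(theta)
    ks = np.arange(-(n - 1), n)
    coeffs = {k: np.mean(vals * np.exp(-1j * k * theta)) for k in ks}
    T = np.empty((n, n), dtype=complex)
    for i in range(n):
        for j in range(n):
            T[i, j] = coeffs[i - j]
    return T
```

For f(θ) = 2 − 2cos(θ) this recovers tridiag(−1, 2, −1) up to rounding.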
We continue by giving the definition of spectral distribution in the sense of the eigenvalues.
Definition 3. Let μ_k denote the Lebesgue measure over R^k, let C₀(C) be the set of continuous functions with compact support over C, and let {A_N}_N be a sequence of matrices of size N with eigenvalues λ_j(A_N), j = 1,…,N. We say that {A_N}_N is distributed as the pair (f, G) in the sense of the eigenvalues, and we write {A_N}_N ∼_λ (f, G), if the following limit relation holds for all F ∈ C₀(C):

  lim_{N→∞} (1/N) Σ^N_{j=1} F(λ_j(A_N)) = 1/μ_k(G) ∫_G F(f(x)) dx.

The function f is referred to as the (spectral) symbol. When f = 0 for a given G of positive and finite Lebesgue measure, we say that {A_N}_N is a zero-distributed matrix-sequence.
In the case of diagonal matrices generated by the characteristic function of a Lebesgue measurable set, the following proposition can be proved.25

Proposition 1. Given a Lebesgue measurable set Θ ⊂ [0,1]² whose boundary has zero Lebesgue measure, the sequence of diagonal matrices obtained by sampling 1_Θ over a uniform grid in [0,1]² is distributed, in the sense of the eigenvalues, as 1_Θ.

With reference to the previous result, notice that a Lebesgue measurable set Θ ⊂ [0,1]², whose boundary has zero Lebesgue measure, is a Peano–Jordan measurable set and vice versa, so that 1_Θ is a Riemann integrable function. The specification is needed since the two different characterizations and terminologies are both used in the literature.25,26

We concisely recall that Toeplitz sequences with L¹ symbols, diagonal sampling matrices with Riemann integrable symbols, and zero-distributed matrix-sequences (see Definition 3) belong to the generalized locally Toeplitz (GLT) class, which represents a *-algebra of matrix-sequences equipped with a symbol and closed under linear combinations, products, and (pseudo-)inversion when the symbol is nonzero almost everywhere. The GLT symbol is the spectral symbol in the sense of Definition 3 whenever the considered matrix-sequence is Hermitian or asymptotically Hermitian, where by "asymptotically Hermitian" we mean a sequence {X_n}_n whose distance from its Hermitian part, measured in a Schatten p-norm ||⋅||_{S,p}, grows suitably slowly with the matrix size. Without digging deeper into the theory14 and the general setting,27 here we only mention that an example of asymptotically Hermitian character is reported in the assumptions of Theorem 2.
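Proposition 1 can be observed numerically: sampling the indicator of a disk on a uniform grid in [0,1]² gives a diagonal matrix whose eigenvalues are the samples themselves, and the proportion of eigenvalues equal to 1 approaches the measure of the set (a toy verification):

```python
import numpy as np

# Indicator of a disk of radius 1/4 centered at (1/2, 1/2): a
# Peano-Jordan measurable set, its boundary has zero measure.
n = 400
x = np.arange(1, n + 1) / (n + 1)
X, Y = np.meshgrid(x, x)
ind = ((X - 0.5) ** 2 + (Y - 0.5) ** 2 <= 0.25 ** 2).astype(float)

# The eigenvalues of diag(ind.ravel()) are the entries of ind, so the
# fraction of eigenvalues equal to 1 is simply the mean of the mask.
eig_fraction = ind.mean()
print(eig_fraction, np.pi * 0.25 ** 2)  # both close to pi/16
```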
We end this introductory part by recalling a property of the spectral norm of d-level Toeplitz matrices and stating a relevant theorem.27 Given a square matrix X of order N, we denote by ||X|| its spectral norm, that is, its maximal singular value (||X|| = max_{i=1,…,N} σ_i(X)), which coincides with the spectral radius in the case of a normal matrix; we recall that every Hermitian matrix is also normal. Given a d-level Toeplitz sequence {T^(d)_N(f)}_N generated by an essentially bounded f, it holds that28

  ||T^(d)_N(f)|| ≤ ||f||_∞.

Theorem 2 (Corollary 2.8 of Reference 27). Let {A_N}_N be a matrix-sequence with A_N = B_N + C_N and B_N Hermitian for all N ∈ N. Assume that {B_N}_N ∼_λ (f, G), that ||B_N|| and ||C_N|| are uniformly bounded with respect to N, and that ||C_N||_{S,1} = o(N) as N → ∞. Then {A_N}_N ∼_λ (f, G).

Spectral analysis
We are now ready to determine the symbol of our coefficient matrix-sequence. In this view, we start by reporting the symbol associated with the one-dimensional Grünwald–Letnikov matrices.

Proposition 2 (Proposition 3.8 of Reference 15). Let α ∈ (1,2). The symbol f_α associated with the matrix-sequence {A_{α,n}}_n belongs to the Wiener class; its formal expression is given in (14).

Corollary 1 (Corollary 3.9 of Reference 15). Let α ∈ (1,2). The symbol associated with the matrix-sequence {A_n(α) = A_{α,n} + A^T_{α,n}}_n belongs to the Wiener class and its formal expression is

  g_α(θ) = f_α(θ) + f_α(−θ),

where f_α is defined as in (14).
Remark 1. Note that g_α(θ) is a nonpositive function with a single zero at θ = 0 of order α.

In what follows, t_x and t_y denote the scaling factors arising from the discretization steps (each t_s involves ΔtΔ and h_s, s = x, y), we assume that h_s = o(1) for s = x, y, and we set c_ℓ = ϱ(α_ℓ) c(α_ℓ) for the quadrature weights.
By exploiting the previous tools and results, we are in a position to discuss the eigenvalue distribution of the (properly scaled) coefficient matrix-sequence {M_N}_N.

Theorem 3. Assume that ε and the discretization steps are balanced as specified above. Then

  {t_x M_N}_N ∼_λ σ_ε(x, y, θ₁, θ₂),

where σ_ε is the sum of the contribution of the fractional operators and of the penalization term, the latter supported on Ω̃ ∖ Ω.

Proof. We split t_x M_N into the sum of its Hermitian 2-level Toeplitz part and the diagonal part arising from the penalization, and we treat the two contributions separately; for the Toeplitz part we use the hypothesis on the discretization steps together with Equation (13).

Now, let us observe that the matrices A^x_N and A^y_N are symmetric 2-level Toeplitz and that the associated generating function is real and nonnegative after the sign change induced by the scheme. Therefore, the claim for the Toeplitz part follows by Theorem 1 and by the hypothesis on the discretization steps. Moreover, since the remaining terms have asymptotically negligible trace norm, Theorem 2 applies. Therefore, by Proposition 1, we obtain the thesis. ▪

Remark 3. If we complement the problem with the conditions

  u(x,y,0) = 1_Ω(x,y) u₀(x,y), (x,y) ∈ Ω̃,
  u(x,y,t) = 0, (x,y) ∈ R² ∖ Ω and t ∈ (0,T],  (17)

then the computation of the symbol can still be performed by resorting to the GLT theory. Indeed, the discretization of Equation (17) in matrix form reads as (10) and (11), and the symbol can be easily derived by means of the GLT theory. This kind of approach underlies the reduced GLTs, a more sophisticated tool for computing the symbol of the coefficient matrix corresponding to (1); see Reference 29, pp. 395–399, Section 3.1.4, Reference 26 for the initial proposal and terminology, and Reference 25 for a systematic treatment and a complete development of the theory. Note that the reduced GLTs have recently been exploited to carry out the spectral analysis of some specific immersed methods for differential problems described by classical derivatives in Reference 30.
Remark 4. We stress that the present proposal takes inspiration from Reference 9 and hence we restrict our attention to convex domains. However, while convexity is of technical importance for proving existence, uniqueness, and regularity, and for studying the approximation power of the adopted numerical scheme (as happens in many contexts when treating PDEs or FDEs), the spectral analysis of the original matrix-sequences and of the related preconditioning strategies holds in larger generality, thanks to the *-algebra structure of the reduced generalized locally Toeplitz class. As a matter of fact, only the Peano–Jordan measurability of the domain is required, which is equivalent to the Riemann integrability of its characteristic function, which in turn is equivalent to the zero Lebesgue measure of the boundary of the considered domain.14

ALGEBRA PRECONDITIONINGS
In this section, we benchmark the performance of several algebra preconditioners for solving the linear system (12). Consider the τ-preconditioners τ(B^x_{n₁}) and τ(B^y_{n₂}) for the Toeplitz matrices B^x_{n₁} and B^y_{n₂}, respectively.31 The first preconditioner we analyze, denoted by 𝒯_N, is obtained by replacing the Toeplitz blocks of M_N with their τ counterparts. As a term of comparison, we use the Strang-type preconditioner 𝒞_N, defined in analogy with 𝒯_N, where s(B^x_{n₁}) and s(B^y_{n₂}) are the Strang circulant preconditioners for the Toeplitz matrices B^x_{n₁} and B^y_{n₂}, respectively. Neither of the aforementioned matrix-algebra preconditionings ensures superlinear convergence in a multidimensional setting.16,17 However, the τ-preconditioner is a preferable option in the real symmetric setting,18 especially because of a substantially better matching of the small eigenvalues.19 The following theorem presents the symbol of the (scaled) preconditioner sequences.
Proof. It is well known that the matrix-sequences {τ(B^x_{n₁})}_{n₁} and {τ(B^y_{n₂})}_{n₂} are distributed as the corresponding generating functions, simply because the difference with respect to their Toeplitz counterparts is a zero-distributed matrix-sequence. For the same reason, the same holds for {s(B^x_{n₁})}_{n₁} and {s(B^y_{n₂})}_{n₂}. Then, we obtain the thesis by reasoning as in the proof of Theorem 3. ▪

Note that these preconditioners are highly convenient from a computational point of view because they can be diagonalized through fast transforms, namely the discrete sine transform (DST) for 𝒯_N and the FFT for 𝒞_N, allowing us to perform the inversion and the matrix-vector product in O(N log(N)) operations.
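The fast inversion of a τ matrix can be sketched via the orthonormal DST-I, under the standard convention that the τ matrix with symbol f is Q diag(f(θ_j)) Q with θ_j = jπ/(n+1) and Q the orthonormal DST-I matrix (a generic sketch, not the authors' code; SciPy's `dst` is assumed available):

```python
import numpy as np
from scipy.fft import dst

def tau_solve(symbol_vals, b):
    """Solve tau(f) x = b, where tau(f) = Q diag(f(theta_j)) Q with
    theta_j = j*pi/(n+1), j = 1..n.  With norm='ortho', the DST-I is
    symmetric and involutory, so applying the inverse costs two fast
    sine transforms and one diagonal scaling."""
    y = dst(b, type=1, norm='ortho')
    return dst(y / symbol_vals, type=1, norm='ortho')
```

For the symbol f(θ) = 2 − 2cos(θ), the associated τ matrix is exactly tridiag(−1, 2, −1), which gives a direct check of the routine.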
For this reason, in both 𝒯_N and 𝒞_N the contribution of the penalization matrix D_N is overlooked, since its presence would prevent the use of fast transforms. On the other hand, this could cause poor performance of the PCG: due to the presence of D_N, the spectrum of M_N contains a number of large eigenvalues (as many as the grid points in Ω̃ ∖ Ω) whose magnitude increases as both the mesh width and ε decrease. Indeed, ε should be chosen in such a way that the reaction regularization term becomes dominant without spoiling the accuracy of the numerical solution, that is, it should decrease suitably with the mesh width. We then expect that a too small value of ε causes an increase in the iterations of the PCG (see the numerical results in the next section).
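The outlier phenomenon is easy to reproduce in one dimension: adding (1/ε) times the indicator of the "outside" nodes to a Toeplitz matrix produces exactly as many large eigenvalues as there are outside nodes (a toy illustration, not the paper's matrices):

```python
import numpy as np

# T = tridiag(-1, 2, -1) plus (1/eps) * diag(indicator of "outside" nodes).
n, eps = 64, 1e-6
T = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
outside = np.zeros(n)
outside[: n // 4] = 1.0          # 16 "outside" nodes
M = T + np.diag(outside) / eps
eigs = np.linalg.eigvalsh(M)

# By Weyl's inequalities, 16 eigenvalues lie within ||T|| <= 4 of 1/eps,
# and the remaining ones stay O(1): the outliers count the outside nodes.
print(np.sum(eigs > 1.0 / (2 * eps)))
```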
If we take into account the contribution of D_N, we obtain the second group of preconditioning proposals, denoted by τ^split_N and 𝒞^split_N, obtained by incorporating D_N into the τ and circulant preconditioners, respectively. We refer to these preconditioners as τ and circulant splitting preconditioners: as we will explain shortly, they can be recast as a sum of two contributions on a splitting of Ω̃.

NUMERICAL EXPERIMENTS

Example 1

Eigenvalue distribution
We start by numerically verifying the eigenvalue distribution of the scaled coefficient matrix-sequence {t_x M_N}_N, which in this particular case lacks the diagonal term D_N and is therefore a Toeplitz sequence with symbol σ(θ₁, θ₂).
In Figure 1, we compare the eigenvalues of t_x M_N with a uniform sampling of the symbol. We fix l = 2 and n₁ = n₂ = 2⁴, 2⁷. It is evident that, as n₁ and n₂ increase, the symbol becomes a better approximation of the eigenvalues. Similar results are obtained for l = 5 and are shown in Figure 2.

Since in this example the penalization matrix vanishes, the split preconditioners coincide with 𝒯_N and 𝒞_N, respectively, and for this reason we apply and analyze the PCG method only. In Table 1 the preconditioner 𝒯_N is compared with the circulant one 𝒞_N in terms of iterations and CPU times. We set the tolerance to 10⁻⁸, l = 5, n₁ = n₂, and Δt = 0.1. "Iter" stands for the average number of iterations at the last time step and "CPU" is the corresponding average CPU time in seconds.
We note that the τ-preconditioner greatly outperforms the circulant one in terms of number of iterations, even for small sizes, and this is reflected in the CPU timings, which are smaller for the τ-preconditioner. This difference becomes more relevant as n₁, n₂ increase. These results are not unexpected and are consistent with the theoretical predictions in Section 4.
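For completeness, a bare-bones PCG with a circulant preconditioner applied through the FFT can be sketched as follows (a 1D toy stand-in for the paper's 2-level systems; the shift 2.5 in the toy matrix is only there to keep the Strang circulant invertible):

```python
import numpy as np

def pcg(matvec, b, prec, tol=1e-10, maxit=2000):
    """Bare-bones preconditioned conjugate gradient: `matvec` applies
    the coefficient matrix, `prec` applies the preconditioner inverse."""
    x = np.zeros_like(b)
    r = b - matvec(x)
    z = prec(r)
    p = z.copy()
    rz = r @ z
    for it in range(1, maxit + 1):
        Ap = matvec(p)
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            return x, it
        z = prec(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxit

# Toy system: shifted 1D Laplacian; the Strang circulant preconditioner
# is diagonalized by the FFT, so its inverse costs two fast transforms.
n = 128
A = 2.5 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
c = np.zeros(n)
c[0] = 2.5
c[1] = c[-1] = -1.0
lam = np.fft.fft(c).real
prec = lambda r: np.fft.ifft(np.fft.fft(r) / lam).real
x, iters = pcg(lambda v: A @ v, np.ones(n), prec)
```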

Example 2
We now consider a problem defined on a convex region.7

Eigenvalue distribution
First, we want to verify the eigenvalue distribution of the scaled coefficient matrices, as portrayed in Theorem 3. In this case, M_N contains the penalization term D_N, and therefore the sequence {t_x M_N}_N is associated with the symbol σ_ε(x, y, θ₁, θ₂).
In Figure 3, we compare the eigenvalues of t_x M_N with a uniform sampling of the symbol, setting ε = 10⁻⁵ and adjusting h_x and Δ so as to obtain C_ε(h_x, Δ) = 100. The symbol provides a reliable approximation of the eigenvalues. We have already noted in Example 1 that, as l increases, the spectral distribution result still holds asymptotically, but a smaller h_x is needed to observe it; as discussed before, the reason lies in the nature of formula (16). Now, in order to observe the behavior as ε tends to zero and n₁, n₂ tend to infinity, we vary ε and n₁, n₂ accordingly. We recall that Theorem 3 requires that, having fixed Δ, the quantities ε and h_x^{α_l} balance each other. Figure 4 highlights this fact. When n₁ = 2⁴, ε = 10⁻² is not small enough to counterbalance h_x^{α_l} and, as a consequence, it is not possible to discern the part of the symbol corresponding to the domain Ω from the one corresponding to its complement. In other words, we are not able to identify the solution on the domain of interest. If we decrease ε to 10⁻⁴, the separation becomes perceptible.
The same happens if we set n₁ = 2⁶, as shown in Figure 5. For ε = 10⁻⁴ there is no distinction between the two portions of the plot, while with the smaller value ε = 10⁻⁶ the gap is apparent.

PCG and GMRES method
Next, we discuss the performance of the PCG and GMRES methods with the proposed preconditioners. The results are shown in Table 2, with tolerance set to 10⁻⁸, l = 5, n₁ = n₂, and Δt = 0.1. First, we examine the non-split preconditioners. As in Example 1, the superior performance of both τ-preconditioners with respect to their circulant counterparts is evident and consistent with the theoretical predictions in Section 4.
Nonetheless, we observe that the ratio between the iterations required by 𝒞_N and 𝒯_N, or likewise between those required by 𝒞^split_N and τ^split_N, is greater than the ratio between the corresponding CPU times. In other words, the difference between the CPU times does not fully reflect the considerable difference in the required iterations. This is due to the fact that the circulant preconditioners are implemented by means of the FFT, while the τ-preconditioners employ the DST, which in MATLAB is implemented via two FFTs. Then, the application of 𝒯_N in MATLAB requires twice the number of FFTs compared with 𝒞_N, and the same holds for τ^split_N and 𝒞^split_N. A less computationally expensive algorithm for computing the DST would lead to lower CPU times for the τ-preconditioners, especially for smaller sizes. In this direction, for n a power of 2, the Van Loan book32 indicates a floating-point cost of 4n log₂(n) operations for the FFT, while with the same setting the DST operation count amounts to (5/2) n log₂(n), and the same type of advantage in favor of the DST holds for generic sizes. Therefore, with a careful implementation of the DST, the CPU timings would be substantially in favor of the τ approach when compared with the circulant one.

Now, let us analyze the performance of τ^split_N and 𝒞^split_N compared with 𝒯_N and 𝒞_N. Since, as shown in Section 4, the symbol of the split preconditioners matches the symbol of the coefficient matrix, contrary to the symbol of the non-split preconditioners, which does not include the term arising from the penalty matrix, we expect the split preconditioners to be the favored choice. This is confirmed by the numerical results collected in Table 2, where the number of iterations required by the split preconditioners is lower than that of their respective non-split versions. The same effect is reflected in the CPU times for small enough values of the penalization parameter. In fact, as a result of the cost per iteration of the GMRES method and of the two extra FFTs/DSTs, when the penalization parameter is not sufficiently small, the lower number of iterations provided by the split choice does not translate into equally lower CPU times. For this example, we see the advantage of using the split preconditioners when ε = 10⁻⁵. Finally, we observe that the increase in accuracy given by smaller values of ε comes at the cost of an increased number of iterations, and that this is more evident for the CG with the 𝒯_N and 𝒞_N preconditioners.
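The claim that the DST can be obtained through the FFT can be checked directly: the (unnormalized) DST-I of a vector is recovered from one complex FFT of its odd extension, which is consistent with the roughly double cost, with respect to a single real FFT, mentioned above (an illustrative sketch):

```python
import numpy as np

def dst1_via_fft(x):
    """Unnormalized DST-I, y_k = sum_j x_j sin(j*k*pi/(n+1)),
    computed through one complex FFT of the odd extension of x."""
    n = len(x)
    # Odd extension of length 2(n+1): [0, x_1..x_n, 0, -x_n..-x_1].
    v = np.concatenate([[0.0], x, [0.0], -x[::-1]])
    V = np.fft.fft(v)
    # The imaginary part of the FFT carries -2 times the sine sums.
    return -V.imag[1 : n + 1] / 2.0
```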

Error analysis
To conclude, we briefly discuss the error and its relation to the penalization parameter. Table 3 collects the errors for several values of n₁ = n₂ and ε, computed as the l² norm of the difference between the exact and the approximated solution; the tolerance is 10⁻⁶, l = 10, and Δt = 0.1. The errors obtained with the different methods and preconditioners are all equal, therefore only the one related to 𝒯_N is shown. "ErrIn" designates the error inside the domain Ω, while "ErrOut" indicates the error on Ω̃ ∖ Ω. The latter coincides with the l² norm of the numerical solution on the same set and is also referred to as the "penalty error".
• From (7) we know that the truncation error produced by the approximation is of second order in both time and space. With the fixed values Δ = 0.1 and Δt = 0.1 selected for the experiments, we do not expect to see a significant improvement in accuracy as n₁, n₂ increase. Nevertheless, the proposed numerical method appears to be accurate and reliable.
• Regarding the penalization parameter, we observe that the error slightly improves for smaller values of ε. However, as n₁ increases, the error ceases to decrease once the order of the penalty error is reached, because the penalty error dominates the global truncation error. For instance, with ε = 10⁻², ErrIn remains of order 10⁻⁴ at most, following the penalty error ErrOut, and further refinements of the spatial grid cannot improve the accuracy of the numerical solution. Conversely, with ε = 10⁻⁵, ErrIn manages to reach the order 10⁻⁵, since that is the order of the penalty error.

Example 3
For the last example, we modify the problem of Example 2 by inserting a nonlinear term on the domain Ω defined above, which is once again extended to the rectangle Ω̃ by the volume-penalization method.
Here, the exact solution is not known. Since the only change with respect to Example 2 is in the source term, the coefficient matrix-sequence remains the same and a comparison between the symbol and the eigenvalues would be redundant. Therefore, we move directly to the analysis of the numerical methods. We set the tolerance, l, and Δt as in Example 2 and apply the PCG and GMRES methods with their respective preconditioners. The obtained results are gathered in the corresponding table.

CONCLUSIONS AND OPEN PROBLEMS
In the present paper, we considered the numerical solution of a two-dimensional constant-coefficient distributed-order space-FDE, where additional challenges are given by the presence of a nonlinear term and by the domain, which is convex but not necessarily Cartesian. The problem has been reformulated, by using a proper penalization technique, in such a way that a more convenient Cartesian structure of the domain is recovered. The resulting linear systems show structured coefficient matrices, which can indeed be represented as the sum of a diagonal (sampling) matrix and a 2-level Toeplitz matrix with continuous generating function. The resulting matrix-sequences belong to the class of the GLT sequences, and the related spectral analysis has been provided while minimizing the technicalities of the powerful GLT apparatus. Associated fast iterative solvers have been proposed and studied, and their numerical performance has been reported and critically discussed.
As future work, it would be interesting to study the localization of the spectra of the preconditioned matrix-sequences, for fixed dimensions and in terms of the various problem and approximation parameters, in order to check the robustness of the proposed methods: this would allow precise estimates of the convergence speed by using classical and important results, such as those initiated with the pioneering work by Axelsson and Lindskog.33

F I G U R E 1 Example 1: Comparison between the symbol σ(θ₁, θ₂) and eig(t_x M_N) for l = 2.

F I G U R E 2 Example 1: Comparison between the symbol σ(θ₁, θ₂) and eig(t_x M_N) for l = 5. As l increases, larger values of n₁, n₂ are needed to observe the asymptotic relation; the reason of this degradation lies in formula (16), since l − 1 terms are neglected in the computation of the symbol because their matrix-sequences are zero-distributed.

T A B L E 3 Errors for several values of n₁ = n₂ and ε, computed as the l² norm of the difference between the exact and the approximated solution.