Fast hybrid iterative schemes for solving variational inclusion problems

Tseng's forward-backward-forward splitting method for finding zeros of the sum of a Lipschitz continuous monotone operator and a maximal monotone operator is known to converge only weakly in infinite dimensional Hilbert spaces. The inertial technique is widely used to accelerate iterative algorithms, while the viscosity approximation technique is used to obtain strong convergence. In this paper, we propose two fast, strongly convergent modifications of Tseng's method and present some consequences and applications of our results. Moreover, we illustrate the performance and computability of our algorithms with relevant numerical examples. The two hybrid schemes incorporate both the inertial and viscosity techniques. Unlike the conventional inertial-viscosity hybrid techniques in the literature, our new algorithms compute the inertial extrapolation and the viscosity approximation simultaneously at the first step of each iteration.


INTRODUCTION
Throughout this paper, ℋ is a real Hilbert space with inner product ⟨•, •⟩ and induced norm ‖•‖, A ∶ ℋ ⇉ ℋ is a maximal monotone operator, and B ∶ ℋ → ℋ is a Lipschitz continuous monotone operator. In this paper, we study the following variational inclusion problem (VIP): find x ∈ ℋ such that

0 ∈ (A + B)x. (1.1)

We shall denote the solution set of the VIP (1.1) by Ω. The study of the VIP is significant because it stands at the core of many important concepts in applied mathematics, such as convex minimization, split feasibility, fixed point, saddle point, variational inequality, and equilibrium problems. It also models numerous problems in various areas of applied sciences and engineering, such as signal processing, optimal control, image reconstruction, statistical learning, machine learning, quantum mechanics, filtration theory, and so on; see, for example, [1-7]. Its generality and wide applicability have attracted the attention of many researchers, who have studied it and proposed algorithms for finding approximate solutions to it.

A classical method for solving the VIP (1.1) is the forward-backward splitting method (FBSM):

x_{n+1} = J_{λA}(x_n − λBx_n), n ≥ 1, (1.2)

where λ ∈ (0, 2/L) and J_{λA} = (I + λA)^{−1} is the resolvent of the operator λA. The sequences generated by algorithm (1.2) converge weakly to some solution to (1.1) under the assumption that B is (1/L)-inverse strongly monotone (or cocoercive). Another sufficient condition for the convergence of (1.2) is the strong monotonicity of A + B [7,9].
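As an illustration of iteration (1.2) under the cocoercivity assumption, the following minimal Python sketch solves a small box-constrained problem; the set K, the operator B, and the step size below are our own illustrative choices, not taken from the paper.

```python
import numpy as np

# Forward-backward splitting (1.2): x_{n+1} = J_{lam A}(x_n - lam * B x_n),
# valid when B is (1/L)-cocoercive and lam lies in (0, 2/L).
# Here A = N_K, the normal cone of the box K = [0,1]^2 (so the resolvent is
# coordinatewise clipping), and B = grad of 0.5*||x - c||^2, which is
# 1-cocoercive (L = 1).
c = np.array([2.0, 0.5])
B = lambda x: x - c                      # gradient of 0.5*||x - c||^2
proj = lambda x: np.clip(x, 0.0, 1.0)    # J_{lam A} = P_K for every lam > 0

x = np.zeros(2)
lam = 1.0                                # lam in (0, 2/L) with L = 1
for _ in range(100):
    x = proj(x - lam * B(x))             # the forward-backward step
```

Here the limit is simply the projection of c onto K, that is, (1.0, 0.5).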
In order to relax the cocoercivity assumption in algorithm (1.2), Tseng [10] proposed the following modified forward-backward splitting method (also known as the forward-backward-forward splitting method) (MFBSM):

y_n = J_{λ_n A}(x_n − λ_n Bx_n),
x_{n+1} = y_n + λ_n(Bx_n − By_n),

where the step size λ_n is determined by a line search. Under the assumption that the operator B is monotone and Lipschitz continuous, Tseng proved a weak convergence theorem for the sequences generated by the MFBSM. Note that the implementation of algorithm (1.2) requires prior knowledge or an estimate of the Lipschitz constant of B, which is sometimes unknown or difficult to estimate in nonlinear problems, whereas the MFBSM uses a line search technique that circumvents the problem of estimating the Lipschitz constant a priori. However, using a line search is not cheap and can be time consuming, because the line search often requires many extra computations at each iteration [11]. An alternative and more economical approach is to use a self-adaptive step size. In this connection, Cholamjiak et al. [11] have recently proposed a relaxed forward-backward splitting method (RFBSM), which converges weakly to some solution to (1.1).
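For concreteness, here is a minimal Python sketch of Tseng's forward-backward-forward iteration with a fixed step size λ ∈ (0, 1/L) in place of a line search; the test problem (a monotone linear operator B plus the normal cone of a box) is our own illustrative choice.

```python
import numpy as np

def tseng_fbf(resolvent, B, x0, lam, iters=2000):
    """Tseng's forward-backward-forward splitting with a fixed step.

    Solves 0 in (A + B)x, where `resolvent` evaluates J_{lam A} and B is
    monotone and L-Lipschitz; lam must lie in (0, 1/L).
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        y = resolvent(x - lam * B(x))    # forward-backward step
        x = y + lam * (B(x) - B(y))      # extra forward correction
    return x

# Example: A = normal cone of the box [0,1]^2 (resolvent = clipping),
# B(x) = Mx - c with M monotone (its symmetric part is the identity).
M = np.array([[1.0, 1.0], [-1.0, 1.0]])  # L = ||M|| = sqrt(2)
c = np.array([1.0, 0.5])
B = lambda x: M @ x - c
proj = lambda x: np.clip(x, 0.0, 1.0)

x_star = tseng_fbf(proj, B, np.zeros(2), lam=0.2)
```

For this data the unique zero of A + B is the interior point (0.25, 0.75), obtained by solving Mx = c.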
Polyak [12] introduced the inertial extrapolation technique, also called the heavy ball method, to speed up the convergence of the classical gradient algorithm. Let f ∶ ℋ → ℝ be differentiable. For x_0, x_1 ∈ ℋ, define {x_n} by

x_{n+1} = x_n − λ∇f(x_n) + θ_n(x_n − x_{n−1}),

where θ_n ∈ [0, 1) is the extrapolation coefficient and the term θ_n(x_n − x_{n−1}) constitutes the inertial step. Later, Alvarez and Attouch [13] studied the inertial technique and proposed an inertial proximal method for solving the problem of finding zeros of a maximal monotone operator. Owing to its tendency to improve the convergence rate of algorithms, several authors have applied the inertial technique to modify or improve the MFBSM; see, for example, [11,14,15].
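A minimal Python sketch of the heavy-ball recursion on an ill-conditioned quadratic (our own toy example); the inertial term θ(x_n − x_{n−1}) is the only difference from plain gradient descent.

```python
import numpy as np

def heavy_ball(grad, x0, step, theta, iters=400):
    """Polyak's heavy-ball (inertial) gradient method:
    x_{n+1} = x_n - step * grad(x_n) + theta * (x_n - x_{n-1})."""
    x_prev = x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        # simultaneous update keeps the previous iterate for the inertial term
        x, x_prev = x - step * grad(x) + theta * (x - x_prev), x
    return x

# Quadratic f(x) = 0.5 * x^T Q x with condition number 25: the inertial
# term damps the zig-zagging of plain gradient descent.
Q = np.diag([1.0, 25.0])
grad = lambda x: Q @ x
x = heavy_ball(grad, np.array([5.0, 1.0]), step=0.05, theta=0.7)
```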
Note that the algorithms proposed for solving the VIP (1.1) in the aforementioned literature converge only weakly in general Hilbert spaces. However, in infinite dimensional Hilbert spaces and in most applications, strong convergence, that is, convergence in the norm, is more desirable than weak convergence. These facts necessitate combining hybrid techniques with existing methods for solving the VIP in order to generate a strongly convergent hybrid method. For instance, Gibali and Thong [16] studied the VIP (1.1) and proposed two modifications of the MFBSM based on Mann and viscosity ideas, but without incorporating inertial steps. They were able to obtain strong convergence theorems.
Also recently, Suparatulatorn and Chaichana [17] have studied the VIP for finite families of operators. They proposed an inertial shrinking-projection MFBSM with a self-adaptive step size for approximating a solution to the problem and thereby obtained strong convergence. One of the challenges here is the computation of a projection per iteration, which amounts to solving a minimization problem. In addition, the implementation requires the construction of a closed and convex set C_{n+1} from C_n, which is not a half-space, and the projection of the initial point onto C_{n+1} at each iteration, thus leading to high computational costs [18]. This brings about the following question: Can we devise a simple and accelerated strongly convergent algorithm for solving the VIP in the case where the operator B is monotone and the operator A is maximal monotone?
In this paper, we answer the above question in the affirmative. Motivated by the above literature, we propose two inertial-viscosity-modified forward-backward splitting algorithms for approximating a solution to the VIP (1.1). The algorithms use self-adaptive step sizes. We prove strong convergence theorems for our algorithms and illustrate their numerical advantages over existing algorithms using relevant numerical examples. In summary:
• We propose two modifications of Tseng's forward-backward-forward splitting method for solving variational inclusion problems and prove a strong convergence theorem for each of them.
• Our methods combine the inertial technique and the viscosity approximation method with Tseng's forward-backward-forward splitting method. The unique feature of our methods compared to the existing inertial-viscosity hybrid techniques in the literature (see, for example, [18-22]) is that we compute the inertial extrapolation and the viscosity approximation simultaneously at the initial step of each iteration.
• We aim to introduce simple and accelerated strongly convergent modified forward-backward splitting methods for solving variational inclusion problems in the framework of real Hilbert spaces.
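To make the "simultaneous first step" idea concrete, the following Python sketch combines a viscosity term α_n f(x_n) and an inertial extrapolation in a single update before a Tseng-type step. The specific update rule, parameter choices, and test problem here are hypothetical illustrations of the general idea, not the paper's Algorithms 3.1 and 3.2.

```python
import numpy as np

def inertial_viscosity_tseng(resolvent, B, f, x0, x1,
                             lam=0.2, theta=0.3, iters=5000):
    """Hypothetical inertial-viscosity Tseng sketch (assumed update rule).

    First step computes inertial extrapolation and viscosity together:
        w_n = alpha_n*f(x_n) + (1 - alpha_n)*(x_n + theta*(x_n - x_{n-1})),
    then a Tseng forward-backward-forward step is applied to w_n.
    """
    x_prev, x = np.asarray(x0, float), np.asarray(x1, float)
    for n in range(1, iters + 1):
        alpha = 1.0 / (n + 1)                    # alpha_n -> 0, sum = infinity
        w = alpha * f(x) + (1 - alpha) * (x + theta * (x - x_prev))
        y = resolvent(w - lam * B(w))            # forward-backward on w_n
        x_prev, x = x, y + lam * (B(w) - B(y))   # forward correction
    return x

# Illustrative inclusion: B strongly monotone linear, A = N_{[0,1]^2}.
M = np.array([[1.0, 1.0], [-1.0, 1.0]])
c = np.array([1.0, 0.5])
B = lambda x: M @ x - c
proj = lambda x: np.clip(x, 0.0, 1.0)
f = lambda x: 0.5 * x                            # a 0.5-contraction

x_star = inertial_viscosity_tseng(proj, B, f, np.zeros(2), np.zeros(2))
```

For this strongly monotone example the unique zero is (0.25, 0.75), and the iterates approach it at a rate governed by α_n.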
The organization of our paper is as follows: In Section 2, we recall some useful definitions and preliminary results which are needed for the convergence analyses of our algorithms. In Section 3, we present our algorithms and their convergence analyses. In Section 4, we give some applications of our main results. In Section 5, we provide some numerical examples to illustrate our algorithms and compare them with some existing related algorithms in the literature. We conclude our paper with Section 6.

PRELIMINARIES
Let K be a nonempty, closed, and convex subset of the real Hilbert space ℋ and let {x_n} be a sequence in ℋ. By "x_n ⇀ x" and "x_n → x," we denote the weak and the strong convergence, respectively, of the sequence {x_n} to a point x ∈ ℋ. The following equality is well known: (2.1)

Definition 2.1 ([23]). A mapping T ∶ ℋ → ℋ is said to be:
(i) L-Lipschitz continuous if there exists L > 0 such that ‖Tx − Ty‖ ≤ L‖x − y‖ for all x, y ∈ ℋ. If L ∈ [0, 1), then T is said to be a strict contraction. If L = 1, then T is said to be nonexpansive;
(ii) β-strongly monotone if there exists a constant β > 0 such that ⟨Tx − Ty, x − y⟩ ≥ β‖x − y‖² for all x, y ∈ ℋ;
(iii) α-inverse strongly monotone if there exists a constant α > 0 such that ⟨Tx − Ty, x − y⟩ ≥ α‖Tx − Ty‖² for all x, y ∈ ℋ.

Remark 2.1. From the above definition, it is easy to see that if T is α-inverse strongly monotone, then it is monotone and (1/α)-Lipschitz continuous.

Please see [24] for the definitions and facts given below. For a set-valued operator A ∶ ℋ ⇉ ℋ, the graph of A, which we denote by gr(A), is defined by gr(A) = {(x, u) ∈ ℋ × ℋ ∶ u ∈ Ax}. The operator A is called monotone if ⟨x − y, u − v⟩ ≥ 0 for all (x, u), (y, v) ∈ gr(A), and maximal monotone if gr(A) is not properly contained in the graph of any other monotone operator. The resolvent with parameter λ > 0 of a maximal monotone operator A is defined by J_{λA} = (I + λA)^{−1}. The metric projection of ℋ onto K, denoted by P_K, is the mapping that assigns to each point x ∈ ℋ its unique nearest point in K. The subdifferential ∂f of a proper convex function f at x ∈ ℋ is defined by ∂f(x) = {u ∈ ℋ ∶ f(y) ≥ f(x) + ⟨u, y − x⟩ for all y ∈ ℋ}. The normal cone of K at the point x ∈ ℋ, denoted by N_K(x), is defined by N_K(x) = {u ∈ ℋ ∶ ⟨u, y − x⟩ ≤ 0 for all y ∈ K} if x ∈ K, and N_K(x) = ∅ otherwise. It is known that ∂i_K = N_K, where i_K is the indicator function of K, and that ∂i_K is a maximal monotone operator. In addition, for each λ > 0, J_{λN_K} = P_K.

We recall the following important lemmata that are useful in our convergence analysis.
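The facts recalled above imply that, for constraint sets, the resolvent reduces to a projection. A short Python illustration for a box K (our own example):

```python
import numpy as np

# For K = [lo, hi]^d and A = N_K (equivalently the subdifferential of the
# indicator i_K), the resolvent (I + lam*A)^{-1} equals the projection P_K
# for every lam > 0: it is the coordinatewise clipping map.
def resolvent_box(x, lo=0.0, hi=1.0):
    return np.clip(x, lo, hi)

p = resolvent_box(np.array([-1.5, 0.3, 2.7]))  # projects onto [0, 1]^3
```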

Lemma 2.1 ([25]). Let ℋ be a real Hilbert space. Suppose that f ∶ ℋ → ℋ is κ-Lipschitz and η-strongly monotone over a closed and convex subset K ⊂ ℋ. Then the variational inequality problem: find x* ∈ K such that ⟨f(x*), x − x*⟩ ≥ 0 for all x ∈ K, has a unique solution.

Lemma 2.2 ([26]). Let {s_n} be a sequence of nonnegative real numbers satisfying the relation

s_{n+1} ≤ (1 − t_n)s_n + t_n δ_n, n ≥ 1,

where {t_n} ⊂ (0, 1) and {δ_n} ⊂ ℝ satisfy the following conditions: Σ_{n=1}^∞ t_n = ∞ and lim sup_{n→∞} δ_n ≤ 0. Then lim_{n→∞} s_n = 0.

Lemma 2.3 ([27]). Let {Γ_n} be a sequence of real numbers that does not decrease at infinity, in the sense that there exists a subsequence {Γ_{n_j}} of {Γ_n} such that Γ_{n_j} < Γ_{n_j+1} for all j ∈ ℕ. Also, consider the sequence of integers {τ(n)}_{n≥n_0} defined by

τ(n) = max{k ≤ n ∶ Γ_k < Γ_{k+1}}.

Then {τ(n)}_{n≥n_0} is an increasing sequence satisfying lim_{n→∞} τ(n) = ∞, and, for all n ≥ n_0, the following two estimates hold:

Γ_{τ(n)} ≤ Γ_{τ(n)+1} and Γ_n ≤ Γ_{τ(n)+1}.
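Lemma 2.2 can be checked numerically: with t_n = 1/(n+1) (divergent sum) and δ_n = 1/√(n+1) → 0, the recursion forces s_n → 0. A quick Python check (our own illustration):

```python
import math

# Worst case of Lemma 2.2's bound, taken with equality:
# s_{n+1} = (1 - t_n)*s_n + t_n*d_n, with t_n = 1/(n+1) (sum diverges)
# and d_n = 1/sqrt(n+1) -> 0, starting from s_1 = 10.
s = 10.0
for n in range(1, 200_000):
    t = 1.0 / (n + 1)
    d = 1.0 / math.sqrt(n + 1)
    s = (1 - t) * s + t * d
# s now decays roughly like 2/sqrt(n), so it is close to 0
```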

MAIN RESULTS
In this section, we present our iterative schemes and their convergence analyses. We make the following assumptions.
(f) For some ε > 0, {ε_n} is a positive sequence satisfying 0 ≤ ε_n < ε and lim_{n→∞} ε_n/α_n = 0.

Now we present our first algorithm.
Remark 3.1. Using Step 1 and Assumption 3.1(f), we find that

Therefore, there exists M_1 > 0 such that

Also, the sequence {λ_n} generated by Algorithm 3.1 is a decreasing sequence satisfying

Lemma 3.1. The sequence {x_n} generated by Algorithm 3.1 is bounded.
Proof. Let x ∈ Ω. Then

(3.4)

In view of the algorithm, it is not difficult to see that

Putting (3.5) in (3.4), we find that

We also have 0 ∈ (A + B)x and By_n + v_n ∈ (A + B)y_n. Then, by the monotonicity of (A + B), we have

This implies that

Using (3.6) and (3.7), we now obtain

In view of Remark 3.1, we also have

Hence there exists n_0 ∈ ℕ such that

Using (3.8), we find that, for n ≥ n_0,

It follows from (3.1) that

(3.10)

Using (3.9), (3.10), and mathematical induction, we find that

This implies that {‖x_n − x‖} is bounded. Consequently, {x_n} is bounded, as asserted. □

Theorem 3.1. The sequence {x_n} generated by Algorithm 3.1 converges strongly to x ∈ Ω, where x is the unique solution to the following variational inequality problem: find x ∈ Ω such that

(3.11)

Proof. It is obvious that the mapping (I − f) is (1 + ρ)-Lipschitz and (1 − ρ)-strongly monotone; see, for example, [28]. Therefore, by Lemma 2.1, there exists a unique point x ∈ Ω such that (3.11) is satisfied. Let

Therefore, using (3.1) and (2.1), and noting that f is a ρ-contraction, we get

and

Thus, it follows from (3.8) and (3.12) that

In view of the monotonicity of (A + B), this implies that

Therefore, it follows from (3.20) that

(3.21)

Taking the limit as n → ∞ in (3.21), we obtain

The maximal monotonicity of A + B now implies that 0 ∈ (A + B)x.
Next, we prove that {x_n} converges strongly.

Case 1: Suppose {‖x_n − x‖} is eventually monotonically decreasing. Using (3.13), we immediately see that

It then follows from (3.24) that

lim sup

Therefore, by applying Lemma 2.2 to (3.23), we infer that x_n → x ∈ Ω, where x is the unique solution to the variational inequality (3.11).

Case 2: Suppose {‖x_n − x‖} is not eventually monotonically decreasing. Let τ ∶ ℕ → ℕ be the sequence defined, for all n ≥ N_0 (for some N_0 large enough), by

Then {τ(n)} is an increasing sequence satisfying

Following arguments similar to those used in the proof of Case 1, we obtain

In both cases, we have shown that {x_n} converges strongly to x ∈ Ω, where x is the unique solution to the variational inequality problem (3.11). □

Next, we present another inertial-type algorithm for approximating a solution to (1.1).
Lemma 3.2. The sequence {x_n} generated by Algorithm 3.2 is bounded.
Proof. Let x ∈ Ω. Then it follows from (3.28) that

Continuing from (3.31) by mathematical induction and using (3.9), we get

Now it is clear that {x_n} is indeed bounded, as asserted. □

Theorem 3.2. The sequence {x_n} generated by Algorithm 3.2 converges strongly to x ∈ Ω, where x is the unique solution to the following variational inequality problem: find x ∈ Ω such that

Proof. We also have

(3.34)

Using (3.28), (3.33), and (3.34), we get

where t_n ∶= α_n(1 − ρ) and

Employing (3.8) and (3.35), we now obtain

APPLICATIONS
In this section and hereafter, our numerical experiments are performed in Windows 8 using MATLAB R2022a. They are run on a desktop computer with an Intel(R) Core(TM) i5-3470 CPU @ 3.20 GHz and 8 GB RAM.

Split feasibility problem
Let K and Q be nonempty, closed, and convex subsets of the real Hilbert spaces ℋ_1 and ℋ_2, respectively, and let A ∶ ℋ_1 → ℋ_2 be a bounded linear operator with adjoint A*. The split feasibility problem (SFP) [29] is to:

find x* ∈ K such that Ax* ∈ Q. (4.1)

The SFP has been applied to model some inverse problems in phase retrieval, medical image reconstruction, and intensity-modulated radiation therapy [30]. This problem has been studied by a host of authors, who have also proposed and analyzed several iterative algorithms for solving it; see, for example, [30-32] and the references therein. An equivalent formulation of the SFP is the following minimization problem:

min_{x∈K} g(x) ∶= (1/2)‖Ax − P_Q Ax‖².

The objective function g is continuously differentiable with gradient ∇g(x) = A*(Ax − P_Q Ax). Note that ∇g is (1/‖A‖²)-inverse strongly monotone, hence monotone and ‖A‖²-Lipschitz continuous by Remark 2.1. It is not difficult to see that x* solves (4.1) if and only if it solves the corresponding fixed point equation. From (2.2) and (4.2), it follows that the SFP (4.1) can be cast as the VIP (1.1). Therefore, letting A = ∂i_K and B = ∇g in Algorithms 3.1 and 3.2, we obtain that the resulting algorithms converge strongly to a solution of the SFP (4.1).
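A standard way to exploit this reformulation is a CQ-type projected-gradient iteration x_{n+1} = P_K(x_n − λ∇g(x_n)), with λ ∈ (0, 2/‖A‖²). The sketch below (with illustrative boxes K, Q and matrix A of our own choosing, not the paper's algorithms) finds a point of K whose image lies in Q.

```python
import numpy as np

# CQ-type projected gradient for the SFP: find x in K with A x in Q,
# via x <- P_K(x - lam * A^T (A x - P_Q(A x))).
A = np.array([[2.0, 1.0], [0.0, 1.0]])
P_K = lambda x: np.clip(x, 0.0, 1.0)         # K = [0, 1]^2
P_Q = lambda y: np.clip(y, 1.0, 2.0)         # Q = [1, 2]^2

lam = 1.0 / np.linalg.norm(A, 2) ** 2        # lam in (0, 2/||A||^2)
x = np.zeros(2)
for _ in range(5000):
    Ax = A @ x
    x = P_K(x - lam * A.T @ (Ax - P_Q(Ax)))  # gradient of g, then project
```

For these sets the SFP is solvable (any x with x_2 = 1 and x_1 ∈ [0, 0.5] works), so the residual ‖Ax − P_Q(Ax)‖ is driven to zero.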

Image restoration problem
We shall consider the following linear model used in image restoration problems:

b = Ax + w,

where b is the degraded image obtained by the action of the blurring matrix A on the original image x, and w is the noise term. For a typical grayscale image M pixels wide by N pixels high, each pixel has integer values in the range K = [0, 255], where the lower bound, 0, is black, and the upper bound, 255, is white [33]. In this case we are considering the Hilbert space ℝ^D equipped with the standard Euclidean norm ‖•‖_2, where D = M × N. An approach to recovering the original image x from the blurred image b calls for solving the following constrained convex minimization problem (see, for example, [34]):

min_{x∈K} g(x) ∶= (1/2)‖Ax − b‖². (4.4)

The minimization problem (4.4) is equivalent to the following inclusion problem: 0 ∈ (∂i_K + ∇g)x, where ∇g(x) = A*(Ax − b). Since the operator ∂i_K is maximal monotone and the operator ∇g is monotone and Lipschitz continuous with constant ‖A‖², we can apply our algorithms to restore the original image. We use the three grayscale images of the Cameraman, the Medical image, and the Tire from the MATLAB Image Toolbox as our test images and degrade each by a Gaussian 7 × 7 blur kernel with standard deviation 4. We then apply our algorithms, in comparison with Algorithm 2 (VTTM) in [16] and Algorithm 2 (IRFBSM) in [11], to solve the deblurring problem. The quality of the restored image is measured by the magnitude of the signal-to-noise ratio (SNR) in decibels (dB).

Let A ∶ ℋ ⇉ ℋ, B ∶ ℋ → ℋ, and f ∶ ℋ → ℋ be defined, respectively, for all x ∈ ℋ, by
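The constrained least-squares model above can be prototyped in a few lines. The following Python sketch is a toy 1-D analogue (our own well-conditioned moving-average "blur" matrix and synthetic signal, not the paper's MATLAB experiment), solved by projected gradient, that is, the forward-backward iteration with A = ∂i_K and B = ∇g.

```python
import numpy as np

# Toy 1-D analogue of b = A x + w (noiseless, w = 0): A is a symmetric
# smoothing matrix, K = [0, 255]^n is the pixel-range box constraint.
rng = np.random.default_rng(0)
n = 50
A = 0.6 * np.eye(n) + 0.2 * (np.eye(n, k=1) + np.eye(n, k=-1))  # blur-like
x_true = rng.uniform(0.0, 255.0, n)      # synthetic "image"
b = A @ x_true                           # blurred signal

lam = 1.0 / np.linalg.norm(A, 2) ** 2    # step size in (0, 2/||A||^2)
x = np.full(n, 128.0)                    # start from mid-gray
for _ in range(5000):
    # projected gradient: x <- P_K(x - lam * grad g(x)), grad g = A^T(Ax - b)
    x = np.clip(x - lam * A.T @ (A @ x - b), 0.0, 255.0)
```

Because this A is invertible and well conditioned, the iterates recover x_true; real blur operators are far more ill conditioned, which is where the accelerated schemes pay off.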
The values of E_n = 0.5‖x_n − J_A(x_n − Bx_n)‖₂² against the n-th iterates x_n for each choice are given in Figure 6 and Table 3.

(3.36)

Therefore, proceeding similarly to the steps taken after (3.13) in the proof of Theorem 3.1, we obtain the asserted result. □

Some of the consequences of our results are given next.

Corollary 3.1. Let ℋ be a real Hilbert space, A ∶ ℋ ⇉ ℋ be a maximal monotone operator, and B ∶ ℋ → ℋ be a monotone and L-Lipschitz continuous operator. Assume that Ω ∶= {x ∈ ℋ ∶ 0 ∈ (A + B)x} ≠ ∅. Let {x_n} be the sequence generated by Algorithm 3.3 under Assumption 3.1 (e) and (f). Then the sequence {x_n} converges strongly to x ∈ Ω, where x = P_Ω u.

Proof. Put f(x) = u in Algorithm 3.1 and apply Theorem 3.1. □

Corollary 3.2. Let ℋ be a real Hilbert space, A ∶ ℋ ⇉ ℋ be a maximal monotone operator, and B ∶ ℋ → ℋ be a monotone and L-Lipschitz continuous operator. Assume that Ω ∶= {x ∈ ℋ ∶ 0 ∈ (A + B)x} ≠ ∅. Let {x_n} be the sequence generated by Algorithm 3.4 under Assumption 3.1 (e) and (f). Then the sequence {x_n} converges strongly to x ∈ Ω, where x = P_Ω u.

Proof. Put f(x) = u in Algorithm 3.2 and apply Theorem 3.2. □

FIGURE 1 Section 4.2: top left: original image; top middle: blurred image; top right: restored image by Algorithm 3.1 with SNR = 35.4066; bottom left: restored image by Algorithm 3.2 with SNR = 35.4066; bottom middle: restored image by VTTM with SNR = 33.1575; bottom right: restored image by IRFBSM with SNR = 30.2582.

FIGURE 2 Section 4.2: top left: original image; top middle: blurred image; top right: restored image by Algorithm 3.1 with SNR = 42.1339; bottom left: restored image by Algorithm 3.2 with SNR = 42.1339; bottom middle: restored image by VTTM with SNR = 39.2252; bottom right: restored image by IRFBSM with SNR = 35.6827.

FIGURE 6 Left: Case IIa; right: Case IIb.