
An image restoration model combining mixed L1/L2 fidelity terms

Tongtong Jia (a), Yuying Shi (a,*), Yonggui Zhu (b), Lei Wang (a)

(a) Department of Mathematics and Physics, North China Electric Power University, China
(b) School of Science, Communication University of China, China

Article info

Article history: Received 12 May 2015; Revised 22 March 2016; Accepted 25 March 2016; Available online 28 March 2016.

Keywords: Image restoration; L1 and L2 fidelity terms; TV; Split-Bregman; Mixed noise

Abstract

Image restoration is a common problem in visual processing. In this paper, a modified minimization model is presented, which combines the L1 and L2 fidelity terms with a combined quadratic L2 and TV regularizer, just as the regularizer of Cai et al. (2013). The combined regularizer has the advantages of preserving desirable edges and ensuring that several kinds of noise can be removed cleanly. The split-Bregman algorithm is employed to solve this model efficiently, and a convergence analysis is also given. Moreover, we extend the proposed model and algorithm to image restoration involving blurry images and color images. Experimental results show that our proposed model and algorithm perform well, both visually and in ISNR values, for different kinds of blur and noise, including mixed noise.

© 2016 Elsevier Inc. All rights reserved.

http://dx.doi.org/10.1016/j.jvcir.2016.03.022
This paper has been recommended for acceptance by Joachim Weickert.
* Corresponding author. E-mail addresses: jttncepu@163.com (T. Jia), yyshi@amss.ac.cn (Y. Shi), ygzhu@cuc.edu.cn (Y. Zhu), wanglei2239@126.com (L. Wang).

1. Introduction

Image restoration has been widely applied in remote sensing, medical imaging, video cameras and so on (see, e.g., [38,24,33,3,10]). The image restoration problem assumes that an observed image $g$ decomposes into the true image $u$ and an additive noise $n$, that is,

$$ g = Au + n, $$

where $A$ can be the identity operator or a blurring operator.
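For concreteness, such an observation can be synthesized in MATLAB (with the Image Processing Toolbox); this is only an illustrative sketch, and the kernel size and noise levels below are arbitrary choices, not values prescribed by the paper:

    % Synthesize an observation g = Au + n from a clean test image.
    u   = im2double(imread('cameraman.tif'));   % clean image in [0,1]
    psf = fspecial('gaussian', 7, 1);           % blurring operator A as a convolution kernel
    g   = imfilter(u, psf, 'circular');         % periodic boundary, as assumed in Section 2
    g   = imnoise(g, 'gaussian', 0, 0.01);      % additive Gaussian noise
    g   = imnoise(g, 'salt & pepper', 0.04);    % impulse (salt-and-pepper) noise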

It is well known that restoring an image $u$ is an ill-conditioned problem, so a regularization method should be used in the restoration process. Tikhonov regularization is one of the classical regularization methods; it adds $\|Lu\|_2^2$ as the regularization term. The matrix $L$ is a regularization operator, often chosen as the identity matrix or as a matrix approximating the first- or second-order derivative operator [37]. As is known, the $\|\cdot\|_2$ norm yields over-smoothed restored images and often fails to adequately preserve important details such as sharp edges. Since image edges carry important information about the contours of an object and provide sound cues for feature extraction and target detection, an important problem is how to preserve edges in the restored images.
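Written out (with $\mu > 0$ a regularization weight; this notation is ours, not the paper's), the Tikhonov-regularized restoration problem described above reads

$$ \min_u \; \|g - Au\|_2^2 + \mu \|Lu\|_2^2 . $$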

The $\|Lu\|_1$ regularization is considered to remedy this. More related work promoting the use of $L_1$ regularization can be found in [12]. One of the most successful regularizations is the Rudin–Osher–Fatemi (ROF) model with total variation (TV) regularization, proposed by Rudin et al. [33]. However, the ROF model may lead to the staircase effect, due to the TV term $\|\nabla u\|_1$. In order to alleviate this problem, many scholars have proposed a number of modifications (see, e.g., [41,14,6,4,22]). You and Mostafa [41] proposed a fourth-order diffusion equation based on the direction of curvature. In [14], Gilboa et al. presented a nonlinear filtering model in the complex domain. Bredies et al. [4] discussed the total generalized variation (TGV) in detail, and He et al. [16] used TGV regularization together with the augmented Lagrangian method for image restoration, which is effective in suppressing the staircase effect. In [22,13], the non-local total variation (NLTV) was proposed, which does not suffer from the staircase effect in image deblurring and denoising. Gilboa and Osher [13] showed that NLTV handles textures and repetitive structures better than local TV with an $L_1$ fidelity term. Furthermore, Mumford and Shah [27,28] proposed an energy minimization problem that approaches the true solution by finding optimal piecewise smooth approximations.

Because of the nonconvexity of the Mumford–Shah (MS) model, it is a challenge to find or approximate its minimizer. Instead of tackling the challenging problem of solving the MS model directly, Cai et al. [6] proposed to solve the approximate model

$$ \min_u \left\{ \frac{\gamma_1}{2}\|g - Au\|_2^2 + \frac{\alpha}{2}\|\nabla u\|_2^2 + \|\nabla u\|_1 \right\}, \qquad (1) $$


… local approach than TV. Conversely, model (1) can be solved very quickly by popular algorithms such as the split-Bregman algorithm [15,36] or the Chambolle–Pock method [8].

It has been proved in [6] that model (1) has one and only one solution $u$, and good restorations of noisy and blurry images were shown in Examples 3 and 4 of [6]. Motivated by the attractive feature that model (1) is effective for image restoration, we modify model (1) in this paper by keeping its combined regularization term. The choice of the data-fidelity term usually depends on the type of noise contained in the measured image. As is known, the $L_2$ fidelity term $\|g - Au\|_2^2$ has the important advantage of removing Gaussian noise and is broadly used in many papers (see, e.g., [7,15]). However, it usually yields unsatisfactory restored images in the presence of impulse noise (e.g., salt-and-pepper noise). Note that for impulse noise, an $L_1$ fidelity term has been used successfully in the literature (see, e.g., [1,29,30]). Based on model (1), the corresponding model for removing impulse noise can also be derived:

$$ \min_u \left\{ \frac{\gamma_2}{2}\|g - Au\|_1 + \frac{\alpha}{2}\|\nabla u\|_2^2 + \|\nabla u\|_1 \right\}, \qquad (2) $$

where $\gamma_2$ is positive.

In order to handle the removal of a mixture of Gaussian and impulse noise, a combined $L_1/L_2$ fidelity term was suggested in [17], which leads to the following model when used with the regularizers of (1) and (2):

$$ \min_u \; d(u) := \frac{\gamma_1}{2}\|g - Au\|_2^2 + \frac{\gamma_2}{2}\|g - Au\|_1 + \frac{\alpha}{2}\|\nabla u\|_2^2 + \|\nabla u\|_1, \qquad (3) $$

where $\alpha > 0$, $\gamma_1 \ge 0$, $\gamma_2 \ge 0$ are parameters that balance the fidelity terms and the regularization terms. Since model (3) interpolates between the $L_2$ and $L_1$ fidelity terms, it can treat different kinds of noise. In fact, if $\gamma_1 > 0$ and $\gamma_2 = 0$, model (3) reduces to model (1), and if $\gamma_2 > 0$ and $\gamma_1 = 0$, model (3) simplifies to model (2).
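As a quick sanity check, the objective $d(u)$ can be evaluated numerically; the sketch below is our own helper (not code from the paper), assuming isotropic TV with periodic forward differences and taking $A$ as a function handle:

    function E = energy_mixed(u, g, A, gamma1, gamma2, alpha)
    % Evaluate d(u) of model (3); A is a function handle (use @(x) x for denoising).
    r  = g - A(u);
    ux = [diff(u, 1, 2), u(:, 1) - u(:, end)];   % periodic forward difference in x
    uy = [diff(u, 1, 1); u(1, :) - u(end, :)];   % periodic forward difference in y
    tv = sqrt(ux.^2 + uy.^2);                    % isotropic TV magnitude
    E  = gamma1/2 * sum(r(:).^2) + gamma2/2 * sum(abs(r(:))) ...
       + alpha/2 * sum(ux(:).^2 + uy(:).^2) + sum(tv(:));
    end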

The variational functional $d(u)$ can be minimized by many numerical methods, such as the fixed-point method [11], the gradient descent method [33,31], the alternating minimization method [38], the algebraic multigrid (AMG) method [39], and so on.

It should be noted that the underlying task in this paper is image restoration. The contributions of the paper are as follows: firstly, motivated by [6,17], we combine the $L_1$ and $L_2$ fidelity terms with the regularization terms of Cai et al. [6] to construct a mixed restoration model. Secondly, we apply the split-Bregman method to minimize the mixed model and analyze its convergence. Finally, we test several gray images with different noises and blurs, and even a color image, to show the efficiency and accuracy of our algorithm.

The rest of the paper is organized as follows. In Section 2, we show the detailed implementation of the split-Bregman algorithm for solving model (3). In Section 3, we present the convergence analysis of the proposed algorithm. Experimental results are exhibited in Section 4. A brief summary is given in Section 5.

2. The split-Bregman algorithm for solving model (3)

Applying the split-Bregman iteration, we obtain the following iterative scheme (5) and (6), which approximates (4) depending on the choices of $b^k$ and $e^k$:

$$ (u^{k+1}, d^{k+1}, q^{k+1}) = \arg\min_{u,d,q} \Big\{ \frac{\gamma_1}{2}\|g - Au\|_2^2 + \frac{\gamma_2}{2}\|q\|_1 + \frac{\alpha}{2}\|\nabla u\|_2^2 + \|d\|_1 + \frac{\lambda_1}{2}\|d - \nabla u - b^k\|_2^2 + \frac{\lambda_2}{2}\|q - (g - Au) - e^k\|_2^2 \Big\}, \qquad (5) $$

where $\lambda_1 > 0$, $\lambda_2 > 0$ are parameters, $k$ is the iteration number, and the update formulas for $b^{k+1}$ and $e^{k+1}$ are

$$ b^{k+1} = b^k + \nabla u^{k+1} - d^{k+1}, \qquad e^{k+1} = e^k + (g - Au^{k+1}) - q^{k+1}. \qquad (6) $$

Note that the solution $u$ of (5) and (6) is close to that of (4).

Our primary aim is to solve the minimization problem (5) and (6). By means of the alternating optimization technique, problem (5) can be split into the following three alternating subproblems, depending on the choices of $q^k$, $e^k$, $d^k$ and $b^k$:

$$ \begin{cases}
u^{k+1} = \arg\min_u \Big\{ \dfrac{\gamma_1}{2}\|g - Au\|_2^2 + \dfrac{\alpha}{2}\|\nabla u\|_2^2 + \dfrac{\lambda_1}{2}\|d^k - \nabla u - b^k\|_2^2 + \dfrac{\lambda_2}{2}\|q^k - (g - Au) - e^k\|_2^2 \Big\}, \\[4pt]
d^{k+1} = \arg\min_d \Big\{ \|d\|_1 + \dfrac{\lambda_1}{2}\|d - \nabla u^{k+1} - b^k\|_2^2 \Big\}, \\[4pt]
q^{k+1} = \arg\min_q \Big\{ \dfrac{\gamma_2}{2}\|q\|_1 + \dfrac{\lambda_2}{2}\|q - (g - Au^{k+1}) - e^k\|_2^2 \Big\}.
\end{cases} \qquad (7) $$

We now develop an efficient algorithm to solve the three subproblems (7). To this end, we need the following definition of the subdifferential [32].

Definition. If $f: U \to \mathbb{R}$ is a real-valued convex function defined on a convex open set $U$ in the Euclidean space $\mathbb{R}^n$ ($n \ge 1$), a vector $v$ in that space is called a subgradient at a point $x_0 \in U$ if for any $x \in U$ one has

$$ f(x) - f(x_0) \ge v \cdot (x - x_0), $$

where the dot denotes the dot product. The set of all subgradients at $x_0$ is called the subdifferential at $x_0$ and is denoted $\partial f(x_0)$.
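A standard one-dimensional example (supplied here for illustration, not taken from the paper) is $f(x) = |x|$, whose subdifferential is

$$ \partial |x| = \begin{cases} \{-1\}, & x < 0, \\ [-1, 1], & x = 0, \\ \{+1\}, & x > 0, \end{cases} $$

which is exactly the set-valued "sign" behind the shrinkage (soft-thresholding) formulas (10) and (11) below.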

From [32], we know that $\partial f(x_0)$ is a nonempty convex compact set, and when $f$ is differentiable at $x$, $\partial f(x) = \{f'(x)\}$. According to the properties of the subdifferential, (7) yields the following decoupled solutions.

1. For the $u$-subproblem, we derive the optimality condition

$$ [(\gamma_1 + \lambda_2) A^T A - (\alpha + \lambda_1)\Delta]\, u^{k+1} = (\gamma_1 + \lambda_2) A^T g + \lambda_2 A^T(e^k - q^k) + \lambda_1 \nabla^T(d^k - b^k), \qquad (8) $$

where $\Delta = -(\nabla_x^T \nabla_x + \nabla_y^T \nabla_y)$ and $A^T$ is the conjugate transpose of $A$.

Under periodic boundary conditions the blurring matrix $A$ is circulant and easy to compute with, so we adopt periodic boundary conditions in our implementation. Note that periodic boundaries may cause undesired artifacts in the recovered images; many strategies have been proposed to reduce them (e.g., [25]), and we will consider them in a follow-up paper. To reduce the complexity of the algorithm, we apply the fast Fourier transform (FFT) [23,9] to solve the $u$-subproblem. That is,

$$ u^{k+1} = \mathcal{F}^{-1}\!\left[ \frac{\mathcal{F}\big((\gamma_1 + \lambda_2) A^T g + \lambda_2 A^T(e^k - q^k) + \lambda_1 \nabla^T(d^k - b^k)\big)}{\mathcal{F}\big((\gamma_1 + \lambda_2) A^T A - (\alpha + \lambda_1)\Delta\big)} \right]. \qquad (9) $$

2. The $d$-subproblem can be solved by a shrinkage formulation (see, e.g., [15,34]); the closed-form solution is

$$ d^{k+1} = (\nabla u^{k+1} + b^k)\, \max\!\left( 1 - \frac{1}{\lambda_1 |\nabla u^{k+1} + b^k|},\, 0 \right). \qquad (10) $$

3. The $q$-subproblem can also be solved exactly by the closed-form solution

$$ q^{k+1} = (g - Au^{k+1} + e^k)\, \max\!\left( 1 - \frac{\gamma_2}{2\lambda_2 |g - Au^{k+1} + e^k|},\, 0 \right). \qquad (11) $$

Thus each subproblem can either be solved by the FFT, a fast solver, or has a closed-form solution.
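The two shrinkage steps (10) and (11) are one-line operations in MATLAB. The helpers below are our own sketch (the names shrink2/shrink1 are ours), with a small eps guard added to avoid division by zero:

    function [dx, dy] = shrink2(vx, vy, t)
    % Isotropic shrinkage of a two-component field, as in (10); t is the threshold.
    mag   = sqrt(vx.^2 + vy.^2);
    scale = max(1 - t ./ max(mag, eps), 0);
    dx = scale .* vx;
    dy = scale .* vy;
    end

    function q = shrink1(v, t)
    % Scalar soft-thresholding, as in (11); equivalent to v .* max(1 - t./|v|, 0).
    q = sign(v) .* max(abs(v) - t, 0);
    end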

We summarize the algorithm as follows:

Proposed Algorithm 1.
1. Initialization: set $u^0 = g$, $b^0 = d^0 = 0$, $q^0 = g - Au^0$, $e^0 = q^0$.
2. For $k = 0, 1, \ldots$, do
   a. solve (9) to get $u^{k+1}$;
   b. solve (10) to get $d^{k+1}$;
   c. solve (11) to get $q^{k+1}$;
   d. update $b^{k+1}$, $e^{k+1}$ by (6).
3. End do when some stopping rule is met.
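For concreteness, a compact MATLAB sketch of Algorithm 1 is given below. It is our own illustration, not the authors' released code: it assumes periodic boundary conditions, represents $A$ by its point spread function psf (use psf = 1 for pure denoising), uses psf2otf from the Image Processing Toolbox, and reuses the shrink2/shrink1 helpers sketched above; all parameter values are placeholders.

    function u = split_bregman_mixed(g, psf, alpha, gamma1, gamma2, lam1, lam2, maxit, tol)
    % Split-Bregman iteration for model (3); a sketch under the stated assumptions.
    [m, n] = size(g);
    otfA  = psf2otf(psf, [m, n]);                   % eigenvalues of A (periodic BC)
    otfDx = psf2otf([1, -1], [m, n]);               % forward difference in x
    otfDy = psf2otf([1; -1], [m, n]);               % forward difference in y
    lapl  = abs(otfDx).^2 + abs(otfDy).^2;          % eigenvalues of grad^T grad = -Delta
    denom = (gamma1 + lam2) * abs(otfA).^2 + (alpha + lam1) * lapl;

    Au  = @(x) real(ifft2(otfA  .* fft2(x)));       % apply A
    At  = @(x) real(ifft2(conj(otfA) .* fft2(x)));  % apply A^T
    Dx  = @(x) real(ifft2(otfDx .* fft2(x)));  DxT = @(x) real(ifft2(conj(otfDx) .* fft2(x)));
    Dy  = @(x) real(ifft2(otfDy .* fft2(x)));  DyT = @(x) real(ifft2(conj(otfDy) .* fft2(x)));

    u  = g;  bx = zeros(m, n); by = bx; dx = bx; dy = bx;   % initialization as in step 1
    q  = g - Au(u);  e = q;

    for k = 1:maxit
        % u-subproblem: one FFT solve of the linear system (8), i.e. formula (9)
        rhs  = (gamma1 + lam2) * At(g) + lam2 * At(e - q) ...
             + lam1 * (DxT(dx - bx) + DyT(dy - by));
        unew = real(ifft2(fft2(rhs) ./ denom));
        % d-subproblem: isotropic shrinkage (10)
        [dx, dy] = shrink2(Dx(unew) + bx, Dy(unew) + by, 1/lam1);
        % q-subproblem: scalar shrinkage (11)
        q = shrink1(g - Au(unew) + e, gamma2 / (2*lam2));
        % Bregman updates (6)
        bx = bx + Dx(unew) - dx;   by = by + Dy(unew) - dy;
        e  = e + (g - Au(unew)) - q;
        % stopping rule: relative change of u (the err of Section 4)
        if norm(unew(:) - u(:)) / norm(unew(:)) < tol, u = unew; break; end
        u = unew;
    end
    end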

3. Convergence analysis

In this section, we elaborate on the rigorous convergence of the iterative algorithm proposed in the previous section. We first state the main theorem; the detailed proof, motivated by [26,5,35], is given in Appendix A. The main difference in the proof is that our model has the smoothing term $\|\nabla u\|_2^2$ (similar to the term $\|u\|_2^2$ in [26]) and the $L_1$ fidelity term $\|g - Au\|_1$, which require additional control. It is seen from (5) that the algorithm needs to solve several subproblems (7), which are convex and subdifferentiable. From the first-order optimality conditions, we derive

$$ \begin{cases}
[(\gamma_1 + \lambda_2)A^T A - (\alpha + \lambda_1)\Delta]\, u^{k+1} - (\gamma_1 + \lambda_2)A^T g + \lambda_2 A^T(q^k - e^k) - \lambda_1 \nabla^T(d^k - b^k) = 0, \\
p^{k+1} + \lambda_1 \big(d^{k+1} - \nabla u^{k+1} - b^k\big) = 0, \quad p \in \partial|d|, \\
t^{k+1} + \lambda_2 \big(q^{k+1} - (g - Au^{k+1}) - e^k\big) = 0, \quad t \in \partial|q|, \\
b^{k+1} = b^k + \nabla u^{k+1} - d^{k+1}, \qquad e^{k+1} = e^k + (g - Au^{k+1}) - q^{k+1},
\end{cases} \qquad (12) $$

where $d$, $b$, $p$ are written in component form as $d = (d_x, d_y)$, $b = (b_x, b_y)$, $p = (p_x, p_y)$.

Theorem 1. Let $u$ be a solution of model (3). Given $\gamma_1, \gamma_2, \lambda_1, \lambda_2, \alpha > 0$ in the alternating split-Bregman iteration scheme, we have

$$ \lim_{k\to\infty} d(u^k) = d(u). \qquad (13) $$

Moreover, if the solution is unique, we get

$$ \lim_{k\to\infty} \|u^k - u\|_2^2 = 0. \qquad (14) $$

The detailed proof is given in Appendix A.

4. Numerical experiments

In this section, we test two gray images ("cameraman" and "kitten") of size 256×256 pixels and a color image ("lena") of size 256×256×3 pixels (Fig. 1) to show the effectiveness of the proposed model and algorithm. The numerical examples are all implemented in Matlab (R2010b) on a laptop with an Intel(R) Core(TM)2 2.00 GHz CPU, 2.00 GB RAM, and Windows XP.

In the following examples, unless otherwise specified, the initial iterate $u^0$ is the corresponding contaminated image $g$. The CPU times recorded are in seconds. The noisy images are contaminated by Gaussian noise (GN) with variance $\sigma$ using the Matlab call "imnoise(u, 'gaussian', 0, $\sigma$)", or by salt-and-pepper noise (SPN) with noise density $d$ using the Matlab call "imnoise(u, 'salt & pepper', d)", where $u$ is the original image. For simplicity, $G(0.01)$ means GN with variance 0.01, $S(0.04)$ means SPN with noise density 0.04, and $G(0.04)+S(0.01)$ represents mixed noise: GN with variance 0.04 and SPN with noise density 0.01 (the same below). In all the following experiments, we use the prescribed relative error defined as follows:
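In MATLAB, these degradations correspond to the imnoise calls quoted above; for mixed noise we assume Gaussian noise is applied first and impulse noise second (the order is our assumption, not stated by the paper):

    g1 = imnoise(u, 'gaussian', 0, 0.01);                                  % G(0.01)
    g2 = imnoise(u, 'salt & pepper', 0.04);                                % S(0.04)
    g3 = imnoise(imnoise(u, 'gaussian', 0, 0.04), 'salt & pepper', 0.01);  % G(0.04)+S(0.01)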

$$ \mathrm{err} = \frac{\|u^{k+1} - u^k\|_2}{\|u^{k+1}\|_2}, $$

and set the stopping rule $\mathrm{err} = 10^{-5}$. We mainly compare the visual quality of the restored images and the improvement in signal-to-noise ratio (ISNR), which is defined as follows [2]:

$$ \mathrm{ISNR} = 10 \log_{10} \frac{\sum_{i,j}(u_{i,j} - g_{i,j})^2}{\sum_{i,j}(u_{i,j} - w_{i,j})^2}, $$

where $u_{i,j}$, $w_{i,j}$ and $g_{i,j}$ denote the pixel values of the original, restored and contaminated images, respectively. The larger the ISNR value, the better the restored result.
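This definition translates directly into MATLAB (a one-line sketch; u is the original, g the contaminated and w the restored image):

    isnr = 10 * log10( sum((u(:) - g(:)).^2) / sum((u(:) - w(:)).^2) );   % ISNR in dB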

4.1. Example 1 (Parameter selection)

In this part, we mainly discuss some basic guidelines on how to choose the three parameters ($\alpha$, $\gamma_1$, $\gamma_2$) in model (3). Different values of the parameters illustrate the influence of the three individual terms. The parameters $\gamma_1$ and $\gamma_2$ regulate the fidelity terms and, as demonstrated below, should not be chosen extremely large (cf. [40]). The parameter $\alpha$ regulates $\|\nabla u\|_2^2$ to avoid the staircase effect caused by the term $\|\nabla u\|_1$, but a too large $\alpha$ yields an over-smoothed restored image. A number of trials on the "cameraman" image with GN (see Fig. 2(a)) and SPN (see Fig. 2(e)) have been carried out.


Here, we fix $\lambda_1 = 10$, $\lambda_2 = 2$, and choose $\alpha \in \{0.1, 1, 5\}$, $\gamma_1 \in \{5, 10, 15, 50\}$, $\gamma_2 \in \{0.01, 0.1, 1, 10\}$ to show the effect of the three parameters on the restoration results for Fig. 2(a). We also fix $\lambda_1 = 2$, $\lambda_2 = 5$, and choose $\alpha \in \{1, 2, 5\}$, $\gamma_1 \in \{0.1, 1, 10, 100\}$, $\gamma_2 \in \{0.5, 1, 5, 15\}$ to show the effect of the three parameters on the restoration results for Fig. 2(e). A complete treatment of the restored images would require many pages, so the restored images are not shown here; the ISNR values are compared in Tables 1 and 2. From Tables 1 and 2, we draw the following three conclusions:

1. Usually, if $\alpha$ and $\gamma_2$ are fixed, the ISNR value first increases and then decreases as $\gamma_1$ grows. But when $\gamma_2$ is relatively large, a larger $\gamma_1$ gives a smaller ISNR value.
2. Usually, if $\alpha$ and $\gamma_1$ are fixed, the ISNR value first increases and then decreases as $\gamma_2$ becomes larger. But when $\gamma_1$ is relatively large, a larger $\gamma_2$ gives a smaller ISNR value.
3. Generally speaking, when $\gamma_1$ or $\gamma_2$ is relatively small, a larger $\alpha$ brings about a smaller ISNR value, but when $\gamma_1$ or $\gamma_2$ is relatively large, the ISNR value increases as $\alpha$ becomes larger.

It should be noted that these conclusions only give rough guidance for selecting the parameters. To obtain the best restored image, parameter selection should be done manually, combining the above guidance with experience for different images.

Fig. 1. Original images.

Fig. 2. Restoration results with different iterative initial values using model (3). First column: noisy images with different noises; second to fourth columns: restored images using different iterative initial values.

Table 1
The ISNR values of the "cameraman" image contaminated by GN (Fig. 2(a)) using different parameters.

Noise   α     γ1     γ2=0.01    γ2=0.1     γ2=1       γ2=10
GN      0.1    5     3.7410     3.9412     5.5958     0.0206
              10     6.2088     6.3180     6.9423     0.0196
              15     6.9627     6.9646     6.4512     0.0184
              50     3.0314     2.9730     2.3910     0.0155
        1      5     3.3182     3.5089     5.0797     0.0256
              10     5.7213     5.8329     6.5816     0.0242
              15     6.6847     6.7103     6.5170     0.0228
              50     3.4237     3.3691     2.8188     0.0197
        5      5     2.2330     2.3876     3.6775     1.6059
              10     4.3127     4.4083     5.2002     1.4533
              15     5.4737     5.5281     5.8594     1.3158
              50     4.5501     4.5112     4.1053     0.7657

Furthermore, for fixed $\alpha = 1$, $\lambda_1 = 10$, $\lambda_2 = 2$, we test the parameters $\gamma_1 = \gamma_2 = 1.0\mathrm{e}{+}17$, which are extremely large, for Fig. 2(a). The restoration of Fig. 2(a) using model (3) is then nearly the noisy image itself and the ISNR value is 3.0079e−15, which is close to 0. Likewise, if we set $\alpha = 1$, $\lambda_1 = 2$, $\lambda_2 = 5$, $\gamma_1 = \gamma_2 = 1.0\mathrm{e}{+}17$ for Fig. 2(e), the restored image is also just the noisy image and the ISNR value is 0.

4.2. Example 2 (Different initial iterative values)

Since the choice of the initial iterative value impacts the number of iterations and the computational time, we test two images ("cameraman" and "kitten") contaminated by GN (see Figs. 2(a) and 3(a)) and SPN (see Figs. 2(e) and 3(e)) with three different $u^0$:

$$ u^0 = g, \qquad u^0 = 0, \qquad u^0 = R, $$

where $g$ is the observed image, $0$ represents the zero matrix, and $R$ is a random matrix. The other parameters are set to $\alpha = 1$, $\lambda_1 = 2$, $\lambda_2 = 2$, $\gamma_1 = 11$, $\gamma_2 = 1$ for Figs. 2(a) and 3(a), and $\alpha = 1$, $\lambda_1 = 2$, $\lambda_2 = 5$, $\gamma_1 = 0.0001$, $\gamma_2 = 5$ for Figs. 2(e) and 3(e). The tolerance is $\mathrm{err} = 10^{-5}$ for both the "cameraman" and "kitten" images. We present the restoration results with different initial values in Figs. 2 and 3. The ISNR values and CPU times (in seconds) are listed in Table 3.

We observe from Figs. 2 and 3 that the restoration results are all visually satisfactory. Table 3 shows that different initial values lead to almost the same ISNR values for the same noisy image but to different computational times, which result from the common stopping criterion. The longest computational time occurs when the random matrix is used as the initial value. These facts show that our algorithm is robust and effective. Moreover, the results verify that our proposed algorithm is convergent.

Table 2
The ISNR values of the "cameraman" image contaminated by SPN (Fig. 2(e)) using different parameters.

Noise   α     γ1      γ2=0.5     γ2=1       γ2=5        γ2=15
SPN     1     0.1     0.4202     3.2774     12.0938     5.9400e−4
              1       2.6270     4.3567     12.1019     5.3911e−4
              10      5.2314     5.3727      3.8417     2.7982e−4
              100     0.7273     0.6996      0.4453     2.9909e−5
        2     0.1     0.2795     2.8457     10.8856     1.8521
              1       2.2988     3.9273     11.1819     1.6720
              10      5.7080     6.0316      5.6241     0.8400
              100     1.0253     1.0001      0.7598     0.1393
        5     0.1     0.0263     2.0266      8.7120     7.8875
              1       1.6749     3.1127      9.0480     7.4313
              10      5.6931     6.1984      8.0068     4.6520
              100     1.8082     1.7924      1.6080     0.9764

Fig. 3. Restoration results with different iterative initial values using model (3). First column: noisy images with different noises; second to fourth columns: restored images using different iterative initial values.

Table 3
The ISNR values and computational times (in seconds) using different initial values for Figs. 2(a), (e) and 3(a), (e).

Image        Noise   Initial value   ISNR       Time (s)
Cameraman    GN      g               6.7008     13.000287
                     0               6.7006     13.018911
                     R               6.7005     13.560725
             SPN     g               12.9243    20.371140
                     0               12.9243    20.680721
                     R               12.9242    20.715392
Kitten       GN      g               4.3116     10.214072
                     0               4.3119     10.594537
                     R               4.3119     10.728288
             SPN     g               11.7711    16.015243
                     0               11.7711    15.321070
                     R               11.7710    16.262321


4.3. Example 3 (Comparisons with pure noises)

This example tests the "cameraman" image to demonstrate the benefit of combining the $L_1$ and $L_2$ fidelity terms in model (3). Here, in order to get relatively fair results, we compare model (1) (resp. (2)) with model (3) for images corrupted by Gaussian (resp. salt-and-pepper) noise. Note that if we set $\lambda_2 = \gamma_2 = 0$, (5) becomes an iterative scheme for model (1).

To show the superiority of model (3) over model (1) for the image in Fig. 2(a), which is corrupted by Gaussian noise, we try to determine the optimal $\gamma_1$ for model (1). Letting $\lambda_1 \to \infty$, problem (8) would become ill-conditioned [18]. As $\alpha$ is a weighting parameter that balances the two regularization terms, it should be chosen appropriately so that noise can be sufficiently removed while edge information is preserved well. Here, we fix $\mathrm{err} = 10^{-5}$, $\alpha = 1\mathrm{e}{-}4$, $\lambda_1 = 3$, and test $\gamma_1$ from 1 to 20 using a bisection method; we obtain the optimal $\gamma_1 = 11$ with the best visual quality and the largest ISNR value.

Fig. 4. Model comparisons. (a) Restored image of Fig. 2(a) using model (1); (b) restored image of Fig. 2(a) using model (3); (c) restored image of Fig. 2(e) using model (2); (d) restored image of Fig. 2(e) using model (3).

Fig. 5. Comparisons among models (1)–(3) with mixed noises. First column: noisy images; second column: restored images using model (1); third column: restored images using model (2); fourth column: restored images using model (3). The numbers after different models are the iteration numbers.

So we let $\alpha = 1\mathrm{e}{-}4$, $\lambda_1 = 3$, $\gamma_1 = 11$ for model (1), and additionally $\alpha = 1\mathrm{e}{-}4$, $\lambda_1 = 2$, $\gamma_1 = 11$, $\lambda_2 = 0.001$, $\gamma_2 = 1$ for model (3).

Similarly, if we set $\gamma_1 = 0$ in (5), it becomes an iterative scheme for model (2). To show the superiority of model (3) over model (2) for the image in Fig. 2(e), which is corrupted by salt-and-pepper noise, we try to determine the optimal $\gamma_2$ for model (2). Here, we fix $\mathrm{err} = 10^{-5}$, $\alpha = 0.5$, $\lambda_1 = 3$, $\lambda_2 = 5$, and test $\gamma_2$ from 1 to 10, obtaining the optimal $\gamma_2 = 5$ for model (2). Furthermore, we set $\gamma_1 = 0.0001$ for model (3).

The restored images, ISNR values and iteration numbers are shown in Fig. 4. From Fig. 4, models (1)–(3) all give visually satisfactory results, but model (3) performs better than model (1) for Gaussian noise, with larger ISNR values, and obtains the same result as model (2) for salt-and-pepper noise.

4.4. Example 4 (Comparisons with mixed noises)

To further demonstrate the advantage of the mixed model (3), we test the "cameraman" image corrupted by mixed noises (e.g., GN and SPN with different variances or noise densities) in this subsection. In the following experiments, the selected parameter values are reported in the form ($\alpha$, $\gamma_1$, $\lambda_1$) for model (1), ($\alpha$, $\lambda_1$, $\gamma_2$, $\lambda_2$) for model (2), and ($\alpha$, $\gamma_1$, $\lambda_1$, $\gamma_2$, $\lambda_2$) for model (3).

If the image is corrupted by the mixed noise $G(0.04)+S(0.01)$ (Fig. 5(a)), the parameters are set to (7, 5, 1) for model (1), (5, 0.1, 1.5, 1) for model (2) and (7, 5, 1, 1, 0.001) for model (3). If the image is corrupted by the mixed noise $G(0.01)+S(0.01)$ (Fig. 5(e)), we choose the parameters (0.0001, 3, 0.01) for model (1), (3, 0.05, 2, 0.008) for model (2) and (3, 2, 0.05, 2, 0.08) for model (3). If the image is corrupted by the mixed noise $G(0.01)+S(0.04)$ (Fig. 5(i)), we use (0.002, 3.5, 1) for model (1), (3, 0.05, 2, 0.08) for model (2), and (3, 2, 0.05, 2, 0.08) for model (3).

Fig. 5 shows the ISNR values, iteration numbers and restored images, which reflect the visual quality of the three models. Comparing the ISNR values, it can be seen that when the image is contaminated by the mixed noise $G(0.04)+S(0.01)$ (i.e., Fig. 5(a)), the restored result of model (1) is better than that of model (2), and our model (3) performs better than both models (1) and (2). But sometimes (e.g., Fig. 5(d)), model (3) needs more iterations than model (1). When the images are contaminated by the mixed noises $G(0.01)+S(0.01)$ (Fig. 5(e)) and $G(0.01)+S(0.04)$ (Fig. 5(i)), model (2) performs better than model (1), and our model (3) obtains the best results among the three models. We note that Hintermüller and Langer [17] pointed out that the model with mixed $L_1$ and $L_2$ fidelity terms outperforms both the $L_1$-TV model and the $L_2$-TV model in restoration tasks with mixed noises such as GN and SPN.

Fig. 6. Restoration results of blurry and noisy images. First column: images corrupted by Gaussian noise and, respectively, Gaussian, motion and disk blurs; second column: restored images of the first column using model (3); third column: images corrupted by salt-and-pepper noise and, respectively, Gaussian, motion and disk blurs; fourth column: restored images of the third column using model (3). The numbers after different models are the iteration numbers.

4.5. Example 5 (Blurry and noisy images)

In [6], an example with a blurry and noisy image is given to show the robustness of their model with respect to the threshold. Here, to illustrate the robustness of our model (3), we further test the "kitten" image with three different kinds of blur: Gaussian blur (GB) (fspecial('gaussian',7,1)), motion blur (MB) (fspecial('motion',5,1)) and disk blur (DB) (fspecial('disk',1)), i.e., $A \neq I$. Firstly, we test images contaminated by pure noises. The symbol GB+G(0.01) means a blurry and noisy image corrupted by GB and GN with variance 0.01 (the same below in Fig. 6). We choose the parameters (20, 40, 10, 1, 0.001) when the images are corrupted by GN, and (1, 10, 0.01, 0.01, 0.1) when the images are corrupted by SPN. Fig. 6 shows the blurred and noisy images and the restored images with their corresponding iteration numbers and ISNR values. From Fig. 6, we see that the restored images are all visually satisfactory. This example shows that our proposed model (3) is also very efficient for restoring blurred images contaminated by pure noises.

Additionally, we give the restored results for the blurred images (GB, MB, DB) with mixed noises ($G(0.01)+S(0.04)$ and $G(0.04)+S(0.01)$) in Fig. 7 to verify that model (3) is effective for blurred and noisy images with mixed noises. The parameters are selected as (5, 9, 10, 1, 0.001) for this test. From the second and fourth columns of Fig. 7, we see that the restorations all remove the blurs and mixed noises satisfactorily.

4.6. Example 6 (Extension to color image)

In this part, we extend models (1)–(3) to a color image (Fig. 8). Since a color image has three channels, namely R, G and B, we treat each channel as a gray image and then recombine the restored channels to obtain the final restored image.
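A minimal per-channel driver (our own sketch) could reuse the split_bregman_mixed function sketched in Section 2; the file name and all parameter values below are placeholders, not the paper's settings:

    rgb = im2double(imread('lena.png'));     % hypothetical file name
    out = zeros(size(rgb));
    for c = 1:3                              % restore R, G and B independently
        out(:, :, c) = split_bregman_mixed(rgb(:, :, c), 1, 1, 10, 1, 2, 2, 500, 1e-5);
    end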

Firstly, the pure noises GN and SPN are separately added to the image. The parameters are (6, 15, 3) (resp. (0.5, 2, 5, 5)) for model (1) (resp. (2)), and (6, 15, 3, 1, 0.05) (resp. (0.5, 1e−5, 2, 5, 5)) for model (3) when the image is corrupted by GN (resp. SPN). Here we compare the restorations of models (1) (resp. (2)) and (3) when the images are corrupted by pure GN (resp. SPN) in Fig. 8.

Fig. 7. Restoration results of blurry images with mixed noises. First column: images corrupted by mixed noise G(0.01)+S(0.04) and, respectively, Gaussian, motion and disk blurs; second column: restorations of the first column using model (3); third column: images corrupted by mixed noise G(0.04)+S(0.01) and, respectively, Gaussian, motion and disk blurs; fourth column: restorations of the third column using model (3).

By comparing the restored images using model (3) (see Fig. 8(c) and (f)) with the results of model (1) (see Fig. 8(b)) and model (2) (see Fig. 8(e)), we see that our model (3) obtains clearer (or equally good) restored images with larger ISNR values. In particular, model (3) better preserves the details (e.g., lena's nose and hat). The corresponding iteration numbers for each channel of Fig. 8(b), (c), (e), (f) are (128, 157, 11), (328, 456, 11), (349, 364, 53), (349, 364, 53), respectively.

Secondly, in order to illustrate the advantage of the proposed model (3) in removing mixed noise, we test the "lena" image with the same three kinds of mixed noise as in Example 4.

Here, for the mixed noise $G(0.04)+S(0.01)$, the parameters are set to (5, 5, 0.3) in model (1), (5, 0.3, 2, 1) in model (2), and (5, 1, 0.3, 2, 1) in model (3). The corresponding iteration numbers for each channel of the restored images using models (1)–(3) are (551, 696, 35), (697, 927, 25), (565, 765, 50), respectively.

For the mixed noise $G(0.01)+S(0.01)$, the parameters are set to (5, 5, 0.3) for model (1), (3, 1, 2, 0.8) for model (2), and (3, 0.05, 1, 2, 0.8) for model (3). The corresponding iteration numbers for each channel of the restored images using models (1)–(3) are (653, 878, 2), (1646, 2143, 39), (1332, 1827, 31), respectively.

For the mixed noise $G(0.01)+S(0.04)$, the parameters are (4, 1, 2) for model (1), (0.005, 80, 2, 300) for model (2), and (0.005, 1, 80, 2, 300) for model (3). The corresponding iteration numbers for each channel using models (1)–(3) are (333, 506, 23), (532, 665, 4), (492, 568, 4), respectively.

The results listed in Fig. 9 show the superiority of model (3) over models (1) and (2), with larger ISNR values for every kind of mixed noise. In addition, model (1) performs better than model (2) (see Fig. 9(b) and (c)) for mixed noise $G(0.04)+S(0.01)$, while model (2) performs better than model (1) (see Fig. 9(g), (k) and (f), (j)) for mixed noises $G(0.01)+S(0.01)$ and $G(0.01)+S(0.04)$. It should also be noticed that model (3) achieves larger ISNR values with fewer iterations than model (2).

Furthermore, the restored results for the blurry color images with mixed noises are shown in Fig. 10. The parameters for all these cases are the same, namely (5, 9, 1, 1, 0.0001). As seen from the second and fourth columns of Fig. 10, model (3) also performs well on color images with different kinds of blur and mixed noise. Here, the corresponding iteration numbers for each channel using model (3) are (366, 474, 476), (346, 450, 471), (325, 400, 439) for the second column of Fig. 10, and (345, 427, 441), (316, 386, 400), (291, 364, 365) for the fourth column, respectively.

5. Conclusions

In this paper, we have presented a model which combines $L_1$ and $L_2$ fidelity terms with the two regularization terms of model (1), and applied the split-Bregman technique to split the minimization problem. To accelerate the computation, we have used the FFT to solve the $u$-subproblem. Convergence of the algorithm has been established both theoretically and experimentally. A large number of numerical experiments have shown that our proposed model is effective and robust for restoring images corrupted by different blurs and different noises. If only salt-and-pepper noise is present in an image, model (3) does not improve the restoration compared to model (2); see, for example, Fig. 8. On the contrary, if an image is contaminated by Gaussian noise, or by a mixture of Gaussian and salt-and-pepper noise, model (3) is superior to the other two considered models. In addition, the proposed model has also proved valid for color images.

Acknowledgments

The research is partially supported by NSFC (Nos. 11271126, 11571325) and the Fundamental Research Funds for the Central Universities (No. 2014ZZD10). The authors are indebted to Professor Tieyong Zeng from Hong Kong Baptist University for sharing the code.

Fig. 8. Comparisons among models (1), (2) and (3) for color images. First column: noisy images; second and third columns: restored images using different models. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)

Appendix A

Proof of Theorem 1. Let $u$ be the fixed point of (3). By the first-order optimality condition, $u$ satisfies

$$ \partial H_1(u) + \partial H_2(u) + \partial R(u) + \partial(\mathrm{TV}(u)) \ni 0, \qquad (\mathrm{A.1}) $$

with $H_1(u) = \frac{\gamma_1}{2}\|g - Au\|_2^2$, $H_2(u) = \frac{\gamma_2}{2}\|g - Au\|_1$, $R(u) = \frac{\alpha}{2}\|\nabla u\|_2^2$ and $\mathrm{TV}(u) = |\nabla u|$. In a similar way, the fixed point $(u, d, q, b, e)$ of (6) and (7) satisfies

$$ \begin{cases}
[(\gamma_1 + \lambda_2)A^T A - (\alpha + \lambda_1)\Delta]\, u - (\gamma_1 + \lambda_2)A^T g + \lambda_2 A^T(q - e) - \lambda_1 \nabla^T(d - b) = 0, \\
p + \lambda_1 (d - \nabla u - b) = 0, \quad p \in \partial|d|, \\
t + \lambda_2 \big(q - (g - Au) - e\big) = 0, \quad t \in \partial|q|, \\
b = b + \nabla u - d, \qquad e = e + (g - Au) - q.
\end{cases} \qquad (\mathrm{A.2}) $$

Denote the errors by

$$ u_e^k = u^k - u, \quad d_e^k = d^k - d, \quad q_e^k = q^k - q, \quad b_e^k = b^k - b, \quad e_e^k = e^k - e, \quad p_e^k = p^k - p, \quad t_e^k = t^k - t. $$

Subtracting the equations of (A.2) from the corresponding equations of (12), we obtain

$$ \begin{cases}
[(\gamma_1 + \lambda_2)A^T A - (\alpha + \lambda_1)\Delta]\, u_e^{k+1} + \lambda_2 A^T(q_e^k - e_e^k) + \lambda_1 \nabla^T(b_e^k - d_e^k) = 0, \\
p_e^{k+1} + \lambda_1 \big(d_e^{k+1} - \nabla u_e^{k+1} - b_e^k\big) = 0, \quad p \in \partial|d|, \\
t_e^{k+1} + \lambda_2 \big(q_e^{k+1} + A u_e^{k+1} - e_e^k\big) = 0, \quad t \in \partial|q|, \\
b_e^{k+1} = b_e^k + \nabla u_e^{k+1} - d_e^{k+1}, \qquad e_e^{k+1} = e_e^k - A u_e^{k+1} - q_e^{k+1}.
\end{cases} \qquad (\mathrm{A.3}) $$

Fig. 9. Comparisons on a color image with mixed noises. First column: noisy images with different noises; second column: restored images with their ISNR values using model (1); third column: restored images with their ISNR values using model (2); fourth column: restored images with their ISNR values using model (3). (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)

Taking the inner product of both sides of the first three equations in (A.3) with $u_e^{k+1}$, $d_e^{k+1}$ and $q_e^{k+1}$, respectively, and squaring both sides of the last two equations, we have

$$ \begin{cases}
(\gamma_1 + \lambda_2)\langle A^T A u_e^{k+1}, u_e^{k+1}\rangle + (\alpha + \lambda_1)\|\nabla u_e^{k+1}\|_2^2 + \lambda_1 \langle \nabla^T(b_e^k - d_e^k), u_e^{k+1}\rangle + \lambda_2 \langle A^T(q_e^k - e_e^k), u_e^{k+1}\rangle = 0, \\
\langle p_e^{k+1}, d_e^{k+1}\rangle + \lambda_1 \|d_e^{k+1}\|_2^2 - \lambda_1 \langle \nabla u_e^{k+1} + b_e^k, d_e^{k+1}\rangle = 0, \\
\langle t_e^{k+1}, q_e^{k+1}\rangle + \lambda_2 \|q_e^{k+1}\|_2^2 - \lambda_2 \langle -A u_e^{k+1} + e_e^k, q_e^{k+1}\rangle = 0, \\
\|b_e^{k+1}\|_2^2 = \|b_e^k\|_2^2 + \|\nabla u_e^{k+1} - d_e^{k+1}\|_2^2 + 2\langle b_e^k, \nabla u_e^{k+1} - d_e^{k+1}\rangle, \\
\|e_e^{k+1}\|_2^2 = \|e_e^k\|_2^2 + \|A u_e^{k+1} + q_e^{k+1}\|_2^2 - 2\langle e_e^k, A u_e^{k+1} + q_e^{k+1}\rangle.
\end{cases} \qquad (\mathrm{A.4}) $$

Summing up the first three equations of (A.4), we get

$$ \begin{aligned}
&(\gamma_1 + \lambda_2)\langle A^T A u_e^{k+1}, u_e^{k+1}\rangle + (\alpha + \lambda_1)\|\nabla u_e^{k+1}\|_2^2 + \langle p_e^{k+1}, d_e^{k+1}\rangle + \langle t_e^{k+1}, q_e^{k+1}\rangle \\
&\quad + \lambda_1 \langle b_e^k, \nabla u_e^{k+1} - d_e^{k+1}\rangle + \lambda_1 \|d_e^{k+1}\|_2^2 - \lambda_1 \langle \nabla u_e^{k+1}, d_e^{k+1} + d_e^k\rangle \\
&\quad + \lambda_2 \|q_e^{k+1}\|_2^2 - \lambda_2 \langle e_e^k, A u_e^{k+1} + q_e^{k+1}\rangle + \lambda_2 \langle A u_e^{k+1}, q_e^{k+1} + q_e^k\rangle = 0. \qquad (\mathrm{A.5})
\end{aligned} $$

Furthermore, rearranging the last two formulas of (A.4), we obtain

$$ \begin{cases}
\langle b_e^k, \nabla u_e^{k+1} - d_e^{k+1}\rangle = \frac{1}{2}\|b_e^{k+1}\|_2^2 - \frac{1}{2}\|b_e^k\|_2^2 - \frac{1}{2}\|\nabla u_e^{k+1} - d_e^{k+1}\|_2^2, \\
-\langle e_e^k, A u_e^{k+1} + q_e^{k+1}\rangle = \frac{1}{2}\|e_e^{k+1}\|_2^2 - \frac{1}{2}\|e_e^k\|_2^2 - \frac{1}{2}\|A u_e^{k+1} + q_e^{k+1}\|_2^2.
\end{cases} \qquad (\mathrm{A.6}) $$

Substituting (A.6) into (A.5), we have

$$ \begin{aligned}
&\frac{\lambda_1}{2}\big(\|b_e^k\|_2^2 - \|b_e^{k+1}\|_2^2\big) + \frac{\lambda_2}{2}\big(\|e_e^k\|_2^2 - \|e_e^{k+1}\|_2^2\big) \\
&\quad = \gamma_1 \langle A^T A u_e^{k+1}, u_e^{k+1}\rangle + \alpha \|\nabla u_e^{k+1}\|_2^2 + \langle p_e^{k+1}, d_e^{k+1}\rangle + \langle t_e^{k+1}, q_e^{k+1}\rangle \\
&\qquad + \lambda_1 \|\nabla u_e^{k+1}\|_2^2 - \frac{\lambda_1}{2}\|\nabla u_e^{k+1} - d_e^{k+1}\|_2^2 - \lambda_1 \langle \nabla u_e^{k+1}, d_e^{k+1} + d_e^k\rangle + \lambda_1 \|d_e^{k+1}\|_2^2 \\
&\qquad + \lambda_2 \langle A^T A u_e^{k+1}, u_e^{k+1}\rangle - \frac{\lambda_2}{2}\|A u_e^{k+1} + q_e^{k+1}\|_2^2 + \lambda_2 \langle A u_e^{k+1}, q_e^{k+1} + q_e^k\rangle + \lambda_2 \|q_e^{k+1}\|_2^2.
\end{aligned} $$

Fig. 10. Restoration results of blurry and noisy images using model (3). First column: images corrupted by mixed noise and, respectively, Gaussian, motion and disk blurs; second column: restored images of the first column; third column: images corrupted by mixed noise and, respectively, Gaussian, motion and disk blurs; fourth column: restored images of the third column.

Then, summing the above equation on both sides from $k = 0$ to $N$, we obtain

$$ \begin{aligned}
&\frac{\lambda_1}{2}\big(\|b_e^0\|_2^2 - \|b_e^{N+1}\|_2^2\big) + \frac{\lambda_2}{2}\big(\|e_e^0\|_2^2 - \|e_e^{N+1}\|_2^2\big) + \frac{\lambda_1}{2}\big(\|d_e^0\|_2^2 - \|d_e^{N+1}\|_2^2\big) + \frac{\lambda_2}{2}\big(\|q_e^0\|_2^2 - \|q_e^{N+1}\|_2^2\big) \\
&\quad = \sum_{k=0}^{N}\Big[\gamma_1 \langle A^T A u_e^{k+1}, u_e^{k+1}\rangle + \alpha \|\nabla u_e^{k+1}\|_2^2 + \langle p_e^{k+1}, d_e^{k+1}\rangle + \langle t_e^{k+1}, q_e^{k+1}\rangle + \frac{\lambda_1}{2}\|\nabla u_e^{k+1} - d_e^k\|_2^2 + \frac{\lambda_2}{2}\|A u_e^{k+1} + q_e^k\|_2^2\Big]. \qquad (\mathrm{A.7})
\end{aligned} $$

Noting that all terms involved in (A.7) are nonnegative and that $|\cdot|$ and $\|\cdot\|_2^2$ are convex, we derive that

$$ \begin{aligned}
&\frac{\lambda_1}{2}\big(\|b_e^0\|_2^2 + \|d_e^0\|_2^2\big) + \frac{\lambda_2}{2}\big(\|e_e^0\|_2^2 + \|q_e^0\|_2^2\big) \\
&\quad \ge \gamma_1 \sum_{k=0}^{N} \langle A^T A u_e^{k+1}, u_e^{k+1}\rangle + \alpha \sum_{k=0}^{N} \|\nabla u_e^{k+1}\|_2^2 + \sum_{k=0}^{N} \langle p_e^{k+1}, d_e^{k+1}\rangle + \sum_{k=0}^{N} \langle t_e^{k+1}, q_e^{k+1}\rangle \\
&\qquad + \frac{\lambda_1}{2} \sum_{k=0}^{N} \|\nabla u_e^{k+1} - d_e^k\|_2^2 + \frac{\lambda_2}{2} \sum_{k=0}^{N} \|A u_e^{k+1} + q_e^k\|_2^2. \qquad (\mathrm{A.8})
\end{aligned} $$

In what follows, we mainly discuss the role each term on the right-hand side of (A.8) plays in the result.

Firstly, the first two sums in (A.8) lead to

$$ \begin{cases}
\sum_{k=0}^{N} \langle A^T A u_e^{k+1}, u_e^{k+1}\rangle = \sum_{k=0}^{N} \langle \nabla H_1(u^{k+1}) - \nabla H_1(u), u^{k+1} - u\rangle < \infty, \\
\sum_{k=0}^{N} \|\nabla u_e^{k+1}\|_2^2 = \sum_{k=0}^{N} \langle \nabla R(u^{k+1}) - \nabla R(u), u^{k+1} - u\rangle < \infty.
\end{cases} $$

Together with Theorem 3.1 in [35], this implies

$$ \begin{cases}
\lim_{k\to\infty} \langle \nabla H_1(u^{k+1}) - \nabla H_1(u), u^{k+1} - u\rangle = 0, \\
\lim_{k\to\infty} \langle \nabla R(u^{k+1}) - \nabla R(u), u^{k+1} - u\rangle = 0.
\end{cases} \qquad (\mathrm{A.9}) $$

Recall that, for any convex function $J$, the Bregman distances satisfy

$$ D_J^p(u, v) + D_J^q(v, u) = \langle q - p,\, u - v\rangle \ge 0, \qquad \forall\, p \in \partial J(v),\ q \in \partial J(u). $$
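For readability, we recall the standard definition of the Bregman distance of a convex function $J$ at $v$ with respect to a subgradient $p \in \partial J(v)$:

$$ D_J^p(u, v) = J(u) - J(v) - \langle p,\, u - v\rangle \ge 0; $$

adding $D_J^p(u,v)$ and $D_J^q(v,u)$ gives the identity displayed above.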

This, together with (A.9) and the nonnegativity of the Bregman distance, gives

$$ \lim_{k\to\infty} D_{H_1}^{\nabla H_1(u)}(u^{k+1}, u) = 0 \quad \text{and} \quad \lim_{k\to\infty} D_{R}^{\nabla R(u)}(u^{k+1}, u) = 0, $$

i.e.,

$$ \begin{cases}
\lim_{k\to\infty} \big(H_1(u^{k+1}) - H_1(u) - \langle \nabla H_1(u), u^{k+1} - u\rangle\big) = 0, \\
\lim_{k\to\infty} \big(R(u^{k+1}) - R(u) - \langle \nabla R(u), u^{k+1} - u\rangle\big) = 0.
\end{cases} \qquad (\mathrm{A.10}) $$

Secondly, analogous to (A.9), we get the following two limits from (A.8):

$$ \lim_{k\to\infty} \langle p_e^{k+1}, d_e^{k+1}\rangle = 0 \quad \text{and} \quad \lim_{k\to\infty} \langle t_e^{k+1}, q_e^{k+1}\rangle = 0. $$

Combining this with the nonnegativity of the Bregman distance, we obtain

$$ \lim_{k\to\infty} \big(|d^{k+1}| - |d| - \langle d^{k+1} - d, p\rangle\big) = 0 \quad \text{and} \quad \lim_{k\to\infty} \big(|q^{k+1}| - |q| - \langle q^{k+1} - q, t\rangle\big) = 0. $$

Thirdly, (A.8) gives $\sum_{k=0}^{N} \|\nabla u_e^{k+1} - d_e^k\|_2^2 < \infty$ and $\sum_{k=0}^{N} \|A u_e^{k+1} + q_e^k\|_2^2 < \infty$, which implies

$$ \lim_{k\to\infty} \|\nabla u_e^{k+1} - d_e^k\|_2^2 = 0 \quad \text{and} \quad \lim_{k\to\infty} \|A u_e^{k+1} + q_e^k\|_2^2 = 0. \qquad (\mathrm{A.11}) $$

Combining (A.10), (A.11) and (A.1), we have

$$ \lim_{k\to\infty} \big(H_1(u^{k+1}) + H_2(u^{k+1}) + R(u^{k+1}) + |\nabla u^{k+1}|\big) = H_1(u) + H_2(u) + R(u) + |\nabla u|. $$

Hence (13) holds.

Next, we prove (14) under the assumption that (3) has a unique solution. The argument is by contradiction. If (14) does not hold, then there exist some $\varepsilon > 0$ and a subsequence $u^{k_i}$ such that $\|u^{k_i} - u\| > \varepsilon$ for all $i$. Let $c = t u + (1 - t) u^{k_i}$ with $t \in (0, 1)$. By the convexity of $d$ and the assumption that $u$ is the unique minimizer of $d(u)$, we have

$$ d(u^{k_i}) > t\, d(u) + (1 - t)\, d(u^{k_i}) \ge d(c) \ge \min\{ d(v) : \|v - u\|_2 = \varepsilon \}. $$

Denote

$$ \tilde{u} = \arg\min_{v} \{ d(v) : \|v - u\|_2 = \varepsilon \}. $$

Applying the conclusion $\lim_{k\to\infty} d(u^k) = d(u)$, we have

$$ d(u) = \lim_{k\to\infty} d(u^{k_i}) \ge d(\tilde{u}) > d(u), $$

which is a contradiction. Thus $\lim_{k\to\infty} \|u^k - u\|_2 = 0$. □

References

[1] S. Alliney, A property of the minimum vectors of a regularizing functional defined by means of the absolute norm, IEEE Trans. Signal Process. 45 (4) (1997) 913–917.
[2] S. Babacan, R. Molina, A. Katsaggelos, Total variation image restoration and parameter estimation using variational posterior distribution approximation, in: IEEE International Conference on Image Processing, vol. 1, 2007, pp. I-97–I-100.
[3] A. Bovik, J.D. Gibson, Handbook of Image and Video Processing, ACM Digital Library, 2005.
[4] K. Bredies, K. Kunisch, T. Pock, Total generalized variation, SIAM J. Imag. Sci. 3 (3) (2010) 492–526.
[5] J. Cai, S. Osher, Z. Shen, Split Bregman methods and frame based image restoration, SIAM J. Imag. Sci. 8 (2) (2009) 337–369.
[6] X. Cai, R. Chan, T. Zeng, A two-stage image segmentation method using a convex variant of the Mumford–Shah model and thresholding, SIAM J. Imag. Sci. 6 (1) (2013) 368–390.
[7] A. Chambolle, An algorithm for total variation minimization and applications, J. Math. Imag. Vision 20 (1–2) (2004) 89–97.
[8] A. Chambolle, T. Pock, A first-order primal-dual algorithm for convex problems with applications to imaging, J. Math. Imag. Vision 40 (1) (2011) 120–145.
[9] R. Chan, M. Tao, X. Yuan, Constrained total variation deblurring models and fast algorithms based on alternating direction method of multipliers, SIAM J. Imag. Sci. 6 (1) (2013) 680–697.
[10] T. Chan, J. Shen, Image Processing and Analysis: Variational, PDE, Wavelet, and Stochastic Methods, SIAM, 2005.
[11] T. Chan, C. Wong, Total variation blind deconvolution, IEEE Trans. Image Process. 7 (3) (1998) 370–375.
[12] M. Figueiredo, R. Nowak, S. Wright, Gradient projection for sparse reconstruction: application to compressed sensing and other inverse problems, IEEE J. Sel. Topics Signal Process. 1 (4) (2007) 586–597.
[13] G. Gilboa, S. Osher, Nonlocal operators with applications to image processing, SIAM Multiscale Model. Simul. 7 (3) (2007) 1005–1028.
[14] G. Gilboa, N. Sochen, Y. Zeevi, Image enhancement and denoising by complex diffusion processes, IEEE Trans. Image Process. 26 (8) (2002) 1020–1036.
[15] T. Goldstein, S. Osher, The split Bregman method for L1-regularized problems, SIAM J. Imag. Sci. 2 (2) (2009) 323–343.
