Multiplicative noise removal using primal–dual and reweighted alternating minimization


As mentioned by Huang et al. (2009), let us consider the splitting form of Eq. (2)

(5)

where w is an auxiliary function. The parameter α controls the amount of regularization applied to the denoised image and is chosen large enough to keep w close to z; in our experiments, α = 19 is chosen. The main advantage of the proposed method is that the TV norm can be used efficiently in the noise removal process. Eq. (5) can be split into the two equations

(6a)

(6b)

This is an alternating minimization algorithm. The first step of the method is to
apply a weighted TV denoising scheme to the image generated by the previous multiplicative
noise removal step. The second step of the method is to solve a part of the optimization
problem.
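The two steps above can be sketched in symbols as follows. This is a sketch in our own notation, writing $H$ for the data-fidelity term of Eq. (2), $\mathrm{TV}_{\beta}$ for the weighted TV norm, and $\alpha$ for the coupling parameter; the paper's exact subproblems are (6a) and (6b):

```latex
\begin{aligned}
w^{k+1} &\in \arg\min_{w}\; \mathrm{TV}_{\beta}(w) + \frac{\alpha}{2}\,\lVert w - z^{k}\rVert_2^2 ,\\
z^{k+1} &\in \arg\min_{z}\; H(z) + \frac{\alpha}{2}\,\lVert w^{k+1} - z\rVert_2^2 .
\end{aligned}
```

The first line is the weighted TV denoising step and the second the data-fitting step; the quadratic coupling term is what ties the two subproblems together.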

In this paper, a primal–dual algorithm (Bertsekas et al. 2006; Bertsekas 2011) is applied to the iteratively reweighted model (6a). The closed convex set K is defined by

where the set on the right-hand side is closed and convex.

Let the primal and dual spaces be two finite-dimensional real vector spaces, each equipped with the norm induced by its inner product. The gradient operator is a continuous linear operator, with the corresponding operator norm defined as

We introduce a divergence operator, whose adjoint is defined by duality with the gradient. Then, introducing a dual variable and taking its divergence, we have

The regularizer of (6a) is

and (6a) can be transformed into

(7)

For every nonnegative scalar t and every w, the identity J(tw) = tJ(w) holds, so J is one-homogeneous. By the Legendre–Fenchel transform, we obtain

where the right-hand side is the “characteristic function” of the closed convex set K:

(8)
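For completeness, the standard computation behind this step (Chambolle 2004) reads as follows, with $u$, $v$ denoting generic elements of the underlying space (our notation):

```latex
J^{*}(v)
\;=\; \sup_{u}\,\bigl(\langle u, v\rangle - J(u)\bigr)
\;=\;
\begin{cases}
0, & \text{if } \langle u, v\rangle \le J(u)\ \text{for all } u,\\
+\infty, & \text{otherwise,}
\end{cases}
```

so that $J^{*} = \chi_{K}$ with $K = \{\, v : \langle u, v\rangle \le J(u)\ \text{for all } u \,\}$; the supremum is finite only when $v$ lies in $K$, because one-homogeneity lets any positive margin be scaled up without bound.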

Since the biconjugate satisfies J** = J, we recover

The Euler equation for (7) is

where ∂J is the “sub-differential” of J. Writing this as

we see that it is the minimizer of the corresponding functional. Combined with (3), the solution of problem (6) is simply given by

Therefore, computing the solution reduces to computing a nonlinear projection. Consider the following problem:

(9)
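In Chambolle's (2004) unweighted setting, which this step parallels, the relation between the minimizer of (7) and the projection in (9) is the following, writing $g$ for the image being denoised and $\pi_{\lambda K}$ for the orthogonal projection onto $\lambda K$ (our notation):

```latex
w \;=\; g - \pi_{\lambda K}(g),
\qquad
\pi_{\lambda K}(g) \;=\; \arg\min_{v \,\in\, \lambda K}\,\lVert v - g\rVert_2^{2}.
```

In words: the TV-denoised image is the residual left after projecting onto the scaled constraint set.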

Following the standard arguments of convex analysis (Chambolle 2004; Chambolle and Pock 2011), the Karush–Kuhn–Tucker conditions yield the existence of a Lagrange multiplier such that the constrained problem (9) becomes

(10)

Consider the constraint in Eq. (10). At each pixel, either the constraint is active or the corresponding multiplier vanishes; in either case

Then

(11)

Substituting (11) into (10) gives,
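For reference, in Chambolle's (2004) unweighted case the chain (10)–(11) takes the following form: the Karush–Kuhn–Tucker conditions for (9) give, with multipliers $\mu_{i,j} \ge 0$ and dual variable $p$ (our notation),

```latex
-\bigl(\nabla(\operatorname{div} p - g/\lambda)\bigr)_{i,j} + \mu_{i,j}\, p_{i,j} = 0,
\qquad
\mu_{i,j} = \bigl|\bigl(\nabla(\operatorname{div} p - g/\lambda)\bigr)_{i,j}\bigr|,
```

where the second identity holds whether the constraint $|p_{i,j}| \le 1$ is active (take norms in the first equation) or inactive ($\mu_{i,j} = 0$ forces the gradient term to vanish as well).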

We thus propose the following semi-implicit gradient descent (or fixed-point) algorithm.
We choose a step size τ > 0; starting from a given initialization, for any n ≥ 0 we set

(12)
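As a concrete illustration, below is a minimal NumPy sketch of this fixed-point iteration for the plain (unweighted) TV model; the paper's variant additionally carries the weights of Eq. (3). Function names and the classical step size τ = 1/8 (Chambolle 2004) are our choices, not the paper's.

```python
import numpy as np

def grad(u):
    # Forward differences with Neumann boundary conditions.
    gx = np.zeros_like(u)
    gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    # Discrete divergence, the negative adjoint of grad above.
    dx = np.zeros_like(px)
    dy = np.zeros_like(py)
    dx[0, :] = px[0, :]
    dx[1:-1, :] = px[1:-1, :] - px[:-2, :]
    dx[-1, :] = -px[-2, :]
    dy[:, 0] = py[:, 0]
    dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]
    dy[:, -1] = -py[:, -2]
    return dx + dy

def chambolle_projection(g, lam, tau=0.125, n_iter=100):
    """Semi-implicit fixed-point iteration of the form (12):
    p <- (p + tau * grad(div p - g/lam)) / (1 + tau * |grad(div p - g/lam)|).
    Returns the denoised image g - lam * div(p)."""
    px = np.zeros_like(g)
    py = np.zeros_like(g)
    for _ in range(n_iter):
        gx, gy = grad(div(px, py) - g / lam)
        norm = np.sqrt(gx ** 2 + gy ** 2)
        px = (px + tau * gx) / (1.0 + tau * norm)
        py = (py + tau * gy) / (1.0 + tau * norm)
    return g - lam * div(px, py)
```

The pointwise division keeps each dual vector inside the unit ball without an explicit projection step, which is what makes the scheme semi-implicit.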

Combining this with Eq. (3), we compute the update by

(13)

The denominator of Eq. (13) is strictly positive, which avoids introducing a rectification parameter that would otherwise have to be tuned. The method can be seen as a new way to solve the nonconvex problem. It remains to bound the norm of the discrete operator.

Theorem 1

(Chambolle 2004) If the step size satisfies τ ≤ 1/8, then

Similar to (Chambolle 2004; Chambolle and Pock 2011; Bresson et al. 2007), we can now show the following convergence result for the dual algorithm applied to the iteratively reweighted TV model.

Theorem 2

Let the step size satisfy τ ≤ 1/8. Then the iterates converge to the projection as n → ∞.

Proof

From the algorithm we easily see that the dual constraint holds for every n, from which it can be obtained that

Then we have

Now, consider the following equation

where the index denotes any point of the image domain (a 2-dimensional grid).

For every point of the domain, we get

(14)

From this, we know

.

So, under this condition, the function is decreasing on the interval in question.

And when equality holds, so does the corresponding identity.

In fact, in the first case the two statements are clearly equivalent; in the second case, by Eq. (14), it holds for every index that

We deduce that one of the two alternatives holds; in both cases, (12) yields the claimed identity.

In the following, we prove convergence of the whole sequence. Consider the limit of a converging subsequence; passing to the limit in the iteration, we have

and repeating the previous calculations we see

This holds for every index, i.e.,

So we can deduce

which is the Euler equation for a solution of (9). One can deduce that the limit solves (9) and is the desired projection. Since this projection is unique, the whole sequence converges to it as n → ∞.

Solving (6b) is equivalent to solving the nonlinear system
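As an illustration of how such a system can be solved pointwise, the sketch below assumes the log-domain fidelity term z + f·exp(−z) of Huang et al. (2009); this is an assumption, since the paper's exact term is the one in Eq. (2). Under that assumption the first-order condition at each pixel is 1 − f·exp(−z) + α(z − w) = 0, which a Newton iteration solves quickly. Function names are our own.

```python
import numpy as np

def solve_data_step(w, f, alpha=19.0, n_iter=20):
    """Pointwise Newton iteration for the (assumed) optimality condition
    1 - f*exp(-z) + alpha*(z - w) = 0, solved independently at each pixel.

    w     : current TV-denoised log-image (output of the previous step)
    f     : observed (positive) image
    alpha : coupling parameter of the splitting
    """
    z = np.log(np.maximum(f, 1e-8))  # start from the raw log-image
    for _ in range(n_iter):
        ez = f * np.exp(-z)
        residual = 1.0 - ez + alpha * (z - w)
        slope = ez + alpha          # derivative, strictly positive
        z = z - residual / slope    # Newton step
    return z
```

Because the derivative is bounded below by α > 0, the Newton step is always well defined and the equation has a unique root at each pixel.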