A method of extending the depth of focus of a high-resolution X-ray imaging system employing an optical lens and scintillator: a phantom study

Simulated imaging

In the high-resolution X-ray imaging system, the detector consists of a thin scintillator, a magnifying optical lens, and a low-noise CCD camera (Figure 1). The thickness of the scintillator should match the depth of focus (DOF) of
the objective lens; for example, an objective with 20× magnification and a resolution of 1
μm has a DOF of about 5 μm, so the scintillator should be 5 μm thick or less. When the scintillator is thicker than that, a point on a layer of the
scintillator that is away from the focal plane is focused in front of or behind the sensor
plane. On the sensor, such a point becomes a circle called the circle of confusion (CoC),
as shown in Figure 2. When this circle is smaller than a pixel it cannot be resolved, which is why a depth
of focus exists. But when the thickness of the scintillator exceeds the depth of focus,
the circles of confusion from the different planes overlap and blur the resultant
image. The size of the CoC can be derived from the basic geometric relationship among
the image plane, the object plane, and the focal length, expressed as Eq. (1).

Figure 1. X-ray imaging system model employing transparent luminescent screen and optical system.

Figure 2. Imaging model of the high-resolution X-ray imaging system in which the thickness of the
employed scintillator does not match the objective lens.
D: object distance, F: image plane distance, d: diameter of the lens aperture, f: focal
length, CoC: circle of confusion, A(x, y, z): object point, B(x′, y′, z′): virtual imaging
point.

\frac{1}{D} + \frac{1}{F} = \frac{1}{f} \qquad (1)

Here, D is the object distance, F is the conjugate distance of D, and f is the focal length.
The direction z is parallel to the optical axis. According to the imaging equation,
an object plane at a distance D_1 \neq D is focused in front of (or behind) the image plane, so each of its points forms a CoC on the image plane. The
diameter of this circle can be calculated by Eq. (2).

d_{CoC} = d\,F\left|\frac{1}{D_1} - \frac{1}{D}\right| \qquad (2)

Here, d is the diameter of the lens aperture. When d_{CoC} is larger than the pixel size of the image sensor, the resolution of the captured image
will decrease.
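As a concrete illustration of the geometry behind Eqs. (1) and (2), the sketch below computes the CoC diameter from the thin-lens relation; all numeric values (focal length, aperture, distances) are illustrative assumptions, not parameters of the actual system.

```python
# Sketch of the circle-of-confusion geometry; all numbers are assumed.

def conjugate_distance(D, f):
    """Image-side conjugate of object distance D for focal length f (thin lens)."""
    return 1.0 / (1.0 / f - 1.0 / D)

def coc_diameter(d, f, D, D1):
    """Blur-circle diameter on the sensor for a plane at D1 when the lens
    is focused at D, with aperture diameter d (similar triangles)."""
    F = conjugate_distance(D, f)    # sensor sits at the conjugate of D
    F1 = conjugate_distance(D1, f)  # where the plane at D1 actually focuses
    return abs(d * (F1 - F) / F1)

# Illustrative values: f = 10 mm, aperture 5 mm, focused at D = 50 mm,
# emitting plane displaced by 10 um (the scintillator-thickness scale above).
f, d, D = 10e-3, 5e-3, 50e-3
d_coc = coc_diameter(d, f, D, D + 10e-6)
```

Under these assumed numbers the blur circle for a plane 10 μm from the focal plane comes out to roughly a quarter of a micrometre, i.e., below the 1 μm resolution scale quoted above.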

In typical optical lens systems, image blur can be described by a degradation
model with a convolution filter h(x, y), denoted by

g(x, y) = h(x, y) * f(x, y) + n(x, y) \qquad (3)

where h(x, y) is the impulse response or point spread function (PSF), which is usually related
to the CoC of the imaging system, n(x, y) is the additive noise, and g(x, y) is the blurred image. The PSF of a defocused lens can be modeled by either geometrical
or physical optics [5]. The former ignores the effect of diffraction, and the PSF can then be treated as a uniform
distribution when the aperture is circular. In the physical-optics case, diffraction
and aberration are taken into consideration, and the signal intensity in the CoC is
assumed to follow a Gaussian distribution [6-8], as follows.

h(x, y) = \frac{1}{2\pi\sigma^2}\exp\!\left(-\frac{x^2 + y^2}{2\sigma^2}\right) \qquad (4)

Here, the spread parameter σ is proportional to the radius of the CoC [6-8]. Because the radii of the CoCs formed by different scintillator planes are not
identical, the response functions of the different planes differ; and because
divergence can be neglected, the final blurred image generated by the photons emitted
from all the planes of the scintillator can be written as the integral
in Eq. (5).

g(x, y) = \int_{-z_0}^{z_1} \mu e^{-\mu z}\,\big[h_z(x, y) * f_z(x, y)\big]\,dz \qquad (5)

where z_0 is the distance from the focal plane in the scintillator to the scintillator's
left surface, z_1 is the distance from the focal plane to the right surface, μ is the X-ray absorption coefficient, and h_z and f_z are the PSF of, and the image emitted by, the plane at depth z. If the image formed by the photons
that come from the focal plane and pass through the optical lens is denoted f(x, y), the images formed by the photons coming from the other scintillator planes can
be expressed as follows.

f_z(x, y) = \frac{1}{k^2} f\!\left(\frac{x}{k}, \frac{y}{k}\right) \qquad (6)

Here, k is the scale factor between the plane at depth z and the focal plane, k = (D + z)/D = 1 + z/D. The imaging formula of the final degraded blurred image now needs to be modified as

g(x, y) = \int_{-z_0}^{z_1} \mu e^{-\mu z}\, h_z(x, y) * \left[\frac{1}{k^2} f\!\left(\frac{x}{k}, \frac{y}{k}\right)\right] dz \qquad (7)

Its Fourier transform can be written as

G(u, v) = c \int_{-z_0}^{z_1} \mu e^{-\mu z}\, H_z(u, v)\, F(ku, kv)\, dz \qquad (8)

where H_z(u, v) and F(u, v) are the Fourier transforms of h_z(x, y) and f(x, y), respectively, and c is a constant related to the system. We can now discretize G(u, v) to obtain G(m, n) as stated below.

G(m, n) = c \sum_{z=-z_0}^{z_1} \mu e^{-\mu z}\, H_z(m, n)\, F(km, kn) \qquad (9)

Because km and kn are generally not integers, the value of F(km, kn) can be approximated by bilinear interpolation or nearest-neighbor interpolation when z is small compared with D. In our discussion we adopt bilinear interpolation
for better accuracy and assume that z_1 = 10 μm, −z_0 = −10 μm, D = 50 mm, and the sample count of the blurred image is 512 × 512, so that k is very close to 1 and G(m, n) can approximately be transformed as

(10)

where the coefficients are determined by the bilinear-interpolation weights and the system constants.

Taking the inverse Fourier transform of Eq. (10), the blurred image in the spatial
domain can be expressed as

(11)

In this way we set up a relationship between the image f(k, l) generated by the luminous plane located on the focal plane and the blurred image g(k, l).

g(k, l) = \sum_{i}\sum_{j} w(k, l; i, j)\, f(i, j) \qquad (12)

Here, w(k, l; i, j) are the weighting coefficients of the system, and we will use this relationship to simulate the degraded blurred image.
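The forward model above can be simulated numerically. The numpy-only sketch below approximates the depth integration of Eq. (5) as an absorption-weighted sum of Gaussian blurs in the spirit of Eq. (4), with the spread σ taken proportional to each plane's distance from the focal plane; the image size, μ, z_0, z_1, the plane count, and the σ-per-depth factor are all illustrative assumptions rather than values from the paper.

```python
import numpy as np

def gaussian_blur_fft(f, sigma):
    """Circular convolution of f with a normalized Gaussian PSF (Eq. (4))."""
    n = f.shape[0]
    ax = np.arange(n) - n // 2
    xx, yy = np.meshgrid(ax, ax)
    h = np.exp(-(xx**2 + yy**2) / (2.0 * max(sigma, 1e-6)**2))
    h /= h.sum()
    return np.real(np.fft.ifft2(np.fft.fft2(f) * np.fft.fft2(np.fft.ifftshift(h))))

def scintillator_blur(f, mu, z0, z1, n_planes=21, sigma_per_z=0.4):
    """Discretized Eq. (5): absorption-weighted sum of per-plane blurs."""
    zs = np.linspace(-z0, z1, n_planes)
    w = mu * np.exp(-mu * (zs - zs[0]))   # attenuation through the screen
    w /= w.sum()                          # normalize the discrete integral
    g = np.zeros_like(f)
    for z, wz in zip(zs, w):
        g += wz * gaussian_blur_fft(f, sigma_per_z * abs(z))  # sigma ~ CoC radius
    return g

# A point source on the focal plane: the output is the effective multi-plane PSF.
f = np.zeros((64, 64)); f[32, 32] = 1.0
g = scintillator_blur(f, mu=0.05, z0=10.0, z1=10.0)  # z in um, mu in 1/um (assumed)
```

Because every per-plane PSF is normalized and the weights sum to one, the model conserves the total signal while spreading it over a larger spot.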

Recovery algorithm

When the pixel count of the detector is small, it is easy to invert the system of
equations by conventional matrix theory. In practice, however, a detector usually
has tens of thousands of pixels, which precludes inverting the matrix directly.
For systems of equations this large, an attractive iterative method is the projection
onto convex sets (POCS) algorithm, developed by Kaczmarz [9], which can be briefly described as follows.

\vec{f}^{\,(k+1)} = \vec{f}^{\,(k)} + \lambda\,\frac{g_i - \vec{w}_i \cdot \vec{f}^{\,(k)}}{\vec{w}_i \cdot \vec{w}_i}\,\vec{w}_i \qquad (13)

The vector \vec{f}^{\,(k)} in Eq. (13) is composed of all the variables generated by the kth iteration, and \vec{w}_i is the weighting vector of the ith equation. In our model, we have a similar form, as described below.

\vec{f}^{\,(k+1)} = \vec{f}^{\,(k)} + \lambda\,\frac{g(m, n) - \vec{w}_{mn} \cdot \vec{f}^{\,(k)}}{\vec{w}_{mn} \cdot \vec{w}_{mn}}\,\vec{w}_{mn} \qquad (14)
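A minimal sketch of the Kaczmarz/POCS sweep of Eqs. (13)-(14): each row of the weighting matrix defines a hyperplane, and the estimate is projected onto each hyperplane in turn. The toy 2 × 2 system and the relaxation factor λ = 1 are assumptions for illustration.

```python
import numpy as np

def kaczmarz(W, g, n_sweeps=50, relax=1.0):
    """Project the estimate onto each hyperplane w_i . f = g_i in turn (Eq. (13))."""
    f = np.zeros(W.shape[1])
    for _ in range(n_sweeps):
        for wi, gi in zip(W, g):
            denom = wi @ wi
            if denom > 0.0:
                f += relax * (gi - wi @ f) / denom * wi
    return f

# Toy consistent system: the sweeps converge to the exact solution.
W = np.array([[2.0, 1.0], [1.0, 3.0]])
f_true = np.array([1.0, 2.0])
f_rec = kaczmarz(W, W @ f_true)
```

For a consistent system the sweeps converge to a solution without ever forming, let alone inverting, the full matrix, which is what makes the method attractive at detector scale.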

By using the POCS algorithm we can, in principle, solve the system of equations. However, since
various types of noise and error may occur during the imaging procedure, an exact
unique solution to the system of equations may not exist; we can only obtain
an approximate solution, so the restored image is degraded to some extent. In order
to improve the quality of the restored image, we adopt a compressed-sensing algorithm
[10] based on minimization of the image total variation (TV) during the solution procedure.
The TV algorithm was first developed for signal noise reduction and has recently been
introduced into CT reconstruction for insufficient-data problems [11-13]. In this paper we construct a TV expression similar to the one used in CT reconstruction.
Combining the above iterative algorithm with TV regularization, the new solution procedure
can be viewed as the optimization problem stated below.

\min_{\vec{f}} \|\vec{f}\|_{TV} \quad \text{subject to} \quad W\vec{f} = \vec{g}, \qquad \vec{f} \ge 0 \qquad (15)

In Eq. (15), the vector \vec{f} represents an image and is composed of the f(i, j), and \|\vec{f}\|_{TV} denotes the l1 norm of the gradient of the image represented by \vec{f}. The latter two parts of Eq. (15) are the data-consistency constraint and the
positivity constraint, and the former part is a regularization term. The TV expression
can be written as

\|\vec{f}\|_{TV} = \sum_{i,j} \sqrt{\big(f(i, j) - f(i-1, j)\big)^2 + \big(f(i, j) - f(i, j-1)\big)^2 + \varepsilon} \qquad (16)

where ε is a small positive number. The minimization of the TV of the image estimate can be achieved
with many optimization algorithms, such as the gradient descent or conjugate gradient
methods. In this paper we choose the gradient descent method to search for the minimum
that agrees with the data-consistency constraint. The derivative of \|\vec{f}\|_{TV} can be expressed in an approximate discrete form as

\frac{\partial \|\vec{f}\|_{TV}}{\partial f(i, j)} \approx \frac{2f(i, j) - f(i-1, j) - f(i, j-1)}{\sqrt{\varepsilon + (f(i, j) - f(i-1, j))^2 + (f(i, j) - f(i, j-1))^2}} - \frac{f(i+1, j) - f(i, j)}{\sqrt{\varepsilon + (f(i+1, j) - f(i, j))^2 + (f(i+1, j) - f(i+1, j-1))^2}} - \frac{f(i, j+1) - f(i, j)}{\sqrt{\varepsilon + (f(i, j+1) - f(i, j))^2 + (f(i, j+1) - f(i-1, j+1))^2}} \qquad (17)
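The TV of Eq. (16) is straightforward to express directly; for its derivative, the sketch below uses a central finite-difference approximation as a transparent (if slow) stand-in for the closed discrete form of Eq. (17). The image sizes and the value of ε are assumptions.

```python
import numpy as np

EPS = 1e-8  # the small positive epsilon of Eq. (16)

def tv(f):
    """Discrete TV over the image interior, in the spirit of Eq. (16)."""
    dx = f[1:, 1:] - f[:-1, 1:]   # f(i, j) - f(i-1, j)
    dy = f[1:, 1:] - f[1:, :-1]   # f(i, j) - f(i, j-1)
    return np.sum(np.sqrt(dx**2 + dy**2 + EPS))

def tv_grad(f, h=1e-6):
    """Central finite-difference gradient of tv(f) (slow but transparent)."""
    g = np.zeros_like(f)
    for idx in np.ndindex(f.shape):
        fp = f.copy(); fp[idx] += h
        fm = f.copy(); fm[idx] -= h
        g[idx] = (tv(fp) - tv(fm)) / (2.0 * h)
    return g

# A flat image has (near-)zero TV and TV gradient, while stepping a noisy
# image against its TV gradient lowers its TV.
flat = np.zeros((8, 8))
noisy = np.random.default_rng(0).normal(size=(8, 8))
```

The ε term keeps the square root differentiable where neighboring pixels are equal, which is exactly why the finite-difference probe is well behaved.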

We have now specified all the parameters needed in the solution procedure. The procedure
contains two phases, POCS and TV gradient descent, which are executed alternately
and iteratively: the POCS phase enforces data consistency, and the TV gradient
descent phase reduces the TV of the intermediate image produced by the POCS phase.
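The alternating procedure can be sketched as follows, here on a toy 1-D signal with a simplified 1-D TV; the matrix, signal, step lengths, and iteration counts are all illustrative assumptions. Each outer iteration runs one Kaczmarz sweep enforcing data consistency and positivity, then a few normalized TV gradient descent steps.

```python
import numpy as np

def tv1(f, eps=1e-8):
    """1-D analogue of the TV in Eq. (16)."""
    return np.sum(np.sqrt(np.diff(f)**2 + eps))

def tv1_grad(f, h=1e-6):
    """Finite-difference gradient of tv1 (stand-in for the closed form)."""
    g = np.zeros_like(f)
    for i in range(f.size):
        fp = f.copy(); fp[i] += h
        fm = f.copy(); fm[i] -= h
        g[i] = (tv1(fp) - tv1(fm)) / (2.0 * h)
    return g

def pocs_tv(W, g, n_outer=30, tv_steps=3, tv_step_len=0.02):
    """Alternate a POCS (Kaczmarz) sweep with TV gradient descent steps."""
    f = np.zeros(W.shape[1])
    for _ in range(n_outer):
        for wi, gi in zip(W, g):          # phase 1: data consistency, Eq. (13)
            f += (gi - wi @ f) / (wi @ wi) * wi
        f = np.clip(f, 0.0, None)         # positivity constraint of Eq. (15)
        for _ in range(tv_steps):         # phase 2: TV reduction
            d = tv1_grad(f)
            n = np.linalg.norm(d)
            if n > 0.0:
                f -= tv_step_len * d / n
    return f

# Toy recovery of a piecewise-constant signal (all values assumed).
rng = np.random.default_rng(1)
f_true = np.array([0.0, 0.0, 1.0, 1.0, 1.0, 0.0])
W = np.eye(6) + 0.1 * rng.normal(size=(6, 6))
f_rec = pocs_tv(W, W @ f_true)
```

Keeping the TV steps short relative to the POCS correction is the usual design choice: it lets the regularization smooth noise without pulling the iterate far from the data-consistent set.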