
Advanced Image Enhancement Method for Distant Vessels and Structures in Capsule Endoscopy.

1. Introduction

In the effort to obtain more information about the vessels and structures, particularly in the darker or distant parts of capsule endoscopic images, image contrast enhancement is the way to go. Several categories and subcategories of contrast enhancement methods exist in the literature, for example, Histogram Equalization (HE), Adaptive HE (AHE), and Contrast-Limited AHE (CLAHE), whose details are provided in [1, 2], as well as M1, a method proposed in [3], and M2, an enhancement method proposed in [4]; these were developed to deal generally with poor contrast problems in color and grayscale images, but they remain nonexhaustive and image dependent in their performance [5-12]. Today, capsule endoscopy is among the newest research and application areas in medicine and has caught the interest of many researchers because of the advantages it provides over traditional endoscopy in terms of patient comfort while exploring the entire gastrointestinal (GI) tract [3, 13-16]. To benefit clinically from images obtained with the capsule endoscope (CE), it is important to develop an advanced method that deals carefully with the poor contrast caused generally by the poor visibility conditions of the GI tract [17]. In this regard, a novel method using exclusively the bilinear interpolation algorithm was proposed in [4] to deal with (1) the artefacts and unnatural colors created by Histogram Equalization (HE) based methods, without the need to convert Red-Green-Blue (RGB) images to another color space [9, 10], and (2) the generalized overexposure problem of the method proposed previously in [3].
Although experimental demonstrations showed that the half-unit weighted-bilinear algorithm (HWB) proposed in [4] made considerable improvements over the method proposed in [3] (improvements that can also be seen in this paper's Figures 5(a)-11(a)), a gastroenterologist (OH) could rate highly only 70% of the enhanced images presented in the experimental demonstrations of [4]. Such a rating was due to overenhancement of the neighborhoods of the brightest areas (i.e., specular areas) by the HWB and bad intensity transitions between darker and brighter areas (as can be seen in this paper's Figures 5(b)-11(b)). Since the CE's main light source, implanted in it, is composed of a group of Light Emitting Diodes, when such light falls onto a GI surface tissue, some of the beams are reflected back straightaway (specular reflection), while the rest penetrate the tissue before being reflected (diffuse reflection), thus forming specular highlights on capsule endoscopic images [18]. Although the sporadic presence of specular areas remains unavoidable, it is not a major issue in this direction [19, 20], except when such areas are enlarged via overenhancement of their neighborhoods. The advanced method proposed in this paper takes into account those possible enlargements and the jagged transitions of intensities between darker and brighter areas. The disadvantage of the proposed method (PM) is that it does not cognitively suppress (or appropriately underenhance) specular highlight spots, and it does not always perform very well or achieve the best scores with small-sized images. Figure 1 shows the CE device and the human GI tract.

This paper is organized as follows: the second part gives a brief introduction to the state-of-the-art key algorithm for dealing with darker areas, proposed in [4]. The third part presents the proposed method and its summary. Experimental demonstrations, results, and evaluations by a gastroenterologist are provided in the fourth part, and the conclusion is given in the fifth part.

2. State of the Art

The half-unit weighted-bilinear algorithm (HWB) is a novel RGB image enhancement strategy developed in [4] to remove the Histogram Equalization (HE) based artefacts and disadvantages [9, 10] without resorting to complex enhancement strategies [23, 24]. A key point on which the HWB differs from the conventional bilinear algorithm (CWB) [25-28] is that it uses a half-unit weighting strategy to calculate new pixel values for each overlapping four-pixel group in the destination matrix or image [4]. The mathematical expression on which the CWB is based is given in (1). (r, c), (r, c+1), (r+1, c), and (r+1, c+1) are pixel (P_n) locations on the pixel grid, as shown in Figure 2(b) (n indexes the four nearest neighbors).

CWB(r', c') = Σ_{n=1}^{4} P_n × CW_n, (1)

where CW_1 = (1 − Δr) × (1 − Δc), CW_2 = Δr × (1 − Δc), CW_3 = (1 − Δr) × Δc, and CW_4 = Δr × Δc represent the weighting functions in the CWB, and CWB(r', c') is the interpolated value. Note that, in Figure 2, CWB(r', c') is also represented by Y(r', c').
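The weighting in (1) can be sketched in a few lines of Python (the paper's own implementation was in MATLAB; the function names here are illustrative only):

```python
def cwb_weights(dr, dc):
    """Conventional bilinear weights CW_1..CW_4 for fractional offsets (dr, dc)."""
    return [(1 - dr) * (1 - dc),  # CW_1, weight of P(r, c)
            dr * (1 - dc),        # CW_2, weight of P(r+1, c)
            (1 - dr) * dc,        # CW_3, weight of P(r, c+1)
            dr * dc]              # CW_4, weight of P(r+1, c+1)

def cwb_interpolate(p, dr, dc):
    """Interpolated value CWB(r', c') from the four nearest neighbours P_1..P_4."""
    return sum(pn * wn for pn, wn in zip(p, cwb_weights(dr, dc)))
```

Note that the four weights always sum to one, so a uniform neighbourhood is reproduced unchanged.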

The mathematical expression for the HWB algorithm is given in (2). Equation (2) is the result of applying a constant half-unit weight in place of the weights CW_n in (1):

HWB(r', c') = (Σ_{n=1}^{4} P_n)/HW, (2)

where HW = 2 is the constant weighting denominator of the HWB. It is important to note that (2) is the main function used to calculate pixel values in the preliminary enhancement stage, as explained in [4].
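Equation (2) replaces the four position-dependent weights with a constant, so every neighbour contributes a half-unit weight; a one-line Python sketch (illustrative, not the authors' MATLAB code):

```python
def hwb(p):
    """Half-unit weighted-bilinear value of (2): the sum of the four nearest
    neighbours P_1..P_4 divided by HW = 2, i.e. twice the neighbourhood mean,
    which is what brightens the darker areas."""
    HW = 2
    return sum(p) / HW
```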

3. Proposed Method

The threshold weighted-bilinear algorithm (TWB), a novel algorithm that operates on an empirical threshold value ∂ and the Hue-Saturation-Value (HSV) component V, is developed and proposed in this paper to achieve the overall proposed method (PM). The objective in developing the TWB is to back the HWB algorithm and achieve an overall enhancement scheme that gives gastroenterologists better visibility of vessels and structures in distant parts of capsule endoscopic images, for their clinical diagnosis, than was previously achieved in [4]. The mathematical expression for the TWB's weighting function (TW) is given as follows:

TW_{n(m)} = (CW_n + HW)/H_m. (3)

H_m is the denominator of the weighting function TW, whose mathematical expression is given as follows:

H_m = β + ω, (4)

where β is the initial value of the TW denominator, whose optimal range, leading to a better linkage between the darker and brighter areas (see Figures 3(b)-3(d)), was experimentally located between 1.45 and 1.50, and ω is the difference between consecutive H_m denominators. Note that the experimental value for ω was found to be equal to or less than 0.025, so that the border between darker and brighter areas becomes invisible. Here, m is the number of weighting steps between the empirical threshold and maximum values of the component V, as shown in (5). This number is obtained by dividing the component V maximum value, φ, by the component V empirical threshold value ∂ (whose default value is 0.4).

m = φ/∂. (5)

Note that φ is equal to 1 for the component V. The mathematical expression for the TWB is given by the following:

TWB(r', c')_m = Σ_{n=1}^{4} P_n × TW_{n(m)}. (6)
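Equations (3)-(6) can be combined into a short Python sketch. Two points are my own reading, not stated explicitly in the text: HW is taken as the half-unit weight 1/2 in (3), and H_m is assumed to grow from β in steps of ω, one step per weighting level m:

```python
def twb_value(p, dr, dc, step, beta=1.5, omega=0.025, hw=0.5):
    """Threshold weighted-bilinear value of (6) for one pixel.

    p    : the four nearest neighbours P_1..P_4
    step : weighting step between the V threshold and the V maximum
    beta, omega, hw : assumed readings of (3)-(4), see the note above
    """
    h_m = beta + step * omega                     # assumed stepping of H_m, (4)
    cw = [(1 - dr) * (1 - dc), dr * (1 - dc),     # conventional weights CW_n
          (1 - dr) * dc, dr * dc]
    tw = [(w + hw) / h_m for w in cw]             # TW_n(m) = (CW_n + HW)/H_m, (3)
    return sum(pn * wn for pn, wn in zip(p, tw))  # TWB(r', c')_m, (6)
```

Under these readings the gain on a uniform neighbourhood is (1 + 4·hw)/H_m, which is about 2 near β = 1.5 and decreases slightly at each step, smoothing the transition between darker and brighter areas.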

The PM's mathematical expression is given by (7). The PM's equation combines (2) and (6). The combination is enabled by a condition that checks whether the component V values are greater or less than the component V empirical threshold value ∂. If the condition is true (Yes), the matrix output of (2) is added together with the reference matrix; if it is false (No), the matrix output of (6) is added together with the reference matrix. The final mapping of all output matrices constitutes the PM enhanced image. A summary of the PM is given in Figure 4. As can be seen, the simplicity of the PM's design is also a computational advantage, although processing time is not the main concern in this work.

PM(r', c') = { HWB(r', c') + I(r', c'),     0 ≤ η < ∂,
             { TWB(r', c')_m + I(r', c'),   ∂ ≤ η ≤ φ, (7)

where I is the reference RGB image (often treated as a poor contrast image) and η is any value of the V component, ranging between zero and φ.
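A per-pixel sketch of the selection rule behind (7), under two assumptions of mine: darker pixels (V below the threshold ∂) take the HWB branch, since the HWB targets darker areas, and "added together with the reference matrix" is rendered here as a plain average with the reference:

```python
def pm_pixel(i_ref, hwb_out, twb_out, v, d=0.4):
    """Pick the HWB result in darker areas and the TWB result elsewhere,
    then combine with the reference pixel i_ref (averaging is an assumption)."""
    enhanced = hwb_out if v < d else twb_out  # threshold test on component V
    return 0.5 * (i_ref + enhanced)
```

Mapping this rule over every pixel of the output matrices yields the final PM enhanced image described above.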

4. Experiments and Results

Experiments using standard image quality metrics, and an evaluation by a gastroenterologist of the visibility of vessels and structures in the PM enhanced images, are presented here. The PM software was implemented in MATLAB R2017a. The image quality metrics used are the structural similarity index (SSIM) and the feature similarity index (FSIM).

The reason is that, for capsule endoscopic applications where the pursuit of diagnostic quality is the main concern, metrics that take into account the image's diagnostic structures and features (with reference to the reference image) are more appropriate than those that do not. Well-documented and widely available scientific works on such metrics exist in the literature [29-31]; therefore, their explanations, mathematical formulas, demonstrations, and so forth are not included in this paper. Also, metrics widely used in statistics of visual representation, such as contrast and intensity enhancement metrics, have been used to measure the contrast and intensity distortion in each RGB channel of the CE images. However, such methods normally apply to grayscale images; therefore, the results presented were obtained by processing each RGB channel separately. The Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE), which operates in the spatial domain and, according to [32], was the best performing metric for capsule images correlating with diagnostic quality, has been used to quantify possible losses of "naturalness" in the enhanced images.
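As a concrete illustration of the per-channel contrast and intensity measurements, one common way to express them is as the relative change in a channel's standard deviation and mean; the exact formulas of [16] may differ, so this is only an assumed sketch:

```python
from statistics import mean, pstdev

def contrast_change(ref_channel, enh_channel):
    """Relative change in channel standard deviation (one common contrast measure)."""
    return (pstdev(enh_channel) - pstdev(ref_channel)) / pstdev(ref_channel)

def intensity_change(ref_channel, enh_channel):
    """Relative change in channel mean intensity."""
    return (mean(enh_channel) - mean(ref_channel)) / mean(ref_channel)
```

Applied separately to the flattened R, G, and B channels, positive values indicate an increase over the reference, as in Tables 3 and 4.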

Capsule endoscopic images, downloaded from the capsule endoscopy database for medical decision support, have been used as test images [22].

In Tables 1 and 2, M1 represents the method proposed in [3], M2 represents the method proposed in [4], and PM represents the method proposed in this paper.

As can be seen in Figures 5-11, only Figures 5(c)-11(c) are brighter, but not too bright, with preserved hue compared to Figures 5(d)-11(d). Furthermore, the distant parts of the capsule endoscopic images in Figures 5(c)-11(c) are clearer or more visible than in the reference images. In this way, diagnostic details or information about the vessels and structures can be seen better by a gastroenterologist in Figures 5(c)-11(c) than in Figures 5(a)-11(a), 5(b)-11(b), and 5(d)-11(d). In an ideal world, images obtained from capsule endoscopy would be perfect for a gastroenterologist in the sense of sharp image details, brilliant colors, perfect contrast, and no artefacts; so far, no such perfect capsule endoscopic images exist. However, based on an evaluation of the reference, proposed, and even upscaled images (an additional experiment, whose details are not included in this paper, conducted using Lanczos interpolation for three-times (3x) upscaling), a gastroenterologist (OH) concluded that the PM enhanced images were the best ones based on the information about the vessels, the contrast in the images, and the view of the structures in the most distant parts of the images.

In some of the series the PM enhanced images were brighter, and hence it was easier to see the structures also in the distant parts of the images. In some of the series, the upscaled images were too blurred to give more information than the PM enhanced images, but most of the upscaled images gave more information than the normal-sized ones. As mentioned earlier, details about Histogram Equalization (HE), Adaptive HE (AHE), and Contrast-Limited AHE (CLAHE) are provided in [1, 2]. As can be seen in Tables 1 and 2, the PM produced the highest SSIM and FSIMc values. An SSIM or FSIMc value closest or equal to one generally means the best quality because, in that case, the similarity to the reference image's diagnostic quality structures and features is at or near its maximum.

Unlike M1 and M2, the PM produced brighter, but not too bright, images and preserved the hue. It is important to note that Histogram Equalization (HE), Adaptive HE (AHE), and Contrast-Limited AHE (CLAHE) severely change the hue of reference images in RGB color space, as demonstrated in [3, 9]; therefore, capsule endoscopic images enhanced by such methods have not been included in this part. Apart from widely known standard image quality metrics, metrics widely used in statistics of visual representation have also been used in this paper, as stated earlier, to measure the contrast and intensity distortion in each RGB channel of the CE images [16]. It is important to remember that such methods are normally applied to grayscale images; therefore, the results presented in Tables 3 and 4 were obtained by processing each RGB channel separately. It is also important to note that processing the intensities of each RGB channel is not the same as processing the intensities of a grayscale image. However, in the effort to find out how much the enhancement methods affected each channel's intensity, each channel was processed separately using those metrics. The results obtained proved that diagnostic quality cannot be correctly assessed based on the highest values of contrast or intensity enhancement in each channel's intensities between the reference and enhanced images [16]. For example, in Table 3, the method proposed in [3] gave the highest values in terms of contrast enhancement (in all channels, for almost every image), but the corresponding images, shown in Figures 5(a)-11(a), showed that the images produced by M1 are too bright and some image details are not visible compared to M2 and the PM.
As another example, Table 4 shows that the HE gave the highest values in terms of intensity enhancement (in the G and B channels, for every image) but, as shown in [3], the output of the HE method does not give any useful diagnostic information because the reference hue is damaged severely. However, if we analyze the statistics provided by these metrics in another way, for example, assuming that positive statistics mean better quality, the enhanced images and statistics produced by the PM proved to be the most positively correlated to the reference image without overexposing or overenhancing image details. Figure 12 presents the BRISQUE scores obtained. For the image category whose size is 288 x 288 x 3 (i.e., image 5, image 6, and image 7), the PM achieved generally the best scores. For the image category whose size is 188 x 188 x 3 (i.e., image 1, image 2, image 3, and image 4), the PM achieved generally the second best scores. This suggests that the PM works better with larger images than with smaller ones. However, the PM achieved the best scores compared to all the enhancement methods mentioned.

5. Conclusion

An advanced enhancement method for vessels and structures in capsule endoscopic images has been proposed in this paper. The proposed method mainly used two algorithms, the HWB and the TWB, to deal with darker and brighter areas, respectively, together with additional strategies to create a smooth intensity transition between such areas. The overall enhancement method produced enhanced images with a moderate increase in brightness in darker/distant areas while preserving the hue of the reference images (without enlarging the specular highlight spots or overenhancing their neighborhoods). Compared to the previous works, more details, especially in brighter areas, could still be seen after the PM enhancement operations because the PM avoids overenhancing the neighborhoods of the brighter areas. In this way, it was easier to see details about the vessels and structures, for example, in the pursuit of precancerous or polypous tissues or even inflammations, in the PM enhanced images than in the reference images and in both the M1 and M2 enhanced images. In the evaluation conducted together with a gastroenterologist (OH), the gastroenterologist concluded that the PM enhanced images were the best ones based on the information about the vessels, the contrast in the images, and the view of the structures in the most distant parts of the capsule endoscopic images used. The usefulness of the PM enhanced images was also supported by the statistics obtained using the SSIM and FSIMc metrics. Furthermore, in the effort to find out how much the PM affected each channel's intensity, each channel was processed separately using contrast and intensity enhancement metrics. The first analysis demonstrated that the pursued diagnostic quality could not be correctly assessed based on the highest values of contrast or intensity enhancement in each channel's intensities between the reference and enhanced images.

The second analysis suggested that the statistics provided by these metrics, read in another way, could indicate better quality when they are closer to zero in the positive direction. The enhanced images and statistics produced by the PM proved to be the most positively correlated to the reference image, without overexposing or overenhancing the neighborhoods of brighter areas or the image's textural details, compared to the outputs of the M1 and M2 methods. Future work can be dedicated to the development of an innovative enhancement method that enables the sharpness of capsule endoscopic image details desired by gastroenterologists, color brilliance, and freedom from artefacts, and that can underenhance specular highlight spots, since such spots hide the details in parts of the image. On top of that, since the PM gave better BRISQUE scores in 2 out of 3 types of test images of one size, and in only 1 out of 4 types of test images of the smaller size, future effort will be dedicated to an "intelligent" or "cognitive" method that would lead to the visibility desired by gastroenterologists and the best scores for all test image sizes (in terms of BRISQUE, FSIM, etc.).

Conflicts of Interest

The authors confirm that the funding mentioned in Acknowledgments does not lead to any conflicts of interest and that there are no other possible conflicts of interest regarding the publication of this paper.

Authors' Contributions

Olivier Rukundo conceived and designed the proposed method's algorithms, conducted experiments using standard image quality metrics as well as metrics widely used in statistics of visual representation, and wrote the manuscript according to the expected standards in scientific publishing. Oistein Hovde evaluated the PM enhanced images against reference images, in both normal and upscaled sizes, and validated the usefulness of the PM enhanced images in terms of the information about the visibility of vessels, contrast in images, and structures for gastroenterological diagnosis. Marius Pedersen discussed the PM critically, gave feedback on the manuscript, and contributed to the evaluation of the PM using the BRISQUE metric. All authors read and approved the final manuscript.


Acknowledgments

This research has been supported by the Research Council of Norway (Project no. 260175, entitled Upscaling Based Image Enhancement for Video Capsule Endoscopy) through Project no. 247689: Image Quality Enhancement in MEDical Diagnosis, Monitoring, and Treatment, IQ-MED.


References

[1] R. Hummel, "Image enhancement by histogram transformation," Computer Graphics and Image Processing, vol. 6, no. 2, pp. 184-195, 1977.

[2] S. M. Pizer, E. P. Amburn, J. D. Austin et al., "Adaptive histogram equalization and its variations," Computer Vision Graphics and Image Processing, vol. 39, no. 3, pp. 355-368, 1986.

[3] O. Rukundo, "Effects of Empty Bins on Image Upscaling in Capsule Endoscopy," in Proceedings of SPIE 10420, Ninth International Conference on Digital Image Processing (ICDIP '17), 2017.

[4] O. Rukundo, "Half-Unit Weighted Bilinear Algorithm for Image Contrast Enhancement in Capsule Endoscopy," in Proceedings of SPIE, Ninth International Conference on Graphic and Image Processing (ICGIP '17), 2017.

[5] D. Menotti, L. Najman, J. Facon, and A. D. A. Araujo, "Multihistogram equalization methods for contrast enhancement and brightness preserving," IEEE Transactions on Consumer Electronics, vol. 53, no. 3, pp. 1186-1194, 2007.

[6] M. A.-A. Wadud, M. H. Kabir, M. A. A. Dewan, and O. Chae, "A dynamic histogram equalization for image contrast enhancement," IEEE Transactions on Consumer Electronics, vol. 53, no. 2, pp. 593-600, 2007.

[7] T. Iwanami, T. Goto, S. Hirano, and M. Sakurai, "An adaptive contrast enhancement using regional dynamic histogram equalization," in Proceedings of the 2012 IEEE International Conference on Consumer Electronics (ICCE '12), pp. 719-722, January 2012.

[8] J.-S. Chiang, C.-H. Hsia, H.-W. Peng, and C.-H. Lien, "Color Image Enhancement with Saturation Adjustment Method," Journal of Applied Science and Engineering, vol. 17, no. 4, pp. 341-352, 2014.

[9] E. Provenzi, "Boosting the stability of wavelet-based contrast enhancement of color images through gamma transformations," Journal of Modern Optics, vol. 60, no. 14, pp. 1145-1150, 2013.

[10] E. Provenzi, C. Gatta, M. Fierro, and A. Rizzi, "A spatially-variant white-patch and gray-world method for color image enhancement driven by local contrast," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 30, no. 10, pp. 1757-1770, 2008.

[11] S. S. Bedi and R. Khandelwal, "Various Image Enhancement Techniques- A Critical Review," International Journal of Advanced Research in Computer and Communication Engineering, vol. 2, no. 3, pp. 1605-1609, 2013.

[12] D. J. Jobson, Z.-U. Rahman, and G. A. Woodell, "A multiscale retinex for bridging the gap between color images and the human observation of scenes," IEEE Transactions on Image Processing, vol. 6, no. 7, pp. 965-976, 1997.

[13] MiroCam® Capsule Endoscope, products/gi-physician-products/mirocam-capsule-endoscope, 30 June 2017.

[14] D. K. Iakovidis and A. Koulaouzidis, "Software for enhanced video capsule endoscopy: Challenges for essential progress," Nature Reviews Gastroenterology & Hepatology, vol. 12, no. 3, pp. 172-186, 2015.

[15] D. Barbosa, J. Ramos, and C. Lima, "Wireless capsule endoscopic frame classification scheme based on higher order statistics of multi-scale texture descriptors," in Proceedings of the 4th European Conference of the International Federation for Medical and Biological Engineering (ECIFMBE '08), pp. 200-203, November 2008.

[16] M. S. Imtiaz and K. A. Wahid, "Color enhancement in endoscopic images using adaptive sigmoid function and space variant color reproduction," Computational and Mathematical Methods in Medicine, vol. 2015, Article ID 607407, 2015.

[17] E. Sakai, H. Endo, S. Kato et al., "Capsule endoscopy with flexible spectral imaging color enhancement reduces the bile pigment effect and improves the detectability of small bowel lesions," BMC Gastroenterology, vol. 12, article no. 83, 2012.

[18] R. T. Tan and K. Ikeuchi, "Separating reflection components of textured surfaces using a single image," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 2, pp. 178-193, 2005.

[19] A. Koulaouzidis and A. Karargyris, "Use of enhancement algorithm to suppress reflections in 3-D reconstructed capsule endoscopy images," World Journal of Gastrointestinal Endoscopy, vol. 5, no. 9, p. 465, 2013.

[20] S. N. Adler and C. Hassan, "Colon Capsule Endoscopy: Quo Vadis?" Chap. 11 in Colonoscopy and Colorectal Cancer Screening - Future Directions, InTech, February 13, 2013.

[21] MiroCam® Capsule Endoscope, products/gi-physician-products/mirocam-capsule-endoscope, 7 July 2017.

[22] A. Koulaouzidis and D. K. Iakovidis, KID: Koulaouzidis-Iakovidis database for capsule endoscopy, http://is-innovation.eu/kid, 4 February 2016.

[23] T. H. Khan and K. A. Wahid, "Design of a lossless image compression system for video capsule endoscopy and its performance in in-vivo trials," Sensors, vol. 14, no. 11, pp. 20779-20799, 2014.

[24] V. P. Gopi, P. Palanisamy, and S. I. Niwas, "Capsule endoscopic colour image denoising using complex wavelet transform," Communications in Computer and Information Science, vol. 292, pp. 220-229, 2012.

[25] O. Rukundo and H. Q. Cao, "Nearest neighbor value interpolation," International Journal of Advanced Computer Science and Applications, vol. 3, no. 4, pp. 25-30, 2012.

[26] O. Rukundo, H. Cao, and M. Huang, "Optimization of bilinear interpolation based on ant colony algorithm," Lecture Notes in Electrical Engineering, vol. 137, pp. 571-580, 2012.

[27] O. Rukundo and H. Cao, "Advances on image interpolation based on ant colony algorithm," SpringerPlus, vol. 5, no. 1, article no. 403, 2016.

[28] O. Rukundo and B. T. Maharaj, "Optimization of image interpolation based on nearest neighbour algorithm," in Proceedings of the 9th International Conference on Computer Vision Theory and Applications (VISAPP '14), pp. 641-647, January 2014.

[29] L. Zhang, L. Zhang, X. Mou, and D. Zhang, "FSIM: a feature similarity index for image quality assessment," IEEE Transactions on Image Processing, vol. 20, no. 8, pp. 2378-2386, 2011.

[30] H. R. Sheikh, M. F. Sabir, and A. C. Bovik, "A statistical evaluation of recent full reference image quality assessment algorithms," IEEE Transactions on Image Processing, vol. 15, no. 11, pp. 3440-3451, 2006.

[31] Z. Wang, E. P. Simoncelli, and A. C. Bovik, "Multi-scale structural similarity for image quality assessment," in Proceedings of the 37th Asilomar Conference on Signals, Systems and Computers, pp. 1398-1402, Pacific Grove, Calif, USA, November 2003.

[32] M. Pedersen, O. Cherepkova, and A. Mohammed, "Image Quality Metrics for the Evaluation and Optimization of Capsule Video Endoscopy Enhancement Techniques," Journal of Imaging Science & Technology, vol. 61, no. 4, pp. 404021-404028, 2017.

Olivier Rukundo, (1) Marius Pedersen, (1) and Oistein Hovde (2)

(1) Department of Computer Science, Norwegian University of Science and Technology, Trondheim, Norway

(2) Department of Gastroenterology, Innlandet Hospital Trust, 2819 Gjovik, Norway

Correspondence should be addressed to Olivier Rukundo;

Received 10 July 2017; Revised 21 September 2017; Accepted 9 October 2017; Published 31 October 2017

Academic Editor: Kaan Yetilmezsoy

Caption: FIGURE 1: (a) Capsule endoscope (CE). (b) Gastrointestinal (GI) tract where the capsule passes. Images downloaded from the MiroCam® Capsule Endoscope from Medivators [21]. This figure has also been used in [4].

Caption: FIGURE 2: (a) represents the source pixel grid; (b) represents a four-pixel group; (c) represents the destination pixel grid [4].

Caption: FIGURE 3: (a) and (c) are two test images from the capsule endoscopy database for medical decision support [22]. (b) and (d) show the darker and brighter maps in component V of (a) and (c), respectively. Here, the darker areas are defined by V < ∂; the brighter areas are defined by V ≥ ∂.

Caption: FIGURE 4: The proposed method (PM) summary.

Caption: FIGURE 5: Image 1, size 180 x 180; Image 1 (a) enhanced by M1; Image 1 (b) enhanced by M2; Image 1 (c) enhanced by PM; Image 1 (d) = reference image.

Caption: FIGURE 6: Image 2, size 180 x 180; Image 2 (a) enhanced by M1; Image 2 (b) enhanced by M2; Image 2 (c) enhanced by PM; Image 2 (d) = reference image.

Caption: FIGURE 7: Image 3, size 180 x 180; Image 3 (a) enhanced by M1; Image 3 (b) enhanced by M2; Image 3 (c) enhanced by PM; Image 3 (d) = reference image.

Caption: FIGURE 8: Image 4, size 180 x 180; Image 4 (a) enhanced by M1; Image 4 (b) enhanced by M2; Image 4 (c) enhanced by PM; Image 4 (d) = reference image.

Caption: FIGURE 9: Image 5, size 288 x 288; Image 5 (a) enhanced by M1; Image 5 (b) enhanced by M2; Image 5 (c) enhanced by PM; Image 5 (d) = reference image.

Caption: FIGURE 10: Image 6, size 288 x 288; Image 6 (a) enhanced by M1; Image 6 (b) enhanced by M2; Image 6 (c) enhanced by PM; Image 6 (d) = reference image (with a highly visible specular highlight spot).

Caption: FIGURE 11: Image 7, size 288 x 288; Image 7 (a) enhanced by M1; Image 7 (b) enhanced by M2; Image 7 (c) enhanced by PM; Image 7 (d) = reference image.

Caption: FIGURE 12: The comparison of M1, M2, PM, and reference images based on BRISQUE scores.
TABLE 1: Structural similarity index, SSIM.

            HE      AHE     CLAHE      M1       M2       PM

Image 1   0.1237   0.6485   0.4201   0.7591   0.9061   0.9228
Image 2   0.1708   0.5829   0.3924   0.7107   0.9024   0.9415
Image 3   0.1035   0.6103   0.3675   0.8009   0.9253   0.9491
Image 4   0.1041   0.6544   0.4318   0.7671   0.9142   0.9353
Image 5   0.1078   0.6581   0.4156   0.6849   0.8885   0.9159
Image 6   0.0820   0.6319   0.3673   0.7172   0.8929   0.9062
Image 7   0.1003   0.6542   0.3985   0.7730   0.9133   0.9299

TABLE 2: Feature similarity index color, FSIMc.

            HE      AHE     CLAHE      M1       M2       PM

Image 1   0.7471   0.8014   0.7765   0.8685   0.9500   0.9660
Image 2   0.8932   0.8144   0.8018   0.8383   0.9428   0.9741
Image 3   0.8726   0.8282   0.8072   0.8133   0.9321   0.9690
Image 4   0.8175   0.8123   0.7780   0.8673   0.9492   0.9728
Image 5   0.8873   0.8397   0.8029   0.8674   0.9513   0.9618
Image 6   0.7764   0.8716   0.8366   0.8734   0.9532   0.9579
Image 7   0.8906   0.8553   0.8216   0.8945   0.9541   0.9646

TABLE 3: Contrast enhancement in RGB channels.

             R         G        B         R         G        B

                      M1                           M2

Image 1   1.4871    1.7152    2.6847   0.6170    0.7056    1.1189
Image 2   1.1449    1.9602    2.6177   0.4896    0.8251    1.0918
Image 3   0.6263    1.5121    2.9125   0.2524    0.6325    1.2030
Image 4   0.9968    1.1078    2.1161   0.4093    0.4490    0.8893
Image 5   0.6197    1.0664    2.1690   0.2402    0.4448    0.9033
Image 6   0.8775    1.1147    1.9780   0.3650    0.4659    0.8229
Image 7   1.0862    2.3800    2.4949   0.4629    0.9928    1.0208

                      HE                           AHE

Image 1   -0.2728   0.7372    3.5268   0.1090    0.6680    1.6756
Image 2   -0.3860   0.0399    0.6312   0.0222    0.5216    1.0866
Image 3   -0.5415   0.0705    3.3189   -0.1862   0.3917    2.4028
Image 4   -0.4388   0.1519    1.7447   -0.0520   0.3140    1.1327
Image 5   -0.3852   -0.1038   0.5886   -0.1344   -0.0435   0.3814
Image 6   -0.2046   -0.0317   1.1356   -0.1869   -0.0964   0.6501
Image 7   -0.3498   0.2051    1.5053   -0.0333   0.4124    1.5220

             R         G        B

                      PM

Image 1   0.5971    0.3816    0.4361
Image 2   0.5083    0.4953    0.4819
Image 3   0.2408    0.1254    0.2000
Image 4   0.3882    0.1263    0.1579
Image 5   0.2174    0.1472    0.0823
Image 6   0.3696    0.3790    0.2357
Image 7   0.4755    0.5610    0.4777

                      CLAHE

Image 1   -0.2641   0.2772    1.1922
Image 2   -0.3606   -0.0143   0.3703
Image 3   -0.4900   -0.0477   1.6584
Image 4   -0.3701   -0.0410   0.6516
Image 5   -0.4766   -0.3579   0.0057
Image 6   -0.5088   -0.4126   0.1961
Image 7   -0.4192   -0.0744   0.8891

TABLE 4: Intensity enhancement in RGB channels.

            R        G        B         R        G        B

                     M1                          M2

Image 1   0.7122   0.9117   1.0103   0.3577    0.4580   0.5084
Image 2   0.5706   0.8545   0.9769   0.2868    0.4290   0.4905
Image 3   0.4384   0.8199   1.0248   0.2205    0.4119   0.5154
Image 4   0.5781   0.8063   0.9597   0.2905    0.4052   0.4827
Image 5   0.5706   0.7112   0.9256   0.2871    0.3580   0.4670
Image 6   0.6501   0.7229   0.9192   0.3279    0.3648   0.4674
Image 7   0.5860   0.9307   1.0007   0.2948    0.4680   0.5051

                     HE                         AHE

Image 1   0.4130   1.5860   2.9732   0.0055    0.3837   0.6665
Image 2   0.2875   0.8905   1.5424   -0.0418   0.2090   0.4789
Image 3   0.2287   1.3171   3.0543   -0.1207   0.2609   0.7773
Image 4   0.3425   1.5815   2.8684   -0.0217   0.3595   0.6638
Image 5   0.6300   1.2110   3.6746   0.1006    0.2632   0.7072
Image 6   1.1621   1.5345   5.1613   0.1428    0.2312   0.9084
Image 7   0.3651   0.9644   2.7345   0.0288    0.2774   0.7700

             R        G        B

                     PM

Image 1   0.3432    0.3306   0.3392
Image 2   0.2841    0.3005   0.2945
Image 3   0.2080    0.1886   0.2131
Image 4   0.2762    0.2407   0.2514
Image 5   0.2744    0.2539   0.1740
Image 6   0.3242    0.3333   0.2722
Image 7   0.2933    0.3257   0.3099

                     CLAHE

Image 1   -0.0222   0.5710   1.0939
Image 2   -0.0958   0.2221   0.5715
Image 3   -0.1625   0.3879   1.1629
Image 4   -0.0416   0.5325   1.0731
Image 5   0.1067    0.3879   1.3080
Image 6   0.2708    0.4367   1.7947
Image 7   -0.0241   0.3060   1.0830
COPYRIGHT 2017 Hindawi Limited

Author:Rukundo, Olivier; Pedersen, Marius; Hovde, Oistein
Publication:Computational and Mathematical Methods in Medicine
Article Type:Report
Date:Jan 1, 2017
