
Automatic Radiographic Position Recognition from Image Frequency and Intensity.

1. Introduction

Digital X-ray imaging generates massive amounts of clinical image data in radiology departments every day. These data need to be classified, retrieved, and analyzed in Picture Archiving and Communication Systems (PACS) or Radiology Information Systems (RIS), and processing them demands automated, computationally efficient approaches [1, 2]. Among such approaches, image classification, radiographic position identification, and artificial intelligence analysis are the most widely used; in this sense, image retrieval and the recognition of the radiographic position are the most fundamental parts.

Traditional medical image retrieval is semimanual, obtaining clinical information from manually retrieved image annotations and databases. This approach is labor intensive and subject to human error and operator variation, which lowers its accuracy [1]. Automated methods based on image retrieval use image features such as color [2, 3], texture [4], and shape [5]. Wang et al. proposed a dynamic interpolation method for stereo microscopic measurements, but the scheme required a large number of matching elements [6]. Histograms have also been widely used for image retrieval but have their own disadvantages [7, 8]. Other image retrieval techniques, such as the wavelet transform (WT) [9], the Fourier transform (FT) [10], local binary patterns (LBP) [11], and Tamura texture features [12], can recognize an image type through library searching and image classification. However, position information cannot be determined automatically with these algorithms, and they cannot track the imaged organ, as investigated by Jiao et al. [13].

Pattern recognition can automatically process and analyze digital images, as described by Paparo et al. [14, 15]. Feature selection methods reported by Silva et al. [16] and Hussain [17] have been used with traditional learning algorithms such as support vector machines (SVM) and k-means for image retrieval but need large datasets for training. Medical expert systems [18, 19] use mixed algorithms to extract the target area. Multilayer perceptron neural networks (MLPNN) can identify tissues and diseases [20-22]; however, the process is complex, and the processing time is too long for clinical use. Recently, deep learning has been introduced to medical image processing and has achieved results comparable with professional expertise [23-25], but the required data quantity and the attainable accuracy remain debated [26].

Therefore, in this paper, we propose a method that combines frequency curve classification with gray-scale matching for image retrieval and matching. It uses a whole-body phantom image as the template mask for anatomical and radiographic location marking, with a lower time cost and higher accuracy.

2. Materials and Methods

2.1. Image Preprocessing. Raw digital radiographic image data typically have a large dynamic range and characteristic gray-level features. We therefore apply linear histogram stretching, followed by a median filter for noise reduction. The respective equations are

$F_H(x, y) = \frac{65535}{B - A}\left[f(x, y) - A\right]$, (1)

where

$A = \min f(x, y)$,

$B = \max f(x, y)$, (2)

and

$F_w(x, y) = \mathrm{med}\{f(x - k, y - l),\ (k, l) \in w\}$, (3)

where w is a 5 × 5 window; the constant 65535 stretches the output to the full 16-bit range.
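As a minimal sketch of this preprocessing step, the following Python snippet implements (1)-(3) with NumPy and SciPy; the function name, the 16-bit output range, and the square 5 × 5 window are illustrative assumptions rather than code released with the paper.

```python
import numpy as np
from scipy.ndimage import median_filter

def preprocess(image: np.ndarray) -> np.ndarray:
    """Linear histogram stretching, equations (1)-(2), then median filtering, equation (3)."""
    f = image.astype(np.float64)
    a, b = f.min(), f.max()                    # A and B in equation (2)
    stretched = (65535.0 / (b - a)) * (f - a)  # equation (1)
    return median_filter(stretched, size=5)    # equation (3), w = 5 x 5
```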

2.2. The Phantom X-Ray Image Masks. X-ray imaging phantoms are physical analogs of human body shapes and tissues, as studied by DeWerd and Kissick [27]. Plastic and nylon are used to simulate the outline of the human body, the bones, and the primary tissues for whole-body radiography. We acquired X-ray images of the brain, cervical spine, chest, lumbar spine, pelvis, and limbs of a whole-body phantom (Whole Body Phantom PBU-50, Kyoto Kagaku, Japan) using a digital radiography (DR) system (Wandong HF50, Beijing, China). Each image was processed by histogram adjustment, filtering, rigid translation, and scaling [28] and then fitted into a whole-body radiographic image. We also performed contrast-limited adaptive histogram equalization (CLAHE) to handle variations in X-ray exposure.

To recognize the radiographic position once the input image has been matched, we added anatomical definitions to the phantom template. The image matrix is 2000 × 800 pixels, and the corresponding body height is 165 cm, with no gender specified. With this information, a diagnostician can define different organs by pixel ranges, such as the head from [260, 1] to [540, 285] and the lungs from [250, 130] to [560, 365], as shown in Figure 1. For the frontal image, there are seven radiographic positions and six radiographic target organs. The phantom template thus defines the target regions for subsequent matching based on the automatically identified radiographic posture.
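These anatomical definitions can be encoded as a simple lookup table. The sketch below assumes a dictionary layout and a (row, column) pixel convention, neither of which is specified in the paper; only the head and lung ranges are taken from the text.

```python
# Assumed encoding of the anatomical definitions on the 2000 x 800
# phantom template. Box corners are ((min, min), (max, max)) in
# template pixels; the axis convention is an assumption.
PHANTOM_REGIONS = {
    "head": ((260, 1), (540, 285)),
    "lung": ((250, 130), (560, 365)),
}

def organs_in_box(box):
    """Return the defined organs whose boxes intersect a matched region."""
    (r0, c0), (r1, c1) = box
    return [
        organ
        for organ, ((a0, b0), (a1, b1)) in PHANTOM_REGIONS.items()
        if r0 <= a1 and a0 <= r1 and c0 <= b1 and b0 <= c1
    ]
```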

2.3. Classification Based on Image Frequency. Radiographic images have distinctive frequency and amplitude characteristics, which are position dependent. These characteristics of the frequency curve can be used to classify the type of image (i.e., the radiographic position) and to extract the texture of the organ.

2.4. The Characteristics of X-Ray Image Frequency. We use the fast Fourier transform (FFT) of the organ images to obtain the frequency spectrum as follows:

$F(u, v) = \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} f(x, y)\, e^{-j2\pi(ux/M + vy/N)}$, (4)

where M and N are the image resolution and u and v are coordinates in the frequency domain. From the frequency image and the 2D curve, we find that the effective anatomical contours are concentrated in the lowest 2% of the frequency curve. In Figure 2, the frequency curve at each position is the average of 10 images of the same radiographic position in the same coordinate system, and the curve features differ markedly among positions. Some differences, such as those between the lungs and the limbs, are not reflected in the frequency curves themselves; we therefore also use the areas under the curves (AUCs), which differ clearly between the lungs and the limbs. Combining the frequency curves with the AUCs makes the positional differences evident. The radiographic positions are the head, lungs, lumbar spine, pelvis (abdomen), joint (knee), and limbs.
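One plausible implementation of this step is sketched below: the spectrum of (4) is reduced to a 1D amplitude curve over the lowest 2% of radial frequencies. The radial averaging is an assumption, since the paper does not state exactly how the 2D spectrum is collapsed into a curve.

```python
import numpy as np

def frequency_curve(image: np.ndarray, fraction: float = 0.02) -> np.ndarray:
    """Radially averaged FFT amplitude over the lowest `fraction` of frequencies."""
    spectrum = np.fft.fftshift(np.fft.fft2(image))   # equation (4)
    amplitude = np.abs(spectrum)
    rows, cols = amplitude.shape
    v, u = np.indices((rows, cols))
    radius = np.hypot(v - rows / 2, u - cols / 2)    # radial frequency
    mask = radius <= fraction * radius.max()         # keep the lowest 2%
    bins = radius[mask].astype(int)
    sums = np.bincount(bins, weights=amplitude[mask])
    counts = np.maximum(np.bincount(bins), 1)
    return sums / counts                             # 1D amplitude curve
```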

In X-ray images, each organ or tissue has a characteristic frequency response, even across different samples and different radiographic positions. For example, chest imaging with appropriate exposure parameters shows lung texture details, and the lung signal is captured within certain frequency bands. Figure 3 shows the average frequency curve of 10 lung X-ray images; there is a peak in the low-frequency range that corresponds to lung texture detail (extracted using a Butterworth filter). For comparison, a similar peak in the averaged knee curve corresponds to bone trabeculae, as plotted in Figure 4.
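The band extraction can be sketched as a frequency-domain Butterworth band-pass filter; the filter order and the cutoff values below are illustrative assumptions, as the paper does not report them.

```python
import numpy as np

def butterworth_bandpass(image: np.ndarray, low: float, high: float, order: int = 2) -> np.ndarray:
    """Keep the radial frequency band [low, high] with Butterworth roll-off."""
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    rows, cols = image.shape
    v, u = np.indices((rows, cols))
    d = np.hypot(v - rows / 2, u - cols / 2) + 1e-9  # distance from DC
    lowpass = 1.0 / (1.0 + (d / high) ** (2 * order))
    highpass = 1.0 / (1.0 + (low / d) ** (2 * order))
    filtered = spectrum * lowpass * highpass
    return np.real(np.fft.ifft2(np.fft.ifftshift(filtered)))
```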

2.5. Classification Based on Image Frequency. The frequency curves of the six radiographic positions were used as a standard library for comparison with arbitrary input images, and the similarity between an input image and the library was determined by the mean variance of the frequency-curve vectors. For an input image f(w), the corresponding amplitude vector is [A1, A2, ..., An]; the six library curves F(w) have amplitudes [B1, B2, ..., Bn]. The mean-variance similarity between the input image and a reference organ image is

$S = \frac{1}{n} \sum_{i=1}^{n} \left(A_i - B_i\right)^2$. (5)

The cosine of the angle θ between the two curves can be described as follows:

$\cos\theta = \frac{\sum_{i=1}^{n} A_i B_i}{\sqrt{\sum_{i=1}^{n} A_i^2}\,\sqrt{\sum_{i=1}^{n} B_i^2}}$. (6)

The smaller the mean-variance and the closer the cosine value is to 1 (indicating an angle closer to zero), the greater the similarity. Matching against the 6 curves yields 6 mean-variance values, which are bubble-sorted to find the two smallest. If the absolute difference between these top two values is less than 0.02, the cosine similarities between the input curve and the two candidate organ curves are compared, and the organ with the closer curve is taken as the match for the source image. The reciprocals of the mean-variances between the six organs and the standard frequency curves are plotted as histograms in Figure 5; a higher reciprocal of the mean-variance signifies greater similarity.
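A compact sketch of this decision rule, assuming the reconstructed forms of equations (5) and (6) and the 0.02 tiebreak threshold from the text, is:

```python
import numpy as np

def classify(curve: np.ndarray, library: dict) -> str:
    """library maps position name -> reference amplitude vector of equal length."""
    mv = {name: np.mean((curve - ref) ** 2) for name, ref in library.items()}  # equation (5)
    ranked = sorted(mv, key=mv.get)          # smallest mean-variance first
    best, second = ranked[0], ranked[1]
    if abs(mv[best] - mv[second]) < 0.02:    # ambiguous: fall back to cosine, equation (6)
        def cosine(ref):
            return np.dot(curve, ref) / (np.linalg.norm(curve) * np.linalg.norm(ref))
        return best if cosine(library[best]) >= cosine(library[second]) else second
    return best
```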

2.6. Image Matching Based on Matrix Multiplication and the Correlation Coefficient. After the frequency-based vector calculations, we determine which standard organ curve the input image most resembles, according to the shapes of the curves and their mean variances. The input image is then matched against the whole-body phantom mask so that the organ field is defined. This step uses matrix multiplication and the correlation coefficient.

In (7), (8), and (9), the preprocessed input image is $F_{\mathrm{input}}(x, y)$, the lowest-2% part of its frequency curve is f(w), and the image has already been classified based on image frequency. The phantom image is denoted p(m, n); here, p(m, n) represents phantom image patches whose frequency is not within the lowest 2% range, scanned over the phantom from top to bottom. By finding the maximum values of M and R, the region T, the intersection of M and R shown in the phantom image, can be found; this is the recognized target region.

$M(m, n) = \sum_{x}\sum_{y} F_{\mathrm{input}}(x, y)\, p(m + x, n + y)$, (7)

$R(m, n) = \frac{\sum_{x}\sum_{y}\left[F_{\mathrm{input}}(x, y) - \bar{F}\right]\left[p(m + x, n + y) - \bar{p}\right]}{\sqrt{\sum_{x}\sum_{y}\left[F_{\mathrm{input}}(x, y) - \bar{F}\right]^2 \sum_{x}\sum_{y}\left[p(m + x, n + y) - \bar{p}\right]^2}}$, (8)

$T = M \cap R$, (9)

where $\bar{F}$ and $\bar{p}$ denote the mean intensities of the input image and the phantom patch, respectively.

The maximum values of M and R are found by matrix multiplication and the correlation coefficient, respectively, between the input image and the phantom image. T is the region corresponding to the matched phantom area and is indicated by a bright box. To improve the processing speed, the matrices of the input and phantom images are downscaled (maintaining image proportions).
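A direct, unoptimized sketch of this matching step, under the reconstructed equations (7)-(9), slides the (already downscaled) input image over the phantom and intersects the two best-scoring windows. In practice, the product sum in (7) can be computed for all offsets at once with FFT-based cross-correlation, which is considerably faster.

```python
import numpy as np

def match_region(inp: np.ndarray, phantom: np.ndarray):
    """Return the corner points of region T, the overlap of the best M and R windows."""
    ih, iw = inp.shape
    ph, pw = phantom.shape
    best_m = best_r = -np.inf
    pos_m = pos_r = (0, 0)
    for m in range(ph - ih + 1):
        for n in range(pw - iw + 1):
            patch = phantom[m:m + ih, n:n + iw]
            score_m = np.sum(inp * patch)                             # equation (7)
            score_r = np.corrcoef(inp.ravel(), patch.ravel())[0, 1]   # equation (8)
            if score_m > best_m:
                best_m, pos_m = score_m, (m, n)
            if score_r > best_r:
                best_r, pos_r = score_r, (m, n)
    # equation (9): T is the overlap of the two windows (the box can be
    # empty if the M- and R-maximizing windows do not intersect)
    r0, c0 = max(pos_m[0], pos_r[0]), max(pos_m[1], pos_r[1])
    r1, c1 = min(pos_m[0], pos_r[0]) + ih, min(pos_m[1], pos_r[1]) + iw
    return (r0, c0), (r1, c1)
```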

2.7. Implementation of the Overall Algorithm. Each input image is preprocessed, its 2D Fourier transform is computed, and the lowest-2% frequency curve is extracted. The curve is compared with the 6 predefined curve types, and the input image type (radiographic position) is determined from the curve similarity and the mean variance. Next, the image is matched within the phantom image by finding the maximum values of the matrix similarity measures. The final matched region, which corresponds to a priori knowledge of the patient's anatomical field, is shown on the phantom as the result. The workflow is shown in Figure 6.
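Tying the steps together, the workflow of Figure 6 reduces to a few calls; all names below refer to the illustrative sketches above, not to code released with the paper.

```python
def recognize(image, library, phantom):
    """Full pipeline: preprocess, classify the position, locate the body region."""
    pre = preprocess(image)              # stretching + median filter, (1)-(3)
    curve = frequency_curve(pre)         # lowest-2% frequency curve, (4)
    position = classify(curve, library)  # radiographic position, (5)-(6)
    # the input is assumed downscaled so that it fits inside the phantom
    region = match_region(pre, phantom)  # box on the phantom template, (7)-(9)
    return position, region
```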

3. Results and Discussion

A total of 217 clinical radiological images were randomly collected for this study from the Radiology Department of Taishan Medical University. The radiographic position and body region in all images were automatically recognized by our method, and the results were verified by the clinical physicians of the Radiology Department. For comparison, the input images were also processed with dot-matrix matching, correlation matching, and histogram retrieval algorithms. The accuracy rates and processing times are shown in Table 1. The difference in accuracy between the proposed method and each of the other methods is statistically significant (p < 0.005).

The results show that the proposed algorithm has the highest accuracy and robustness across all images (6 position types); the average organ recognition accuracy was 93.78%, and the average judgment time was 0.2903 s.

The proposed method outperforms the other benchmark methods; moreover, it obtains a description of the radiographic position from the anatomical knowledge in the phantom image while reducing the processing time and maintaining recognition accuracy. Furthermore, compared with effective approaches such as the large margin local estimate (LMLE) [15] and deep learning networks [24], the LMLE method achieved less than 90% accuracy with 10% of the data as the training set. Although the convolutional neural network in [24] achieved more than 90% accuracy on most image data, that approach requires more than 7000 image slices and a well-equipped computer (i7, 3.4 GHz, 16 GB RAM) for network training, whereas our method needs only simple matrix multiplication and correlation coefficients, which can be computed on an ordinary multicore computer in less time while retaining more than 90% accuracy.

Sample results of the radiographic position recognition are shown in Figure 7 as matched rectangular areas with annotated text. This integrated method accurately marks the photographed site on the phantom image, and the photography range and sites can be read from the prior anatomical definitions of the phantom pixel areas. For different images of the same position type, the matching reveals regional differences on the whole-body phantom image; for example, in Figure 7, three different cervical spine images are identified and shown with different coverage areas.

The human body was represented by a phantom template X-ray image. The phantom was developed to mimic the X-ray attenuation of the human body, so its radiographs closely approximate those of a real human, even though the model structure is simplified to the macroscopic shapes of the organs. For example, the plastic lung phantom simulates the lung contour and segments but does not include the pulmonary veins or nodules. In the phantom X-ray image, the macroscopic profile of the lung is nevertheless authentic for this imaging modality, and the majority of conventional radiography sites are matched accurately with this phantom image approach. For detecting the contours of the lungs and the heart, however, frequency or gray information alone is not sufficient.

Histograms and gray intensity are widely used for image similarity detection. Histogram matching has the advantages of being fast and not being limited by image size; however, it cannot determine position and scope information. The method presented in this paper obtains robust frequency characteristic curves from the X-ray information, and the templates of different anatomical features have distinct frequencies and amplitudes. Comparing an input image with a template requires only the lowest 2% of the effective frequency characteristics.

We extract a 1D curve from each 2D image, which accelerates and simplifies the image-matching algorithm. For 5.5 GB of image data comprising 217 images, the total processing time was 414.6 s.

Although our method performed well on all of the test images, the algorithm has some limitations. The major obstacle is its poor result for nonstandard radiography, in which case the matched result falls in the wrong position on the phantom image. To address such cases, in a subsequent study we plan to develop more standard phantom models, such as those for babies, animals, and separate male and female bodies, to obtain more appropriate phantom images.

4. Conclusions

In this paper, we proposed a method for the automatic recognition of the radiographic position and body field based on frequency curve classification and the gray information of digital radiographic images. Compared with image analysis methods based on complex pattern recognition algorithms, the proposed method extracts more information about the patient's position. The frequency classification in this work has good sensitivity and robustness, reducing the errors caused by variations in imaging conditions (image exposure, detector sensitivity). The method is a fast 1D classification of 2D images and can be used for automatic feature extraction and applied to big-data calculations.

https://doi.org/10.1155/2017/2727686

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

The authors gratefully acknowledge funding support from the China National Key Research and Development Program (2016YFC0103400) and the Natural Science Foundation of Taishan Medical University (no. GCC003). The authors also thank Weizhao Lu for his help with the English writing of the paper.

References

[1] K. H. Hwang, H. Lee, and D. Choi, "Medical image retrieval: past and present," Healthcare Informatics Research, vol. 18, no. 1, pp. 3-9, 2012.

[2] K. E. van de Sande, T. Gevers, and C. G. Snoek, "Evaluating color descriptors for object and scene recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 9, pp. 1582-1596, 2010.

[3] J. Kajimura, R. Ito, N. R. Manley, and L. P. Hale, "Optimization of single- and dual-color immunofluorescence protocols for formalin-fixed, paraffin-embedded archival tissues," Journal of Histochemistry & Cytochemistry, vol. 64, no. 2, pp. 112-124, 2016.

[4] X. Zhang, X. Gao, B. J. Liu, K. Ma, W. Yan, and H. Fujita, "Effective staging of fibrosis by the selected texture features of liver: which one is better, CT or MR imaging?," Computerized Medical Imaging and Graphics, vol. 46, part 2, pp. 227-236, 2015.

[5] A. K. Jain and A. Vailaya, "Image retrieval using color and shape," Pattern Recognition, vol. 29, no. 8, pp. 1233-1244, 1996.

[6] Y. Wang, G. Jiang, M. Yu, S. Fan, and J. Deng, "A study of stereo microscope measurements based on interpolated feature matching," Biomedical Materials and Engineering, vol. 26, Supplement 1, pp. S1473-S1481, 2015.

[7] G. Pass and R. Zabih, "Histogram refinement for content-based image retrieval," in IEEE Workshop on Applications of Computer Vision. IEEE Computer Society, p. 96, 1996.

[8] D. Shen, "Image registration by local histogram matching," Pattern Recognition, vol. 40, no. 4, pp. 1161-1172, 2007.

[9] M. Kokare, P. K. Biswas, and B. N. Chatterji, "Texture image retrieval using new rotated complex wavelet filters," IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), vol. 35, no. 6, pp. 1168-1178, 2005.

[10] D. Zhang and G. Lu, "Shape-based image retrieval using generic Fourier descriptor," Signal Processing Image Communication, vol. 17, no. 10, pp. 825-848, 2002.

[11] J. Sun, S. Zhu, and X. Wu, "Image retrieval based on an improved CS-LBP descriptor," in 2010 2nd IEEE International Conference on Information Management and Engineering. IEEE, pp. 115-117, 2010.

[12] M. X. Xin, Z. X. Shi, G. B. Cui, and H. B. Lu, "Algorithm improvement of Tamura texture features in content-based medical image retrieval," Chinese Medical Equipment Journal, 2010.

[13] X. Jiao, D. R. Einstein, V. Dyedov, and J. P. Carson, "Automatic identification and truncation of boundary outlets in complex imaging-derived biomedical geometries," Medical & Biological Engineering & Computing, vol. 47, no. 9, pp. 989-999, 2009.

[14] F. Paparo, A. Piccardo, L. Bacigalupo et al., "Multimodality fusion imaging in abdominal and pelvic malignancies: current applications and future perspectives," Abdominal Imaging, vol. 40, no. 7, pp. 2723-2737, 2015.

[15] Y. Song, W. Cai, H. Huang, Y. Zhou, D. D. Feng, and M. Chen, "Large margin aggregation of local estimates for medical image classification," Medical Image Computing and Computer-Assisted Intervention, vol. 17, part 2, pp. 196-203, 2014.

[16] S. F. D. Silva, M. X. Ribeiro, J. D. E. S. Batista Neto, C. Traina Jr., and A. J. M. Traina, "Improving the ranking quality of medical image retrieval using a genetic feature selection method," Decision Support Systems, vol. 51, no. 4, pp. 810-820, 2011.

[17] S. A. Hussain, "A novel feature selection mechanism for medical image retrieval system," International Journal of Advances in Engineering & Technology, vol. 6, pp. 1283-1298, 2013.

[18] L. Rourke, V. Willenbockel, L. Cruickshank, and J. Tanaka, "The neural correlates of medical expertise," Journal of Vision, vol. 15, no. 12, p. 1131, 2015.

[19] Y. Shen and W. Zhu, "Medical image processing using a machine vision-based approach," International Journal of Signal Processing Image Processing & Pattern Recognition, vol. 6, 2013.

[20] C. Y. Chang, "A contextual-based Hopfield neural network for medical image edge detection," in 2004 IEEE International Conference on Multimedia and Expo (ICME) (IEEE Cat. No. 04TH8763), vol. 2, pp. 1011-1014, 2004.

[21] J. C. Fu, C. C. Chen, J. W. Chai, S. T. Wong, and I. C. Li, "Image segmentation by EM-based adaptive pulse coupled neural networks in brain magnetic resonance imaging," Computerized Medical Imaging and Graphics, vol. 34, no. 4, p. 308, 2010.

[22] T. Kondo and J. Ueno, "Medical image diagnosis of lung cancer by a revised GMDH-type neural network using various kinds of neuron," Artificial Life and Robotics, vol. 16, no. 3, pp. 301-306, 2011.

[23] A. Esteva, B. Kuprel, R. A. Novoa et al., "Dermatologist-level classification of skin cancer with deep neural networks," Nature, vol. 542, no. 7639, p. 115, 2017.

[24] Z. Yan, Y. Zhan, Z. Peng et al., "Body part recognition using multi-stage deep learning," Information Processing in Medical Imaging, vol. 24, pp. 449-461, 2015.

[25] H. C. Shin, H. R. Roth, M. Gao et al., "Deep convolutional neural networks for computer-aided detection: CNN architectures, dataset characteristics and transfer learning," IEEE Transactions on Medical Imaging, vol. 35, no. 5, p. 1285, 2016.

[26] J. Cho, K. Lee, E. Shin et al., "How much data is needed to train a medical image deep learning system to achieve necessary high accuracy," Computer Science, 2016.

[27] L. A. DeWerd and M. Kissick, The Phantoms of Medical and Health Physics, Springer, New York, NY, USA, 2014.

[28] http://www.mathworks.com/matlabcentral/fileexchange/authors/24896.

Ning-ning Ren, (1,2) An-ran Ma, (1,2) Li-bo Han, (1,2) Yong Sun, (1) Yan Shao, (1,3) Jian-feng Qiu (1)

(1) College of Radiology, Taishan Medical University, Taian, Shandong, China

(2) College of Information and Engineering, Taishan Medical University, Taian, Shandong, China

(3) Department of Radiology, Affiliated Hospital of Taishan Medical University, Taian, Shandong, China

Correspondence should be addressed to Jian-feng Qiu; jfqiu100@gmail.com

Received 15 February 2017; Accepted 17 July 2017; Published 17 September 2017

Academic Editor: Shujun Fu

Caption: FIGURE 1: The whole-body phantom's X-ray mask and the examples of partial anatomical definition.

Caption: FIGURE 2: Frequency curves and the AUCs for various anatomical regions.

Caption: FIGURE 3: (a) From top to bottom: the chest X-ray image, the image frequency curve, and the chest X-ray image with inversed gray scale. (b) From top to bottom: the chest X-ray image after Butterworth filtering, the image frequency curve, and the chest X-ray image with inversed gray scale. (c) From top to bottom: the lung texture image reconstructed from the filtered frequency information and the frequency curve.

Caption: FIGURE 4: (a) From top to bottom: the knee X-ray image and the knee image frequency curve. (b) From top to bottom: the knee X-ray image by Butterworth filtering and the image frequency curve by filtering. (c) From top to bottom: the trabeculae texture image reconstructed by the filtered frequency information and the frequency curve.

Caption: FIGURE 5: The reciprocal of mean-variance between 6 organs and the standard frequency curve.

Caption: FIGURE 6: Workflow.

Caption: FIGURE 7: The automatic recognition results for three cervical spine images.
TABLE 1: The accuracy rates of four different radiographic position
matching methods and the average processing time of the proposed algorithm.

Radiographic   Dot matrix      Correlation     Histogram       Proposed        Average time of
position       matching (%)    matching (%)    retrieval (%)   algorithm (%)   proposed algorithm (s)

Head               83.3            100.0           50.0            100.0           0.2808
Lungs              47.4             71.9           45.6            100.0           0.2918
Lumbar             45.6             66.7           40.5            100.0           0.2934
Pelvis             35.3             41.2           41.2             66.7           0.2919
Joint              90.9            100.0           27.3            100.0           0.2903
Limbs              75.8             56.8           56.6             96.0           0.2936
Average            63.1             72.7           43.5             93.7           0.2903