# A Novel Segmentation Approach Combining Region- and Edge-Based Information for Ultrasound Images

1. Introduction

Ultrasound (US) imaging is one of the most popular medical imaging modalities, with numerous diagnostic applications, owing to the following merits: no radiation, faster imaging, higher sensitivity and accuracy, and lower cost compared with other imaging modalities such as computed tomography (CT) and magnetic resonance imaging (MRI) [1-6]. However, sonography is operator-dependent, and reading US images requires well-trained and experienced radiologists. To reduce the interobserver variation among clinicians and help them reach more reliable and accurate diagnostic conclusions, computer-aided diagnosis (CAD) has been proposed [3, 7, 8]. Generally, a US-image-based CAD system involves four steps: preprocessing, segmentation, feature extraction and selection, and classification [9, 10]. Among these, image segmentation, which separates the lesion region from the background, is the key to the subsequent processing and determines the quality of the final analysis. In clinical practice, segmentation has generally been performed by manual tracing, which is laborious, time-consuming, and skill- and experience-dependent. Consequently, reliable and automatic segmentation methods are preferred for extracting the region of interest (ROI) from the US image, to improve the automation and robustness of the CAD system. However, accurate and automatic US image segmentation remains a challenging task [11-13] due to various US artifacts, including strong speckle noise [14], low signal-to-noise ratio, and intensity inhomogeneity [15].

In the last decade, a large number of segmentation methods have been developed for US images, for example, thresholding-based methods [16-18], clustering-based methods [19-23], watershed-based methods [24-27], graph-based methods [28-35], and active contour models [36-42]. Thresholding is one of the most frequently used segmentation techniques for monochrome images. Yap et al. [18] adopted thresholding segmentation to separate the lesion region from the background before detecting the initial boundary via edge detection. Clustering is a classification technique and has been successfully applied to image segmentation based on the similarity between image regions or pixels. Isa et al. [19] used moving k-means clustering to automatically select the seed and proposed a modified seed-based region-growing algorithm to detect the edge. Shan et al. [20] used a novel neutrosophic clustering approach to detect the lesion boundary. Moon et al. [22] used fuzzy C-means (FCM) clustering to extract the tumor candidates in their CAD system. The watershed transformation, which is frequently used in the segmentation of grayscale images, considers the gradient magnitude of an image as a topographic surface. Chen et al. [24] employed two-pass watershed transformations to generate the cells and proposed a region-based approach called the cell-competition algorithm to simultaneously segment multiple objects in a sonogram. L. Zhang and M. Zhang [26] used an extended fuzzy watershed method to segment US images fully automatically; their experiments showed that the method achieves good results on blurry US images.

In the last few years, graph-based segmentation has become a research hotspot due to its simple structure and solid theoretical foundation. In graph-based segmentation, the image is modeled as a weighted, undirected graph. Zhang et al. [28] applied a discriminative graph-cut approach to segmenting tumors after discriminating between tumors and the background via a trained classifier. In 2014, Zhou et al. [29] proposed a novel US image segmentation method based on mean shift and graph cuts (MSGC). It uses a mean shift filter to improve homogeneity and applies a graph-cut method, whose energy function combines region- and edge-based information, to segment US images. The results showed that the method is rapid and efficient. Huang et al. [30] designed a novel comparison criterion for pairwise subregions that takes local statistics into account to make their method more robust to noise, and hence it was named the robust graph-based (RGB) segmentation method. The experimental results showed that accurate segmentation results can be obtained by this method. However, two significant parameters determining the segmentation result must be set empirically, and for different images they need to be tuned by repeated trials to obtain good segmentation results. In 2013, Huang et al. [31] proposed an improved RGB method that uses the particle swarm optimization (PSO) algorithm to optimize the two significant parameters automatically. The between-class variance, which denotes the difference between the reference region and its adjacent regions, was introduced as the objective function, and the method was named parameter-automatically optimized robust graph-based (PAORGB) segmentation.

The active contour model (ACM), widely known as the snake, is another very popular segmentation method for US images and has been used extensively as an edge-based segmentation method. This approach minimizes an energy associated with the initial contour, defined as the sum of internal and external energies. During the deformation process, the driving force is calculated from these two energies: the internal energy, derived from the contour model, controls the shape and regularity of the contour, while the external energy, derived from image features, attracts the contour to the desired object. Chang et al. [36] used a 3D snake technique to obtain the tumor contour before and after malignant tumor excision. Jumaat et al. [37] applied the balloon snake to segment masses in US images taken from a Malaysian population. To overcome the curvature and topology problems of the ACM, the level set method has been employed to improve US image segmentation. Sarti et al. [38] used a level set formulation to search for the minimum of the ACM energy, and the segmentation results showed that their model is efficient and flexible. Gao et al. [40] combined an edge stopping term and an improved gradient vector flow snake in the level set framework, to cope robustly with noise and to accurately extract low-contrast and/or concave ultrasonic tumor boundaries. Liu et al. [39] proposed a novel probability density difference-based active contour method for ultrasound image segmentation. In 2010, Li et al. [43] proposed a new level set evolution model, Distance Regularized Level Set Evolution (DRLSE), which adds a distance regularization term to traditional level set evolution to eliminate the need for reinitialization during the evolution process and improve efficiency. Some researchers have combined texture information with other methods for US image segmentation [44-47]. In 2016, Lang et al. [44] used a multiscale texture identifier integrated into a level set framework to capture the spiculated boundary and showed improved segmentation results.

However, most of the above methods are purely region-based or purely edge-based. Region-based methods use homogeneity statistics and low-level image features such as intensity, texture, and histogram to assign pixels to objects: two pixels are assigned to the same object if they are similar in value and connected to each other in some sense. The problem with applying these approaches to US images is that, without considering any shape information, they may classify pixels within the acoustic shadow as belonging to the tumor, and posterior acoustic shadowing is a common artifact in US images [48, 49]. Edge-based methods (ACMs) handle only the ROI rather than the entire image. Although they can obtain a precise contour of the desired object, they are sensitive to noise and rely heavily on a suitable initial contour, which is difficult to generate properly; moreover, the deformation procedure is very time-consuming. Therefore, segmentation approaches that integrate region-based and edge-based techniques have been proposed to obtain accurate segmentation results for US images [50-55]. Chang et al. [50] introduced the concepts of the 3D stick, the 3D morphologic process, and the 3D ACM. The 3D stick is used to reduce the speckle noise and enhance the edge information in 3D US images; the 3D morphologic process is then used to obtain the initial contour of the tumor for the 3D ACM. Huang and Chen [51, 52] utilized the watershed transform and the ACM to overcome the natural properties of US images (i.e., speckle, noise, and tissue-related textures) and segment tumors precisely. In their methods, the watershed transform serves as the automatic initial contouring procedure for the ACM, and the ACM then automatically determines the exquisite contour of the tumor. Wang et al. [55] presented a multiscale framework for US image segmentation based on speckle reducing anisotropic diffusion and the geodesic active contour. In general, the region-based technique is used to generate the initial contour for the edge-based technique. The experimental results of these approaches indicate that accurate segmentation results can be obtained by combining region-based and edge-based information of the US image.

In this paper, we propose a novel segmentation scheme for US images based on the RGB segmentation method [30] and the particle swarm optimization (PSO) algorithm [56, 57]. In this scheme, the PSO automatically finds optimal settings for the two significant parameters that determine the segmentation result. In contrast to PAORGB, we formulate the optimization as a multiobjective problem in order to combine region-based and edge-based information: the objectives are maximizing the difference between target and background, improving the uniformity within the target region, and considering the edge gradient. In other words, both the uniformity of the region and the edge information are taken into account during optimization. First, a rectangle is manually selected to determine the ROI on the original image. Because of the low contrast and speckle noise of US images, the ROI image is then filtered by a bilateral filter and contrast-enhanced by histogram equalization. Next, pyramid mean shift filtering is executed on the enhanced image to improve homogeneity. A novel objective function consisting of three parts corresponding to region-based and edge-based information is designed for the PSO. Guided by the PSO, the RGB segmentation method is performed to segment the ROI image. Finally, the segmented image is processed by morphological opening and closing to refine the tumor contour.

This paper is organized as follows. Section 2 introduces the proposed method in detail. Next, the experimental results and comparisons among different methods are presented in Section 3. Finally, we provide some discussion and draw the conclusion in Section 4.

2. Methods

Our method is called the multiobjectively optimized robust graph-based (MOORGB) segmentation method; it utilizes the PSO algorithm to optimize the two key parameters of the RGB segmentation method. In MOORGB, a multiobjective optimization function combining region-based and edge-based information is designed for the PSO to optimize the RGB method. The flowchart of the proposed approach is shown in Figure 1. In the rest of this section, we introduce each step of the proposed approach in detail.

2.1. Preprocessing

2.1.1. Cropping the Tumor-Centered ROI. According to [11], a good segmentation method for clinical US images should take advantage of a priori knowledge to improve the segmentation result, given the relatively low image quality. In addition, it is hard to describe the segmentation result quantitatively without any a priori knowledge, which makes it difficult to design objective function(s). Therefore, we employ the a priori knowledge used in [31]: the operator is asked to roughly extract a relatively small rectangular ROI (in which the focus of interest is fully contained and located in the central part) from the US image. In this way, interference from unrelated regions is reduced as much as possible, making the segmentation easier and more efficient. This also provides useful a priori knowledge for the design of the objective function(s). Such an ROI is called a tumor-centered image (TCI) in this paper, and Figure 2 shows how a TCI is extracted from a US image.

2.1.2. Bilateral Filtering. Because of diverse interferences (e.g., attenuation, speckle, shadow, and signal dropout) in US images, speckle reduction is necessary to improve image quality. The bilateral filter [58], which has proven to be an efficient and effective method for speckle reduction, is adopted in MOORGB.
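The principle can be sketched as follows: each output pixel is a weighted mean of its neighbors, where the weight combines spatial closeness and intensity similarity, so edges survive while speckle is smoothed. This is a minimal, illustrative Python version (the paper's implementation is built on OpenCV); the image values, radius, and sigma parameters below are made-up examples.

```python
import math

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=25.0):
    """Naive bilateral filter for a 2-D grayscale image (list of lists).

    The spatial kernel g_s favors nearby pixels; the range kernel g_r
    gives near-zero weight across large intensity jumps, preserving edges.
    """
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            num, den = 0.0, 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        g_s = math.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2))
                        diff = img[ny][nx] - img[y][x]
                        g_r = math.exp(-(diff * diff) / (2 * sigma_r ** 2))
                        num += g_s * g_r * img[ny][nx]
                        den += g_s * g_r
            out[y][x] = num / den
    return out

# A noisy step edge: left half ~40, right half ~200.
img = [[40, 42, 38, 200, 202, 198] for _ in range(6)]
smoothed = bilateral_filter(img)
```

Intensities within each flat region are pulled together, while the 40-to-200 jump is preserved because the range kernel effectively ignores pixels on the other side of the edge.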

2.1.3. Histogram Equalization. To improve the contrast of US images, histogram equalization is conducted to enhance the filtered TCI. Histogram equalization maps one distribution (the given histogram of intensity values in the filtered TCI) to another distribution (a wider and uniform distribution of intensity values). The classical histogram equalization method [59] is used in the MOORGB.
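A minimal sketch of the classical equalization mapping, assuming 8-bit intensities and a toy low-contrast image (the actual method operates on the filtered TCI):

```python
def equalize_histogram(img, levels=256):
    """Classical histogram equalization for a 2-D grayscale image.

    Maps intensities through the normalized cumulative histogram (CDF),
    spreading the occupied grey levels over the full [0, levels-1] range.
    """
    flat = [p for row in img for p in row]
    n = len(flat)
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    # Cumulative distribution function.
    cdf = [0] * levels
    running = 0
    for i in range(levels):
        running += hist[i]
        cdf[i] = running
    cdf_min = next(c for c in cdf if c > 0)
    # Standard equalization lookup table.
    lut = [round((cdf[i] - cdf_min) / (n - cdf_min) * (levels - 1))
           if n > cdf_min else 0 for i in range(levels)]
    return [[lut[p] for p in row] for row in img]

# Low-contrast image: intensities squeezed into [100, 103].
img = [[100, 101], [102, 103]]
enhanced = equalize_histogram(img)
```

The four intensities, originally spanning only 4 grey levels, are spread across the full dynamic range.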

2.1.4. Mean Shift Filtering. After contrast enhancement, we improve the homogeneity by performing mean shift filtering. Mean shift filtering is based on mean shift clustering over grayscale and can well improve the homogeneity of US images and suppress the speckle noise and tissue-related textures [60]. Figure 3 shows the preprocessing results of the image.
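To illustrate the idea, here is a simplified one-dimensional mean shift over grey levels only (the actual pyramid mean shift filter operates jointly over the spatial and range domains); the pixel values and bandwidth are made-up examples:

```python
def mean_shift_intensity(values, bandwidth=20.0, iters=30):
    """1-D mean shift over grey levels: each value iteratively moves to the
    mean of all values within `bandwidth` of it, so intensities collapse
    onto local modes and within-region homogeneity improves."""
    modes = [float(v) for v in values]
    for _ in range(iters):
        new_modes = []
        for m in modes:
            window = [u for u in values if abs(u - m) <= bandwidth]
            new_modes.append(sum(window) / len(window))
        modes = new_modes
    return modes

# Two noisy intensity clusters around 50 and 200.
pixels = [48, 50, 52, 55, 45, 198, 200, 202, 205, 195]
shifted = mean_shift_intensity(pixels)
```

After convergence, every pixel in each cluster carries the cluster's mode value, which is exactly the homogenization effect exploited before segmentation.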

2.2. RGB Segmentation Method. Given an image, initially regarded as a graph, the RGB method [30] merges spatially neighboring pixels with similar intensities into minimal spanning trees (MSTs), each of which corresponds to a subgraph (i.e., a subregion of the image). The image is thereby divided into several subregions (i.e., a forest of MSTs). The step that merges pixels into an MST is clearly the key, as it determines the final segmentation results. A novel pairwise region comparison predicate was proposed in the RGB method to determine whether or not the boundary between two subgraphs should be eliminated. Given a graph $G = (V, E)$, the predicate $D(C_1, C_2)$, which compares the intersubgraph difference with the within-subgraph differences, is formulated as follows [30]:

$D(C_1, C_2) = \begin{cases} \text{true}, & \text{if } \mathrm{Dif}(C_1, C_2) > \mathrm{MInt}(C_1, C_2) \\ \text{false}, & \text{otherwise} \end{cases}$ (1)

$\mathrm{Dif}(C_1, C_2) = \left| \mu(C_1) - \mu(C_2) \right|$ (2)

$\mathrm{MInt}(C_1, C_2) = \min\left( \sigma(C_1) + \tau(C_1),\; \sigma(C_2) + \tau(C_2) \right)$ (3)

$\tau(C) = \frac{k}{|C|} \left( 1 + \frac{1}{\alpha \beta} \right), \quad \beta = \frac{\mu(C)}{\sigma(C)},$ (4)

where $\mathrm{Dif}(C_1, C_2)$ is the difference between two subgraphs $C_1, C_2 \subseteq V$; $\mathrm{MInt}(C_1, C_2)$ represents the smaller internal difference of $C_1$ and $C_2$; $\mu(C)$ denotes the average intensity of $C$; $\sigma(C)$ is the standard deviation of $C$; and $\tau(C)$ is a threshold function of $C$, while $\alpha$ and $k$ are positive parameters. When $k$ increases, $\tau$ increases as well and regions merge more easily. Conversely, when $\alpha$ increases, $\tau$ decreases and regions merge less easily.
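The predicate in (1)-(4) can be sketched as follows. This illustrative Python version computes the mean and standard deviation directly from lists of pixel intensities; the region contents and the values of $k$ and $\alpha$ are made-up examples.

```python
from statistics import mean, pstdev

def tau(region, k, alpha):
    """Threshold function tau(C) = (k/|C|) * (1 + 1/(alpha*beta)), beta = mu/sigma.
    For a perfectly uniform region sigma = 0, so beta is undefined; we fall
    back to k/|C| in that case (an added assumption for robustness)."""
    mu, sigma = mean(region), pstdev(region)
    if sigma == 0:
        return k / len(region)
    beta = mu / sigma
    return k / len(region) * (1 + 1 / (alpha * beta))

def should_stay_separate(c1, c2, k, alpha):
    """Pairwise predicate D(C1, C2): True means the boundary is kept."""
    dif = abs(mean(c1) - mean(c2))                      # eq. (2)
    mint = min(pstdev(c1) + tau(c1, k, alpha),          # eq. (3)
               pstdev(c2) + tau(c2, k, alpha))
    return dif > mint                                   # eq. (1)

bright = [200, 205, 198, 202] * 25   # 100-pixel bright region
dark = [40, 45, 38, 42] * 25         # 100-pixel dark region
similar = [201, 204, 199, 203] * 25  # statistically close to `bright`
```

With moderate parameters the bright/dark boundary is kept while two similar regions merge; raising $k$ inflates $\tau$ so that even very different regions merge, matching the sensitivity to $k$ and $\alpha$ described above.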

Based on the pairwise region comparison predicate, the general procedures of segmenting an image are as follows.

Step 1. Construct a graph $G = (V, E)$ for the US image to be segmented. In $G$, each pixel corresponds to a vertex and each edge connects two spatially neighboring vertices. The edge weight is defined as the absolute intensity difference between the two adjacent pixels. Initially, each vertex is regarded as a subgraph and all edges constituting the edge set $E$ are invalid.

Step 2. Sort the edges in $E$ in nondescending order of edge weight, and set $q = 1$.

Step 3. Pick the $q$th edge in the sorted $E$. If it is an invalid edge (connecting two different subgraphs) and the boundary between these two subgraphs can be eliminated according to the pairwise region comparison predicate expressed in (1)-(4), then merge the two subgraphs into a larger subgraph and mark this edge valid. Let $q = q + 1$.

Step 4. Repeat Step 3 until all edges in E are traversed.
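The four steps above can be sketched as follows. This is a simplified, illustrative Python version of the merging loop, using a union-find forest in place of explicit MSTs and the predicate of (1)-(4); the tiny test image and parameter values are made-up examples.

```python
from statistics import mean, pstdev

def segment_rgb_like(img, k=50.0, alpha=1.0):
    """Simplified sketch of the RGB segmentation loop (Steps 1-4).

    Builds a 4-connected graph over the pixels, sorts edges by absolute
    intensity difference, and merges the two components an edge joins
    whenever Dif <= MInt (i.e., the boundary cannot be kept).  Returns a
    label map of the resulting subregions."""
    h, w = len(img), len(img[0])
    idx = lambda y, x: y * w + x
    parent = list(range(h * w))
    members = {i: [img[i // w][i % w]] for i in range(h * w)}

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    def tau(region):
        mu, sigma = mean(region), pstdev(region)
        if sigma == 0:
            return k / len(region)
        return k / len(region) * (1 + sigma / (alpha * mu))  # 1/(alpha*beta)

    # Steps 1-2: build and sort the 4-connected edge list.
    edges = []
    for y in range(h):
        for x in range(w):
            if x + 1 < w:
                edges.append((abs(img[y][x] - img[y][x + 1]), idx(y, x), idx(y, x + 1)))
            if y + 1 < h:
                edges.append((abs(img[y][x] - img[y + 1][x]), idx(y, x), idx(y + 1, x)))
    edges.sort()

    # Steps 3-4: traverse edges in order, merging when the predicate allows.
    for _, a, b in edges:
        ra, rb = find(a), find(b)
        if ra == rb:
            continue
        ca, cb = members[ra], members[rb]
        dif = abs(mean(ca) - mean(cb))
        mint = min(pstdev(ca) + tau(ca), pstdev(cb) + tau(cb))
        if dif <= mint:                 # boundary eliminated: merge subgraphs
            parent[rb] = ra
            members[ra] = ca + cb
            del members[rb]

    return [[find(idx(y, x)) for x in range(w)] for y in range(h)]

# Tiny image: dark left half, bright right half.
img = [[30, 32, 200, 205],
       [31, 33, 198, 202],
       [29, 30, 201, 204]]
labels = segment_rgb_like(img, k=50.0, alpha=1.0)
```

On this toy image the low-weight edges inside each half are processed first and merge freely, while the high-weight edges across the step fail the predicate, leaving exactly two subregions.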

When all edges have been traversed, a forest consisting of a number of MSTs is obtained, and each MST corresponds to a subregion of the image. However, the selection of $\alpha$ and $k$ in (4) can significantly influence RGB's segmentation results [30]. As shown in Figure 4, inappropriate selections of $\alpha$ and $k$ can lead to under- or oversegmentation. In [30], these two significant parameters were selected empirically and usually assigned manually through repeated testing to achieve acceptable results. The values cannot be fixed for real clinical application because good selections of $\alpha$ and $k$ may be quite different for different images, owing to the diversity of US images.

Therefore, PAORGB was proposed to optimize these two parameters automatically for each US image [31]. However, it uses only region-based information and a single optimization goal (maximizing the difference between target and background). Although PAORGB can obtain good segmentation results for some US images, its performance is not sufficiently stable. We therefore propose MOORGB, which uses multiobjective optimization (maximizing the difference between target and background, improving the uniformity within the target region, and considering the edge gradient) to improve the segmentation performance. The method comprehensively considers both edge-based and region-based information.

2.3. PSO Optimization of Parameters. PSO algorithm is an evolutionary computation technique mimicking the behavior of flying birds and their means of information exchange [56, 57]. In PSO, each particle represents a potential solution, and the particle swarm is initialized with a population of random/uniform individuals in the search space. PSO searches the optimal solution by updating positions of particles in an evolutionary manner.

Suppose that there are $n_p$ solutions, each corresponding to a particle, and that the position (i.e., the solution) and velocity of the $i$th particle ($i = 1, \ldots, n_p$) are represented by two $m$-dimensional vectors ($m = 2$ in our study), $x_i = (x_{i1}, x_{i2}, \ldots, x_{im})$ and $v_i = (v_{i1}, v_{i2}, \ldots, v_{im})$, respectively. The position $x$ is a vector; in our method, $x = (k, \alpha)$. The velocity $v$ is the distance the position moves at each iteration, and $c_1$, $r_1$, $c_2$, $r_2$, and $w$ are scalars. Depending on the specific problem, one or more objective functions are used to evaluate the fitness of each particle, and a comparison criterion is then employed to identify superior particles. Let $p_i = (p_{i1}, p_{i2}, \ldots, p_{im})$ be the best position visited by the $i$th particle so far during the update process, and let $p_g = (p_{g1}, p_{g2}, \ldots, p_{gm})$ be the global best position found by the whole particle swarm. At each generation, after $p_i$ and $p_g$ are acquired through fitness evaluation and the comparison criterion, each particle updates its velocity and position according to the following equations [56]:

$v_i^{t+1} = w v_i^t + c_1 r_1 \left( p_i^t - x_i^t \right) + c_2 r_2 \left( p_g^t - x_i^t \right)$ (5)

$x_i^{t+1} = x_i^t + v_i^{t+1}$ (6)

$w^t = w_{\max} - \frac{w_{\max} - w_{\min}}{T_{\max}} \, t,$ (7)

where $t$ is the generation number, $T_{\max}$ is the maximum number of iterations, $w^t$ is the inertia weight at the $t$th iteration, $c_1$ and $c_2$ are positive parameters known as acceleration coefficients, determining the relative influence of the cognitive and social components, and $r_1$ and $r_2$ are independent random variables uniformly distributed in (0, 1). The inertia weight $w$ describes the influence of the historical velocity: a larger $w$ gives stronger global search ability, while a smaller $w$ gives stronger local search ability. At the beginning of the optimization process we set $w$ to a large value for better global exploration and then gradually decrease it to find optimal or nearly optimal solutions, thus reducing the number of iterations. Hence we let $w$ decrease linearly from 1 towards 0.2, as shown in (7), setting $w_{\max} = 1$, $w_{\min} = 0.2$, and $T_{\max} = 200$. In (5), $w v_i^t$ represents the influence of the previous velocity on the current one, $c_1 r_1 (p_i^t - x_i^t)$ represents the personal experience, and $c_2 r_2 (p_g^t - x_i^t)$ represents the collaborative effect of the particles, which pulls each particle towards the global best solution the whole swarm has found so far. As suggested in [57], we set $c_1 = 0.5$ and $c_2 = 0.5$ so that personal experience and the collaborative effect play equally important roles in the optimization, as shown in Figure 5.

To conclude, at each generation, the velocity and position of each particle are updated according to (5), and its position is updated by (6). At each time, any better position is stored for the next generation. Then, each particle adjusts its position based on its own "flying" experience and the experience of its companions, which means that if one particle arrives at a new promising position, all other particles then move closer to it. This process is repeated until a satisfactory solution is found or a predefined number of iterative generations is met.
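The update rules (5)-(7) can be sketched as follows. This is a minimal, illustrative PSO maximizer in Python: a toy two-dimensional fitness function with a known optimum stands in for the real segmentation objective, the swarm size, iteration count, and bounds are made-up examples, and the clamping of positions to the search box is an added assumption.

```python
import random

def pso_maximize(fitness, bounds, n_particles=30, t_max=100,
                 c1=0.5, c2=0.5, w_max=1.0, w_min=0.2, seed=0):
    """Minimal PSO following eqs. (5)-(7): linearly decaying inertia w,
    attraction to the personal best p_i and the global best p_g.
    Maximizes `fitness` over the box `bounds` = [(lo, hi), ...]."""
    rng = random.Random(seed)
    dim = len(bounds)
    x = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    p = [xi[:] for xi in x]                       # personal best positions
    p_fit = [fitness(xi) for xi in x]
    g = p[max(range(n_particles), key=lambda i: p_fit[i])][:]  # global best

    for t in range(t_max):
        w = w_max - (w_max - w_min) / t_max * t   # eq. (7)
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                v[i][d] = (w * v[i][d]
                           + c1 * r1 * (p[i][d] - x[i][d])
                           + c2 * r2 * (g[d] - x[i][d]))      # eq. (5)
                x[i][d] += v[i][d]                            # eq. (6)
                lo, hi = bounds[d]
                x[i][d] = min(max(x[i][d], lo), hi)           # stay in search space
            f = fitness(x[i])
            if f > p_fit[i]:
                p_fit[i], p[i] = f, x[i][:]
                if f > fitness(g):
                    g = x[i][:]
    return g

# Toy fitness with a known optimum at (3, 5); in MOORGB this role is
# played by the segmentation objective evaluated per particle.
best = pso_maximize(lambda z: -((z[0] - 3) ** 2 + (z[1] - 5) ** 2),
                    bounds=[(0, 10), (0, 10)])
```

The returned global best lands close to the known optimum, illustrating how the decaying inertia shifts the swarm from global exploration to local refinement.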

The general procedure is summarized as follows.

Step 1. Properly set the size of the particle swarm and randomly/uniformly initialize the particles within the search space. In this study, the swarm size is $n_p = 200$ and the particles are uniformly initialized. Following [30], $k$ varies from 100 to 4000 and $\alpha$ varies from 0.001 to 4.000, which together form the search space.

Step 2. Traverse all particles: in each traversal (i.e., at each generation), each particle is evaluated through the objective function, and $p_i$ and $p_g$ are acquired according to the comparison criterion.

Step 3. Update the velocity and position of each particle according to (5) and (6). As suggested in [57], we set $c_1 = 0.5$ and $c_2 = 0.5$, and let $w$ decrease linearly from 1 towards 0.2.

Step 4. Repeat Steps 2 and 3 until the particles converge to the predefined extent or the number of iterations reaches the predefined maximum. In this study, the predefined extent is that $p_g$ does not change for four iterations, and the maximum number of iterations is set to $N = 200$ empirically.

2.4. The Proposed Objective Function in the PSO. At each evaluation, we use RGB to segment the TCI according to the parameters (i.e., $\alpha$ and $k$) carried by one particle. Based on the a priori knowledge that the focus of interest is located in the central part of the TCI, the subregion containing the central pixel of the TCI is the candidate tumor region. This central subregion is defined as the reference region; it varies with the setting of $\alpha$ and $k$, and it matches the expected tumor region when $\alpha$ and $k$ are optimally set. Figure 6 gives an example of a reference region (the original image is shown in Figure 2).

In MOORGB, a novel objective function consisting of three parts corresponding to region-based and edge-based information is adopted. Based on the above a priori knowledge, these three parts, namely, the between-class variance, the within-class variance, and the average gradient, are defined as follows. Compared with PAORGB, we add two objective terms: the within-class variance and the average gradient. Relying on edge information or region information alone is not enough to optimize the parameters for segmentation, so we take both the uniformity of the region and the edge information as objectives in the optimization process.

2.4.1. Between-Class Variance. Inspired by Otsu's method [61], which uses the difference between subregions to quantitatively describe the segmentation result and select an optimal threshold, the between-class variance ($V_B$) is defined as follows:

$V_B = \sum_{i=1}^{k} P(C_i) \left( \mu(C_i) - \mu(C_{\mathrm{Ref}}) \right)^2,$ (8)

where $V_B$ is the weighted sum of squared differences in mean intensity between each adjacent subregion $C_i$ and the reference region $C_{\mathrm{Ref}}$, $k$ denotes the number of subregions adjacent to the reference region, $\mu(C)$ denotes the mean intensity of subregion $C$, and $P(C_i)$ denotes the proportion of the $i$th subregion in the whole TCI, expressed as

$P(C_i) = \frac{|C_i|}{|\mathrm{TCI}|},$ (9)

where $|C_i|$ is the number of pixels in the $i$th subregion and $|\mathrm{TCI}|$ is the number of pixels in the whole TCI.

By definition, $V_B$ measures the difference between the reference region and its adjacent regions. Since the reference region corresponds to the tumor region of interest in the US image, maximizing $V_B$ helps overcome oversegmentation. Note that $V_B$ is the only term adopted in PAORGB [31].
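As an illustration, the following Python sketch evaluates (8) on made-up region statistics; for simplicity it assumes the TCI consists of the reference region plus its adjacent subregions.

```python
def between_class_variance(regions, ref):
    """V_B from eq. (8): sum over subregions C_i adjacent to the reference
    region of P(C_i) * (mu(C_i) - mu(C_Ref))^2, where P(C_i) is the
    subregion's pixel share of the whole TCI.

    `regions` maps each adjacent subregion's (hypothetical) name to its
    pixel intensities; `ref` lists the reference region's intensities."""
    total = sum(len(r) for r in regions.values()) + len(ref)  # |TCI|
    mu_ref = sum(ref) / len(ref)
    v_b = 0.0
    for pixels in regions.values():
        mu = sum(pixels) / len(pixels)
        v_b += (len(pixels) / total) * (mu - mu_ref) ** 2
    return v_b

# Dark reference (tumor) region with two brighter neighbours.
ref = [40] * 50
neighbours = {"upper": [200] * 30, "lower": [180] * 20}
vb = between_class_variance(neighbours, ref)
```

A candidate segmentation whose neighbours contrast strongly with the reference region scores a much higher $V_B$ than one whose neighbours are nearly the same intensity, which is exactly why maximizing $V_B$ discourages oversegmentation.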

2.4.2. Within-Class Variance. The aim of image segmentation is to extract a homogeneous region, which is usually the target object, from the background [62]. Therefore, to account for the uniformity within the target region, we introduce a second term called the within-class variance ($V_W$), defined as follows:

$V_W = \arctan \left( \frac{1}{|C_{\mathrm{Ref}}|} \sum_{i=1}^{|C_{\mathrm{Ref}}|} \left( I_i - \mu(C_{\mathrm{Ref}}) \right)^2 \right) - P(C_{\mathrm{Ref}}), \quad P(C_{\mathrm{Ref}}) = \frac{|C_{\mathrm{Ref}}|}{|\mathrm{TCI}|},$ (10)

where $|C_{\mathrm{Ref}}|$ is the number of pixels in the reference region, $I_i$ denotes the intensity of the $i$th pixel, $\mu(C_{\mathrm{Ref}})$ denotes the mean intensity of the reference region, and $|\mathrm{TCI}|$ is the number of pixels in the whole TCI. Since minimizing the pure within-class variance alone would lead to oversegmentation, we add the $P(C_{\mathrm{Ref}})$ term to suppress it. Because the value range of the variance term is much larger than that of $P(C_{\mathrm{Ref}})$, we apply the arctan operation to make them comparable. By definition, $V_W$ measures the inhomogeneity within the reference region, and the undersegmentation problem can be well overcome by minimizing $V_W$.
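The following Python sketch illustrates the behavior of $V_W$. It assumes the reconstruction used here, arctan of the reference-region variance with the $P(C_{\mathrm{Ref}})$ size term subtracted as a penalty against tiny regions; the region contents and TCI size are made-up examples.

```python
import math

def within_class_variance(ref, tci_size):
    """V_W as reconstructed from eq. (10): arctan of the intensity variance
    inside the reference region, minus P(C_Ref) = |C_Ref| / |TCI|.  The
    size term keeps the optimizer from shrinking the region to a trivially
    uniform speck, which would otherwise minimize the variance alone."""
    mu = sum(ref) / len(ref)
    var = sum((p - mu) ** 2 for p in ref) / len(ref)
    return math.atan(var) - len(ref) / tci_size

uniform_large = [100] * 900       # homogeneous, covers most of a 1000-pixel TCI
noisy_large = [100, 140] * 450    # same size, inhomogeneous
tiny = [100] * 10                 # uniform but tiny region
```

A large homogeneous reference region scores lower (better, since $V_W$ is minimized) than both an inhomogeneous region of the same size and a tiny uniform speck, showing how the two terms interact.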

2.4.3. Average Gradient. As mentioned above, the purpose of segmenting US images is to support the subsequent analysis and classification in the CAD system, and a wealth of useful information for classification is contained in the contour of the focus. Accordingly, to obtain better tumor contours, a third term called the average gradient ($G_A$) is employed in our objective function. Inspired by the definition of energy in the ACM, $G_A$ is defined as follows:

$G_A = \frac{1}{m} \sum_{i=1}^{m} |G_i|,$ (11)

where $m$ is the number of pixels on the edge of the reference region and $G_i$ denotes the gradient (calculated by the Sobel operator) at the $i$th pixel. The Sobel operator is an edge detection operator based on 2D spatial gradient measurement; it smooths image noise and provides more accurate edge direction information than the Prewitt and Roberts operators [59]. $G_A$ denotes the average edge energy of the reference region.

Maximizing the average gradient $G_A$ yields a more accurate contour and avoids oversegmentation. If oversegmentation occurred, the reference region would lie inside the real target area; since the real target area is relatively homogeneous, every smaller region partitioned within it would have a smaller $G_A$. Consequently, increasing $G_A$ forces the contour of the reference region to move towards that of the real target area. However, the edges of targets in US images are often not sufficiently clear and sharp, so $G_A$ cannot be used alone in the objective function; we therefore treat the average gradient as one of the three objectives in the optimization process. In the ACM, the initial contour is driven towards the real edge by optimizing the energy; similarly, maximizing $G_A$ forces the contour of the reference region to approach the real contour of the tumor.
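A small Python sketch of (11), with a hand-rolled 3x3 Sobel operator using the common |Gx| + |Gy| magnitude approximation; the step-edge image and the two candidate contours are made-up examples.

```python
def sobel_magnitude(img, y, x):
    """Gradient magnitude at (y, x) via the 3x3 Sobel kernels
    (|Gx| + |Gy| approximation); (y, x) must not lie on the image border."""
    gx = (img[y-1][x+1] + 2*img[y][x+1] + img[y+1][x+1]
          - img[y-1][x-1] - 2*img[y][x-1] - img[y+1][x-1])
    gy = (img[y+1][x-1] + 2*img[y+1][x] + img[y+1][x+1]
          - img[y-1][x-1] - 2*img[y-1][x] - img[y-1][x+1])
    return abs(gx) + abs(gy)

def average_gradient(img, contour):
    """G_A from eq. (11): mean Sobel magnitude over the m contour pixels."""
    return sum(sobel_magnitude(img, y, x) for y, x in contour) / len(contour)

# Vertical step edge between columns 2 and 3: a contour sitting on the
# edge scores high, a contour inside the flat bright region scores zero.
img = [[10, 10, 10, 90, 90, 90] for _ in range(5)]
on_edge = [(y, 2) for y in range(1, 4)]
off_edge = [(y, 4) for y in range(1, 4)]
```

The contrast between the two scores is what drives the reference-region contour towards the true boundary when $G_A$ is maximized.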

2.4.4. The Final Objective Function. Based on the above three parts, the objective function is defined as follows:

$F_O = a \, \frac{V_B}{f_B} - b \, \frac{V_W}{f_W} + c \, \frac{G_A}{f_A}$ (12)

$f_B = \max_{1 \le j \le n_p} \left| V_B^{(j)} \right|$ (13)

$f_W = \max_{1 \le j \le n_p} \left| V_W^{(j)} \right|$ (14)

$f_A = \max_{1 \le j \le n_p} \left| G_A^{(j)} \right|$ (15)

$F_O = 0.3 \, \frac{V_B}{f_B} - 0.3 \, \frac{V_W}{f_W} + 0.4 \, \frac{G_A}{f_A},$ (16)

where $a$, $b$, and $c$ are the weights of the different objective terms ($a = 0.3$, $b = 0.3$, $c = 0.4$ in our experiments; they can be adjusted as needed), so the final objective function used in the experiments is (16). $V_B$, $V_W$, and $G_A$ are the between-class variance, the within-class variance, and the average gradient, respectively, and $f_B$, $f_W$, and $f_A$ are normalization factors, while $n_p = 200$ is the size of the particle swarm. Because the value ranges of $V_B$, $V_W$, and $G_A$ are quite different, they must be normalized to be comparable. For each US image, $f_B$, $f_W$, and $f_A$ are calculated once, after the uniform initialization of the particle swarm but before the first iteration. The PSO then seeks to maximize $F_O$.
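A sketch of how the three terms combine in (16). The raw scores and normalization factors below are made-up numbers, purely to show that a candidate segmentation with high contrast, low internal variance, and strong edges scores higher than a poor one.

```python
def objective(v_b, v_w, g_a, f_b, f_w, f_a, a=0.3, b=0.3, c=0.4):
    """F_O as in eq. (16): weighted combination of the normalized
    between-class variance (maximized), within-class variance (minimized,
    hence the minus sign), and average edge gradient (maximized)."""
    return a * v_b / f_b - b * v_w / f_w + c * g_a / f_a

# Hypothetical normalization factors, computed once from the initial swarm.
f_b, f_w, f_a = 5000.0, 1.0, 100.0

# Two candidate segmentations: `good` has higher contrast, lower internal
# variance, and a stronger edge than `bad`.
good = objective(v_b=9000.0, v_w=0.2, g_a=250.0, f_b=f_b, f_w=f_w, f_a=f_a)
bad = objective(v_b=2000.0, v_w=1.2, g_a=40.0, f_b=f_b, f_w=f_w, f_a=f_a)
```

During optimization, each particle's $(\alpha, k)$ setting is scored this way, and the PSO moves the swarm towards settings with a larger $F_O$.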

2.5. Postprocessing. After the TCI is segmented by RGB with the optimal $\alpha$ and $k$ obtained by the PSO, the result is converted into a binary image containing the object (tumor) and the background (tissue). Next, morphological opening and closing are conducted to refine the tumor contour: opening removes the spicules and closing fills the holes. A 5 x 5 elliptical kernel is used for both operations.
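A minimal Python sketch of binary opening and closing, using a 3x3 square structuring element with zero padding rather than the 5 x 5 elliptical kernel of the actual method; the blob, hole, and spicule are made-up examples.

```python
def _window_all(mask, y, x, k):
    """True if every pixel of the (2k+1)x(2k+1) window is set;
    out-of-bounds positions count as 0 (zero padding)."""
    h, w = len(mask), len(mask[0])
    for dy in range(-k, k + 1):
        for dx in range(-k, k + 1):
            ny, nx = y + dy, x + dx
            if not (0 <= ny < h and 0 <= nx < w) or not mask[ny][nx]:
                return False
    return True

def _window_any(mask, y, x, k):
    """True if any in-bounds pixel of the window is set."""
    h, w = len(mask), len(mask[0])
    return any(mask[ny][nx]
               for ny in range(max(0, y - k), min(h, y + k + 1))
               for nx in range(max(0, x - k), min(w, x + k + 1)))

def erode(mask, k=1):
    h, w = len(mask), len(mask[0])
    return [[1 if _window_all(mask, y, x, k) else 0 for x in range(w)] for y in range(h)]

def dilate(mask, k=1):
    h, w = len(mask), len(mask[0])
    return [[1 if _window_any(mask, y, x, k) else 0 for x in range(w)] for y in range(h)]

def opening(mask):   # erosion then dilation: removes small spicules
    return dilate(erode(mask))

def closing(mask):   # dilation then erosion: fills small holes
    return erode(dilate(mask))

# 7x7 square blob (rows/cols 1-5); variants with a one-pixel hole
# and with a detached spicule pixel.
blob = [[1 if 1 <= y <= 5 and 1 <= x <= 5 else 0 for x in range(7)] for y in range(7)]
holed = [row[:] for row in blob]
holed[3][3] = 0
spiky = [row[:] for row in blob]
spiky[0][0] = 1
closed = closing(holed)
opened = opening(spiky)
```

Closing fills the interior hole and opening removes the isolated spicule, in both cases recovering the clean blob, which is the refinement effect sought for the tumor contour.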

2.6. The Proposed MOORGB Segmentation Method. Let the position and velocity of the $i$th particle be $x_i = (k_i, \alpha_i)$ and $v_i = (v_{ki}, v_{\alpha i})$, respectively. The general procedure of MOORGB is summarized as follows.

Step 1. Manually delineate TCI from the original US image.

Step 2. Apply the bilateral filter to the TCI for speckle reduction.

Step 3. Enhance the filtered TCI by histogram equalization to improve the contrast.

Step 4. Improve the homogeneity by performing pyramid mean shift filtering.

Step 5. Uniformly initialize the particle swarm within the search space, and set the iteration count $q = 0$.

Step 6. Let $q = q + 1$ and traverse all $n_p$ particles: in the $q$th traversal, RGB is performed with the position $x_i = (k_i, \alpha_i)$ of each particle; the segmentation result is then evaluated with the objective function $F_O$, and $p_i$ and $p_g$ are obtained by comparing the values of $F_O$, updating each particle's position and velocity for the next iteration.

Step 7. Repeat Step 6 until convergence (i.e., $p_g$ remains stable for four generations) or $q = N$ ($N = 200$ in this paper).

Step 8. After the iteration finishes, the position of the globally best particle (i.e., $p_g$) gives the optimal setting of $\alpha$ and $k$; the final segmentation result is obtained by performing RGB with this optimal setting.

Step 9. Convert the segmentation result into a binary image; then obtain the final tumor contour by conducting morphological opening and closing.

2.7. Experimental Methods. We implemented the proposed method in C++ using OpenCV 2.4.3 and Visual Studio 2010 and ran it on a computer with a 3.40 GHz CPU and 12.0 GB RAM. To validate the method, experiments were conducted. Our work was approved by the Human Subject Ethics Committee of South China University of Technology. The dataset contains 100 clinical breast US images and 18 clinical musculoskeletal US images, provided with the subjects' consent forms by the Cancer Center of Sun Yat-sen University and acquired with an HDI5000 SonoCT System (Philips Medical Systems) with an L12-5 50 mm Broadband Linear Array at an imaging frequency of 7.1 MHz. The "true" tumor regions of these US images were manually delineated by an experienced radiologist who has worked on US imaging and diagnosis for more than ten years. A contour delineated by a single doctor is not absolutely accurate, since different doctors may give different "real contours"; this is a known problem in such research. Nevertheless, the radiologist's rich diagnostic experience ensured that the edge of every tumor was traced as accurately as possible. The dataset consists of 50 breast US images with benign tumors, 50 breast US images with malignant tumors, and 18 musculoskeletal US images with cysts (10 ganglion cysts, 4 keratinizing cysts, and 4 popliteal cysts).

To demonstrate the advantages of the proposed method, besides PAORGB, we also compared it with two other well-known segmentation methods, DRLSE [43] and MSGC [29]. DRLSE, a recent advance in level set evolution, is an edge-based active contour method whose initial contour must be set manually and profoundly affects the final segmentation result. MSGC is a novel graph-cut method whose energy function combines region- and edge-based information to segment US images; it also requires a tumor-centered ROI to be cropped. Among the three comparative methods, DRLSE is edge-based, PAORGB is region-based, and MSGC is a compound method. To compare computational efficiency fairly, PAORGB and MOORGB were programmed in the same software system, all four methods were run on the same hardware configuration, and the same ROI was used for all four segmentation methods.

To quantitatively measure the experimental results, four criteria were adopted in this study: averaged radial error (ARE), true positive volume fraction (TPVF), false positive volume fraction (FPVF), and false negative volume fraction (FNVF). The ARE evaluates segmentation performance by measuring the average radial error of a segmented contour with respect to the real contour delineated by an expert radiologist. As shown in Figure 7, it is defined as

\[ \mathrm{ARE}(n) = \frac{1}{n} \sum_{i=0}^{n-1} \frac{\left| C_{s}(i) - C_{r}(i) \right|}{\left| C_{r}(i) - C_{o} \right|} \times 100\%, \quad (17) \]

where n is the number of radial rays (set to 180 in our experiments), C_o is the center of the "true" tumor region delineated by the radiologist, C_s(i) is the location where the contour of the segmented tumor region crosses the ith ray, and C_r(i) is the location where the contour of the "true" tumor region crosses the ith ray.
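A minimal sketch of the ARE computation, assuming each contour has already been reduced to its crossing distance from C_o along every ray (the ray-polygon intersection step is omitted for brevity):

```cpp
#include <cmath>
#include <vector>

// ARE sketch: n radial rays are cast from the centre C_o of the "true"
// region. rSeg[i] and rTrue[i] are the distances from C_o at which the
// segmented and true contours cross the ith ray; since both crossings lie
// on the same ray, |C_s(i) - C_r(i)| reduces to |rSeg[i] - rTrue[i]|.
double averagedRadialError(const std::vector<double>& rSeg,   // |C_s(i) - C_o|
                           const std::vector<double>& rTrue)  // |C_r(i) - C_o|
{
    int n = (int)rSeg.size();  // n = 180 in the paper's experiments
    double sum = 0.0;
    for (int i = 0; i < n; ++i)
        sum += std::fabs(rSeg[i] - rTrue[i]) / rTrue[i];  // per-ray relative error
    return 100.0 * sum / n;  // percent, as in Eq. (17)
}
```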

In addition, TPVF, FPVF, and FNVF were also used in the evaluation of the performance of segmentation methods. TPVF means true positive volume fraction, indicating the total fraction of tissue in the "true" tumor region with which the segmented region overlaps. FPVF means false positive volume fraction, denoting the amount of tissue falsely identified by the segmentation method as a fraction of the total amount of tissue in the "true" tumor region. FNVF means false negative volume fraction, denoting the fraction of tissue defined in the "true" tumor region that is missed by the segmentation method. In our study, the "true" tumor region is delineated by the radiologist. Figure 8 shows the areas corresponding to TPVF, FPVF, and FNVF. Accordingly, smaller ARE, FPVF, and FNVF and larger TPVF indicate better segmentation performance. TPVF, FPVF, and FNVF are defined by

\[ \mathrm{TPVF} = \frac{A_{m} \cap A_{n}}{A_{m}}, \quad \mathrm{FPVF} = \frac{A_{n} - A_{m} \cap A_{n}}{A_{m}}, \quad \mathrm{FNVF} = \frac{A_{m} - A_{m} \cap A_{n}}{A_{m}}, \quad (18) \]

where A_m is the area of the "true" tumor region delineated by the radiologist and A_n is the area of the tumor region obtained by the segmentation algorithm.
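Given binary masks for A_m and A_n (flattened to 1-D here for brevity), the three fractions of Eq. (18) can be computed directly:

```cpp
#include <cmath>
#include <vector>

// Overlap fractions of Eq. (18) from two binary masks: Am is the
// radiologist's "true" region, An the algorithm's output; all three
// fractions are normalised by the area of Am.
struct OverlapFractions { double tpvf, fpvf, fnvf; };  // in percent

OverlapFractions overlap(const std::vector<int>& Am, const std::vector<int>& An) {
    double am = 0.0, an = 0.0, inter = 0.0;
    for (size_t i = 0; i < Am.size(); ++i) {
        am += Am[i];
        an += An[i];
        inter += Am[i] & An[i];  // pixel lies in both regions
    }
    return { 100.0 * inter / am,          // TPVF
             100.0 * (an - inter) / am,   // FPVF
             100.0 * (am - inter) / am }; // FNVF
}
```

Note that TPVF + FNVF = 100% by construction, which matches the paired columns in Tables 1-4.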

3. Experimental Results and Discussion

3.1. Qualitative Analysis. In this paper, we present the segmentation results for five tumors. Five US images with the segmentation results are shown in Figures 9-13. The quantitative segmentation results on US images are shown in Tables 1, 2, 3, and 4. Figures 9(a), 10(a), 11(a), 12(a), and 13(a) show original B-mode US images for two benign tumors, two malignant tumors, and one musculoskeletal cyst, respectively. After preprocessing the original images, the segmentation results using the MOORGB are illustrated in Figures 9(b), 10(b), 11(b), 12(b), and 13(b), those using the PAORGB in Figures 9(d), 10(d), 11(d), 12(d), and 13(d), those using the DRLSE in Figures 9(e), 10(e), 11(e), 12(e), and 13(e), and those using the MSGC in Figures 9(f), 10(f), 11(f), 12(f), and 13(f).

Figures 9-13 show that our method achieved the best segmentation results among the four methods, with contours quite close to the real contours delineated by the radiologist. Undersegmentation occurs in Figures 9(d) and 10(d) but not in Figures 9(b) and 10(b), and oversegmentation occurs in Figures 12(d) and 13(d) but not in Figures 12(b) and 13(b). Compared with PAORGB, MOORGB clearly improves the segmentation results, avoiding undersegmentation and oversegmentation more effectively; regional uniformity is significantly improved and the edges are smoother. The reason is that the within-class variance and average gradient are introduced into the objective function of MOORGB, combining region- and edge-based information. The segmentation results of MSGC are better than those of PAORGB and DRLSE, since MSGC is also a compound method (its energy function includes both region energy and boundary energy) and adopts many preprocessing techniques. As shown in Figures 9(e), 10(e), 11(e), 12(e), and 13(e), DRLSE can only roughly detect the tumor contour, and the detected contours are irregular: it depends on edge-based information and is sensitive to speckle noise and sharp edges, so it easily latches onto sharp edges, leading to boundary leakage and undersegmentation.

3.2. Quantitative Analysis. Table 4 shows the quantitative comparisons of different segmentation approaches on the whole dataset. Similarly, we show the quantitative segmentation results of the benign tumors, malignant tumors, and cysts in Tables 1, 2, and 3, respectively.

Comparing Tables 1 and 3 with Table 2 shows that all four segmentation methods perform better on benign tumors and musculoskeletal cysts than on malignant tumors, indicating that the boundaries of benign tumors and musculoskeletal cysts are more distinct than those of malignant tumors. The shape of a benign tumor is more regular, approximating a circle or ellipse, whereas a malignant tumor is irregular and usually lobulated, with burrs along the contour. The segmentation results for malignant tumors are worse because their contours are less regular and less homogeneous than those of benign tumors.

From Table 4, it is seen that our method achieved the lowest ARE (10.77%). Due to undersegmentation, DRLSE got the highest TPVF (94.07%) but also the highest FPVF (17.97%), indicating a high ratio of false segmentation. MSGC got the lowest FPVF (2.9%), indicating a low ratio of false segmentation, and is the fastest of the four methods (0.123 s); however, it got the lowest TPVF (75.61%), showing a degree of oversegmentation. Compared with the original PAORGB (as shown in Table 4), our method clearly improves the segmentation result, achieving higher TPVF and lower ARE, FPVF, and FNVF. Although MOORGB does not achieve the best performance on every evaluation index, it has the best overall performance. Compared with DRLSE, our method is 8.73% lower in TPVF but 13.49% better in FPVF, giving better overall performance; compared with MSGC, it is 1.58% higher in FPVF but 9.73% better in TPVF, giving better overall performance and avoiding oversegmentation to a degree. As shown in Table 4, our method is also faster than PAORGB, because its convergence condition is that p_g remains stable for 4 generations, rather than that the updating of k is below 1 and that of α is below 0.00001 for all particles, as in PAORGB [31].

3.3. The Influence of the Weights. Our method synthesizes three optimization objective functions (between-class variance V_B, within-class variance V_W, and average gradient G_A), so the weight values of the three objective parts (i.e., a, b, and c) are introduced. Figure 14 compares experimental results with different weight values. From Figures 14(a), 14(b), and 14(c) and Table 5, we can see that when the three weight values are almost the same, the three optimization objectives play nearly equal roles in the optimization process, making the algorithm both region- and edge-based. When one of the three weights is too large, the three objectives are no longer reflected evenly, leading to oversegmentation or undersegmentation. As shown in Figures 14(k), 14(l), and 14(m), if one of the three weights equals one, the proposed method degenerates into a single-objective optimization algorithm with only one optimization goal, which cannot effectively avoid oversegmentation and undersegmentation. Based on this analysis and our repeated experiments, we set the system parameters to a = b = 0.3 and c = 0.4; the resulting objective function, described in (16), makes the segmentation system work well.
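Since Eq. (16) is not reproduced in this excerpt, the following is only a hedged sketch of a linear weighted combination in which V_W enters with a negative sign (it is minimized while V_B and G_A are maximized), with inputs assumed pre-normalized to comparable scales:

```cpp
#include <cmath>

// Hedged sketch only: assumes F_O is a linear combination of the three
// objectives with weights (a, b, c), penalising the within-class variance
// vW and rewarding the between-class variance vB and average gradient gA.
// Defaults are the weights selected in the paper's experiments.
double objectiveFO(double vB, double vW, double gA,
                   double a = 0.3, double b = 0.3, double c = 0.4) {
    return a * vB - b * vW + c * gA;
}
```

Under this sketch, a segmentation with high between-class variance, low within-class variance, and a strong average edge gradient scores highest, which is the balance the weight study in Table 5 explores.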

4. Conclusions

In this paper, we propose a novel segmentation scheme for US images based on the RGB and PSO methods. In this scheme, the PSO is used to optimally set the two significant parameters of the RGB that determine the segmentation result automatically. To combine region-based and edge-based information, we treat the optimization as a multiobjective problem. First, because of the low contrast and speckle noise of US images, the ROI image is filtered by a bilateral filter and contrast-enhanced by histogram equalization, and then pyramid mean shift is executed on the enhanced image to improve homogeneity. A novel objective function consisting of three parts corresponding to region-based and edge-based information is adopted by the PSO. The between-class variance denotes the difference between the reference region and its adjacent regions. The within-class variance denotes the difference within the reference region, and the undersegmentation problem can be well overcome by minimizing it. Between-class variance and within-class variance reflect the regional information. The average gradient denotes the average energy of the edge of the reference region, and maximizing it forces the contour of the reference region to approach the real contour of the tumor; it reflects the edge-based information of the image. The three optimization objectives together make the algorithm both region- and edge-based. With the optimization of the PSO, RGB is performed to segment the ROI image. Finally, the segmented image is processed by morphological opening and closing to refine the tumor contour. To validate our method, experiments have been conducted on 118 clinical US images, including breast US images and musculoskeletal US images.
The segmentation results of our method have been compared with the corresponding results obtained by three existing approaches, and four metrics have been used to measure the segmentation performance. The experimental results show that our method could successfully segment US images and achieved the best segmentation results compared with the other three methods, MSGC, PAORGB, and DRLSE. The contour generated by our method was closer to the real contour delineated by the radiologist. The MOORGB obtained the lowest ARE and better overall performance in TPVF, FPVF, and FNVF, avoiding the undersegmentation and oversegmentation more effectively.

However, the step that obtains the TCI (as shown in Figure 2) requires the user's participation, which can significantly influence the subsequent segmentation. To obtain acceptable segmentation results, the operator should be well experienced in examining US images and identifying suspicious lesions in clinical practice. Moreover, the TCI should be carefully delineated to cover the full lesion region with some surrounding tissue, and the lesion of interest must lie in its central part. Consequently, automatically extracting the TCI from the BUS image is one of our future studies. In addition, the computation time is still far from meeting real-time requirements, so reducing it by adopting parallel processing techniques is also part of our future work. Finally, adopting our segmentation method in real CAD systems to validate the overall performance will be included in our future work.

https://doi.org/10.1155/2017/9157341

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Authors' Contributions

Yaozhong Luo and Longzhong Liu have contributed equally to this work.

Acknowledgments

This work was partially supported by National Natural Science Foundation of China (nos. 61372007 and 61571193), Guangzhou Key Lab of Body Data Science (no. 201605030011), and Guangdong Provincial Science and Technology Program-International Collaborative Projects (no. 2014A050503020).

References

[1] B. Sahiner, H.-P. Chan, M. A. Roubidoux et al., "Malignant and benign breast masses on 3D US volumetric images: effect of computer-aided diagnosis on radiologist accuracy," Radiology, vol. 242, no. 3, pp. 716-724, 2007.

[2] C.-M. Chen, Y.-H. Chou, K.-C. Han et al., "Breast lesions on sonograms: computer-aided diagnosis with nearly setting-independent features and artificial neural networks," Radiology, vol. 226, no. 2, pp. 504-514, 2003.

[3] K. Drukker, M. L. Giger, K. Horsch, M. A. Kupinski, C. J. Vyborny, and E. B. Mendelson, "Computerized lesion detection on breast ultrasound," Medical Physics, vol. 29, no. 7, pp. 1438-1446, 2002.

[4] Q. Li, W. Zhang, X. Guan, Y. Bai, and J. Jia, "An improved approach for accurate and efficient measurement of common carotid artery intima-media thickness in ultrasound images," BioMed Research International, vol. 2014, Article ID 740328, 8 pages, 2014.

[5] B. O. Anderson, R. Shyyan, A. Eniu et al., "Breast cancer in limited-resource countries: an overview of the breast health global initiative 2005 guidelines," Breast Journal, vol. 12, no. 1, pp. S3-S15, 2006.

[6] V. Naik, R. S. Gamad, and P. P. Bansod, "Carotid artery segmentation in ultrasound images and measurement of intima-media thickness," BioMed Research International, vol. 2013, Article ID 801962, 10 pages, 2013.

[7] Y. L. Huang, D. R. Chen, and Y. K. Liu, "Breast cancer diagnosis using image retrieval for different ultrasonic systems," in Proceedings of the International Conference on Image Processing, ICIP, pp. 2957-2960, Institute of Electrical and Electronics Engineers, Singapore, 2004.

[8] H. D. Cheng, J. Shan, W. Ju, Y. Guo, and L. Zhang, "Automated breast cancer detection and classification using ultrasound images: a survey," Pattern Recognition, vol. 43, no. 1, pp. 299-317, 2010.

[9] Y. Li, W. Liu, X. Li, Q. Huang, and X. Li, "GA-SIFT: a new scale invariant feature transform for multispectral image using geometric algebra," Information Sciences, vol. 281, pp. 559-572, 2014.

[10] J. Shi, S. Zhou, X. Liu, Q. Zhang, M. Lu, and T. Wang, "Stacked deep polynomial network based representation learning for tumor classification with small ultrasound image dataset," Neurocomputing, vol. 194, pp. 87-94, 2016.

[11] J. A. Noble and D. Boukerroui, "Ultrasound image segmentation: a survey," IEEE Transactions on Medical Imaging, vol. 25, no. 8, pp. 987-1010, 2006.

[12] J. Peng, J. Shen, and X. Li, "High-Order Energies for Stereo Segmentation," IEEE Transactions on Cybernetics, vol. 46, no. 7, pp. 1616-1627, 2016.

[13] M. Xian, Y. Zhang, H.-D. Cheng, F. Xu, and J. Ding, "Neutroconnectedness cut," IEEE Transactions on Image Processing, vol. 25, no. 10, pp. 4691-4703, 2016.

[14] P. N. T. Wells and M. Halliwell, "Speckle in ultrasonic imaging," Ultrasonics, vol. 19, no. 5, pp. 225-229, 1981.

[15] G. Xiao, M. Brady, J. A. Noble, and Y. Zhang, "Segmentation of ultrasound B-mode images with intensity inhomogeneity correction," IEEE Transactions on Medical Imaging, vol. 21, no. 1, pp. 48-57, 2002.

[16] S. Y. Joo, W. K. Moon, and H. C. Kim, "Computer-aided diagnosis of solid breast nodules on ultrasound with digital image processing and artificial neural network," in Proceedings of the 26th Annual International Conference of, vol. 2, pp. 1397-1400, San Francisco, CA, USA, 2004.

[17] K. Horsch, M. L. Giger, C. J. Vyborny, and L. A. Venta, "Performance of Computer-Aided Diagnosis in the Interpretation of Lesions on Breast Sonography," Academic Radiology, vol. 11, no. 3, pp. 272-280, 2004.

[18] M. H. Yap, E. A. Edirisinghe, and H. E. Bez, "Fully automatic lesion boundary detection in ultrasound breast images," in Medical Imaging 2007: Image Processing, vol. 6512 of Proceedings of SPIE, p. I5123, San Diego, Calif, USA, 2007.

[19] N. A. M. Isa, S. Sabarudin, U. K. Ngah, and K. Z. Zamli, "Automatic detection of breast tumours from ultrasound images using the modified seed based region growing technique," in Proceedings of the 9th International Conference on Knowledge-Based Intelligent Information and Engineering Systems, vol. 3682, pp. 138-144, Springer, La Trobe University, Melbourne, Australia, 2005.

[20] J. Shan, H. D. Cheng, and Y. Wang, "A novel segmentation method for breast ultrasound images based on neutrosophic l-means clustering," Medical Physics, vol. 39, no. 9, pp. 5669-5682, 2012.

[21] H. B. Kekre and P. Shrinath, "Tumour delineation using statistical properties of the breast us images and vector quantization based clustering algorithms," International Journal of Image, Graphics and Signal Processing, vol. 5, no. 11, pp. 1-12, 2013.

[22] W. K. Moon, C.-M. Lo, R.-T. Chen et al., "Tumor detection in automated breast ultrasound images using quantitative tissue clustering," Medical Physics, vol. 41, no. 4, Article ID 042901, 2014.

[23] D. Boukerroui, O. Basset, N. Guerin, and A. Baskurt, "Multiresolution texture based adaptive clustering algorithm for breast lesion segmentation," European Journal of Ultrasound, vol. 8, no. 2, pp. 135-144, 1998.

[24] C.-M. Chen, Y.-H. Chou, C. S. K. Chen et al., "Cell-competition algorithm: a new segmentation algorithm for multiple objects with irregular boundaries in ultrasound images," Ultrasound in Medicine and Biology, vol. 31, no. 12, pp. 1647-1664, 2005.

[25] B. Deka and D. Ghosh, "Ultrasound image segmentation using watersheds and region merging," in Proceedings of the IET Visual Information Engineering (VIE '06), p. 6, Bangalore, India, 2006.

[26] L. Zhang and M. Zhang, "A fully automatic image segmentation using an extended fuzzy set," in Proceedings of the International Workshop on Computer Science for Environmental Engineering and EcoInformatics, vol. 159, pp. 412-417, Springer, Kunming, People's Republic of China, 2011.

[27] W. Gomez, A. Rodriguez, W. C. A. Pereira, and A. F. C. Infantosi, "Feature selection and classifier performance in computer-aided diagnosis for breast ultrasound," in Proceedings of the 10th International Conference and Expo on Emerging Technologies for a Smarter World, CEWIT, IEEE, Melville, NY, USA, 2013.

[28] J. Zhang, S. K. Zhou, S. Brunke, C. Lowery, and D. Comaniciu, "Database-guided breast tumor detection and segmentation in 2D ultrasound images," in Medical Imaging 2010: Computer-Aided Diagnosis, vol. 7624 of Proceedings of SPIE, p. 7, San Diego, Calif, USA, February 2010.

[29] Z. Zhou, W. Wu, S. Wu et al., "Semi-automatic breast ultrasound image segmentation based on mean shift and graph cuts," Ultrasonic Imaging, vol. 36, no. 4, pp. 256-276, 2014.

[30] Q.-H. Huang, S.-Y. Lee, L.-Z. Liu, M.-H. Lu, L.-W. Jin, and A.H. Li, "A robust graph-based segmentation method for breast tumors in ultrasound images," Ultrasonics, vol. 52, no. 2, pp. 266-275, 2012.

[31] Q. Huang, X. Bai, Y. Li, L. Jin, and X. Li, "Optimized graph-based segmentation for ultrasound images," Neurocomputing, vol. 129, pp. 216-224, 2014.

[32] Q. Huang, F. Yang, L. Liu, and X. Li, "Automatic segmentation of breast lesions for interaction in ultrasonic computer-aided diagnosis," Information Sciences, vol. 314, pp. 293-310, 2015.

[33] H. Chang, Z. Chen, Q. Huang, J. Shi, and X. Li, "Graph-based learning for segmentation of 3D ultrasound images," Neurocomputing, vol. 151, no. 2, pp. 632-644, 2015.

[34] Q. Huang, B. Chen, J. Wang, and T. Mei, "Personalized video recommendation through graph propagation," ACM Transactions on Multimedia Computing, Communications, and Applications, vol. 10, no. 4, pp. 1133-1136, 2012.

[35] Y. Luo, S. Han, and Q. Huang, "A novel graph-based segmentation method for breast ultrasound images," in Proceedings of the Digital Image Computing: Techniques and Applications (DICTA), pp. 1-6, IEEE, 2016.

[36] R.-F. Chang, W.-J. Wu, C.-C. Tseng, D.-R. Chen, and W. K. Moon, "3-D snake for US in margin evaluation for malignant breast tumor excision using mammotome," IEEE Transactions on Information Technology in Biomedicine, vol. 7, no. 3, pp. 197-201, 2003.

[37] A. K. Jumaat, W. E. Z. W. A. Rahman, A. Ibrahim, and R. Mahmud, "Segmentation of masses from breast ultrasound images using parametric active contour algorithm," in Proceedings of the International Conference on Mathematics Education Research, ICMER, pp. 640-647, Malacca, Malaysia, 2010.

[38] A. Sarti, C. Corsi, E. Mazzini, and C. Lamberti, "Maximum likelihood segmentation of ultrasound images with rayleigh distribution," IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, vol. 52, no. 6, pp. 947-960, 2005.

[39] B. Liu, H. D. Cheng, J. Huang, J. Tian, X. Tang, and J. Liu, "Probability density difference-based active contour for ultrasound image segmentation," Pattern Recognition, vol. 43, no. 6, pp. 2028-2042, 2010.

[40] L. Gao, X. Liu, and W. Chen, "Phase- and GVF-based level set segmentation of ultrasonic breast tumors," Journal of Applied Mathematics, vol. 2012, Article ID 810805, 22 pages, 2012.

[41] B. Wang, X. Gao, J. Li, X. Li, and D. Tao, "A level set method with shape priors by using locality preserving projections," Neurocomputing, vol. 170, pp. 188-200, 2015.

[42] B. N. Li, J. Qin, R. Wang, and M. Wang, "Selective level set segmentation using fuzzy region competition," IEEE Access, vol. 4, pp. 4777-4788, 2016.

[43] C. Li, C. Xu, C. Gui, and M. D. Fox, "Distance regularized level set evolution and its application to image segmentation," IEEE Transactions on Image Processing, vol. 19, no. 12, pp. 3243-3254, 2010.

[44] I. Lang, M. Sklair-Levy, and H. Spitzer, "Multi-scale texture-based level-set segmentation of breast B-mode images," Computers in Biology & Medicine, vol. 72C, pp. 30-42, 2016.

[45] A. Madabhushi and D. N. Metaxas, "Combining low-, high-level and empirical domain knowledge for automated segmentation of ultrasonic breast lesions," IEEE Transactions on Medical Imaging, vol. 22, no. 2, pp. 155-169, 2003.

[46] D. R. Chen, R. F. Chang, W. J. Kuo, M. C. Chen, and Y. L. Huang, "Diagnosis of breast tumors with sonographic texture analysis using wavelet transform and neural networks," Ultrasound in Medicine and Biology, vol. 28, no. 10, pp. 1301-1310, 2002.

[47] Y. Guo, A. Sengur, and J. W. Tian, "A novel breast ultrasound image segmentation algorithm based on neutrosophic similarity score and level set," Computer Methods and Programs in Biomedicine, vol. 123, pp. 43-53, 2016.

[48] A. T. Stavros, D. Thickman, C. L. Rapp, M. A. Dennis, S. H. Parker, and G. A. Sisney, "Solid breast nodules: use of sonography to distinguish between benign and malignant lesions," Radiology, vol. 196, no. 1, pp. 123-134, 1995.

[49] W. Leucht and D. Leucht, "Teaching Atlas of Breast Ultrasound," pp. 24-38, Thieme Medical, Stuttgart, Germany, 2000.

[50] R. F. Chang, W. J. Wu, W. K. Moon, W. M. Chen, W. Lee, and D. R. Chen, "Segmentation of breast tumor in three-dimensional ultrasound images using three-dimensional discrete active contour model," Ultrasound in Medicine and Biology, vol. 29, no. 11, pp. 1571-1581, 2003.

[51] Y. L. Huang and D. R. Chen, "Watershed segmentation for breast tumor in 2-D sonography," Ultrasound in Medicine and Biology, vol. 30, no. 5, pp. 625-632, 2004.

[52] Y. L. Huang and D. R. Chen, "Automatic contouring for breast tumors in 2-D sonography," in Proceedings of the 27th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (IEEE), pp. 3225-3228, IEEE, Shanghai, China, 2005.

[53] Y. L. Huang, Y. R. Jiang, D. R. Chen, and W. K. Moon, "Level set contouring for breast tumor in sonography," Journal of Digital Imaging, vol. 20, no. 3, pp. 238-247, 2007.

[54] M. Aleman-Flores, L. Alvarez, and V. Caselles, "Texture-oriented anisotropic filtering and geodesic active contours in breast tumor ultrasound segmentation," Journal of Math Imaging Vision, vol. 28, no. 1, pp. 81-97, 2007.

[55] W. M. Wang, L. Zhu, J. Qin, Y. P. Chui, B. N. Li, and P. A. Heng, "Multiscale geodesic active contours for ultrasound image segmentation using speckle reducing anisotropic diffusion," Optics and Lasers in Engineering, vol. 54, pp. 105-116, 2014.

[56] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proceedings of the International Conference on Neural Networks, (ICNN '95), vol. 4, pp. 1942-1948, IEEE, Perth, Australia, 1995.

[57] K. E. Parsopoulos and M. N. Vrahatis, "Particle swarm optimization method in multiobjective problems," in Proceedings of the ACM Symposium on Applied Computing, pp. 603-607, Madrid, Spain, March 2002.

[58] M. Elad, "On the origin of the bilateral filter and ways to improve it," IEEE Transactions on Image Processing, vol. 11, no. 10, pp. 1141-1151, 2002.

[59] R. C. Gonzalez and R. E. Woods, Digital Image Processing, Prentice Hall, Upper Saddle River, NJ, USA, 2nd edition, 2001.

[60] D. Comaniciu and P. Meer, "Mean shift analysis and applications," in Proceedings of the 7th IEEE International Conference on Computer Vision, vol. 2, pp. 1197-1203, Kerkyra, Greece, September 1999.

[61] N. Otsu, "A threshold selection method from gray-level histograms," Automatica, vol. 11, pp. 23-27, 1975.

[62] N. R. Pal and D. Bhandari, "Image thresholding: some new techniques," Signal Processing, vol. 33, no. 2, pp. 139-158, 1993.

Yaozhong Luo, (1) Longzhong Liu, (2) Qinghua Huang, (1, 3) and Xuelong Li (4)

(1) School of Electronic and Information Engineering, South China University of Technology, Guangzhou, China

(2) Department of Ultrasound, The Cancer Center of Sun Yat-sen University, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong, China

(3) College of Information Engineering, Shenzhen University, Shenzhen 518060, China

(4) Center for OPTical IMagery Analysis and Learning (OPTIMAL), State Key Laboratory of Transient Optics and Photonics, Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi'an, Shaanxi 710119, China

Correspondence should be addressed to Qinghua Huang; qhhuang@scut.edu.cn

Received 9 September 2016; Revised 21 January 2017; Accepted 14 March 2017; Published 27 April 2017

Academic Editor: Cristiana Corsi

Caption: Figure 1: Flowchart of the proposed approach.

Caption: Figure 2: Example of extracting a TCI: (a) the original image and (b) the TCI image.

Caption: Figure 3: Example of preprocessing: (a) the TCI image, (b) the image after the bilateral filtering, (c) the image after histogram equalization, and (d) the image after mean shift filtering.

Caption: Figure 4: Influence of α and k: (a) the original image, (b) the segmentation results with different α, and (c) the segmentation results with different k.

Caption: Figure 5: Update of the particle.

Caption: Figure 6: Example of reference region: (a) a segmented image by the RGB and (b) the reference region of (a).

Caption: Figure 7: An illustration of computation principle for ARE.

Caption: Figure 8: The areas corresponding to TPVF, FPVF, and FNVF, respectively. A_m indicates the "true" contour delineated by the radiologist and A_n denotes the contour obtained by the segmentation algorithm.

Caption: Figure 9: Segmentation results for the first benign breast tumor. (a) A breast US image with a benign tumor (the contour is delineated by the radiologist). (b) The result of MOORGB. (c) The final result of MOORGB. (d) The result of PAORGB. (e) The result of DRLSE. (f) The result of MSGC.

Caption: Figure 10: Segmentation results for the second benign breast tumor. (a) A breast US image with a benign tumor (the contour is delineated by the radiologist). (b) The result of MOORGB. (c) The final result of MOORGB. (d) The result of PAORGB. (e) The result of DRLSE. (f) The result of MSGC.

Caption: Figure 11: Segmentation results for the first malignant breast tumor. (a) A breast US image with a malignant tumor (the contour is delineated by the radiologist). (b) The result of MOORGB. (c) The final result of MOORGB. (d) The result of PAORGB. (e) The result of DRLSE. (f) The result of MSGC.

Caption: Figure 12: Segmentation results for the second malignant breast tumor. (a) A breast US image with a malignant tumor (the contour is delineated by the radiologist). (b) The result of MOORGB. (c) The final result of MOORGB. (d) The result of PAORGB. (e) The result of DRLSE. (f) The result of MSGC.

Caption: Figure 13: Segmentation results for the keratinizing cyst. (a) A musculoskeletal US image with a cyst (the contour is delineated by the radiologist). (b) The result of MOORGB. (c) The final result of MOORGB. (d) The result of PAORGB. (e) The result of DRLSE. (f) The result of MSGC.

Caption: Figure 14: The segmentation results with different weights (a, b, c).

Table 1: Quantitative segmentation results of 50 breast US images with benign tumors.

Methods     | ARE (%)       | TPVF (%)      | FPVF (%)      | FNVF (%)
Our method  | 11.09 ± 12.47 | 85.60 ± 13.71 | 4.51 ± 20.18  | 14.40 ± 13.71
PAORGB [31] | 16.47 ± 21.41 | 81.64 ± 29.94 | 10.52 ± 29.40 | 18.36 ± 29.94
DRLSE [43]  | 11.37 ± 13.04 | 93.60 ± 16.87 | 14.42 ± 24.33 | 6.40 ± 16.87
MSGC [29]   | 15.76 ± 13.18 | 75.34 ± 16.25 | 2.51 ± 14.60  | 24.66 ± 16.24

Table 2: Quantitative segmentation results of 50 breast US images with malignant tumors.

Methods     | ARE (%)       | TPVF (%)      | FPVF (%)      | FNVF (%)
Our method  | 10.41 ± 13.62 | 84.91 ± 16.39 | 4.43 ± 19.01  | 15.09 ± 16.39
PAORGB      | 19.12 ± 27.63 | 74.98 ± 27.49 | 10.16 ± 37.09 | 25.02 ± 27.49
DRLSE       | 15.84 ± 15.34 | 95.31 ± 19.75 | 24.05 ± 20.68 | 4.69 ± 19.75
MSGC        | 15.52 ± 22.66 | 74.12 ± 15.12 | 2.93 ± 13.17  | 25.88 ± 15.12

Table 3: Quantitative segmentation results of 18 musculoskeletal US images with cysts.

Methods     | ARE (%)       | TPVF (%)      | FPVF (%)      | FNVF (%)
Our method  | 10.85 ± 17.14 | 85.61 ± 7.75  | 4.52 ± 34.22  | 14.30 ± 7.80
PAORGB      | 20.90 ± 39.66 | 82.12 ± 29.33 | 18.00 ± 35.97 | 17.80 ± 29.40
DRLSE       | 8.60 ± 12.06  | 91.90 ± 21.61 | 10.90 ± 10.72 | 8.04 ± 21.66
MSGC        | 14.43 ± 27.30 | 80.50 ± 17.33 | 3.9 ± 43.41   | 19.35 ± 29.65

Table 4: Overall quantitative segmentation results of 118 US images.

Methods     | ARE (%)       | TPVF (%)      | FPVF (%)      | FNVF (%)      | Averaged computing time (s)
Our method  | 10.77 ± 17.22 | 85.34 ± 16.69 | 4.48 ± 34.26  | 14.67 ± 16.67 | 50.54
PAORGB      | 18.27 ± 37.03 | 78.89 ± 30.24 | 10.51 ± 35.74 | 21.10 ± 30.23 | 719.78
DRLSE       | 12.84 ± 16.82 | 94.07 ± 18.51 | 17.97 ± 28.03 | 5.92 ± 23.78  | 5.93
MSGC        | 15.46 ± 26.27 | 75.61 ± 17.74 | 2.9 ± 42.41   | 24.37 ± 24.63 | 0.123

Table 5: Quantitative segmentation results of 15 US images with different weight values.

Weights (a, b, c)         | ARE (%) | TPVF (%) | FPVF (%) | FNVF (%)
a = 0.4, b = 0.3, c = 0.3 | 10.67   | 85.61    | 4.52     | 14.70
a = 0.3, b = 0.4, c = 0.3 | 10.71   | 85.60    | 4.51     | 14.69
a = 0.3, b = 0.3, c = 0.4 | 10.69   | 85.60    | 4.52     | 14.69
a = 0.6, b = 0.2, c = 0.2 | 10.47   | 85.51    | 4.49     | 14.91
a = 0.2, b = 0.6, c = 0.2 | 10.59   | 85.52    | 4.50     | 14.73
a = 0.2, b = 0.2, c = 0.6 | 10.74   | 85.64    | 4.80     | 14.75
a = 0.8, b = 0.2, c = 0.2 | 11.12   | 84.97    | 5.44     | 15.39
a = 0.2, b = 0.8, c = 0.2 | 11.25   | 85.03    | 5.78     | 15.41
a = 0.2, b = 0.2, c = 0.8 | 11.23   | 84.77    | 5.61     | 15.28
a = 1, b = 0, c = 0       | 12.92   | 83.39    | 6.78     | 16.07
a = 0, b = 1, c = 0       | 69.84   | 98.86    | 154.72   | 0.46
a = 0, b = 0, c = 1       | 8.97    | 87.79    | 10.49    | 17.93

Publication: BioMed Research International, 2017.