
Hacia el modelado 3D de tumores cerebrales mediante endoneurosonografia y redes neuronales.

Towards a 3D modeling of brain tumors by using endoneurosonography and neural networks


The need to improve accuracy and reduce morbidity of intra-cranial procedures has motivated the revolution in diagnosis, imaging, and medical equipment applied to neurosurgeries. Undoubtedly, current surgical navigation technology will lay the foundations for future brain tumor surgeries [1-2].

Because of the inconvenience of frame-based systems, most neurosurgeons prefer the use of frameless computer-assisted navigation systems. These systems allow both the planning and registration of the surgery in an interactive display with multiple views. Most image-guidance systems have fusion capabilities for multiple imaging data sets that may include magnetic resonance imaging (MRI), computed tomography (CT), and magnetic resonance angiography (MRA) and venography (MRV). The neurosurgical procedure is usually interrupted during scanning time, which may take from a few seconds to several minutes depending on the acquisition technique. Generally, the MR techniques are expensive and have high operational costs; however, they provide superior image quality and an increased field of view. On the other hand, there are several low-field systems that provide inferior image quality and smaller fields of view, although they require cheaper equipment and shorter acquisition times [3-4].

In neurosurgery, recent trends suggest the incorporation of endoscopy and ultrasound into a single device. This technique is called endoneurosonography (ENS) and allows real-time visualization of brain structures such as tumors. The ENS information can be used to generate 3D representation models of brain tumors during surgery [5-7]. On the one hand, endoscopy has recently resurged in the neurosurgical operating room. When coupled with frameless neuronavigation, it has proven to be accurate, reliable, and extremely useful in selected intra-cranial procedures. By combining improved illumination with endoscopic vision and angled lenses, the endoscope has revolutionized the treatment of intra-ventricular lesions [8-9]. On the other hand, the image resolution and signal-to-noise ratio of intra-operative ultrasound (US) have improved, so that these systems may provide useful information when trying to access a small, sub-cortical metastasis or when resecting multiple lesions through more than one craniotomy. Further advances in US allow 3D image acquisition to be coupled with image-guidance systems for real-time acquisition of the tumor location. When the US is used in Doppler mode, information about flow velocity in vessels can also be obtained [10].

The information from both endoscopy and ultrasound can be combined in order to get 3D information of brain tumors, which is used to generate a geometric model of the tumor. In this sense, 3D modeling using neural networks can be useful, since the representation can be developed in two stages: the first one, an off-line training, which generates an initial model of the tumor from preliminary ENS information, and the second one, an on-line training, which adjusts the model according to the new information received from the ENS equipment.

In this work, we present a methodology for 3D modeling of brain tumors using endoneurosonography and artificial neural networks. The use of self-organizing maps (SOM) and neural gas networks (NGN) is specifically studied. Experimental tests of 3D modeling are performed using virtualized phantom brain tumors; both architectures are compared in terms of performance, computational cost, processing time, graphical convenience, and number of training epochs. Finally, we present the results of 3D modeling of brain tumors from an ENS database.

This paper is organized as follows: section 1 describes in a general way the methodology for 3D modeling of brain tumors using ENS and unsupervised neural networks. Section 2 presents the ENS equipment and the computer vision techniques used for processing the endoscopic and ultrasound images. Section 3 explains the 3D modeling based on unsupervised neural networks. Section 4 shows experimental results of 3D modeling of phantom brain tumors from our own ENS database. Finally, section 5 concludes and summarizes the work.


This section presents, in a comprehensive manner, the proposed methodology for 3D modeling of brain tumors using ENS and unsupervised neural networks. The endoneurosonography data acquisition is carried out using an endoscope and an ultrasound probe. As a result, two kinds of data need to be processed: i) an ultrasound image, used to segment brain tumors by means of morphological techniques; and ii) a pair of stereoscopic images, used to estimate the 3D pose of the ultrasound probe. By combining this information, segmentation results can be referenced in the 3D space, producing a 3D point cloud with the tumor information. Finally, the tumor is modeled using unsupervised neural networks and a visualization is carried out. Figure 1 shows the 3D modeling scheme of brain tumors.

1.1 Endoneurosonography

In neurosurgery, image-guided surgery involves the registration of both pre-operative and intra-operative medical images, typically from volumetric modalities such as MRI, CT, PET, and US, with a surgical coordinate system [11-12]. For most operations, the surgical coordinates are defined by a set of x, y, and z reticules on a surgical frame affixed to the patient's skull. In modern computer-aided surgery systems, the frame is replaced by a tracking system (usually optical, but possibly mechanical or magnetic) used to track the surgical tools [10]. Virtual representations of the tools are rendered over a volumetric model to provide guidance to the surgeon. In order to obtain good results, the brain must not be touched during the surgery, which is only possible if minimally invasive surgery is used, that is, performing the operation through a small hole in the skull [5].

The combined use of endoscopy and ultrasound can generate a 3D representation of brain tumors during neurosurgery [6, 13]. Endoscopic images are very useful because they allow the surgeon to watch what happens inside the skull; however, these images do not allow seeing through opaque structures. Thus, it is necessary to complement such images with additional medical imaging techniques, such as MRI, CT, PET, or US. Compared to other techniques, ultrasound images are cheaper, faster, and easier to obtain in intra-operative scenarios [17, 10, 14]. The best-known application of US is prenatal ultrasound, but it is widely used in applications such as the detection of tumors and vessel diseases in different parts of the body, including the brain. These diseases are identified as irregular regions in the ultrasound images [15]. When these images are under analysis, the diagnosis is based on the geometry, orientation, movement, and growing area of the region, which can be effectively supported by applying computer vision techniques [16] to endoneurosonography images, as presented in detail in section 2.

1.2 3D modeling of brain tumors

One of the most interesting applications of 3D modeling using neural computation is the ability to model time-varying objects, where information is not fully known at the beginning of the training but is constantly updated [17].

For our purposes, we use endoscopic and ultrasound videos, that is, ENS, for the real-time acquisition of 3D information of brain structures, such as tumors, in intra-operative scenarios. While the surgeon handles the ENS equipment, the 3D data that define the tumor morphology are constantly updated. This information should be used to adapt the representation model.

Compared to other methods, neural networks offer interesting possibilities: they produce continuous and compact representations, do not require large amounts of memory or long rendering times, provide volumetric representations, and allow 3D modeling in two stages. The first stage is an off-line training that generates an initial model of the tumor from preliminary ENS information; the second is an on-line training that adjusts the model according to new information received from the ENS [18].

From previous studies on 3D modeling [17-18], the superiority of NGN and SOM over other neural architectures has been established. Consequently, section 3 presents the use of these neural architectures for 3D modeling tasks.


As was stated before, intra-operative ENS can be used to render internal structures such as tumors in the brain. The ENS equipment consists of a telescope, an endoscope, a US probe, and a working channel that is introduced into the brain during neurosurgery. The endoscopic system simultaneously captures color images from two miniaturized CCD sensors. The telescope, plugged into the endoscope, is introduced into the working channel to transmit images during surgery. Finally, the ultrasound probe is also introduced into the working channel to get US images of brain structures. Figure 2 shows the ENS acquisition in an intra-operative scenario. Note that the ultrasound probe must be seen by the endoscopic cameras in order to perform the stereo reconstruction. Additionally, the plane of the ultrasound image is perpendicular to the US probe.

2.1 Endoscopic image processing

In every step, two color images are obtained from the endoscopic cameras and an ultrasound image is obtained from the ultrasound sensor (figure 3). The endoscopic cameras are fully characterized (intrinsically and extrinsically), making it possible to perform a stereo reconstruction of the surgical scene. Endoscopic images have a dimension of 352 × 240 pixels [19]. The object of interest is the ultrasound probe, which is visible in the endoscopic images; it is made of metal, rotates on its own axis, and moves randomly forward and backward through the endoscopic channel. To take advantage of this behavior, we performed segmentation and tracking of the US probe using luminance, optical flow, and clustering.

The goal is to determine the location of the ultrasound probe throughout the endoscopic frames by applying a spatio-temporal tracking method [13]. In this approach, the segmentation is not done on a simple frame-by-frame basis; instead, multiple image frames are used to segment the ultrasound probe.

For this purpose, features from both current and previous images are extracted. The features are then clustered to establish the US probe region in the image. The feature vector has two components: the first one is the image luminance because the ultrasound probe is made of metal and, therefore, is brighter than the other objects in the background; the second one is the Euclidean norm of the optical flow. Figure 4b shows the computation of the optical flow for an endoscopic image. Note that the highest values correspond to the image regions in which the ultrasound probe is moving.

By using the above features, we can obtain information about both the spatial and temporal structure of the scene. The optical flow and the image luminance are used to define the feature space. The segmentation is performed using the k-means clustering algorithm, which is a simple, fast, numerical, unsupervised, non-deterministic, and iterative method that produces good clustering results [20].
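As an illustration, the two-feature k-means segmentation described above can be sketched as follows. This is a minimal sketch, not the paper's implementation: the function name, the deterministic seeding of the two centers at the feature-space extremes, and the iteration count are illustrative assumptions.

```python
import numpy as np

def segment_probe(luminance, flow_mag, n_iter=20):
    """Cluster pixels into probe/background by 2-means over a
    (luminance, optical-flow magnitude) feature space."""
    # Stack the two per-pixel features into an (N, 2) matrix.
    feats = np.stack([luminance.ravel(), flow_mag.ravel()], axis=1).astype(float)
    # Normalize each feature to zero mean and unit variance.
    feats = (feats - feats.mean(0)) / (feats.std(0) + 1e-9)
    # Seed the two centers at the darkest/stillest and the
    # brightest/fastest pixels (illustrative choice).
    s = feats.sum(1)
    centers = feats[[s.argmin(), s.argmax()]].copy()
    for _ in range(n_iter):
        # Assign each pixel to its nearest center, then recompute centers.
        d = np.linalg.norm(feats[:, None] - centers[None], axis=2)
        labels = d.argmin(1)
        for k in range(2):
            if np.any(labels == k):
                centers[k] = feats[labels == k].mean(0)
    # The probe cluster is the brighter, faster-moving one.
    probe = int(centers.sum(1).argmax())
    return (labels == probe).reshape(luminance.shape)
```

In practice, the optical-flow magnitude would come from an estimator such as Lucas-Kanade or Horn-Schunck applied to consecutive endoscopic frames.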

In this application, the feature space is divided into two areas: the ultrasound probe and the background. After the clustering algorithm is applied, the image is morphologically opened to reduce speckle noise [21]; in particular, small holes resulting from sub-sampling in the Aloka ultrasound system [22] are filled. Figure 4c shows the segmentation result using the procedure described above and figure 4d shows the detection of the probe axis line using Principal Component Analysis (PCA) [13, 7].

2.2 Ultrasound image processing

We use the Aloka SSD-5000 ultrasound device in this work, since ultrasonic sensors for 360° scanning are suitable for ultrasound data acquisition in minimally invasive surgeries [16, 5]. The goal in this stage is to remove the noise in ultrasound images and to segment brain tumors. Several works share this purpose: [23] applied image processing techniques to characterize the shape, contour, position, and orientation of thermal lesions in US images; the region of interest (ROI) is manually selected and an image segmentation algorithm based on morphological operations is implemented. [24] proposed a segmentation technique based on intra-vascular ultrasound images for studies of coronary artery diseases, using snakes and the Contourlet transform in order to detect calcifications automatically and accurately.

In this work, morphological operations are applied to reduce speckle noise [25] and to fill in small holes. Equation 1 shows the morphological closing between the image $I_{US}$ and the structuring element $M$, where $\oplus$ and $\ominus$ represent the morphological dilation and erosion, respectively. After that, a global threshold using Otsu's method is applied to segment the ROIs.

$I_{US} \bullet M = (I_{US} \oplus M) \ominus M$ (1)

The methodology for ultrasound image processing is taken from [6] and consists of: i) exclusion of the central part of the image, because it only contains noise; ii) application of an automatic threshold using Otsu's method to select only the brighter regions; iii) application of a morphological closing to fill in holes and reduce speckle noise; iv) use of a chain code to determine and eliminate the smallest areas; v) selection of the ROI; and vi) application of the logical operator "AND" between the ROI and the original image. The resulting image is the segmented tumor. Figure 5a shows a typical image acquired by the ultrasound equipment using a radial micro-tip transducer at 10 MHz and figure 5b shows the segmentation result using the procedure explained above.
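A minimal sketch of this six-step pipeline is shown below, assuming NumPy/SciPy. The helper names and the `center_radius`/`min_area` parameters are illustrative assumptions, and connected-component labeling stands in here for the chain-code step used in the paper.

```python
import numpy as np
from scipy import ndimage

def otsu_threshold(img, nbins=256):
    """Otsu's method: pick the threshold maximizing between-class variance."""
    hist, edges = np.histogram(img, bins=nbins)
    p = hist / hist.sum()
    w = np.cumsum(p)                      # class-0 probability
    mu = np.cumsum(p * np.arange(nbins))  # class-0 cumulative mean (bin index)
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        var_b = (mu_t * w - mu) ** 2 / (w * (1 - w))
    return edges[np.nanargmax(var_b) + 1]

def segment_tumor(us, center_radius=10, min_area=20):
    """Sketch of the six-step pipeline on a grayscale US image."""
    h, wd = us.shape
    yy, xx = np.ogrid[:h, :wd]
    # i) exclude the central part (transducer-rotation noise)
    valid = (yy - h / 2) ** 2 + (xx - wd / 2) ** 2 > center_radius ** 2
    # ii) automatic threshold keeps only the brighter regions
    bright = (us > otsu_threshold(us)) & valid
    # iii) morphological closing fills holes and reduces speckle
    closed = ndimage.binary_closing(bright, structure=np.ones((3, 3)))
    # iv) eliminate the smallest connected areas
    lab, n = ndimage.label(closed)
    sizes = ndimage.sum(closed, lab, range(1, n + 1))
    # v) the ROI is the union of the remaining regions
    roi = np.isin(lab, 1 + np.flatnonzero(sizes >= min_area))
    # vi) AND the ROI with the original image
    return np.where(roi, us, 0)
```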

The center of the ultrasound image is the origin of the coordinate system, since that is where the ultrasound sensor is located. The central part of the image must be excluded due to the noise caused by transducer rotation. By using technical information from the Aloka system [22], it is possible to compute a correspondence between lengths in the ultrasound images (measured in pixels) and real lengths in the 3D real-world space (measured in millimeters).

2.3 3D reconstruction results

It is possible to compute the 3D location of a brain tumor with respect to a reference coordinate system by using the ENS information. The aim is to extract information from the ultrasound images and to find the transformations that place this information in the 3D space with respect to a reference coordinate system [7]. Figure 6 shows the 3D information (point cloud and its convex hull) of a phantom brain tumor acquired from an ENS sequence of 790 frames. Each ENS frame consists of two endoscopic images and an ultrasound image, so this representation involves the processing of 790 × 3 = 2370 images. The sampling frequency is 4 fps.
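A simplified sketch of this placement step is given below, assuming the probe tip position and axis direction are already known from the stereo reconstruction. The function name, the way the in-plane basis is chosen, and the default pixel-to-millimeter scale are illustrative assumptions; the full transform chain is detailed in [7].

```python
import numpy as np

def us_pixels_to_3d(pixels_uv, probe_tip, probe_axis, mm_per_px=0.1875):
    """Place 2D ultrasound pixel coordinates (relative to the image
    center) into 3D world space. The US image plane is perpendicular
    to the probe axis, so an orthonormal basis (e1, e2) of that plane
    is built and the points are offset from the probe tip."""
    a = np.asarray(probe_axis, float)
    a /= np.linalg.norm(a)
    # Any vector not parallel to the axis seeds the in-plane basis.
    helper = np.array([1.0, 0.0, 0.0])
    if abs(a @ helper) > 0.9:
        helper = np.array([0.0, 1.0, 0.0])
    e1 = np.cross(a, helper); e1 /= np.linalg.norm(e1)
    e2 = np.cross(a, e1)
    # Scale pixel offsets to millimeters, then map into the plane.
    uv = np.asarray(pixels_uv, float) * mm_per_px
    return probe_tip + uv[:, :1] * e1 + uv[:, 1:2] * e2
```

Note that the in-plane orientation is ambiguous here because the probe rotates on its own axis; the actual system resolves it from the tracked probe pose.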


An unsupervised architecture is a type of neural network based on competitive learning that can be used for 3D modeling of virtualized reality objects. Two of the most popular unsupervised neural architectures are SOM and NGN [26].

These architectures can be used for estimating a manifold topology from point samples [27]. The input space is clustered by assigning neurons to specific regions. The number of inputs of each neuron is equal to the input space dimension. The synaptic weight vectors are interpreted as locations in this space. Each region is a group of nearby locations, according to a distance, with respect to the assigned neuron. After the training, two input vectors belonging to the same region of the space should be represented by two very close neurons. In both SOM and NGN, each synaptic weight encodes the 3D coordinates of a point lying on the object surface. During the training, the model falls asymptotically towards the points in the input space according to a density function and, therefore, takes the shape of the object to be modeled.

3.1 Self-organizing maps (SOM)

A SOM is a set of neurons organized in a regular grid with rectangular or hexagonal connections. The mesh can have different configurations depending on the object being modeled. Input data are the 3D coordinates (x, y, z) of points on the object surface. Therefore, each neuron corresponds to a 3D point and is represented by a weight vector $m_i$. The aim of the SOM algorithm is to approximate the input space $\xi$ through prototypes or pointers in the form of synaptic weights $m_i$, whereby the characteristic map $\Phi$ gives a faithful representation of the input vectors [28].

The SOM is trained iteratively and, at each training stage, a random vector x is selected from the input dataset. The distance from x to all SOM weight vectors is calculated. The neuron closest to the input vector x is called the Best Matching Unit (BMU), denoted as c. The BMU and its topological neighbors are updated using the adaptation rule presented in equation 2, where $\alpha(t)$ is the learning rate and $h_{ci}(t)$ is the neighborhood kernel around the BMU, which defines the region of influence of the input vectors on the map. We use a Gaussian neighborhood function $h_{ci}(t) = \exp\!\left(-d_{ci}^2 / 2\sigma^2(t)\right)$, where $\sigma(t)$ is a linearly decreasing function and $d_{ci}$ is the distance from neuron c to neuron i through the grid connections.

$m_i(t+1) = m_i(t) + \alpha(t)\,h_{ci}(t)\,[x(t) - m_i(t)]$ (2)
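The SOM adaptation rule (equation 2) can be sketched as follows. This is an illustrative implementation: the grid size, the linear decay schedules for the learning rate and radius, and the random initialization are assumptions (the paper initializes along the principal components and uses a much larger map).

```python
import numpy as np

def train_som(points, grid_shape=(10, 10), epochs=15,
              alpha0=0.5, sigma0=4.0, seed=0):
    """Minimal SOM: move the BMU and its grid neighbors towards each
    sample, weighted by a Gaussian kernel over the grid distance."""
    rng = np.random.default_rng(seed)
    rows, cols = grid_shape
    # Grid coordinates of each neuron, used for the distance d_ci.
    grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                indexing="ij"), -1).reshape(-1, 2).astype(float)
    # Random initialization near the data centroid (illustrative).
    m = rng.standard_normal((rows * cols, points.shape[1])) * 0.01 + points.mean(0)
    n_steps = epochs * len(points)
    for t in range(n_steps):
        frac = t / n_steps
        alpha = alpha0 * (1 - frac)          # linearly decaying learning rate
        sigma = sigma0 * (1 - frac) + 1e-3   # linearly decaying radius sigma(t)
        x = points[rng.integers(len(points))]
        c = np.argmin(((m - x) ** 2).sum(1))      # BMU
        d2 = ((grid - grid[c]) ** 2).sum(1)       # squared grid distance d_ci^2
        h = np.exp(-d2 / (2 * sigma ** 2))        # Gaussian neighborhood h_ci
        m += alpha * h[:, None] * (x - m)         # equation 2
    return m
```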

Before training, the input dataset is normalized and synaptic weights are initialized randomly along the two principal components (PC) of data distribution, extracted by PCA. Neighborhoods are defined by a rectangular grid.

3.2 Neural Gas Networks (NGN)

A NGN is composed of neurons, which move in the input space during training, and the position of the BMU and its neighbors is updated at each training epoch. Unlike SOM, NGN does not use a neuron ranking based on grid connections; instead, it uses a neighborhood ranking based on the closest neurons in the entire neural set. In this sense, NGN neurons can move freely through the input space and their topographic error is zero [29]. Since NGN uses a neighborhood ranking based on the entire input space, it has a better adaptation than SOM. Compared to SOM, NGN converges faster, achieves lower distortion errors, has higher computational costs (increases with the computational complexity of the sorting algorithm), and achieves a better performance using smaller training sets [18].

The NGN is trained iteratively. For each presented vector x, we compute the neighborhood ranking of the synaptic weight vectors $(w_{i_0}, w_{i_1}, \ldots, w_{i_{N-1}})$, where $w_{i_0}$ and $w_{i_{N-1}}$ are the closest and farthest vectors to x, respectively. The adaptation rule is given by equation 3, where $\varepsilon$ is the learning rate, $h_\lambda(k) = \exp(-k/\lambda)$ is the neighborhood function around the BMU, and $k_i(x, w)$ denotes the position of neuron i in the neighborhood ranking. For $\lambda = 0$, this algorithm is identical to k-means clustering, while for $\lambda \neq 0$ not only the winning neuron but also its neighbors are updated according to the neighborhood ranking [20].

$\Delta w_i = \varepsilon \cdot h_\lambda(k_i(x, w)) \cdot (x - w_i)$ (3)
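The neural gas adaptation rule (equation 3), with the standard exponential kernel $h_\lambda(k) = \exp(-k/\lambda)$ from [26], can be sketched as follows. The exponential annealing schedules and their endpoint values are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def train_ngn(points, n_neurons=50, epochs=10,
              eps=(0.5, 0.01), lam=(10.0, 0.5), seed=0):
    """Minimal neural gas: neurons are ranked by distance to the
    sample and updated with h_lambda(k) = exp(-k / lambda)."""
    rng = np.random.default_rng(seed)
    # Initialize neurons at randomly chosen data points (illustrative).
    w = points[rng.choice(len(points), n_neurons, replace=False)].copy()
    n_steps = epochs * len(points)
    for t in range(n_steps):
        frac = t / n_steps
        # Exponentially annealed learning rate and neighborhood range.
        e = eps[0] * (eps[1] / eps[0]) ** frac
        l = lam[0] * (lam[1] / lam[0]) ** frac
        x = points[rng.integers(len(points))]
        # k_i: rank of each neuron in the distance ordering (0 = BMU).
        dist = np.linalg.norm(w - x, axis=1)
        k = np.empty(n_neurons)
        k[np.argsort(dist)] = np.arange(n_neurons)
        w += e * np.exp(-k / l)[:, None] * (x - w)   # equation 3
    return w
```

Note that, unlike the SOM sketch, no grid is ever consulted: the neighborhood is defined entirely by the per-sample distance ranking, which is what lets NGN neurons move freely through the input space.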

3D modeling results of phantom brain tumors using these unsupervised neural architectures are presented in the next section.


We used a database composed of 7 × 10⁴ ENS frames of phantom brain tumors for the experiments. This database corresponds to 4.86 hours of acquisition using the ENS equipment. The information of each ENS frame depends on the segmented regions in each ultrasound image. Table 1 shows the information of six selected ENS sequences used in our experiments. For each sequence, 10% of the data was used for the initial tumor modeling using off-line training, and the other 90% was used for the on-line adaptation of the representation model.

Figures 7 and 8 show the 3D modeling of two phantom brain tumors using ENS and unsupervised neural networks. These networks were trained off-line with 23421 and 9116 points, respectively. After that process, the off-line training is switched to an on-line training in order to adapt the shape of the model according to the new ENS information; in both cases, 2000 neurons were used.

Figure 9 shows the 3D modeling of a brain tumor using 254 ENS frames (stream 3) taken at 4 frames per second (fps). This video corresponds to a stream of 63.5 seconds and was modeled using a SOM with 2000 neurons arranged in a cylindrical mesh with hexagonal neighborhood. During the off-line training, we used five epochs for rough tuning (8 ≥ σ(t) > 3) and ten epochs for fine tuning (3 ≥ σ(t) ≥ 1). The dataset for off-line training was composed of 1.5 × 10⁵ points taken from the first 6.35 seconds of the ENS stream (figure 9a); the dataset for on-line training was composed of 1.3 × 10⁶ points taken from the rest of the ENS stream (figure 9b). During network adaptation, the off-line information was presented in batch mode, while the on-line information was presented in sequential mode. Figures 9c and 9d show the 3D modeling using SOM. Topographic and quantization errors were 12.58% and 33.75, respectively.

Figure 10 shows the 3D modeling of a brain tumor using 790 ENS frames (stream 4) taken at 4 fps. This video corresponds to a stream of 197.5 seconds and was modeled using a NGN with 2000 neurons. During off-line training, five epochs for rough tuning and ten epochs for fine tuning were used. The dataset for off-line training was composed of 2.3 × 10⁵ points taken from the first 19.75 seconds of the ENS stream (figure 10a); the dataset for on-line training was composed of 2.1 × 10⁶ points taken from the rest of the ENS stream. This on-line information was input sequentially in order to adapt the network (figure 10b). Figure 10c shows the 3D modeling using NGN. The quantization error was 4.16.

Table 2 shows the errors of the 3D modeling using SOM and NGN. The quantization error was calculated as the average Euclidean distance from the points of the tumor surface to the nearest neuron in the network (BMU). The topographic error is a quantitative measure of the SOM topology preservation after training, calculated as the proportion of all inputs for which the nearest and second-nearest neurons are not adjacent in the network mesh. The topographic error in a NGN is zero [29]. Note that the topographic error of the SOM is more than 50% for stream 5 due to the improper initialization of the network, which is done randomly over the principal axes of the off-line dataset. Since the tumor morphology is not known at the beginning of the modeling, it is not possible to guarantee the representativeness of the training dataset with respect to future data. Additionally, this poor initialization is reflected in high quantization errors.
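The two error measures described above can be computed as follows. In this sketch, mesh adjacency is simplified to a Chebyshev distance of at most 1 between neuron coordinates on a rectangular grid; the paper's cylindrical hexagonal mesh would need its own adjacency test.

```python
import numpy as np

def quantization_error(points, weights):
    """Average Euclidean distance from each surface point to its BMU."""
    d = np.linalg.norm(points[:, None] - weights[None], axis=2)
    return d.min(1).mean()

def topographic_error(points, weights, grid):
    """Fraction of inputs whose nearest and second-nearest neurons are
    not adjacent on the mesh (grid holds each neuron's mesh coords)."""
    d = np.linalg.norm(points[:, None] - weights[None], axis=2)
    best2 = np.argsort(d, axis=1)[:, :2]
    # Chebyshev grid distance between the two winning neurons.
    gd = np.abs(grid[best2[:, 0]] - grid[best2[:, 1]]).max(1)
    return np.mean(gd > 1)
```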

Using the correspondence between measures in the virtualized model and measures in the real world, each unit in the representation space corresponds to 0.1875 mm in the real world. Table 3 shows the quantization errors of the 3D brain tumor models in millimeters. For all of the experimental streams, NGN presented the best performance with an average quantization error of 0.73214 ± 0.4505 mm, while SOM presented an average quantization error of 6.07372 ± 4.3618 mm and a topographic error of 18.22 ± 21.28%.

The authors of [30] presented a clinical study about motion and deformation of brain structures during neurosurgery. The 3D ultrasound, applied to 12 patients, reported average vertical deformations of 3.0 mm with a maximum value of 7.5 mm, and average horizontal deformations of 3.9 mm with a maximum value of 8.2 mm. According to these results, the NGN model is accurate enough to detect brain deformations during a neurosurgery.

Other studies have shown that the surface of the brain is deformed by up to 20 mm after the skull is opened during neurosurgery, which could lead to substantial error in commercial image-guided surgery systems [31]. In [32], the intra-operative brain deformation of 24 subjects was quantitatively analyzed using MRI to investigate whether rules can describe or predict the deformation. That study indicates that brain shift occurs predominantly in the hemisphere ipsilateral to the craniotomy. For these clinical cases, SOM modeling could not guarantee, at least with this number of neurons, the required quality.

In [33], average deformations between 5 and 10 mm, with maximum values up to 20 mm, were reported using MRI as the acquisition technique. In these cases, the NGN can be useful for 3D modeling since its average quantization error is below this range.


This work shows a straightforward and efficient solution to the problem of the acquisition and modeling of 3D information of brain structures using ENS and unsupervised neural networks. The method is simple, efficient, and robust at dealing with reasonable occlusions and random probe displacements.

The proposed methodology is applied for brain structure modeling in order to assist the tumor extraction during neurosurgery. With ENS information, tumor regions can be located in the 3D space and the tumor morphology can be modeled to get an efficient and accurate representation. According to clinical studies reported in the literature and the results obtained in this work, NGN networks have greater potential than SOM for 3D modeling of brain tumors from endoneurosonography.

In 3D modeling with unsupervised neural networks, the tumor morphology is encoded directly into the synaptic weights of the network, which can be understood as positions in the input space. For this reason, displaying is fast and consists of plotting the neural network over the 3D input space. This methodology offers an advantage in display time and disk storage space because it only requires the manipulation of the neurons instead of the entire point cloud. As a general rule, the number of neurons must be less than or equal to the number of points in the input dataset.

Another advantage is that neural modeling does not require any a priori knowledge of the structure to be modeled. That is quite appropriate for this application, since the tumor morphology is not known at the beginning of the acquisition. Additionally, due to the adaptation capacity of the neural networks, the model can be adjusted dynamically according to information coming from the ENS equipment.

Currently, 3D modeling based on unsupervised neural networks is not yet feasible for real-time application due to high training times. One solution under investigation is the acceleration of the adaptation and visualization steps using Graphics Processing Units (GPUs). Additionally, given the relatively low cost of SOM training compared to NGN training, a hybrid neural model able to combine the speed of the SOM with the adaptability of the NGN is being developed.


[1] A. Di-leva, "Microtechnologies in neurosurgery," Proceedings of the Institution of Mechanical Engineers, Part H: Journal of Engineering in Medicine, vol. 224, no. 6, pp. 797-800, 2010.

[2] P. Ferroli et al., "Advanced 3-dimensional planning in neurosurgery," Neurosurgery, vol. 72, no. 1, pp. A54-A62, 2013.

[3] G.H. Barnett and N. Nathoo, "The modern brain tumor operating room: From standard essentials to current state-of-the-art," Journal of Neuro-Oncology, vol. 69, no. 1, pp. 25-33, 2004.

[4] A.T. Stadie and R.A. Kockro, "Mono-stereo-autostereo: The evolution of 3-dimensional neurosurgical planning," Neurosurgery, vol. 72, no. 1, pp. A63-A77, 2013.

[5] K. Resch and J. Schroeder, "Endoneurosonography: Technique and equipment, anatomy and imaging, and clinical application," Neurosurgery, vol. 61, no. 3, pp. 146-160, 2007.

[6] R. Machucho-Cadena et al., "Rendering of brain tumors using endoneurosonography," In Proceedings of the 19th International Conference on Pattern Recognition (ICPR2008), Tampa, Florida, 2008.

[7] A.F. Serna-Morales et al., "Acquisition of three-dimensional information of brain structures using endoneurosonography," Expert Systems with Applications, vol. 39, no. 2, pp. 1656-1670, 2012.

[8] R.P. Naftel et al., "Small-ventricle neuroendoscopy for pediatric brain tumor management: Clinical article," Journal of Neurosurgery: Pediatrics, vol. 7, no. 1, pp. 104-110, 2011.

[9] S. Constantini et al., "Safety and diagnostic accuracy of neuroendoscopic biopsies: An international multicenter study," Journal of Neurosurgery: Pediatrics, vol. 1, no. 6, pp. 704-709, 2013.

[10] M. Ivanov et al., "Intraoperative ultrasound in neurosurgery: A practical guide," British Journal of Neurosurgery, vol. 24, no. 5, pp. 510-517, 2010.

[11] G. Unsgaard et al., "Intra-operative 3D ultrasound in neurosurgery," Acta Neurochirurgica, vol. 148, no. 3, pp. 235-253, 2006.

[12] J. Roth et al., "Real-time neuronavigation with high-quality 3D ultrasound SonoWand," Pediatric Neurosurgery, vol. 43, no. 3, pp. 185-191, 2007.

[13] A.F. Serna-Morales et al., "Spatio-temporal image tracking based on optical flow and clustering: An endoneurosonographic application," In Proceedings of the 9th Mexican International Conference on Artificial Intelligence, Pachuca, Mexico, 2010.

[14] N.H. Ulrich et al., "Resection of pediatric intracerebral tumors with the aid of intraoperative real-time 3-D ultrasound," Child's Nervous System, vol. 28, no. 1, pp. 101-109, 2012.

[15] A. Burgess et al., "Focused ultrasound: Crossing barriers to treat Alzheimer's disease," Therapeutic Delivery, vol. 2, no. 3, pp. 281-286, 2011.

[16] K. Resch, Transendoscopic Ultrasound for Neurosurgery. Berlin-Heidelberg: Springer-Verlag, 2006, 148 p.

[17] A.F. Serna-Morales et al., "3D modeling of virtualized reality objects using neural computing," In Proceedings of the IEEE International Joint Conference on Neural Networks (IJCNN2011), San Jose, 2011.

[18] F. Montoya-Franco et al., "3D object modeling with graphics hardware acceleration and unsupervised neural networks," In Advances in Visual Computing: 7th International Symposium (ISVC 2011), Las Vegas, 2011.

[19] Richard Wolf, "Endoscopy," Richard Wolf Medical Instruments Corporation, [online], 2013.

[20] N. Shi et al., "Research on k-means clustering algorithm: An improved k-means clustering algorithm," In Proceedings of the 3rd International Symposium on Intelligent Information Technology and Security Informatics (IITSI2010), Jinggangshan, 2010.

[21] T. Hou et al., "Despeckling medical ultrasound images based on an expectation maximization framework," Acta Acustica, vol. 36, no. 1, pp. 73-80, 2011.

[22] Aloka Co., Ultrasound Diagnostic Equipment ALOKA SSD-5000, Mure 6-chome, Mitaka-shi, Tokyo 181-8622, Japan, MN1-1102 Rev. 9, 2002.

[23] S. Marquez et al., "Characterization of ultrasound images of HIFU-induced lesions by extraction of its morphological properties," In Proceedings of the 7th International Conference on Electrical Engineering, Computing Science and Automatic Control (CCE2010), Mexico, 2010.

[24] Q. Zhang et al., "Automatic segmentation of calcifications in intra-vascular ultrasound images using snakes and the contourlet transform," Ultrasound in Medicine and Biology, vol. 36, no. 1, pp. 111-129, 2010.

[25] O. Michailovich and A. Tannenbaum, "Despeckling of medical ultrasound images," IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, vol. 53, no. 1, pp. 64-78, 2006.

[26] T.M. Martinetz et al., "'Neural-gas' network for vector quantization and its application to time-series prediction," IEEE Transactions on Neural Networks, vol. 4, no. 4, pp. 558-569, 1993.

[27] M. Piastra, "Self-organizing adaptive map: Autonomous learning of curves and surfaces from point samples," Neural Networks, vol. 41, pp. 96-112, 2013.

[28] T. Kohonen et al., "On the quantization error in SOM vs. VQ: A critical and systematic study," In Advances in Self-Organizing Maps, Santiago, 2009.

[29] M.H. Ghaseminezhad and A. Karami, "A novel self-organizing map (SOM) neural network for discrete groups of data clustering," Applied Soft Computing, vol. 11, no. 4, pp. 3771-3778, 2011.

[30] M. Letteboer et al., "Brain shift estimation in image-guided neurosurgery using 3D ultrasound," IEEE Transactions on Biomedical Engineering, vol. 52, no. 2, pp. 268-276, 2005.

[31] I. Chen et al., "Intraoperative brain shift compensation: Accounting for dural septa," IEEE Transactions on Biomedical Engineering, vol. 58, no. 3, pp. 499-508, 2011.

[32] T. Hartkens et al., "Measurement and analysis of brain deformation during neurosurgery," IEEE Transactions on Medical Imaging, vol. 22, no. 1, pp. 82-92, 2003.

[33] C. Maurer et al., "Measurement of intra-operative brain surface deformation under a craniotomy," In Medical Image Computing and Computer-Assisted Intervention (MICCAI'98), Cambridge, USA, 1998.

Andres Serna**, Flavio Prieto***

* This work was partially supported by the Universidad Nacional de Colombia at Manizales, under project code DIMA 20201006025, during 2011.

** Ing., MSc. Universidad Nacional de Colombia Sede Manizales. Currently PhD Candidate at Center for Mathematical Morphology (CMM), MINES ParisTech. 35, rue Saint Honore, 77305 Fontainebleau CEDEX, France. Phone: +33 (1) 64 69 47 06. E-mail:

*** Ing., MSc., PhD, Full Professor. Department of Mechanical and Mechatronics Engineering. Universidad Nacional de Colombia Sede Bogota. Carrera 30 No 45-03, Bogota, Colombia. Phone: +57 (1) 316 5000 Ext. 14103. E-mail:

Received: 29/11/2014 * Accepted: 21/09/2016

DOI: 10.22395/rium.v16n30a7

Caption: Figure 1. Scheme of 3D modeling of brain tumors using endoneurosonography and unsupervised neural networks.

Caption: Figure 2. ENS acquisition in an intra-operative scenario

Caption: Figure 3. Example of typical ENS images.

Caption: Figure 4. Spatio-temporal segmentation and axis line determination of the US probe.

Caption: Figure 5. Morphological segmentation of brain structures.

Caption: Figure 6. 3D point cloud and 3D convex hull of a phantom brain tumor acquired from an ENS sequence of 790 frames.

Caption: Figure 7. Off-line trained model of a brain tumor using 23421 points. The quantization errors for SOM (b) and NGN (c) were 6.04 and 3.81, respectively.

Caption: Figure 8. Off-line trained model of a brain tumor using 9116 points. The quantization errors for SOM (b) and NGN (c) were 6.48 and 4.14, respectively.

Caption: Figure 9. SOM 3D modeling from ENS stream 3. The topographic and quantization errors were 12.58% and 33.75, respectively.

Caption: Figure 10. NGN 3D modeling from stream 4. The topographic and quantization errors were 0% and 4.16, respectively.
Table 1. ENS streams used in experimental tests.

           Number of   Duration    Number of 3D
           frames      (seconds)   points

Stream 1   20          5.0         1.9 x 10^4
Stream 2   164         41.0        1.1 x 10^6
Stream 3   254         63.5        1.5 x 10^6
Stream 4   790         197.5       2.3 x 10^6
Stream 5   580         145.0       6.1 x 10^5
Stream 6   4582        1145.5      1.9 x 10^7

Source: authors

Table 2. Topographic and quantization errors

Stream   SOM topographic    SOM quantization    NGN quantization
         error              error               error

1         5.57%             13.51               1.41
2        11.49%             36.86               3.39
3        12.58%             33.75               2.76
4         5.62%              9.76               4.16
5        55.85%             68.09               1.46

Source: authors
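For reference, the two metrics reported in Table 2 have standard definitions: the quantization error is the mean distance from each input point to its best matching unit (BMU), and the topographic error is the fraction of inputs whose two closest units are not neighbors on the map grid. The following sketch (an illustrative implementation, not the authors' code; `som_errors` and its arguments are hypothetical names) computes both for a 2D SOM codebook stored row-major:

```python
# Sketch: quantization and topographic errors for a trained 2D SOM.
import numpy as np

def som_errors(data, weights, grid_shape):
    """data: (n, d) input points; weights: (rows*cols, d) codebook,
    laid out row-major on a grid of shape grid_shape = (rows, cols)."""
    rows, cols = grid_shape
    # Distance from every input point to every map unit.
    d = np.linalg.norm(data[:, None, :] - weights[None, :, :], axis=2)
    order = np.argsort(d, axis=1)
    bmu, second = order[:, 0], order[:, 1]
    # Quantization error: mean distance to the best matching unit.
    qe = d[np.arange(len(data)), bmu].mean()
    # Topographic error: share of inputs whose two closest units are
    # not 4-neighbors on the grid (Manhattan distance != 1).
    r1, c1 = np.divmod(bmu, cols)
    r2, c2 = np.divmod(second, cols)
    te = np.mean(np.abs(r1 - r2) + np.abs(c1 - c2) != 1)
    return qe, te
```

Note that a topographic error is only defined for the SOM, which has a fixed grid topology; the neural gas network (NGN) adapts its connectivity to the data, which is consistent with Table 2 reporting only a quantization error for NGN.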

Table 3. Quantization errors of the 3D modeling (millimeters).

Stream     SOM      NGN

1        2.5331    0.2646
2        6.9113    0.6354
3        6.3281    0.5174
4        1.8300    0.7793
5        12.7661   1.4640

Source: authors
COPYRIGHT 2017 Universidad de Medellin
Author: Serna, Andres; Prieto, Flavio
Publication: Revista Ingenierias
Date: Jan 1, 2017