3D directional mathematical morphology for analysis of fiber orientations

INTRODUCTION

Fiber-reinforced composites are nowadays frequently used for building the enclosures of aircraft, boats, and cars. Our application concentrates on fiber-reinforced plastics comprising a polymer matrix reinforced with glass or carbon fibers. The aim of our study is to predict the physical behavior of the material from knowledge of its microstructure, reconstructed from micro- or nano-tomography images. One physical property of this very light material is its stiffness, which is strongly influenced by the anisotropy of the embedded fibers. The material can be optimized by changing the parameters of an adapted random geometric model and evaluating the physical properties using numerical simulations.

Local geometric characteristics are essential information for the modeling of random fiber networks. Without this information, the virtual material derived from a random fiber model is not realistic. It is therefore important to start with a model fitted to the real structure, to calibrate the computed physical behavior against the real physical properties of the material, and only then to vary the modeling parameters to improve the physical behavior. The most important characteristic for the modeling is the orientation distribution, computed from local information.

The basis of our method is a sufficiently good binarization of a 3D image of a fiber system with solid and not too thin fibers. More precisely, the fiber radius should exceed 3 pixels; below that thickness, discretization effects influence the results too strongly. On these binarized images (on a square or cubic grid), we compute the directional diameters in a fixed number of orientations (4 in 2D and 13 in 3D) using directional distance transforms. The main inertia axes of the endpoints, given by the local centralized chords, provide an estimate of the local orientation. This estimate is biased towards the sampled directions. We present methods to correct this deviation in 2D and to reduce it in 3D. The inertia moments additionally make it possible to estimate the fiber radius and to smooth the results using the ratio of inertia moments.

Finally, our method is extended to gray-level images using a thresholded quasi distance. A preprocessing step that considerably reduces the estimation bias is suggested.

Several methods already exist for computing local orientations in images, such as the Gaussian orientation space of Robb et al. (2007) and the chord length transform of Sandau and Ohser (2007). The chord length transform has not yet been studied on 3D images, so we compare our method to the Gaussian orientation space. The main idea of both approaches is to sample a certain number of directions with different directional operators and to report the direction with the highest filter response. The results are therefore always limited to the chosen set of sampled directions, and for more exact results their number has to be increased, resulting in a considerable rise in computation time. Moreover, increasing the number of sampled directions requires choosing a finite set of directions as evenly as possible, which is a nontrivial problem in itself. Our approach avoids these problems: it is fixed to a small number of sampled orientations (4 in 2D and 13 in 3D), whereas the results are obtained in the continuous Euclidean space.

ANALYSIS ON BINARY IMAGES

INERTIA MOMENTS OF DIRECTIONAL DISTANCE TRANSFORMS

In this section the calculation of local characteristics such as fiber orientation and radius is treated. The algorithms are based on computing the directional distances to the background for every object point. The sampled directions v_i of the directed distance transform are chosen as the complete neighborhood (8 neighbors in 2D, 26 neighbors in 3D), see Altendorf (2007). In order to achieve a nearly constant result inside a cylindrical fiber, the arithmetic average of the two calculated distances for opposite directions d(v_i) and d(-v_i) is considered, which is equal to the half chord lengths defined in Sandau and Ohser (2007):

$d_c(v_i) = \tfrac{1}{2}\left( d(v_i) + d(-v_i) \right).$

The directed distance transform can be calculated efficiently by an adapted version of the algorithm introduced by Rosenfeld and Pfaltz (1966). The chord lengths are obtained by two passes through the image: in the forward pass, each object pixel is assigned the distance of its already-visited predecessor in the backward direction, increased by one; in the backward pass, each object pixel inherits the (already complete) distance of its successor in the forward direction, if that successor lies inside the object. This algorithm runs in linear time with respect to the number of image pixels and assigns to every foreground pixel the directional thickness of the fiber.
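The two-pass scheme can be sketched as follows. This is a minimal illustrative implementation for one direction on a 2D binary image, not the authors' code; the function name and the direction encoding `(dy, dx)` are our own, and the scan order assumes a direction with non-negative components (other directions are handled by mirroring the scan order).

```python
import numpy as np

def directional_chord_length(mask, v):
    """Two-pass directional run-length (chord length) transform, sketched
    for a direction v = (dy, dx) with non-negative components."""
    h, w = mask.shape
    d = np.zeros((h, w), dtype=int)
    dy, dx = v
    # forward pass: count the run of foreground pixels behind each pixel
    for y in range(h):
        for x in range(w):
            if mask[y, x]:
                py, px = y - dy, x - dx
                d[y, x] = d[py, px] + 1 if 0 <= py < h and 0 <= px < w and mask[py, px] else 1
    # backward pass: propagate the complete run length back along the run
    for y in range(h - 1, -1, -1):
        for x in range(w - 1, -1, -1):
            if mask[y, x]:
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and mask[ny, nx]:
                    d[y, x] = d[ny, nx]
    return d
```

After the two passes, every foreground pixel carries the full chord length of its run in direction v.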

From the endpoints P_i = d_c(v_i) · v_i, derived from the centralized distances in all sampled directions, we calculate the moments defined by Duda and Hart (1973). In our case the moments reduce to:

$M^{(2)}_{pq} = \sum_{i=0}^{7} (P_{i,x})^p (P_{i,y})^q$ for 2D and

$M^{(3)}_{pqr} = \sum_{i=0}^{25} (P_{i,x})^p (P_{i,y})^q (P_{i,z})^r$ for 3D.

The inertia matrices adapted to our case are:

$IM^{(2)}_f = \frac{1}{8} \begin{pmatrix} M_{20} & M_{11} \\ M_{11} & M_{02} \end{pmatrix}$

for 2D and

$IM^{(3)}_f = \frac{1}{26} \begin{pmatrix} M_{200} & M_{110} & M_{101} \\ M_{110} & M_{020} & M_{011} \\ M_{101} & M_{011} & M_{002} \end{pmatrix}$

for 3D. The inertia moments are the eigenvalues of these matrices and the inertia axes are the eigenvectors, as defined by Bakhadyrov and Jafari (1999). Because of the different structure of the inertia matrices in 2D and 3D, the main inertia axis in 2D is the eigenvector corresponding to the largest eigenvalue (which indicates the elongation in the corresponding direction), whereas in 3D the main inertia axis is the eigenvector corresponding to the smallest eigenvalue (which indicates the inertia when rotating the object around this axis).

The defined main inertia axis gives a first estimate of the fiber orientation, which is however biased towards the sampled directions.
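The 2D version of this first estimate can be sketched as follows: build the endpoints from the 8 centralized distances, assemble the inertia matrix, and take the eigenvector of the largest eigenvalue. The names are illustrative, not the authors' implementation.

```python
import numpy as np

# The 8 sampled directions of the 2D neighborhood
ANGLES = np.arange(8) * np.pi / 4
DIRS = np.stack([np.cos(ANGLES), np.sin(ANGLES)], axis=1)

def orientation_2d(d_c):
    """Main inertia axis of the endpoints P_i = d_c(v_i) * v_i
    (eigenvector of the largest eigenvalue in the 2D case)."""
    P = d_c[:, None] * DIRS                # endpoints P_i
    IM = (P.T @ P) / 8.0                   # (1/8) [[M20, M11], [M11, M02]]
    w, V = np.linalg.eigh(IM)              # eigenvalues in ascending order
    return V[:, -1]                        # eigenvector of the largest eigenvalue
```

For a model fiber of radius r at angle θ, the centralized distances are d_c(v_i) = r/|sin(θ_i − θ)|; at the midpoint angle θ = 22.5° the estimate is exact, as the text states, while at other angles the bias towards sampled directions appears.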

CORRECTING THE BIAS

Evaluation of the presented method shows a certain deviation in the orientation estimate, as presented in Fig. 1. Since the endpoints are considered in only a few sampled directions, those directions receive a high weight. This causes an attraction towards the sampled directions, which explains the deviation. The orientation estimate is exact for orientations lying on, or exactly midway between, two sampled directions.

This nature of the bias motivated a theoretical study of the problem. The fiber is assumed to be a spherical cylinder with radius r, infinite length and orientation v, represented by the angle θ (in 3D by θ and φ, derived from the spherical coordinates).

The centralized distances are given by:

$d_c(v_i) = \frac{r}{\sin(\angle(v, v_i))} \;\widehat{=}\; \frac{r}{\sin(\angle(\theta_i, \theta))} \,, \quad (1)$

from which we can calculate the endpoints

$P_i = \frac{r}{\sin(\angle(\theta_i, \theta))} \begin{pmatrix} \cos\theta_i \\ \sin\theta_i \end{pmatrix}$ for 2D and

$P_i = \frac{r}{\sin(\angle(v, v_i))} \begin{pmatrix} \cos\varphi_i \sin\theta_i \\ \sin\varphi_i \sin\theta_i \\ \cos\theta_i \end{pmatrix}$ for 3D.

To calculate the inertia moments and main inertia axes in 2D, we adapt the existing formula to our case:

$\lambda_{1,2} = \frac{M_{20} + M_{02} \pm \sqrt{4 M_{11}^2 + (M_{20} - M_{02})^2}}{16} \,, \quad (2)$

$\theta' = \frac{1}{2} \arctan\left( \frac{2 M_{11}}{M_{20} - M_{02}} \right) . \quad (3)$

Substituting the expressions for M_pq in terms of the fiber parameters r and θ and applying several simplification steps for trigonometric functions, we obtain the following equations, which depend only on the main parameters r and θ:

$\lambda_1 = r^2 \, \frac{2 + \sqrt{3 \cos^2(4\theta) + 1}}{\sin^2(4\theta)} \,, \quad (4)$

$\lambda_2 = r^2 f(\theta) \,, \quad (5)$

with

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (6)

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (7)

The maximal possible deviation is limited to 10° in both the 2D and the 3D case. From Eq. 7 the deviation can be corrected by inverting it; the orientation can be derived from the estimate θ' as follows:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

The deviation and the corrected angle for the 2D case are illustrated in Fig. 1.
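Eqs. 2 and 3 can be cross-checked numerically against a direct eigendecomposition of the inertia matrix (1/8)[[M20, M11], [M11, M02]]. The moment values below are arbitrary illustrative numbers, not taken from the paper; the atan2 form of Eq. 3 is used to avoid the singularity at M20 = M02.

```python
import numpy as np

# Arbitrary example moments (illustrative only)
M20, M11, M02 = 5.0, 1.5, 2.0

# Eq. 2: eigenvalues of the 2D inertia matrix in closed form
lam = (M20 + M02 + np.array([1.0, -1.0]) * np.hypot(2 * M11, M20 - M02)) / 16

# Eq. 3: orientation estimate (atan2 form)
theta_p = 0.5 * np.arctan2(2 * M11, M20 - M02)

# Direct eigendecomposition for comparison
w, V = np.linalg.eigh(np.array([[M20, M11], [M11, M02]]) / 8.0)
```

The closed-form eigenvalues match the numerical ones, and the angle of the eigenvector belonging to the largest eigenvalue equals θ' modulo π.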

There exist also theoretical solutions for the eigenvalue problem in 3D (Jeulin and Moreaud, 2008). With these, it is possible to deduce equations for the inertia moments and the inertia vectors. However, we did not achieve a simplification of these complex equations down to the main parameters, which would make it possible to correct the orientation. Still, there is a way to improve the orientation estimation in the 3D case as well. Based on the idea of the 2D correction, we reduce the deviation of the calculated direction by pushing it away from the closest sampled directions.

[FIGURE 1 OMITTED]

The attraction towards a close sampled direction depends on the angular distance between it and the real fiber orientation. Therefore, pushing the computed orientation away from the closest sampled directions is controlled by forces depending on this distance. First we define the forces, whose formula emerged from several tests based on the two-dimensional correction curve.

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

To complete the approach we need to define the direction in which each force operates. We define the force direction as the projection pn(v_i, v) of the sampled direction v_i onto the 2D subspace orthogonal to the calculated orientation v. The approximate orientation v' is then calculated as follows:

$v' = v + \sum_i t(\angle(v_i, v)) \, pn(v_i, v) \,.$

This procedure reduces the maximal error from 9.97° to 4.78° and the mean error from 6.40° to 1.27°. The reduction of the deviation is visualized in Fig. 2 on the unit sphere, in colors from 0° (blue) to 10° (red).

[FIGURE 2 OMITTED]

Another aspect which needs to be considered only in the 3D case is the non-consistency of chord lengths for different points in the same fiber. In 2D the chord lengths stay constant for every point inside a fiber, apart from edge effects at the fiber ends. This situation is shown in Fig. 3(a). In 3D the fiber structure is more complex and this assumption no longer holds, as shown in Fig. 3(b) for a section of a 3D fiber. We observe that the chord lengths shrink as the point of interest moves closer to the fiber border. It is thus necessary to find another way to obtain stable measures for every point in the fiber. Near the fiber core the directional distances remain stable; therefore we base our calculations not on the measures at the point of interest itself, but at the center of mass of the extremities derived from the directional distances at the point of interest.

[FIGURE 3 OMITTED]

RADIUS MAPS

The second inertia moment λ_2, with λ_2 / r² ∈ [0.75, 1], can be used in the 2D case to recalculate the fiber radius r based on Eq. 5,

$r' = \sqrt{\lambda_2 / f(\theta')} \,.$

There is no equivalent formula in 3D, so we present a second method to estimate the radius, which can be calculated from the centralized distances d_c(v_i), as they already carry the radius information, see Eq. 1:

$\hat{r}_i = d_c(v_i) \, \sin(\angle(v, v_i)) \,.$

Based on these radius estimates, there are various possibilities to compute the final radius estimate. We have chosen a trimmed mean value: discarding the lowest radius estimates reduces errors due to noise or border regions, and discarding the highest radius estimates reduces errors due to crossing regions. With the estimates sorted in increasing order, the final estimate is computed as follows:

$r' = \frac{1}{8} \sum_{i=3}^{10} \hat{r}_i \,.$

The evaluation showed that this method yields better results than the recalculation from the inertia moments, especially in regions where fibers cross.
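The trimmed-mean estimate can be sketched as below. We read the sum over i = 3..10 as running over the sorted per-direction estimates (our interpretation of the trimming, since the sorting is not stated verbatim), and the 13 axes are the axial directions of the 26-neighborhood.

```python
import numpy as np

# The 13 sampled axes of the 3D 26-neighborhood (each axis covers +/- v_i)
AXES = np.array([(1, 0, 0), (0, 1, 0), (0, 0, 1),
                 (1, 1, 0), (1, -1, 0), (1, 0, 1), (1, 0, -1),
                 (0, 1, 1), (0, 1, -1),
                 (1, 1, 1), (1, -1, 1), (-1, 1, 1), (1, 1, -1)], float)

def radius_estimate(d_c, v):
    """Trimmed mean of the per-direction radius estimates
    r_i = d_c(v_i) * sin(angle(v, v_i)): keep order statistics 3..10."""
    v = v / np.linalg.norm(v)
    # sin of the angle between each axis and the fiber direction
    sines = np.linalg.norm(np.cross(AXES, v), axis=1) / np.linalg.norm(AXES, axis=1)
    r = np.sort(d_c * sines)
    return r[3:11].mean()      # discard the 3 lowest and 2 highest of 13
```

For an ideal cylinder (d_c(v_i) = r / sin∠(v, v_i)) all 13 estimates agree and the trimmed mean recovers r exactly.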

IMPROVEMENT BY SMOOTHING

The resulting direction and radius maps can be smoothed using a mean filter based on the inertia ratio. The inertia moments indicate the elongation of an object in the direction of the inertia axes. For a ball, all inertia moments are equal, whereas for a fiber the first inertia moment differs significantly from the second (in 3D, the second and third are similar). At a point where two fibers cross, the first and second inertia moments are similar. Therefore we use the ratio of the first two inertia moments as an indicator of the relevance of the orientation information. The moment ratio for the 2D case (where λ_1 ≥ λ_2) is defined as:

$MR_2(\lambda_1, \lambda_2) = \frac{\lambda_1}{\lambda_1 + \lambda_2} \in [0.5, 1) \,,$

and for the 3D case (where λ_1 ≤ λ_2 ≤ λ_3), we define the moment ratio as:

$MR_3(\lambda_1, \lambda_2, \lambda_3) = \frac{\lambda_2}{\lambda_1 + \lambda_2} \in [0.5, 1) \,.$

To reduce the differences in orientation and radius information between neighboring pixels, we smooth the maps using a mean filter whose structuring element is a ball with radius given by the radius map and whose filter weights are given by the moment ratio. It is advisable to apply this smoothing first to the radius map and then to the orientation information, to avoid mixing the orientations too much due to an overly large structuring element in crossing regions.
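A minimal 2D sketch of such a weighted mean filter, assuming scalar-valued fields (a full implementation would treat orientations as axial data rather than scalars); the function name and looping structure are ours:

```python
import numpy as np

def weighted_smooth(field, weight, radius_map):
    """Mean filter over a disc whose radius comes from the radius map,
    weighting each neighbor by its moment ratio (the `weight` map)."""
    h, w = field.shape
    out = np.zeros_like(field, dtype=float)
    for y in range(h):
        for x in range(w):
            r = int(round(radius_map[y, x]))
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            yy, xx = np.mgrid[y0:y1, x0:x1]
            disc = (yy - y) ** 2 + (xx - x) ** 2 <= r * r   # ball structuring element
            wts = weight[y0:y1, x0:x1][disc]
            out[y, x] = np.sum(field[y0:y1, x0:x1][disc] * wts) / wts.sum()
    return out
```

Pixels with a high moment ratio (clear fiber-like inertia) dominate the local average, so unreliable values in crossing regions are pulled towards their well-determined neighbors.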

RESULTS

Working on synthetic data with known ground truth makes it possible to evaluate the methods with an error histogram. A perfect result would show a single column at 0. The method with the higher peak near 0 and the faster decrease provides the better results. The error histograms of the angle maps for 2D and 3D synthetic images (Figs. 4 and 5) show the improvement between the different steps of our method.

[FIGURE 4 OMITTED]

[FIGURE 5 OMITTED]

Furthermore, we compare our method to the Gaussian orientation space, which applies several elongated Gaussian filters in given directions and takes the local orientation to be the one yielding the highest filter response, see Robb et al. (2007). This method can be applied directly to gray-level images, so no binarization is necessary. Its results are limited to the chosen directions, whereas our method computes angles in continuous space. The Gaussian method therefore needs many more directions to achieve comparable results, which increases computation time, especially in 3D. The evaluated error histograms are shown in Fig. 6 for 2D and in Fig. 7 for 3D synthetic images.

On the chosen 3D model in an image of 200³ pixels, our method finishes in about one minute, whereas the Gaussian method needed two hours for comparable results.

[FIGURE 6 OMITTED]

[FIGURE 7 OMITTED]

APPLICATION ON DATASETS

In Fig. 8 the method is applied to a SAM image (scanning acoustic microscopy) of a glass-fiber reinforced polymer used for car wheel rims. The sample has a volume fraction of 30% of 1-inch-long fibers. The image shows the projection of a thin slice focused at a depth of 0.1 mm.

The cutout in Fig. 8d illustrates that even for very thin fibers we obtain a reasonable direction estimate. Nevertheless, as mentioned earlier, for too thin fibers (radius less than 2 pixels) the estimated directions collapse onto the sampled directions. This effect is visible in the direction distribution shown in Fig. 8e: the 4 sampled directions show unreasonably high peaks, caused by discretization limits.

[FIGURE 8 OMITTED]

In Fig. 9 we apply our method to the CRP (carbon fiber reinforced polymer) plate. In the direction distribution on the unit sphere, shown in Fig. 9f, the two main orientations from the different layers are indicated by red marks. The θ angle maps can be used to separate the layers; a 3D rendering of the separated layers is shown in Fig. 9e.

[FIGURE 9 OMITTED]

ANALYSIS ON GRAY LEVEL IMAGES BY QUASI DISTANCE

In dense parallel fiber networks it often happens that fibers touch at their borders and that the thin frontiers between them disappear during binarization. Applying the directional distance transform directly to the gray-level images offers the chance to detect these thin frontiers. Three different approaches were chosen. The first possibility to measure distances on gray-level images is the quasi distance transform, introduced by Beucher (2007), with an additional contrast threshold. The second approach uses Gaussian filters of adapting size. The third approach is based on comparing a shape model to the existing fiber structure.

QUASI DISTANCE DEFINITION

The quasi distance evaluates pixelwise, for sizes i ∈ ℕ, the residual operator τ, derived from the difference between erosions or dilations with structuring elements of sizes i and i + 1:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

The quasi distance is defined as the size i_d for which the dilation or erosion yields the highest residue compared to the next size i_d + 1. In our case, as we want to measure directional distances, the structuring element is a directed segment, and the image can be treated as a set of 1D signals. For a 1D signal or gray-level function f: ℝ⁺ → ℝ we can define the distance for a point x_0 in the −x direction with the help of the underbuild function

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

This function is increasing and keeps the value in [expression not reproducible]. Its definition equals the reconstruction by dilation from the point x_0, as known in mathematical morphology (see Vincent, 1993, or Salembier and Serra, 1995). The quasi distance is the distance to the point with the highest gradient:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

with

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

and f^c the inverted image (in theory −f, on 8-bit images 255 − f).

The implementation can be simplified by defining an image walker in the requested direction and buffering the gray-level values of one line in a decreasing and an increasing vector, which are updated accordingly.

Furthermore, the distance can be influenced by a threshold G_1 on the significant gradient. The distance is then defined as

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII].

This threshold handles the case where regions are separated only by a weakly contrasted line while a background region farther away has higher contrast. In the standard case the distance measure crosses the weakly contrasted line and stops at the higher-contrasted background. If the threshold is lower than the contrast of the line separating the regions, the distance measure stops at the line and detects the real fiber end.
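The thresholded directional quasi distance on a 1D line can be sketched as follows. This is our own minimal reading of the definition, with the running minimum standing in for the reconstruction by dilation; the function name and threshold semantics (first drop exceeding G) are illustrative.

```python
import numpy as np

def quasi_distance_1d(f, x0, G=0):
    """Directional (thresholded) quasi distance from x0 in the -x
    direction on a 1D gray-value line f: distance to the largest drop
    of the running-minimum envelope, or, with threshold G > 0, to the
    first drop exceeding G."""
    env = np.minimum.accumulate(f[x0::-1])   # running min going left from x0
    drops = env[:-1] - env[1:]               # gradient of the envelope
    if G > 0:
        hits = np.nonzero(drops >= G)[0]
        if hits.size:
            return int(hits[0]) + 1          # first significant drop
    return int(np.argmax(drops)) + 1         # largest drop otherwise
```

The test reproduces the situation described in the text: without a threshold the measure crosses a weakly contrasted line and stops at the stronger background contrast; with a suitable threshold it stops at the weak line, i.e., at the real fiber end.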

GRAY VALUE DISTANCES BY ADAPTING GAUSSIAN FILTERS

This approach makes use of Gaussian filters whose size adapts to the distance from the point of interest. Let h(s) = (h_i(s)), i = 0, …, s, be the vector of Gaussian filter weights with σ(s) = (s+1)/4 and μ = 0.

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

The filter, with filter size s(y) = √(x_0 − y), is applied to the reconstruction by dilation with respect to the distance to x_0:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

The distance is taken to be the one yielding the highest difference of the filtered values:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

with [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]. By increasing the filter size with increasing distance, borders at far distances are smoothed more strongly, and close distances are thus preferred.

GRAY VALUE DISTANCES BY MODEL COMPARISON

The third approach considers a shape model for the fiber, which takes into account not only the local decrease of the gray level, but also the regularity of the values considered to be fiber foreground. The evaluation of a certain distance h from x_0 depends on the regularity of the values between x_0 − h and x_0 and on the decrease at the point x_0 − h. The curve is expected to be constantly high on fiber foreground (between x_s(h) = (x_0 − h) + s/2 and x_0), whereas it should decrease from x_s(h) to x_e(h) = x_s(h) − s. The strength s of the decrease can be chosen with respect to the image; on the treated images the minimal choice s = 2 was optimal. The smoothed model decrease is taken as h_s(x) = ½ sin((x − (x_0 − h))π/s) + ½. The evaluation is done with

$I_t(x_0, h) = \max\left( 0, \; I_g(x_0, h) - \sqrt{I_1(x_0, h)^2 + I_2(x_0, h)^2} \right) ,$

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

with the mean value f̄, the minimal value f_min to which the curve decreases, and the difference between these two values f_Δ = f̄ − f_min. The final distance is

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

EVALUATION ON GRAY VALUE LINES

The presented approaches are evaluated on an original and a preprocessed gray-value line from the CRP data set. As preprocessing we used toggle mapping to enhance the contrast. The main idea of this filter is to build upper and lower bounds by dilation and erosion, and to replace the original gray level at every point by the nearer of the two bounds. For more details, see Fabrizio and Marcotegui (2006). Note that this operator can enhance salt-and-pepper noise.
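The toggle-mapping idea can be sketched on a 1D gray-value line as follows; the function name is ours, and a flat segment of half-length `size` stands in for the structuring element (ties go to the erosion bound, one of several possible conventions).

```python
import numpy as np

def toggle_mapping(f, size=1):
    """Toggle-mapping contrast enhancement on a 1D line: build upper and
    lower bounds by dilation and erosion with a segment of half-length
    `size`, then snap every value to the nearer bound."""
    n = len(f)
    dil = np.array([f[max(0, i - size):i + size + 1].max() for i in range(n)])  # dilation
    ero = np.array([f[max(0, i - size):i + size + 1].min() for i in range(n)])  # erosion
    return np.where(dil - f < f - ero, dil, ero)
```

On a blurred edge the operator snaps intermediate values to the nearer extreme, sharpening the transition; this is exactly the contrast enhancement used before the gray-level distance measures.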

On the preprocessed values, only the thresholded quasi distance and the integral model approach show perfect results, as presented in Fig. 10. For the Gaussian filter approach we can already see that the results are not as stable across neighboring points as for the other approaches.

[FIGURE 10 OMITTED]

On the original gray-value line (without preprocessing), only the integral model approach shows acceptable results (Fig. 11).

RESULTS AND COMPARISON

The presented algorithms are tested on a 2D section of a 3D dataset of the carbon fiber reinforced polymer introduced in Fig. 9. During binarization, thin contours between fibers are lost (see Fig. 12a); distance measures can therefore cross several fibers, which distorts the results. The thin frontiers between the fibers visible in the gray-level image are enhanced by morphological toggle mapping of size 4 (Fig. 12b).

[FIGURE 11 OMITTED]

[FIGURE 12 OMITTED]

For the standard quasi distance it is still possible that the contrast between a fiber and a thin division line is too low, so that a higher contrast positioned farther away is taken as the object end. This causes similar measurement problems as in the binary case. The resulting direction maps for the standard quasi distance are shown in Fig. 13b. Using the threshold G' = 15 for a sufficient gradient, the thin border lines between the fibers are detected as object ends, which improves the measurements (see Fig. 13c). Comparing the resulting direction map from Gaussian filters (see Fig. 13d) to the results of the thresholded quasi distance is not trivial: the right part seems to have smoother values, whereas the left part shows greater deviation from the real orientation. A more detailed evaluation can be done from the radius maps presented in Fig. 14. Despite good expectations from the results on the gray-value line, the shape-model approach does not show convincing results in Fig. 13e. The problem with such dense fiber systems is that if the detection of the foreground end fails in just one direction, this direction carries too large a weight and strongly influences the estimated orientation.

[FIGURE 13 OMITTED]

Irregularities in the direction measurement can be smoothed with the adaptive smoothing depending on the moment ratio (introduced in the section "Improvement by smoothing"). These final results can compete with the result of the Gaussian orientation space applied to the gray-level images with filter size (1,3) (presented for comparison in Fig. 13f).

In Fig. 14 the resulting radius maps are presented, which make it possible to evaluate the detection of fiber ends. High radius estimates indicate errors in the detection of fiber ends. In parallel fiber systems, the measurement error caused by merged fibers influences the radius calculation more strongly than the direction calculation; evaluation of the directional foreground end detection is therefore more meaningful on the radius maps. Clearly, the thresholded quasi distance approach (presented in Fig. 14b) shows the most stable results here.

[FIGURE 14 OMITTED]

Regarding computational complexity, the quasi distance approach, which runs in essentially linear time (worst case O(n log n)), has an advantage over the approaches with Gaussian filters (O(n²)) or with the shape model (O(n² log n)). The given times are for one line of n pixels.

CONCLUSION

We have seen that the presented method provides stable results, even with few sampled directions. The presented analysis tool for binary images works automatically without any parameters and returns maps of local direction and radius estimates. The results are reasonable, as shown in the error histograms on synthetic data. The computation time is also acceptable: for an image of 300³ pixels our algorithm took 9 min, whereas the Gaussian orientation space at fine resolution took about two hours.

The main advantage of the Gaussian orientation space was its direct applicability to gray-level images, for cases where a sufficiently good binarization cannot be achieved. With the thresholded quasi distance method, we have found a reasonable and efficient alternative. Quantitative analysis is in progress.

ACKNOWLEDGEMENTS

This paper is an extended version of a communication presented at ECS 10 (Milan, June 2009). We thank M. Steinhauser (Fraunhofer EMI in Freiburg) for the SAM-images of the GRP sample. We thank Prof. M. Maier from IVW GmbH Kaiserslautern for the CRP plate sample and L. Helfen (ANKA Angstromquelle Karlsruhe GmbH) for the X-ray micro tomography images, made in the ESRF Synchrotron (Grenoble) at Beamline ID19. We thank J. Stawiaski and C. Kessler for visualizations; and O. Wirjadi for the implementation of the Gaussian orientation space to compare our method.

REFERENCES

Altendorf H (2007). Consistent Pairs of Adjacency Systems in 3D Image Analysis. Diploma thesis, University of Mannheim, Fraunhofer ITWM.

Bakhadyrov I, Jafari MA (1999). Inertia tensor as a way of feature vector definition for one-dimensional signatures. In: Proc IEEE Int Conf Syst Man Cybern, Oct 12-15, 1999, Tokyo, Japan. 2:904-9.

Beucher S (2007). Numerical residues. Image Vision Comput 25:405-15.

Duda RO, Hart PE (1973). Pattern Classification and Scene Analysis. John Wiley & Sons Inc.

Fabrizio J, Marcotegui B (2006). Text segmentation in natural scene using toggle-mapping. Tech. rep., Univ Paris 06 and Mines ParisTech, France.

Jeulin D, Moreaud M (2008). Segmentation of 2D and 3D textures from estimates of the local orientation. Image Anal Stereol 27:183-92.

Robb K, Wirjadi O, Schladitz K (2007). Fiber orientation estimation from 3D image data: Practical algorithms, visualization, and interpretation. In: Proc 7th Int Conf Hybrid Intel Syst, Sept 17-19, 2007, Kaiserlautern, Germany. 320-5.

Rosenfeld A, Pfaltz JL (1966). Sequential operations in digital picture processing. J ACM 13:471-94.

Salembier P, Serra J (1995). Flat zones filtering, connected operators, and filters by reconstruction. IEEE Trans Image Process 4:1153-60.

Sandau K, Ohser J (2007). The chord length transform and the segmentation of crossing fibres. J Microsc 226:43-53.

Vincent L (1993). Morphological gray scale reconstruction in image analysis: Applications and efficient algorithms. IEEE Trans Image Process 2:176-201.

HELLEN ALTENDORF (1) AND DOMINIQUE JEULIN (2)

(1) Department of Image Processing, Fraunhofer Institute of Industrial Mathematics, Fraunhofer-Platz 1, D-67663 Kaiserslautern, Germany; (2) Center of Mathematical Morphology, Mines Paris Tech, 35 rue Saint Honore, 77305 Fontainebleau cedex, France

e-mail: Hellen.Altendorf@itwm.fraunhofer.de, Dominique.Jeulin@cmm.ensmp.fr

(Accepted September 9, 2009)
COPYRIGHT 2009 Slovenian Society for Stereology and Quantitative Image Analysis