
Image recognition of occluded objects based on improved curve moment invariants.

1. Introduction

Machine vision is an important multi-disciplinary research topic in computer science and information technology, owing to its applications in artificial intelligence. Image recognition, a key branch of machine vision, also draws considerable interest.

The process of image recognition can be illustrated as follows:

[FIGURE 1 OMITTED]

In the input image above, an ellipse appears distorted into an approximate circle because of the position and angle of the camera when the picture was taken. The image is then processed to extract information useful for recognition. Finally, with the aid of mathematical modelling tools, the object in the input image can be recognized as an ellipse.

To date, many successful techniques have been developed to recognize objects in images [1]. Depending on whether the shape features are extracted from the contour only or from the whole shape region, shape representation can be categorized into two kinds: contour-based methods and region-based methods.

Contour-based approaches are more common in applications because they accord with human visual perception and require less information to store.

Based on whether the shape is represented as a whole or by segments, shape representation can also be classified into two other categories: global approaches and structural approaches. In contour-based methods, the global approaches describe the shape with a feature vector derived from the entire boundary.

Common global features are: area, circularity, eccentricity, major axis orientation, bending energy, convexity, ratio of principal axes, circular variance and elliptic variance [2, 3]. An object has several main properties, such as shape, colour, texture, brightness, etc., and objects can accordingly be classified in different ways. In general, shape information is mainly used to characterize objects.

A two-dimensional image shape is generally represented by its global features, such as area, perimeter, centroid, moments of inertia and Fourier descriptors, or by its local features, such as line segments, arc segments, curve moments, corners, end points, etc.

Global features can only describe a shape that is wholly visible; therefore, only local features are usable for occluded objects in images.

At present, recognition tasks typically use model-based machine vision systems, which formulate recognition as the matching of the input image against a set of predefined models of known objects. Hence, the two key aspects of a recognition system are the description of objects and the matching between the known models in the prior-knowledge database and the unknown objects in the scene.

2. Recognition of 2D Images

To recognize the objects in images, considerable effort has been devoted to developing algorithms in both research and application fields. Based on the different features used, the recognition algorithms for two-dimensional object shapes can be classified into the following seven categories: dominant point based approach [4, 5], polygonal approximation approach [6, 7], curve segment approach [8, 9], Fourier descriptors approach [10], wavelet approach [11, 12], skeleton approach [13] and scale space filtering approach [14].

Dominant points [15] (local extreme-curvature points) are rich in information and fast to obtain because of the small number of features. This approach uses the internal angle and curvature to describe each dominant point, but cannot form a complete representation because the shape links between dominant points are absent. Hence the recognition result is ambiguous and may be wrong in some special cases. The polygonal approximation approach performs well only for polygonal objects, so it cannot be used for arbitrary shapes. The curve segment approach uses curve moments of different orders to compute feature vectors for each segment [16]. The geometrical characteristics of the object boundary can meet different requirements of detail level and are invariant to camera position and rotation angle. It is computationally efficient and robust to noise, similarity transforms and partial occlusion, but the physical meaning of some curve moments is not clear.

The Fourier descriptors approach performs matching at a predefined resolution, but its accuracy is low.

The wavelet approach is a relatively recent mathematical tool for signal analysis with many advantages over the traditional Fourier descriptors approach, such as localization in both the frequency and spatial domains, multi-resolution analysis and sparse representation. It uses the wavelet coefficients of curve segments of the object boundary as the feature. Since it needs at least two intact curve segments to identify an object in the scene, it cannot recognize objects with fewer than three intact corners, such as a circle or an ellipse [17-19].

The skeleton approach [20], employing skeletonization (also called medial-axis transformation) on the pseudo-Euclidean distance map, has the properties of homotopy, isotropy, reconstructibility and noise insensitivity. It needs the chain-coded contour, with the corners located by the skeleton of the background matched to the contour corners in sequence; it then decomposes the object into many sub-parts and extracts the invariant global features of every sub-part while recording the relations between every pair. This method can also recognize occluded objects in images, but it cannot provide information about translation, orientation and scale, and the accuracy of the matching results suffers because the recovery of the occluded portions is approximate.

The scale space filtering approach [21-23] extracts four kinds of local geometrical primitives to describe the boundary of objects: lines, arcs, corners and ends. It convolves the object boundary with Gaussian filters of different scales in scale space to choose the right scale for corner detection. The geometrical measurements of each primitive are then calculated, such as the angle of a corner, the length of a line or the radius of an arc. It also uses relative feature measures to find the similarity and dissimilarity of objects in local portions and global regions. Finally, object matching generates hypotheses about which model object appears in the scene, and hypothesis verification estimates the position, orientation and size of the model object in the scene. This method can recognize man-made objects even under partial occlusion. However, it is hard to determine the correct positions of the start and end points of lines and arcs because of the corner effect during scale space filtering, so the recognition accuracy suffers.

None of these algorithms meets all the requirements: invariance to illumination, camera angle and blurring; reliable recognition of occluded objects; and applicability to arbitrary shapes such as non-polygonal shapes and man-made parts.

3. Image Recognition of Occluded Objects

Occluded images are often encountered in industrial inspection, robotics, astronomical monitoring, environmental science, medical science and other fields.

[FIGURE 2 OMITTED]

A common case is the work parts on an assembly line: they often touch or overlap each other. In that case, the images taken by the inspection camera in the workshop contain occluded objects. Similar cases also occur in robotic applications and other circumstances.

As mentioned above, a model database is always used to hold the prior knowledge of predefined objects. The unknown input image is then pre-processed to obtain feature matrices of the same form as those of the model objects in the database. Through the matching of the two sets of feature matrices, between model objects and unknown objects in the input image, the recognition of unknown objects can finally be realized.

[FIGURE 3 OMITTED]

The features used to represent objects are of two types: global features and local features. To recognize occluded objects, only local features are usable. There are many sorts of local features, such as lines, arcs, points, corners, curves and ends.

In our approach, we use curves as the local feature to describe the objects in an image. Pre-processing is applied to an image before building the model database or calculating the feature matrices of unknown objects.

[FIGURE 4 OMITTED]

4. Improved Curve Moment Invariants

Moment functions have been applied in vision analysis since 1962 because of their advantages over other recognition features in shape representation.

Moment functions can represent global characteristics of the shape of an image and provide information about its geometrical features. They are widespread in vision analysis, for example in invariant pattern recognition, object classification, pose estimation, and image coding and reconstruction. The first type of moment applied to images was the geometric moment, introduced in 1962, which is computationally simple [24].

The advantages of traditional moments over other recognition features in shape representation are:

* Low computational requirements and easy implementation.

* Use of a single value as the feature, which makes matching easy.

* Uniqueness.

* Invariance to shape translation, rotation and scaling.

* Low noise sensitivity.

[FIGURE 5 OMITTED]

The traditional (p, q)th moment of the region R is defined as

$m_{pq} = \iint_R x^p y^q f(x, y)\, dx\, dy \qquad p, q = 0, 1, 2, \ldots$ (4.1)

The value of $f(x, y)$ is defined to be 1 if the point $(x, y)$ lies inside the closed and bounded region R, and 0 otherwise.

Hence, the central moment can be expressed as

$\mu_{pq} = \iint_R (x - \bar{x})^p (y - \bar{y})^q f(x, y)\, dx\, dy$ (4.2)

where $\bar{x} = m_{10}/m_{00}$ and $\bar{y} = m_{01}/m_{00}$.

For a digital image, equation (4.1) becomes

$m_{pq} = \sum_x \sum_y x^p y^q f(x, y)$ (4.3)
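As a concrete illustration of equation (4.3), the following minimal Python sketch (the function name and the numpy-based layout are ours, not the paper's) computes $m_{pq}$ of a binary image and recovers the centroid used later for the central moments:

```python
import numpy as np

def geometric_moment(img, p, q):
    """Discrete (p, q)th geometric moment of a binary image, eq. (4.3).

    img is a 2-D array with f(x, y) = 1 inside the shape and 0 outside;
    the row index is taken as y and the column index as x.
    """
    ys, xs = np.nonzero(img)                 # coordinates of shape pixels
    return np.sum((xs ** p) * (ys ** q))

# Example: centroid of a small filled block.
img = np.zeros((8, 8), dtype=int)
img[2:6, 1:7] = 1                            # a 4 x 6 filled rectangle
m00 = geometric_moment(img, 0, 0)
x_bar = geometric_moment(img, 1, 0) / m00
y_bar = geometric_moment(img, 0, 1) / m00
print(x_bar, y_bar)                          # 3.5 3.5, the block centre
```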

To improve the traditional moment, the definition in equation (4.1) is modified by using the shape boundary only.

For a curve or curve segment C, its (p, q)th-order curve moment is defined as

$m_{pq} = \oint_C x^p y^q \, ds \qquad \text{for } p, q = 0, 1, 2, \ldots$ (4.4)

where $\oint_C$ denotes a line integral along the curve C, and $ds = \sqrt{(dx)^2 + (dy)^2}$.
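Numerically, the line integral of equation (4.4) can be approximated by accumulating $x^p y^q\, ds$ over the short straight segments between consecutive boundary points. The sketch below is an illustrative implementation under that assumption (names are ours); for the unit square boundary, $m_{00}$ is simply the perimeter:

```python
import numpy as np

def curve_moment(pts, p, q):
    """(p, q)th curve moment along a digitized curve, eq. (4.4).

    pts is an (N, 2) array of ordered (x, y) boundary points; the line
    integral is approximated segment by segment at midpoints, with
    ds = sqrt(dx^2 + dy^2).
    """
    pts = np.asarray(pts, dtype=float)
    d = np.diff(pts, axis=0)                  # (dx, dy) of each segment
    ds = np.hypot(d[:, 0], d[:, 1])           # segment lengths
    mid = (pts[:-1] + pts[1:]) / 2.0          # midpoint of each segment
    return np.sum(mid[:, 0] ** p * mid[:, 1] ** q * ds)

# m00 is the curve length: 4.0 for the unit square boundary below.
square = np.array([(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)])
print(curve_moment(square, 0, 0))                               # 4.0
print(curve_moment(square, 1, 0) / curve_moment(square, 0, 0))  # x_bar = 0.5
```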

[FIGURE 6 OMITTED]

The central moments can be similarly defined as [25]

$\mu_{pq} = \oint_C (x - \bar{x})^p (y - \bar{y})^q \, ds$ (4.5)

where $\bar{x} = m_{10}/m_{00}$ and $\bar{y} = m_{01}/m_{00}$.

For a digital shape, equation (4.4) becomes

$m_{pq} = \sum_{(x, y) \in C} x^p y^q$ (4.6)

It can be easily verified [18] that the central moments up to order $p + q \le 3$ can be computed by the following formulas (4.7):

$\mu_{00} = m_{00} \qquad \mu_{01} = 0 \qquad \mu_{10} = 0 \qquad \mu_{11} = m_{11} - \bar{y} m_{10}$
$\mu_{02} = m_{02} - \bar{y} m_{01} \qquad \mu_{20} = m_{20} - \bar{x} m_{10}$
$\mu_{21} = m_{21} - 2\bar{x} m_{11} - \bar{y} m_{20} + 2\bar{x}^2 m_{01} \qquad \mu_{12} = m_{12} - 2\bar{y} m_{11} - \bar{x} m_{02} + 2\bar{y}^2 m_{10}$
$\mu_{03} = m_{03} - 3\bar{y} m_{02} + 2\bar{y}^2 m_{01} \qquad \mu_{30} = m_{30} - 3\bar{x} m_{20} + 2\bar{x}^2 m_{10}$ (4.7)

From these formulas it is apparent that the central moments are invariant to translation. They can also be normalized to be invariant to a scaling change by the following formula; the resulting quantities are called normalized central moments:

$\eta_{pq} = \mu_{pq} / \mu_{00}^{\gamma} \qquad \text{for } p + q = 2, 3, \ldots$ (4.8)

where $\gamma = (p + q)/2 + 1$.
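A direct transcription of formulas (4.7) and (4.8) as dictionary-based helpers (the function names are ours):

```python
def central_moments(m):
    """Central curve moments up to order p + q <= 3, formulas (4.7).
    m maps (p, q) to the raw curve moment m_pq."""
    xb = m[(1, 0)] / m[(0, 0)]
    yb = m[(0, 1)] / m[(0, 0)]
    return {
        (0, 0): m[(0, 0)],
        (0, 1): 0.0,
        (1, 0): 0.0,
        (1, 1): m[(1, 1)] - yb * m[(1, 0)],
        (0, 2): m[(0, 2)] - yb * m[(0, 1)],
        (2, 0): m[(2, 0)] - xb * m[(1, 0)],
        (2, 1): (m[(2, 1)] - 2 * xb * m[(1, 1)] - yb * m[(2, 0)]
                 + 2 * xb ** 2 * m[(0, 1)]),
        (1, 2): (m[(1, 2)] - 2 * yb * m[(1, 1)] - xb * m[(0, 2)]
                 + 2 * yb ** 2 * m[(1, 0)]),
        (0, 3): m[(0, 3)] - 3 * yb * m[(0, 2)] + 2 * yb ** 2 * m[(0, 1)],
        (3, 0): m[(3, 0)] - 3 * xb * m[(2, 0)] + 2 * xb ** 2 * m[(1, 0)],
    }

def normalized_central_moments(mu):
    """Normalized central moments of eq. (4.8):
    eta_pq = mu_pq / mu_00 ** gamma, with gamma = (p + q)/2 + 1."""
    return {(p, q): v / mu[(0, 0)] ** ((p + q) / 2.0 + 1.0)
            for (p, q), v in mu.items() if p + q >= 2}
```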

Owing to their invariance to scaling, translation and rotation, the following moment invariants were derived for use as features in shape recognition:

$\phi_1 = \eta_{20} + \eta_{02}$
$\phi_2 = (\eta_{20} - \eta_{02})^2 + 4\eta_{11}^2$
$\phi_3 = (\eta_{30} - 3\eta_{12})^2 + (3\eta_{21} - \eta_{03})^2$
$\phi_4 = (\eta_{30} + \eta_{12})^2 + (\eta_{21} + \eta_{03})^2$
$\phi_5 = (\eta_{30} - 3\eta_{12})(\eta_{30} + \eta_{12})[(\eta_{30} + \eta_{12})^2 - 3(\eta_{21} + \eta_{03})^2] + (3\eta_{21} - \eta_{03})(\eta_{21} + \eta_{03})[3(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2]$
$\phi_6 = (\eta_{20} - \eta_{02})[(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2] + 4\eta_{11}(\eta_{30} + \eta_{12})(\eta_{21} + \eta_{03})$
$\phi_7 = (3\eta_{21} - \eta_{03})(\eta_{30} + \eta_{12})[(\eta_{30} + \eta_{12})^2 - 3(\eta_{21} + \eta_{03})^2] - (\eta_{30} - 3\eta_{12})(\eta_{21} + \eta_{03})[3(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2]$ (4.9)

After calculating this set of curve moment invariants, the object can be represented by these invariants as the feature of its shape.
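For reference, the sketch below evaluates seven invariants from the normalized central moments produced by the helper above. We assume equation (4.9) is the standard seven-invariant set of Hu [16], which the later use of $\phi_1, \ldots, \phi_7$ suggests; if the authors' improved set differs, only the formula bodies change:

```python
import numpy as np

def invariants(eta):
    """Seven moment invariants of eq. (4.9), assumed to be Hu's set [16],
    computed from normalized central moments eta[(p, q)]."""
    n20, n02, n11 = eta[(2, 0)], eta[(0, 2)], eta[(1, 1)]
    n30, n03, n21, n12 = eta[(3, 0)], eta[(0, 3)], eta[(2, 1)], eta[(1, 2)]
    a, b = n30 + n12, n21 + n03            # recurring sums
    phi1 = n20 + n02
    phi2 = (n20 - n02) ** 2 + 4 * n11 ** 2
    phi3 = (n30 - 3 * n12) ** 2 + (3 * n21 - n03) ** 2
    phi4 = a ** 2 + b ** 2
    phi5 = ((n30 - 3 * n12) * a * (a ** 2 - 3 * b ** 2)
            + (3 * n21 - n03) * b * (3 * a ** 2 - b ** 2))
    phi6 = (n20 - n02) * (a ** 2 - b ** 2) + 4 * n11 * a * b
    phi7 = ((3 * n21 - n03) * a * (a ** 2 - 3 * b ** 2)
            - (n30 - 3 * n12) * b * (3 * a ** 2 - b ** 2))
    return np.array([phi1, phi2, phi3, phi4, phi5, phi6, phi7])
```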

5. Image Recognition Using Improved Curve Moment Invariants

Occlusion occurs wherever two or more objects lie close to each other, or when the captured image is distorted by the angle and position of the camera. An image in which one or more parts of an object are invisible is called an image with occlusion, and the objects in it are accordingly called occluded objects.

[FIGURE 7 OMITTED]

5.1 Our Scheme

In our algorithm, curve moment invariants are employed as the local feature of objects in a scene image. The main procedures of our algorithm are as follows:

[FIGURE 8 OMITTED]

5.2. Pre-processing the Images

Whether constructing the model database or recognizing an unknown object, we first need to preprocess the image, which is normally obtained through a CCD camera. The purpose of preprocessing is to improve the image in ways that increase the chances of success of the subsequent processes. Our pre-processing procedure is:

Noise Removal → Binarization → Edge Detection → Boundary Tracking.

We convolve the input image with a 3 × 3 spatial Gaussian filter to remove noise. The 3 × 3 spatial Gaussian filter commonly takes the following form:

$G = \frac{1}{16} \begin{bmatrix} 1 & 2 & 1 \\ 2 & 4 & 2 \\ 1 & 2 & 1 \end{bmatrix}$ (5.1)

In the binary image, pixels are marked "255" or "0" according to whether they belong to a shape (foreground) or to the background. Subsequently, the boundary of the image can be detected and extracted using the 8-neighbour chain-code algorithm:

[FIGURE 9 OMITTED]
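A minimal numpy sketch of the first three pre-processing stages, under two stated assumptions: the kernel of equation (5.1) is taken to be the common 1/16 [1 2 1; 2 4 2; 1 2 1] approximation, and the boundary step is a simple neighbour test standing in for full 8-neighbour chain-code tracking:

```python
import numpy as np

# Assumed 3 x 3 Gaussian kernel of eq. (5.1).
GAUSS = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 16.0

def denoise(img):
    """Convolve a grayscale image with the 3 x 3 Gaussian kernel."""
    out = np.zeros(img.shape, dtype=float)
    padded = np.pad(img.astype(float), 1, mode="edge")
    for dy in range(3):
        for dx in range(3):
            out += GAUSS[dy, dx] * padded[dy:dy + img.shape[0],
                                          dx:dx + img.shape[1]]
    return out

def binarize(img, threshold=128):
    """Mark shape pixels 255 and background pixels 0."""
    return np.where(img >= threshold, 255, 0).astype(np.uint8)

def boundary_pixels(binary):
    """Shape pixels with at least one background pixel among their
    8 neighbours; a stand-in for edge detection before chain-code
    boundary tracking."""
    fg = binary == 255
    padded = np.pad(fg, 1, mode="constant")
    near_bg = np.zeros_like(fg)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            near_bg |= ~padded[1 + dy:1 + dy + fg.shape[0],
                               1 + dx:1 + dx + fg.shape[1]]
    return fg & near_bg
```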

5.3. Feature Definition of Objects

After the boundary is obtained, the object shape is partitioned into several curve segments according to the corner points, as sketched below.
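A minimal sketch of this partitioning step, assuming the indices of the detected corner points along the boundary are already available (function and variable names are ours):

```python
import numpy as np

def split_boundary(boundary, corner_idx):
    """Partition a closed boundary (N x 2 array of ordered points) into
    curve segments running between consecutive corner points."""
    corner_idx = sorted(corner_idx)
    segments = [boundary[a:b + 1]
                for a, b in zip(corner_idx, corner_idx[1:])]
    # wrap-around segment from the last corner back to the first
    segments.append(np.vstack([boundary[corner_idx[-1]:],
                               boundary[:corner_idx[0] + 1]]))
    return segments
```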

The invariants of the curve moments defined below are then used as the object feature. The moment definition in equation (4.1) is modified by using the shape boundary only: for a curve or curve segment C, the (p, q)th-order curve moment is defined as

$m_{pq} = \oint_C x^p y^q \, ds \qquad \text{for } p, q = 0, 1, \ldots$ (5.2)

where $\oint_C$ denotes a line integral along the curve C, and $ds = \sqrt{(dx)^2 + (dy)^2}$.

The central moments can be similarly defined as

$\mu_{pq} = \oint_C (x - \bar{x})^p (y - \bar{y})^q \, ds$ (5.3)

where $\bar{x} = m_{10}/m_{00}$ and $\bar{y} = m_{01}/m_{00}$.

For a digital shape, equation (5.2) becomes

$m_{pq} = \sum_{(x, y) \in C} x^p y^q$ (5.4)

5.4. Match Between Model Object and Unknown Object

Because the curve moment invariants of different orders exhibit a large dynamic range, we take the logarithm of the curve moment invariant values as the feature values of objects. The feature matrix of each known object is recorded in a database, while that of the unknown object of interest serves as the features for matching.

The first column of Table 1 gives the index of the curve segments, running sequentially in the clockwise direction. The second to eighth columns give the logarithms of the curve moment invariants of seven different orders, $\phi_1, \phi_2, \ldots, \phi_7$ respectively; $\log(\phi_i)$ is the natural logarithm of the element $\phi_i$.

We choose the logarithm of the invariants as the matrix elements also because the logarithm magnifies small relative differences between invariant values, so that the improved curve moment invariants can represent the object shape in a clear and unique way.
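The feature matrix can then be assembled as below. The absolute value inside the logarithm is our assumption for handling invariants that may be negative (as some entries in the tables of Figure 13 are); the paper does not spell this out:

```python
import numpy as np

def feature_matrix(segment_invariants):
    """One row per boundary segment, columns log(phi_1) .. log(phi_7)
    as in Table 1. segment_invariants is a list of 7-element arrays of
    curve moment invariants, one array per segment of one object."""
    return np.array([np.log(np.abs(phis)) for phis in segment_invariants])
```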

6. Experiments

A series of experiments was carried out to verify the efficiency and invariance of our method.

Firstly, a model database is built to store the information of various predefined model objects, with curve moment invariants as their features. Images from everyday life or the Internet are used as model objects in the early experiments.

Subsequently, the images of unknown objects from the scenes are pre-processed, which includes denoising, binarization, boundary tracking, corner point extraction and boundary segmentation.

Afterwards, the boundary of each object is segmented. To obtain eligible curves, each boundary segment is made up of at least three corner points. The curve moment invariants of each curve can then be calculated and used as the feature of each object.

Then a distance table is built to compute the difference of each pair of entries between a model object and an unknown object. Here $d_k(i, j)$ is defined to be the entry of the distance table.

M is defined as the number of matched values and T as the total number of matrix values. The matching decision rule is as follows (a code sketch follows the list):

* if $d_k(i, j) <$ Threshold, this pair of entries is matched;

* if M = T, the model object is exactly the unknown object in the image;

* if Ceil(T/2-1) < M < T, the model object is present in the image;

* else, the model object is not present in the image; the unknown object is then added to the model database for future matching.
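A sketch of this decision rule, applied to a precomputed distance table; the threshold value below is hypothetical, since the paper does not state one:

```python
import numpy as np

def judge(distance_table, threshold=0.5):
    """Apply the matching rule to the distance table d_k(i, j)."""
    d = np.asarray(distance_table)
    M = int(np.sum(np.abs(d) < threshold))   # number of matched entries
    T = d.size                               # total number of entries
    if M == T:
        return "model object is exactly the unknown object"
    if np.ceil(T / 2 - 1) < M:
        return "model object is present in the image"
    return "model object not present; add unknown object to the database"
```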

Finally, the feature values of the objects in the model database are matched against those of the unknown image to recognize the unknown objects in the images.

At the beginning of our experiments, we first tried some simple images containing 2D geometrical shapes such as rectangles, triangles, circles, ellipses, etc. In the following example, there are three shapes in the image: one pentagon, one rectangle and one ellipse, whose feature matrices were already calculated and stored in our database. This image with occlusion could therefore be tested to see whether the occluded objects match the model objects, after pre-processing and obtaining the feature values of each segment of the silhouette. The following figures show the pre-processing, boundary tracking, corner point extraction and recognition result.

[FIGURE 10 OMITTED]

The results of these first experiments showed that occluded objects could be recognized by our method: the ellipse and the rectangle were recognized. However, the corner points were not extracted as expected, and the pentagon could not be recognized. As a consequence, the algorithm had to be improved. After improving the denoising in particular, using more appropriate filter functions and resetting thresholds, the next stage of experiments proved more effective and accurate.

[FIGURE 11 OMITTED]

Subsequently, we tried more complex objects with irregular shapes, such as industrial tools, everyday utensils, aircraft, flowers, animals and so on. Here we take an image with a plier occluded by a wrench as a simple example. The information of the plier and the wrench as model objects is pre-defined in our model database; they can thereafter be used for object recognition.

[FIGURE 12 OMITTED]

Similarly, we first pre-process this image to obtain the corner points and the sub-parts of the boundary of the objects in the image. Then the feature values representing each boundary segment are calculated according to the corner points. Next, we build a distance table to match the entries of the feature matrices of each segment between model objects and occluded objects.

[FIGURE 13 OMITTED]

Based on the decision rule, the figure shows that we can detect the presence of the plier in the image, although it is occluded. The four non-occluded segments are clearly recognized, while the others are detected to a lesser degree, so the recognition of the plier is feasible. The wrench was recognized as well through the same process.

After several trials on different images with different occluded objects, our method proved efficient at recognizing occluded objects, even those of arbitrary shape and with a high occlusion rate.

7. Conclusion

This paper gives a detailed description of image recognition and presents the existing recognition methods of both isolated objects and partially occluded objects.

Based on the improved set of curve moment invariants, a novel method to recognize partially occluded objects is put forward, building on the traditional region moments. A number of experiments verify its efficiency and speed in recognizing partially occluded objects in images. In addition, the selected features have proved invariant to similarity transformations of objects, i.e. translation, scaling and rotation.

Compared with other approaches to recognizing objects in images, this approach is better suited to occluded objects, with lower time consumption and noise sensitivity, as well as invariance to translation, scaling and rotation of objects in images.

Future work will focus on improving the accuracy of this approach, since the image pre-processing still has a few defects. The Gaussian filter will be combined with other filter functions to achieve better denoising and binarization of images.

Received 12 June 2008; Revised 13 February 2009; Accepted 9 March 2009

References

[1] Zhang, D.S., Lu, G.J (2004). Review of shape representation and description techniques. Pattern Recognition 37(1) 1-19.

[2] Bowie, J.E., Young, I.T (1974). An analysis technique for biological shape. Computer Graphics and Image Processing 25 357-370.

[3] Peura, M., Iivarinen, J (1997). Efficiency of simple shape descriptors. In Proc. of the 3rd International Workshop on Visual Form, 443-451.

[4] Tsang, P.W.M., Yuen, P.C., Lam, F.K (1994). Classification of partially occluded objects using 3-point matching and distance transform. Pattern Recognition 27(1) 27-40.

[5] Zhang, J., Zhang, X., Krim, H., Walter, G.G (2003). Object representation and recognition in shape spaces. Pattern Recognition 36 1143-1154.

[6] Bhanu, B., Faugeras, O.D (1984). Shape matching of two-dimensional objects. IEEE Transactions on Pattern Analysis and Machine Intelligence 6(1) 137-156.

[7] Liu, H.C., Srinath, M.D (1990). Partial shape classification using contour matching in distance transformation. IEEE Transactions on Pattern Analysis and Machine Intelligence 12(11) 1072-1079.

[8] Turney, J.L., Mudge, T.N., Volz, R.A (1985). Recognizing partially occluded parts. IEEE Transactions on Pattern Analysis and Machine Intelligence 7(4) 410-421.

[9] Grimson, W.E.L (1989). On the recognition of curved objects. IEEE Transactions on Pattern Analysis and Machine Intelligence 11(6) 632-643.

[10] Gorman, J.W., Mitchell, O.R., Kuhl, F.P (1988). Partial shape recognition using dynamic programming. IEEE Transactions on Pattern Analysis and Machine Intelligence 10(2) 257-266.

[11] Yoon, S.H., Kim, J.H., Alexander, W.E., Park, S.M., Sohn, K.H (1998). An optimum solution for scale-invariant object recognition based on the multi-resolution approximation. Pattern Recognition 31 889-908.

[12] Tsang, K.M (2001). Recognition of 2D standalone and occluded objects using wavelet transform. International Journal of Pattern Recognition and Artificial Intelligence 15(4) 691-705.

[13] Li, Z.R (2000). Recognition of Overlapping Objects through Skeletonization and Decomposition. M.Eng. Thesis, Department of Mechanical and Production Engineering, National University of Singapore, 25-35.

[14] Wei, X.F (1998). Recognizing Two-dimensional Object Shapes using Prominent Visual Features. M.Eng. Thesis, Department of Mechanical and Production Engineering, National University of Singapore, 49-55.

[15] Chen, Y.H (2002). Recognizing Two-dimensional Objects using Dynamic Programming. M.Eng. Thesis, Department of Mechanical Engineering, National University of Singapore, 13-17.

[16] Hu, M.K (1962). Visual pattern recognition by moment invariants. IRE Transactions on Information Theory 8 179-187.

[17] Du, T.H., Lim, K.B (2004). 2-D occluded object recognition using wavelets. In Proc. of the 4th International Conference on Computer and Information Technology (CIT'04), 227-232.

[18] Lim, K.B., Du, T.H., Zheng, H (2004). 2-D partially occluded objects recognition using curve moments. In Proc. of the International Conference on Computer Graphics and Imaging, 303-308.

[19] Lim, K.B., Du, T.H (2006). A wavelet approach for partial occluded object recognition. In Proc. of the 1st International Symposium on Digital Manufacture (ISDM 2006), 1-4.

[20] Li, Z.R (2000). Recognition of Overlapping Objects through Skeletonization and Decomposition. M.Eng. Thesis, Department of Mechanical and Production Engineering, National University of Singapore, 32-35.

[21] Wei, X.F (1998). Recognizing Two-dimensional Object Shapes using Prominent Visual Features. M.Eng. Thesis, Department of Mechanical and Production Engineering, National University of Singapore, 25-27.

[22] Xin, K., Lim, K.B., Hong, G.S (1995). A scale-space filtering approach for visual feature extraction. Pattern Recognition 28(8) 243-250.

[23] Lim, K.B., Xin, K., Hong, G.S (1995). Detection and estimation of circular arc segments. Pattern Recognition Letters 16(6) 627-636.

[24] Mukundan, R., Ramakrishnan, K.R (1998). Moment Functions in Image Analysis: Theory and Applications. World Scientific Publishing Co., 49-89.

[25] Gupta, L (1988). Invariant planar shape recognition. Pattern Recognition 21 235-239.

Kang Lichun (1); Lim Kah Bin (2), Yao Jin (3)

(1,3) School of Manufacturing Science and Engineering Sichuan University, Chengdu, China {jxbblscgg, jinyao 163}@163.com

(2) Department of Mechanical Engineering National University of Singapore, Singapore mpelimkb@nus.edu.sg

Kang Lichun, born in 1983, is a joint Ph.D. student at both Sichuan University in China and the National University of Singapore. She is currently studying image processing and recognition. She has also studied task planning and robotic technology and has published 6 papers.

Lim Kah Bin is an associate professor and Ph.D. supervisor in the Department of Mechanical Engineering of the National University of Singapore. His research is mainly in computer vision, pattern recognition and industrial automation.

Yao Jin is a professor and vice-dean of the School of Manufacturing Science and Engineering at Sichuan University, China. His main research topics are robotics and mechanisms, enterprise information and mechatronics. He has worked as a visiting scholar at Newcastle University (UK), McGill University and Simon Fraser University (Canada).
Table 1. First four columns of the feature matrix of an object

Segment     log(φ1)    log(φ2)    log(φ3)    log(φ4)
Segment1    -2.618     -5.4396    -9.3322    -11.367
Segment2    -2.6332    -5.5022    -9.2043    -11.07
Segment3    -2.6649    -5.3784    -11.851    -11.942
Segment4    -3.825     -7.7522    -16.211    -15.663
Segment5    -2.523     -5.0902    -10.368    -12.559
Segment6    -2.5152    -5.0655    -10.543    -12.735
Segment7    -3.8304    -7.7389    -16.486    -16.008
Segment8    -2.6174    -5.2722    -12.234    -12.324

Figure 13. Recognition process of the occluded plier in image

Step 1: Pre-processing of the image

[ILLUSTRATION OMITTED]

Step 2: Feature matrix of the plier model: the plier is represented by 8 segments

Segment   log(φ1)  log(φ2)  log(φ3)  log(φ4)  log(φ5)  log(φ6)  log(φ7)
1          0.71     1.38     2.11     2.20     3.93     2.89     3.23
2          2.77     5.51     8.26     8.29    15.39    11.05    15.28
3          0.46     0.88     1.41     1.46     0.69     1.90     2.15
4         -1.01    -2.03    -2.62    -2.60    -6.06    -3.61    -7.12
5         -0.22    -0.46    -0.47    -0.44    -5.40    -0.67    -4.79
6          3.38     6.76    10.13    10.15    18.92    13.53    19.15
7         -0.16    -0.32    -0.26    -0.25    -0.84    -0.41    -1.40
8         -0.93    -1.87    -2.40    -2.39    -5.06    -3.32    -5.52

Step 3: Feature matrix of the image with occlusion: 15 corner points and therefore 15 segments in the image boundary

Segment   log(φ1)  log(φ2)  log(φ3)  log(φ4)  log(φ5)  log(φ6)  log(φ7)
1          1.09     2.16     3.24     3.29     5.86     4.37     7.52
2          0.60     1.18     1.86     1.88     3.32     2.47     1.26
3          0.80     1.58     2.44     2.47     3.85     3.26     3.13
4          3.30     6.61     9.92     9.92    16.42    13.23    17.03
5          5.00    10.01    15.01    15.01    27.87    20.01    17.54
6          3.21     6.43     9.64     9.65    17.36    12.86    18.80
7          0.12     0.22     0.49     0.51    -2.95     0.62     0.97
8          0.85     1.70     2.62     2.64     2.23     3.49     4.42
9          5.37    10.75    16.12    16.12    29.80    21.50    31.58
10         0.28     0.56     0.95     0.96     1.22     1.24     2.42
11         3.25     6.50     9.76     9.76    17.99    13.01    11.31
12         2.16     4.31     6.49     6.49    12.06     8.65     9.02
13         4.48     8.96    13.45    13.45    26.16    17.93    11.54
14        -1.18    -2.45    -3.56    -3.46    -6.63    -4.68    -6.89
15        -1.82    -4.46    -5.58    -5.80    -2.53    -8.14    -6.90

Step 4: Distance table: curves 1, 2, 4 and 6 are clearly in the image; others are "not too far"; thus the plier is in the image

Segment   log(φ1)  log(φ2)  log(φ3)  log(φ4)  log(φ5)  log(φ6)  log(φ7)
1         -0.08    -0.20    -0.33    -0.27     0.08    -0.37     0.09
2         -0.54    -1.09    -1.66    -1.63    -1.03    -2.17    -1.75
3          0.17     0.32     0.46     0.50    -0.53     0.66    -0.27
4          0.18     0.42     0.94     0.86     0.57     1.07    -0.23
5         -0.33    -0.68    -0.96    -0.95    -2.45    -1.29    -5.76
6          0.17     0.33     0.49     0.50     1.56     0.67     0.35
7         -0.44    -0.88    -1.21    -1.22    -2.07    -1.66    -3.82
8          0.25     0.58     1.16     1.07     1.56     1.36     1.37