
Fingerprint traceability of logs using the outer shape and the tracheid effect.


Traceability in the sawmilling industry is a concept that, among other benefits, could be used to more effectively control and pinpoint errors in the production process. The fingerprint approach is a traceability concept that in earlier studies has shown good potential for tracing logs between the log sorting station and the saw intake. In these studies, bark has been identified as a large source of measurement inaccuracy. This study set out to investigate whether the fingerprint recognition rate could be improved by compensating for bark with traditional bark functions or with a new automatic bark assessment based on the tracheid effect. The results show that the fingerprint recognition rate can be improved by using more sophisticated bark compensation. Compared to no bark compensation, improvements can be made by using the existing bark functions, and even further improvements can be made by using automatic bark assessment based on the tracheid effect. The results further show that the butt-end reducer between the log sorting station and the saw intake has a very negative effect on the fingerprint recognition rate, but that significant improvements in the recognition rate can be achieved by excluding the section of the log's butt end that is affected by the butt-end reduction.


Traceability can be defined in many different ways. Toyryla (1999) defines traceability as follows: "Traceability is the ability to preserve and access the identity and attributes of a physical supply chain's objects." The ability to attach and access the history of a specific manufactured object brings an abundance of opportunities when it comes to controlling the quality of that object and the process that produced it. A good example is the possibility to investigate circumstances surrounding rework and customer return of faulty products. The ability to trace a product's history makes it possible to isolate and correct errors in the manufacturing process, hence preventing the same errors from occurring again (Wall 1995, Toyryla 1999). For the same reason, many benefits may accrue as a result of being able to trace products within the wood production industry (Kozak and Maness 2003).

A large-scale issue that is often brought up is the problem of illegal logging. This problem has a negative effect on both the environment and the economy of the affected countries (Dykstra et al. 2003). Traceability would, from this viewpoint, be a way to ensure that harvested logs and their final products originate from a certified harvest site. Since the wood production chain has a diverging flow, with a number of people and companies involved in the various steps of handling (Uusijarvi 2000), traceability on a smaller scale could be viewed as a tool for the sawmilling industry to increase knowledge and understanding of factors that influence product quality and the manufacturing process.

Modern forestry and sawmilling companies often have sophisticated measurement equipment that generates large quantities of data at an individual level. These data are collected at certain points along the production chain but are unfortunately almost exclusively used as a means to control the production process close to the measurement point. Most of the generated data for a specific piece of wood are therefore discarded after the piece has moved past the measurement point. If the data for each specific piece were to be collected and stored in a database, the final product could then "be considered as an information intensive product" (Uusijarvi 2003). The challenge is therefore not to generate data, but to connect the generated data to each individual piece of wood. The reconnected data would make it possible to investigate and analyze large as well as small sections of the production chain. A good example is the connection between the diameter classes for logs in the log sorting station and the volume recovery of sawn planks and boards. Without reconnection of data, one is reduced to comparing the physical volume of a larger group of logs with the physical volume of their planks and boards. With traceability, i.e., reconnection of data, one is given the opportunity not only to analyze and find the individual logs in the group that yield high volume recovery, but, perhaps even more important, to find the logs in the group that yield low volume recovery for a specific sawing pattern. Being able to make this distinction then makes it possible to adjust process parameters such as log class limits or sawing patterns for an overall higher volume recovery.

Since sawmills have a diverging flow and modern sawmills have high production speeds, the tracing and storing of data are not well suited to manual labor. A better alternative for handling the tracing and tracking is some form of automated identification (McFarlane and Sheffi 2002). There are a number of alternative methods for making the connection between measurement data and the individual piece of wood. Many of these alternatives are based on some form of marking/reading technique. Two well-known methods are barcode identification and radio frequency identification (RFID). Barcode identification is a noncontact method used in almost every supermarket checkout counter, where the bars in the code are optically read by a laser scanner. RFID is also a noncontact method wherein an antenna picks up the RFID tag's unique identification number when it enters the antenna's reading range (Finkenzeller 2003). For forestry traceability applications, RFID is probably better suited due to the fact that the tags can be read without an optical scan, thus making the dirt and handling involved in logging almost noninfluential on the reading result, as opposed to reading barcode identification under the same circumstances. The drawback with RFID is the price of the RFID tags. A sawmill that produces 150,000 m³ of sawn wood and has an average log volume of 0.18 m³ handles approximately 1.8 million logs annually. The price for RFID tags is approximately 1 to 2 € ($0.75 to $1.50) per tag (Uusijarvi 2003). If every log is to be tagged, the annual cost for tags alone will then be millions of dollars. An alternative way of identifying individual logs is to use the already existing measurement data and make identification by means of the fingerprint approach (Chiorescu 2003).
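The tag-cost claim above is simple arithmetic; a quick sketch using the log count and the per-tag dollar prices quoted in the text (variable names are illustrative):

```python
# Rough annual RFID tag cost for the example sawmill described in the text.
logs_per_year = 1_800_000            # approx. number of logs handled annually
price_low, price_high = 0.75, 1.50   # USD per RFID tag (range given in the text)

cost_low = logs_per_year * price_low
cost_high = logs_per_year * price_high
print(f"Annual tag cost: ${cost_low:,.0f} to ${cost_high:,.0f}")
# -> Annual tag cost: $1,350,000 to $2,700,000
```

Even at the lower price, tagging every log costs well over a million dollars per year, which motivates the search for identification methods based on data the mill already collects.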

The fingerprint approach rests on the foundation that each log that enters a sawmill has unique individual features. These can be the log's outer features, such as diameter, length, taper, crook and ovality, as well as inner features, such as knot volume, distance between knot whorls, heartwood/sapwood content and so on. If one could measure these individual features accurately enough, it would then be possible to separate individual logs in the same way that human beings can be separated by the use of their fingerprints. Hence, measurement accuracy is the key to being able to uniquely define and recognize a log amongst others with the fingerprint approach (Chiorescu 2003). In that research, Chiorescu also identified bark as a factor that has a negative effect on measurement accuracy and consequently on the fingerprint recognition rate. The results showed that the 3-D scanner recognition rate dropped from 89 percent to 57 percent between the log sorting station and the saw intake when the log sorting measurements were made on non-debarked, rather than debarked, logs. Swedish log sorting stations use bark functions to compensate for bark thickness. The log's on-bark diameter is used in a linear regression model to calculate double bark thickness, which is then subtracted from the on-bark diameter to get the under-bark diameter (Zacco 1974). This method for bark deduction is, however, more suited for pricing and scaling purposes on groups of logs than for defining bark deduction on an individual level. Both the variation in bark thickness and the amount of missing bark will lead to errors in the diameter compensation.


A recent 3-D scanner application that handles the bark issue on an individual level uses the tracheid effect to estimate bark thickness and missing bark. The tracheid effect is the physical phenomenon whereby laser light spreads more along than across the wood fiber (Nystrom 2002). The 3-D scanner application for bark assessment uses the tracheid effect to determine whether the scanned surface is clearwood or bark. This is made possible by the fact that bark's ability to spread laser light is very poor compared to that of wood. By calculations based on the spread of the laser light, the application is able to virtually debark the measured log and make geometrical calculations "under bark" (Forslund 2000, Flodin 2007).

The purpose of this study is to investigate whether it is possible to increase the fingerprint recognition rate between log sorting station and saw intake by using traditional bark functions or the tracheid effect to compensate for the previously shown negative influence of bark.

Materials and methods

The sawmill that hosted the collection of data is located in the coastal part of northern Sweden. The sawmill is a midsize mill with an annual production of approximately 150,000 m³ of sawn timber. The mill uses fixed sawing patterns that are applied to logs that have been sorted into diameter classes. The data collection involved three of the stations at the sawmill. These were, sequentially, a log sorting station with a 3-D scanner, a combined debarker/butt-end reducer and a saw intake with a 3-D scanner. The butt-end reduction was performed by a knifed rotating ring with a fixed diameter (see Fig. 1). The diameter of the ring used in this study was 320 mm.


The study involved two groups with 50 randomly chosen Scots pine (Pinus sylvestris) logs in each group. The first group consisted of small-sized logs with a top diameter in the range of 148 to 154 mm. The second group consisted of larger sized logs with a top diameter in the range of 253 to 265 mm. Every log in each group was marked in both butt and top end with an ID number from 1 to 50. The second group was chosen because the diameter class in the 253 to 265-mm range is the largest diameter class to use the 320-mm butt-end reducer ring in normal production. Most of the logs in the larger sized group were therefore greatly affected and consequently shape-changed after they had gone through the butt-end reducer (see Fig. 2).

The measurement data were generated by two identical Sawco 3-D scanners, one installed at the log sorting station and one at the saw intake. The scanner has three measurement heads that use laser line triangulation to create cross sections of the log every 10 to 20 mm while the log is fed through the scanner. The scanned cross sections are then stacked by the scanner's software to recreate the log's outer shape (see Fig. 3).

The log sorting station scanner was equipped with Sawco's ProBark application that uses the tracheid effect to automatically assess whether the measured surface is bark or clearwood. Figure 4 shows the difference in how the laser line is spread in bark and in clearwood. The recording of raw data can be done with the automatic bark assessment (ABA) active or inactive. If ABA is inactive, the raw data files are recorded with the bark present in all cross sections. If ABA is active, each cross section in the raw data files is compensated with the individual log's calculated bark thickness. With ABA active, one could think of the logs as "virtually debarked".

The log sorting station data were collected in the middle of October. The logs were then stored until the saw intake data were collected in late November. There was no influence from snow on the measurement results in either of the collections. In the October collection, no snow had yet fallen, and the snow present at the time of the November collection was removed in the debarking of the logs. The data collection with the log sorting station's 3-D scanner involved four runs for each group, three times with ABA active and once with ABA inactive. The data collection at the saw intake involved one run through the 3-D scanner for each group. No repeated runs of the logs were possible at the saw intake due to the machinery set-up. During each of the runs, the sequence of the logs' ID numbers was noted so that each raw data file could be matched to the corresponding log.


Data analysis

MATLAB® 7.0 (The MathWorks 2007) was used for calculation and analysis of the raw data files. The first step was to calculate geometry measurements that define a log. The components needed to calculate the log's geometrical measurements were found in the raw data files, which hold the information about the stacked cross sections illustrated in Figure 3. The components extracted from the files were the length coordinates of the cross sections, the spatial coordinates of each cross section's center of geometry, the area of the cross sections and the average diameter of the cross sections. The measurements that were calculated are shown in Table 1. When all 11 measurement values had been calculated, a measurement accuracy and repeatability analysis was conducted for both log groups using the three runs with ABA active from the log sorting station together with the run from the saw intake. The aim of the analysis was to establish and rank the reliability of the 11 variables. Each variable was evaluated by two calculated values.

The first value was internal variation, which describes the span of the measured values that each variable assumed. Internal variation was calculated in two steps.

1. Calculate the standard deviations (SDs) for the value spans that the variables assumed in the three log sorting runs.

2. Set internal variation for each variable as the average value of the three SDs calculated in step one.

The second value was measurement difference, which describes the accuracy with which the logs' measurements were repeated between the log sorting station and the saw intake. Measurement difference was calculated in three steps.

1. Calculate absolute value difference between the three log sorting measurements and saw intake measurement.

2. Calculate the SDs of the absolute values obtained from step one.

3. Set measurement difference as the average value of the SDs calculated in step two.

The variables' reliability could then be determined as the internal variation divided by the measurement difference. The higher the quotient value was, the more reliable the variable could be considered. Table 2 shows the ranking of the variables according to this quotient value.
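The reliability quotient described above can be sketched in a few lines (toy data with two hypothetical variables, one measured precisely and one noisily; all array names and values are illustrative, not the study's measurements):

```python
import numpy as np

# 50 logs, 2 hypothetical variables; variable 0 is measured with little noise
# (SD 5), variable 1 with much noise (SD 50).
rng = np.random.default_rng(0)
true_vals = rng.uniform(4000, 5000, size=(50, 2))
noise = [5.0, 50.0]
runs = [true_vals + rng.normal(0, noise, size=true_vals.shape) for _ in range(3)]
saw = true_vals + rng.normal(0, noise, size=true_vals.shape)

# Internal variation: SD of each variable's value span, averaged over the
# three log-sorting runs (steps 1-2 above).
internal_variation = np.mean([r.std(axis=0, ddof=1) for r in runs], axis=0)

# Measurement difference: SD of |log-sorting value - saw-intake value|,
# averaged over the three log-sorting runs (steps 1-3 above).
measurement_difference = np.mean(
    [np.abs(r - saw).std(axis=0, ddof=1) for r in runs], axis=0)

# Reliability quotient: higher means the natural spread between logs is large
# relative to the measurement noise, so the variable separates logs better.
reliability = internal_variation / measurement_difference
print(reliability)  # variable 0 (low noise) gets the higher quotient
```

The precisely measured variable ends up with a much larger quotient, which is exactly why the study scales such variables up in the matching model.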


The fingerprint matching was done by means of multivariate principal components. This method had shown good results in previous studies (Chiorescu 2003). The method is a way to describe a dataset using underlying latent variables, i.e., principal components. The use of principal components is well suited to finding relationships between variables, reducing noise and allowing a dataset to be more simply described by fewer variables (Wold et al. 1987, Eriksson et al. 2001). If one imagines measured data as a point swarm in a multidimensional space where each point is an observation, principal components will align themselves orthogonally to account for as much of the variance in the point swarm as possible. The first component will account for the largest variation in the observations, the second component for the second largest variation and so on. Consequently, a linear combination of the first two or three components will not end up exactly at an original observation but will come close enough to make a good estimation. The original data matrix (X) can be projected onto the principal components to obtain the so-called principal component scores (T). The score values for the observations are determined by the original data (X) and the principal component loadings (P). The relationship between original data and principal component data can be described as:

X = T * P' + E = Σ_{i=1}^{n} t_i * p'_i + E

E is the residual matrix, i.e., the variation in the data that is not explained by the linear combination of principal components, and n is the number of components included in the model. The values in the residual matrix E will decrease for each added principal component and eventually reach zero when the number of components equals the number of original variables. Before principal components are calculated, the data are usually centered and scaled to unit variance in order to allow each variable to equally influence the observations' projected score values. In this study, the variables were centered and scaled to have a mean value equal to zero and a SD equal to one.
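The decomposition above can be illustrated with a standard PCA computed via singular value decomposition (a minimal sketch on random data, not the study's actual measurements; with all components kept, the residual E vanishes):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 11))     # 50 logs x 11 variables (toy data)

# Center and scale to unit variance (mean 0, SD 1), as described above.
Xs = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)

# PCA via SVD: columns of P are the loadings, T = Xs @ P are the scores.
U, s, Vt = np.linalg.svd(Xs, full_matrices=False)
P = Vt.T                          # loading matrix (11 x 11)
T = Xs @ P                        # score matrix (50 x 11)

# Keep n components; E is the residual of the rank-n reconstruction X = T*P' + E.
n = 3
E = Xs - T[:, :n] @ P[:, :n].T

# With all 11 components the residual is zero up to machine precision.
E_full = Xs - T @ P.T
print(np.abs(E_full).max())       # ~0 (rounding error only)
```

Because the singular values are sorted, the first columns of T capture the largest variations, which is why a handful of components can describe the log shapes well.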

If there is preexisting knowledge about the variables' reliability they can be further scaled after the initial unit variance scaling. One could scale up variables that are more reliable and likewise scale down variables that are less reliable in their influence on the results. If a variable is scaled up, the previously mentioned point swarm of observations will be stretched in the direction of that variable. The stretch will also affect the orientation of the principal components in the swarm, giving the stretched variable a higher loading value and subsequently more influence on the projections that gives the score values. In this study, the upscaling was done by multiplying each variable value with a corresponding scaling factor. The result will be that a unit variance variable multiplied with a scaling factor of, for example, two will get its SD changed from one into two.
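The effect of such a scaling factor on a unit-variance variable can be shown in a few lines (illustrative data only):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=1000)
x = (x - x.mean()) / x.std(ddof=1)   # unit variance: SD is exactly 1

scaled = 2.0 * x                     # apply a scaling factor of two
print(x.std(ddof=1), scaled.std(ddof=1))   # SD of 1 becomes SD of 2
```

In the point-swarm picture, this doubles the swarm's extent along that variable's axis, pulling the principal components toward it and increasing its loading values.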

The multivariate matching procedure was done with an algorithm described at the end of this section. The algorithm iterated a scaling vector containing the 11 scaling factors, i.e., SDs, to be used on the variables. The iterative range for each scaling factor was based on the previously mentioned reliability test of the variables. Table 3 shows the scaling factors that were used for all logs. To keep the number of iterations and computer calculation time at a reasonable level, each scaling factor in the scaling vector was given on average three different iteration values. For all the variables except length and physical volume, which had proven most reliable, the scaling factor range incorporated the value zero, which, when used, excluded the affected variable from the matching procedure. This set-up gave the algorithm 157,464 different scaling vector combinations to work through. An additional iteration step containing three values of explained variance was also included in the matching algorithm. These values were 60, 70, and 80 percent. This step determined the number of principal components to be included in the matching procedure, i.e., the number of principal components needed to explain at least the percentage of variance in the original data given by the explained variance value. The additional iteration step for explained variance gave the algorithm a total of 472,392 iteration steps to work through.

The actual matching for each iteration step was done using the log sorting station and saw intake score matrices (T) and then matching logs according to Euclidean distance. The Euclidean distance (ED) is the shortest distance between two observations and is calculated as follows, where p and q are the score values for each principal component and n is the number of principal components:

ED = √( Σ_{i=1}^{n} (q_i - p_i)² )

In this study, the score values for each saw intake log underwent Euclidean distance calculations, one log at a time, against the score values of all 50 logs from the log sorting station. Each saw intake log was then matched to its nearest neighbor, i.e., the log sorting station log with the shortest Euclidean distance. The final matching algorithm used in this study followed the sequence given below.
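The nearest-neighbor step can be sketched as follows (hypothetical score matrices standing in for the PCA projections; the low noise level is illustrative):

```python
import numpy as np

# T_sort and T_saw are hypothetical score matrices (rows = logs, columns =
# principal components). The saw intake scores are the log sorting scores
# plus a little measurement noise, mimicking re-measuring the same logs.
rng = np.random.default_rng(3)
T_sort = rng.normal(size=(50, 3))                   # log sorting station scores
T_saw = T_sort + rng.normal(0, 0.01, size=(50, 3))  # saw intake scores

# Euclidean distance from every saw-intake log to every log-sorting log.
dists = np.linalg.norm(T_saw[:, None, :] - T_sort[None, :, :], axis=2)
matches = dists.argmin(axis=1)      # index of the nearest log-sorting neighbor

recognition_rate = np.mean(matches == np.arange(50))
print(f"{recognition_rate:.0%} correctly matched")
```

With measurement noise this small relative to the spread between logs, nearly every log finds its true counterpart; bark errors and butt-end reduction inflate the noise and drive the recognition rate down.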

1. Read log sorting station and saw intake data of calculated measurements

2. Center and scale both datasets to unit variance

3. Multiply both datasets with the scaling vector

4. Calculate total number of principal components from the log sorting station data and the corresponding score and loading matrices

5. Reduce the number of principal components and also score/loading matrices to satisfy explained variance threshold value

6. Use reduced loading matrix to calculate score matrix for saw intake data

7. Calculate Euclidean distance between observations in score matrices and perform matching to nearest neighbor

8. Calculate and save the percentage of correct matching

9. Iterate explained variance threshold value and go to step 5

10. Iterate scaling vector and go to step 3
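The steps above can be condensed into a runnable sketch (toy data and a deliberately tiny scaling grid over two variables; the real study iterated 157,464 scaling vectors over all 11 variables):

```python
import numpy as np
from itertools import product

# Toy stand-in for the matching algorithm: 50 logs, 11 variables; the saw
# intake data are the log sorting data plus measurement noise.
rng = np.random.default_rng(4)
n_logs, n_vars = 50, 11
sort_data = rng.normal(size=(n_logs, n_vars))
saw_data = sort_data + rng.normal(0, 0.05, size=(n_logs, n_vars))

def autoscale(X):
    """Center each column and scale it to unit variance (step 2)."""
    return (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)

best_rate, best_setting = 0.0, None
for factors in product([1.0, 2.0], repeat=2):             # step 10, reduced grid
    scale_vec = np.ones(n_vars)
    scale_vec[:2] = factors                               # vary 2 variables only
    A = autoscale(sort_data) * scale_vec                  # step 3
    B = autoscale(saw_data) * scale_vec
    U, s, Vt = np.linalg.svd(A, full_matrices=False)      # step 4, PCA via SVD
    explained = np.cumsum(s**2) / np.sum(s**2)
    for threshold in (0.60, 0.70, 0.80):                  # step 9
        n_comp = int(np.searchsorted(explained, threshold)) + 1   # step 5
        P = Vt.T[:, :n_comp]
        T_sort, T_saw = A @ P, B @ P                      # step 6
        d = np.linalg.norm(T_saw[:, None] - T_sort[None, :], axis=2)  # step 7
        rate = np.mean(d.argmin(axis=1) == np.arange(n_logs))         # step 8
        if rate > best_rate:
            best_rate, best_setting = rate, (factors, threshold)
print(best_rate, best_setting)
```

The grid search simply keeps the scaling vector and explained-variance threshold that yield the highest percentage of correct matches, which is how the best results in the tables below were obtained.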

Five matching runs were performed for each group between saw intake and log sorting station to observe how the bark and also the butt-end reduction for the large-size logs influenced the results. That meant one run with no compensation for bark, one run where bark had been compensated with bark functions and three runs where bark had been compensated with the ABA application.

After the first findings, two alternative approaches were tried on the ABA data to evaluate whether it was possible to improve the recognition rate for the large-size group. The first approach was to alter the raw data files from the log sorting station to mimic the effects of the butt-end reducer. The second approach was to virtually cross-cut the log's butt end and perform matching on the remaining part of the log. Six different length reductions were tried.


Results
During the handling between the log sorting station and the saw intake, one of the logs from the small-size group was accidentally broken in half and was therefore left out of the saw intake data. Tables 4, 5, and 6 hold the best results of all the matching runs that were performed. The overall results from these tables show that the recognition rate can be improved with more sophisticated bark evaluation, but also that the bark issue is overshadowed by the effects of the butt-end reducer for the large-size group.

It is noticeable that the recognition rate for the large-size group in Table 5 is more than 20 percent lower than that for the small-size group in Table 4. This drop is most likely caused primarily by the shape change that occurred when the large-size logs passed through the butt-end reducer. Significant improvement in recognition rate for the large-size group could, as Table 6 illustrates, be achieved by excluding a section of the log's butt end. The best recognition rate, 76.7 percent, was achieved using ABA and a length reduction of 750 mm. This, however, still falls behind the ABA recognition rate of 88.4 percent for the small-size group. The patterns in Tables 4, 5, and 6 are, on the whole, very similar. They show the lowest recognition rate for the runs with no bark compensation. Compensation with traditional bark functions improves the recognition rate, which improves further when bark compensation is done using ABA. The attempt to alter the raw data files for the large-size group to mimic the shape change caused by the butt-end reducer did not give any significant increase in recognition rate.


Discussion
Based on these results, ABA appears to be a better alternative than traditional bark functions for handling the bark compensation on an individual level. This is also in line with what could be expected, since ABA was introduced in order to handle bark compensation on an individual level, as compared with bark functions that are sufficient for handling bark compensation on a group level but that lack precision to handle it on an individual level.

One reason why the large-size group, after length reduction, did not reach the same recognition rate as the small-size group may be that the virtual cross-cutting of the logs eliminates some of the natural variation that is used to define the logs' outer shapes. Another reason could be that the large-size group probably consisted only of butt logs, while the small-size group was composed of a random mix of butt, middle and top logs. This makes the total variation in shape more pronounced in the small-size group than in the large-size group.

The scaling factor range in the scaling vector was chosen with regard to the reliability of the variables and the number of iterations that the matching algorithm had to work through. Length was given the highest scaling factor range due to the fact that it was recognized to be far more reliable than the other variables. Bow height position, which had shown low reliability in both groups, was given a smaller range than the others. Physical volume was given the second highest values and a larger range, since it was the second most reliable variable in both groups, but at the same time sensitive to the effects of the butt-end reducer within the large-size group. Ideally, the study would have contained a larger number of logs. This would have made it possible to set aside an independent test set of logs on which the matching model could have been validated. With a small number of logs, there is a risk of overfitting the model so that it works very well on the training set but poorly on a test set.

The number of principal components included in the matching procedure never needed to exceed three in any run in order to satisfy the explained variance threshold values of 60, 70 and 80 percent. This illustrates a key advantage of the method: more than 80 percent of the variance in the 11 original variables is explained by only three new latent variables, i.e., principal components. Three different threshold values were tried in order to see whether the explained variance played a large part in the recognition rate. The best results, in almost every run, came with the highest threshold value of 80 percent. It might have been interesting to try even higher threshold values, but with that comes the risk of rapidly increasing the number of components and modeling noise.

The matching procedure for both the small- and large-size groups was done by calculating the Euclidean distance between each log measured at the saw intake and all the logs measured at the log sorting station. Consequently, a specific log could get multiple hits, i.e., be matched to several logs. An alternative approach would have been to remove matched logs one by one from the log sorting station data in order to eliminate the risk of multiple hits. However, after a mismatch, this approach automatically yields more incorrect matches, because the log removed from the log sorting data is no longer available for a correct match further down the line. Eliminating multiple hits would also eliminate the alert they provide that two or more logs have very similar shapes. This alternative approach was therefore considered less suited for the matching procedure.

Another question raised during the study was whether each log measured at the saw intake should be matched within the group it belonged to or matched to both groups containing both small- and large-size logs. The approach chosen in this study was to match logs only within the same group. This approach would probably also be the best solution for a practical application. A sawmill of the size that hosted this study holds on average 70,000 to 80,000 logs in storage between the log sorting station and the saw intake. Each diameter class includes on average 3,000 to 4,000 logs when the class is run through the sawing procedure. If matching were to be done within a specific diameter class instead of the entire storage between log sorting station and saw intake, it would save a lot of calculation time needed to perform the matching. The drawback with this approach is that the matching becomes sensitive to mistakes; for example, if the logyard tractor by accident places timber from the wrong diameter class onto the sawing line.

All in all, the fingerprint approach offers good potential for tracing large numbers of logs between the log sorting station and the saw intake very cost-effectively. The compromise is that the matching between logs is a probability match, rather than a secure match such as that obtained when using, for example, RFID. This indicates that the method could be well suited as a tool for process improvement, where low-probability matches and multiple hits can be handled, but less suited as an origin traceability system, which requires a more secure match. An interesting idea for future work would be to investigate the extent to which "twin logs" that the matching algorithm confuses and mismatches actually differ from each other. If the purpose is process control, and these "twin logs" yield sawn timber of the same volume and quality, then a certain degree of confusion might be even more acceptable, considering the benefits that come with the fingerprint approach to tracing.


Conclusions
The fingerprint recognition rate can be improved by the use of more sophisticated bark compensation. Compared to no compensation, improvements can be made by using the traditional bark functions, and even further improvements can be made by using automatic bark assessment based on the tracheid effect. The butt-end reducer between the log sorting station and the saw intake has a very negative effect on the fingerprint recognition rate. Significant improvement in fingerprint recognition rate can be achieved by excluding the section of the log's butt end that is affected by the butt-end reduction.

Literature cited

Chiorescu, S. 2003. The forestry-wood chain, simulation technique--measurement accuracy--traceability concept. Doctoral thesis 2003:03, Lulea Univ. of Tech., Skelleftea, Sweden.

Dykstra, D.P., G. Kuru, and R. Nussbaum. 2003. Tools and methodologies for independent verification and monitoring: Technologies for log tracking. Inter. Forestry Review 5(3):262-267.

Eriksson, L., E. Johansson, N. Kettaneh-Wold, and S. Wold. 2001. Multi- and Megavariate Data Analysis. Umetrics AB, Umea, Sweden.

Finkenzeller, K. 2003. RFID Handbook: Fundamentals and Applications in Contactless Smart Cards and Identification, 2nd ed. John Wiley and Sons Ltd., Chichester, West Sussex, England.

Flodin, J. 2007. Evaluation of transition from manual to automatic bark assessment using a 3D-scanner. Masters thesis 2007:011, Lulea Univ. of Tech., Skelleftea, Sweden. (In Swedish with English summary.)

Forslund, M. 2000. Evaluation of new technology aiming at increased accuracy when measuring the dimension of unbarked saw logs. Res. report no. P 0012041. SP Tech. Res. Inst. of Sweden, Wood Tech. (SP Tratek).

Kozak, R.A. and T.C. Maness. 2003. A system for continuous process improvement in wood products manufacturing. Holz als Roh- und Werkstoff 61:95-102.

The MathWorks. 2007. MATLAB 7.0. The MathWorks, Inc., Natick, Massachusetts.

McFarlane, D. and Y. Sheffi. 2002. The impact of automatic identification on supply chain operations. Inter. J. of Logistics Management 14(1):1-17.

Nystrom, J. 2002. Automatic measurement of compression wood and spiral grain for the prediction of distortion in sawn wood products. Doctoral thesis 2002:37. Lulea Univ. of Tech., Skelleftea, Sweden.

Toyryla, I. 1999. Realizing the potential of traceability--A case study research on usage and impacts of product traceability. Doctoral thesis MA:97. Helsinki Univ. of Tech., Espoo, Finland.

Uusijarvi, R. 2000. Automatic tracking of wood-connecting properties from tree to wood product. Doctoral thesis R-99-43, Swedish Royal Univ., Stockholm. (In Swedish with English summary.)

--. 2003. Linking raw material characteristics with industrial needs for environmentally sustainable and efficient transformation processes (LINESET). QLRT-1999-01476 Final Rept. Res. report no. P 0309034. SP Tech. Res. Inst. of Sweden, Wood Tech. (SP Tratek).

Wall, B. 1995. Materials traceability: The a la carte approach that avoids data indigestion. Ind. Manage. Data Syst. 95(1):10-11.

Wold, S., K. Esbensen, and P. Geladi. 1987. Principal component analysis. Chemometrics and Intelligent Lab. Systems 2:37-52.

Zacco, P. 1974. The bark thickness of saw logs. Rept. No. R 90. Swedish Royal College of Forestry, Dept. of Forest Products. 87 pp. (In Swedish with English summary.)

The authors are, respectively, Research Scientist, Lulea Univ. of Technology, Dept. of Wood Technology, Skelleftea Campus, Skelleftea, Sweden; Associate Professor, SP Technical Research Inst. of Sweden, Wood Technology, Skelleftea, Sweden; and Professor, Lulea Univ. of Technology, Dept. of Wood Technology, Skelleftea Campus, Skelleftea, Sweden (anders.gronlund@ltu.se). This paper was received for publication in June 2007. Article No. 10369.
Table 1.--Measurements calculated for each log.

Length (Len)           Distance from the log's top to butt end (mm)
Physical Volume (PhV)  The log's physical volume (dm³)
Top Diameter (ToD)     The log's top-end diameter (mm)
Middle Diameter (MiD)  The log's middle diameter (mm)
Butt Diameter (BuD)    The log's butt-end diameter (mm)
Total Taper (ToT)      The absolute value of the change in diameter per
                       meter of log length from the top end to a point
                       measured one meter from the butt end (mm/m)
Top Taper (TpT)        The absolute value of the change in diameter from
                       the top end of the log to a point measured one
                       meter from the top end (mm/m)
Butt Taper (BuT)       The absolute value of the change in diameter from
                       the butt end of the log to a point measured one
                       meter from the butt end (mm/m)
Bow Height (BoH)       The maximum distance between the log's curvature
                       and a straight line connecting the centers of the
                       log's end surfaces (mm)
Bow Radius (BoR)       The radius of a circle fitted to the log's length
                       and bow height (m)
Bow Position (BoP)     The distance from the log's top end to the point
                       of maximum bow height (mm)
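For illustration, most of the Table 1 measurements can be derived from a sampled log profile. The sketch below is a simplified stand-in, not the paper's scanner software: it assumes the 3D scanner yields positions along the log, a diameter at each position, and centerline coordinates, and computes a subset of the variables (bow radius is omitted).

```python
import numpy as np

def log_measurements(z, diam, cx, cy):
    """Compute a subset of the Table 1 measurements from a sampled profile.

    z      : positions along the log, top end first (mm, ascending)
    diam   : diameter at each position (mm)
    cx, cy : centerline coordinates at each position (mm)
    """
    length = z[-1] - z[0]                                  # Len (mm)
    # Physical volume: sum of truncated-cone slices, mm^3 -> dm^3
    r = diam / 2.0
    vol = np.sum(np.pi / 3.0 * np.diff(z)
                 * (r[:-1]**2 + r[:-1] * r[1:] + r[1:]**2)) / 1e6
    top_d = diam[0]                                        # ToD (mm)
    mid_d = np.interp(z[0] + length / 2, z, diam)          # MiD (mm)
    butt_d = diam[-1]                                      # BuD (mm)
    # Tapers: absolute diameter change per meter over the stated spans (mm/m)
    d_1m_from_top = np.interp(z[0] + 1000, z, diam)
    d_1m_from_butt = np.interp(z[-1] - 1000, z, diam)
    top_taper = abs(d_1m_from_top - top_d)                 # over 1 m
    butt_taper = abs(butt_d - d_1m_from_butt)              # over 1 m
    total_taper = abs(d_1m_from_butt - top_d) / ((length - 1000) / 1000)
    # Bow height: max distance from the centerline to the chord
    # connecting the centers of the two end surfaces (mm)
    p0 = np.array([cx[0], cy[0], z[0]])
    chord = np.array([cx[-1], cy[-1], z[-1]]) - p0
    pts = np.column_stack([cx, cy, z]) - p0
    dist = np.linalg.norm(np.cross(pts, chord), axis=1) / np.linalg.norm(chord)
    return dict(Len=length, PhV=vol, ToD=top_d, MiD=mid_d, BuD=butt_d,
                ToT=total_taper, TpT=top_taper, BuT=butt_taper,
                BoH=dist.max(), BoP=z[dist.argmax()] - z[0])
```

For a perfectly straight conical log with a uniform 10 mm/m taper, all three taper measures coincide at 10 mm/m and the bow height is zero, which is a quick sanity check on the geometry.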

Table 2.--Variable reliability (a higher quotient value suggests
that the related variable can be more reliably used in the
multivariate matching procedure).

 Small-size logs Large-size logs
Variable Quotient Variable Quotient

 Len 26.9 Len 24.9
 PhV 5.4 PhV 4.4
 BoH 4.1 ToD 2.2
 ToT 3.3 MiD 2.1
 BoR 2.8 ToT 1.7
 BuT 2.6 TpT 1.4
 MiD 2.6 BoR 1.2
 ToD 2.3 BuD 1.2
 BuD 2.2 BoH 1.2
 TpT 1.7 BoP 1.1
 BoP 1.6 BuT 1.0

Table 3.--Iterative range for the scaling factors used on the
different variables.

Variable Scaling factor

 Len 3 to 5
 PhV 1 to 4
 ToD 0 to 2
 MiD 0 to 2
 BuD 0 to 2
 ToT 0 to 2
 TpT 0 to 2
 BuT 0 to 2
 BoH 0 to 2
 BoR 0 to 2
 BoP 0 to 1
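The scaling factors in Table 3 weight each variable in the multivariate matching step, with the most reliable variables (Table 2) given the largest weights. As a hedged sketch only (the paper's actual fingerprint matching procedure is more elaborate), a simple weighted nearest-neighbour match would standardize each variable over the log-sorting records, apply the scaling factors, and pick the record with the smallest distance to the log measured at the saw intake:

```python
import numpy as np

def match_log(saw_intake_log, sorted_logs, scale):
    """Return the index of the best-matching log-sorting record.

    saw_intake_log : 1-D array of the Table 1 variables for one log
    sorted_logs    : 2-D array, one row per log from the log sorting station
    scale          : per-variable scaling factors (e.g. from Table 3)
    """
    # Standardize each variable over the reference set, then weight it
    mu = sorted_logs.mean(axis=0)
    sd = sorted_logs.std(axis=0)
    sd[sd == 0] = 1.0                      # guard against constant columns
    z_ref = (sorted_logs - mu) / sd * scale
    z_log = (saw_intake_log - mu) / sd * scale
    # Euclidean distance in the scaled space; smallest distance wins
    return int(np.argmin(np.linalg.norm(z_ref - z_log, axis=1)))
```

The recognition rates in Tables 4 to 6 correspond to the fraction of logs for which this kind of match returns the correct record; iterating the scaling factors over the ranges in Table 3 is a way to search for the weighting that maximizes that rate.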

Table 4.--Fingerprint recognition rate for small-size logs with
different bark compensation methods.

Bark compensation    Fingerprint recognition    Average recognition
                         rate (percent)            rate (percent)

No compensation              77.6                     77.6
Bark functions               83.7                     83.7
ABA 1                        85.7
ABA 2                        91.8                     88.4 (mean of ABA 1-3)
ABA 3                        87.6

Table 5.--Fingerprint recognition rate for the large-size logs with
different bark compensation methods.

Bark compensation    Fingerprint recognition    Average recognition
                         rate (percent)            rate (percent)

No compensation              54.0                     54.0
Bark functions               62.0                     62.0
ABA 1                        60.0
ABA 2                        60.0                     63.3 (mean of ABA 1-3)
ABA 3                        70.0

Table 6.--Fingerprint recognition rate (percent) for large-size logs with
different length reductions and different bark compensation methods.

Length reduction (mm)  No compensation  Bark functions  Average ABA

         250                54.0            58.0           63.3
         500                56.0            62.0           75.3
         750                66.0            70.0           76.7
        1000                66.0            72.0           73.3
        1250                70.0            68.0           71.3
        1500                62.0            62.0           68.7
COPYRIGHT 2008 Forest Products Society

Article Details
Author: Flodin, Jens; Oja, Johan; Gronlund, Anders
Publication: Forest Products Journal
Article Type: Report
Date: Apr 1, 2008
