
Classification of acoustic emission signals using wavelets and Random Forests: application to localized corrosion.

1 GENERAL SPECIFICATIONS

In the crevice corrosion process, gas evolution (i.e. bubbles) produced by the chemical reactions generates AE activity, which can be recorded by sensors located on the surface of the specimen. Since the AE signals associated with crevice corrosion have a low energy content, it is very difficult to separate them from the environmental noise [1, 2]. An in-depth effort was therefore devoted to preprocessing the corresponding waveforms, and a major motivation was to find the most relevant set of features. The chosen classification algorithm must be fast, reliable and not very sensitive to a mislabeled learning database (due to the real-time and reliability constraints of industrial monitoring). Moreover, it is preferable to provide a confidence level for the final decision. A complete approach combining waveform preprocessing and Random Forest supervised classification has been implemented. To validate this new methodology, synthetic data were first used in an in-depth analysis comparing Random Forests (RF) to the k-Nearest Neighbor (k-NN) algorithm in terms of accuracy and processing speed. Tests on real cases involving noise and crevice corrosion were then conducted. In order to build various data sets, pH, temperature, NaCl concentration and H₂O₂ addition were controlled so as to obtain crevice corrosion in some experiments and no corrosion in the others. The purpose of the classification is to separate the AE signals due to corrosion from those due to noise.

2 WAVEFORM PROCESSING

This important preliminary step is performed on the waveforms directly acquired from the sensors. The motivation is to normalize the AE signals so that they can be compared consistently: useless information can be discarded, the waveforms can be stored numerically for further analysis, and they can be denoised. The waveform preprocessing consists of three steps: pre-trigger removal, tail cutting and Shape Preserving Interpolation (SPI) resampling. Tail cutting consists of dynamically cutting the end of the waveform according to an energy criterion. For each point in the waveform, the cumulative energy computed from the beginning is compared to the energy contained in a 10 µs window following that point. If this window energy is less than a threshold T (in %) of the cumulative energy, the corresponding point is taken as the end of the signal.
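As an illustration of this tail-cutting criterion, a minimal sketch in Python/NumPy is given below; the function name, the sampling-rate argument and the default threshold value are assumptions, since the paper does not publish its implementation.

```python
import numpy as np

def cut_tail(waveform, fs, threshold_pct=1.0, window_us=10.0):
    """Dynamically cut the end of an AE waveform (illustrative sketch).

    For each sample, the energy contained in the 10-us window that follows it
    is compared with the cumulative energy from the start; the first sample
    for which the window holds less than `threshold_pct` % of the cumulative
    energy marks the end of the signal.
    """
    energy = waveform.astype(float) ** 2
    cumulative = np.cumsum(energy)                    # energy from the beginning
    win = max(1, int(round(window_us * 1e-6 * fs)))   # 10-us window in samples

    for i in range(len(waveform) - win):
        window_energy = energy[i + 1:i + 1 + win].sum()
        if window_energy < (threshold_pct / 100.0) * cumulative[i]:
            return waveform[:i + 1]                   # end of signal found
    return waveform                                   # criterion never met: keep everything
```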

Wavelet denoising [3] can also be performed, using the wden function from the Matlab Wavelet Toolbox with the following settings: the universal threshold of Donoho [4] is used to select the wavelet coefficients, combined with soft thresholding rescaled by a level-dependent estimation of the noise. The decomposition is performed at level 3 with the symlet-8 (sym8) mother wavelet.
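The paper relies on MATLAB's wden routine; the sketch below is only a rough PyWavelets equivalent of the stated settings (Donoho's universal threshold, soft thresholding with a level-dependent noise estimate, level-3 decomposition, sym8 mother wavelet), not the authors' code.

```python
import numpy as np
import pywt

def denoise(waveform, wavelet='sym8', level=3):
    """Wavelet denoising: universal threshold, soft thresholding and a
    level-dependent noise estimate (median absolute deviation / 0.6745)."""
    coeffs = pywt.wavedec(waveform, wavelet, level=level)
    n = len(waveform)
    denoised = [coeffs[0]]                                   # keep approximation coefficients
    for detail in coeffs[1:]:
        sigma = np.median(np.abs(detail)) / 0.6745           # per-level noise estimate
        thr = sigma * np.sqrt(2.0 * np.log(n))               # Donoho's universal threshold
        denoised.append(pywt.threshold(detail, thr, mode='soft'))
    return pywt.waverec(denoised, wavelet)[:n]
```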

3 FEATURES EXTRACTION AND RANDOM FOREST CLASSIFICATION

Each waveform is turned into a compact representation through a set of 30 features in the time, frequency and wavelet domains (Tab. 1). Besides common features such as amplitude, duration, energy, rise time, partial powers or peak frequency, other features derive from speech recognition and sound description studies. The wavelet features are a specific set based on wavelet packet energy [5]: the energy percentage of each terminal node of the wavelet packet tree is computed, leading to 8 wavelet packet energy features.
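A possible way to compute the 8 wavelet packet energy features (R23-R30) is sketched below with PyWavelets; the function name and the reuse of the sym8 wavelet for the packet tree are assumptions carried over from the denoising step.

```python
import numpy as np
import pywt

def wavelet_packet_energies(waveform, wavelet='sym8', level=3):
    """Energy percentage of the 8 terminal nodes of a level-3 wavelet packet
    tree ('a' = low-pass = L, 'd' = high-pass = H, so 'aad' ~ LLH)."""
    wp = pywt.WaveletPacket(data=waveform, wavelet=wavelet, maxlevel=level)
    nodes = wp.get_level(level, order='natural')        # aaa, aad, ..., ddd ~ LLL ... HHH
    energies = np.array([np.sum(node.data ** 2) for node in nodes])
    return 100.0 * energies / energies.sum()             # percentages summing to 100
```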

The supervised classification is based on Random Forests, an ensemble learning method that constructs a multitude of decision trees during training, each capable of producing a response (vote) when presented with a new set of features during testing. The algorithm was originally developed by Leo Breiman and Adele Cutler in 2001 [6]. During the testing phase, each AE signal is run down each tree of the forest, leading to T votes. The final decision can be obtained in two different ways. The first is the usual majority voting (MV) rule. In this work, another decision rule, called the security voting (SV) rule, is introduced: a given AE signal is assigned to a class only if more than 70% of the total number of trees voted for that class. The whole approach combining the waveform preprocessing and the RF supervised classification has been implemented in the software RF-CAM ("Random Forests Classification for Acoustic emission Monitoring").
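A minimal sketch of the two decision rules on top of a Random Forest is given below, using scikit-learn rather than the authors' RF-CAM software; the function name and the use of predict_proba as a proxy for the per-class vote fraction are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def rf_with_security_voting(X_train, y_train, X_test, n_trees=200, sv_threshold=0.70):
    """Majority voting (MV) and security voting (SV, 70 % of the trees).
    Signals for which no class reaches the SV threshold are left undecided (None)."""
    clf = RandomForestClassifier(n_estimators=n_trees, max_features='sqrt', random_state=0)
    clf.fit(X_train, y_train)

    # Averaged class probabilities over the trees; with fully grown trees this is
    # approximately the fraction of trees voting for each class.
    vote_fraction = clf.predict_proba(X_test)

    mv = clf.classes_[np.argmax(vote_fraction, axis=1)]               # majority voting
    sv = [clf.classes_[np.argmax(p)] if np.max(p) >= sv_threshold else None
          for p in vote_fraction]                                     # security voting
    return mv, sv
```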

4 VALIDATION ON GROUND TRUTH DATA

Ground truth data come from the synthetic dataset collected in [7]. These data represent four clearly identified classes (2000 signals per class) and are described by a set of M = 9 features. A training set comprising 70% of the data (5600 signals taken at random) is built; the remaining 30% (2400 signals) constitute the testing set. In order to test the robustness of both algorithms with respect to mislabeled data and the introduction of noise and outliers, a new evaluation tool called the alter-class matrix (ACM) is used [8]. The ACM is an n-square matrix designed to alter the original training set so as to simulate uncertainty on labeled data for supervised classification. For example, the class C1 becomes C1-alter, composed of 79% of waveforms from C1, 2% from C2, 15% from C3 and 4% from C4. Figure 1 shows the alteration of class C1 from the training set for different values of the trust factor (100, 90 and 60). Forcing the data to be mislabeled using the ACM can also be seen as a random and progressive introduction of noise and outliers into the original data.
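The following sketch illustrates how one row of an ACM could be applied to build an altered class such as C1-alter; the function name and its arguments are hypothetical, since [8] defines the ACM but its implementation is not reproduced here.

```python
import numpy as np

def alter_class(X, y, classes, acm_row, altered_label, n_samples, seed=0):
    """Build an altered training class from one row of an alter-class matrix.

    acm_row[j] is the percentage of the altered class drawn from classes[j];
    e.g. [79, 2, 15, 4] turns C1 into C1-alter made of 79 % C1, 2 % C2,
    15 % C3 and 4 % C4 waveforms, all labelled `altered_label`.
    """
    rng = np.random.default_rng(seed)
    X_parts, y_parts = [], []
    for pct, source in zip(acm_row, classes):
        k = int(round(n_samples * pct / 100.0))                   # waveforms borrowed from `source`
        idx = rng.choice(np.flatnonzero(y == source), size=k, replace=False)
        X_parts.append(X[idx])
        y_parts.append(np.full(k, altered_label))                 # relabelled as the altered class
    return np.concatenate(X_parts), np.concatenate(y_parts)
```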

In all these tests, RF is compared to the widely used and efficient k-Nearest Neighbor (k-NN) algorithm [9]. The following parameters have been set for the algorithms:

- Random Forests: the number of trees is set to T = 200 and the number of randomly selected features is set to the value recommended for classification, i.e. m = √M, where M is the total number of features.

- k-NN: the optimal value k = 15 is obtained by the leave-one-out cross-validation (LOOCV) method [10], using the usual Euclidean distance (a minimal LOOCV sketch for choosing k is given below).

Recognition rates are computed for both the RF and k-NN algorithms; they correspond to the ratio of the number of correctly predicted labels to the total number of signals in the testing set. Globally, RF outperforms k-NN by up to 10% and is less sensitive to a slightly mislabeled library. For example, at a trust factor of 70, the recognition rates on the synthetic data are 96.1% for RF and 88.9% for k-NN. Moreover, k-NN is roughly linearly sensitive to an increase in the number of waveforms and in the number of features, whereas RF is almost unaffected.
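A minimal sketch of the LOOCV selection of k with scikit-learn, under the assumption that classification accuracy is the selection criterion:

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def select_k_loocv(X_train, y_train, k_values=range(1, 31)):
    """Pick the number of neighbours k by leave-one-out cross-validation
    (Euclidean distance, the default metric of KNeighborsClassifier)."""
    scores = []
    for k in k_values:
        knn = KNeighborsClassifier(n_neighbors=k)     # metric='minkowski', p=2 -> Euclidean
        acc = cross_val_score(knn, X_train, y_train, cv=LeaveOneOut()).mean()
        scores.append(acc)
    return list(k_values)[int(np.argmax(scores))]      # k maximising LOOCV accuracy
```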

5 APPLICATION TO REAL CORROSION DATA

Experiments are conducted on 304L stainless steel. Samples are immersed in a corrosive solution with different values of NaCl concentration (2 g/L or 35 g/L), pH (6.7, 8.3 or 10.5) and temperature (25 °C or 50 °C). On average, each corrosion experiment has been conducted twice under the same conditions. The pre-treatment of the samples (before immersion) is performed step by step as follows: grinding to 400#, rinsing, chemical passivation (20 vol.% HNO3 for 1 hour at room temperature) and drying in air. The stainless steel sheet is then assembled between two formers made of polymethyl methacrylate (PMMA). This device creates two confined areas, one on each side of the specimen, in order to promote crevice corrosion (Fig. 2). The open circuit corrosion potential (OCP) is continuously recorded using a saturated calomel electrode (SCE) as reference (Fig. 2).

The AE acquisition system is a Mistras AEDSP embedded computer board. The sensors (R15) are applied to the surface of the specimen, outside the corrosive solution (sensor-to-sample distance = 40 mm). R15 sensors were chosen for their good frequency sensitivity around 150 kHz. Sensor coupling is achieved with vacuum grease. To ensure the repeatability of the results, the assembly torque is controlled with a torque wrench and set to 3 N·m. Acquisition parameters were set as follows: peak definition time (PDT) = 200 µs, hit definition time (HDT) = 400 µs, hit lockout time (HLT) = 200 µs. The acquisition threshold depends on the environmental noise and thus varies from 19 dB to 28 dB. A sampling frequency of 4 MSPS (and 4k points per waveform) was chosen as a very good compromise between the size of the data to process (real-time constraint) and the robustness of the extracted features (reliability constraint). A set of 30 features is then extracted from the recorded waveforms (Tab. 1).

Some experiments are performed with the addition of H₂O₂ in order to accelerate corrosion; the OCP drop shows that crevice corrosion initiates as soon as H₂O₂ is added. For the sake of illustration, an experiment presenting no corrosion and performed at the higher temperature is depicted in Figure 3, together with an experiment showing crevice corrosion. The specimen exhibiting no corrosion presents a higher AE activity than the corroded one because the corresponding test is conducted at 50 °C, which induces high acoustic activity due to the thermal expansion of the device and the generation of bubbles, both considered as noise.

In order to build various data sets, pH and temperature values, NaCl concentration and H₂O₂ addition are controlled to obtain crevice corrosion in some experiments and no corrosion in the others. 13 out of the 17 experiments (about 76%) are used for the training set. In the absence of corrosion, the OCP does not decrease (Fig. 3a). However, even without corrosion, some AE activity is observed throughout the experiment. These signals are attributed to noise (i.e. thermal expansion phenomena and bubble evolution within the liquid due to the temperature of the test, 50 °C). A first class (denoted NC) is built from 1200 of these signals. In the experiments involving crevice corrosion (Fig. 3b), AE activity starts before the addition of H₂O₂ and is mainly assigned to noise. After H₂O₂ addition, the gathered signals are mainly assigned to corrosion and constitute a second class (denoted CC), composed of 1167 signals. It should be noted that this CC class is not pure and also contains noise signals. The previous study on the use of the ACM with altered data shows that the recognition results remain satisfactory even if classes are altered by up to 20%. Besides, since there is no privileged class in real conditions, special attention has been paid to building a well-balanced training set, for a total of 2367 waveforms. The remaining four experiments are used to construct the different testing sets: two of them present no corrosion (denoted NCa and NCb) and the other two show crevice corrosion (denoted CCa and CCb), for a total of 1311 waveforms. Preprocessing and feature extraction have been applied to all waveforms.

The percentages of signals given for the majority voting (MV) rule refer to the original total number of signals of the test set. The percentages given for the security voting (SV) rule refer to the number of remaining signals after the 70% security threshold is applied.

For each test (Table 2), the proper majority class has been recognized, and the results show that using SV reinforces the usual MV decision when it comes to assigning signals to a specific class. Moreover, for each test case, most of the signals corresponding to the minority class are discarded, thus strengthening the trend of the majority class.

6 CONCLUSIONS

Results show that this approach performed best on ground truth data and was also very promising on real data, especially in terms of reliability, performance and speed, which are serious criteria for corrosion monitoring in the chemical industry. The results associated with the usual majority voting (MV) rule were satisfactory in terms of finding the proper majority class. In order to take the industrial reliability constraint into account, another decision rule called security voting (SV) was implemented as a confidence level (set to 70%) and reinforced the final decision taken with MV.

Future prospects include enlarging the learning library in order to identify other corrosion mechanisms such as pitting corrosion. Finally, this methodology, which was developed and validated at the laboratory scale, can be applied at the industrial scale, provided that a consistent new learning library is constructed.

ACKNOWLEDGEMENTS

The authors wish to thank the French Ministry of Economy, Finance and Industry, together with the AXELERA competitiveness cluster, for their financial support within the FUI IREINE project ("Innovation for the REliability of INdustrial Equipments"). The authors are also grateful to the industrial partners Solvay, IFPEN, Arkema and Mistras for their technical collaboration, and to R. Di Folco for his technical support regarding the corrosion experiments.

REFERENCES

[1] Y. Kim, M. Fregonese, H. Mazille, D. Feron, G. Santarini (2006), Study of oxygen reduction on stainless steel surfaces and its contribution to acoustic emission recorded during corrosion processes, Corrosion Science 48, 3945-3959.

[2] Y. Kim, M. Fregonese, H. Mazille, D. Feron, G. Santarini (2003), Ability of acoustic emission technique for detection and monitoring of crevice corrosion on 304L austenitic stainless steel, NDT&E International 36, 553-562.

[3] D. Donoho, I. Johnstone (1994), Ideal spatial adaptation by wavelet shrinkage, Biometrika 81, 425-455.

[4] D. Donoho (1995), De-noising by soft-thresholding, IEEE Transactions on Information Theory 41 613-627.

[5] R. Coifman, Y. Meyer, S. Quake, V. Wickerhauser (1994), Signal processing and compression with wavelet packets, in: J. S. B. et al. (Ed.), Wavelets and Their Applications, volume 442, Springer, pp. 363-379.

[6] L. Breiman (2001), Random forests, Machine Learning 45, 5-32.

[7] A. Sibil, N. Godin, M. R'Mili, E. Maillet, G. Fantozzi (2012), Optimization of acoustic emission data clustering by a genetic algorithm method, Journal of nondestructive evaluation 31 169-180.

[8] N. Morizet, N. Godin, J. Tang, E. Maillet, M. Fregonese, B. Normand (2016), Classification of acoustic emission signals using wavelets and Random Forests: application to localized corrosion, Mechanical Systems and Signal Processing 70-71, 1026-1037.

[9] T. Cover, P. Hart (1967), Nearest neighbor pattern classification, IEEE Transactions on Information Theory 13, 21-27.

[10] S. Arlot, A. Celisse (2010), A survey of cross-validation procedures for model selection, Statistics Surveys 4 40-79.

N. Morizet, N. Godin, J. Tang, M. Fregonese, B. Normand

INSA de Lyon, MATEIS Laboratory - UMR CNRS 5510, 7 Avenue Jean-Capelle, 69621 Villeurbanne Cedex, France.
Table 1: The set of the 30 features. These features are recalculated
from the waveforms. "L" = low-pass filter, "H" = high-pass filter; thus
"LLH" consists of cascading two low-pass filters and one high-pass
filter.

Group               ID   Feature                            Unit

Time features       R1   Amplitude                          V
                    R2   Duration                           s
                    R3   Energy                             V²
                    R4   Zero-crossings                     -
                    R5   Rise time                          s
                    R6   Temporal centroid                  s
                    R7   Temporal decrease                  α
Frequency features  R8   Partial Power 1 ([100; 200] kHz)   %
                    R9   Partial Power 2 ([200; 400] kHz)   %
                    R10  Partial Power 3 ([400; 700] kHz)   %
                    R11  Partial Power 4 ([700; 1000] kHz)  %
                    R12  Frequency centroid                 Hz
                    R13  Peak frequency                     Hz
                    R14  Spectral spread                    Hz
                    R15  Spectral skewness                  -
                    R16  Spectral kurtosis                  -
                    R17  Spectral slope                     -
                    R18  Roll-off frequency                 Hz
                    R19  Spectral spread to peak            Hz
                    R20  Spectral skewness to peak          -
                    R21  Spectral kurtosis to peak          -
                    R22  Roll-on frequency                  Hz
Wavelet features    R23  Wavelet Packet Energy 1 (LLL)      %
                    R24  Wavelet Packet Energy 2 (LLH)      %
                    R25  Wavelet Packet Energy 3 (LHL)      %
                    R26  Wavelet Packet Energy 4 (LHH)      %
                    R27  Wavelet Packet Energy 5 (HLL)      %
                    R28  Wavelet Packet Energy 6 (HLH)      %
                    R29  Wavelet Packet Energy 7 (HHL)      %
                    R30  Wavelet Packet Energy 8 (HHH)      %

Table 2: RF classification results of the proposed method for NC and
CC experiments.

Experiment  Votes for NC class  Votes for CC class  Decision rule

NCa         179 (76.8%)          54 (23.2%)         MV
            103 (92.8%)           8 (7.2%)          SV
NCb         459 (95.6%)          21 (4.4%)          MV
            410 (98.3%)           7 (1.7%)          SV
CCa         152 (34.9%)         283 (65.1%)         MV
             89 (29.3%)         215 (70.7%)         SV
CCb          69 (42.3%)          69 (42.3%)         MV
             40 (37%)            72 (64.3%)         SV