
Fault Diagnosis of High-Power Tractor Engine Based on Competitive Multiswarm Cooperative Particle Swarm Optimizer Algorithm.

1. Introduction

The diesel engine is the most important part of a tractor. Unlike automobiles, which operate under mild conditions in relatively benign environments, tractor diesel engines have complex structures and must work under complicated, harsh operating conditions. Consequently, the tractor engine and other key parts are more prone to failure, and real-time, accurate fault diagnosis is even more important for tractors than for automobiles.

Cao used K-means clustering analysis to cluster the data and designed a BP neural network to diagnose the running state of the diesel engine; the fusion of the K-means algorithm and the BP neural network effectively improved diagnostic accuracy [1]. Nan collected the vibration signal, instantaneous speed signal, and cylinder pressure signal of the cylinder head of a tractor engine and diagnosed the fault signal via an RBF neural network [2]. Verbert used the weighted evidence synthesis method, determined the weight coefficients from the degree of conflict between the obtained pieces of evidence, and finally employed D-S evidence theory for synthesis, thereby effectively improving the accuracy and reliability of fault diagnosis [3]. Hui put forward a theory of online monitoring and fault diagnosis based on Dempster-Shafer evidence theory and applied it to fusion diagnosis of the temperature-regulating valve of a diesel engine cooling system, which improved the reliability of system identification [4]. The particle swarm optimization (PSO) algorithm is an iterative optimization tool with global search capability, which can solve the problem of neural network structure and weight optimization [5]. However, the standard PSO algorithm has only one population, so the division of labor, collaboration, multilevel structure, and evolutionary diversity found among multiple populations are not reflected; moreover, information exchange within the group is limited to exchange between each individual and the best individual, which easily causes the population's evolution to stagnate [2]. Therefore, a multiswarm cooperative particle swarm optimizer (MCPSO) was introduced.

In current diesel engine fault diagnosis research, researchers mostly use engine vibration signals to obtain fault data. For faults in different parts, different sensors must be placed for separate information collection. Owing to external noise and the complexity of multisensor data collection, the authenticity of the fault information is reduced. The CAN bus, by contrast, connects the engine's sensors together, and sensor nodes can be added to the bus freely, which widens the collection range of fault data. In addition, the CAN bus is robust to subsystem faults and electromagnetic interference, which guarantees the authenticity of fault data and improves the efficiency of fault diagnosis. On the other hand, the performance of the traditional K-means clustering method depends heavily on the choice of K and the selection of the initial cluster centers, and its clustering result is worse than that of the PSO algorithm. In the process of fault diagnosis, ensuring both the authenticity of the fault data and the accuracy of the diagnosis results is therefore particularly important. In view of the problems that diesel engine vibration signals are easily affected by noise and that existing neural network algorithms have low diagnostic accuracy, this paper used the CAN bus for information collection and a competitive multiswarm coevolution particle swarm optimization algorithm to diagnose tractor faults. The aim is to improve the speed and accuracy of tractor fault diagnosis, so that faults can be diagnosed effectively at an early stage and the working efficiency of the tractor improved.

Based on the above analysis, the research content of this article is as follows. (1) Under five different tractor fault conditions, the CAN bus was used to collect the readings of eight sensors, including diesel engine speed, engine load, and air flow. (2) The collected data were parsed according to the SAE J1939 protocol. (3) BP and PSO-BP fault diagnosis models were established, and the PSO algorithm was improved on this basis. (4) The inertia weight of the PSO algorithm was optimized, and the LDWPSO-BP neural network was established. (5) The particle swarm itself was optimized: the initial population was divided into four swarms, namely, three slave swarms and one master swarm. The slave swarms updated their velocities and positions independently and sent their best particle information to the master swarm; the master swarm updated its velocity and position according to the optimal particle information of the slave swarms and searched for the global optimal solution. (6) The fault diagnosis performance of the BP, PSO-BP, LDWPSO-BP, and COM-MCPSO-BP neural networks was analyzed and compared. The specific research process is shown in Figure 1.

2. Competitive Multigroup Coevolution Particle Swarm Optimization Algorithm

2.1. Particle Swarm Optimization Algorithm. Particle swarm optimization is a population-based intelligent algorithm. Each member of the population is called a particle, and each particle is a potential feasible solution. The position of the food is the global optimal solution. To approach the position of the food, each particle constantly learns from its own best position (pbest) and the global best position (gbest) and finally converges on the food [6].

The mathematical description of the particle swarm algorithm is as follows. Assume the population size is N. At iteration t, the position of particle i in D-dimensional space is expressed as $\bar{x}_i(t) = (x_i^1, x_i^2, \ldots, x_i^d, \ldots, x_i^D)$, and its velocity as $\bar{v}_i(t) = (v_i^1, v_i^2, \ldots, v_i^d, \ldots, v_i^D)$. The position $\bar{x}_i(t)$ and velocity $\bar{v}_i(t)$ are updated at time t + 1 as follows:

$$\bar{v}_i(t + 1) = \omega \bar{v}_i(t) + c_1 r_1 \left(\bar{p}_i(t) - \bar{x}_i(t)\right) + c_2 r_2 \left(\bar{p}_g(t) - \bar{x}_i(t)\right), \quad (1)$$

$$\bar{x}_i(t + 1) = \bar{x}_i(t) + \bar{v}_i(t + 1), \quad (2)$$

where $c_1$ and $c_2$ are the learning factors; $r_1$ and $r_2$ are random numbers between 0 and 1; $\bar{p}_i(t)$ is the best position found so far by particle i (pbest); and $\bar{p}_g(t)$ is the global best position (gbest).
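As a concrete illustration, the update rules can be sketched in Python. This is a minimal sketch on a toy minimization problem; the inertia weight and learning factors (ω = 0.7, c1 = c2 = 1.5) are chosen arbitrarily for the demonstration, not taken from this paper.

```python
import random

random.seed(0)

def pso_step(x, v, pbest, gbest, omega=0.7, c1=1.5, c2=1.5):
    """One velocity and position update for a single particle,
    following formulas (1) and (2)."""
    v_new = [omega * vd
             + c1 * random.random() * (pb - xd)
             + c2 * random.random() * (gb - xd)      # formula (1)
             for xd, vd, pb, gb in zip(x, v, pbest, gbest)]
    x_new = [xd + vd for xd, vd in zip(x, v_new)]    # formula (2)
    return x_new, v_new

# Toy run: minimize the sphere function f(x) = sum of x_d^2.
def f(x):
    return sum(xd * xd for xd in x)

swarm = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(20)]
vel = [[0.0] * 3 for _ in range(20)]
pbest = [p[:] for p in swarm]
pbest_val = [f(p) for p in swarm]
for _ in range(100):
    gbest = pbest[pbest_val.index(min(pbest_val))]
    for i in range(20):
        swarm[i], vel[i] = pso_step(swarm[i], vel[i], pbest[i], gbest)
        val = f(swarm[i])
        if val < pbest_val[i]:
            pbest[i], pbest_val[i] = swarm[i][:], val
```

After a few dozen iterations the swarm's best value falls far below its random initialization, which is the behavior the inertia-weight discussion below builds on.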

Inertia weight $\omega$ in formula (1) describes the influence of particle inertia on particle velocity, and its value adjusts the balance between the global and local search abilities of PSO: the larger the value, the stronger the global search ability and the weaker the local search ability, and vice versa. Therefore, this paper adopts a linearly decreasing weight strategy to adjust $\omega$ dynamically: at the beginning of the algorithm, $\omega$ is given a large positive value, which lets the particles explore the region of the optimum quickly; as the search progresses, $\omega$ is reduced linearly, so that in the late stage the particles perform a fine search in the region of the optimum, giving the algorithm a greater probability of converging to the global optimum [7]. This algorithm is called linearly decreasing weight particle swarm optimization (LDWPSO), and the expression is shown in formula (3):

$$\omega = \omega_{\max} - \left(\omega_{\max} - \omega_{\min}\right) \times \frac{t}{T_{\max}}, \quad (3)$$

where $T_{\max}$ is the maximum number of iterations; $\omega_{\max}$ and $\omega_{\min}$ are the maximum and minimum inertia weights; and t is the current iteration number.
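A sketch of the schedule in formula (3); the bounds ω_max = 0.9 and ω_min = 0.4 are common choices in the PSO literature, not values stated in this paper.

```python
def ldw_omega(t, t_max, omega_max=0.9, omega_min=0.4):
    """Linearly decreasing inertia weight, formula (3)."""
    return omega_max - (omega_max - omega_min) * t / t_max

# The weight falls from omega_max at t = 0 to omega_min at t = t_max,
# shifting the search from global exploration to local refinement.
schedule = [ldw_omega(t, 300) for t in (0, 100, 200, 300)]
```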

2.2. BP Neural Network. The BP neural network adopts a three-layer topology [8, 9], as shown in Figure 2, composed of an input layer, a hidden layer, and an output layer. In Figure 2, the input layer has n neurons with input vector $x \in \mathbb{R}^n$, $X = [x_1, x_2, \ldots, x_i, \ldots, x_n]^T$; the hidden layer has h neurons, $x' \in \mathbb{R}^h$, $X' = [x'_1, x'_2, \ldots, x'_j, \ldots, x'_h]^T$; and the output layer has m neurons with output vector $y \in \mathbb{R}^m$, $Y = [y_1, y_2, \ldots, y_k, \ldots, y_m]^T$. The activation function is the S-shaped (sigmoid) function:

$$f(u) = \frac{1}{1 + \exp(-u)}. \quad (4)$$

The weight between the input layer and the hidden layer is $W_{ji}$ with threshold $\theta_j$; the weight between the hidden layer and the output layer is $W_{kj}$ with threshold $\theta'_k$; and the neuron output of each layer satisfies

$$x'_j = f(u_j) = f\left(\sum_{i=1}^{n} W_{ji} x_i - \theta_j\right), \quad (5)$$

$$y_k = f(u'_k) = f\left(\sum_{j=1}^{h} W_{kj} x'_j - \theta'_k\right). \quad (6)$$

During the training process, the training accuracy of the BP neural network is measured by the training error RMSE [10]:

$$\mathrm{RMSE} = \sqrt{\frac{1}{Qm} \sum_{i=1}^{Q} \sum_{k=1}^{m} \left(t_k^i - y_k^i\right)^2}, \quad (7)$$

where Q is the number of training samples and $t_k^i$ is the expected output of the i-th training sample at the k-th node of the output layer.
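Formulas (4)-(7) can be sketched as a forward pass plus the RMSE measure; the 2-3-1 network and its weights below are arbitrary demonstration values, not parameters from the paper.

```python
import math

def sigmoid(u):
    """S-shaped activation function, formula (4)."""
    return 1.0 / (1.0 + math.exp(-u))

def forward(x, W_ji, theta_j, W_kj, theta_k):
    """Hidden-layer and output-layer outputs, formulas (5) and (6)."""
    hidden = [sigmoid(sum(w * xi for w, xi in zip(row, x)) - th)
              for row, th in zip(W_ji, theta_j)]
    return [sigmoid(sum(w * hj for w, hj in zip(row, hidden)) - th)
            for row, th in zip(W_kj, theta_k)]

def rmse(targets, outputs):
    """Training error over Q samples and m output nodes, formula (7)."""
    q, m = len(targets), len(targets[0])
    total = sum((t - y) ** 2
                for ts, ys in zip(targets, outputs)
                for t, y in zip(ts, ys))
    return math.sqrt(total / (q * m))

# A 2-3-1 network with arbitrary weights and thresholds.
y = forward([0.5, -0.2],
            [[0.1, 0.4], [-0.3, 0.2], [0.5, -0.1]], [0.0, 0.1, -0.1],
            [[0.2, -0.4, 0.3]], [0.05])
```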

2.3. BP Neural Network Training Algorithm. The BP algorithm is a supervised learning algorithm. The p input learning samples are $x_i^1, x_i^2, \ldots, x_i^q, \ldots, x_i^p$ $(i = 1, 2, \ldots, n)$, and the corresponding expected outputs are $t_k^1, t_k^2, \ldots, t_k^q, \ldots, t_k^p$ $(k = 1, 2, \ldots, m)$. The learning algorithm corrects the connection weights and thresholds according to the error between the actual outputs $y_k^1, y_k^2, \ldots, y_k^q, \ldots, y_k^p$ $(k = 1, 2, \ldots, m)$ and the expected outputs, so that each $y_k^q$ is as close as possible to the required $t_k^q$. For convenience of calculation, the thresholds are absorbed into the connection weights: let $\theta_j = W_{n+1,j}$, $\theta'_k = W_{h+1,k}$, $x_{n+1} = -1$, $x'_{h+1} = -1$; then the neuron output of each layer is as follows:

$$x'_j = f\left(\sum_{i=1}^{n+1} W_{ji} x_i\right), \quad (8)$$

$$y_k = f\left(\sum_{j=1}^{h+1} W_{kj} x'_j\right). \quad (9)$$

The q-th sample is input into the network to obtain the outputs $y_k^q$ $(k = 1, 2, \ldots, m)$; the error for this sample is the sum of the squared errors over the output units and satisfies the following formula:

$$E_q = \frac{1}{2} \sum_{k=1}^{m} \left(t_k^q - y_k^q\right)^2. \quad (10)$$

For p learning samples, the total error is

$$E = \frac{1}{2} \sum_{q=1}^{p} \sum_{k=1}^{m} \left(t_k^q - y_k^q\right)^2. \quad (11)$$

Assume that $W_{sc}$ denotes any connection weight in the network (that is, any $W_{ji}$ or $W_{kj}$, including the absorbed thresholds), so that E is a nonlinear error function of $W_{sc}$. Let $\varepsilon$ satisfy the following formulas:

$$\varepsilon = \frac{1}{2} \sum_{k=1}^{m} \left(t_k^q - y_k^q\right)^2, \quad (12)$$

$$E = \sum_{q=1}^{p} \varepsilon\left(W^q, x^q\right). \quad (13)$$

Using the gradient method, the correction value for each $W_{sc}$ is

$$\Delta W_{sc} = -\sum_{q=1}^{p} \eta \frac{\partial \varepsilon}{\partial W_{sc}}, \quad (14)$$

where $\eta$ is the learning step size.

$$\frac{\partial \varepsilon}{\partial W_{sc}} = \frac{\partial \varepsilon}{\partial u_s} \cdot \frac{\partial u_s}{\partial W_{sc}} = \frac{\partial \varepsilon}{\partial u_s}\, x_c, \quad (15)$$

where $u_s$ is the weighted input of neuron s and $x_c$ is the output of the neuron c feeding into it; for the sigmoid activation, $f'(u) = f(u)\left(1 - f(u)\right)$.

The gradient method changes the total error in the direction of decrease until $\Delta E = 0$. This learning method makes the weight vector W converge to a solution, but that solution is not guaranteed to be the global minimum of E; it may be only a local minimum.
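For a single output-layer weight under the sigmoid activation, the gradient correction of formulas (14) and (15) reduces to the familiar delta rule. The sketch below uses a one-weight "network"; the step size η, the input, and the target are chosen arbitrarily for the demonstration.

```python
import math

def sigmoid(u):
    return 1.0 / (1.0 + math.exp(-u))

def weight_step(w, x_in, t, eta=2.0):
    """One gradient step, DeltaW = -eta * d(eps)/dW, where for the sigmoid
    d(eps)/dW = -(t - y) * y * (1 - y) * x_in (delta rule)."""
    y = sigmoid(w * x_in)
    grad = -(t - y) * y * (1.0 - y) * x_in
    return w - eta * grad, y

w, y = 0.0, 0.5
for _ in range(500):
    w, y = weight_step(w, 1.0, 0.9)
# y climbs toward the target 0.9 without ever reaching it exactly,
# illustrating gradual convergence of the gradient method.
```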

2.4. PSO-BP Neural Network Fault Diagnosis Model. In the traditional BP neural network algorithm, the gradient descent method depends on the selection of the initial values; the training time is therefore long, the algorithm tends to fall into local minima, and the number of hidden-layer nodes in the network structure is difficult to select. The PSO algorithm, by contrast, has fast convergence and a powerful global search capability, so combining the two lets them complement each other. Moreover, the BP neural network and the PSO algorithm have a certain commonality: both are realized by simulating biological behavior. The improved PSO algorithm can optimize the thresholds and weights of the BP neural network, remedying the BP network's shortcomings in learning ability and convergence speed while giving full play to its powerful nonlinear mapping ability [11].

PSO-BP neural network fault diagnosis is mainly divided into three parts: the determination of the neural network structure, the PSO-BP algorithm training network model, and the diagnosis process of the test sample [12], as shown in Figure 3.

Specific steps are as follows:

(1) Initialize the algorithm parameters and determine the parameters of the BP neural network

(2) Select the mean square error of BP neural network as the fitness function of PSO algorithm

(3) Find individual extreme values and group extreme values

(4) Update the population speed and the position

(5) Calculate particle fitness

(6) Update the population speed and the position again

(7) Determine whether the termination condition is met; if not, return to (3)

(8) Use the optimization result as the initial value of the BP neural network

(9) After the training, enter the test sample into the neural network model corresponding to the individual for classification diagnosis
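The steps above can be sketched as a toy Python skeleton; this is not the authors' implementation. A small 2-3-1 network is trained on an XOR-style task, with each particle encoding all weights and thresholds of the network and the network MSE used as the PSO fitness function (step 2). All sizes and coefficients are demonstration values.

```python
import math
import random

random.seed(1)

N_IN, N_HID, N_OUT = 2, 3, 1
L = N_IN * N_HID + N_HID + N_HID * N_OUT + N_OUT  # particle length, formula (21)

def sigmoid(u):
    return 1.0 / (1.0 + math.exp(-u))

def decode_and_forward(p, x):
    """Split a flat particle into W1, b1, W2, b2 and run the network."""
    i = 0
    W1 = [p[i + j * N_IN: i + (j + 1) * N_IN] for j in range(N_HID)]; i += N_IN * N_HID
    b1 = p[i: i + N_HID]; i += N_HID
    W2 = [p[i + k * N_HID: i + (k + 1) * N_HID] for k in range(N_OUT)]; i += N_HID * N_OUT
    b2 = p[i: i + N_OUT]
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) - b) for row, b in zip(W1, b1)]
    return [sigmoid(sum(w * hj for w, hj in zip(row, h)) - b) for row, b in zip(W2, b2)]

DATA = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]  # toy XOR task

def fitness(p):
    """Network MSE, used as the PSO fitness function (step 2)."""
    return sum((t - decode_and_forward(p, x)[0]) ** 2 for x, t in DATA) / len(DATA)

swarm = [[random.uniform(-1, 1) for _ in range(L)] for _ in range(15)]
vel = [[0.0] * L for _ in range(15)]
pbest = [p[:] for p in swarm]
pval = [fitness(p) for p in swarm]
start = min(pval)
for _ in range(60):
    g = pbest[pval.index(min(pval))]                  # group extreme value (step 3)
    for i, p in enumerate(swarm):
        for d in range(L):                            # update speed and position (step 4)
            vel[i][d] = (0.7 * vel[i][d]
                         + 1.5 * random.random() * (pbest[i][d] - p[d])
                         + 1.5 * random.random() * (g[d] - p[d]))
            p[d] += vel[i][d]
        v = fitness(p)                                # calculate fitness (step 5)
        if v < pval[i]:
            pbest[i], pval[i] = p[:], v
```

The best particle found would then seed the BP network's initial weights (step 8) before gradient training.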

2.5. Multiswarm Cooperative Particle Swarm Optimizer to Optimize the BP Neural Network. In nature, the evolution of populations is mostly driven by the interaction of different populations: the evolution of one population often causes changes in other biological populations, realizing information transmission and interaction between them. In the traditional particle swarm optimization algorithm, the information that guides individual evolution comes from a single group, which makes the algorithm prone to premature convergence in the later stage; the evolutionary ability declines, the search slows or even stagnates, and the algorithm falls into a local optimum [13].

Inspired by symbiosis in biological systems, this paper adopted a multiswarm coevolution model. In this model, the evolution of an individual is affected by the information of its symbiotic groups as well as by its own group. Embedding PSO into this model yields the multiswarm cooperative particle swarm optimizer (MCPSO).

First, based on the idea of mutual benefit and symbiosis, a multiswarm coevolution model was designed. In this model, a master-slave structure represents the relationship between symbiotic groups. The entire population is divided into N subgroups (symbiotic groups), each containing the same number of individuals: one master swarm and Ns slave swarms. In the evolution process, the symbiotic group that participates in the information exchange with the other subgroups is called the master swarm, and a group that evolves independently without participating in the information exchange of other subgroups is called a slave swarm. The multiswarm coevolution model is shown in Figure 4.

In Figure 4, each slave swarm independently executes the improved PSO algorithm (velocity and position updates) during the iteration process. Before all slave swarms perform the next status update, they send the best individual information found so far to the master swarm, and the master swarm updates its status based on the experience of these best individuals. The update method is as follows:

$$v_{id}^M = \omega v_{id}^M + c_1 r_1 \left(p_{id}^M - x_{id}^M\right) + \phi c_2 r_2 \left(p_g^M - x_{id}^M\right) + \left(1 - \phi\right) c_3 r_3 \left(p_g^Q - x_{id}^M\right), \quad (16)$$

$$x_{id}^M = x_{id}^M + v_{id}^M, \quad (17)$$

where M denotes the master swarm; Q denotes the symbiotic groups other than the master swarm; $p_{id}^M$ is the best position found so far by the particle itself; $p_g^M$ is the optimal particle in the master swarm; $p_g^Q$ is the optimal particle in Q; $c_3$ is a learning factor; and $r_3$ is a random number between 0 and 1.

In addition, $\phi$ is the migration factor, indicating the degree of participation of the symbiotic group, which is determined by the following formula:

$$\phi = \begin{cases} 1, & Gbest^M \le Gbest^Q, \\ 0, & Gbest^M > Gbest^Q, \end{cases} \quad (18)$$

where $Gbest^M$ is the fitness value determined by $p_g^M$ and $Gbest^Q$ is the fitness value determined by $p_g^Q$, with lower fitness being better for a minimization problem.

In this update mechanism, each particle in the master swarm updates its state according to its own best value, the optimal value in the master swarm, and the optimal value in the slave swarms. The swarm's own experience and that of the other swarms are in a competitive relationship: the experience of a symbiotic swarm is adopted only when it is better than the value found by the master swarm itself; otherwise, the master swarm evolves in its own way. This is why the method is called competitive MCPSO (COM-MCPSO).
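A sketch of the competitive master-swarm update of formulas (16)-(18), assuming a minimization problem (lower fitness is better); the coefficient values are arbitrary demonstration defaults, not values from the paper.

```python
import random

random.seed(2)

def master_update(x, v, pbest, gbest_m, gbest_q, fit_m, fit_q,
                  omega=0.7, c1=1.5, c2=1.5, c3=1.5):
    """Formulas (16)-(18): the slave swarms' best particle is followed
    only when its fitness beats the master swarm's own best."""
    phi = 1.0 if fit_m <= fit_q else 0.0   # formula (18), competitive choice
    x_new, v_new = [], []
    for d in range(len(x)):
        vd = (omega * v[d]
              + c1 * random.random() * (pbest[d] - x[d])
              + phi * c2 * random.random() * (gbest_m[d] - x[d])
              + (1.0 - phi) * c3 * random.random() * (gbest_q[d] - x[d]))  # (16)
        v_new.append(vd)
        x_new.append(x[d] + vd)            # formula (17)
    return x_new, v_new

# Master best is better: the particle moves toward the master's gbest (+1).
x1, _ = master_update([0.0], [0.0], [0.0], [1.0], [-1.0], fit_m=0.1, fit_q=0.2)
# Slave best is better: the particle moves toward the slave's gbest (-1) instead.
x2, _ = master_update([0.0], [0.0], [0.0], [1.0], [-1.0], fit_m=0.3, fit_q=0.2)
```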

The specific optimization process of COM-MCPSO algorithm for BP neural network is as follows:

(1) Import the processed fault data and establish a BP neural network

(2) Initialize the population, divide the population into four equal parts randomly, and choose one of them as the master swarm while the others as the slave swarms

(3) Each slave swarm finds the individual extremum and swarm extremum independently and updates the position and velocity

(4) Before each slave swarm performs the next evolution, the global optimal particle (gbest) is sent to the master swarm

(5) The master swarm selects the global optimal particle of each slave swarm and uses formulas (16) and (17) to update the velocity and position of the master swarm

(6) Calculate particle fitness

(7) Determine whether the termination condition is met; if not, return to (3)

(8) Use the optimization result as the initial value of the BP neural network

(9) Train BP neural network based on the above conditions

According to the above research and analysis results, the diesel engine fault data under five different working conditions are classified and diagnosed.

3. Experimental Data Analysis

3.1. Experimental Data Collection Based on CAN Bus. The CAN bus is a network communication technology with high reliability, complete functionality, and low cost, widely used in fields such as the automobile industry, industrial control, aviation, and safety monitoring [14]. Figure 5 shows a typical CAN topology. As shown in the figure, the CAN bus consists of two lines, CAN-H and CAN-L, and each node is connected to its CPU through a CAN controller.

At present, the CAN bus based on the SAE J1939 protocol has been widely used in advanced internal combustion engines at home and abroad. By accessing the ECU, a diesel engine monitoring system based on the SAE J1939 protocol can obtain the various operating parameters and fault information of the diesel engine in real time, providing abundant information for the operator to understand and monitor the current state of the engine [15, 16].

The J1939 protocol transmits data in protocol data units (PDUs), where each PDU carries 8 data bytes. The J1939 application layer defines the parameter groups in detail, including the parameter update rate, effective data length, data page, PDU format, PDU specific field, default priority, and the content of the parameter group; each parameter group (PG) has a corresponding parameter group number (PGN) [17]. At the same time, each parameter in the J1939 protocol is assigned a suspect parameter number (SPN), which defines the physical meaning of the bytes in the PDU data field, including the data length, data type, resolution, range, and reference label [18, 19]. Details are shown in Table 1.

From the above parameters, the actual physical value corresponding to each parameter can be calculated. Given the original hexadecimal value of a J1939 parameter group PG, the corresponding physical quantity can be obtained from formula (19) [20, 21]:

Value = Res × R + Offset, (19)

where Value is the actual value of the parameter; Res is the parameter resolution; R is the raw parameter value; and Offset is the offset of the parameter. Depending on the parameter's value range, a continuous parameter is generally represented by 1-4 bytes.
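A sketch of formula (19) applied to raw PDU bytes. The two examples use standard J1939 scalings, engine speed (SPN 190, 0.125 rpm per bit, two bytes) and engine coolant temperature (SPN 110, 1 °C per bit with a −40 °C offset); these SPN details come from the public J1939 standard, not from this paper.

```python
def decode_parameter(raw_bytes, resolution, offset=0.0):
    """Value = Res x R + Offset, formula (19); J1939 multibyte parameters
    are transmitted least-significant byte first (little-endian)."""
    r = int.from_bytes(bytes(raw_bytes), byteorder="little")
    return resolution * r + offset

# Engine speed: raw bytes 0x20, 0x1A -> 0x1A20 = 6688 -> 836.0 rpm.
rpm = decode_parameter([0x20, 0x1A], resolution=0.125)
# Coolant temperature: raw byte 0x5A = 90 -> 90 - 40 = 50.0 deg C.
temp_c = decode_parameter([0x5A], resolution=1.0, offset=-40.0)
```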

In this regard, this article took the Weichai WP6 diesel engine as the research object, with a USB-CAN device as the medium, and employed CANPro software to collect diesel engine information under five operating conditions. The Weichai WP6 experimental prototype is shown in Figure 6.

The five diesel engine operating conditions are normal $f_1$, low oil pressure $f_2$, intake pipe blockage $f_3$, high-pressure oil pump failure $f_4$, and broken piston ring $f_5$. The collected information includes engine speed $I_1$, engine load $I_2$, coolant temperature $I_3$, air flow $I_4$, intake manifold pressure $I_5$, intake camshaft position $I_6$, exhaust camshaft position $I_7$, and fuel supply advance angle $I_8$. The collected information was then parsed according to the SAE J1939 protocol to obtain the specific diesel engine fault information and the variation trends of the data, as shown in Table 2 and Figure 7.

3.2. Parameter Selection. The structure of the selected BP neural network is 8-13-5, the transfer functions of the hidden layer and the output layer are tansig and logsig, and the trainlm function is selected as the training function. The conversion formula is as follows:

$$A_1 = \operatorname{tansig}\left(W_1 D + B_1\right), \qquad A_2 = \operatorname{logsig}\left(W_2 A_1 + B_2\right), \quad (20)$$

where $A_1$ is the hidden-layer output; D is the input vector of the input layer; $A_2$ is the output of the neurons in the output layer; $B_1$ and $B_2$ are the threshold matrices; and $W_1$ and $W_2$ are the weight matrices.

According to the BP network structure, the particle length of the PSO algorithm is encoded as shown in formula (21):

$$L = n \times h + h + h \times m + m, \quad (21)$$

where n is the number of input layer nodes; h is the number of hidden layer nodes; m is the number of output layer nodes.

From the above data, the particle length of the PSO algorithm is 187, and the population size is 40. The learning factors are $c_1 = c_2 = 2$, the maximum number of population evolutions is 300, the maximum number of BP training iterations is 3000, the momentum coefficient is 0.9, the learning rate is 0.3, the learning rate increment is 1.05, and the target error is $10^{-5}$. From the diesel engine data of the five different working conditions, 6 sets of data were randomly selected for each working condition: a total of 30 sets of data were used as training samples, and the remaining 20 sets as test samples, to establish the COM-MCPSO optimized BP fault diagnosis model.
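The particle length of 187 quoted above follows directly from formula (21) for the 8-13-5 topology:

```python
def particle_length(n, h, m):
    """Formula (21): all weights and thresholds of an n-h-m BP network."""
    return n * h + h + h * m + m

# 8 inputs, 13 hidden neurons, 5 outputs -> 104 + 13 + 65 + 5 = 187.
length = particle_length(8, 13, 5)
```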

3.3. Comparative Experiment Analysis of the Fault Diagnosis Model. Taking the diesel engine fault data under five different working conditions as the diagnosis object, each fault is encoded as a digital signal for easy identification by the fault diagnosis model: a fault is recorded as "1" and otherwise as "0". According to the structure of the BP neural network, the ideal outputs of the neural network are shown in Table 3.

In order to verify the effectiveness of the COM-MCPSO-BP neural network after parameter selection, it was compared with the BP neural network (BPNN), the PSO-BP neural network, and the linearly decreasing weight PSO-BP (LDWPSO-BP) neural network. All four networks (BPNN, PSO-BP, LDWPSO-BP, and COM-MCPSO-BP) used the same 8-13-5 topology. The training results are shown in Figures 8 and 9.

As can be seen from Figure 8, in the PSO algorithm, the population converges to around 0.17 after 70 iterations; in the LDWPSO algorithm, the population converges to around 0.168 after 40 iterations, with better convergence speed and accuracy than the traditional PSO algorithm; and in the COM-MCPSO algorithm, the population converges to around 0.16 after only 35 iterations, which is faster and more accurate than the LDWPSO algorithm. The BP neural network optimized by COM-MCPSO reaches the target accuracy after only 589 training steps; as shown in Figure 9, its calculation is faster and its accuracy higher than those of the other three algorithms. Finally, the remaining 20 sets of data were used as test samples and input into the BP, PSO-BP, LDWPSO-BP, and COM-MCPSO-BP networks to compare their diagnostic results. The diagnosis results are shown in Tables 4-7.

In order to verify the fault diagnosis performance of each neural network, four indexes, MAE, MRE, MSE, and accuracy, were selected to evaluate the networks. The calculation methods are shown in formulas (22)-(24), and the diagnosis results in Table 8.

(1) The mean absolute error MAE:

$$\mathrm{MAE} = \frac{1}{N} \sum_{i=1}^{N} \left|d_i - y_i\right|, \quad (22)$$

where N is the total number of samples, $d_i$ is the expected output, and $y_i$ is the actual output.

(2) The mean relative error MRE:

$$\mathrm{MRE} = \frac{1}{N} \sum_{i=1}^{N} \left|\frac{d_i - y_i}{y_i}\right| \times 100\%. \quad (23)$$

(3) The mean square error MSE:

$$\mathrm{MSE} = \frac{1}{N} \sum_{i=1}^{N} \left(d_i - y_i\right)^2. \quad (24)$$
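The three error indexes can be sketched directly from formulas (22)-(24); the toy expected/actual vectors below are illustrative only. Note that MRE as defined divides by the actual output, so it assumes no actual output is zero.

```python
def mae(d, y):
    """Mean absolute error, formula (22)."""
    return sum(abs(di - yi) for di, yi in zip(d, y)) / len(d)

def mre(d, y):
    """Mean relative error in percent, formula (23)."""
    return 100.0 * sum(abs((di - yi) / yi) for di, yi in zip(d, y)) / len(d)

def mse(d, y):
    """Mean squared error, formula (24)."""
    return sum((di - yi) ** 2 for di, yi in zip(d, y)) / len(d)

d = [1.0, 0.0, 1.0]   # expected outputs
y = [0.9, 0.2, 0.8]   # actual outputs (toy values)
errors = (mae(d, y), mre(d, y), mse(d, y))
```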

It can be seen from Table 8 that the recognition accuracy of the COM-MCPSO-BP neural network is the highest, reaching 94.84%, which is 15.88%, 10.11%, and 2.6% higher than that of BP, PSO-BP, and LDWPSO-BP, respectively. This shows that the COM-MCPSO-BP neural network has higher diagnostic accuracy than the other three networks and verifies the effectiveness of the competitive multiswarm coevolution particle swarm optimization algorithm.

4. Conclusion

Aiming at the fault complexity, fault correlation, and multifault concurrency of high-power tractor diesel engines, the BP neural network optimized by the multiswarm cooperative particle swarm optimizer effectively improves the accuracy of fault diagnosis. The COM-MCPSO algorithm, with its strong global search ability, was used to optimize the BP neural network, overcoming the BP network's tendency to fall into local minima, its slow training, and other issues. Finally, the constructed COM-MCPSO-BP fault diagnosis model was applied to the fault diagnosis of high-power tractor diesel engines to achieve accurate identification of diesel engine faults. Experimental results show that the proposed fault diagnosis algorithm has higher accuracy and stronger generalization ability than the BP neural network and the PSO-optimized BP neural network, so it can better accomplish diesel engine fault diagnosis.

https://doi.org/10.1155/2020/8829257

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

The research was funded partially by the Agricultural Science and Technology Independent Innovation Fund of Jiangsu Province (CX(19)3081) and the Key Research and Development Program of Jiangsu Province (BE2018127).

References

[1] H. Cao, L. Hu, W. Q. Xie, and J. G. Yang, "Fault diagnosis method of diesel engine based on multi-source information fusion," China Navigation, vol. 1, pp. 73-172, 2020.

[2] Y. X. Nan, J. W. Zhang, and Z. Y. Dong, "Application of RBF neural network in vibration fault diagnosis of diesel engine," Industry and Technology Forum, vol. 18, pp. 48-49, 2019.

[3] K. Verbert, R. Babuska, and B. De Schutter, "Bayesian and Dempster-Shafer reasoning for knowledge-based fault diagnosis-a comparative study," Engineering Applications of Artificial Intelligence, vol. 60, pp. 136-150, 2017.

[4] K. H. Hui, M. H. Lim, M. S. Leong, and S. M. Al-Obaidi, "Dempster-Shafer evidence theory for multi-bearing faults diagnosis," Engineering Applications of Artificial Intelligence, vol. 57, pp. 160-170, 2017.

[5] P. Wang, H. Zou, K. L. Wang, and Z. Z. Zhao, "Research on hot deformation behavior of Zr-4 alloy based on PSO-BP artificial neural network," Journal of Alloys and Compounds, vol. 18, pp. 48-49, 2019.

[6] J. Huang and L. He, "Application of improved PSO-BP neural network in customer churn warning," Procedia Computer Science, vol. 131, pp. 1238-1246, 2018.

[7] H. Wang, M.-J. Peng, J. Wesley Hines, G.-Y. Zheng, Y.-K. Liu, and B. R. Upadhyaya, "A hybrid fault diagnosis methodology with support vector machine and improved particle swarm optimization for nuclear power plants," ISA Transactions, vol. 95, pp. 358-371, 2019.

[8] J. Matos, R. P. V. Faria, I. B. R. Nogueira, J. M. Loureiro, and A. M. Ribeiro, "Optimization strategies for chiral separation by true moving bed chromatography using particles swarm optimization (PSO) and new parallel PSO variant," Computers & Chemical Engineering, vol. 123, pp. 344-356, 2019.

[9] W. W. Hua, Y. L. Li, T. M. Xue et al., "Tool wear during high-speed milling of wood-plastic composites," Bioresources, vol. 14, pp. 8678-8688, 2019.

[10] I. Sancaktar, B. Tuna, and M. Ulutas, "Inverse kinematics application on medical robot using adapted PSO method," Engineering Science and Technology, an International Journal, vol. 21, no. 5, pp. 1006-1010, 2018.

[11] Z. Q. Guo, S. S. Yang, Y. B. Li, H. H. Yang, X. Pang, and B. S. Luo, "Inversion of soil parameters of subway shield site based on PSO-BP neural network," Journal of Taiyuan University of Technology, vol. 51, pp. 171-176, 2020.

[12] Z. C. Qiao, Y. Q. Liu, and Y. Y. Liao, "An improved method of EWT and its application in rolling bearings fault diagnosis," Shock and Vibration, vol. 2020, Article ID 4973941, 18 pages, 2020.

[13] S. W. Fei, "The hybrid method of VMD-PSR-SVD and improved binary PSO-KNN for fault diagnosis of bearing," Shock and Vibration, vol. 2019, Article ID 4954920, 8 pages, 2019.

[14] M. Zago, S. Longari, A. Tricarico et al., "ReCAN-dataset for reverse engineering of controller area networks," Data in Brief, vol. 29, Article ID 105149, 15 pages, 2020.

[15] M. K. Ishak and F. K. Khan, "Unique message authentication security approach based controller area network (CAN) for anti-lock braking system (ABS) in vehicle network," Procedia Computer Science, vol. 160, pp. 93-100, 2019.

[16] X. M. Xu, D. Chen, L. Zhang, and N. Chen, "Hopf bifurcation characteristics of the vehicle with rear axle compliance steering," Shock and Vibration, vol. 2019, Article ID 3402084, 12 pages, 2019.

[17] R. A. Rohrer, S. K. Pitla, and J. D. Luck, "Tractor CAN bus interface tools and application development for real-time data analysis," Computers and Electronics in Agriculture, vol. 163, Article ID 104847, 9 pages, 2019.

[18] D. S. Paraforos and H. W. Griepentrog, "Tractor fuel rate modeling and simulation using switching Markov chains on CAN-Bus data," IFAC-Papers Online, vol. 52, no. 30, pp. 379-384, 2019.

[19] G. Molari, M. Mattetti, N. Lenzini, and S. Fiorati, "An updated methodology to analyse the idling of agricultural tractors," Biosystems Engineering, vol. 187, pp. 160-170, 2019.

[20] J. H. Huang, C. L. Zhang, L. J. Huang, L. Chen, and C. L. Zhao, "Communication design of CAN to USB interface based on cortex-M4," Instrumentation Technology and Sensors, vol. 9, pp. 33-36, 2018.

[21] S. E. Marx, J. D. Luck, R. M. Hoy, S. K. Pitla, E. E. Blankenship, and M. J. Darr, "Validation of machine CAN bus J1939 fuel rate accuracy using Nebraska tractor test laboratory fuel rate data," Computers and Electronics in Agriculture, vol. 118, pp. 179-185, 2015.

Maohua Xiao [ID], Weichen Wang, Kaixin Wang, Wei Zhang, and Hengtong Zhang

College of Engineering, Nanjing Agricultural University, Nanjing 210031, China

Correspondence should be addressed to Maohua Xiao; xiaomaohua@njau.edu.cn

Received 4 June 2020; Revised 28 June 2020; Accepted 18 July 2020; Published 3 August 2020

Academic Editor: Changqing Shen

Caption: Figure 1: Research content flow chart.

Caption: Figure 2: BP neural network structure.

Caption: Figure 3: PSO-BP neural network model.

Caption: Figure 4: Multiswarm coevolution model.

Caption: Figure 5: Typical CAN bus connection diagram.

Caption: Figure 6: Schematic diagram of Dongfeng 1504-5 tractor and Weichai WP6 diesel engine. (a) Dongfeng Agricultural Machinery 1504-5 tractor. (b) Weichai WP6 diesel engine. (c) Structural diagram of Weichai WP6 diesel engine.

Caption: Figure 7: Variation trend of eight parameters under five different working conditions. (a) Air flow change trend. (b) Coolant temperature change trend. (c) Engine load change trend. (d) Engine speed change trend. (e) Exhaust camshaft position change trend. (f) Fuel supply advance angle change trend. (g) Intake camshaft position change trend. (h) Intake manifold pressure change trend.

Caption: Figure 8: Comparison of fitness changes of three different algorithms.

Caption: Figure 9: Error convergence curves of four different neural networks. (a) BP neural network error curve. (b) PSO-BP neural network error curve. (c) LDWPSO-BP neural network error curve. (d) COM-MCPSO-BP neural network error curve.
Table 1: Parameter information of engine based on SAE J1939 protocol.

                 Data name                    PGN     Data length   Resolution   Data range        Offset
                                                      (byte)

Engine           Engine load                  61433   1             1%           0%-250%             0
management       Intake pressure              65270   1             2 kPa        0-500 kPa           0
system           Coolant temperature          65262   1             1 °C         -40-200 °C        -40
                 Engine speed                 61444   2             0.125 rpm    0-8031.875 rpm      0
                 Intake manifold pressure     65149   2             0.1 kPa      0-6425.5 kPa        0
                 Intake camshaft position     65235   2             0.1 deg      0-60 deg            0
                 Exhaust camshaft position    65236   2             0.1 deg      -30-0 deg           0
                 Fuel supply advance angle    65344   2             0.1 deg      0-15 deg            0
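As an illustration of how the resolution and offset columns in Table 1 convert raw CAN payload bytes into physical values, the following minimal Python sketch (not part of the original paper) applies the usual J1939 rule physical = raw x resolution + offset; the example byte values are hypothetical:

```python
def decode_j1939(raw_bytes, resolution, offset):
    """Convert raw little-endian J1939 payload bytes to a physical value:
    physical = raw * resolution + offset (per the Table 1 columns)."""
    raw = int.from_bytes(bytes(raw_bytes), byteorder="little")
    return raw * resolution + offset

# Engine speed (2 bytes, 0.125 rpm/bit, offset 0), as listed in Table 1:
speed = decode_j1939([0x00, 0x20], resolution=0.125, offset=0)  # 0x2000 = 8192 -> 1024.0 rpm

# Coolant temperature (1 byte, 1 °C/bit, offset -40):
temp = decode_j1939([0x7B], resolution=1, offset=-40)  # 123 - 40 = 83 °C
```

The same helper covers every row of Table 1, since each parameter differs only in byte count, resolution, and offset.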

Table 2: Sample data of diesel engine failure under different working conditions.

Working     Serial     I1       I2      I3       I4      I5     I6      I7       I8
condition   number

f1            1       647.53   27.49   102.00   33.10   320   36.67   -18.90   6.00
              2       645.77   27.59   102.72   32.70   320   36.58   -18.90   6.75
              3       644.75   27.53   102.76   33.10   322   36.45   -18.93   3.75
              4       647.53   27.53   102.81   32.70   319   36.78   -19.00   7.50
              5       644.52   27.49   102.74   32.90   318   36.55   -19.14   8.25

f2           11       648.64   30.15   104.12   33.50   325   36.12   -19.10   6.50
             12       644.63   30.46   104.47   33.70   325   36.54   -19.32   6.65
             13       642.32   31.24   103.69   33.40   324   36.89   -19.24   6.75
             14       645.89   32.59   105.63   34.10   326   37.10   -18.90   6.75
             15       644.52   31.54   104.64   34.30   323   36.78   -18.89   5.50

f3           21       647.56   26.35   102.10   32.10   320   36.32   -18.67   6.75
             22       644.32   26.23   102.13   32.10   321   36.25   -18.58   6.55
             23       645.54   26.12   102.22   32.15   320   36.35   -18.69   6.85
             24       645.89   25.89   102.05   31.90   319   36.56   -18.45   6.65
             25       643.21   26.02   102.02   31.80   318   37.02   -19.13   6.60

f4           31       640.23   26.38   102.23   32.55   312   36.12   -19.20   6.88
             32       640.35   36.25   102.19   32.40   313   36.21   -19.15   6.55
             33       639.24   26.45   102.12   32.45   312   36.24   -19.12   6.42
             34       639.56   25.76   102.05   32.30   315   36.34   -18.98   6.39
             35       638.76   26.08   102.00   32.20   313   36.45   -19.05   6.22

f5           46       640.12   26.85   102.10   32.40   318   37.05   -18.96   6.95
             47       638.45   26.55   102.10   32.33   316   36.84   -18.85   6.50
             48       649.14   27.67   101.98   33.65   322   36.56   -18.87   6.35
             49       645.12   27.35   101.89   33.45   320   36.43   -19.03   6.75
             50       644.32   27.32   102.03   33.42   320   36.23   -19.10   6.55

Table 3: Expected output of neural network.

Fault type          Expected output (nodes f1-f5)

f1          1       0       0       0       0
f2          0       1       0       0       0
f3          0       0      1       0       0
f4          0       0       0       1       0
f5          0       0       0       0       1
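The expected outputs in Table 3 are one-hot vectors: the node matching the fault type is 1 and all others are 0. A minimal sketch (the function name is illustrative, not from the paper):

```python
def one_hot(fault_index, num_classes=5):
    """Expected network output for fault f_{fault_index} (1-based), as in Table 3."""
    return [1 if i == fault_index - 1 else 0 for i in range(num_classes)]

# one_hot(3) -> [0, 0, 1, 0, 0], the Table 3 row for f3
```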

Table 4: BP neural network diagnosis results.

Fault type                 Actual output (nodes f1-f5)

f1          0.5430      0.4999         0           0        0.0220
f2          0.0002      0.9994      0.0001         0        0.0001
f3          0.0032         0        0.6560      0.0001      0.0001
f4             0        0.2486      0.5044      0.9999         0
f5             0        0.2874      0.0077      0.0001      0.7498

Table 5: PSO-BP neural network diagnosis results.

Fault type                 Actual output (nodes f1-f5)

f1          0.7949      0.4611         0           0        0.0001
f2          0.0001      0.9999         0           0        0.0001
f3          0.0013         0        0.6702      0.0001      0.0001
f4             0           0        0.5005      0.9999      0.0001
f5          0.0784         0        0.0162      0.0031      0.7717

Table 6: LDWPSO-BP neural network diagnosis results.

Fault type                 Actual output (nodes f1-f5)

f1          0.8668      0.4490      0.0009      0.0006      0.0002
f2          0.0005      0.9998      0.0001         0        0.0001
f3          0.0149         0        0.9985      0.1333         0
f4          0.0001         0        0.4847      0.9943      0.0006
f5          0.1369         0        0.0081      0.0003      0.7528

Table 7: COM-MCPSO-BP neural network diagnosis results.

Fault type                 Actual output (nodes f1-f5)

f1          0.9747      0.2500         0        0.0001      0.0004
f2          0.0001      0.9999      0.0002         0        0.0001
f3          0.0001         0        0.9985      0.1983      0.0002
f4          0.0004         0        0.4993      0.9956      0.0001
f5          0.1598         0        0.0049      0.0009      0.7733

Table 8: Network performance comparison.

Neural network       MAE       MRE (%)   MSE      Accuracy (%)
category

BP                   0.5252    2.94      0.2089      78.96
PSO-BP               0.3647    2.409     0.1342      84.73
LDWPSO-BP            0.3238    1.6979    0.1104      92.24
COM-MCPSO-BP         0.2745    1.4650    0.0857      94.84
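The paper does not spell out the exact formulas behind the MAE, MRE, and MSE columns of Table 8; a sketch using the common definitions (elementwise errors between actual and expected output vectors, with the relative error taken only against nonzero expected values) might look like this:

```python
def error_metrics(actual, expected):
    """Mean absolute error, mean relative error (%), and mean squared error
    between an actual and an expected output vector. These are the standard
    definitions; the paper's exact formulas are not stated."""
    errs = [a - e for a, e in zip(actual, expected)]
    n = len(errs)
    mae = sum(abs(x) for x in errs) / n
    mse = sum(x * x for x in errs) / n
    # Relative error is computed only where the expected value is nonzero,
    # since the one-hot targets of Table 3 contain zeros.
    rels = [abs(a - e) / abs(e) for a, e in zip(actual, expected) if e != 0]
    mre = 100 * sum(rels) / len(rels)
    return mae, mre, mse
```

Applied per fault class over the Table 4-7 outputs and Table 3 targets, metrics of this form would let the four networks be ranked as in Table 8.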
COPYRIGHT 2020 Hindawi Limited
No portion of this article can be reproduced without the express written permission from the copyright holder.

Title Annotation: Research Article
Author: Xiao, Maohua; Wang, Weichen; Wang, Kaixin; Zhang, Wei; Zhang, Hengtong
Publication: Shock and Vibration
Date: Aug 31, 2020
Words: 6545