# Numerical analysis of modeling based on improved Elman neural network

1. Introduction

Networks, communications, and television systems have entered the digital age. Class-D power amplifiers (CDPAs) [1] have become increasingly popular for audio applications because of their high power efficiency. Since the output transistors of CDPAs operate in the ohmic and cut-off regions, the system is nonlinear; one of the resulting nonlinear phenomena is intermodulation distortion (IMD) [2]. CDPAs also exhibit memory effects: because the circuit contains dynamically distributed parameters, the node voltages and currents depend not only on the current input but also on past signals [3]. The presence of memory effects [4] is often identified by imbalances between the corresponding upper and lower distortion products, such as IMD products of the same order.

Behavioral modeling [5, 6] of nonlinear circuits and systems has received much attention in recent years. In behavioral modeling, the nonlinear component is treated as a "black box" that is characterized entirely by its external responses, that is, by its input and output signals, through relatively simple mathematical expressions. Behavioral modeling techniques provide a convenient and efficient means of predicting system-level performance without the computational complexity of full circuit simulation or physical-level analysis, thereby significantly speeding up the analysis process. Existing PA behavioral models are mainly based on the Volterra series or its expanded and simplified forms [7, 8]. However, the large number of coefficients complicates practical implementation, which limits the standard Volterra series to weakly nonlinear PAs.

Because neural networks provide effective solutions for nonlinear function approximation, system identification, and problems such as exclusive-or and encoding, PA behavioral models based on neural networks have been actively developed in recent years [9-12]. To better capture the nonlinear characteristics of CDPAs, a new behavioral model based on an improved Elman neural network (IENN) is proposed in this paper. In the IENN, a self-connection is added to the context nodes, which makes the neurons more sensitive to the history of the input data. Chebyshev orthogonal polynomials [13, 14] are employed instead of sigmoid functions as the activation functions of the hidden layer neurons, improving the accuracy and convergence rate of the IENN while keeping its structure simple. The gradient descent (GD) algorithm is used to train the model. Simulation results with a two-tone signal, a linear frequency modulated (LFM) signal, and a binary phase shift keying (2PSK) signal as inputs show that the proposed IENN model accurately depicts the nonlinear distortions of PAs.

The remainder of this paper is organized as follows. The basic Elman neural network (BENN) is introduced in Section 2. In Section 3, the new behavioral model based on the IENN and its training algorithm are presented in detail. Simulation results using a two-tone signal and broadband signals as input are given in Section 4. Conclusions are drawn in Section 5.

2. The Basic Elman Neural Network

The architecture of the BENN [15, 16] is illustrated in Figure 1. It is generally divided into four layers: input layer, hidden layer, context layer, and output layer. The feedforward path consists of the input, hidden, and output layers, in which the weights connecting neighboring layers are adjustable. A feedback loop between the context layer and the hidden layer makes the network sensitive to the history of the input data. In the BENN, the context neurons can be treated as memory units, so the model can, in principle, capture the memory effect of a nonlinear system. Furthermore, because the dynamic characteristics of the BENN are provided solely by internal connections, there is no need to use the state as an input or training signal, which gives the BENN an advantage over static feedforward networks.

3. The Behavioral Model Based on IENN

3.1. The Architecture of the Improved Elman Neural Network. The architecture of the IENN, presented in Figure 2, is similar to that of the BENN, but several changes are made to improve the learning speed and output accuracy. To better capture the memory effect of the nonlinear system, a self-feedback connection with a feedback gain is added to the context layer neurons. This increases the memory depth and makes the model's output more sensitive to past inputs. In addition, the values of the Chebyshev orthogonal basis functions can be computed by a simple recursion, which is cheaper to evaluate than the sigmoid function. Chebyshev orthogonal basis functions have been used as activation functions in many neural networks [14, 17, 18] for different applications and have proved to be fast and accurate. We therefore use Chebyshev polynomials of the first kind as the activation functions of the hidden layer, in place of the sigmoid functions of the BENN. The results below show that the IENN model reduces computational complexity and training time and improves convergence precision.
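The three-term recursion that makes these basis functions cheap to evaluate can be sketched as follows (a minimal illustration; the function name and the number of terms are ours, not from the paper):

```python
import numpy as np

def chebyshev_first_kind(x, n_terms):
    """Evaluate Chebyshev polynomials of the first kind T_0..T_{n_terms-1}
    at x (assumed to lie in [-1, 1]) via the three-term recursion
    T_0(x) = 1, T_1(x) = x, T_{n+1}(x) = 2x*T_n(x) - T_{n-1}(x)."""
    x = np.asarray(x, dtype=float)
    T = [np.ones_like(x), x]
    for _ in range(2, n_terms):
        T.append(2.0 * x * T[-1] - T[-2])
    return np.stack(T[:n_terms])

# Example: T_2(0.5) = 2 * 0.5**2 - 1 = -0.5
vals = chebyshev_first_kind(0.5, 4)
```

Each new term costs one multiply-add per sample, compared with the exponential evaluation a sigmoid requires.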

In the IENN, the input layer has R nodes, the hidden layer and the context layer each have N nodes, and the output layer has M nodes. The basic functions of each layer are as follows.

3.1.1. Input Layer. In the input layer,

$$u_q(k) = e_q(k), \quad q = 1, 2, \ldots, R, \tag{1}$$

where $k$ denotes the $k$th iteration step and $e_q(k)$ and $u_q(k)$ denote the input and the output of the input layer, respectively.

3.1.2. Hidden Layer. The input of the jth hidden layer neuron is

$$v_j(k) = \sum_{l=1}^{N} w^{1}_{jl}(k)\, x^{c}_{l}(k) + \sum_{q=1}^{R} w^{2}_{jq}(k)\, u_q(k), \quad j = 1, 2, \ldots, N, \tag{2}$$

where $x^{c}_{l}(k)$ is the output of the $l$th context layer neuron, $w^{1}_{jl}(k)$ is the weight from the $l$th context layer neuron to the $j$th hidden layer neuron, and $w^{2}_{jq}(k)$ is the weight from the $q$th input layer neuron to the $j$th hidden layer neuron.

Since the Chebyshev orthogonal basis functions are defined on the interval $[-1, 1]$, the input of the hidden layer must be normalized. The normalization of $v_j(k)$ is defined as

$$\bar{v}_j(k) = \frac{v_j(k)}{\max_{1 \le p \le N} \left|v_p(k)\right|}, \quad j = 1, 2, \ldots, N. \tag{3}$$

The output of the $j$th hidden layer neuron is

$$x_j(k) = f_j\!\left[\bar{v}_j(k)\right], \quad j = 1, 2, \ldots, N, \tag{4}$$

where $f_j(\cdot)$ denotes the first-kind Chebyshev orthogonal basis functions given in Figure 2.

3.1.3. Context Layer. The output of the context layer is

$$x^{c}_{l}(k) = \alpha\, x^{c}_{l}(k-1) + x_l(k-1), \quad l = 1, 2, \ldots, N, \tag{5}$$

where $0 \le \alpha \le 1$ is the self-connection feedback gain of the context layer. When $\alpha = 0$, the network reduces to the BENN.

3.1.4. Output Layer. The output $\hat{y}_i(k)$ of the IENN can be expressed as

$$\hat{y}_i(k) = \sum_{j=1}^{N} w^{3}_{ij}(k)\, x_j(k), \quad i = 1, 2, \ldots, M, \tag{6}$$

where $w^{3}_{ij}(k)$ denotes the weight from the $j$th hidden layer neuron to the $i$th output layer neuron.
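Equations (1)-(6) amount to a single recurrent forward step. The sketch below is our own illustration, under the assumption that hidden neuron $j$ uses the degree-$j$ first-kind polynomial $T_j$ as $f_j$, evaluated via the identity $T_n(x) = \cos(n \arccos x)$:

```python
import numpy as np

def ienn_forward(e, xc_prev, x_prev, W1, W2, W3, alpha):
    """One forward step of the IENN, following eqs (1)-(6).
    e: input (R,); xc_prev: context state x^c(k-1) (N,);
    x_prev: hidden output x(k-1) (N,);
    W1 (N,N) context->hidden, W2 (N,R) input->hidden, W3 (M,N) hidden->output."""
    xc = alpha * xc_prev + x_prev                # context layer, eq (5)
    v = W1 @ xc + W2 @ e                         # hidden net input, eq (2)
    v_bar = v / (np.max(np.abs(v)) + 1e-12)      # normalize into [-1, 1], eq (3)
    # Chebyshev activation x_j = T_j(v_bar_j), eq (4)
    j = np.arange(len(v))
    x = np.cos(j * np.arccos(np.clip(v_bar, -1.0, 1.0)))
    y_hat = W3 @ x                               # output layer, eq (6)
    return y_hat, x, xc

# Tiny smoke run with R = 2, N = 3, M = 1
rng = np.random.default_rng(0)
W1, W2, W3 = rng.normal(size=(3, 3)), rng.normal(size=(3, 2)), rng.normal(size=(1, 3))
y_hat, x, xc = ienn_forward(np.array([0.5, -0.2]), np.zeros(3), np.zeros(3),
                            W1, W2, W3, alpha=0.1)
```

Note how the context state `xc` carries the previous hidden output forward, so the prediction at step $k$ depends on the whole input history.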

3.2. Training Algorithm. The training of neural networks has developed rapidly in recent years [19-21]. The gradient descent (GD) algorithm, a basic approach for training neural networks in many areas, searches the parameter space of the network along the steepest descent direction to minimize the error between the network output and the desired output [22]. In the IENN, the GD algorithm is used to update the weights. Let the actual system output vector be $y(k) = [y_1(k), y_2(k), \ldots, y_M(k)]^T$ (where $T$ denotes transpose) and the IENN model output vector at the $k$th iteration be $\hat{y}(k) = [\hat{y}_1(k), \hat{y}_2(k), \ldots, \hat{y}_M(k)]^T$. The error function, namely the sum of squared errors (SSE), is defined as

$$\mathrm{SSE}(k) = \frac{1}{2}\left[y(k) - \hat{y}(k)\right]^T \left[y(k) - \hat{y}(k)\right]. \tag{7}$$

Taking the partial derivatives of the error function with respect to the weight parameters gives the weight increments

$$\begin{aligned}
\Delta w^{3}_{ij}(k) &= \eta_3 \left[y_i(k) - \hat{y}_i(k)\right] x_j(k), \\
\Delta w^{2}_{jq}(k) &= \eta_2 \sum_{i=1}^{M} \left[y_i(k) - \hat{y}_i(k)\right] w^{3}_{ij}(k)\, f'_j\!\left[\bar{v}_j(k)\right] u_q(k), \\
\Delta w^{1}_{jl}(k) &= \eta_1 \sum_{i=1}^{M} \left[y_i(k) - \hat{y}_i(k)\right] w^{3}_{ij}(k)\, \frac{\partial x_j(k)}{\partial w^{1}_{jl}(k)},
\end{aligned} \tag{8}$$

with

$$\frac{\partial x_j(k)}{\partial w^{1}_{jl}(k)} = f'_j\!\left[\bar{v}_j(k)\right] \left[x^{c}_{l}(k) + \alpha\, \frac{\partial x_j(k-1)}{\partial w^{1}_{jl}(k-1)}\right], \tag{9}$$

where $f'_j(\cdot)$ is the first derivative of $f_j(\cdot)$ evaluated at the normalized hidden-layer input $\bar{v}_j(k)$, and $\eta_1$, $\eta_2$, and $\eta_3$ are the learning rates of $w^{1}_{jl}$, $w^{2}_{jq}$, and $w^{3}_{ij}$, respectively.
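The recurrent derivative in (9) has the standard modified-Elman form: the current sensitivity of a hidden output to a context weight is a local term plus the previous sensitivity fed back through the context gain. Assuming that form, one accumulation step over all $(j, l)$ pairs can be sketched as:

```python
import numpy as np

def update_dx_dw1(dx_dw1_prev, fprime_vbar, xc, alpha):
    """Recurrent accumulation of the partials dx_j/dw^1_{jl} per eq (9):
    current sensitivity = f'_j(v_bar_j) * (x^c_l + alpha * previous sensitivity).
    dx_dw1_prev: (N, N) array of previous partials, indexed [j, l];
    fprime_vbar: (N,) values of f'_j at the normalized inputs;
    xc: (N,) current context-layer outputs."""
    return fprime_vbar[:, None] * (xc[None, :] + alpha * dx_dw1_prev)
```

With `alpha = 0` (the BENN case) the recursion collapses to the purely local term `f'_j * x^c_l`, which is why the BENN has a shorter effective memory.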

To analyze the error between the system output and the IENN output, the transient absolute error vector $\sigma(k)$ is defined as

$$\sigma(k) = \left|y(k) - \hat{y}(k)\right|. \tag{10}$$

The mean error of $\sigma(k)$ is

$$\bar{\sigma}(k) = \frac{1}{M}\sum_{i=1}^{M} \left|y_i(k) - \hat{y}_i(k)\right|. \tag{11}$$

3.3. Training Steps of the IENN. Using the GD method, the training steps that determine the optimal number of hidden layer neurons are as follows. The initial values of $\alpha$, $\eta_1$, $\eta_2$, and $\eta_3$ are obtained by repeated testing.

Step 1. Prepare the training input and output data. Set the initial number of hidden layer neurons to $N = 4$, the maximum number of hidden layer neurons to $N_{\max} = 100$, and the maximum iteration step to $K_{\max} = 100$. The threshold value of the SSE is $\epsilon_{\min}$.

Step 2. Set the self-connection feedback gain $\alpha = 0.1$; initialize the weights $w^{1}_{jl}(1)$, $w^{2}_{jq}(1)$, and $w^{3}_{ij}(1)$ to the constant 0 and their learning rates to $\eta_1 = \eta_2 = \eta_3 = 0.01$; set the partial derivative $\partial x_j(0)/\partial w^{1}_{jl}(0) = 0$ and the initial iteration step $k = 0$.

Step 3. Increment the iteration step, $k = k + 1$; if $k > K_{\max}$, go to Step 5. According to formulas (1) to (7), calculate the value of every neuron in every layer and the $\mathrm{SSE}(k)$ of the $k$th iteration step. If $\mathrm{SSE}(k)$ is less than $\epsilon_{\min}$, end the training process.

Step 4. Calculate the weight increments according to formula (8); then update the weights as $w^{1}_{jl}(k+1) = w^{1}_{jl}(k) + \Delta w^{1}_{jl}(k)$, $w^{2}_{jq}(k+1) = w^{2}_{jq}(k) + \Delta w^{2}_{jq}(k)$, and $w^{3}_{ij}(k+1) = w^{3}_{ij}(k) + \Delta w^{3}_{ij}(k)$. Return to Step 3.

Step 5. Increase the number of hidden layer neurons $N$; if $N > N_{\max}$, end the training process. Otherwise, return to Step 2.
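The control flow of Steps 1-5 can be illustrated as follows. This is our own simplification: the "hidden layer" is a fixed bank of Chebyshev features and only the output weights $w^3$ are trained with the increment of (8), whereas the full algorithm also updates $w^1$ and $w^2$ via (9); the data, layer sizes, and learning rate here are illustrative:

```python
import numpy as np

def cheb_features(x, n):
    # T_0..T_{n-1} of a 1-D input assumed to lie in [-1, 1]
    return np.stack([np.cos(j * np.arccos(np.clip(x, -1.0, 1.0)))
                     for j in range(n)], axis=-1)

def train_sketch(x, y, n_init=4, n_max=50, k_max=100, eps_min=1e-6, eta=0.05):
    """Grow the hidden layer from n_init (Steps 1-2), run at most k_max
    gradient-descent iterations per size (Steps 3-4), stopping early once
    SSE < eps_min; otherwise enlarge the hidden layer (Step 5)."""
    for n in range(n_init, n_max + 1):
        Phi = cheb_features(x, n)          # fixed hidden-layer outputs
        w3 = np.zeros(n)                   # zero initialization, Step 2
        for k in range(1, k_max + 1):      # Step 3
            err = y - Phi @ w3
            sse = 0.5 * err @ err
            if sse < eps_min:
                return n, k, w3            # converged: end training
            w3 += eta * Phi.T @ err        # Step 4: GD update of w^3
    return n, k, w3                        # budget exhausted

# Fit y = T_3(x): the sweep should stop at the first size that contains T_3
x = np.linspace(-1.0, 1.0, 20)
y = np.cos(3.0 * np.arccos(x))
n, k, w3 = train_sketch(x, y)
```

The nested loops mirror the paper's procedure: the inner loop is the GD iteration bounded by $K_{\max}$, the outer loop is the hidden-neuron sweep bounded by $N_{\max}$.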

4. Simulation Results and Analysis

To verify the correctness and reliability of the IENN model, the training sample sequences are obtained from the input $e$ of the half-bridge CDPA shown in Figure 3.

As shown in Figure 3, the PWM signal $q$ is produced by comparing the two-tone signal with the triangular signal. The frequency and amplitude of the triangular signal are $f_t = 400$ kHz and $AM_t = 9.6$ V. The output signal of the CDPA is denoted $y$. A group of training data is extracted at the sampling frequency $f_s = 1$ MHz. The testing data has the same length and sampling frequency as the training data; only the starting time differs. In the simulation results, the starting time of the testing data is taken as 0 ms.
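The comparator stage of Figure 3 can be sketched as follows. This is an idealized illustration (memoryless comparator, heavily oversampled 400 kHz carrier); the function names are ours:

```python
import numpy as np

def triangle_wave(t, f_t=400e3, amp=9.6):
    # Symmetric triangular carrier: -amp at the troughs, +amp at the peaks
    return amp * (4.0 * np.abs(f_t * t - np.floor(f_t * t + 0.5)) - 1.0)

def pwm(e, t):
    # Ideal comparator: q is high when the input exceeds the carrier
    return np.where(e > triangle_wave(t), 1.0, -1.0)

# Two-tone test input from Section 4.1 (f1 = 4.36 kHz, f2 = 30 kHz, 4 V each)
fs = 10e6                          # oversample well above the 400 kHz carrier
t = np.arange(0, 0.5e-3, 1 / fs)
e = 4.0 * np.sin(2 * np.pi * 4.36e3 * t) + 4.0 * np.sin(2 * np.pi * 30e3 * t)
q = pwm(e, t)
```

The local duty cycle of `q` tracks the instantaneous value of the two-tone input, which is the mechanism the behavioral models must reproduce.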

4.1. Optimal Number of Hidden Layer Neurons. To determine the optimal number of hidden layer neurons for the two models, the relationship between $\mathrm{SSE}(k)$ and the number of hidden layer neurons is studied using the training data of the two-tone signal, whose frequencies are $f_1 = 4.36$ kHz and $f_2 = 30$ kHz, with amplitudes $AM_1 = AM_2 = 4$ V and a signal length of 0.5 ms. With the maximum iteration step $K_{\max} = 100$ and no error threshold on $\mathrm{SSE}(k)$, the number of hidden neurons $N$ is increased from 5 to 100 in steps of 5; the resulting $\mathrm{SSE}(k)$ curves are shown in the upper part of Figure 4. The lower part of Figures 4(a) and 4(b) shows $\mathrm{SSE}(k)$ as a function of the number of hidden neurons at iteration step $K = 50$, with the number of hidden layer neurons increased from 5 to 50 in steps of 1.

It can be seen in the upper part of Figures 4(a) and 4(b) that the $\mathrm{SSE}(k)$ curves drop rapidly as the iteration step increases. The larger the number of hidden neurons $N$, the faster $\mathrm{SSE}(k)$ decreases and the fewer iteration steps are needed to reach the same SSE. The comparison between the BENN and the IENN shows that the IENN converges faster. For example, to reach the same $\mathrm{SSE}(k)$ of 50 with the same number of hidden neurons $N = 15$, the BENN needs about 55 iterations while the IENN needs only about 25; the computation of the IENN is thus reduced to almost half that of the BENN.

The lower part of Figures 4(a) and 4(b) shows that, for the same number of iterations, the IENN needs fewer hidden layer neurons than the BENN to achieve the same $\mathrm{SSE}(k)$. For instance, to reach a logarithmic $\mathrm{SSE}(k)$ of 0 dB, the BENN needs 30 hidden layer neurons while the IENN needs only 15.

Considering both the convergence rate and the computational cost, $N = 25$ is chosen as the number of hidden layer neurons in the following discussion.

4.2. Simulation Analysis of Four Models with Two-Tone Signal Input. The Volterra-Laguerre (VL) model [7] and the Chebyshev neural network (CNN) model [17, 23, 24] are introduced for comparison with the BENN and IENN models. The VL model, proposed in [7], has two parameters: the number of Laguerre orthogonal functions $K$ and the pole of the Laguerre functions $\lambda$ ($|\lambda| < 1$). When $K = 3$, this model cannot reconstruct the output well; here we choose $K = 5$ and $\lambda = 0.97$, which requires 605 parameters to be estimated. The CNN model in [23] employs a group of Chebyshev orthogonal polynomials to activate the hidden layer neurons, and its iterative training formula is obtained with the GD method. For the three neural network models, the number of hidden layer neurons is set to $N = 25$ and the iteration step to $K_{\max} = 50$. Using the two-tone signal as input, the time-domain simulation results of the four behavioral models are shown in Figure 5.

In Figure 5, the time-domain error is the transient error $y - \hat{y}$. The mean error $\bar{\sigma}$ and the maximum transient error $\sigma_{\max}$ of the four models with the two-tone signal input are listed in Table 1.

Figure 5 and Table 1 show that the IENN is the most accurate of the four models. The VL and CNN models cannot reconstruct the output signal accurately, and their transient error is very large at the beginning of the data. Both the BENN and the IENN have stable approximation capability; under the same conditions, the IENN is more precise than the BENN. The final maximum transient error of the BENN is 0.0391 V, while that of the IENN is only $1.78 \times 10^{-5}$ V.

The two-tone signal is often used to study the memory effect of nonlinear systems [4, 25] since the IMD of the signal is easy to measure. When the two-tone signal is used as training data, the frequency-domain simulation results of the four models are given in Figure 6.

In Figure 6(d), $f_1 = 4.36$ kHz and $f_2 = 30$ kHz are the frequencies of the input two-tone signal. $f_3 = f_2 - f_1$ and $f_4 = f_2 + f_1$ are the second-order IMD products (IMD2), and $f_5 = f_2 - 2f_1$ and $f_6 = f_2 + 2f_1$ are the third-order IMD products (IMD3). The existence of IMD means the system is nonlinear, and the asymmetry of the IMD demonstrates the memory effect of the system. The circuit output spectrum and the spectrum errors are listed in Table 2.
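The distortion frequencies follow directly from the two tone frequencies; a small helper (our own, using the paper's definitions of $f_3$ through $f_6$) makes the arithmetic explicit:

```python
def imd_products(f1, f2):
    """IMD2 and IMD3 frequencies of a two-tone signal, using the
    definitions of Figure 6(d): f3 = f2 - f1, f4 = f2 + f1,
    f5 = f2 - 2*f1, f6 = f2 + 2*f1 (frequencies in Hz)."""
    return {"IMD2": (f2 - f1, f2 + f1),
            "IMD3": (f2 - 2 * f1, f2 + 2 * f1)}

# For f1 = 4.36 kHz and f2 = 30 kHz:
products = imd_products(4.36e3, 30e3)
# IMD2 at 25.64 kHz and 34.36 kHz; IMD3 at 21.28 kHz and 38.72 kHz
```

Comparing the heights of each upper/lower pair in the output spectrum is what reveals the asymmetry attributed to memory effects.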

As shown in Figure 6 and Table 2, the spectrum error of the VL model at IMD2 and IMD3 is relatively large; the asymmetry between the upper and lower sidebands is weakened, and some of the memory effect characteristics are lost. The short memory length of the VL model is the reason, but the number of parameters in this model is already large, and the parameter count would grow rapidly if the memory length were increased. The spectrum of the CNN in Figure 6(b) shows that it has lost almost all the IMD information. Since the CNN model is a feedforward neural network, its output is related only to the input at the present moment; it cannot express the influence of previous inputs on the output, that is, the CNN model cannot demonstrate the memory effect. The spectrum errors of the BENN and IENN models are stable; under the same conditions, the spectrum error of the BENN is 0.011 dB and that of the IENN is $4.92 \times 10^{-6}$ dB. The IENN is much more accurate than the BENN.

4.3. Simulation Analysis of Four Models with LFM Signal Input. For further validation, an LFM signal with center frequency 30 kHz, amplitude 8.5 V, and bandwidth 4 kHz is used as the input $e$ of the half-bridge CDPA, with a training data length of 2.0 ms. The other simulation parameters are the same as above. Using the LFM signal as training samples, the time-domain simulation results of the four behavioral models are shown in Figure 7, and the frequency-domain results are given in Figure 8. The mean error $\bar{\sigma}$ and the maximum transient error $\sigma_{\max}$ of the four models in the time domain are listed in Table 3, and the average and maximum spectrum errors are listed in Table 4.

As shown in Figure 7 and Table 3, when the LFM signal is used as the input of the CDPA, the transient errors of the VL and CNN models are very large, and the output signal cannot be reconstructed accurately. Under the same conditions of $N = 25$ hidden neurons and $K_{\max} = 50$ iteration steps, the time-domain errors of the IENN and BENN models are essentially the same: the final maximum transient error of the BENN is $1.74 \times 10^{-5}$ V, while that of the IENN is $1.83 \times 10^{-5}$ V. Both the BENN and the IENN have stable approximation capability.

As shown in Figure 8 and Table 4, with the LFM training data the spectrum errors of the VL and CNN models are similarly very large, and the memory effect information is lost in the frequency domain. The spectrum errors of the BENN and IENN models are stable; under the same simulation conditions, the maximum spectrum error of the BENN is $4.66 \times 10^{-6}$ dB and that of the IENN is $4.92 \times 10^{-6}$ dB. The frequency-domain performance of the IENN and BENN models is almost the same.

4.4. Simulation Analysis of Four Models with 2PSK Signal Input. In further experiments, a 2PSK signal is used as the input $e$ of the half-bridge CDPA, with carrier frequency 20 kHz, amplitude 8.5 V, a 7-bit pseudorandom sequence (m-sequence) as the digital baseband signal, a baseband symbol width of 0.25 ms, and a testing data length of 1.75 ms. The other model parameters are again the same as above. With the 2PSK signal input, the time-domain simulation results of the four behavioral models are shown in Figure 9, and the frequency-domain results are given in Figure 10. The mean error $\bar{\sigma}$ and the maximum transient error $\sigma_{\max}$ of the four models in the time domain are listed in Table 5, and the average and maximum spectrum errors are listed in Table 6.

Figure 9 and Table 5 show that, with the 2PSK signal input, the VL and CNN models cannot reconstruct the CDPA output accurately, and their transient error remains very large. Under the same conditions of 25 hidden neurons and 50 iteration steps, the IENN model is more precise than the BENN model: the final maximum transient error of the BENN is 0.0398 V, while that of the IENN is only $1.81 \times 10^{-5}$ V. The IENN model is the most accurate of the four models.

As shown in Figure 10 and Table 6, with the 2PSK training samples the spectrum errors of the VL and CNN models are still very large, and these models cannot demonstrate the memory effect of the CDPA. The spectrum errors of the BENN and IENN models are steady; under the same conditions, the maximum spectrum error of the BENN is 0.011 dB and that of the IENN is $4.92 \times 10^{-6}$ dB. Here again, the IENN model is much more accurate than the BENN model.

The comparison of the four behavioral models under different input signals and identical simulation parameters shows that the proposed IENN model is the most accurate for analyzing the nonlinearity and memory effect of CDPAs in both the time domain and the frequency domain.

5. Conclusions

In this paper, a behavioral model based on the IENN is proposed to describe the nonlinearity and memory effect of CDPAs. In the IENN, a group of Chebyshev orthogonal basis functions is employed to activate the hidden layer neurons, which improves the learning speed and accuracy and simplifies the model structure. A self-connection of the context nodes is added to make the output more sensitive to the history of the input data.

The simulation results show that, to reach the same error threshold, the IENN needs fewer hidden layer neurons and fewer iteration steps than the BENN. The IENN therefore learns quickly and can meet the same requirements with a simpler network structure than many other neural networks. With the same number of hidden layer neurons and iterations, simulations using two-tone, LFM, and 2PSK training data show that the IENN is more accurate than the VL, CNN, and BENN models; it reconstructs the nonlinear CDPA system with almost no transient or spectrum error, and the memory effect is also reproduced. In summary, the proposed IENN model is an effective, efficient, and simple behavioral model for nonlinear systems.

http://dx.doi.org/10.1155/2014/271593

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work was supported in part by the Foundation of Key Laboratory of China's Education Ministry and A Project Funded by the Priority Academic Program Development of Jiangsu Higher Education Institutions.

References

[1] M. Margaliot and G. Weiss, "The low-frequency distortion in D-class amplifiers," IEEE Transactions on Circuits and Systems II: Express Briefs, vol. 57, no. 10, pp. 772-776, 2010.

[2] J. Yu, M. T. Tan, S. M. Cox, and W. L. Goh, "Time-domain analysis of intermodulation distortion of closed-loop class-D amplifiers," IEEE Transactions on Power Electronics, vol. 27, no. 5, pp. 2453-2461, 2012.

[3] H. Zhou, G. Wan, and L. Chen, "A nonlinear memory power amplifier behavior modeling and identification based on memory polynomial model in soft-defined shortwave transmitter," in Proceedings of the 2010 6th International Conference on Wireless Communications, Networking and Mobile Computing (WiCOM '10), pp. 1-4, September 2010.

[4] H. Ku and J. S. Kenney, "Behavioral modeling of nonlinear RF power amplifiers considering memory effects," IEEE Transactions on Microwave Theory and Techniques, vol. 51, no. 12, pp. 2495-2504, 2003.

[5] P. N. Landin, J. Fritzin, W. Van Moer, M. Isaksson, and A. Alvandpour, "Modeling and digital predistortion of class-D outphasing RF power amplifiers," IEEE Transactions on Microwave Theory and Techniques, vol. 60, no. 6, pp. 1907-1915, 2012.

[6] T. Liu, Y. Ye, X. Zeng, and F. M. Ghannouchi, "Memory effect modeling of wideband wireless transmitters using neural networks," in Proceedings of the 2008 4th IEEE International Conference on Circuits and Systems for Communications (ICCSC '08), pp. 703-707, May 2008.

[7] A. Zhu and T. J. Brazil, "RF power amplifier behavioral modeling using Volterra expansion with Laguerre functions," in Proceedings of the 2005 IEEE MTT-S International Microwave Symposium, pp. 963-966, June 2005.

[8] H. Ku, M. D. McKinley, and J. S. Kenney, "Quantifying memory effects in RF power amplifiers," IEEE Transactions on Microwave Theory and Techniques, vol. 50, no. 12, pp. 2843-2849, 2002.

[9] J. Li, J. Nan, and J. Zhao, "Study and simulation of RF power amplifier behavioral model based on RBF Neural network," in Proceedings of the 2010 International Conference on Microwave and Millimeter Wave Technology (ICMMT '10), pp. 1465-1467, May 2010.

[10] H. Ning, X. Jing, and L. Cheng, "Online identification of nonlinear spatiotemporal systems using kernel learning approach," IEEE Transactions on Neural Networks, vol. 22, no. 9, pp. 1381-1394, 2011.

[11] H. Li-Na and N. Jing-Chang, "Researches on GRNN neural network in RF nonlinear systems modeling," in Proceedings of the 2011 International Conference on Computational ProblemSolving (ICCP '11), pp. 577-580, October 2011.

[12] F. Mkadem and S. Boumaiza, "Physically inspired neural network model for RF power amplifier behavioral modeling and digital predistortion," IEEE Transactions on Microwave Theory and Techniques, vol. 59, no. 4, pp. 913-923, 2011.

[13] H. Zhao and J. Zhang, "Pipelined Chebyshev functional link artificial recurrent neural network for nonlinear adaptive filter," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 40, no. 1, pp. 162-172, 2010.

[14] M. Li, J. Liu, Y. Jiang, and W. Feng, "Complex-chebyshev functional link neural network behavioral model for broadband wireless power amplifiers," IEEE Transactions on Microwave Theory and Techniques, vol. 60, no. 6, pp. 1979-1989, 2012.

[15] Y.-C. Cheng, W.-M. Qi, and J. Zhao, "A new Elman neural network and its dynamic properties," in Proceedings of the 2008 IEEE International Conference on Cybernetics and Intelligent Systems (CIS '08), pp. 971-975, September 2008.

[16] Q. Song, "On the weight convergence of Elman networks," IEEE Transactions on Neural Networks, vol. 21, no. 3, pp. 463-480, 2010.

[17] L. L. Jiang, D. L. Maskell, and J. C. Patra, "Chebyshev functional link neural network-based modeling and experimental verification for photovoltaic arrays," in Proceedings of the 2012 Annual International Joint Conference on Neural Networks (IJCNN '12), pp. 1-8, June 2012.

[18] M. Li and Y. He, "Nonlinear system identification using adaptive Chebyshev neural networks," in Proceedings of the 2010 IEEE International Conference on Intelligent Computing and Intelligent Systems (ICIS '10), pp. 243-247, October 2010.

[19] A. Slowik, "Application of an adaptive differential evolution algorithm with multiple trial vectors to artificial neural network training," IEEE Transactions on Industrial Electronics, vol. 58, no. 8, pp. 3160-3167, 2011.

[20] X. Jing and L. Cheng, "An optimal PID control algorithm for training feedforward neural networks," IEEE Transactions on Industrial Electronics, vol. 60, no. 6, pp. 2273-2283, 2013.

[21] X. Jing, "Robust adaptive learning of feedforward neural networks via LMI optimizations," Neural Networks, vol. 31, pp. 33-45, 2012.

[22] F.-J. Lin, Y.-S. Kung, S.-Y. Chen, and Y.-H. Liu, "Recurrent wavelet-based Elman neural network control for multiaxis motion control stage using linear ultrasonic motors," IET Electric Power Applications, vol. 4, no. 5, pp. 314-332, 2010.

[23] X. Xiao, X. Jiang, and Y. Zhang, "An algorithm for designing Chebyshev neural network," in Proceedings of the 2009 Second ISECS International Colloquium on Computing, Communication, Control, and Management (CCCM '09), pp. 206-209, August 2009.

[24] L. Mu, H. Yi-Gang, T. Wen, and L. Zu-run, "Fast learning algorithm for controlling Logistic chaotic system based on Chebyshev neural network," in Proceedings of the 5th International Conference on Natural Computation (ICNC '09), pp. 149-153, August 2009.

[25] D. H. Wisell, B. Rudlund, and D. Ronnow, "Characterization of memory effects in power amplifiers using digital two-tone measurements," IEEE Transactions on Instrumentation and Measurement, vol. 56, no. 6, pp. 2757-2766, 2007.

Shao Jie, (1,2) Wang Li, (1,2) Zhao WeiSong, (1,2) Zhong YaQin, (1,2) and Reza Malekian (3)

(1) Key Laboratory of Radar Imaging and Microwave Photonics, Ministry of Education, College of Electronic and Information Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China

(2) Key Laboratory of Underwater Acoustic Signal Processing, Ministry of Education, Southeast University, Nanjing 210096, China

(3) Department of Electrical, Electronic and Computer Engineering, University of Pretoria, Pretoria 0002, South Africa

Correspondence should be addressed to Reza Malekian; reza.malekian@up.ac.za

Received 19 March 2014; Revised 19 May 2014; Accepted 26 May 2014; Published 18 June 2014

Academic Editor: Xingjian Jing

TABLE 1: Mean error $\bar{\sigma}$ and maximum transient error $\sigma_{\max}$ of four behavioral models with two-tone signal input.

| Model | $\bar{\sigma}$ (V) | $\sigma_{\max}$ (V) | Condition |
|---|---|---|---|
| VL | 0.9272 | 31.3400 | $K = 5$, $\lambda = 0.97$ |
| CNN | 5.9397 | 27.6531 | $N = 25$, $K_{\max} = 50$ |
| BENN | 0.0182 | 0.0391 | $N = 25$, $K_{\max} = 50$ |
| IENN | 8.2434e-06 | 1.7754e-05 | $N = 25$, $K_{\max} = 50$ |

TABLE 2: Spectrum of the circuit and spectrum error of four behavioral models with two-tone signal input. The BENN and IENN spectrum errors are reported as a single value for all six frequencies.

| Freq. | Circuit spectrum (dB) | VL (dB) | CNN (dB) | BENN (dB) | IENN (dB) |
|---|---|---|---|---|---|
| $f_1$ | 71.83 | -0.22 | 0.62 | | |
| $f_2$ | 65.15 | 0.24 | 2.39 | | |
| $f_3$ | 54.07 | -0.58 | 23.56 | 0.0108 | 4.919e-06 |
| $f_4$ | 46.33 | -3.19 | 5.55 | | |
| $f_5$ | 42.58 | -3.57 | 5.06 | | |
| $f_6$ | 45.01 | -4.56 | 7.17 | | |

TABLE 3: Mean error $\bar{\sigma}$ and maximum transient error $\sigma_{\max}$ of four behavioral models with LFM signal input.

| Model | $\bar{\sigma}$ (V) | $\sigma_{\max}$ (V) | Condition |
|---|---|---|---|
| VL | 0.9564 | 4.2593 | $K = 5$, $\lambda = 0.97$ |
| CNN | 11.0047 | 61.6258 | $N = 25$, $K_{\max} = 50$ |
| BENN | 1.1539e-05 | 1.7378e-05 | $N = 25$, $K_{\max} = 50$ |
| IENN | 1.2176e-05 | 1.8337e-05 | $N = 25$, $K_{\max} = 50$ |

TABLE 4: Spectrum error of four behavioral models with LFM signal input.

| Model | Average error (dB) | Max. error (dB) | Condition |
|---|---|---|---|
| VL | 2.3714 | 24.2021 | $K = 5$, $\lambda = 0.97$ |
| CNN | 33.0427 | 77.3836 | $N = 25$, $K_{\max} = 50$ |
| BENN | 4.6617e-06 | 4.6617e-06 | $N = 25$, $K_{\max} = 50$ |
| IENN | 4.9190e-06 | 4.9190e-06 | $N = 25$, $K_{\max} = 50$ |

TABLE 5: Mean error $\bar{\sigma}$ and maximum transient error $\sigma_{\max}$ of four behavioral models with 2PSK signal input.

| Model | $\bar{\sigma}$ (V) | $\sigma_{\max}$ (V) | Condition |
|---|---|---|---|
| VL | 1.0395 | 31.9100 | $K = 5$, $\lambda = 0.97$ |
| CNN | 8.6677 | 41.8680 | $N = 25$, $K_{\max} = 50$ |
| BENN | 0.0271 | 0.0398 | $N = 25$, $K_{\max} = 50$ |
| IENN | 1.2309e-05 | 1.8071e-05 | $N = 25$, $K_{\max} = 50$ |

TABLE 6: Spectrum error of four behavioral models with 2PSK signal input.

| Model | Average error (dB) | Max. error (dB) | Condition |
|---|---|---|---|
| VL | 8.1023 | 33.1085 | $K = 5$, $\lambda = 0.97$ |
| CNN | 14.5702 | 52.7683 | $N = 25$, $K_{\max} = 50$ |
| BENN | 0.0108 | 0.0108 | $N = 25$, $K_{\max} = 50$ |
| IENN | 4.9190e-06 | 4.9190e-06 | $N = 25$, $K_{\max} = 50$ |

Publication: The Scientific World Journal (Research Article), 2014.