
A Diagonally Weighted Binary Memristor Crossbar Architecture Based on Multilayer Neural Network for Better Accuracy Rate in Speech Recognition Application.

I. INTRODUCTION

Neural network architectures are widely used in many research fields such as speech recognition, image recognition, and robot control [1-3]. To implement neural networks in hardware, a large variety of analog and digital electronic implementations have been developed [4-6]. Here, synaptic weights can be stored in CMOS transistors. However, transistor size cannot be scaled down indefinitely to keep pace with the yearly growth in the number of CMOS transistors in these chips.

In conventional neural network hardware implementations, the synaptic weights are difficult to update in real time because of the characteristics of CMOS transistors: weight storage consumes a large amount of power and the synaptic weights are nonlinear [7]. Moreover, large-scale synaptic weighting multipliers are also very limited in hardware implementation due to bottleneck problems. To overcome these problems in neural network architectures, new nanotechnologies such as quantum dots [8] and resonant tunneling diodes [9] are being investigated to implement such systems more compactly.

In addition, the memristor, postulated in [10] and first fabricated by the Hewlett-Packard lab in 2008 [11], has opened a new research field of applying memristor devices to neural networks. A memristor is a memory resistor, capable of retaining its state after power is turned off. The memristor can be connected as a synaptic weight in a neural network, and its memristance can be changed and updated by the applied voltage, current, and timing during the training process. In digital circuits, the memristor is switched to either a high-resistance state (HRS) or a low-resistance state (LRS), so it can store '1' or '0' with these two states. The memristor thus plays the role of a two-terminal switch that changes its resistance between the high-resistance state (HRS, logic '0') and the low-resistance state (LRS, logic '1').

In particular, the memristor-based crossbar architecture has been a promising candidate for future computing systems thanks to its low power consumption, high density, and fault tolerance [12-16]. For example, the filamentary-switching binary 2M synapse, consisting of an M+ memristor crossbar array and an M- memristor crossbar array, is easier to fabricate than analog memristors for speech recognition applications [14]. In comparison, the twin memristor crossbar architecture, consisting of plus-polarity and minus-polarity connections, shows a better recognition rate for pattern recognition than the 2M crossbar array [15]. To reduce the array circuit size, a memristor-based crossbar array architecture with fewer memristors and transistors has been presented, in which both the plus-polarity and minus-polarity connection matrices are realized by a single crossbar and a simple constant-term circuit, thereby reducing the physical size and power dissipation [15-16]. However, these methods have been limited to single-layer neural networks performing fundamental processing tasks on binary and gray-scale images; larger-scale processing tasks, in both functionality and hardware implementation, are still desired.

The multilayer neural network has been proposed to extend the single-layer network to higher dimensions and to more complex multiprocessing tasks [17-20]. A multilayer neural network is able to process cascaded duties such as image segmentation, skeletonization, halftoning, and so on [17]. Thus, mathematical model analysis and algorithm design have been studied both in theory and in hardware implementations on FPGA platforms, which is essential for real applications [18]. Moreover, the artificial synapse connections of multilayer neural networks have been implemented on-chip using memristor bridge synapses, each consisting of multiple identical memristors arranged in a bridge-like architecture [19-20]. The bridge memristor synapse circuit can create both positive and negative weight values, which makes a neuromorphic system easy to implement in hardware. A memristor-based multilayer network with online gradient descent training was proposed using a single memristor and two CMOS transistors per synapse [21]. A memristor-based single-layer neural network has also been expanded into a multilayer neural network [22]. Here, a memristor-based AND logic switch is utilized to update the synaptic crossbar circuit with an adaptive backpropagation algorithm, which causes area and power overhead and suffers from the nonlinear characteristics of programming the memristors [22]. A memristor crossbar circuit acting as the synapse, which realizes signed synaptic weights, was proposed for a memristive multilayer neural network [23].

In addition, many researchers have used these binary memristor crossbars for multilayer neural network models in speech recognition applications based on matching the audio inputs against the stored synaptic weights [14-15]. However, this is not accurate if the input signal is unstable or the voice samples come from various people. If a test sample best matches a trained sample, its output is the largest. Nevertheless, the test voice is never entirely the same as the trained sample because of accent and dialect differences, so comparison results based on signal matching are not necessarily reliable. The signal comparison technique applied to the conventional memristor crossbar architecture therefore achieves a lower recognition rate than the neural network technique applied to the proposed diagonally weighted binary memristor crossbar architecture. The proposed technique is built from the binary-digit multiplication and summation expressions of the neural network model equations, which is more accurate than the conventional signal comparison technique.

In this paper, we propose a diagonally weighted binary crossbar array for the multilayer neural network model, including M+ and M- memristor arrays corresponding to positive and negative weights, respectively. Depending on the input bits, the memristors are arranged in a diagonal format so that the structure implements the expressions of the neural network model. In addition, statistical simulations are discussed and the results are summarized.

II. METHODS

The neural network model for the speech recognition application consists of two main processes: weight installation and recognition. First, in weight installation, the voice input is processed and the network is trained in a neural network model; each weight is quantized into m bits and the resulting bits are stored in binary memristors. Second, in the recognition process, the voice input is processed and applied to the weighted memristor array to determine the outputs.

In the first process, features are extracted from the voice signal by the Mel frequency cepstral coefficient (MFCC) method, including the steps of pre-processing, framing, windowing, DFT, and Mel-frequency log [24]. After that, they are trained in a neural network model in MATLAB. The output is '1' for the vowel being trained; otherwise, the output is '0'. The recognition process is performed for each sound. In the recognition process, the input is quantized into n bits. The input is normalized for training. After the training process for each vowel, we have 48 corresponding weights. The weights are scaled proportionally to the corresponding coefficients and then quantized into n bits. The bits 1 and 0 are stored in memristors with memristance values of R_on and R_off, respectively.
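For illustration, the following Python sketch shows how such a weight could be quantized into n bits and mapped to binary memristances. This is a minimal sketch under the assumption of uniform magnitude quantization, with the sign handled separately by the M+ and M- arrays; the function names and example values are illustrative and are not the authors' MATLAB code.

    R_ON, R_OFF = 10e3, 1e6   # LRS and HRS values quoted later in the paper (10 kOhm / 1 MOhm)

    def quantize_to_bits(w, n_bits):
        # Uniformly quantize the magnitude of a normalized weight in [-1, 1]
        # to an unsigned n-bit code (MSB first). The sign is assumed to be
        # handled by the M+ / M- split of the crossbar.
        levels = 2 ** n_bits - 1
        code = int(round(abs(w) * levels))
        return [(code >> k) & 1 for k in reversed(range(n_bits))]

    def bits_to_memristance(bits):
        # Map each stored bit to a memristance: '1' -> R_on (LRS), '0' -> R_off (HRS).
        return [R_ON if b else R_OFF for b in bits]

    bits = quantize_to_bits(0.6, n_bits=2)        # e.g. a trained weight of +0.6 -> [1, 0]
    print(bits, bits_to_memristance(bits))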

Previous studies have proposed crossbar architectures for speech recognition based on signal comparison. This is not accurate if the input signal is unstable or the voice samples come from various people. If the test sample best matches the trained sample, its output is the largest. Figure 1(a) shows a data input of '0011' and 4 columns storing the weights: the first column is '1111', the second is '0111', the third is '1001', and the fourth is '0011'. Here, a black square input represents the high logic level ('1') and a white square input represents the low logic level ('0'). A black circle memristor represents the low-resistance state and a white circle memristor represents the high-resistance state.

In Figure 1(b), the fourth column matches the input vector '0011' best: the number of matched cells reaches 4 in that column. By summing the results from the cells in the M+ and M- arrays, we can find the best-matched cells as illustrated in Figure 1(b), and hence determine which of the four columns best matches the input vector.

However, the test voice is not entirely the same as the trained sample, so the comparison results are not necessarily reliable. For example, in Figure 2 the data input is '0111' (7) and there are 4 columns: the first is '1000' (8), the second '0001' (1), the third '0100' (4), and the last '1111' (15). The data input of 7 is nearly equal to the first column value of 8. However, when we apply the conventional memristor crossbar architecture to the first column, the result is 0, contrary to expectation: the output of the first column is 0 even though the data input is almost the same as the first column. Thus, the conventional architecture leads to a low recognition rate. In addition, we cannot recognize many samples from different people with this architecture because each person has a distinct voice. Therefore, to raise the recognition rate over various human speakers, we propose a new memristor crossbar architecture based on the neural network model, in which each input is multiplied with each weight and the output value is the summation of the multiplication results.
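To make the limitation concrete, the following Python sketch compares the two scoring views on the first column of Figure 2. It is a simplified model in which the conventional score is the number of matching bit positions; the actual crossbar summation differs in circuit detail but shows the same all-or-nothing behaviour.

    def match_score(inp_bits, col_bits):
        # Conventional signal comparison: count the bit positions that match.
        return sum(i == c for i, c in zip(inp_bits, col_bits))

    def numeric_product(inp_bits, col_bits):
        # Proposed view: treat the patterns as binary numbers and multiply them,
        # as a neural network weight multiplication would.
        to_int = lambda bits: int("".join(map(str, bits)), 2)
        return to_int(inp_bits) * to_int(col_bits)

    inp = [0, 1, 1, 1]                       # '0111' = 7
    first_col = [1, 0, 0, 0]                 # '1000' = 8, numerically closest to 7
    print(match_score(inp, first_col))       # 0 matched bits -> output 0, a bad expectation
    print(numeric_product(inp, first_col))   # 7 * 8 = 56, a non-zero, graded response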

Figure 3 shows the basic neural network model: a set of inputs connects to the hidden-layer neurons, and the hidden-layer neuron outputs connect to the output-layer neurons. The output-layer neurons are calculated from the input according to the following expressions:

net_q = Σ_j v_qj · x_j + b_q (1)

z_q = logsig(net_q) (2)

y_i = Σ_q w_iq · z_q + b_i (3)

where x_j is the input, v_qj is the hidden-layer weight, b_q is the hidden-layer threshold, logsig is the activation function, w_iq is the output-layer weight, and b_i is the output-layer threshold.
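A compact Python sketch of Eqs. (1)-(3), with the 48-5-5 dimensions used later in the paper, is shown below; the random weights are only placeholders, since the real weights come from MATLAB training.

    import numpy as np

    def logsig(x):
        # logsig activation, Eq. (4)
        return 1.0 / (1.0 + np.exp(-x))

    def forward(x, V, b_hid, W, b_out):
        # x: input vector (48 MFCC features), V: hidden-layer weights (5 x 48),
        # b_hid: hidden thresholds, W: output-layer weights (5 x 5), b_out: output thresholds.
        net = V @ x + b_hid          # Eq. (1)
        z = logsig(net)              # Eq. (2)
        y = W @ z + b_out            # Eq. (3), linear output layer
        return y

    rng = np.random.default_rng(0)
    y = forward(rng.uniform(-1, 1, 48),
                rng.uniform(-1, 1, (5, 48)), rng.uniform(-1, 1, 5),
                rng.uniform(-1, 1, (5, 5)), rng.uniform(-1, 1, 5))
    print(int(y.argmax()))           # index of the recognized vowel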

Based on the above expressions, we propose a new crossbar architecture in which memristors act as the weights multiplied with the inputs. As a result, a novel binary memristor crossbar architecture based on the neural network model is presented, where each data input is multiplied with each weight. To multiply the input and the weight, the memristors are arranged as in Figure 4. In Figures 4(a) and 4(b), the data input is '0111' and the row weights are '0101' (5) and '1101' (13). This works like the multiplication of two 4-bit binary numbers, so we have 7 columns corresponding to the 7 bit-position factors 1, 2, 4, 8, 16, 32, and 64, respectively. In Figure 4(a), the result is 7 x 13 = 91; in Figure 4(b), the result is 7 x 5 = 35. These results show that if b < c then a*b < a*c, as expected for integer multiplication. In Figure 4(c), the data input is '1101' (13) and the weight is '0111' (7); the result is 13 x 7 = 91, the same as in Figure 4(a). This shows that a*b = b*a, i.e., multiplication of two integers is commutative.
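The diagonal placement can be modelled as the partial-product array of a shift-and-add multiplier: column k (position factor 2^k) collects every input-bit/weight-bit pair whose bit positions sum to k. A minimal Python sketch reproducing the three cases of Figure 4 is given below; the bit ordering and function name are assumptions of this sketch.

    def crossbar_multiply(input_bits, weight_bits):
        # Bits are written MSB first, as in the figures; reverse so index 0 is the LSB.
        a = input_bits[::-1]
        w = weight_bits[::-1]
        n = len(a)
        columns = [0] * (2 * n - 1)            # 7 columns for a 4-bit x 4-bit product
        for i in range(n):
            for j in range(n):
                columns[i + j] += a[i] * w[j]  # an LRS memristor contributes a '1'
        # Weight each column by its position factor 1, 2, 4, ..., 64 and sum.
        return sum(c * (2 ** k) for k, c in enumerate(columns))

    print(crossbar_multiply([0, 1, 1, 1], [1, 1, 0, 1]))   # 7 * 13 = 91, Fig. 4(a)
    print(crossbar_multiply([0, 1, 1, 1], [0, 1, 0, 1]))   # 7 * 5  = 35, Fig. 4(b)
    print(crossbar_multiply([1, 1, 0, 1], [0, 1, 1, 1]))   # 13 * 7 = 91, Fig. 4(c)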

In Figures 5(a) and 5(b), each weight is divided into two parts, the M+ and M- arrays. Subtracting the M- part from the M+ part yields the corresponding positive or negative weight, and the input is multiplied with this signed weight. In Figure 5(a), the input is '11' (3) and the weight is '01' interpreted as -1, so the result is -3. Similarly, in Figure 5(b), the input is 3 and the weight is +1, so the result is 3. Hence, these results show that the new memristor crossbar architecture operates like a neural network model and satisfies Eq. (1).
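A short sketch of this signed-weight behaviour, under the assumption that the M+ and M- halves each store an unsigned binary magnitude and the effective weight is their difference, is:

    def signed_weight_product(input_val, m_plus_bits, m_minus_bits):
        # Effective weight = value(M+) - value(M-); the input is multiplied
        # by the signed difference of the two halves.
        to_int = lambda bits: int("".join(map(str, bits)), 2)
        return input_val * (to_int(m_plus_bits) - to_int(m_minus_bits))

    print(signed_weight_product(3, [0, 0], [0, 1]))   # weight -1 -> output -3, Fig. 5(a)
    print(signed_weight_product(3, [0, 1], [0, 0]))   # weight +1 -> output +3, Fig. 5(b)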

Figure 6 shows the circuit that implements Eq. (2). The logsig function circuit is designed with inputs V_p and V_n, where V_n is the inverted V_p. V_dc is added to increase the threshold value. The simulation result of this logsig function is shown in Figure 7.

The activation function is logsig. The expression of the logsig function is:

g(x) = 1 / (1 + e^(-x)) (4)


III. SIMULATION RESULTS

Our research focuses on recognizing five vowels, 'a', 'e', 'i', 'o', and 'u', from the human voice. To do this, features are first extracted from the voice signal by MFCC, giving 48 feature values. Then, they are trained with the neural network model to recognize the five vowels. After training, the input-layer weights are quantized into n bits and the output-layer weights are quantized into m bits, and these weight values are stored in a binary memristor crossbar array. By doing so, we can recognize each vowel by multiplying the input signal with the weights stored in the binary memristors; the vowel whose summed multiplication result is the largest among the five outputs is taken as the recognized input.

The features are extracted from the voice signal by the MFCC method, including the steps of pre-processing, framing, windowing, DFT, and Mel-frequency log. Then, they are trained in a neural network model in MATLAB. The neural network model has five neurons in the hidden layer and the transfer function is logsig. The output is '1' for the vowel being trained; otherwise, the output is '0'. The input is quantized into p bits with 2^p levels, and the training input values are normalized in the range from -1 to 1. In this case, we choose p = 2. After the training process for each vowel, we have 2 x 48 x 5 weights in the input layer and 5 x 5 weights in the hidden layer. The weights are quantized into two bits in the input layer and two bits in the hidden layer. The bits 1 and 0 are stored in memristors with memristance values of R_on and R_off, respectively. The multilayer neural network model includes 48 inputs, five neurons in the hidden layer, and five outputs. The activation function is logsig in the hidden layer and linear in the output layer.

Figure 8 shows the proposed binary memristor crossbar in the neural network model. There are 48 input channels and five column groups, V_1 to V_5, corresponding to the five neurons in the hidden layer. Each column group contains six columns, divided into M+ and M- arrays. The memristors are arranged diagonally to realize the signed weight values. Each column in the M+ or M- array has a corresponding bit-position factor; from left to right, the factors are 4, 2, and 1, respectively. Next, three subtraction circuits determine the outputs after subtracting M- from M+. Then, three multiplier circuits, corresponding to the three position factors 4, 2, and 1, recover the value of the multiplication between the inputs and the weights. Finally, an adder circuit adds the three multiplier outputs to the threshold voltage; the adder output gives the net value of Eq. (1), so the net value is realized by the new binary memristor crossbar architecture. Next, an activation function circuit realizes Eq. (2) and produces Z_1. Similarly, the weights in the output layer are quantized into two bits and arranged in horizontal rows. The network output is generated by a group of two subtraction circuits, two multiplier circuits with the position factors 2 and 1, and an adder circuit.
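Behaviourally, one column group can be summarized as follows. This is a functional sketch, not a circuit netlist; the column voltages, threshold, and position factors used here are placeholders.

    import numpy as np

    def column_group_output(v_plus, v_minus, threshold, factors=(4, 2, 1)):
        # v_plus / v_minus: voltages of the three M+ and three M- columns of one group.
        h = np.asarray(v_plus) - np.asarray(v_minus)      # three subtraction circuits
        s = float(np.dot(factors, h)) + threshold          # three multipliers + adder, Eq. (1)
        return 1.0 / (1.0 + np.exp(-s))                    # activation circuit, Eq. (2)

    z1 = column_group_output(v_plus=[0.8, 0.3, 0.6],       # illustrative values only
                             v_minus=[0.2, 0.3, 0.1],
                             threshold=-0.5)
    print(z1)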

Figure 9(a) shows the schematic of the proposed binary memristor crossbar circuit in detail. The circuit has 48 input channels with 2-bit binary data in each channel, and each 2-bit binary weight is stored in a row. Each 2-bit weight is set up by four memristors, so each row has four memristors in each column group. The second row is shifted left to create six columns, and the two rows are divided into two parts that are programmed as either positive or negative weights. R_g is used to convert the column current into a voltage. The outputs of the first column group V_1 are V+_1,4, V+_1,2, V+_1,1, V-_1,4, V-_1,2, and V-_1,1. Figure 9(b) shows a subtraction circuit, which subtracts M- from M+: H_1,4 = V+_1,4 - V-_1,4 with R_1 = R_2 = R_3 = R_4. Similarly, we obtain H_1,2 and H_1,1.

Figure 9(c) shows a multiplier circuit. The multiplier output is G_1,4 = (1 + R_6/R_5)H_1,4, which corresponds to the multiplier factor of 4 when R_6 = 3R_5. Similarly, we obtain G_1,2 and G_1,1 with R_6 = R_5 and R_6 = 0, respectively. Figure 9(d) shows an adder circuit in which b_1,1 is the threshold voltage, giving S_1 = G_1,4 + G_1,2 + G_1,1 + b_1,1. Therefore, S_1 is the net value of Eq. (1). Figure 9(e) is an activation function circuit whose output is Z_1, as in Eq. (2). Thus, the design meets the circuit requirements of the input layer. We obtain the five values Z_1, Z_2, Z_3, Z_4, and Z_5 corresponding to the five neurons in the hidden layer.

Another memristor crossbar and subtraction circuit act as the hidden-layer weights. Similarly, a multiplier circuit and an adder circuit work in the same way as in the input layer. The outputs charge up the capacitors C_a, C_e, C_i, C_o, and C_u, which represent the five vowels 'a', 'e', 'i', 'o', and 'u'.

The capacitor among C_a, C_e, C_i, C_o, and C_u that charges fastest determines the vowel corresponding to the human-voice input.

The capacitor C_a is charged up according to the weight summation at node VC_a. If the weight summation at VC_a is large, C_a is charged to V_CC very quickly; if it is small, it takes longer to charge C_a to V_CC, as shown in Figure 10. Then VC_a, VC_e, VC_i, VC_o, and VC_u are compared with a reference voltage V_ref through the comparators I_1, I_2, I_3, I_4, and I_5, as shown in Figure 9(m). If VC_a is bigger than V_ref, D_a becomes high. On the other hand, while VC_e, VC_i, VC_o, and VC_u remain smaller than V_ref, the outputs D_e, D_i, D_o, and D_u stay low. G_1, G_2, and G_3 are OR gates, and a delay time τ between G_4 and G_5 creates a narrow CLK pulse. FF_a, FF_e, FF_i, FF_o, and FF_u are flip-flops with inputs D_a, D_e, D_i, D_o, and D_u. The simulation waveforms of VC_a, VC_e, VC_i, VC_o, and VC_u are shown in Figure 10. Here, VC_a is charged to V_CC faster than the other capacitor nodes VC_e, VC_i, VC_o, and VC_u, so the vowel 'a' is the best match among the vowels. Then, all capacitors are discharged in the next pulse to prepare for testing the next input.

The timing diagram in Figure 11 shows the important signals. When the CLK signal is high, all the capacitor nodes VC_a, VC_e, VC_i, VC_o, and VC_u are charged toward V_CC. At this time, VC_a, VC_e, VC_i, VC_o, and VC_u are higher than V_ref, so D_a, D_e, D_i, D_o, and D_u are at the high level. If C_a is charged to V_CC faster than C_e, C_i, C_o, and C_u, then VC_a reaches a higher voltage level than VC_e, VC_i, VC_o, and VC_u. When VC_a exceeds V_ref, D_a becomes high, so D_a is also the fastest rising signal among D_a, D_e, D_i, D_o, and D_u. Because D_a generates the locking pulse that clocks the D flip-flop circuits FF_1, FF_2, FF_3, FF_4, and FF_5, the FF_1 register produces a high-level output signal. Therefore, we can determine that this vowel matches the voice input. The D_a signal drives Output_a to the high level while the other output signals remain at the low level, as shown in Figure 11.
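The capacitor race can be modelled behaviourally as below. This is a sketch with assumed RC values and an assumed linear dependence of the charging rate on the weight summation; it only illustrates that the largest output crosses V_ref first and would be latched by its flip-flop.

    import numpy as np

    def first_to_charge(weight_sums, v_cc=1.0, v_ref=0.7, r=1e3, c=1e-9, dt=1e-9):
        # Each output node charges its capacitor toward V_CC at a rate that grows
        # with its weight summation; the first node to cross V_ref wins the race.
        vowels = ['a', 'e', 'i', 'o', 'u']
        v = np.zeros(len(weight_sums))
        t = 0.0
        while not np.any(v >= v_ref):
            t += dt
            v += (dt / (r * c)) * np.asarray(weight_sums) * (v_cc - v)
        return vowels[int(np.argmax(v))], t

    print(first_to_charge([0.9, 0.4, 0.3, 0.5, 0.2]))      # the vowel 'a' is recognized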

Table I shows the number of memristors used and the average recognition rate corresponding to the number of input bits, input-layer weight bits, and hidden-layer weight bits. The statistics cover input bit widths from 1 to 5, input-layer weight widths from 2 to 4 bits, and a hidden-layer weight width of 2 bits. Based on the table, the first row already achieves a good recognition rate using only a few bits: the input uses one bit, the input-layer weights use two bits, and the hidden-layer weights use two bits. Compared with the previous research [15], the recognition rate increases from 89.6% to 94% while the memristor hardware is reduced by more than 50%: the proposed circuit consumes 1010 memristors whereas the conventional scheme uses 2560 memristors.

Figure 12 shows the statistical distribution of the memristance variation. The memristances of 1 MΩ (HRS) and 10 kΩ (LRS) have a standard deviation (σ) of 10%. The statistical variation is evaluated by Monte Carlo simulation, which estimates the tolerance of the recognition rate when the memristance variation ranges from 1% to 15%. As shown in Figure 13, the recognition rate of the proposed binary memristor crossbar decreases only slightly, from 94% to 93.7%, as the memristance variation increases from 1% to 15%.
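The variation study can be reproduced in outline with a simple Monte Carlo sketch. The nominal HRS/LRS values and σ follow the text; the column-current model and sample counts are assumptions for illustration only.

    import numpy as np

    rng = np.random.default_rng(42)

    def sample_memristances(bits, hrs=1e6, lrs=10e3, sigma=0.10):
        # Each device deviates from its nominal HRS (1 MOhm) or LRS (10 kOhm)
        # value with a relative standard deviation sigma (10% in Fig. 12).
        nominal = np.where(np.asarray(bits) == 1, lrs, hrs)
        return nominal * rng.normal(1.0, sigma, size=nominal.shape)

    def column_current(input_volts, memristances):
        # Ohmic column current: sum of V / R over the 48 rows of one column.
        return float(np.sum(np.asarray(input_volts) / memristances))

    bits = rng.integers(0, 2, size=48)            # stored pattern of one column
    trials = [column_current(np.ones(48), sample_memristances(bits)) for _ in range(1000)]
    print(np.mean(trials), np.std(trials))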

IV. CONCLUSION

We proposed a diagonally weighted binary memristor crossbar architecture based on the multilayer neural network model for a better accuracy rate in speech recognition. The proposed crossbar architecture acts as a binary multiplier circuit between the inputs and the weights of a neural network model, and the signed weights are stored in binary memristor arrays. Combined with the activation function, the proposed circuit is implemented to perform speech recognition of five vowels. As the number of input bits increases from 1 to 5, the number of input-layer weight bits from 2 to 4, and with 2 hidden-layer weight bits, the memristor count increases from 1010 to 9650 and the recognition rate increases from 94% to 96.6% over 1000 tested samples. The recognition rate of the binary memristor crossbar decreases only slightly, from 94% to 93.7%, when the memristance variation ranges from 1% to 15%.

REFERENCES

[1] A. Waibel, T. Hanazawa, G. Hinton, K. Shikano, and K. J. Lang, "Phoneme recognition using time-delay neural networks," IEEE Trans. Acoust. Speech Signal Process., vol. 37, no. 3, pp. 328-339, Mar. 1989. doi: 10.1109/29.21701.

[2] H. A. Rowley, S. Baluja, and T. Kanade, "Neural network-based face detection," IEEE Trans. Pattern Anal. Mach. Intell., vol. 20, no. 1, pp. 23-38, Jan. 1998. doi: 10.1109/34.655647

[3] R. Fierro and F. L. Lewis, "Control of a nonholonomic mobile robot using neural networks," IEEE Trans. Neural Netw., vol. 9, no. 4, pp. 589-600, Jul. 1998. doi: 10.1109/72.701173

[4] J. B. Lont and W. Guggenbuhl, "Analog CMOS implementation of a multilayer perceptron with nonlinear synapses," IEEE Trans. Neural Netw., vol. 3, no. 3, pp. 457-465, May 1992. doi: 10.1109/72.129418.

[5] A. J. Montalvo, R. S. Gyurcsik, and J. J. Paulos, "Toward a general-purpose analog VLSI neural network with on-chip learning," IEEE Trans. Neural Netw., vol. 8, no. 2, pp. 413-423, Mar. 1997. doi: 10.1109/72.557695

[6] T. Shima, T. Kimura, Y. Kamatani, T. Itakura, Y. Fujita, and T. Iida, "Neuro chips with on-chip back-propagation and/or Hebbian learning," IEEE J. Solid-State Circuits, vol. 27, no. 12, pp. 1868-1876, Dec. 1992. doi: 10.1109/4.173117.

[7] L. Gatet, H. Tap-Beteille, and F. Bony, "Comparison Between Analog and Digital Neural Network Implementations for Range-Finding Applications," IEEE Trans. Neural Netw., vol. 20, no. 3, pp. 460-470, Mar. 2009. doi: 10.1109/TNN.2008.2009120

[8] W. H. Lee and P. Mazumder, "Motion Detection by Quantum-Dots-Based Velocity-Tuned Filter," IEEE Trans. Nanotechnol., vol. 7, no. 3, pp. 355-362, May 2008. doi: 10.1109/TNANO.2007.915019

[9] P. Mazumder, S. Li, and I. E. Ebong, "Tunneling-Based Cellular Nonlinear Network Architectures for Image Processing," IEEE Trans. Very Large Scale Integr. VLSI Syst., vol. 17, no. 4, pp. 487-495, Apr. 2009. doi: 10.1109/TVLSI.2009.2014771

[10] L. Chua, "Memristor-The missing circuit element," IEEE Trans. Circuit Theory, vol. 18, no. 5, pp. 507-519, Sep. 1971. doi: 10.1109/TCT.1971.1083337.

[11] D. B. Strukov, G. S. Snider, D. R. Stewart, and R. S. Williams, "The missing memristor found," Nature, vol. 453, no. 7191, pp. 80-83, May 2008. doi: 10.1038/nature06932.

[12] M. Hu, H. Li, Y. Chen, Q. Wu, G. S. Rose, and R. W. Linderman, "Memristor Crossbar-Based Neuromorphic Computing System: A Case Study," IEEE Trans. Neural Netw. Learn. Syst., vol. 25, no. 10, pp. 1864-1878, Oct. 2014. doi: 10.1109/TNNLS.2013.2296777.

[13] D. Chabi, Z. Wang, C. Bennett, J. Klein, and W. Zhao, "Ultrahigh Density Memristor Neural Crossbar for On-Chip Supervised Learning," IEEE Trans. Nanotechnol., vol. 14, no. 6, pp. 954-962, Nov. 2015. doi: 10.1109/TNANO.2015.2448554.

[14] S. N. Truong, S.-J. Ham, and K.-S. Min, "Neuromorphic crossbar circuit with nanoscale filamentary-switching binary memristors for speech recognition," Nanoscale Res. Lett., vol. 9, no. 1, p. 629, 2014. doi: 10.1186/1556-276X-9-629.

[15] S. N. Truong, S. Shin, S.-D. Byeon, J. Song, H.-S. Mo, and K.-S. Min, "Comparative Study on Statistical-Variation Tolerance Between Complementary Crossbar and Twin Crossbar of Binary Nano-scale Memristors for Pattern Recognition," Nanoscale Res. Lett., vol. 10, Oct. 2015. doi: 10.1186/s11671-015-1106-x.

[16] H. M. Vo, "Training On-chip Hardware with Two Series Memristor Based Backpropagation Algorithm," in 2018 IEEE Seventh International Conference on Communications and Electronics (ICCE), 2018, pp. 179-183. doi: 10.1109/CCE.2018.8465750.

[17] H. Harrer, "Multiple layer discrete-time cellular neural networks using time-variant templates," IEEE Trans. Circuits Syst. II Analog Digit. Signal Process., vol. 40, no. 3, pp. 191-199, Mar. 1993. doi: 10.1109/82.222818.

[18] S. N. Truong, K. V. Pham, W. Yang, and K. Min, "Sequential Memristor Crossbar for Neuromorphic Pattern Recognition," IEEE Trans. Nanotechnol., vol. 15, no. 6, pp. 922-930, Nov. 2016.

[19] S. P. Adhikari, H. Kim, R. K. Budhathoki, C. Yang, and L. O. Chua, "A Circuit-Based Learning Architecture for Multilayer Neural Networks With Memristor Bridge Synapses," IEEE Trans. Circuits Syst. Regul. Pap., vol. 62, no. 1, pp. 215-223, Jan. 2015. doi: 10.1109/TCSI.2014.2359717

[20] D. Soudry, D. D. Castro, A. Gal, A. Kolodny, and S. Kvatinsky, "Memristor-Based Multilayer Neural Networks With Online Gradient Descent Training," IEEE Trans. Neural Netw. Learn. Syst., vol. 26, no. 10, pp. 2408-2421, Oct. 2015. doi: 10.1109/TNNLS.2014.2383395

[21] H. Kim, M. P. Sah, C. Yang, T. Roska, and L. O. Chua, "Neural Synaptic Weighting With a Pulse-Based Memristor Circuit," IEEE Trans. Circuits Syst. Regul. Pap., vol. 59, no. 1, pp. 148-158, Jan. 2012. doi: 10.1109/TCSI.2011.2161360

[22] Y. Zhang, X. Wang, and E. G. Friedman, "Memristor-Based Circuit Design for Multilayer Neural Networks," IEEE Trans. Circuits Syst. Regul. Pap., vol. 65, no. 2, pp. 677-686, Feb. 2018. doi: 10.1109/TCSI.2017.2729787.

[23] X. Hu, G. Feng, S. Duan, and L. Liu, "A Memristive Multilayer Cellular Neural Network With Applications to Image Processing," IEEE Trans. Neural Netw. Learn. Syst., vol. 28, no. 8, pp. 1889-1901, Aug. 2017. doi: 10.1109/TNNLS.2016.2552640.

[24] L. Muda, M. Begam, and I. Elamvazuthi, "Voice Recognition Algorithms using Mel Frequency Cepstral Coefficient (MFCC) and Dynamic Time Warping (DTW) Techniques," Journal of Computing, vol. 2, no. 3, pp. 138-143, Mar. 2010.

Minh-Huan VO

Ho Chi Minh City University of Technology and Education, 01 Vo Van Ngan, Thu Duc District, Ho Chi Minh City, Viet Nam. huanvm@hcmute.edu.vn

Digital Object Identifier 10.4316/AECE.2019.02010
TABLE I: STATISTICS OF THE NUMBER OF MEMRISTORS AND RECOGNITION RATE
BASED ON THE NUMBER OF BITS USED

# input bits   # bits in input layer   # bits in hidden layer   # memristors   Average recognition rate (%)
1              2                       2                        1010           94
2              2                       2                        1970           94.8
3              2                       2                        2930           95.6
4              2                       2                        3890           95.4
4              3                       2                        5810           95.6
5              2                       2                        4850           95.4
5              3                       2                        7250           96
5              4                       2                        9650           96.6