# Neural network modeling with application of ultrasonic waves for estimation of carrageenan in aqueous solutions.

Introduction

Carrageenans consist of a main chain of D-galactose residues linked alternately α-(1→3) and β-(1→4). These are water-soluble natural polymers that occur in certain species of seaweeds. The fractions differ in the number and position of their sulfate groups and in the possible presence of a 3,6-anhydro bridge on the galactose linked through the 1- and 4-positions (Janaswamy and Chandrasekaran, 2001). Carrageenan can be quantified (Roberts, 1999) by various methods such as colorimetry, spectroscopy and chromatography. Colorimetric analysis (Soedjak, 1994) and chromatographic methods (Chopin and Whalen, 1993; Quemener and Marot, 2000; Taurino et al., 2003) require sample preparation and depolymerization. The aim of this work is to estimate the carrageenan concentration by applying ultrasonic waves coupled with simple neural network models (Panda and Chatterjee, 1992; Fausett, 1992): back propagation (BP), radial basis function (RBF) and two modified functional link neural network (FLNN) models.

Analysis of materials using sonic waves

Sound propagates about five times faster in water than in air because of the difference in the elasticities of the two media (Alrutz and Schroeder, 1983). Sonic waves propagate through most materials, allowing the analysis of a wide variety of samples, including optically non-transparent materials (O'Driscoll et al., 2003). They probe the elastic (rather than electric and magnetic) characteristics of materials, which are extremely sensitive to intermolecular interactions. Compression in the sonic wave changes the distances between the molecules of the sample (Lorimer and Mason, 1995), which respond with intermolecular repulsion.

The high sensitivity of the ultrasonic parameters to intermolecular interactions permits the ultrasonic analysis (Margulis, 1993; Taurino et al., 2003) of a broad range of molecular processes. High-frequency ultrasonic waves are relatively easy to generate, and their wavelength can be readily changed. This allows the construction of robust and multipurpose instruments (Buckin and Kudryashov, 2001; Buckin et al., 2002) that perform a multitude of analytical functions for fast, nondestructive analysis.

In the process industries, the online estimation of carrageenan using underwater acoustic techniques (Toubal et al., 2003) and artificial intelligence can be immensely useful for quality control.

Materials and Methods

Carrageenan extracted from the Euchema spinosum (Sabah seaweed) by the Borneo Marine Research Institute, University Malaysia Sabah, is used in this experiment. A hydrophone with the following operating specifications is used to measure the frequency spectrum of the sound level in the carrageenan solution.

Hydrophone Specifications

Operating depth: 700 m; Survival depth: 1000 m; Receiving sensitivity: -211 dB ± 3 dB; Transmitting sensitivity: 132 dB ± 3 dB.

dBFA software specifications

Manufacturer: 01dB-Stell (MVI Technologies), www.01dB-stell.com; Version: 4.2.10; Copyright: 2001; Memory size: 390 MB.

[FIGURE 1 OMITTED]

The experimental setup for the estimation of carrageenan uses an air sparger as the underwater sound generator and a wide-band hydrophone transducer as the receiving sensor of the sound signals, as shown in Figure 1. The source and the receiver can be moved along a straight line at constant depth to scan the bottom profile. A personal computer receives the sound signal from the hydrophone, and the signals are discretized. These signals are then analyzed using the decibel frequency analyzer (dBFA) software, and the frequency spectrum of the signal is obtained (Figure 2). The experiment is repeated for various distances between the air sparger and the hydrophone and for various carrageenan concentration levels. The network was trained and tested in the concentration range of 410 kg/m³ to 4735 kg/m³.

The one-third-octave band frequency spectrum at 4 cm for different concentration levels of carrageenan is shown in Figure 2. The sound pressure levels at the octave band frequencies (range 20 Hz to 20,000 Hz) and the distance between the air sparger and the hydrophone are then associated with the mass of the carrageenan.
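Since the spectra are reported in one-third-octave bands spanning 20 Hz to 20 kHz, the band centre frequencies can be derived from the standard base-2 convention. The sketch below is illustrative; the function name and defaults are assumptions, not from the paper:

```python
import math

# One-third-octave band centre frequencies (base-2 convention):
#   f_c(k) = f_ref * 2**(k/3), with f_ref = 1000 Hz.
# Rounding to the nearest band index k maps the audio range
# 20 Hz - 20 kHz onto the 31 nominal bands.
def third_octave_centres(f_lo=20.0, f_hi=20000.0, f_ref=1000.0):
    k_lo = round(3 * math.log2(f_lo / f_ref))   # index of the lowest band
    k_hi = round(3 * math.log2(f_hi / f_ref))   # index of the highest band
    return [f_ref * 2 ** (k / 3) for k in range(k_lo, k_hi + 1)]

bands = third_octave_centres()
```

The list spans the 31 nominal bands (20 Hz, 25 Hz, ..., 20 kHz), matching the 31 sound pressure levels used later as network inputs.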

[FIGURE 2 OMITTED]

[FIGURE 3 OMITTED]

Network Architecture

Back propagation

A simple feed-forward three-layer neural network (Pineda, 1987; Rigler et al., 1991) is considered for modeling the system. The network consists of n input neurons, p hidden neurons and m output neurons, as shown in Figure 3. The input and output values are normalized by the maximum-minimum method.

Network Training

In this method, a feed-forward neural network (FFNN) with 32 input neurons, 20 neurons in the hidden layer and 1 output neuron is considered. The hidden and output neurons are activated by the bipolar sigmoidal activation function given in Equation 1.

f(x) = 2 / (1 + e^(-x/q)) - 1 (1)

The initial weights are randomized between -0.5 and +0.5, and the input data are normalized using Equation 2. The normalized mean square error (NMSE) is calculated using Equation 3. Each trial consists of 50 sets of randomized weight samples. For each weight sample, the FFNN is trained by the BP algorithm with the learning rate (Cater, 1987; Weir, 1991) and momentum factor set to 0.1 and 0.4, respectively.

x_n = (x - x_min) / (x_max - x_min) (2)

where x_min is the minimum concentration, x_max the maximum concentration, x the actual concentration and x_n the normalized concentration.

NMSE = (1/N) Σ_(k=1..N) (t_k - y_k)² (3)

where N is the number of samples, t_k the normalized target and y_k the corresponding network output.
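Reading Equations 1-3 together, the preprocessing can be sketched as follows. The slope parameter q, the averaged NMSE form and the function names are assumptions, since the source equations are only partially legible:

```python
import math

def bipolar_sigmoid(x, q=1.0):
    # Eq. 1: bipolar sigmoid, output in (-1, 1); q is an assumed slope parameter
    return 2.0 / (1.0 + math.exp(-x / q)) - 1.0

def normalise(x, x_min, x_max):
    # Eq. 2: maximum-minimum (min-max) scaling of a concentration value
    return (x - x_min) / (x_max - x_min)

def nmse(targets, outputs):
    # Eq. 3: mean squared error over normalised targets and network outputs
    return sum((t - y) ** 2 for t, y in zip(targets, outputs)) / len(targets)
```

For the paper's concentration range, normalise(410, 410, 4735) gives 0.0 and normalise(4735, 410, 4735) gives 1.0, mapping the studied range onto [0, 1].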

Algorithm

Step 1: The weights are initialized randomly between -0.5 and 0.5 and then normalized.

Step 2: While the sum squared error is greater than the tolerance, do steps 3 to 8.

Step 3: For each training pair x:t, do steps 4 to 7.

Step 4: Compute the output signal for the hidden units z_j, j = 1, 2, ..., p:

z_in_j = Σ_i x_i v_ij

z_j = f(z_in_j)

Step 5: Compute the output signal for the output units y_k, k = 1, 2, ..., m:

y_in_k = Σ_j z_j w_jk

y_k = f(y_in_k)

Step 6: Compute δ_k for each output neuron y_k, k = 1, 2, ..., m:

δ_k = (t_k - y_k) f'(y_in_k)

For k = 1, 2, ..., m and j = 1, 2, ..., p, compute the change in weight Δw_jk and the new weight w_jk(new):

Δw_jk = α δ_k z_j + η Δw_jk(old)

w_jk(new) = w_jk(old) + Δw_jk

Step 7: Compute δ_j for each hidden neuron z_j, j = 1, 2, ..., p:

δ_in_j = Σ_k δ_k w_jk

δ_j = δ_in_j f'(z_in_j)

For j = 1, 2, ..., p and i = 1, 2, ..., n, compute the change in weight Δv_ij and the new weight v_ij(new):

Δv_ij = α δ_j x_i + η Δv_ij(old)

v_ij(new) = v_ij(old) + Δv_ij

Step 8: Test for stopping condition.
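The listed steps can be sketched as a per-pattern training loop. The toy 2-3-1 network, data and 200-epoch run below are illustrative, not the paper's 32-20-1 configuration:

```python
import math
import random

def f(x):
    # bipolar sigmoid (Equation 1 with q = 1)
    return 2.0 / (1.0 + math.exp(-x)) - 1.0

def f_prime(fx):
    # derivative of the bipolar sigmoid, expressed via f(x): f' = (1 - f^2)/2
    return 0.5 * (1.0 + fx) * (1.0 - fx)

def train_epoch(X, T, v, w, dv, dw, alpha=0.1, eta=0.4):
    sse = 0.0
    for x, t in zip(X, T):
        # Steps 4-5: forward pass through hidden layer and single output
        z = [f(sum(xi * v[i][j] for i, xi in enumerate(x))) for j in range(len(w))]
        y = f(sum(zj * w[j] for j, zj in enumerate(z)))
        # Step 6: output delta for the single output neuron
        delta_k = (t - y) * f_prime(y)
        # Step 7 deltas use the pre-update weights, as in the listed algorithm
        delta_hidden = [delta_k * w[j] * f_prime(z[j]) for j in range(len(w))]
        # Step 6: hidden-to-output updates with momentum
        for j in range(len(w)):
            dw[j] = alpha * delta_k * z[j] + eta * dw[j]
            w[j] += dw[j]
        # Step 7: input-to-hidden updates with momentum
        for j, delta_j in enumerate(delta_hidden):
            for i in range(len(x)):
                dv[i][j] = alpha * delta_j * x[i] + eta * dv[i][j]
                v[i][j] += dv[i][j]
        sse += (t - y) ** 2
    return sse  # Step 8: compared against the tolerance by the caller

random.seed(0)
n, p = 2, 3                                   # toy sizes, not the paper's 32-20-1
v = [[random.uniform(-0.5, 0.5) for _ in range(p)] for _ in range(n)]
w = [random.uniform(-0.5, 0.5) for _ in range(p)]
dv = [[0.0] * p for _ in range(n)]
dw = [0.0] * p
X, T = [[0.2, 0.8], [0.9, 0.1]], [0.5, -0.5]  # illustrative training pairs
errors = [train_epoch(X, T, v, w, dv, dw) for _ in range(200)]
```

The sum squared error per epoch decreases over training, which is the quantity Step 2 tests against the tolerance.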

Radial Basis Function Neural Networks

Figure 4 shows the basic structure of the radial basis function network (RBFN). The input nodes pass the input values to the connecting arcs, and the first-layer connections are not weighted. The hidden nodes are the radial basis function units. The second layer of connections is weighted, and the output nodes are simple summations. The RBFN (Kurdila et al., 1994; Chen et al., 1991) is used because, in higher dimensions, the input-output mapping is more likely to be learned than with other schemes that use relatively fewer variables. The hidden units, instead of just evaluating a weighted sum of their inputs, encode the inputs by computing how close they are to the centers of their receptive fields (Gorinevsky, 1994; Leonard and Kramer, 1991), and these responses are connected to the outputs.

The typical architecture of an RBFN consists of three layers: an input layer, a hidden layer and an output layer. The hidden layer consists of a number of RBF units (n_h) and a bias (b_k). Each hidden unit represents a single radial basis function, with an associated center position and width, and employs the radial basis function as a non-linear transfer function operating on the input data. The most often used RBF is the Gaussian function, characterized by a center (c_j) and a width (r_j).

The RBFN operates by measuring the Euclidean distance between the input vector x and the radial basis function center c_j and performing a non-linear transformation with the RBF in the hidden layer (Yao et al., 2002; Yao et al., 2001), as given in Equation 4.

[FIGURE 4 OMITTED]

h_j(x) = exp(-||x - c_j||² / r_j²) (4)

in which h_j denotes the output of the j-th RBF unit, and c_j and r_j are the center and width of the j-th RBF, respectively. The operation of the output layer is linear, as given in Equation 5.

y_k(x) = Σ_(j=1..n_h) w_kj h_j(x) + b_k (5)

where y_k is the k-th output for the input vector x, w_kj the weight connecting the k-th output unit and the j-th hidden unit, and b_k the bias. From Equations 4 and 5, designing an RBFN involves selecting the centers, the number of hidden units, the widths and the weights. There are various ways of selecting the centers, such as random subset selection, K-means clustering, the orthogonal least-squares learning algorithm and RBF-PLS. The widths can be chosen (Fu, 1994; Rigler et al., 1991) either the same for all units or different for each unit.
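A sketch of the RBFN forward pass of Equations 4 and 5, with illustrative centres, widths and weights rather than values fitted to the paper's data:

```python
import math

def rbf_forward(x, centres, widths, weights, bias):
    # Eq. 4: Gaussian hidden-unit outputs from squared Euclidean distance
    h = [math.exp(-sum((xi - ci) ** 2 for xi, ci in zip(x, c)) / (r * r))
         for c, r in zip(centres, widths)]
    # Eq. 5: linear output layer (weighted sum of hidden outputs plus bias)
    return sum(wj * hj for wj, hj in zip(weights, h)) + bias

centres = [[0.0, 0.0], [1.0, 1.0]]   # illustrative centre positions
widths = [1.0, 1.0]                  # one width per RBF unit
weights = [0.5, -0.5]                # hidden-to-output weights
y = rbf_forward([0.0, 0.0], centres, widths, weights, bias=0.1)
```

An input coinciding with a centre drives that unit's response to 1, while distant centres contribute exponentially less, which is the receptive-field behaviour described above.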

Network Training

A radial basis function neural network with 32 input neurons, 18 neurons in the hidden layer and 1 output neuron is considered. The input neurons represent the different octave band frequencies obtained from the frequency analyzer. The output neuron represents the concentration of the solution. The initial weights are normalized using Equation 2, and the NMSE is calculated using Equation 3.

FLNN with Hidden Layer (Method I)

Without loss of generality, and for simplicity, an FLNN (Pao and Takefuji, 1992) having m neurons in the hidden layer and one neuron in the output layer is considered. The network has (2n - 1) neurons in the input layer: the first n neurons receive the input signals x_1, x_2, ..., x_n, and the remaining (n - 1) neurons receive functional compositions of the input signals, namely x_1x_2, x_2x_3, ..., x_(n-1)x_n. Figure 5 depicts the proposed method I. The hidden and output neurons are activated by the bipolar sigmoidal activation function given in Equation 1. The functional link units for the input layer are generated so that, if there are n input neurons, there are (2n - 1) neurons in the functional link layer (Figure 5).

The network is trained using the error back propagation (EBP) procedure. Training takes place in three stages: feed-forward of the input training pattern, back propagation of the associated error, and weight adjustment. During feed-forward, each input neuron receives an input signal and broadcasts it to each hidden neuron, which in turn computes its activation and passes it on to each output unit. The output unit again computes its activation to obtain the net output. During training, the net output is compared with the target value and the error is calculated. From the error, the error gradients at the hidden and output neurons are calculated and the new weights are determined.

[FIGURE 5 OMITTED]

Network Training

This method uses a simple three-layer FLNN with 32 input neurons extended to 63 functional link neurons in the input layer, 20 neurons in the hidden layer and 1 neuron in the output layer. The 31 sound pressure levels corresponding to the octave band frequencies obtained from the frequency analyzer, together with the distance between the air sparger and the hydrophone, form the 32 original input features.

The functional composition of the 32 input neurons (the 31 adjacent-pair products, giving 63 functional link inputs in total) is used as the input to the FLNN and associated with the concentration of carrageenan in the solution.
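The adjacent-pair expansion can be sketched as follows; the function name is an assumption, and a 4-element vector is used for illustration:

```python
# Functional-link expansion used in method I: an n-element input vector is
# extended with the (n - 1) products of adjacent features, giving 2n - 1
# input units in total (for n = 32, this yields the 63 FLNN inputs).
def functional_link(x):
    return list(x) + [x[i] * x[i + 1] for i in range(len(x) - 1)]

expanded = functional_link([1.0, 2.0, 3.0, 4.0])
```

For a 4-element input this produces 7 units: the 4 originals followed by the 3 adjacent products.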

The following training algorithm is used to train the proposed method I. The nomenclature is given at the end of the paper.

Algorithm

Step 1: The weights are initialized randomly between - 0.5 and 0.5 and normalized.

Step 2: While the sum squared error is greater than the tolerance level, do steps 3 to 9.

Step 3: For each training pair (x: t), do steps 4 to 8.

Step 4: Generate the functional link input units n + 1 to 2n - 1:

x_(n+i) = x_i x_(i+1), i = 1, 2, ..., n - 1

Step 5: Compute the output signal for the hidden units z_j, j = 1, 2, ..., m:

z_in_j = Σ_i x_i u_ij, i = 1, 2, ..., 2n - 1

z_j = f(z_in_j)

Step 6: Compute the output signal for the output units y_k, k = 1, 2, ..., p:

y_in_k = Σ_j z_j w_jk

y_k = f(y_in_k)

Step 7: Compute δ_k, k = 1, 2, ..., p:

δ_k = (t_k - y_k) f'(y_in_k)

For k = 1, 2, ..., p and j = 0, 1, 2, ..., m, compute the change in weight Δw_jk and the new weight w_jk(new):

Δw_jk = α δ_k z_j + η Δw_jk(old)

w_jk(new) = w_jk(old) + Δw_jk

Step 8: Compute δ_j for j = 1, 2, ..., m:

δ_in_j = Σ_k δ_k w_jk

δ_j = δ_in_j f'(z_in_j)

For j = 1, 2, ..., m and i = 1, 2, ..., 2n - 1, compute the change in weight Δu_ij and the new weight u_ij(new):

Δu_ij = α δ_j x_i + η Δu_ij(old)

u_ij(new) = u_ij(old) + Δu_ij

Step 9: Test for stopping condition.

FLNN with Extended Functional Link at the Hidden Layer (Method II)

Figure 6 depicts the proposed method II. It consists of a three-layer network (Haykin, 1999) with input, hidden and output layers. The hidden and output neurons are activated by the bipolar sigmoidal activation function given in Equation 1.

[FIGURE 6 OMITTED]

The functional link units for the input and hidden layers are generated: if there are n input neurons, there are (2n - 1) neurons in the functional link input layer, and if there are m hidden neurons, there are (2m - 1) neurons in the functional link hidden layer.

Network Training

The proposed method II has three layers, with 32 input neurons extended to 63 functional link neurons in the input layer, 20 hidden neurons extended to 39 functional link neurons in the hidden layer, and 1 neuron in the output layer. The 31 sound pressure levels corresponding to the octave band frequencies obtained from the frequency analyzer, together with the distance between the air sparger and the hydrophone, form the 32 original input features; their functional composition (the adjacent-pair products) gives the 63 functional link inputs to the FLNN, which are associated with the concentration of carrageenan in the solution. Further, the functional composition of the signals from the 20 hidden neurons feeds the 39 functional link hidden neurons. The following algorithm is used to train method II.
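Method II's forward pass, with functional-link expansion at both the input and hidden layers, can be sketched as follows; all names and sizes are illustrative, not the paper's 32-20-1 configuration:

```python
import math

def f(x):
    # bipolar sigmoid (Equation 1 with q = 1)
    return 2.0 / (1.0 + math.exp(-x)) - 1.0

def functional_link(v):
    # extend a k-element vector with adjacent-pair products: 2k - 1 units
    return list(v) + [v[j] * v[j + 1] for j in range(len(v) - 1)]

def forward(x, u, w):
    xe = functional_link(x)   # 2n - 1 functional link input units
    z = [f(sum(xi * uij for xi, uij in zip(xe, row))) for row in u]
    ze = functional_link(z)   # 2m - 1 functional link hidden units
    return f(sum(zj * wj for zj, wj in zip(ze, w)))

n, m = 3, 2
u = [[0.1] * (2 * n - 1) for _ in range(m)]   # one weight row per hidden unit
w = [0.2] * (2 * m - 1)                       # weights from all FL hidden units
y = forward([0.5, -0.5, 0.25], u, w)
```

The expansion at the hidden layer mirrors the one at the input layer, so the output layer sees 2m - 1 signals instead of m.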

Algorithm

Step 1: The weights are initialized randomly between -0.5 and 0.5 and normalized.

Step 2: While the sum squared error is greater than the tolerance level, do steps 3 to 10.

Step 3: For each training pair (x : t), do steps 4 to 9.

Step 4: Generate the functional link input units n + 1 to 2n - 1:

x_(n+i) = x_i x_(i+1), i = 1, 2, ..., n - 1

Step 5: Compute the output signal for the hidden units z_j, j = 1, 2, ..., m:

z_in_j = Σ_i x_i u_ij, i = 1, 2, ..., 2n - 1

z_j = f(z_in_j)

Step 6: Generate the functional link hidden units m + 1 to 2m - 1:

z_(m+j) = z_j z_(j+1), j = 1, 2, ..., m - 1

Step 7: Compute the output signal for the output units y_k, k = 1, 2, ..., p:

y_in_k = Σ_j z_j w_jk, j = 1, 2, ..., 2m - 1

y_k = f(y_in_k)

Step 8: Compute δ_k, k = 1, 2, ..., p:

δ_k = (t_k - y_k) f'(y_in_k)

For k = 1, 2, ..., p and j = 1, 2, ..., 2m - 1, compute the change in weight Δw_jk and the new weight w_jk(new):

Δw_jk = α δ_k z_j + η Δw_jk(old)

w_jk(new) = w_jk(old) + Δw_jk

Step 9: Compute δ_j for j = 1, 2, ..., 2m - 1:

δ_in_j = Σ_k δ_k w_jk

δ_j = δ_in_j f'(z_in_j)

For j = 1, 2, ..., 2m - 1 and i = 1, 2, ..., 2n - 1, compute the change in weight Δu_ij and the new weight u_ij(new):

Δu_ij = α δ_j x_i + η Δu_ij(old)

u_ij(new) = u_ij(old) + Δu_ij

Step 10: Test for stopping condition.

Results and Discussion

The best network performance is obtained by choosing the optimal network architecture and network parameters. The study of the network structure includes the selection of the number of layers and the number of nodes in each layer, as shown in Figures 3, 4, 5 and 6. The ultrasonic sound signals were given as input values to the network, and the carrageenan concentration was determined as the output.

The three neural network models are trained with 90 samples and tested with 110 samples. The networks are trained with different learning rates and momentum factors; after a number of trials, the optimal learning rate and momentum factor are chosen as 0.01 and 0.45, respectively. The networks are trained with a training tolerance of 0.05 and a testing tolerance of 0.05. The resulting cumulative error versus epoch curves for the different network models are shown in Figures 7, 9, 11 and 13, and the actual carrageenan concentration is compared with the predicted value in Figures 8, 10, 12 and 14. The percentage success rates obtained from the methods are compared, and the radial basis function neural network is found to provide better classification than the other network models. The network training parameters and the computation time for training are shown in Table 1. From this table, it can be seen that the computational time for the RBF method is lower than that of the conventional back propagation and FLNN methods. The overall results illustrate that the RBF neural network model is an appropriate method to quantify the carrageenan in the process.

[FIGURE 7 OMITTED]

[FIGURE 8 OMITTED]

Conclusions

In this paper, new procedures are proposed to determine the carrageenan concentration based on sonometric techniques. BP, RBF and FLNN models are developed to obtain the carrageenan concentration from the one-third-octave band frequency spectrum. Of the proposed methods, the RBF neural network model was the best for estimating the carrageenan concentration in aqueous solutions, with a normalized mean square error of 0.001 and a computational time for network training of 26 s. The proposed RBF method can be extended to determine the concentration level of carrageenan in real-time processing. This approach offers an unprecedented tool for developing new carrageenan gel-based products in the process industries.

[FIGURE 9 OMITTED]

[FIGURE 10 OMITTED]

[FIGURE 11 OMITTED]

[FIGURE 12 OMITTED]

[FIGURE 13 OMITTED]

[FIGURE 14 OMITTED]

Nomenclature

n: Number of input neurons.

m: Number of hidden neurons.

p: Number of output neurons.

x: Input training vector: x = ([x.sub.1],[x.sub.2],[x.sub.3], ... ,[x.sub.n])

z_j: Hidden unit j.

y_k: Output neuron k.

t: Output target vector: t = ([t.sub.1],[t.sub.2],[t.sub.3], ... ,[t.sub.p])

u_ij: Connection weight from the i-th input neuron to the j-th hidden neuron.

w_jk: Connection weight from the j-th hidden neuron to the k-th output neuron.

u_ik: Connection weight from the i-th input neuron to the k-th output neuron.

w_ik: Connection weight from the i-th enhanced FLNN input neuron to the k-th output neuron.

z_in_j: Net input to the j-th hidden neuron.

z_j: Output of the j-th hidden neuron.

y_in_k: Net input to the k-th output neuron.

y_k: Output of the k-th output neuron.

δ_j: Portion of the error correction factor for u_ij.

δ_k: Portion of the error correction factor for w_jk.

α: Learning rate.

η: Momentum factor.

Acknowledgment

Preparation of this paper was supported by Research and Development Center, University Malaysia Sabah. The contents of the paper reflect the authors' personal opinions. We thank Mr. Muralindran A/L Mariappan for his technical help in this work.

References

Alrutz, H., R. Schroeder, 1983. A fast Hadamard transform method for the evaluation of measurements using pseudorandom test signals, Proc. of 11th International Congress on Acoustics, Paris, 6: 235-238.

Andrew J., Kurdila, Francis, J. Narcowich, Joseph, D.Ward, 1994. Persistency of Excitation, Identification and Radial Basis Function, Proceedings of the 33rd Conference on Decision and Control, Lake Buena Vista, Florida, 2273-2278.

O'Driscoll, B., C. Smyth, A.C. Aping, R.W. Visschers and V. Buckin, 2003. Ultrasonic spectroscopy for material analysis: recent advances, Spectroscopy Europe, 20: 20-25.

Chen, S., C.F.N.Cowan, P.M.Grant, 1991. Orthogonal Least Square Learning Algorithm for Radial Basis Function Networks, IEEE Transactions on Neural Networks, 2(2): 302 - 309.

Chopin, T., E. Whalen, 1993. A new and rapid method for carrageenan identification by FT IR diffuse reflectance spectroscopy directly on dried, ground algal material, Carbohydrate res, 246: 51-59.

Dimitry Gorinevsky, 1994. Learning Task Dependent Input shaping Control using Radial Basis Function Network, IEEE Transactions on Neural Networks, 2574 - 2579.

Fernando, J. Pineda, 1987. Generalization of Back-Propagation to Recurrent Neural Networks, Phys. Rev. Lett., 59(19): 2229-2232.

Ganapati Panda, Taposhi Chatterjee, 1992. Broadband Noise Cancellation using a Functional Link ANN based Non--linear filter, IEEE Transactions on Neural Networks, 4: 2061- 2066.

Janaswamy, S. and R. Chandrasekaran, 2001. Three-dimensional structure of the sodium salt of iota-carrageenan, Carbohyd. Res., 335: 181 - 194.

James, A., Leonard and Mark, A. Kramer, 1991. Radial Basis Function Networks for classifying process faults, IEEE Transactions on Control Systems, 11(3): 31-38.

John, P.Cater, 1987. Successfully using Peak Learning Rates of 10 (and greater) in Back Propagation Networks with the Heuristic Learning Algorithm, IEEE First International Conference on Neural Networks, San Diego, 645-651.

Lauren Faussett, 1992. Fundamentals of Artificial Neural Networks, Wiley Publishers.

Li Min Fu., 1994. Neural Networks in Computer Intelligence, New York, McGraw--Hill.

Lorimer, J.L. and T.J. Mason, 1995. Some Recent studies at Coventry University Sonochemistry centre, Ultrasonic Sonochemistry, 2: s55 - s57.

Toubal, M., B. Nongaillard, E. Radziszewski, P. Boulenguer and V. Langendor, 2003. Ultrasonic monitoring of sol-gel transition of natural hydrocolloids, Journal of Food Engineering, 58(1): 1-4.

Roberts, M.A. and B. Quemener, 1999. Measurement of carrageenans in food: challenges, progress, and trends in analysis, Trends in Food Science and Technology, 10(4-5): 169-181.

Margulis, M.A., 1993. Sonochemistry and Cavitation, Gordon and Breach Publishers, 1st edition, Luxembourg.

Michael, K., A. Weir, 1991. Method for Self-Determination of Adaptive Learning Rates in Back Propagation, Neural Networks, 4: 371-379.

Quemener, B., C. Marot, 2000. Quantitative analysis of hydrocolloids in food systems by methanolysis coupled to reverse HPLC: 1. gelling carrageenans, Food hydrocolloids, 14: 9- 17.

Richard, B., McLain, M.A. Henson, 1998. Process application of a nonlinear adaptive control strategy based on radial basis function networks, American Control Conference Proceedings, 4: 2098--2102.

Rigler, A.K., J.M. Irvine and T.P. Vogl, 1991. Rescaling of variables in back propagation learning, Neural Networks, 4: 225-229.

Haykin, S., 1999. Neural Networks: A Comprehensive Foundation, New Jersey: Prentice Hall.

Soedjak, H.S, 1994. Colorimetric determination of carrageenans and other anionic hydrocolloids with methylene blue, Analytical chemistry, 66: 4514 - 4521.

Taurino A.M., C. Distante, P. Siciliano, L. Vasanelli, 2003. Quantitative and qualitative analysis of VOCs mixtures by means of a microsensors array and different evaluation methods, Sensors and Actuators B., 93: 117-125.

Turquois, T. and Vera, 1996. Composition of carrageenan blends inferred from 13C-NMR and infrared spectroscopic analysis, Carbohydrate Polymers, 31(4): 269-278.

Buckin, V. and E. Kudryashov, 2001. Ultrasonic shear wave rheology of weak particle gels, Advances in Colloid and Interface Science, 89-90: 401-422.

Buckin, V., E. Kudryashov and B. O'Driscoll, 2002. High-resolution ultrasonic spectroscopy for material analysis, American Laboratory (Spectroscopy Perspectives Supplement), 28: 30-31.

Xiaojun Yao, Mancang Liu, Xiaoyun Zhang, Zhide Hu, 2002. Botao Fan, Radial basis function network-based quantitative structure-property relationship for the prediction of Henry's law constant, Analytica Chimica Acta, 462: 101-117.

Xiaojun Yao, Mancang Liu, Xiaoyun Zhang, Zhide Hu, Botao Fan, 2001. Prediction of enthalpy of alkanes by the use of radial basis function neural networks, Computers and Chemistry, 25: 475-482.

Xiaojun Yao, Xiaoyun Zhang, Ruisheng Zhang, Mancang Liu, Zhide Hu, Botao Fan, 2002. Radial basis function neural network based QSPR for the prediction of critical pressures of substituted benzenes, Computers and Chemistry, 26: 159-169.

Yoh Han Pao, Yoshiyasu Takefuji, 1992. Functional-link net computing: theory, system architecture, and functionalities, Computer, 25(5): 76 - 79.

Corresponding Author: Duduku Krishnaiah, Department of Chemical Engineering School of Engineering and Information Technology, University Malaysia Sabah, 88999, Kota Kinabalu, Sabah, Malaysia. Tel: +60-88-320000 ext 3059; Fax: +60-88-320348 E-mail: krishna@ums.edu.my

Duduku Krishnaiah, Awang Bono, Rosalam Sarbatly, Paulraj M. Pandiyan, C.Karthikeyan and D.M. Reddy Prasad

Department of Chemical Engineering Department of Computer Science and Engineering School of Engineering and Information Technology University Malaysia Sabah, 88999, Kota Kinabalu, Sabah, Malaysia

Duduku Krishnaiah, Awang Bono, Rosalam Sarbatly, Paulraj M. Pandiyan, C.Karthikeyan and D.M. Reddy Prasad: Neural Network Modeling with Application of Ultrasonic Waves for Estimation of Carrageenan in Aqueous Solutions: Adv. in Nat. Appl. Sci., 2(3): 152-164, 2008.

Table 1: Network Training Phase

| Parameters | Back propagation | Radial Basis | FLNN Method I | FLNN Method II |
|---|---|---|---|---|
| Number of input neurons | 32 | 32 | 32 | 32 |
| Number of FLNN input neurons | -- | -- | 63 | 63 |
| Number of hidden neurons | 20 | 20 | 20 | 20 |
| Number of FLNN hidden neurons | -- | -- | -- | 39 |
| Number of output neurons | 1 | 1 | 1 | 1 |
| Training tolerance | 0.05 | 0.05 | 0.05 | 0.05 |
| Testing tolerance | 0.05 | 0.05 | 0.05 | 0.05 |
| Momentum factor | 0.45 | -- | 0.45 | 0.45 |
| Learning rate | 0.05 | -- | 0.01 | 0.01 |
| Number of samples for training | 90 | 90 | 90 | 90 |
| Number of samples for testing | 110 | 110 | 110 | 110 |
| Number of epochs | 38746 | 1500 | 12959 | 7892 |
| Success rate | 84.54% | 95.45% | 91.09% | 92.91% |
| Normalized mean square error | 0.020698 | 0.001 | 0.0125 | 0.0025 |
| Computational time | 212 sec | 26 sec | 73 sec | 62 sec |
