# Prediction of passenger flow on the highway based on the least square support vector machine

1. Introduction

The prediction of passenger flow on the highway is an important dynamic analysis task. It plays a key role in the rational allocation of resources and in managing the investment structure of a corporation. However, the factors influencing passenger flow are complex. Traditional prediction methods include straight-line extrapolation, exponential smoothing, regression analysis and the moving average (Chatfield 2003). These models have limitations in solving highly nonlinear problems, so they cannot meet practical requirements. In recent years, prediction methods such as the grey system (Lin, Liu 2005; Zhang, Shi 2005) and neural networks (Vlahogianni et al. 2005; Junevicius, Bogdevicius 2009; Cigizoglu 2005; Ma et al. 2004) have been introduced. These methods handle nonlinear problems well; nevertheless, they have shortcomings. Although artificial neural networks have nonlinear mapping ability and generalization capacity, training is slow and the learning process easily converges to a local minimum, so accuracy cannot be guaranteed. In addition, an artificial neural network can only guarantee empirical risk minimization under conditions of limited samples; with many input dimensions it easily runs into the curse of dimensionality, and its results are hard to generalize and explain because of overfitting (Peng et al. 2007). The essence of grey prediction is exponential growth prediction (Xie, Liu 2005): it requires the original time series to be a non-negative monotonic function following an exponential law, and this condition often fails to hold. For these reasons, it is necessary to seek new ways to accurately predict passenger flow on the highway.

The Support Vector Machine (SVM), based on statistical learning theory, was originally proposed by Vapnik (1998) and is a new type of classification and regression tool. As one of the best small-sample learning methods, SVM has been successfully applied in areas such as pattern recognition, function approximation and financial time series (Kim, Sohn 2009; Vapnik 1998). Compared with neural networks, SVM handles problems of small sample size, high dimensionality, nonlinearity and local minima well (Suykens, Vandewalle 1999). However, when dealing with large sample problems, SVM faces difficulties. Its quadratic programming (QP) solver must operate on the kernel matrix in each iteration, and the memory needed for this matrix grows with the square of the number of samples; owing to the accumulation of iteration error, the accuracy of the algorithm may become unacceptable. The least squares support vector machine (LS-SVM) is an extension of SVM. Compared with standard SVM, LS-SVM substitutes equality constraints for the inequality constraints of the SVM formulation, so the QP problem is transformed directly into the solution of a set of linear equations.

The article is divided into five sections. Section 2 contains the basic principle of SVM and LS-SVM. Section 3 constructs the prediction model of passenger flow on the highway according to characteristics of highway passenger transport. Section 4 introduces passenger flow on Hangzhou highway, makes a prediction and presents experiment results. Finally, Section 5 summarizes the paper.

2. Introduction to SVM and LS-SVM

SVM (Cao, Tay 2003) is a new machine learning method based on statistical learning theory. The basic idea of SVM regression (Vapnik 1999) is to map the input space into a higher-dimensional feature space by a nonlinear mapping. In that space, SVM uses the principle of structural risk minimization to construct a linear decision function and performs linear regression in the new feature space.

2.1. SVM

Given a sample dataset:

$D = \{(x_1, y_1), \ldots, (x_i, y_i), \ldots, (x_l, y_l)\} \subset (X \times Y)^l$, (1)

where: $x_i \in R^n$ are the input feature vectors; $y_i \in R$ are the target values; $i = 1, 2, \ldots, l$.

Our goal is to construct a regression function that represents the dependence of sample output y on inputs x. Let's define the form of this function as:

$f(x) = \omega \cdot x + b$, (2)

where: $\omega$ is the weight vector; $b$ is the bias.

In order to measure deviations between the estimate and the target value, we first define the $\varepsilon$-insensitive loss function in the following form:

$R(y, x) = |y_i - (\omega \cdot x_i + b)|_{\varepsilon}$. (3)

Then we find the optimal $\omega$ and $b$ that minimize the empirical risk function $R_{emp}$, i.e. the average of the loss function:

$R_{emp} = \min \frac{1}{l}\sum_{i=1}^{l} |y_i - (\omega \cdot x_i + b)|_{\varepsilon}$. (4)

Supposing that all training data can be fitted without error by the linear function of Eq. (2) within the tolerance $\varepsilon$, the regression problem is converted into minimizing the decision function expressed by Eq. (5):

$\min\limits_{\omega, b, \xi, \xi^*} \frac{1}{2}\|\omega\|^2 + c\sum_{i=1}^{l}(\xi_i + \xi_i^*)$

subject to: $y_i - (\omega \cdot x_i + b) \le \varepsilon + \xi_i$; $(\omega \cdot x_i + b) - y_i \le \varepsilon + \xi_i^*$; $\xi_i, \xi_i^* \ge 0$, $i = 1, 2, \ldots, l$, (5)

where: $c > 0$; $\xi_i$ and $\xi_i^*$ are the slack variables for the upper and lower deviations. The first term of Eq. (5) is responsible for finding a smooth solution, while the second minimizes the training errors ($c$ is the trade-off parameter between the two terms).
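Before turning to the dual problem, the $\varepsilon$-insensitive loss of Eq. (3) and the empirical risk of Eq. (4) can be sketched in a few lines (a minimal illustration of ours, not code from the original article; function and variable names are assumptions):

```python
import numpy as np

def eps_insensitive_loss(y_true, y_pred, eps=0.1):
    """Vapnik's epsilon-insensitive loss: deviations within eps cost nothing,
    larger deviations cost their excess over eps (Eq. (3))."""
    return np.maximum(np.abs(y_true - y_pred) - eps, 0.0)

def empirical_risk(y_true, y_pred, eps=0.1):
    """Average epsilon-insensitive loss over the training set (Eq. (4))."""
    return eps_insensitive_loss(y_true, y_pred, eps).mean()
```

With `eps=0.1`, a prediction within 0.1 of the target contributes zero risk, which is exactly what allows the SVM solution to stay sparse.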

Consequently, Lagrange can be formed as:

$L = \frac{1}{2}\|\omega\|^2 + c\sum_{i=1}^{l}(\xi_i + \xi_i^*) - \sum_{i=1}^{l}\alpha_i[\varepsilon + \xi_i - y_i + (\omega \cdot x_i + b)] - \sum_{i=1}^{l}\alpha_i^*[\varepsilon + \xi_i^* + y_i - (\omega \cdot x_i + b)] - \sum_{i=1}^{l}(\gamma_i\xi_i + \gamma_i^*\xi_i^*)$, (6)

where: $\alpha_i, \alpha_i^* \ge 0$; $\gamma_i, \gamma_i^* \ge 0$; $i = 1, 2, \ldots, l$.

According to the duality theorem, Eq. (6) can be converted into the following dual problem with an objective function and constraints:

$\max W(\alpha, \alpha^*) = -\frac{1}{2}\sum_{i=1}^{l}\sum_{j=1}^{l}(\alpha_i - \alpha_i^*)(\alpha_j - \alpha_j^*)(x_i \cdot x_j) + \sum_{i=1}^{l}y_i(\alpha_i - \alpha_i^*) - \varepsilon\sum_{i=1}^{l}(\alpha_i + \alpha_i^*)$

subject to: $\sum_{i=1}^{l}(\alpha_i - \alpha_i^*) = 0$; $0 \le \alpha_i, \alpha_i^* \le c$, (7)

where: $\alpha_i$ and $\alpha_i^*$ are the Lagrange multipliers.

After calculating Eq. (7), the coefficient of the regression equation in Eq. (2) is as follows:

$\omega = \sum_{i=1}^{l}(\alpha_i - \alpha_i^*)x_i$. (8)

2.2. LS-SVM Regression

LS-SVM is a variant of SVM (Burges 1998) proposed by Suykens. The standard SVM regression of Vapnik is modified to transform the QP problem into a linear problem (Suykens, Vandewalle 1999, 2000). These modifications are formulated in the definition of LS-SVM as follows:

$\min J(\omega, \xi) = \frac{1}{2}\omega^T\omega + \frac{\gamma}{2}\sum_{i=1}^{l}\xi_i^2$ (9)

subject to: $y_i = \omega^T\varphi(x_i) + b + \xi_i$, $i = 1, 2, \ldots, l$,

where: $\omega$ is the weight vector; $\xi_i$ is the error variable; $b$ is the bias and $\gamma$ is an adjustable regularization constant.

From Eq. (9), the following Lagrange function can be formed:

$L(\omega, b, \xi, \alpha) = J(\omega, \xi) - \sum_{i=1}^{l}\alpha_i[\omega^T\varphi(x_i) + b + \xi_i - y_i]$, (10)

where: $\alpha_i$ are the Lagrange multipliers.

Conditions for optimality are the following:

$\frac{\partial L}{\partial \omega} = 0 \Rightarrow \omega = \sum_{i=1}^{l}\alpha_i\varphi(x_i)$; $\frac{\partial L}{\partial b} = 0 \Rightarrow \sum_{i=1}^{l}\alpha_i = 0$; $\frac{\partial L}{\partial \xi_i} = 0 \Rightarrow \alpha_i = \gamma\xi_i$; $\frac{\partial L}{\partial \alpha_i} = 0 \Rightarrow \omega^T\varphi(x_i) + b + \xi_i - y_i = 0$, (11)

where: $i = 1, 2, \ldots, l$.

The corresponding linear equation set (a Karush-Kuhn-Tucker system, see Suykens et al. 2002) is:

$\begin{bmatrix} 0 & 1_v^T \\ 1_v & \Omega + \gamma^{-1}I \end{bmatrix}\begin{bmatrix} b \\ \alpha \end{bmatrix} = \begin{bmatrix} 0 \\ y \end{bmatrix}$, (12)

where: $y = (y_1, \ldots, y_l)^T$; $1_v = (1, 1, \ldots, 1)^T$; $\alpha = (\alpha_1, \ldots, \alpha_l)^T$; $\Omega_{ij} = k(x_i, x_j) = \varphi(x_i)\cdot\varphi(x_j)$ is the kernel matrix.

Finally, the regression model for LS-SVM can be obtained in the form of:

$y(x) = \sum_{i=1}^{l}\alpha_i k(x_i, x) + b$. (13)
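The linear system of Eq. (12) and the resulting model of Eq. (13) can be implemented directly. The sketch below is our own minimal version (an RBF kernel is assumed and the parameter values are illustrative); it shows how LS-SVM training reduces to solving one set of linear equations instead of a QP:

```python
import numpy as np

def rbf_kernel(X1, X2, sigma=1.0):
    """RBF kernel matrix K[i, j] = exp(-||x_i - x_j||^2 / sigma^2)."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / sigma ** 2)

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    """Solve the KKT linear system of Eq. (12) for (b, alpha)."""
    l = len(y)
    A = np.zeros((l + 1, l + 1))
    A[0, 1:] = 1.0                                  # row: 1_v^T
    A[1:, 0] = 1.0                                  # column: 1_v
    A[1:, 1:] = rbf_kernel(X, X, sigma) + np.eye(l) / gamma
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    return sol[1:], sol[0]                          # alpha, b

def lssvm_predict(X_train, alpha, b, X_new, sigma=1.0):
    """Evaluate the regression model of Eq. (13)."""
    return rbf_kernel(X_new, X_train, sigma) @ alpha + b
```

With a large `gamma` the regularization term $\gamma^{-1}I$ becomes negligible and the model nearly interpolates the training targets, which is a quick sanity check for the solver.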

3. Construction of the Regression Model Based on LS-SVM

According to the basic principle of the LS-SVM regression problem, the regression process is shown in Fig. 1. The actual steps are as follows:

Step 1: The determination of influencing factors and data samples. Based on the goals of prediction, we determine the factors that influence prediction goals and form training and testing datasets.

Step 2: Scaling data. In order to increase computing speed and prediction accuracy and to avoid extreme characteristic values, we scale the sample dataset. The transformation equation is as follows:

$x'_{ij} = \frac{x_{ij} - x_{i\min}}{x_{i\max} - x_{i\min}}$, (14)

where: $x'_{ij}$ is the scaled value; $x_{i\min}$ and $x_{i\max}$ are the minimum and maximum of the $i$-th attribute (including passenger flow), respectively.
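Eq. (14) is ordinary column-wise min-max normalization; a one-function sketch (ours, not the authors' code):

```python
import numpy as np

def min_max_scale(X):
    """Column-wise min-max scaling to [0, 1], as in Eq. (14)."""
    Xmin = X.min(axis=0)
    Xmax = X.max(axis=0)
    return (X - Xmin) / (Xmax - Xmin)
```

Each attribute's minimum maps to 0 and its maximum to 1, so no attribute dominates purely by its numeric range.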

Step 3: Determine the input vectors of LS-SVM and establish the mapping $R^m \to R$ from input vector $x_n = [x_{n-1}, x_{n-2}, \ldots, x_{n-m}]$ to output $y_n = x_n$, where $m$ is the embedding dimension. The function is expressed as:

$x_n = f(x_{n-1}, x_{n-2}, \ldots, x_{n-m})$. (15)
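The embedding of Eq. (15) builds each training pair from a sliding window of the series. A minimal sketch (ours; the window is stored in ascending time order, which carries the same information as the descending order written in Eq. (15)):

```python
import numpy as np

def embed_series(x, m):
    """Build (input, target) pairs with embedding dimension m:
    each input is the m previous values, the target is the next value."""
    X = np.array([x[i:i + m] for i in range(len(x) - m)])
    y = np.asarray(x[m:])
    return X, y
```

For a nine-year series and m = 2, this yields seven input/output pairs, which is the kind of small-sample setting LS-SVM is intended for.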

[FIGURE 1 OMITTED]

Step 4: Selecting the kernel function. The selection of the kernel function and its parameters directly influences the generalization capacity of LS-SVM. In the LS-SVM model, the commonly used kernel functions include the following:

* Linear:

$K(x, y) = x \cdot y$; (16)

* Polynomial:

$K(x, x_i) = [(x \cdot x_i) + 1]^q$; (17)

* Radial basis function (RBF):

$K(x, x_i) = \exp\left(-\frac{\|x - x_i\|^2}{\sigma^2}\right)$; (18)

* Sigmoid:

$K(x, x_i) = \tanh[v(x \cdot x_i) + a]$. (19)
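The four kernels of Eqs (16)-(19) transcribe directly into code (our sketch; `v` and `a` are the sigmoid kernel's gain and offset parameters):

```python
import numpy as np

def linear_kernel(x, y):
    """Eq. (16): plain dot product."""
    return np.dot(x, y)

def polynomial_kernel(x, x_i, q=2):
    """Eq. (17): polynomial kernel of degree q."""
    return (np.dot(x, x_i) + 1.0) ** q

def rbf_kernel(x, x_i, sigma=1.0):
    """Eq. (18): radial basis function kernel."""
    return np.exp(-np.sum((x - x_i) ** 2) / sigma ** 2)

def sigmoid_kernel(x, x_i, v=1.0, a=0.0):
    """Eq. (19): sigmoid kernel."""
    return np.tanh(v * np.dot(x, x_i) + a)
```

Note that the RBF kernel always evaluates to a value in (0, 1], which is the numerical-stability property cited later in Section 4.4.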

Step 5: Parameter optimization and model application. We train on the sample dataset using the LS-SVM model. Through cross validation, we find the best training parameters of LS-SVM, including the penalty coefficient ($C$) and the kernel function parameter ($\sigma$), and then input the testing dataset to predict passenger flow using Eq. (13).

Step 6: Performance evaluation. The accuracy evaluation of the model is as follows:

* Prediction error:

$E_{MA} = \left|\frac{y_i - y_i^*}{y_i}\right|$; (20)

* Root mean square error:

$E_{MSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}(y_i - y_i^*)^2}$; (21)

* Prediction accuracy:

$A = 1 - E_{MSE} = 1 - \sqrt{\frac{1}{n}\sum_{i=1}^{n}(y_i - y_i^*)^2}$, (22)

where: $y_i$ is the actual value of the sample; $y_i^*$ is the value estimated by the LS-SVM model; $n$ is the number of predicted values.
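The evaluation measures of Eqs (20)-(22) can be sketched as follows (our own helper functions; Eq. (20) is computed per sample, matching the per-year errors reported in Table 3):

```python
import numpy as np

def relative_error(y, y_hat):
    """Per-sample relative prediction error, Eq. (20)."""
    return np.abs(y - y_hat) / y

def rmse(y, y_hat):
    """Root mean square error, Eq. (21)."""
    return np.sqrt(np.mean((y - y_hat) ** 2))

def prediction_accuracy(y, y_hat):
    """Prediction accuracy, Eq. (22): A = 1 - E_MSE."""
    return 1.0 - rmse(y, y_hat)
```

Since the accuracy of Eq. (22) subtracts an unnormalized RMSE from 1, it is only meaningful when the targets are on the scaled (0, 1) range of Eq. (14).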

4. Empirical Case Studies

4.1. Architecture of the Index System

The prediction of passenger flow on the highway is based on historical data and influencing factors, and must therefore accord with the characteristics and development trend of passenger flow on the highway (Junevicius, Bogdevicius 2009).

In general, a highway traffic system is a complex one (Junevicius, Bogdevicius 2007). Many factors influence the prediction of passenger flow on the highway (Fig. 2), including economic factors (for example, the urban economic development level) and non-economic factors (urban scale and urban population):

1. Urban economic development level. Demand for highway passenger transport comes mainly from two fields: production and consumption. Economic development not only increases production activities but also stimulates consumer travel, driving a stable increase in the number of travelling passengers.

2. Urban scale and urban population. Quantitative and structural changes in the urban population cause changes in traffic demand. Normally, when the frequency of travelling remains unchanged, population growth causes a rise in highway passenger flow. In addition, as the non-agricultural population grows, rural surplus labour transfers to the town, resulting in an increase in passenger flow on the highway.

[FIGURE 2 OMITTED]

4.2. Data Preparation

The data presented in the article describe passenger flow on the highway as registered in the Hangzhou Statistical Yearbook (1998-2008) and are provided by the Hangzhou Bureau of Statistics (http://www.hzstats.gov.cn). We selected eight prediction indexes that greatly influence passenger flow: population (agricultural and non-agricultural), gross domestic product (primary, secondary and tertiary industry), per capita disposable income, total retail sales of social consumer goods, and civilian vehicle ownership. Specific descriptions of the prediction indexes are given in Table 1 and of the sample data in Fig. 3.

4.3. Data Preprocessing

First, we construct a training dataset of eight inputs and one output referring to the data obtained within the period 2000-2005. Then, we make the corresponding testing sample set using data collected for the period 2006-2008 and apply the LS-SVM model for training and testing.

Second, we scale the training and testing datasets for passenger flow on the Hangzhou highway, because scaling prevents attributes in large numeric ranges from dominating and avoids numerical difficulties during calculation. The data are scaled to (0, 1). The normalized results are shown in Table 2.

4.4. Regression Prediction of LS-SVM

One of the most important factors in building the prediction regression model with LS-SVM is the selection of the kernel function. In general, there are four main kernel types: linear, polynomial, radial basis function (RBF) and sigmoid. In this article, the RBF kernel is used as the default kernel, because it has advantages over the other kernels when prior knowledge is lacking. First, the RBF kernel maps nonlinear samples into a higher-dimensional feature space and can handle cases where the relation between the regression target and the attributes is nonlinear (Suykens et al. 2002).

[FIGURE 3 OMITTED]

Second, in terms of performance, the linear kernel is a special case of the RBF kernel (Keerthi, Lin 2003): a linear kernel with parameter $C$ performs similarly to an RBF kernel with parameters $(C, \gamma)$. Third, the number of hyperparameters influences the complexity of the regression model, and the polynomial kernel has more hyperparameters than the RBF kernel (Kim et al. 1999). Finally, the RBF kernel poses few numerical difficulties because its value lies between zero and one, whereas the polynomial kernel value may go to infinity or zero when the degree is high. The RBF kernel is therefore used for building the default prediction regression model for passenger flow on the highway.

When the RBF kernel is selected as the default function, the two parameters $(C, \gamma)$ associated with it must also be decided. The upper bound $C$ and the kernel parameter $\gamma$ play an important role in performance. Keerthi and Lin (2003) suggested a practical guideline for SVM using grid search and cross validation: cross validation prevents overfitting, while grid search avoids an exhaustive parameter search and can find good parameters in reasonable computational time. In addition, the parameters $(C, \gamma)$ are independent, so grid search can easily be parallelized.

[FIGURE 4 OMITTED]

The article uses the RBF kernel, cross validation and grid search to determine the optimal parameters of LS-SVM following a procedure of mesh generation and gradual refinement. First, we use a coarse grid (Fig. 4) and find that the best $(C, \gamma)$ is (32, 0.0039) with a cross validation rate of 98.56%. Next, we use a finer grid search (Fig. 5) and establish that the best $(C, \gamma)$ is (1, 0.125) with a cross validation rate of 99.5971%. After the best $(C, \gamma)$ is found, the whole training dataset is retrained to generate the final regression result of passenger flow on the highway for the period 2006-2008 (Table 3).
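A grid search driven by leave-one-out cross validation can be sketched as follows. This is our own minimal version, not the authors' code: it scores each parameter pair with a kernel ridge model (an LS-SVM without the bias term), and the grids and data are illustrative:

```python
import numpy as np
from itertools import product

def rbf(X1, X2, sigma):
    """RBF kernel matrix."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / sigma ** 2)

def loo_rmse(X, y, gamma, sigma):
    """Leave-one-out RMSE of a kernel ridge model, used as the CV score."""
    errs = []
    for i in range(len(y)):
        tr = np.arange(len(y)) != i               # hold out sample i
        K = rbf(X[tr], X[tr], sigma) + np.eye(tr.sum()) / gamma
        alpha = np.linalg.solve(K, y[tr])
        pred = rbf(X[~tr], X[tr], sigma) @ alpha
        errs.append((pred[0] - y[i]) ** 2)
    return np.sqrt(np.mean(errs))

def grid_search(X, y, gamma_grid, sigma_grid):
    """Return the (gamma, sigma) pair with the lowest leave-one-out RMSE."""
    return min(product(gamma_grid, sigma_grid),
               key=lambda p: loo_rmse(X, y, p[0], p[1]))
```

In practice one would run this once on a coarse logarithmic grid and again on a finer grid around the first winner, mirroring the coarse-then-fine procedure described above.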

4.5. Results

This article uses the LS-SVM regression method to predict passenger flow on the highway for the period 2006-2008 and measures the prediction results using four performance indexes (Table 4). The obtained data show that the LS-SVM model has high prediction accuracy and fitting degree (Fig. 6). In addition, by adjusting the constants of the LS-SVM model (Table 5), the article reduces the error and improves the smoothness of the regression function. We therefore consider it feasible and effective to predict passenger flow on the highway using the LS-SVM regression method.

[FIGURE 5 OMITTED]

[FIGURE 6 OMITTED]

5. Conclusions

1. LS-SVM, based on statistical learning theory, is a machine learning method with a strict theoretical basis, able to handle problems of small sample size, high dimensionality, nonlinearity and local minima. Owing to these merits, the article constructs a nonlinear regression model based on LS-SVM and uses an example of nine samples to make a prediction. The result shows that the prediction method based on LS-SVM is feasible and accurate and provides a new method for predicting passenger flow on the highway.

2. Passenger flow on the highway is an important index that reflects passenger capacity on the highway and is essential for grasping the development trend, characteristics and rules of passenger flow. However, current prediction methods for highway passenger flow are still based on the total amount, and the prediction of total passenger flow is only one aspect of prediction. While discussing passenger flow, we should also consider spatial position and passenger distribution, because these are important factors in formulating a development plan for the highway and in arranging stations.

3. A highway traffic system is a complex one. Many factors influence the prediction of passenger flow on the highway, including economic, non-economic, quantitative and qualitative factors. The method discussed in the article is mainly data-driven, so some limitations can be observed. Adding qualitative analyses could compensate for the shortcomings of purely quantitative methods.

doi: 10.3846/16484142.2011.593121

Acknowledgement

Support for this work was provided by the National Natural Science Foundation of China (60979016), the Doctoral Research Foundation of Education Department of China (20092302110060) and the Foundation of New Century Educational Talents Plan of Chinese Education Ministry, China (NCET-0171).

References

Cao, L. J.; Tay, F. E. H. 2003. Support vector machine with adaptive parameters in financial time series forecasting, IEEE Transactions on Neural Networks 14(6): 1506-1518. doi:10.1109/TNN.2003.820556

Burges, C. J. C. 1998. A tutorial on support vector machines for pattern recognition, Data Mining and Knowledge Discovery 2(2): 121-167. doi:10.1023/A:1009715923555

Chatfield, C. 2003. The Analysis of Time Series: An Introduction. 6th edition. Chapman and Hall/CRC. 352 p.

Cigizoglu, H. K. 2005. Generalized regression neural network in monthly flow forecasting, Civil Engineering and Environmental Systems 22(2): 71-81. doi:10.1080/10286600500126256

Vlahogianni, E. I.; Karlaftis, M. G.; Golias, J. C. 2005. Optimized and meta-optimized neural networks for short-term traffic flow prediction: a genetic approach, Transportation Research Part C: Emerging Technologies 13(3): 211-234. doi:10.1016/j.trc.2005.04.007

Kim, H. S.; Sohn, S. Y. 2009. Support vector machines for default prediction of SMEs based on technology credit, European Journal of Operational Research 201(3): 838-846. doi:10.1016/j.ejor.2009.03.036

Junevicius, R.; Bogdevicius, M. 2007. Determination of traffic flow parameters in different traffic flow interaction cases, Transport 22(3): 236-239.

Junevicius, R.; Bogdevicius, M. 2009. Mathematical modelling of network traffic flow, Transport 24(4): 333-338. doi:10.3846/1648-4142.2009.24.333-338

Kim, H. S.; Eykholt, R.; Salas, J. D. 1999. Nonlinear dynamics, delay times, and embedding windows, Physica D: Nonlinear Phenomena 127(1-2): 48-60. doi:10.1016/S0167-2789(98)00240-1

Lin, K.-H.; Liu, B.-D. 2005. A gray system modeling approach to the prediction of calibration intervals, IEEE Transactions on Instrumentation and Measurement 54(1): 297-304. doi:10.1109/TIM.2004.840234

Ma, F.-H.; Yang, W.; Yang, F.; Yu, X.-X. 2004. Forecast water consumption with improved BP neural network, Journal of Liaoning Technical University 23(2): 191-193.

Peng, Z.-R.; Meng, J.-J.; Zhu, L.; Jiang, Z.-Y. 2007. SVM-based prediction of railway passenger volume, Journal of Liaoning Technical University 26(2): 269-272.

Keerthi, S. S.; Lin, C.-J. 2003. Asymptotic behaviors of support vector machines with Gaussian kernel, Neural Computation 15(7): 1667-1689. doi:10.1162/089976603321891855

Suykens, J. A. K.; Vandewalle, J. 1999. Least squares support vector machine classifiers, Neural Processing Letters 9(3): 293-300. doi:10.1023/A:1018628609742

Suykens, J. A. K.; Vandewalle, J. 2000. Recurrent least squares support vector machines, IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications 47(7): 1109-1114. doi:10.1109/81.855471

Suykens, J. A. K.; Van Gestel, T.; De Brabanter, J.; De Moor, B.; Vandewalle, J. 2002. Least Squares Support Vector Machines. World Scientific Pub. Co., Singapore. 308 p.

Vapnik, V. N. 1998. Statistical Learning Theory. 1st edition. Wiley-Interscience. 736 p.

Vapnik, V. N. 1999. An overview of statistical learning theory, IEEE Transactions on Neural Networks 10(5): 988-999. doi:10.1109/72.788640

Xie, N.-M.; Liu, S.-F. 2005. Discrete GM(1,1) and mechanism of grey forecasting model, Systems Engineering Theory & Practice 25(1): 93-98.

Zhang, F.-L.; Shi, F. 2005. Stochastic grey system model for forecasting passenger and freight railway volume, Journal of Central South University (Science And Technology) 36(1): 158-162.

Yanrong Hu (1), Chong Wu (2), Hongjiu Liu (3)

(1,2) School of Management, Harbin Institute of Technology, 150001 Harbin, China

(1,3) School of Management, Changshu Institute of Technology, 215500 Changshu, China

E-mails: (1) rosehyr2004@yahoo.com.cn; (2) wuchong@hit.edu.cn (corresponding author); (3) lionlhj@163.com

Received 10 August 2010; accepted 20 April 2011

Table 1. Original data and influencing factors regarding passenger flow on the Hangzhou highway (source: Hangzhou Statistical Yearbook (1998-2008), http://www.hzstats.gov.cn)

| Year | Agricultural population (10000 persons) | Non-agricultural population (10000 persons) | Primary industry (billion Yuan) | Secondary industry (billion Yuan) | Tertiary industry (billion Yuan) | Per capita disposable income (Yuan) | Total retail sales of social consumer goods (billion Yuan) | Civilian vehicle ownership | Highway passenger flow (10000 persons) |
|---|---|---|---|---|---|---|---|---|---|
| 2000 | 394.59 | 226.99 | 103.96 | 709.32 | 569.28 | 9668 | 514.68 | 172343 | 17102 |
| 2001 | 391.37 | 237.77 | 111.46 | 793.58 | 662.98 | 10896 | 579.01 | 204223 | 18707 |
| 2002 | 384.79 | 252.02 | 114.64 | 901.82 | 765.37 | 11778 | 660.65 | 250018 | 19213 |
| 2003 | 379.11 | 263.67 | 126.59 | 1075.78 | 897.4 | 12898 | 742.58 | 329436 | 19510 |
| 2004 | 369.1 | 282.58 | 132.23 | 1318.23 | 1092.72 | 14565 | 855.45 | 414355 | 20372 |
| 2005 | 362.91 | 297.54 | 148.21 | 1496.94 | 1297.5 | 16601 | 975.43 | 495876 | 21431 |
| 2006 | 356.53 | 309.78 | 154.86 | 1734.58 | 1552.07 | 19027 | 1112.37 | 596385 | 22961 |
| 2007 | 348.6 | 323.75 | 163.47 | 2056.93 | 1879.77 | 21689 | 1296.31 | 712398 | 24836 |
| 2008 | 336.88 | 340.76 | 178.64 | 2389.38 | 2213.14 | 24104 | 1558.38 | 822677 | 25630 |

Table 2. Normalized data and influencing factors regarding passenger flow on the Hangzhou highway

| Year | Agricultural population | Non-agricultural population | Primary industry | Secondary industry | Tertiary industry | Per capita disposable income | Total retail sales | Civilian vehicle ownership | Highway passenger flow (10000 persons) |
|---|---|---|---|---|---|---|---|---|---|
| 2000 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 17102 |
| 2001 | 0.944204 | 0.094753 | 0.100428 | 0.050153 | 0.057 | 0.085065 | 0.061636 | 0.049021 | 18707 |
| 2002 | 0.830185 | 0.220005 | 0.14301 | 0.114579 | 0.119286 | 0.146162 | 0.139858 | 0.119439 | 19213 |
| 2003 | 0.731762 | 0.322405 | 0.303026 | 0.218123 | 0.199603 | 0.223746 | 0.218358 | 0.241557 | 19510 |
| 2004 | 0.558309 | 0.488617 | 0.378548 | 0.362433 | 0.318421 | 0.339221 | 0.326502 | 0.372135 | 20372 |
| 2005 | 0.451048 | 0.620111 | 0.592528 | 0.468805 | 0.442994 | 0.480258 | 0.441458 | 0.497487 | 21431 |
| 2006 | 0.340496 | 0.727696 | 0.681575 | 0.610252 | 0.597855 | 0.64831 | 0.572665 | 0.652037 | 22961 |
| 2007 | 0.203084 | 0.850488 | 0.796867 | 0.80212 | 0.797203 | 0.83271 | 0.748903 | 0.830427 | 24836 |
| 2008 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 25630 |

Table 3. Prediction results and error of passenger flow on the Hangzhou highway

| Time | 2006 | 2007 | 2008 |
|---|---|---|---|
| Actual value | 22961 | 24836 | 25630 |
| Prediction value | 22955 | 24321 | 25559 |
| Prediction error | 0.00026 | 0.02073 | 0.0028 |

Table 4. The performance indexes of the prediction model

| Index | Value |
|---|---|
| Maximum difference | 515 |
| $E_{MA}$ | 0.00793 |
| $E_{MSE}$ | 0.0008 |
| Prediction accuracy | 99.5971% |

Table 5. Model parameters and optimized contrast of LS-SVM

| | MSE | C | $\gamma$ | Squared correlation coefficient |
|---|---|---|---|---|
| Initial results | 0.0076 | 32 | 0.0039 | 0.98555 |
| Optimized results | 0.0082 | 1 | 0.1250 | 0.995971 |

Publication: Transport, 1 June 2011.