
New MCEQLS AHP method for evaluating quality of learning scenarios.

Introduction

The aim of this paper is to investigate, propose, and demonstrate practical applications of the MCEQLS (Multiple Criteria Evaluation of Quality of Learning Software) AHP (Analytic Hierarchy Process) method for the expert evaluation of the quality of learning scenarios (LS). Special attention is paid to LS suitability for particular learner groups.

The educational sector needs high-quality software to provide students with high-quality education services. Currently, it particularly needs software such as LSs. LSs are core parts of any education institution's e-learning system and environment, and therefore the overall quality of learning services largely depends on the quality of this kind of software. The main players in the educational sector are the educational institutions themselves (schools, universities, etc.), educational authorities (ministries of education, regional and other agencies, etc.) and policy makers, as well as providers of educational software (content publishers, software vendors, etc.). Educational institutions are interested in using high-quality learning software; therefore, they need proper approaches, models, and methods for choosing high-quality educational software on the market or finding suitable free software. Publishers and vendors are interested in offering such educational software to the institutions, and policy makers need to be aware of these models and methods in order to formulate education policy (e.g. when implementing public tenders to purchase software for the educational sector). All these stakeholders need to know which LSs are of high quality and which are not.

On the other hand, students prefer different learning styles, and educational institutions have to accommodate students' preferences by offering them suitable LSs.

Therefore, this problem is of high practical relevance for the educational sector, which needs clear and easy-to-use models and methods to evaluate the quality of LSs on the market, both proprietary and free. These quality evaluation models and methods have to meet all the aforementioned stakeholders' needs. The problem of evaluating LS quality is therefore high on the agenda of international research and education systems.

The rest of the paper is organised as follows: an overview of the notions and methods used is presented in Section 1; the new MCEQLS AHP method is presented in Section 2; MCEQLS AHP application research results are presented in Section 3; and conclusions are presented in the final section. Section 3 is divided into two parts: the creation of the LS quality evaluation model (criteria tree), and the practical application of the MCEQLS AHP method for evaluating LSs, which in turn covers (a) the use of the novel method of consecutive triple application of AHP to establish the weights of quality criteria, (b) the application of triangular and trapezoidal fuzzy numbers to establish the values of quality criteria, and (c) a practical example of evaluating several real-life LS alternatives used in the iTEC project (iTEC 2011).

1. Overview of the used notions and methods

1.1. What is a learning scenario?

The term "Learning Scenario" is commonly used in European e-Learning community (e.g. in iTEC project) as a synonym of term "Unit of Learning" (UoL). LS/UoL is referred here as an aggregation of learning activities (LAs) that take place in particular virtual learning environment (VLE) using particular learning objects (LOs). From Informatics Engineering point of view, LS/UoL is a learning software package aimed to implement particular teaching or learning method(s) and usually consisting of LOs, LAs and VLE. LO is referred here as "any digital resource that can be reused to support learning" (Wiley 2000). According to IMS Learning Design (IMS LD 2003) specification, LAs describe a role they have to undertake within a specified environment composed of LOs and services. Activities take place in a so-called "environment", which is a structured collection of LOs, services, and sub-environments. The term "VLE" is referred here as "a single piece of software, accessed via standard Web browser, which provides an integrated online learning environment" (Dagiene, Kurilovas 2007).

This LS/UoL notion is based on the work of Koper and Tattersall (2005) and on IMS LD (2003). Quoting Koper and Tattersall (2005), "a UoL refers to a complete, self-contained unit of education or training, such as a course, a module, a lesson, etc.". The IMS LD (2003) specification's conceptual vocabulary clarifies that a UoL is an abstract term used to refer to any delimited piece of education or training. A UoL represents more than just a collection of ordered LOs to learn: it includes a variety of prescribed activities (problem-solving activities, search activities, discussion activities, peer assessment activities, etc.), assessments, services, and support facilities provided by teachers, trainers, and other staff members. A learning design, an integral part of any UoL, is a description of a method enabling learners to attain certain learning objectives by performing certain LAs in a certain order in the context of a certain learning environment (VLE).

1.2. Learning scenarios personalisation strategies

The paper aims to evaluate the quality of LSs, paying special attention to their suitability for particular learner groups (styles). Probably the most popular learning style classification among educational researchers is the one developed by Honey and Mumford (1992), based upon the work of Kolb (1984). They identified four distinct learning styles or preferences: Activist, Theorist, Pragmatist, and Reflector:

--Activist: Activists are people who learn by doing. They have an open-minded approach to learning, involving themselves fully and without bias in new experiences. Their preferred activities are brainstorming, problem solving, group discussion, puzzles, competitions, and role-play.

--Reflector: These people learn by observing and thinking about what happened.

--Pragmatist: These people need to be able to see how to put the learning into practice in the real world.

--Theorist: These learners like to understand the theory behind the actions.

In this paper, the authors decided to analyse and present research results on the suitability of LSs to a single learner style, namely, activist learners. The main reason for this is that the iTEC scenarios selected to demonstrate the practical application of the novel method are (in the authors' opinion) mostly suitable for the activist learning style (see sub-section 3.2 below).

1.3. What is multiple criteria decision analysis in e-learning science?

According to Oliver (2000), evaluation can be characterised as "the process by which people make judgements about value and worth". In the context of learning technology this judgement process is complex and often controversial: although the notion of evaluation is rooted in a relatively simple concept, the process of judging the value of learning technology is challenging. Quality evaluation is defined as "the systematic examination of the extent to which an entity (part, product, service or organisation) is capable of meeting specified requirements" (ISO/IEC 14598-1:1999). Expert evaluation refers here to a multiple criteria evaluation of learning software aimed at selecting the best alternative based on score-ranking results (Kurilovas, Dagiene 2010a, b, 2009a, b).

A number of researchers (Shee, Wang 2008; Sun et al. 2008; iTEC 2011; iCOPER 2011) have already proposed approaches for evaluating the quality of learning software, but the main problems of some of these approaches are low cost effectiveness and a high level of subjectivity in expert evaluation. There are also a number of problems concerning the comprehensiveness and overall construction of quality evaluation models. Some of them (iTEC 2011; iCOPER 2011) do not use appropriate scientific approaches, principles, or methods to establish proper evaluation models and methods.

According to Dzemyda and Saltenis (1994), if the set of decision alternatives is assumed to be predefined, fixed, and finite, then the decision problem is to choose the optimal alternative (of the learning software) or, perhaps, to rank them. Usually, however, the experts (decision makers) have to deal with the problem of an optimal decision in a multiple criteria situation where the objectives are often conflicting. Evaluation of the quality of LSs is a typical case where the criteria conflict, i.e. the learning software could score highly against a number of criteria and poorly against others, and vice versa. On the other hand, the evaluation criteria should reflect the opinions of all the aforementioned stakeholders. In this case, according to Dzemyda and Saltenis (1994), "an optimal decision is the one that maximises the decision maker's utility".

Quoting Zavadskas and Turskis (2010), "there is a wide range of multiple criteria decision making problem solution techniques, varying in complexity and possible solutions. Each method has its own strength, weaknesses and possibilities to be applied". But, according to Zavadskas and Turskis (2010), there are still no rules determining the application of multi-criteria evaluation methods and interpretation of the results obtained.

The practical problem analysed in the paper is how to choose the best LS alternative on the market or how to create it. Here "the best" alternative means the alternative of the highest quality. Therefore, the main scientific problem analysed in the paper is the creation of proper models and methods for the expert evaluation of LS quality. The problem is how to elaborate reasonably objective, exact, and simple-to-use approaches, models, and methods for choosing high-quality LS alternatives.

According to Ardito et al. (2006), despite recent advances in electronic technologies in e-learning, a consolidated evaluation methodology for e-learning applications is not available. The evaluation of educational software must consider its usability and, more generally, its accessibility, as well as its didactic effectiveness. According to Chua and Dyson (2004), despite the widespread use of e-learning systems and the considerable investment in purchasing or developing them, there is no consensus on a standard framework for evaluating system quality.

Many multidimensional approaches have been proposed as extensions of the classical ones. The first was the so-called Multiple Criteria Decision Making (MCDM), developed by the so-called American School. More recently, the European School has created a new type of approach to these problems, called Multiple Criteria Decision Aid. Many real-life applications have successfully validated the feasibility of this approach. MCDM deals with different classes of decision problems (choice, classification, sorting, ranking), explicitly taking into consideration several points of view (multiple attributes or criteria, i.e. attributes with an ordered domain), in order to support the experts (decision makers) in finding a consistent solution to the problem at hand. MCDM methods are used in many areas of human activity. MCDM is one of the most widely used decision methodologies in science, business, and government; it is based on the assumption of a complex world and can help to improve the quality of decisions by making the decision-making process more explicit, rational, and efficient (MCDM 2011).

Each alternative in a multiple criteria decision-making problem can be described by a set of criteria that can be qualitative or quantitative. According to Zavadskas and Turskis (2010), criteria usually have different units of measurement and different optimisation directions. Real-world decision-making problems are usually too complex and unstructured to be considered through the examination of a single criterion or point of view that would lead to an optimum decision. According to Turskis et al. (2009), all new ideas and possible variants of decisions must be compared according to many criteria. The problem of a decision maker consists of "evaluating a finite set of alternatives in order to find the best one, to rank them from the best to the worst, to group them into predefined homogeneous classes, or to describe how well each alternative meets all the criteria simultaneously" (Zavadskas, Turskis 2010). There are many methods for determining the ranking of a set of alternatives in terms of a set of decision criteria. In a multiple criteria approach, the experts seek to build several criteria using several points of view.

There are a number of well-known methods of multiple criteria optimisation and determination of the priority of the analysed alternatives. According to Zavadskas et al. (2008), there is a wide range of methods based on multi-criteria utility theory, e.g. SAW, Simple Additive Weighting (Ginevicius et al. 2008; Sivilevicius et al. 2008); MOORA, Multi-Objective Optimization on the Basis of Ratio Analysis (Brauers, Zavadskas 2006; Brauers et al. 2008; Kalibatas, Turskis 2008); TOPSIS, Technique for Order Preference by Similarity to Ideal Solution (Hwang, Yoon 1981; Zavadskas et al. 2006); VIKOR, a compromise ranking method (Zavadskas, Antucheviciene 2007); COPRAS, Complex Proportional Assessment (Zavadskas et al. 2007); game theory methods (Peldschus, Zavadskas 2005; Antucheviciene et al. 2006); and other approaches (Turskis 2008). But, according to Zavadskas and Turskis (2010), it is hardly possible to evaluate the effect of the various methods on a problem's solution.

One of the main problems in this task is how to establish a 'proper' (i.e. as objective as possible) system of LS quality criteria that reflects objective scientific principles for constructing a quality evaluation model (criteria tree). These issues have been analysed in research works on multiple criteria decision analysis (MCDA). A number of MCDA-based principles for identifying quality evaluation criteria were elaborated by Belton and Stewart (2002).

Another problem is the application of suitable MCDA methods in the numerical evaluation of LS quality. The main problems of the existing approaches in the area are a high level of expert evaluation subjectivity as well as insufficient exactness, clarity, and usability.

Therefore, in their previous works, the authors analysed several scientific methods, requirements, and principles to minimise these problems and proposed the MCEQLS (Multiple Criteria Evaluation of Quality of Learning Software) approach, which is, in their opinion, quite easy to use in real-life situations. The MCEQLS approach was presented in Kurilovas and Dagiene (2010a) and refined in Kurilovas et al. (2011a). In those works, the authors showed that the MCEQLS approach could significantly improve the quality of the expert evaluation of learning software and noticeably reduce the level of expert evaluation subjectivity.

2. A new MCEQLS AHP method in multi-criteria decision-making in e-learning science

2.1. MCEQLS approach

The refined MCEQLS approach consists of the complex application of a number of scientific principles, methods, and requirements: (1) the MCDA principles for identifying quality evaluation criteria; (2) the technological quality criteria classification principle; (3) fuzzy group decision-making theory; and (4) the experts' additive utility function using normalised weights of quality criteria (Kurilovas et al. 2011a). A short description of these stages of the MCEQLS approach is presented below.

(1) According to Zavadskas and Turskis (2008), each alternative in a multi-criteria decision-making problem can be described by a set of criteria. Criteria can be qualitative or quantitative. According to Belton and Stewart (2002), in identifying criteria for decision analysis, the following considerations (principles) are relevant to all MCDA approaches: (1) value relevance; (2) understandability; (3) measurability; (4) non-redundancy; (5) judgmental independence; (6) balancing completeness and conciseness; (7) operationality; and (8) simplicity versus complexity.

(2) From a technological point of view, learning software quality criteria can be divided into 'internal quality' and 'quality in use' criteria. According to Gasperovic and Caplinskas (2006), 'internal quality' is a descriptive characteristic that describes the quality of software independently of any particular context of its use, while 'quality in use' is an evaluative characteristic of software obtained by making a judgment based on the criteria that determine the worthiness of software for a particular project. According to this technological quality criteria classification principle (Kurilovas et al. 2011a), any technological quality evaluation model (set of criteria) for learning software should give the experts (decision makers) clear guidance on who (i.e. what kind of experts) should analyse which software quality criteria in order to select the best software alternative suitable for their needs. Software engineering experts should analyse 'internal quality' criteria based on scientific informatics engineering knowledge, while programmers and users (e.g. teachers) should analyse 'quality in use' criteria based on user feedback, design and usability issues, etc.

(3) According to Ounaies et al. (2009), the widely used measurement criteria of decision attribute quality are mainly qualitative and subjective. In this context, decisions are often expressed in natural language, and evaluators are unable to assign exact numerical values to the different criteria. Assessment can often be performed with linguistic variables such as 'bad', 'poor', 'fair', 'good', and 'excellent'. Several methods, such as the qualitative weight and sum (QWS) approach presented by Graf and List (2005), apply the symbols E, *, #, +, |, and 0 to express the values of the evaluated quality. These linguistic variables and symbols allow reasoning with imprecise information, and they are commonly called fuzzy values. Integrating these different judgments to obtain a final evaluation is not straightforward. In order to solve this problem, Ounaies et al. (2009) suggest using fuzzy group decision-making theory to obtain final assessment measures. According to their proposal, linguistic variable values should be mapped first into fuzzy numbers and then into non-fuzzy values (Table 1).
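To make this mapping concrete, the following sketch (in Python; the paper itself prescribes no implementation) maps the linguistic variables to triangular fuzzy numbers and defuzzifies them by the centroid rule. The triangular parameters are illustrative assumptions chosen so that their centroids reproduce the triangle column of Table 1; the paper reports only the resulting non-fuzzy values.

# Sketch: mapping linguistic ratings to fuzzy numbers and defuzzifying them.
# The triangular parameters (low, mode, high) are illustrative assumptions
# chosen so that centroid defuzzification reproduces the triangle column of
# Table 1; for the trapezoidal method, Table 1's crisp values are used directly.

TRIANGLE = {
    "excellent": (0.70, 0.85, 1.00),
    "good": (0.50, 0.675, 0.85),
    "fair": (0.325, 0.50, 0.675),
    "poor": (0.15, 0.325, 0.50),
    "bad": (0.00, 0.15, 0.30),
}

TRAPEZOIDAL_CRISP = {"excellent": 1.0, "good": 0.8, "fair": 0.5, "poor": 0.2, "bad": 0.0}

def defuzzify_triangle(tfn):
    # Centroid of a triangular fuzzy number (a, b, c) is (a + b + c) / 3.
    a, b, c = tfn
    return (a + b + c) / 3.0

for term in TRIANGLE:
    print(term, round(defuzzify_triangle(TRIANGLE[term]), 3), TRAPEZOIDAL_CRISP[term])

Running the sketch reproduces the non-fuzzy values of Table 1 (e.g. 'good' yields 0.675 by the triangle method and 0.800 by the trapezoidal method).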

(4) The main problem in the expert evaluation of learning software is the application of suitable models and methods. There are a number of potentially suitable methods for evaluating the quality of learning software such as LSs that are well known in optimisation theory, e.g. vector optimisation methods. One of them is the multiple criteria evaluation method referred to here as the experts' additive utility function, represented by formula (1) below, which includes the learning software evaluation criteria, their ratings (values), and weights. This method is well known in optimisation theory and is named the "scalarisation method". According to this method, a possible decision is to transform a multiple criteria task into a one-criterion task by adding all the quality criteria multiplied by their weights. This is valid from the optimisation point of view, and a special theorem exists for this case (Kurilovas, Serikoviene 2010). Therefore, we have the experts' additive utility function:

$$f(X_j) = \sum_{i=1}^{m} a_i f_i(X_j). \qquad (1)$$

Here $f_i(X_j)$ is the rating (value) of criterion $i = 1, 2, \dots, m$ for each of the examined learning software alternatives $X_j$, and $a_i$ are the weights of the evaluation criteria.

The weight $a_i$ of an evaluation criterion reflects the experts' opinion on the criterion's importance level in comparison with the other criteria for particular learning styles (e.g. activist learners). The following normalisation requirement applies to the weights of the evaluation criteria in formula (1):

$$\sum_{i=1}^{m} a_i = 1, \qquad a_i > 0. \qquad (2)$$

According to Zavadskas and Turskis (2010), the normalisation aims at obtaining comparable scales of criteria values. The greater the value of the utility function (1), the better the learning software package meets the quality requirements in comparison with the ideal (100%) quality (Kurilovas 2009). The biggest value of function (1) is the best, and the smallest is the worst (Zavadskas, Turskis 2010).
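A minimal sketch of how formula (1), together with the normalisation requirement (2), could be computed; the criterion weights and values below are hypothetical:

# Minimal sketch of the experts' additive utility function (1) under the
# normalisation requirement (2). All weights and criterion values here are
# hypothetical; real values come from the AHP weighting and the fuzzy numbers
# methods described in this paper.

def additive_utility(weights, ratings):
    # Check requirement (2): weights are positive and sum to 1.
    assert all(a > 0 for a in weights) and abs(sum(weights) - 1.0) < 1e-9
    # Formula (1): f(X_j) = sum over i of a_i * f_i(X_j).
    return sum(a * f for a, f in zip(weights, ratings))

weights = [0.40, 0.35, 0.25]          # a_i, normalised per (2)
ls1_values = [0.800, 0.675, 0.500]    # f_i(LS1), crisp values after defuzzification
ls2_values = [0.675, 0.675, 0.800]    # f_i(LS2)

print(additive_utility(weights, ls1_values))  # higher = closer to the ideal (100%) quality
print(additive_utility(weights, ls2_values))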

2.2. A new method of consecutive triple application of AHP to establish the weights of quality criteria

According to Kurilovas et al. (2011b), the weight of an evaluation criterion reflects the experts' opinion on the criterion's importance level in comparison with the other criteria for particular learning styles.

As mentioned above, in this paper the authors propose a novel method of consecutive triple application of AHP to establish proper weights of LS quality evaluation criteria in the case when there are several expert evaluators.

According to Saaty (1990), AHP is a useful method for solving complex decision-making problems involving subjective judgment. In AHP, the multi-attribute weight measurement is calculated via pair-wise comparison of the relative importance of two factors (Lin 2010). The design of the questionnaire incorporates pair-wise comparisons of decision elements within the hierarchical framework. Each evaluator is asked to express the relative importance of two criteria at the same level on a nine-point rating scale. After that, the scores of the pair-wise comparisons are collected, and pair-wise comparison matrices are formed for each of the K evaluators; a sketch of how such matrices could be aggregated follows below.
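The paper does not specify how the K evaluators' matrices are combined into one; a standard choice in group AHP, sketched below under that assumption, is the element-wise geometric mean, which preserves the reciprocal property of comparison matrices. The example matrices are hypothetical nine-point-scale judgments (see Table 2 below).

import numpy as np

# Sketch: aggregating the pair-wise comparison matrices of K evaluators into a
# single group matrix via the element-wise geometric mean (a standard group-AHP
# choice; the paper does not state its aggregation rule). The geometric mean
# preserves the reciprocal property a_ji = 1 / a_ij of comparison matrices.

def aggregate_judgements(matrices):
    stacked = np.array(matrices, dtype=float)        # shape (K, n, n)
    return stacked.prod(axis=0) ** (1.0 / len(matrices))

# Two hypothetical evaluators comparing three criteria groups (LOs, LAs, VLE):
m1 = [[1, 1, 3], [1, 1, 3], [1/3, 1/3, 1]]
m2 = [[1, 1, 2], [1, 1, 2], [1/2, 1/2, 1]]
group_matrix = aggregate_judgements([m1, m2])        # still reciprocal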

According to Saaty (2008), the fundamental scale of absolute numbers is as follows:
Table 2. Pair-wise comparison scale for AHP preferences

Numerical rating   Verbal judgements of preferences

9                  Extremely preferred
8                  Very strongly to extremely
7                  Very strongly preferred
6                  Strongly to very strongly
5                  Strongly preferred
4                  Moderately to strongly
3                  Moderately preferred
2                  Equally to moderately
1                  Equally preferred


After that, a set of pair-wise comparison matrices (size n x n) is constructed for each of the lower levels, with one matrix for each element in the level immediately above, using the relative scale measurement shown in Table 2. The pair-wise comparisons are done in terms of which element dominates the other. There are n(n-1)/2 judgments required to develop the set of matrices in this step. Reciprocals are automatically assigned in each pair-wise comparison. Then hierarchical synthesis is used to weight the eigenvectors by the weights of the criteria, and the sum is taken over all weighted eigenvector entries corresponding to those in the next lower level of the hierarchy. In order to check the correctness of the calculations, having made all the pair-wise comparisons, the principal eigenvalue $\lambda_{\max}$ is calculated.
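The sketch below illustrates these standard AHP computations: the priority (weight) vector as the principal eigenvector of a pair-wise comparison matrix, and the consistency check via $\lambda_{\max}$. The consistency index and ratio follow standard AHP practice (Saaty 1990) rather than anything specific to this paper, and the example matrix is hypothetical.

import numpy as np

# Sketch of the standard AHP computations: the priority (weight) vector is the
# principal eigenvector of the pair-wise comparison matrix, and lambda_max
# gives the consistency index CI = (lambda_max - n) / (n - 1) and consistency
# ratio CR = CI / RI, with Saaty's random index RI.

RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

def ahp_priorities(matrix):
    A = np.array(matrix, dtype=float)
    n = A.shape[0]
    eigenvalues, eigenvectors = np.linalg.eig(A)
    k = int(np.argmax(eigenvalues.real))      # index of the principal eigenvalue
    lambda_max = eigenvalues[k].real
    w = np.abs(eigenvectors[:, k].real)
    w = w / w.sum()                           # normalise: weights sum to 1, as in (2)
    ci = (lambda_max - n) / (n - 1)
    cr = ci / RI[n] if RI[n] > 0 else 0.0
    return w, lambda_max, cr

# Hypothetical comparison of the three criteria groups (LOs, LAs, VLE):
w, lambda_max, cr = ahp_priorities([[1, 1, 2], [1, 1, 2], [1/2, 1/2, 1]])
print(w, lambda_max, cr)                      # CR below 0.10 is conventionally acceptable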

The proposed method of consecutive triple application of AHP is based on the LS quality model presented in Fig. 1 below, on AHP, and on the MCEQLS technological quality criteria classification principle (see sub-section 2.1). The proposed method consists of applying AHP in three consecutive stages as follows:

1. Establishment of the comparative weights of the three groups of LS quality criteria (i.e. LOs, LAs, and VLE) and the weights $a_i$ of all quality criteria.

2. Establishment of the comparative weights of the 'internal quality' and 'quality in use' criteria groups from the activist learner point of view. The final 'internal quality' criteria weights are established in this stage.

3. Establishment of the final weights of the quality criteria from the activist learner point of view by applying AHP once again, only to the 'quality in use' criteria.

After establishing the quality criteria weights by the method of consecutive triple application of AHP and establishing the criteria values using the different fuzzy numbers methods, formula (1) is used to calculate the values of the experts' additive utility function for each of the explored LS alternatives.
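The paper describes the three stages but not their explicit arithmetic. One plausible reading, sketched below purely as an assumption, chains the stage weights multiplicatively down the criteria tree: a 'quality in use' criterion's final weight is its group weight (stage 1) times the group's 'quality in use' share (stage 2) times its within-group AHP weight (stage 3). The stage-1 and stage-2 shares are those reported in sub-section 3.2 below; the criterion names and stage-3 weights are hypothetical.

# Sketch (an assumption, not the paper's stated arithmetic): chaining the three
# AHP stages multiplicatively down the criteria tree to obtain final weights.

group_w = {"LOs": 0.397, "LAs": 0.397, "VLE": 0.206}        # stage 1 (sub-section 3.2)
in_use_share = {"LOs": 0.694, "LAs": 0.722, "VLE": 0.611}   # stage 2 (sub-section 3.2)

# Stage 3: hypothetical within-group AHP weights of the LA 'quality in use' criteria:
la_in_use_w = {"engagement": 0.5, "collaboration": 0.3, "feedback": 0.2}

final_w = {c: group_w["LAs"] * in_use_share["LAs"] * w for c, w in la_in_use_w.items()}
print(final_w)   # e.g. engagement -> 0.397 * 0.722 * 0.5, approx. 0.143

The final weights obtained this way would then enter formula (1) as the $a_i$, together with the defuzzified criterion values.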

3. Example of application of MCEQLS AHP method

3.1. Learning scenario quality model

The LS quality model is presented in Fig. 1 below. The model consists of three groups of quality criteria (i.e. the components of an LS), namely LOs, LAs, and VLE. This LS structure is based on the IMS LD (2003) specification and on the work of Koper and Tattersall (2005) (see sub-section 1.1).

The selection of criteria is based on literature analysis, the MCDA criteria identification principles (see MCEQLS stage (1) in sub-section 2.1), and the technological criteria classification principle (see MCEQLS stage (2) in sub-section 2.1). Furthermore, the sets portrait method was used to analyse the correspondence between software quality characteristics (see the international software quality standard ISO/IEC 9126-1:2001(E)) and LS quality criteria, on the one hand, and between activist learner characteristics and LS 'quality in use' criteria, on the other. The LO and VLE quality criteria systems used in this model were presented earlier in Kurilovas et al. (2011a, b) and Kurilovas and Dagiene (2009b). The LA quality criteria system was created specially for the present research. Several LA quality criteria were proposed by iTEC working groups, and a number of scientific papers were additionally analysed by the authors (Shee, Wang 2008; Sun et al. 2008; iCOPER 2011) to propose LA quality criteria for the present model.

[FIGURE 1 OMITTED]

Finally, the Belton and Stewart (2002) MCDA principles were applied to form the system of criteria. The authors paid special attention to the non-redundancy, judgmental independence, balancing completeness and conciseness, and simplicity versus complexity principles to create the comprehensive criteria tree presented in Fig. 1.

In the model, the authors also apply the technological criteria classification principle to clearly separate 'internal quality' and 'quality in use' criteria. This principle has been applied consistently to all groups of LS quality criteria.

3.2. Practical example of evaluating quality of learning scenarios in iTEC project

Two detailed Cycle 1 LS alternatives proposed by iTEC experts were chosen by the authors to demonstrate the application of the MCEQLS AHP method for evaluating LS quality and suitability for the chosen activist learner profile:

--[LS.sub.1]: "A Breath of Fresh Air"

(available online at http://itec.eun.org/web/guest/scenario-library)

--[LS.sub.2]: "Online Repositories Rock" (available online at http://itec.eun.org/web/guest/scenario-library)

The scenarios are presented not in IMS LD package form but in narrative form, which is more convenient for teachers validating the scenarios. The scenarios do not propose any particular existing LOs to be used; it is up to the students to create suitable LOs themselves. There is also no explicit instruction as to which VLE should be used to implement the scenarios, but there is a common understanding that a flexible VLE such as Moodle should be suitable. The project experts also propose a number of widgets for each scenario to enrich the VLE. Therefore, for the presented example the authors assume that 'good' LOs will be created for both chosen LSs. This means that all LO quality criteria in the model presented in Fig. 1 should be valued at 0.675 according to the triangle fuzzy numbers method and 0.800 according to the trapezoidal fuzzy numbers method (see Table 1). Moodle was chosen by the authors as a proper environment to implement both LSs. Since Moodle had already been evaluated by experts in (Kurilovas, Dagiene 2010a, b; 2009a), the authors used the same values to evaluate the VLE component of the LS criteria (see Fig. 1).

Three iTEC experts, including the authors, took part in the LS quality evaluation process. Application of the aforementioned method of consecutive triple application of AHP gave the following results:

--Stage 1 (establishment of the comparative weights of the three groups of LS quality criteria and the weights $a_i$ of the quality criteria) showed that the expert evaluators prefer the LO and LA components over the VLE component (LOs: 39.7%, LAs: 39.7%, VLE: 20.6%). The weights $a_i$ for all 24 LS quality criteria were also calculated in conformity with AHP.

--Stage 2 (establishment of the comparative weights of the 'internal quality' and 'quality in use' criteria groups from the activist learner point of view) showed that the evaluators prefer the 'quality in use' criteria over the 'internal quality' criteria (69.4% vs 30.6% for LOs, 72.2% vs 27.8% for LAs, and 61.1% vs 38.9% for the VLE) when analysing the suitability of the chosen LSs for activist learners.

--Stage 3 (establishment of the final weights of the LS quality criteria from the activist learner point of view by applying AHP once again, only to the 'quality in use' criteria) re-established the weights of the 'quality in use' criteria.

The final evaluation results obtained with the MCEQLS experts' utility function (1) are as follows. In the general case (G), when the experts do not take a particular learning style into account, LS₁ scores 72.7% according to the trapezoidal method or 63.8% according to the triangle method, vs 67.6% or 60.5%, respectively, for LS₂:

$$\sum_i a_{iG} f_i(X_j) = (0.727,\ 0.676)\ \text{(trapezoidal)}, \qquad (0.638,\ 0.605)\ \text{(triangle)}, \qquad X_j \in \{LS_1, LS_2\}.$$

In the particular case (A), when the experts take the activist learning style into account, LS₁ scores 76.0% according to the trapezoidal method or 65.4% according to the triangle method, vs 68.1% or 60.8%, respectively, for LS₂:

$$\sum_i a_{iA} f_i(X_j) = (0.760,\ 0.681)\ \text{(trapezoidal)}, \qquad (0.654,\ 0.608)\ \text{(triangle)}, \qquad X_j \in \{LS_1, LS_2\}.$$

If we look separately at the learning activity (LA) component of the scenarios, we obtain the following results for the LA comparative weights within the LSs.

In the general case (G), when the experts do not take a particular learning style into account, LA₁ scores 28.4% according to the trapezoidal method or 25.1% according to the triangle method, vs 23.2% or 21.8%, respectively, for LA₂:

$$\sum_i a_{iG} f_i(X_j) = (0.284,\ 0.232)\ \text{(trapezoidal)}, \qquad (0.251,\ 0.218)\ \text{(triangle)}, \qquad X_j \in \{LA_1, LA_2\}.$$

In the particular case (A), when the experts take the activist learning style into account, LA₁ scores 29.4% according to the trapezoidal method or 25.4% according to the triangle method, vs 21.5% or 20.8%, respectively, for LA₂:

$$\sum_i a_{iA} f_i(X_j) = (0.294,\ 0.215)\ \text{(trapezoidal)}, \qquad (0.254,\ 0.208)\ \text{(triangle)}, \qquad X_j \in \{LA_1, LA_2\}.$$

In the general case, these results mean that, according to the trapezoidal fuzzy numbers method, LS₁ meets 72.7% of the ideal quality and LS₂ 67.6%, while according to the triangle fuzzy numbers method LS₁ meets 63.8% and LS₂ 60.5%, respectively.

In the particular case (for the activist learner), these results mean that, according to the trapezoidal fuzzy numbers method, LS₁ meets 76.0% of the ideal quality and LS₂ 68.1%, while according to the triangle fuzzy numbers method LS₁ meets 65.4% and LS₂ 60.8%, respectively.

The same tendency in ranking the alternatives is noticeable in the evaluation of the LAs. This is understandable, because the experts decided that the LOs and VLE for each alternative should be valued equally, and in this case the LA remains the most significant component of the LS.

The obtained results mean that LS₁ is a better alternative than LS₂, both in the general case and in the particular case of suitability for the activist learner, and both for the LA component and for the whole learning scenario.

The same tendency is noticeable in the first stage of the large-scale validation of these scenarios in the iTEC project. The selection of Lithuanian teachers and classes in September-October 2011 showed that LS₁ seems a more promising alternative for teachers in comparison with LS₂: 67 classes chose to validate LS₁, and only 41 chose LS₂. The results of the large-scale validation of the iTEC Cycle 1 scenarios will be available in March 2012.

Conclusions

The research results presented in the paper show that the MCEQLS approach, refined by the original method of consecutive triple application of AHP to establish criteria weights: (a) is applicable in real-life situations when schools have to decide on the use of particular learning scenarios for their education needs; and (b) could significantly improve the quality of expert evaluation of learning scenarios by noticeably reducing the level of expert evaluation subjectivity.

Use of the method of consecutive triple application of AHP leads to different criteria weights for particular learner groups and, accordingly, to different LS alternative evaluation results. Application of both the triangle and the trapezoidal fuzzy numbers shows similar quality evaluation results for the alternatives, i.e. the ranking of the analysed alternatives did not change when different fuzzy numbers methods were applied. The experimental evaluation results show that the proposed MCEQLS AHP method is reasonably objective, exact, and simple to use for selecting high-quality LS alternatives for particular learner groups. The proposed personalised LS quality evaluation approach is also applicable for the aims of the iTEC project in selecting LSs suitable for activist learners. Therefore, the authors recommend this approach for wide use by European policy makers, publishers, practitioners, and expert evaluators, both inside and outside the iTEC project, to evaluate the quality and personalisation level of learning scenarios.

The main limitations of the paper are as follows: (1) only one of the pair-wise comparison methods probably suitable for evaluating a small number of LS alternatives was selected; and (2) only one learning style (i.e. activist), drawn from a single classification (see sub-section 1.2), was chosen for the presented research.

The method of consecutive triple application of AHP presented in the paper is novel, and these new elements distinguish this work from earlier work in the area.

doi: 10.3846/20294913.2012.762952

References

Antucheviciene, J.; Turskis, Z.; Zavadskas, E. K. 2006. Modelling renewal of construction objects applying methods of the game theory, Technological and Economic Development of Economy 12(4): 263-268.

Ardito, C.; Costabile, M. F.; De Marsico, M.; Lanzilotti, R.; Levialdi, S.; Roselli, T.; Rossano, V. 2006. An approach to usability evaluation of e-learning applications, Universal Access in the Information Society 4(3): 270-283. http://dx.doi.org/10.1007/s10209-005-0008-6

Belton, V.; Stewart, T. J. 2002. Multiple criteria decision analysis: an integrated approach. Kluwer Academic Publishers. http://dx.doi.org/10.1007/978-1-4615-1495-4

Brauers, W. K.; Zavadskas, E. K. 2006. The MOORA method and its application to privatization in a transition economy, Control and Cybernetics 35(2): 443-468.

Brauers, W. K.; Zavadskas, E. K.; Peldschus, F.; Turskis, Z. 2008. Multi-objective decision-making for road design, Transport 23(3): 183-193. http://dx.doi.org/10.3846/1648-4142.2008.23.183-193

Chua, B. B.; Dyson, L. E. 2004. Applying the ISO9126 model to the evaluation of an elearning system, in Atkinson, R.; McBeath, C.; Jonas-Dwyer, D.; Phillips, R. (Eds.). Beyond the comfort zone: Proceedings of the 21st ASCILITE conference, 5-8 December, 2004, Perth, 184-190.

Dagiene, V.; Kurilovas, E. 2007. Design of Lithuanian digital library of educational resources and services: the problem of interoperability, Information Technology and Control 36(4): 402-411.

Dzemyda, G.; Saltenis, V. 1994. Multiple criteria decision support system: methods, user's interface and applications, Informatica 5(1-2): 31-42.

Gasperovic, J.; Caplinskas, A. 2006. Methodology to evaluate the functionality of specification languages, Informatica 17(3): 325-346.

Ginevicius, R.; Podvezko, V.; Bruzge, S. 2008. Evaluating the effect of state aid to business by multicriteria methods, Journal of Business Economics and Management 9(3): 167-180. http://dx.doi.org/10.3846/1611-1699.2008.9.167-180

Graf, S.; List, B. 2005. An evaluation of open source e-learning platforms stressing adaptation issues. Presented at ICALT 2005.

ISO/IEC 14598-1:1999. Information technology. Software product evaluation. Part 1: General overview. 1st ed. 1999-04-15.

ISO/IEC 9126-1:2001(E). Software engineering. Product quality. Part 1: Quality model. 2001.

iCOPER 2011. EU eContentplus Programme's iCOPER (Interoperable Content for Performance in a Competency-Driven Society) Best Practice Network web site. D. 3.1. Annex A. [Online], [cited 20 January 2012]. Available from Internet: http://www.icoper.org/

IMS LD 2003. IMS Learning Design Information Model. Version 1.0 Final Specification. [Online], [cited 20 January 2012]. Available from Internet: http://www.imsglobal.org/learningdesign/ldv1p0/imsld_infov1p0.html#1495449

Honey, P.; Mumford, A. 1992. Manual of learning styles. London, UK.

Hwang, C. L.; Yoon, K. S. 1981. Multiple attribute decision-making: methods and applications. Berlin, Heidelberg, New York: Springer-Verlag. http://dx.doi.org/10.1007/978-3-642-48318-9

iTEC (Innovative Technologies for an Engaging Classroom) project website 2011. [Online], [cited 20 January 2012]. Available from Internet: http://itec.eun.org/web/guest/

Lin, H.-F. 2010. An application of fuzzy AHP for evaluating course website quality, Computers & Education 54: 877-888. http://dx.doi.org/10.1016/j.compedu.2009.09.017

Kalibatas, D.; Turskis, Z. 2008. Multicriteria evaluation of inner climate by using MOORA method, Information Technology and Control 37(1): 79-83.

Kolb, D. A. 1984. Experiential learning: experience as the source of learning and development. Englewood Cliffs: Prentice Hall.

Koper, E. J. R.; Tattersall, C. (Eds.). 2005. Learning design: a handbook on modelling and delivering networked education and training. Heidelberg: Springer.

Kurilovas, E. 2009. Interoperability, standards and metadata for e-learning, in Papadopoulos, G. A.; Badica, C. (Eds.). Intelligent Distributed Computing III, Studies in Computational Intelligence 237, Berlin, Heidelberg: Springer-Verlag, 121-130. http://dx.doi.org/10.1007/978-3-642-03214-1_12

Kurilovas, E.; Vinogradova, I.; Serikoviene, S. 2011a. Application of multiple criteria decision analysis and optimisation methods in evaluation of quality of learning objects, International Journal of Online Pedagogy and Course Design 1(4): 62-76. http://dx.doi.org/10.4018/ijopcd.2011100105

Kurilovas, E.; Bireniene, V.; Serikoviene, S. 2011b. Methodology for evaluating quality and reusability of learning objects, Electronic Journal of e-Learning 9(1): 39-51.

Kurilovas, E.; Dagiene, V. 2010a. Evaluation of quality of the learning software. Basics, concepts, methods: Monograph. Germany, Saarbrucken: LAP LAMBERT Academic Publishing.

Kurilovas, E.; Dagiene, V. 2010b. Multiple criteria evaluation of quality and optimisation of e-learning system components, Electronic Journal of e-Learning 8(2): 141-150.

Kurilovas, E.; Dagiene, V. 2009a. Multiple criteria comparative evaluation of e-learning systems and components, Informatica 20(4): 499-518.

Kurilovas, E.; Dagiene, V. 2009b. Learning objects and virtual learning environments technical evaluation criteria, Electronic Journal of e-Learning 7(2): 127-136.

Kurilovas, E.; Serikoviene, S. 2010. Application of scientific approaches for evaluation of quality of learning objects in eQNet project, in Lytras, M. D., et al. (Eds.). WSKS 2010, Part I, CCIS 111, Heidelberg: Springer, 329-335.

MCDM, International Society on Multiple Criteria Decision Making web site 2011. [Online], [cited 20 January 2012]. Available from internet: http://www.mcdmsociety.org/

Oliver, M. 2000. An introduction to the evaluation of learning technology, Educational Technology & Society 3(4): 20-30.

Ounaies, H. Z.; Jamoussi, Y.; Ben Ghezala, H. H. 2009. Evaluation framework based on fuzzy measured method in adaptive learning system, Themes in Science and Technology Education 1(1): 49-58.

Peldschus, F.; Zavadskas, E. K. 2005. Fuzzy matrix games multi-criteria model for decision-making in engineering, Informatica 16(1): 107-120.

Saaty, T. L. 2008. Relative measurement and its generalization in decision making: why pairwise comparisons are central in mathematics for the measurement of intangible factors - the analytic hierarchy/network process, RACSAM (Review of the Royal Spanish Academy of Sciences, Series A, Mathematics) 102(2): 251-318.

Saaty, T. L. 1990. How to make a decision: the analytic hierarchy process, European Journal of Operational Research 48(1): 9-26.

Sivilevicius, H.; Zavadskas, E. K.; Turskis, Z. 2008. Quality attributes and complex assessment methodology of the asphalt mixing plant, Baltic Journal of Road and Bridge Engineering 3(3): 161-166. http://dx.doi.org/10.3846/1822-427X.2008.3.161-166

Shee, D. Y.; Wang, Y.-S. 2008. Multicriteria evaluation of the web-based e-learning system: a methodology based on learner satisfaction and its application, Computers and Education 50: 894-905. http://dx.doi.org/10.1016/j.compedu.2006.09.005

Sun, P.-C.; Tsai, R. J.; Finger, G.; Chen, Y.-Y.; Yeh, D. 2008. What drives a successful e-learning? An empirical investigation of the critical factors influencing learner satisfaction, Computers and Education 50: 1183-1202. http://dx.doi.org/10.1016/j.compedu.2006.11.007

Turskis, Z. 2008. Multi-attribute contractors ranking method by applying ordering of feasible alternatives of solutions in terms of preferability technique, Technological and Economic Development of Economy 14(2): 224-239. http://dx.doi.org/10.3846/1392-8619.2008.14.224-239

Turskis, Z.; Zavadskas, E. K.; Peldschus, F. 2009. Multi-criteria optimization system for decision making in construction design and management, Inzinerine Ekonomika -Engineering Economics (1): 7-17.

Wiley, D. A. 2000. Connecting Learning Objects to Instructional design Theory: a definition, a Metaphor, and a Taxonomy. Utah State University. [Online], [cited 20 January 2012]. Available from Internet: http://www.reusability.org/read/

Zavadskas, E. K.; Zakarevicius, A.; Antucheviciene, J. 2006. Evaluation of ranking accuracy in multicriteria decisions, Informatica 17(4): 601-618.

Zavadskas, E. K.; Antucheviciene, J. 2007. Multiple criteria evaluation of rural building's regeneration alternatives, Building and Environment 42(1): 436-451. http://dx.doi.org/10.1016/j.buildenv.2005.08.001

Zavadskas, E. K.; Kaklauskas, A.; Peldschus, F.; Turskis, Z. 2007. Multi-attribute assessment of road design solutions by using the COPRAS method, The Baltic Journal of Road and Bridge Engineering 2(4): 195-203.

Zavadskas, E. K.; Turskis, Z. 2008. A new logarithmic normalization method in games theory, Informatica 19(2): 303-314.

Zavadskas, E. K.; Turskis, Z.; Tamosaitiene, J.; Marina, V. 2008. Multicriteria selection of project managers by applying grey criteria, Technological and Economic Development of Economy 14(4): 462-477. http://dx.doi.org/10.3846/1392-8619.2008.14.462-477

Zavadskas, E. K.; Turskis, Z. 2010. A new additive ratio assessment (ARAS) method in multicriteria decision-making, Technological and Economic Development of Economy 16(2): 159-172. http://dx.doi.org/10.3846/tede.2010.10

Eugenijus KURILOVAS (a), Inga ZILINSKIENE (b)

(a) Vilnius Gediminas Technical University, Sauletekio al. 11, LT-10223 Vilnius, Lithuania

(a,b) Vilnius University Institute of Mathematics and Informatics, Akademijos str. 4, LT-08663 Vilnius, Lithuania

Received 12 October 2011; accepted 28 January 2012

Corresponding author: E. Kurilovas. E-mail: eugenijus.kurilovas@itc.smm.lt

Eugenijus KURILOVAS is Associate Professor at Vilnius Gediminas Technical University and Research Scientist at Vilnius University Institute of Mathematics and Informatics. He is a member of over 20 committees of international scientific journals and conferences, has published over 80 scientific papers and 6 books, and has participated in over 20 EU-funded large-scale R&D projects and studies. He is also a guest editor of the Journal of Universal Computer Science and a reviewer for Computers in Human Behavior, both abstracted/indexed in the Thomson ISI Web of Science. He has received over 10 best paper awards at major international e-learning conferences in recent years.

Inga ZILINSKIENE is a Researcher at Vilnius University Institute of Mathematics and Informatics. She has published a number of scientific papers in international journals.
Table 1. Conversion of linguistic variables and QWS symbols into non-fuzzy values

Linguistic variables   Triangle non-fuzzy values   Trapezoidal non-fuzzy values

Excellent (*)          0.850                       1.000
Good (#)               0.675                       0.800
Fair (+)               0.500                       0.500
Poor (|)               0.325                       0.200
Bad (0)                0.150                       0.000