
A Proposal for Improving the CAPES Evaluation System for Engineering III

1 Introduction

The pursuit of globally recognized teaching and research institutions has shaped the structure and composition of such institutions. Driven by a vision focused on the creation of knowledge and the development of practical applications for innovation, these institutions contribute to the social and economic development of the nation and, in doing so, attract additional financial, human and material resources, forming a virtuous cycle of excellence in their areas of operation (Etzkowitz & Leydesdorff, 1999; Benneworth et al., 2009; Olcay & Bulu, 2016).

In this context, the evaluation of universities and postgraduate programs is a problem influenced by several factors, which lead to the analysis of a wide range of alternatives; the multi-criteria decision aid approach therefore fits the problem of Higher Education (HE) ranking. In decision problems, the multi-criteria approach allows the construction of the preference structure(s) of the decision maker(s) over several objectives, represented by criteria such as teaching, research, citations, social integration and intellectual production. By taking into account the importance of each criterion for the decision problem, it provides the grounds for evaluating the available alternatives, in this case the set of universities (De Almeida et al., 2015).

The evaluation of Brazilian postgraduate programs is carried out by the Higher Education Personnel Improvement Coordination (CAPES), which both assesses programs and acts as a development agency. Evaluation systems such as Times Higher Education (THE), Quacquarelli Symonds (QS) and the Academic Ranking of World Universities (ARWU) evaluate universities globally and do not act as financing agencies. Thus, the Brazilian evaluation of graduate programs cannot be compared directly with these systems: although the indicators and databases are similar, the assessment systems are different (Yu et al., 2016; Olcay & Bulu, 2016).

The core of any postgraduate program is research, which depends on training and requires full dedication to study; it is the task of academic institutions to link these activities. The results of this research, when applied, lead to economic and social development (Dias & Rorato, 2014).

Accordingly, the Brazilian postgraduate system must be evaluated not only in terms of the number of programs, students and grades, but also in terms of its distribution across areas of knowledge and the quality of its research. The great challenges of the National Postgraduate System (SNPG) are the reduction of regional asymmetries and of asymmetries among areas of knowledge, the training time of new researchers, the qualification of teaching staff, the promotion of scientific growth, and the expansion of the country's role in the international arena (CAPES, 2015).

The promotion of Brazilian scientific production and the growth of the country's protagonism on the international stage, combined with Brazil's prominence as an emerging economy, mean that the country now also faces the challenge of preparing universities of international prestige. The term world-class university has become a slogan, used not only to express the improvement of the quality of teaching and research in higher education, but also to refer to the capacity to compete in the global market through the acquisition, adaptation and generation of knowledge (Niland, 2007; Salmi, 2009; Altbach, 2010).

In this context, this study presents the model used by the CAPES system to evaluate Brazilian graduate programs, a multicriteria model based on a mathematical structure for evaluating the problem. The study also presents the international systems used to rank universities and discusses perspectives from those systems that could be added to the model used by CAPES.

The contribution of this paper is to discuss the criteria and weights used in the CAPES evaluation of postgraduate programs, assessing the possibility of incorporating criteria and/or methodological aspects of the international assessment systems. Both the university rankings and the postgraduate evaluation system use a deterministic additive aggregation model: the evaluation of each criterion is an additive function of its sub-criteria, and the overall rating is a weighted sum of the criteria.

This paper is divided into five sections. Section 1 presented the introduction, with its rationale and objective. Section 2 describes the concept of Multi-Criteria Decision Aid (MCDA). Section 3 describes the systems used to evaluate universities and Brazilian graduate programs. Section 4 presents an analysis of the impacts of the criteria and weights used by the systems described. Finally, Section 5 presents the relevant conclusions.

2 Multi-Criteria Decision Aid--MCDA

A multicriteria problem is a situation in which there are at least two alternatives to choose from, and the choice is driven by the desire to meet multiple objectives, which are associated with the consequences of choosing each alternative (De Almeida et al., 2015).

The modeling of decision problems involves multiple criteria and different decision concerns over the alternatives, such as choice, ranking or classification. For this purpose, multicriteria decision aid (MCDA) methods are used, which cover a wide range of methods available in the literature (De Almeida, 2013).

The main multicriteria decision aid methods are classified into three major groups: single-criterion synthesis methods, outranking methods, and interactive methods (Roy, 1996; Belton & Stewart, 2002). Another widely used classification is the division between compensatory and non-compensatory methods (De Almeida, 2013).

An important aspect to be considered when choosing a method is the compensation that may exist between criteria in the aggregation model. In compensatory methods, a lower performance of an alternative on a given criterion can be offset by a better performance on other criteria; such methods may therefore favor more unbalanced alternatives. In non-compensatory methods this compensation does not exist, and the interaction between criteria tends to favor more balanced alternatives (De Almeida, 2013).

Outranking methods are based on pairwise comparisons of alternatives, so it is not possible to make an analytical aggregation that establishes a single score for each alternative. They assume the possibility of incomparability in the preference relations, use a notion of outranking between alternatives, are not transitive, and make non-compensatory assessments. Within this group there are two main families: the ELimination and Choice Expressing the Reality (ELECTRE) methods and the Preference Ranking Organization Method for Enrichment Evaluation (PROMETHEE) methods (Vincke, 1992; Roy, 1996; Brans & Mareschal, 2002; Palha et al., 2016).

In turn, single-criterion synthesis methods aggregate the criteria of a problem as if they were one criterion summarizing the others, and make compensatory assessments. Within this classification are the deterministic additive aggregation model (De Almeida, 2013) and Multiattribute Utility Theory (MAUT) (Keeney & Raiffa, 1976).

2.1 Deterministic Additive Aggregation Model

According to De Almeida (2013), the deterministic additive model aggregates the performance of the alternatives over all criteria. It assumes a deterministic context in obtaining the consequences of each alternative. In equation (1), k_j is the weight of criterion j, v_j is its value function, and x_j is the performance of the alternative on criterion j (j = 1, ..., n), obtained according to the preferences of the Decision Maker (DM); equation (2) represents the normalization of the weights.

V(x) = \sum_{j=1}^{n} k_j v_j(x_j)   (1)

\sum_{j=1}^{n} k_j = 1   (2)

An additive aggregation function can be used if and only if the criteria are mutually preferentially independent (Keeney & Raiffa, 1976). Another issue that must be taken into consideration is the elicitation of the scale constants, which cannot be based only on the degree of importance of the criteria, since the additive model represents a value function defined on the consequences rather than directly on the alternatives.

The scale constants are associated with the substitution rate, which captures the concept of trade-offs between criteria, i.e., the idea that gains on one criterion compensate losses on another. The value of a scale constant therefore depends on the range of the consequence space; the weights do not translate only the notion of importance of the criteria, but also the notion of trade-offs and compensation among them (De Almeida et al., 2015).
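As an illustration, the additive model of equations (1) and (2) can be sketched in Python; the criteria, weights and value functions below are hypothetical, chosen only to make the compensatory behavior of the additive model visible:

```python
def additive_value(performance, weights, value_functions):
    """Deterministic additive aggregation: V(x) = sum_j k_j * v_j(x_j).

    performance     -- dict: criterion -> raw performance x_j
    weights         -- dict: criterion -> scale constant k_j (must sum to 1)
    value_functions -- dict: criterion -> value function v_j
    """
    total = sum(weights.values())
    assert abs(total - 1.0) < 1e-9, "weights must be normalized (equation 2)"
    return sum(weights[c] * value_functions[c](performance[c]) for c in weights)

# Hypothetical criteria and weights (not taken from the CAPES tables):
weights = {"research": 0.6, "teaching": 0.4}
value_functions = {
    "research": lambda x: x / 100.0,  # linear value function on a 0-100 scale
    "teaching": lambda x: x / 100.0,
}
program_a = {"research": 90, "teaching": 40}  # unbalanced alternative
program_b = {"research": 70, "teaching": 70}  # balanced alternative
score_a = additive_value(program_a, weights, value_functions)
score_b = additive_value(program_b, weights, value_functions)
```

The unbalanced program A reaches the same overall score as the balanced program B: its high research performance fully compensates its weak teaching performance, which is exactly the compensatory effect of additive aggregation discussed in this section.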

3 Description of Evaluation Systems

3.1 Higher Education Personnel Improvement Coordination (CAPES)

The Higher Education Personnel Improvement Coordination (CAPES) is a foundation of the Brazilian Ministry of Education responsible for defining the guidelines for the opening, operation and evaluation of postgraduate courses in Brazil. The evaluation process conducted by CAPES is continuous, and this evaluation system allows the level of research activity to be compared between national and international programs (CAPES, 2013). Table 1 shows the main features of CAPES.

For the area of Engineering III, the CAPES evaluation is carried out by a committee of experts who assess the data provided by each program, composing a table from the evaluated items (criteria), qualitatively and quantitatively, that results in concepts between 1 and 7 for the postgraduate programs (Figure 1) (CAPES, 2013).

A program with an assessment level greater than or equal to 3 has its diplomas validated and nationally recognized. A program that offers only a master's degree is limited to level 5, with levels 6 and 7 reserved for doctoral programs of international reference. The system is divided into two main phases: the first classifies programs into five categories (concepts 1 to 5); in the second, programs classified at level 5 that stand out in scientific, cultural and artistic terms, and that have the features to compete at the international level, receive a second rating (CAPES, 2013).

The evaluation methodology includes five analytical criteria: Proposed Program; Faculty; Student Body, Theses and Dissertations; Intellectual Production; and Social Inclusion. Each of these criteria is subdivided into sub-criteria that consolidate all the aspects taken into account in the evaluation procedures (CAPES, 2013). Table 2 shows the evaluation criteria for the area of Engineering III and a brief description of the sub-criteria and their respective weights; the criteria may vary among the evaluation areas.

Using indices extracted from the reports of each program, each of these sub-criteria is evaluated qualitatively or quantitatively. Concepts ranging over Very Good, Good, Fair, Poor and Very Poor are assigned to each sub-criterion associated with the five evaluation criteria. The algorithm used to obtain the sub-criterion assessment, and the relationship between the numerical values and the concepts (VG, G, F, P and VP), are shown in Table 3. Based on the overall assessment, the levels (final concepts) from 1 to 5 are reached (CAPES, 2013).
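As a rough sketch of this aggregation step in Python (the concept-to-number mapping and the sub-criterion weights below are hypothetical placeholders, not the actual values of Table 3):

```python
# Hypothetical mapping from verbal concepts to numerical values
# (the real correspondence is defined in Table 3 of the CAPES documents):
CONCEPT_SCORES = {"VG": 5, "G": 4, "F": 3, "P": 2, "VP": 1}

def criterion_score(subcriteria):
    """Aggregate one criterion from (concept, weight) pairs of its sub-criteria."""
    return sum(CONCEPT_SCORES[concept] * weight for concept, weight in subcriteria)

# Hypothetical sub-criterion assessments for one criterion (weights sum to 1):
intellectual_production = [("VG", 0.5), ("G", 0.3), ("F", 0.2)]
score = criterion_score(intellectual_production)  # 5*0.5 + 4*0.3 + 3*0.2 = 4.3
```

The same weighted-sum step is then repeated over the criteria themselves to reach the overall assessment, as described in the text.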

The CAPES evaluation of postgraduate programs thus establishes, through performance levels, a verbal concept for each evaluated criterion. The final concept is obtained by additive aggregation (a weighted average) of all criteria. This form of additive aggregation results in a compensatory model, in which an unfavorable assessment of a program on a particular criterion may be offset by a very favorable assessment on another. Aggregating in this way favors programs with unbalanced evaluations, i.e., programs whose low grades on some criteria are compensated by high grades on others (Vincke, 1992).

3.2 Quacquarelli Symonds (QS)

QS is a commercially oriented ranking produced by the consultancy Quacquarelli Symonds, which specializes in education and study abroad. The ranking is designed to guide students seeking training in higher education institutions of excellence, as well as corporations and institutions seeking qualified professionals in the labor market (QS, 2016b). The main characteristics of the QS ranking are presented in Table 4.

The Quacquarelli Symonds (QS) World University Rankings aim to help students compare top universities around the world. The assessment is based on six performance indicators (academic reputation, employer reputation, student-to-faculty ratio, citations per faculty, international faculty ratio and international student ratio), grouped into four criteria (research, teaching, employability and internationalization) (QS, 2016a). Each indicator has a different weight in the calculation of the overall score, as shown in Table 5.

The QS World University Rankings model was developed in 2004 and ranks over 800 world universities; the results are published in an interactive classification table, which can be sorted by country or region and by each of the six performance indicators. Together with the QS World University Rankings by Faculty, the model provides a ranking of the 400 best universities in the world across five areas of knowledge: arts & humanities, engineering & technology, life sciences & medicine, natural sciences, and social sciences & management (QS, 2016a).

QS uses an alphanumeric notation to group and compare universities based on four aspects: size (student population); comprehensiveness (areas of operation); intensity of research; and the age of the university (QS, 2016a). This information is presented in the ranking alongside the final score obtained by each university, as shown in Table 6.

QS selects universities based on aspects such as their position in national rankings, academic reputation, geographical balance and direct submission, among others (QS, 2016a).

3.3 Times Higher Education (THE)

Times Higher Education (THE) is a weekly London magazine that reports on issues related to higher education. It is now best known for publishing the Times Higher Education World University Rankings, in which the magazine lists the world's best universities. This ranking judges universities throughout the world across all their key areas. The THE evaluation methodology took more than a decade to develop and was drawn up in consultation with leading universities around the world (Olcay & Bulu, 2016). The main features of the THE ranking are presented in Table 7.

The THE evaluation team assesses universities around the world against 13 performance indicators. These indicators are grouped in the evaluation methodology into five criteria, or assessment areas: Teaching (the learning environment), Research (volume, income and reputation), Citations (research influence), International Outlook (staff, students and research) and Industry Income (knowledge transfer) (Marginson, 2014). Table 8 shows the evaluation criteria, a brief description of the sub-criteria, and their respective weights.

The creation of the THE Rankings 2015-2016 top 800 list was based on a database with more than 100,000 data points on 3,000 universities spread across 88 countries. A global academic reputation survey among scholars from around the world was also conducted. To arrive at the rankings of the best universities in the world, THE applies a normalization approach to each performance indicator and then combines the indicators (Table 8) (THE, 2015).

The normalization approach is based on the distribution of the data within a particular indicator: a cumulative probability function is calculated, and the performance of each university is evaluated within this function. A cumulative probability score of X means that a university with random values for that indicator would fall below the scored university X percent of the time. For all performance indicators except the Academic Reputation Survey, for which an exponential component is used, the cumulative probability function is calculated using a version of Z-scoring (THE, 2015).
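A generic illustration of such a Z-score based cumulative-probability normalization, assuming a standard normal distribution (this is a sketch of the idea, not THE's exact procedure, and the citation counts are invented):

```python
import math

def z_score_normalize(values):
    """Map raw indicator values to cumulative-probability scores in (0, 100).

    Each value is converted to a Z-score against the cohort mean and standard
    deviation, then passed through the standard normal CDF, so a score of X
    means a randomly drawn institution would fall below it X percent of the time.
    """
    n = len(values)
    mean = sum(values) / n
    std = math.sqrt(sum((v - mean) ** 2 for v in values) / n)

    def cdf(z):
        # standard normal cumulative distribution function via the error function
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

    return [100.0 * cdf((v - mean) / std) for v in values]

# Invented citation counts for five hypothetical universities:
scores = z_score_normalize([120, 80, 300, 45, 150])
```

Normalizing in this way makes indicators with very different raw scales (citations, income, survey responses) comparable before they are combined with the weights of Table 8.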

3.4 Academic Ranking of World Universities (ARWU)

The Academic Ranking of World Universities (ARWU) was developed by Shanghai Jiao Tong University in order to compare the position of Chinese universities with their best competitors in the world, not only so they would know where to send their students, but also to meet the Chinese government's desire to establish world-class universities in the country (ARWU, 2015a). Table 9 shows the main features of the ARWU ranking.

The ranking uses five indicators that measure: scientific production in quantity and quality; the number of highly cited researchers; former students or teachers who received the Nobel Prize or the Fields Medal, its equivalent in mathematics; and publication in prestigious journals. There is a sixth, composite indicator that combines the previous ones while considering the number of full-time teachers of the institution. Finally, the six indicators are aggregated into a final numerical score, with the best institution receiving 100 points (ARWU, 2015a). Table 10 shows the evaluation criteria, a brief description of the sub-criteria, and their respective weights.

All data used in the ARWU indicators are collected from secondary sources, among them the official website of the Nobel Prize, the International Mathematical Union for the Fields Medals, and various Thomson Reuters databases for citation and publication data. The number of full-time academics is obtained from national sources, as shown in Table 11.

4 Comparative Analysis of the Evaluation Systems in Brazil and Abroad

The methodologies used to construct rankings can vary significantly depending on the institution responsible for their preparation and on their objectives. Therefore, depending on who prepares the ranking and for what purpose, certain indicators are or are not selected to compose the model that measures the performance of universities (Salmi & Saroyan, 2007; Hazelkorn, 2010; Dill, 2006; Perez-Esparrels & Lopez Garcia, 2011).

It is known that the development and use of rankings will always be subject to criticism due to a number of problems and drawbacks, among them: the criteria for selecting indicators; the weights assigned to them; the standardization of results (many rankings combine several steps to produce the final score); errors in the collection and processing of data; and the lack of transparency and reliability of the results (Saisana et al., 2011).

4.1 International Rankings: Performance of Brazilian Universities

University rankings were created to evaluate the quality of higher education; they not only provide a visible way to differentiate universities but have also come to promote academic, scientific and educational competition among universities on a global scale (Salmi & Saroyan, 2007). International rankings, in turn, are intended to list institutions according to certain criteria, and the results are generally interpreted comparatively, whether or not they are associated with the mission of the classified institutions. Given the importance and impact that rankings have gained for universities, stakeholders and society in general, the performance of Brazilian universities in three major international rankings is presented below, analyzing their positions in the overall rankings (Saisana et al., 2011).

In the 2015 edition of the ARWU ranking of the top 500 world universities, six universities were Brazilian. The best placed is USP, which appears in the 101-150 range and is the only Latin American university among the 150 best in the world. The other Brazilian higher education institutions in ARWU 2015 are UFRJ, UNICAMP and UNESP, classified in the 301-400 range, and UFRGS and UFMG, classified in the 401-500 range.

Considering only the scores obtained by each Brazilian university in the indicators analyzed by the ARWU ranking, as seen in Figure 2, although it is not possible to determine its exact position, USP, classified in the range from 101 to 150, is clearly the institution with the highest scores in the indicators PUB (number of publications indexed in the WoS in the last year, 20%), HiCi (researchers on the list of most cited, 20%) and N&S (publications in Nature and Science in the last five years, 20%), as well as in the weighting of these indicators by the number of full-time teachers of the institution.

Figure 3 shows the scores obtained by USP, the best placed among the Brazilian universities, against the scores of Harvard University, which ranks first. USP and the other Brazilian universities did not score in the Award and Alumni indicators (Nobel Prizes or Fields Medals won by staff and former students), and the scores achieved in the other indicators are quite low. Between USP and Harvard University, the biggest score differences are in the indicators N&S, publications in Nature and Science (a difference of 87.9 points), and HiCi, highly cited researchers (a difference of 87.8 points).

The N&S and HiCi indicators each have a weight of 20% in the final ranking score. The smallest difference between USP and Harvard is in the PUB indicator, publications in the WoS, where the difference between the two institutions is only 28 points. These differences place USP about 150 positions away from Harvard. Considering that performance in research (60%) is the central point of the ARWU rating, Brazil's performance in this ranking is consistent with the profile of Brazilian production.

With the application of the THE Global Ranking methodology in 2015 to the 800 best world universities, 17 were Brazilian. The best placed was USP, which appears in the 201-250 range, followed by UNICAMP in the 351-400 range; these two universities are among the top 400 in the world. The other universities among the 800 in the overall THE ranking are UFRJ and PUC-Rio, appearing in the 501-600 range, and UnB, UFMG, UFPR, UFRGS, UFSCar, UFV, UFLA, LST, PUCRS, UERJ and UNESP, classified in the 601-800 range. Compared with the THE ranking, the number of Brazilian universities among the 500 evaluated by the ARWU ranking is lower, which shows that the Chinese ranking is among the most elitist, since it employs a methodology based on high-performance research indicators.

Figure 4 shows that in the Industry Income and International Outlook indicators UNICAMP performed better than USP, while in the Citations, Teaching and Research indicators USP had the highest indexes, standing out as the best-evaluated Brazilian university.

Figure 5 shows the scores obtained by USP in the indicators considered by the ranking, against the scores obtained by the California Institute of Technology (Caltech), which occupies the first position in the overall ranking.

Considering the scores obtained by USP in the five criteria, the difference between the two institutions varies on average by 48 points. As previously described, the Teaching and Research criteria are the more sensitive ones, as they contain the Reputation Survey indicators, which account for a significant share, 50% and 60% respectively, of those criteria's ratings.

The results show that the performance of Brazilian universities in the THE ranking is very modest. Based on the exclusion criteria declared by THE, most Brazilian universities would be eligible for inclusion in the ranking; however, their inclusion depends on the score achieved, since the global ranking has only 400 positions. Of the 17,500 existing universities in the world, the ranking is therefore able to classify only about 2.28% of all universities, and in 2015 only 0.5% of Brazilian universities reached the top 400.

The fact that Brazil is not among the best placed countries does not mean that there is no significant scientific production or no centers of excellence in research. It must be kept in mind that international classifications are generally homogenizing; there are many details and pockets of excellence in specific areas that are not captured by international rankings such as THE when institutions are evaluated as a whole.

With the application of the QS ranking methodology in 2015 to the 800 best world universities, 21 were Brazilian. The best placed was USP, in position 143, followed by UNICAMP in position 195; these two universities are among the top 200 in the world. UFRJ was the third best evaluated university, in position 323, so these three universities are among the top 400 in the world. The other universities among the 800 in the global QS ranking are: UFRGS, in the 451-460 range; UnB, UNIFESP, PUC-SP and PUC-RJ, in the 501-550 range; UFMG, between 551 and 600; UERJ and UFSCar, between 651 and 700; and PUC-RS, UEL, UFBA, UFSC, UFSM, UFV, UFC, UFPR, UFPE and UFF, placed above 701. Figure 6 shows the scores obtained by the best-positioned Brazilian university in the eight indicators considered by the QS ranking.

Taking as reference the scores of MIT, which ranks first, Figure 7 was drawn in order to compare the evaluated criteria of USP and MIT.

The biggest difference between USP and MIT appears in the indicators that analyze the university's degree of internationalization, understood here as the proportion of foreign students and teachers. Whereas in rankings, in general, small differences can significantly influence the final classification, the performance differences of USP in the indicators employed by QS, although not astronomical, result in a final score 37.6 points lower, placing it 143 positions away from the leader.

4.2 Analysis of the CAPES Evaluation Model

CAPES evaluates postgraduate courses in 48 major areas. These assessment areas, in turn, group areas of knowledge, which are subdivided into sub-areas. For a brief presentation of the CAPES evaluation model, this paper examines the area of Engineering III, specifically the postgraduate programs in Production Engineering, taking into account the CAPES evaluation criteria for postgraduate programs. It was thus decided to work with the 13 Brazilian universities that have PhD programs with a CAPES concept greater than or equal to 4.

Figure 8 shows the result of the CAPES evaluation process for the postgraduate programs in Engineering III. This result comes from the weighting of the sub-criteria within each criterion shown in Table 2; the totals for each criterion are then weighted again, resulting in the overall evaluation of the program, as shown in Table 3. Based on the overall assessment, the levels (final concepts) from 1 to 5 are reached; the universities UFPE and UFRGS present concept 6.

Table 12 shows a compendium of the data. In the criteria Faculty and Student Body, Theses and Dissertations, under the sub-criteria composition and performance of faculty and number of theses and dissertations in the evaluation period, it can be observed that USP is the university with the largest number of teachers, a total of 28, while UTFPR and UNISINOS have the lowest number, a total of 11 each. Regarding the number of theses and dissertations in the evaluation period, the university with the largest number of defended works was UFF, with 5.92, and the universities with the lowest performance were UTFPR, UNIFEI, UNISINOS and UFMG, with zero defenses in the analyzed period.

The production of texts, book chapters, collections and other entries is integrated into the Intellectual Production criterion. The contribution of these variables to the teachers' technical production indicator is not very relevant, because the CAPES evaluation model gives more relevance to articles published in journals. UFRJ (84) has the largest number of items published under this indicator, and PUC-Rio (8) has the smallest.

In the criterion Intellectual Production, sub-criterion qualified publications of the program per permanent teacher, UFRGS obtained the best grade, 2.34, making it the best university in this sub-criterion. According to Table 12, the other universities in the ranking were: UFSC (2.33), UFRJ (2.17), UFSCar (1.85), UNIFEI (1.81), USP/SC (1.8), PUC-Rio (1.55), UFPE (1.45), UFMG (1.24), UTFPR (1.16), UNISINOS (1.15), UFF (0.85) and USP (0.75). The indices described for each university represent the average output per permanent teacher of the postgraduate program: UFRGS's score of 2.34 means that each teacher produced on average 2.34 articles, while USP's score of 0.75 means that less than one article was produced per professor. Note that this sub-criterion is strongly influenced by the number of permanent teachers in the postgraduate program. Two questions should be raised at this point: first, the sum of the weights used to reach the overall index from the criteria is not standardized; second, the weights are compensatory.
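The per-teacher indicator discussed above is a simple ratio; as a sketch, with a hypothetical faculty count and weighted-article total chosen only to reproduce the 2.34 score reported for UFRGS:

```python
def publications_per_faculty(weighted_articles, permanent_faculty):
    """Qualified-publications indicator: average output per permanent teacher."""
    return weighted_articles / permanent_faculty

# Hypothetical figures (not from Table 12): 13 permanent teachers and
# 30.42 weighted articles yield a score of 2.34, as in the UFRGS example.
score = publications_per_faculty(30.42, 13)
```

Because the denominator is the faculty size, two programs with the same total production can receive very different scores, which is the sensitivity to program size noted in the text.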

The evaluation of the number of articles published in conference proceedings is contained in the criterion Student Body, Theses and Dissertations, in the sub-criterion quality of theses, dissertations and productions. The ranking of universities publishing in conference proceedings, in descending order, is: 1 UFSC (1274), 2 UFSCar (459), 3 UTFPR (384), 4 USP/SC (294), 5 UFRGS (258), 6 UFPE (238), 7 UFF (235), 8 UNISINOS (222), 9 UFRJ (195), 10 USP (183), 11 UNIFEI (143), 12 PUC-Rio (99) and 13 UFMG (82).

5 Analysis of the Criteria of the CAPES Evaluation Model Compared with the Criteria of the ARWU, THE and QS Rankings

Both kinds of evaluation, the ranking of universities and the evaluation of postgraduate programs, are complex tasks. The adoption of any set of criteria can commit injustices, which justifies the inclusion of several complementary qualitative and quantitative parameters for a better evaluation.

In this context, quantitative assessments tend to be more easily understood and used than qualitative ones, as with the impact indicators used for journals and university rankings. This parallel, however, comes with a warning about the reliability of these indicators, as illustrated by recent demonstrations against the indiscriminate use of the impact factor. Table 13 shows the criteria and the indicators associated with each ranking.

The THE and QS rankings share three criteria (Teaching, Citations and International outlook), but with different weights. In the THE ranking, the Teaching criterion analyzes the performance of institutions in teaching and the learning environment, from the perspective of both students and teachers. Five indicators are employed (reputation survey, staff-to-student ratio, doctorate-to-bachelor's ratio, doctorates awarded per academic staff, and institutional income), which together contribute 30% to the final score. In the QS ranking, this criterion accounts for 50% of the overall score and measures the prestige of institutions among academics and employers, through opinion surveys with both groups.

The Citations criterion in the THE ranking seeks to show how much each university contributes to the construction of human knowledge, identifying which research stands out and has been taken up by the scientific community; it has an influence of 30% on the overall score. In the QS ranking, this category measures the impact of the scientific production of institutions based on the citations received by their researchers. The Citations per faculty indicator is used to produce the score: the total number of citations over a five-year period (Scopus) is divided by the number of university teachers.
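
The Citations per faculty computation described above reduces to a simple ratio; a minimal sketch with illustrative figures:

```python
# QS "Citations per faculty": total citations in the five-year window
# (Scopus) divided by the number of teachers. Figures are illustrative.

def citations_per_faculty(total_citations_5y: int, faculty_count: int) -> float:
    return total_citations_5y / faculty_count

print(citations_per_faculty(48_000, 1_200))  # 40.0
```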

The THE criterion International outlook examines whether there is diversity (foreign students and teachers) on campus, a sign of how far an institution adopts a global perspective. This criterion has a relative weight of 7,5% in the ranking and is composed of three indicators of equal weight (2,5% each). In the QS ranking, this criterion contributes 10% to the final score and checks the degree of international openness of the institution with regard to foreign teachers and students.

The QS and ARWU rankings possess one criterion in common, Quality of education. For the ARWU ranking, the associated indicator is the Number of alumni who earned a Nobel Prize or the Fields Medal in mathematics (Alumni), with a weight of 10%; for the QS ranking, the related indicator is the Student-to-faculty ratio, with a weight of 20%. In CAPES, quality of education is evaluated within the criterion Student body, theses and dissertations, with a weight of 35%.

Comparing the THE and ARWU models by their criteria, some similarities and differences can be seen: the models incorporate some similar indicators, but only one criterion, Research, is common to both. THE has three indicators whose weights sum to 30% (Reputation survey, Research income and Research productivity), while ARWU has two indicators with weights of 20% each (Number of articles published in Nature and Science, and Number of articles listed in Thomson Scientific's Science Citation Index Expanded and its Social Sciences Citation Index, where, since 2006, listings in the Social Sciences Citation Index count double in the article count).

After analyzing the criteria used in the rankings under study, whose main objective is to rank universities, this study sought to verify the possibility of incorporating new criteria into the CAPES evaluation system for postgraduate programs. Numerous problems were observed regarding the criteria and their weightings in the CAPES evaluation system, among them the lack of separation between evaluation and the funding of postgraduate programs, and the fact that the hierarchy of criteria hinders the parameterization of weights, because each hierarchical level carries a different weight. This does not happen in the rankings studied, where each indicator contributes its weight directly to the final score.
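
The weight-parameterization problem noted above can be made concrete: in the hierarchical CAPES model, the effective weight of a subcriterion is the product of the weights along its path, whereas in the rankings each indicator carries its final weight directly. A sketch using the Table 2 figures:

```python
# Effective weight in a hierarchical model: criterion weight multiplied
# by subcriterion weight (figures from Table 2).

def effective_weight(criterion_weight: float, subcriterion_weight: float) -> float:
    return criterion_weight * subcriterion_weight

# Intellectual production (35%) -> Qualified publications (50% within it)
# -> 17,5% of the final score.
qualified_pub = effective_weight(0.35, 0.50)
print(round(qualified_pub, 3))  # 0.175

# In THE, by contrast, "Citations of published work" contributes its
# stated 30% to the final score directly, with no intermediate level.
```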

The criterion Intellectual production, with a relative weight of 35%, is one of the most important in the CAPES evaluation system, because it contains the subcriterion Qualified publications of the program per permanent teacher, which contributes 17,5% to the final score, the largest of all contributions, hence the importance of analyzing it in depth. The Qualis system is an important part of the overall evaluation process of Brazilian postgraduate programs: all areas use it as a source to classify and rank the journals in which their researchers publish. Each CAPES area uses different indicators to rank the journals, such as the H index, SciELO, Cites per Document, SJR and JCR. In the specific case of Engineering III, the indicator used is the JCR.

However, any attempt to quantify the stratification (high-impact production: A1, A2, B1, B2, B3, B4, B5 and C) often causes distortions and may offer little stimulus to more differentiated scientific production. One simple way to control this issue and effectively encourage more differentiated output is to score the scientific production of teachers and of the PPG by the sum of JCR points obtained, as in the ARWU ranking. According to Leite (2010), the JCR base is considered more reliable for being more stable and suffering less fluctuation than other bases.
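
The alternative attributed to Leite (2010) above, scoring production by the sum of JCR points rather than by Qualis strata, can be sketched as follows; the journal names, impact factors and strata point values are all hypothetical:

```python
# Two scoring schemes for the same publication list: fixed points per
# Qualis stratum versus the sum of JCR impact factors.
# All journal names, JCR values and point values are hypothetical.

QUALIS_POINTS = {"A1": 100, "A2": 85, "B1": 70, "B2": 55,
                 "B3": 40, "B4": 25, "B5": 10, "C": 0}

publications = [
    {"journal": "J. Example A", "stratum": "A1", "jcr": 6.2},
    {"journal": "J. Example B", "stratum": "A1", "jcr": 2.1},  # same stratum, very different impact
    {"journal": "J. Example C", "stratum": "B2", "jcr": 0.9},
]

qualis_score = sum(QUALIS_POINTS[p["stratum"]] for p in publications)
jcr_score = sum(p["jcr"] for p in publications)

# The stratified score treats the two A1 papers identically, while the
# JCR sum preserves the difference -- the distortion discussed above.
print(qualis_score)            # 255
print(round(jcr_score, 1))     # 9.2
```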

It was also found that the criteria based on the JCR are generated independently by two groups with distinct private and commercial interests, Thomson Reuters and Scopus, which do not necessarily represent worldwide scientific publication adequately. CAPES should therefore rethink its use of the JCR base, because these two groups have considerably expanded their databases to include journals from less developed countries, which creates, among other problems, difficulties for time-series analysis. It is necessary to review this dependency and generate alternative criteria that are less biased and more "authentic".

Worldwide, the number and quality of citations received (with and without self-citations) and indicators such as the H-index of each teacher are highly valued, yet CAPES has not incorporated them into its evaluation platform. The assessment could include a criterion valuing and quantifying the citations obtained by the scientific products of a given PPG in the previous four years, as well as the total number of citations and the H-index of the permanent teachers.
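
Since the text proposes incorporating the H-index of permanent teachers, a minimal computation sketch (using the standard definition: the largest h such that the researcher has h papers with at least h citations each):

```python
# H-index: largest h such that h papers have at least h citations each.

def h_index(citations: list[int]) -> int:
    h = 0
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))     # 4
print(h_index([25, 8, 5, 3, 3, 2]))  # 3
```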

It is also worth proposing a different criterion for scoring co-authorships. Unfortunately, it seems to be common practice for many teachers to be listed as co-authors without actually having met the criteria for authorship. This does not help the quality of Brazilian research; it actually generates distortions.

It is necessary to incorporate into the CAPES evaluation system a criterion for the internationalization of Brazilian institutions with foreign institutions. This is an important mechanism for exchange and scientific development, and its promotion needs to be valued, as is done in the THE and QS rankings.

It is also necessary to incorporate into the CAPES faculty criteria the activities that make up the teacher's role, such as opinions issued and peer reviews for different journals, participation in editorial boards, participation in dissertation and thesis examination boards or in hiring committees for teachers, administrative positions, supervision of graduate students, and membership in university boards and committees, among many other activities, in the evaluation of PPG teachers; these activities currently are not reflected in the score.

Among the features observed: unlike ARWU, which employs only bibliometric indicators to identify leading universities, THE's main characteristic is the weight given to opinion surveys of academic peers and labor-market professionals about the universities' reputation in teaching and research. THE thus differs from ARWU by employing both qualitative analysis (prestige and reputation) and quantitative analysis (performance indicators) to identify the universities that stand out worldwide in education and research. QS uses an alphanumeric notation to group, compare and select universities based on national rankings, academic reputation, geographical balance and direct submission, among other factors. In turn, CAPES uses bibliometric indicators in its postgraduate evaluation system.

6 Conclusion

In this work we aimed to discuss the THE, QS and ARWU ranking systems and the evaluation of postgraduate studies in Brazil, analyzing the criteria and weightings of the CAPES evaluation system. In general terms, the main objective of the CAPES evaluation system is to promote the pursuit of standards of excellence. The evaluation results are, in turn, the basis for the formulation of postgraduate policies and the design of development actions. In view of this, there is a need to separate the two issues: the quality of education, and development actions.

University rankings differ from each other mainly in their methodological orientation. Depending on the motivations that give rise to them and the particular objectives of each, THE, QS and ARWU produce comparisons based on weighted sums of a limited set of indicators. This shows the difficulty of assigning appropriate weights to each indicator in order to meet the demands of users who consult the rankings with such diverse interests. It was also observed that, with regard to the standardization of results, many of the rankings described combine several steps to produce the final score.
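
The "several steps" that the rankings combine to produce a final score typically amount to normalizing each indicator across universities and then taking a weighted sum; a generic sketch (universities, indicator values and weights are all illustrative, not taken from any real ranking):

```python
# Generic ranking pipeline: min-max normalize each indicator, then
# combine the normalized values with a weighted sum and sort.

def min_max(values):
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

universities = ["U1", "U2", "U3"]
indicators = {
    "citations":  [1200, 400, 800],
    "reputation": [80, 60, 90],
}
weights = {"citations": 0.6, "reputation": 0.4}

normalized = {name: min_max(vals) for name, vals in indicators.items()}
scores = [sum(weights[name] * normalized[name][i] for name in indicators)
          for i in range(len(universities))]
ranking = sorted(zip(universities, scores), key=lambda pair: -pair[1])
print(ranking)
```

Different choices at each step (z-scores instead of min-max, different weights) reorder the results, which is precisely the volatility discussed by Saisana et al. (2011).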

Thus the results of this paper contribute to the improvement of the CAPES evaluation methodology by considering the criticisms identified in the analysis and suggesting the inclusion of new indicators and criteria, as well as a redistribution of weights. It was also observed that research performance has been the paramount consideration in the analysis and classification of universities in the major international university rankings.

DOI: https://doi.org/10.5585/ExactaEP.v17n1.7684

References

Academic Ranking of World Universities--ARWU (2015a). Available at http://www.shanghairanking.com/aboutarwu.html (accessed May 2016).

Academic Ranking of World Universities--ARWU (2015b). Available at http://www.shanghairanking.com/ARWUMethodology-2015.html (accessed May 2016).

Benneworth, P., Coenen, L., Moodysson, J., & Asheim, B. (2009). Exploring the multiple roles of Lund University in strengthening Scania's regional innovation system: towards institutional learning? European Planning Studies, 17(11), 1645-1664.

Belton, V., & Stewart, T.J. (2002). Multiple Criteria Decision Analysis. Kluwer Academic Publishers.

CAPES, Higher Education Personnel Improvement Coordination (2013). Evaluation Report, Three-Year Period 2010-2012. Available at http://www.capes.gov.br/component/content/article/44-avaliacao/4686engenharias-iii (accessed May 2016).

CAPES--Higher Education Personnel Improvement Coordination. Planilhas comparativas da Avaliacao Trienal 2013 [Comparative spreadsheets of the 2013 Triennial Evaluation]. Engenharias III--Engenharia de Producao. Available at http://www.avaliacaotrienal2013.capes.gov.br/resultados/planilhas-comparativas (accessed May 2016).

De Almeida, A.T., Cavalcante, C.A.V., Alencar, M.H., Ferreira, R.J.P., Almeida-Filho, A.T., & Garcez, T. V. (2015). Multicriteria and Multiobjective Models for Risk Reliability and Maintenance Decision Analysis. New York, USA, Springer.

De Almeida, A T. (2013). Additive-Veto Models For Choice And Ranking Multicriteria Decision Problems. Asia-Pacific Journal of Operational Research, 30(6), 1-20.

Dias, E.D., & Rorato, R. (2014). O evolucionismo economico na pos-graduacao brasileira: uma analise a partir da otica da educacao. Avaliacao: Revista da Avaliacao da Educacao Superior. 19(1), 193-226.

Dill, D. D. (2006). Convergence and diversity: the role and influence of university rankings. In: Keynote Address presented at the Consortium of Higher Education Researchers (CHER). 19th Annual Research Conference.

Etzkowitz, H., & Leydesdorff, L. (1999). The future location of research and technology transfer. J. Technol. Transfer. 24(2/3), 111-123.

Keeney, R.L., & Raiffa, H. (1976). Decisions with Multiple Objectives: Preferences and Value Trade-offs. John Wiley & Sons.

Leite, J. P. (2010). O novo QUALIS e a avaliacao dos Programas de Pos-Graduacao na area medica: mitos e realidade. Revista Brasileira de Psiquiatria, 32(2), 103-105.

Niland, J. (2014). The challenge of building world-class universities. In: The World Class University and Ranking: Aiming Beyond Status, eds. J. Sadlak and N. C. Liu. Bucharest: UNESCO-CEPES.

Marginson, S. (2014). University Rankings and Social Science. European Journal of Education, 49(1), 1-4.

Olcay, G.A., & Bulu, M. (2016). Is measuring the knowledge creation of universities possible? A review of university rankings. Technological Forecasting & Social Change, xxx(x), xxx-xxx.

Palha, R.P., Almeida, A.T., & Alencar, L.H. (2016). A Model for Sorting Activities to Be Outsourced in Civil Construction Based on ROR-UTADIS. Mathematical Problems in Engineering, 2016(1), 1-15.

Perez-Esparrells, C., & Gomez-Sancho, J.M. (2011). Los rankings internacionales de las instituciones de educacion superior y las clasificaciones universitarias en Espana: vision panoramica y prospectiva de futuro. Serie: Documentos de Trabajo. Fundacion de las cajas de ahorros.

QS Top Universities (2016a). QS World University Rankings: Methodology. Available at http://www.topuniversities.com/university-rankings-articles/world-university-rankings/qs-world-university-rankingsmethodology (accessed May 2016).

QS Top Universities (2016b). QS Classifications. Available at http://www.iu.qs.com/university-rankings/qs-classifications/ (accessed May 2016).

Roy, B. (1996). Multicriteria Methodology For Decision Aiding. Kluwer Academic Publishers.

Saisana, M., D'hombres, B., & Saltelli, A. (2011). Rickety numbers: volatility of university rankings and policy implications. Research Policy, 40(1), 165-177.

Salmi, J. (2009). El desafio de crear universidades de rango mundial. Washington, USA, Banco mundial.

Salmi, J., & Saroyan, A. (2007). League tables as policy instruments: uses and misuses. Higher Education Management and Policy, 19(2), 31-68.

Times Higher Education--THE (2015). World University Rankings 2015-2016 methodology. Available at https://www.timeshighereducation.com/news/ranking-methodology-2016 (accessed May 2016).

Vincke, P. (1992). Multicriteria decision-aid. John Wiley & Sons.

Yu, M.C., Wu, Y.C.J., Alhalabi, W., Kao, H.Y., & Wu, W.H. (2016). ResearchGate: An effective altmetric indicator for active researchers? Computers in Human Behavior, 55, 1001-1006.

Received Aug. 13, 2017 / approved Dec. 7, 2017

Mirian Batista de Oliveira Bortoluzzi (1)

Fagner Jose Coutinho de Melo (2)

Armando Muchanga (3)

(1) Holds a degree in Agroindustrial Production Engineering from the Universidade do Estado de Mato Grosso (UNEMAT) and a Master's in Production Engineering from the Universidade Federal de Pernambuco (UFPE). Universidade Federal de Pernambuco. mirianbortoluzzi@gmail.com

(2) Master in Production Engineering from the Universidade Federal de Pernambuco (UFPE), postgraduate specialization in Quality and Productivity Management from the Faculdade dos Guararapes, and a degree in Administration from the Universidade Federal de Pernambuco (UFPE). Universidade Federal de Pernambuco. fagnercoutinhomelo@gmail.com

(3) Holds a degree in Mathematics from the Universidade Eduardo Mondlane, Mozambique (2010) and a Master's in Production Engineering from the Universidade Federal de Pernambuco (UFPE), Brazil (2016). Universidade Federal de Pernambuco. mipe_armando@yahoo.com.br

Caption: Figure 1: Dynamic of grade assignment by the Engineering III committee

Caption: Figure 2: Scores of Brazilian universities on the ARWU 2015 ranking indicators

Caption: Figure 3: Points of comparison obtained by USP and Harvard on the ARWU 2015 ranking indicators

Caption: Figure 4: Scores of UNICAMP and USP on the THE 2015 ranking indicators

Caption: Figure 5: Points of comparison obtained by USP and Caltech on the THE 2015 ranking indicators

Caption: Figure 6: Scores of UNICAMP and USP on the QS 2015 ranking indicators

Caption: Figure 7: Points of comparison obtained by USP and MIT on the QS 2015 ranking indicators
Table 1: Main characteristics of CAPES

Periodicity                Quadrennial (published
                           between July and August)

Ranking                  Ranking of the quantified
Postgraduate              bibliographic production
Program

Comprehensive           Formulation of postgraduate
                                  policies

                        Design of development actions

Data sources         CV database of the Lattes Platform

                       Qualis system and Impact Factor

                      CAPES collection; filing is done
                        by the course coordinators

Inclusion Criteria        Bibliographic production
                        distributed according to the
                       Qualis stratification; defended
                          theses and dissertations

Source: CAPES (2013).

Table 2: Criteria and the associated weights of the indicators employed
in CAPES

Criterion              Main indicators                    Weight
                                                       contribution
                        Subcriterion          Weight   to the final
                                               (%)       score (%)

Proposal           Consistency, scope and       40           0
Program (0%)       updating of concentration
                            areas

                    Program planning with a      40           0
                      view to its future
                         development

                     Infrastructure for         20           0
                   teaching, research and,
                      if appropriate,
                          extension

Faculty (20%)          Composition and          30           6
                   performance of faculty

                     Size of the faculty        30           6

                  Distribution of research      30           6
                   and training activities
                   of the program teachers

                  Contribution of teachers      10           3
                     to teaching and /or
                  research in undergraduate

Student body,       Number of theses and        30         10.5
theses and          dissertations in the
dissertations        evaluation period
(35%)              Distribution of supervisions  10          3.5
                    Quality of Theses and       40          14
                      Dissertations and
                         production

                  Program efficiency in the     20           7
                  formation of doctors and
                          fellows

Intellectual      Qualified publications of     50         17.5
production         the program per permanent
(35%)                      teacher

                  Distribution of qualified     30         10.5
                  publications in relation
                   to permanent faculty of
                         the Program

                    Technical production,       20           7
                      patents and other
                    relevant productions

Social             Regional and/or national     40           4
Inclusion (10%)    insertion and impact of
                        the program

                       Integration and          40           4
                   cooperation with other
                          programs

                        Visibility or           20           2
                   transparency given by
                     the program to its
                         performance

Source: Adapted CAPES (2013).

Table 3: Correspondence between the numerical values and the concepts

Criterion                   Correspondence subcriterion

                Subcriterion                  VG

                   1a (1)                  40 ≤ FOR
                   1b (1)                  80 ≤ ADE
Faculty              2                  1 ≤ ATI ≤ 2,5

                     3                     50 ≤ D3A

                     1                  1,5 ≤ ORI ≤ 4

Student body,        2                     PSA ≤ 15
theses and         3a (1)               0,40 ≤ PRDD
dissertations      3b (1)               0,35 ≤ PRDM
                   4a (1)              32 < EFD ≤ 30
                   4b (1)                  EFT ≤ 60

                     1                  0,85 ≤ PQD
Intellectual         2                   50 ≤ DPD
production           3                  0,8 ≤ PTC
                     --                     --

Social               --                     --
Inclusion

Criterion              Correspondence subcriterion

                                    G

                          30 ≤ FOR < 40
                          70 ≤ ADE < 80
Faculty           1 ≤ ATI < 0,8 / 2,5 ≤ ATI < 3,0

                          40 ≤ D3A < 50

                   1 ≤ ORI < 1,5 / 4 ≤ ORI < 6

Student body,             25 < PSA ≤ 15
theses and              0,30 ≤ PRDD < 0,4
dissertations          0,30 ≤ PRDM < 0,35
                          32 < EFD ≤ 30
                          66 < EFT ≤ 60

                        0,65 ≤ PQD < 0,85
Intellectual              40 ≤ DPD < 50
production               0,6 ≤ PTC < 0,8
                               --

Social                         --
Inclusion

Criterion             Correspondence subcriterion

                                   F

                          20 ≤ FOR < 30
                          60 ≤ ADE < 70
Faculty          0,6 ≤ ATI < 0,8 / 3,5 ≤ ATI < 3,0

                          30 ≤ D3A < 40

                   0,7 ≤ ORI < 1 / 6 ≤ ORI < 8

Student body,             35 < PSA ≤ 25
theses and              0,20 ≤ PRDD < 0,3
dissertations           0,20 ≤ PRDM < 0,3
                          34 < EFD ≤ 32
                          72 < EFT ≤ 66

                        0,45 ≤ PQD < 0,65
Intellectual              30 ≤ DPD < 40
production               0,4 ≤ PTC < 0,6
                               --

Social                         --
Inclusion

Criterion                        Correspondence subcriterion

                              P                            VP

                       10 ≤ FOR < 20                    FOR < 10
                       50 ≤ ADE < 60                    ADE < 50
Faculty        0,4 ≤ ATI < 0,6 / 3,0 ≤ ATI < 4,0   ATI < 0,4 / ATI < 0,4

                       20 ≤ D3A < 30                    D3A < 20

                0,4 ≤ ORI < 0,7 / 8 ≤ ORI < 10     ORI < 0,4 / ORI < 10

Student body,          45 < PSA ≤ 35                    45 < PSA
theses and           0,10 ≤ PRDD < 0,2                 PRDD < 0,10
dissertations        0,10 ≤ PRDM < 0,2                 PRDM < 0,10
                       36 < EFD ≤ 34                    36 < EFD
                       78 < EFT ≤ 72                    78 < EFT

                     0,25 ≤ PQD < 0,45                 PQD < 0,25
Intellectual           20 ≤ DPD < 30                    DPD < 20
production            0,2 ≤ PTC < 0,4                  PTC < 0,2
                            --                             --

Social                      --                             --
Inclusion

(1) The weight of the indicator is assigned to one and/or the other indicator, according to which is used in the chosen ranking

Source: Adapted CAPES (2013).

Table 4: Main characteristics of QS

Periodicity               Annual (Published in September)

                                   Global ranking

                                 Ranking by 5 areas

                              Ranking by 30 disciplines

Published                 Ranking of universities under 50
Ranking                                years

                                  Ranking QS Asia

                              QS ranking Latin America

                                  Ranking QS BRICS

                          Analyzes about 3000 universities
                           800 universities ranked in the
                                   global ranking

Comprehensive             300 universities ranked by area

                      200 universities ranked by disciplines

                     50 universities classified under 50 years

                             Surveys implemented by QS

                                  Data base Scopus

Data sources               Data provided by universities

                           Data from national education
                                      agencies

                        Universities are selected based on
Inclusion Criteria       performance in national rankings,
                           reputation in opinion polls,
                          geographical balance and direct
                           submission by the university

                         These criteria are used to select
                        the over 3000 universities evaluated

Source: Adapted QS, (2016a).

Table 5: Criteria and the associated weights of the
indicators employed in QS

Criterion         Main indicators Subcriterion

                  Academic reputation : 40%
Teaching (50%)
                  Employer reputation : 10%

Citations (20%)   Citations per faculty: 20%

Quality of        Student-to-faculty ratio : 20%
education (20%)

International     International faculty: 5%
outlook (10%)     International students: 5%

Source: Adapted QS Top Universities, (2016a).

Table 6: Classification ratings in the QS World University Rankings

Size                      Comprehensive              Age

XL - Extra Large     FC - Full Comprehensive     5 -Historic
>= 30,000 students    More 5 faculty areas     (>= 100 years)

L - Large              CO - Comprehensive        4 - Mature
>= 12,000 students     All 5 faculty areas      (< 100 years)

M - Medium                FO - Focused         3 - Established
>= 5,000 students       > 2 faculty areas       (< 50 years)

S - Small                SP - Specialist          2 - Young
< 5,000 students       <= 2 faculty areas        (<25 years)

                                                   1 - New
                                                 (<10 years)

Size                    Research
                       Intensity

XL - Extra Large     VH--Very High
>= 30,000 students

L - Large               HI--High
>= 12,000 students

M - Medium             MD--Medium
>= 5,000 students

S - Small               LO--Low
< 5,000 students

Source: Adapted QS, (2016b).

Table 7: Main characteristics of THE

Periodicity             Annual (Published in September)

Published                        Global ranking
Ranking                         Ranking by area
                             Ranking by discipline

                        400 universities ranked in the
                                 global ranking

Comprehensive           100 universities ranked by area

                     100 universities classified under 50
                                     years

                        Independent institutions (Nobel
                               and Fields Medal)

                      Institutional Profiles Project (GPP)

Data sources            Thomson Reuters database (WoS)

                         Ministry of Education of each
                        country, national statistics
                     institutes and university associations

                           Universities are analyzed
                          automatically, except those
                        that do not offer undergraduate
                                   teaching,

Inclusion Criteria        operate in excessively
                        specialized fields, or have
                       published fewer than 200 articles
                                  per year.

Source: Adapted THE, (2015).

Table 8: Criteria and the associated weights of the
indicators employed in THE

Criterion                  Main indicators
                             Subcriterion

                        Reputation survey: 15%

                     Staff-to-student ratio: 4.5%

                   Doctorate-to-bachelor's ratio:
Teaching (30%)                  2.25%

                        Doctorates awarded-to
                       academic staff ratio: 6%

                     Institutional income: 2.25%

                        Reputation survey: 18%

Research (30%)           Research income: 6%

                      Research productivity: 6%

Citations (30%)    Citations of published work: 30%

                      International-to-domestic
                         student ratio: 2.5%

International      International-to-domestic-staff
outlook (7,5%)               ratio: 2.5%

                  International collaboration: 2.5%

Industry income    Knowledge-transfer activities:
(2,5%)                          2.50%

Source: Adapted THE, (2015).

Table 9: Main characteristics of ARWU

Periodicity          Annual (Published in September)

                             Global ranking

Published                    Ranking by area
Ranking
                         Ranking by discipline

                    Analyzes about 3000 universities

                 500 universities ranked in the global
Comprehensive                    ranking

                     200 universities ranked by area

                 200 universities ranked by disciplines

                       Opinion poll conducted by
                            Thomson Reuters

Data sources           Thomson Reuters database

                    National Ministry of Education,
                National Bureau of Statistics, National
                    Association of Universities, etc.

                    Universities that received Nobel
                    prize Fields medals, possessing
Inclusion           researchers among the most cited
Criteria           or articles published in Nature or
                     Science and with a significant
                   number of articles indexed in the
                                WoS base

Source: Adapted ARWU, (2015a).

Table 10: Criteria and the associated weights of
the indicators employed in ARWU

Criterion                      Main indicators
                                 Subcriterion

Quality of              Number of alumni who earned a
education (10%)       Nobel Prize or a Fields Medal in
                          mathematics (Alumni): 10%

                          Number of researchers who
                      earned a Nobel Prize in physics,
                      chemistry, medicine or economics
                         and/or the Fields Medal in
                           mathematics (Award): 20%

Quality of staff           Number of highly cited
(40%)                 researchers in the fields of life
                         science, medicine, physics,
                       engineering and social sciences
                                 (HiCi): 20%

                       Number of articles published in
                        Nature and Science (N&S): 20%

Research Output         Number of articles listed in
(40%)                   Thompson Scientific's Science
                       Citation Index Expanded and its
                       Social Sciences Citation Index.
                        Added to the article count in
                      2006, listings in Social Sciences
                       Citation Index the count double
                                  (PUB): 20%

                       The weighted score of the above
                       five indicators divided by the
Size of the            number of full-time equivalent
institution (10%)     academic staff. If the number of
                    academic staff for institutions of a
                       country cannot be obtained, the
                      weighted scores of the above five
                        indicators is used (PCP): 10%

Source: Adapted ARWU, (2015b).
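To illustrate how the Table 10 weights combine into a single composite, the sketch below aggregates the six ARWU indicators. The institution and its indicator values are hypothetical; the sketch assumes each indicator has already been scaled to a 0-100 range (with the best-performing institution at 100), which is how ARWU prepares its data before weighting.

```python
# Illustrative sketch of ARWU-style aggregation using the weights
# from Table 10. Indicator scores are assumed pre-scaled to 0-100.

WEIGHTS = {
    "Alumni": 0.10,  # alumni with a Nobel Prize or Fields Medal
    "Award":  0.20,  # staff with a Nobel Prize or Fields Medal
    "HiCi":   0.20,  # highly cited researchers
    "N&S":    0.20,  # papers in Nature and Science
    "PUB":    0.20,  # papers indexed in SCIE / SSCI
    "PCP":    0.10,  # per capita performance
}

def arwu_score(indicators: dict) -> float:
    """Weighted sum of the six ARWU indicators (0-100 scale)."""
    return sum(WEIGHTS[name] * indicators.get(name, 0.0)
               for name in WEIGHTS)

# Hypothetical institution with scaled indicator scores:
example = {"Alumni": 40, "Award": 30, "HiCi": 55,
           "N&S": 60, "PUB": 80, "PCP": 50}
print(round(arwu_score(example), 1))  # prints 54.0
```

Because the weights sum to 1, the composite stays on the same 0-100 scale as the individual indicators.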

Table 11: Data sources used by ARWU--2015

Indicator                    Source

Nobel Prizes                 http://www.nobelprize.org/

Fields Medals                http://www.mathunion.org/index.
                             php?id=prizewinners

Highly cited                 http://thomsonreuters.com/
researchers                  essential-science-indicators and
                             http://www.highlycited.com

Papers published in          http://www.webofknowledge.com
Nature and Science

Articles indexed in          http://www.webofknowledge.com
Science Citation
Index-Expanded

Journal Citation             Journal Citation Report,
Report                       http://www.webofknowledge.com

Other                        Number of academic staff; data
                             obtained from national agencies
                             such as the National Ministry of
                             Education, National Bureau of
                             Statistics, National Association
                             of Universities, etc.

Source: Adapted from ARWU (2015b).

Table 12: Compendium of Data

Alternative                         PUC-RIO   UFF    UFMG   UFPE   UFRGS

Theses and dissertations (Te/Di)     3.21     5.92    0     3.37   1.17

Full papers            A1             12       5      8      12      5
published in           A2              4       4      13      8     17
technical and          B1             15       8      21     11     14
scientific             B2             17      27       9     19     27
journals               B3              4      11       4      1     43
                       B4             17      37       3      9     39
                       B5              6      50       4     12     19

Complete works published              99     235      82    233    258
in technical-scientific events

Books and          Full text           3       2       1      2      1
book chapters      Book chapters       5      19      14      5     41
                   Gleanings           0       0       0      0      0
                   Entries and         0       3       3      9      4
                   other

Alternative                         UFRJ   UFSC   UFSCAR   UNIFEI

Theses and dissertations (Te/Di)    2.23   0.62    1.56      0

Full papers            A1            17     6       11       8
published in           A2            27     21      16       5
technical and          B1            44     43      12      21
scientific             B2            19     67      68      25
journals               B3            36     87      30       9
                       B4            29     88      25      13
                       B5            16    247      45      19

Complete works published            195   1274     459     143
in technical-scientific events

Books and          Full text         18     14       2       0
book chapters      Book chapters     59     54      38       6
                   Gleanings          7      0       0       0
                   Entries and        0      7       8       1
                   other

Alternative                         UNISINOS   USP   USP/SC   UTFPR

Theses and dissertations (Te/Di)       0       0.9    1.58      0

Full papers            A1              2        6      9        5
published in           A2              4        7     11        2
technical and          B1              4        8     32        2
scientific             B2             25       37     23        9
journals               B3              8       13     20       27
                       B4             17       27     29       20
                       B5             24       16     43      109

Complete works published             222      183    294      384
in technical-scientific events

Books and          Full text           3        6     11        1
book chapters      Book chapters      18       45     24       36
                   Gleanings           0        3      0        2
                   Entries and         9        6      4        6
                   other

Source: Adapted from CAPES (2015).
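Raw production counts such as those in Table 12 sit on very different scales (A1 papers range from 2 to 17, while complete works range into the hundreds), so multi-criteria decision methods typically normalize each indicator before aggregating it. The sketch below applies min-max normalization to the A1 paper counts from Table 12; the choice of min-max scaling here is illustrative, not a claim about the normalization the authors actually employ.

```python
# Sketch: min-max normalization of one raw production indicator
# before multi-criteria aggregation. A1 full-paper counts are taken
# from Table 12; the normalization scheme itself is an assumption.

a1_papers = {
    "PUC-RIO": 12, "UFF": 5, "UFMG": 8, "UFPE": 12, "UFRGS": 5,
    "UFRJ": 17, "UFSC": 6, "UFSCAR": 11, "UNIFEI": 8,
    "UNISINOS": 2, "USP": 6, "USP/SC": 9, "UTFPR": 5,
}

def min_max(values: dict) -> dict:
    """Rescale values to [0, 1]: worst performer -> 0, best -> 1."""
    lo, hi = min(values.values()), max(values.values())
    return {k: (v - lo) / (hi - lo) for k, v in values.items()}

normalized = min_max(a1_papers)
print(round(normalized["UFRJ"], 2))      # best on A1, prints 1.0
print(round(normalized["UNISINOS"], 2))  # worst on A1, prints 0.0
```

After this step, indicators measured in different units become comparable and can be combined with criterion weights in the usual multi-criteria fashion.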

Table 13: Comparison of the Rankings

                       Main indicators

Criterion                  THE World                QS World

                    Reputation survey: 15%         Employer
                                                reputation: 10%

                       Staff-to-student            Academic
                          ratio: 4.5%           reputation: 40%

Teaching           Doctorate-to-bachelor's
                         ratio: 2.25%

                    Doctorates awarded-to-
                   academic staff ratio: 6%

                  Institutional income: 2.25%

                    Reputation survey: 18%

Research             Research income: 6%

                   Research productivity: 6%

Citations                Citations of            Citations per
                      published work: 30%         faculty: 20%

Quality of                    --                  Student-to-
education                                           faculty
                                                  ratio: 20%

Quality of                    --                       --
faculty

International      International-to-domestic     International
outlook               student ratio: 2.5%         faculty: 5%

                   International-to-domestic     International
                       staff ratio: 2.5%          students: 5%

                         International
                      collaboration: 2.5%

Industry income       Knowledge-transfer               --
                       activities: 2.5%

Size of the                   --                       --
institution

                                   Main indicators

Criterion                                ARWU

Teaching                                  --

                  Number of articles published in Nature and Science
                                      (N&S): 20%

Research          Number of articles listed in Thomson Scientific's
                    Science Citation Index Expanded and its Social
                   Sciences Citation Index; in the 2006 count,
                    articles listed in the Social Sciences Citation
                        Index count double (PUB): 20%

Citations                                 --

Quality of          Number of researchers who earned a Nobel Prize
education          in physics, chemistry, medicine or economics and/
                   or the Fields Medal in mathematics (Award): 20%

Quality of         Number of highly cited researchers in the fields
faculty                 of life science, medicine, physics,
                      engineering and social sciences (HiCi): 20%

International                             --
outlook

Industry income                           --

Size of the         The weighted score of the above five indicators
institution          divided by the number of full-time equivalent
                    academic staff; if the number of academic staff
                    for institutions of a country cannot be obtained,
                     the weighted score of the above five indicators
                                   is used (PCP): 10%

Source: The authors (2016).

Figure 8: Overall assessment of the four criteria used in the CAPES
evaluation

           Faculty   Intellectual   Concept   Student body,    Social
                      production               theses and     inclusion
                                              dissertations

PUC/PR     1         1.75           4         1.4             0.5
PUC-RIO    1         1.75           5         1.75            0.5
UFF        0.8       1.75           4         1.75            0.5
UFMG       1         1.75           4         1.75            0.4
UFPE       1         1.75           6         1.75            0.5
UFRGS      1         1.75           6         1.75            0.5
UFRJ       0.8       1.75           5         1.75            0.5
UFSC       0.8       1.75           5         1.75            0.5
UFSCAR     0.8       1.75           4         1.75            0.5
UNIFEI     1         1.75           5         1.75            0.4
UNIP       0.8       1.75           5         1.75            0.5
UNISINOS   1         1.75           5         1.75            0.5
USP/SC     0.8       1.75           5         1.75            0.5
UTFPR      0.8       1.05           4         1.75            0.4

Source: Adapted from CAPES (2015).
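To show how the per-criterion scores in Figure 8 can feed a program-level comparison, the sketch below sums the four evaluation criteria (faculty, intellectual production, student body/theses, and social inclusion) for a few of the programs. The equal weighting is an assumption made for the sketch; the actual CAPES weighting differs, and the "Concept" column is the resulting grade, so it is excluded from the sum.

```python
# Illustrative aggregation of the four CAPES criteria from Figure 8.
# Order of criteria: faculty, intellectual production, student body
# and theses, social inclusion. Equal weights are an assumption;
# "Concept" (the final CAPES grade) is deliberately not summed.

scores = {
    "UFPE":  [1.0, 1.75, 1.75, 0.5],
    "UFRGS": [1.0, 1.75, 1.75, 0.5],
    "UTFPR": [0.8, 1.05, 1.75, 0.4],
}

totals = {program: sum(values) for program, values in scores.items()}

# Rank programs by their aggregated score, highest first:
for program, total in sorted(totals.items(), key=lambda t: -t[1]):
    print(program, round(total, 2))
```

Under this toy aggregation the two concept-6 programs (UFPE and UFRGS) come out ahead of UTFPR, which is consistent with the ordering the Concept column in Figure 8 suggests.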
COPYRIGHT 2019 Universidade Nove de Julho

Article Details
Author: Bortoluzzi, Mirian Batista de Oliveira; Melo, Fagner Jose Coutinho de; Muchanga, Armando
Publication: Revista Exacta
Date: Jan 1, 2019