
Assessment Practices in Higher Education and Results of the German Research Program Modeling and Measuring Competencies in Higher Education (KoKoHs)

The ever-increasing internationalization of study programs and global mobility of students call for greater transparency of and valid information on the knowledge and skills students acquire over the course of their studies. Several theoretical and methodological challenges arise from the immense diversity of degree courses, study programs, and institutions. A recent review of the literature has revealed a substantial lack of research on assessment practices in higher education, especially on domain-specific and generic competency models, as well as on measurement methods and valid instruments for competency assessment. The German Federal Ministry of Education and Research initiated the national research program Modeling and Measuring Competencies in Higher Education (KoKoHs) to address these challenges. This article describes the assessment practices, aims, and conceptual and methodological framework of KoKoHs and presents the main results of the first funding phase of the program.

Earlier approaches to competency assessment in higher education were limited mostly to prerequisite admissions tests, data on learning opportunities, and subjective measures (Kuhn & Zlatkin-Troitschanskaia, 2011). Recent analyses have revealed that accredited higher-education institutions lack sufficiently reliable and valid instruments to assess students' learning outcomes and that there are tremendous differences in competency evaluation in higher education within departments and institutions, as well as among institutions nationally and internationally. Hence, not surprisingly, the results of many studies indicate that most certificates and grades in higher education are hardly comparable on the national level, let alone on an international level (Zlatkin-Troitschanskaia, Shavelson, & Kuhn, 2015). Given the increasing internationalization and global mobility of students, it is imperative that students' knowledge and skills be made transparent across various study models and countries.

Valid assessment of competencies in higher education forms the basis for transparency and comparability of academic degrees, which are stipulated aims of policy reform programs. Therefore, in 2010, the German Federal Ministry of Education and Research (BMBF) initiated the national Modeling and Measuring Competencies in Higher Education (KoKoHs) research program, which addresses the political and practical challenges of conducting competency assessment in higher education. We present an overview of the main outcomes following the end of the first five-year funding phase of the KoKoHs program. First, we outline the structure, aims, and theoretical and methodological framework of KoKoHs. Second, we present the accumulated results in the areas of competency modeling, test development, and validation. Third, we report on key activities of the program that will shape the future of competency assessment in higher education in Germany, including the dissemination of results, internationalization of KoKoHs networks, and provision of support for young researchers. We conclude by outlining challenges and perspectives for the second funding phase of KoKoHs.

KoKoHs: Structure and Aims

The KoKoHs program provides systematic, internationally compatible, fundamental research on competency assessment in higher education (Zlatkin-Troitschanskaia, Kuhn, & Toepper, 2014). During the first phase, the program included 70 projects with 220 researchers at more than 50 institutions of higher education in Germany and Austria. Selected through an external review process, each of the 24 cross-university collaborative projects was required to bring together domain experts, teaching methodology experts, and research methodology experts from at least two universities. KoKoHs projects involved more than 50 international experts (from universities, testing institutes, etc.) from 20 countries, including the United States, Australia, Japan, and South Korea. The first phase ran from 2011 to 2015. Following a positive external evaluation, the program was extended for another five years (2016 to 2020).

The general purpose of the KoKoHs program is to systematically model and assess domain-specific and generic competencies of students in higher education. KoKoHs projects take into account curricular and job-related requirements, transform theoretical competency models into suitable measuring instruments, and validate test score interpretations. To enable meaningful cooperation and promote cross-project synergies during the first phase, KoKoHs focused on student competencies in one generic cluster (self-regulation and general research competencies) and four domain-specific clusters comprising some of the most popular fields of study in Germany:

* engineering, including electrical engineering and mechanical engineering;

* economics and social sciences, including teacher training in economics and social sciences;

* educational sciences, including psychology; and

* teacher training in science, technology, engineering, and mathematics (STEM subjects).

Conceptual and Methodological Framework

In the KoKoHs projects, competencies were defined as latent cognitive and noncognitive underpinnings of performance (Ewell, 2005; Rychen, 2004). The KoKoHs program adopted Weinert's (2001) definition of competencies as "cognitive abilities and skills that individuals possess or acquire in order to solve certain problems as well as the aligned motivational, volitional and social dispositions and skills to apply the solutions in different situations successfully and responsibly" (pp. 27-28). This general definition was specified for competencies acquired in higher education. During the first phase, KoKoHs projects focused predominantly on (latent) cognitive abilities and skills and specified them for their respective fields of study (Alexander, 1997; Alexander, Winters, Dinsmore, & Parkinson, 2011).

Models of knowledge and skills were operationalized through measuring instruments and tested in empirical assessments. Validation efforts aimed to establish the validity of interpretations and answer the key question: What can we infer about students' actual competencies from the (cognitive) representations elicited by the assessment? This approach is always challenging: The underlying cognitive abilities and skills--ideally also the corresponding noncognitive (e.g., affective-motivational) aspects--need to be operationalized through representative, practice-oriented, and often domain-specific tasks; assessments need to represent specific situational contexts and be free of potential bias, such as measurement error or the influence of construct-irrelevant test-taking behavior (Kulikowich & Alexander, 1994, 2003).

The general assessment framework in KoKoHs was based on the Assessment Triangle by Pellegrino, Chudowsky, and Glaser (2001), which covers three fundamental aspects of assessment: "a model of student cognition and learning in the domain, a set of beliefs about the kinds of observations that will provide evidence of students' competencies, and an interpretation process for making sense of the evidence" (p. 44) (see also Shavelson, 2013; Webb, Shavelson, & Steedle, 2012). These three aspects corresponded with key objectives of KoKoHs:

1. Define the construct to be assessed (cognition).

2. Develop and use suitable models and measuring instruments (observation).

3. Draw valid inferences from the assessment data (interpretation).

The Assessment Triangle provided the cornerstones for an assessment connecting theoretical constructs of students' competencies with empirical evidence; that is, developing estimates based on limited instances of students' knowledge and skills in an argument-based approach of "reasoning from evidence" (Mislevy, 1994). For more specific, practical orientation, KoKoHs project teams adopted the evidence-centered assessment approach and test development concept (Mislevy & Haertel, 2006; Hattie, Jaeger, & Bond, 1999), which includes the following steps (a minimal code sketch follows the list):

* Domain analysis and modeling: In the assessment of competencies in higher education, initial steps included analyzing and defining the domain and modeling the domain-specific construct to be assessed.

* Assessment framework: An assessment framework was defined, which served to operationalize the theoretical model and develop items for the test instruments.

* Assessment implementation: The instruments were tested empirically.

* Assessment delivery: The test scores were analyzed using various psychometric models. Analyses always included evaluations of the fit of the data to the theoretical constructs and to the corresponding data interpretations. The conclusive evaluation of the tests with regard to various validation criteria served as a basis for further decisions (see also Pant, Rupp, Tiffin-Richards, & Köller, 2009).
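As a purely illustrative aid, the following minimal Python sketch organizes these four steps as a simple pipeline; all class names, fields, and example values are our own hypothetical choices, not KoKoHs artifacts or code.

```python
from dataclasses import dataclass, field

@dataclass
class DomainModel:                  # step 1: domain analysis and modeling
    domain: str
    construct: str                  # the competency construct to be assessed

@dataclass
class AssessmentFramework:          # step 2: operationalize the model as item specs
    model: DomainModel
    item_specs: list = field(default_factory=list)

@dataclass
class AssessmentData:               # step 3: empirical testing of the instruments
    framework: AssessmentFramework
    responses: dict = field(default_factory=dict)   # person id -> item scores

def evaluate_fit(data: AssessmentData) -> bool:     # step 4: psychometric analysis
    # Placeholder check; a real analysis would fit a psychometric model
    # and evaluate data-model fit (see the IRT sketch later in this article).
    return all(len(r) == len(data.framework.item_specs)
               for r in data.responses.values())

model = DomainModel("business and economics", "professional competency")
framework = AssessmentFramework(model, ["interpret a financial plan",
                                        "evaluate a market scenario"])
data = AssessmentData(framework, {"student_01": [1, 0]})
print("data-model fit plausible:", evaluate_fit(data))
```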

Validation is of paramount importance in KoKoHs. Validation efforts followed the Standards for Educational and Psychological Testing (AERA, APA, & NCME, 2014) and the argument-based approach to validation (Kane, 2013).

Overview of the Main Results

The following is a synopsis of the results of the KoKoHs program at the end of the first funding phase and before the beginning of the second funding phase. Results are summarized and presented for the three main areas of work in the program: development of competency models; development of test instruments; and validation. Furthermore, we describe the efforts taken to reach the following three strategic aims of the program: to achieve national and global visibility; to ensure internationalization and compatibility of results; and to help young researchers establish a specialized research community.

Competency Models, Assessments, and Validation

The teams of the 24 collaborative projects developed 41 competency models of generic and domain-specific competencies in higher education. Content validity (including curricular validity) was ensured in the KoKoHs projects through analyses of almost 1,000 documents such as module descriptions and study regulations from more than 250 institutions of higher education throughout Germany. Furthermore, analyses of items and tasks from almost 1,500 documents (e.g., exams, exercises, lecture notes) informed the construction of test items as shown in Table 1.

In addition to these document analyses, validation measures employed in the KoKoHs projects often included expert interviews and cognitive labs (with approximately 500 experts and 500 participants, respectively, across the 24 collaborative projects). Expert interviews provided evidence of content validity; cognitive labs provided evidence of cognitive validity through analyses of fit between cognitive processes in the theoretical models and thought processes observed empirically in think-aloud interviews while participants responded to items.

The teams of the 24 collaborative projects also created new assessment instruments based on the competency models developed, and/or they adapted existing international instruments, where available, to meet their needs. Altogether, more than 60 paper-pencil instruments and almost 40 computer-based instruments were developed in the KoKoHs program, as depicted in Table 2. During the first five-year funding phase, more than 220 researchers and several hundred student assistants were involved in project work. In addition, lecturers and students from the participating institutions in Germany actively supported the program by supervising or participating in the surveys and assessments during classes. As an incentive, all participating institutions, lecturers, and students received professionally prepared feedback based on the aggregated, anonymized data. In turn, this facilitated the transfer of research results and findings into higher-education practice.

Additional, more action-oriented approaches were employed in the KoKoHs projects for valid assessment of specific competency facets. For example, video-recorded role plays were used to assess the explanatory knowledge of pre-service physics teachers within the domain of teacher education. The KoKoHs researchers used--in addition to almost 10 newly developed video-based instruments--various other measuring methods, such as critical incidents, for complementary, qualitative, in-depth analyses of competency levels. Moreover, the teams of the KoKoHs collaborative projects used quantitative methods (e.g., structural equation analysis) to gather evidence on validity aspects such as the internal structure of competency constructs.

The internal structure was differentiated according to both content requirements (e.g., knowledge and skills related to financial plans as part of business competency) and cognitive requirements (e.g., remembering, applying, or evaluating) (Anderson & Krathwohl, 2001).
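To make this two-facet structure concrete, here is a small hypothetical Python sketch of a test blueprint crossing content dimensions with cognitive levels (after Anderson & Krathwohl, 2001); the item labels and their classifications are invented for illustration.

```python
from collections import Counter

CONTENT = {"microeconomics", "macroeconomics", "finance"}
COGNITIVE = {"remember", "apply", "evaluate"}

# Each item is classified on both facets of the blueprint.
items = [
    ("item01", "microeconomics", "remember"),
    ("item02", "microeconomics", "apply"),
    ("item03", "finance", "apply"),
    ("item04", "finance", "evaluate"),
    ("item05", "macroeconomics", "remember"),
]

# Cross-tabulate the blueprint to check how evenly the content dimensions
# are covered across cognitive levels.
cells = Counter((content, level) for _, content, level in items)
for content in sorted(CONTENT):
    row = {level: cells.get((content, level), 0) for level in sorted(COGNITIVE)}
    print(f"{content:15s} {row}")
```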

Overall, the teams of the KoKoHs collaborative projects assessed domain-specific and generic competencies, as well as personal and structural influencing factors, for approximately 50,000 students at more than 220 institutions of higher education in Germany and Austria. Results in the three areas of competency modeling, test development, and validation contributed significantly to the provision of a reliable, valid, and internationally compatible basis for competency assessment in higher education in Germany.

Project Example

The WiwiKom project, which focuses on modeling and measuring the competencies of students and graduates of business and economics, provides one example of how the conceptual and methodological framework was implemented and how evidence of psychometric validity was gathered (Zlatkin-Troitschanskaia, Förster, Brückner, & Happ, 2014). The construct of professional competency in business and economics was defined in a theory-driven competency model, and validation followed Kane's (2013) interpretation/use argument approach: empirical evidence gathered in the assessments was documented in the validity argument, and subsequent analyses of the data indicated that the modeling was adequate.

The theory-driven model of competency in business and economics developed in WiwiKom (Zlatkin-Troitschanskaia et al., 2014) differentiated seven domain-specific content dimensions and three levels of cognitive requirements. The content dimensions represented the core curriculum in business and economics, subdivided into content areas (e.g., microeconomics, finance). The cognitive dimension specified levels of professional competency, defined in terms of the mental processes (e.g., understanding, applying) necessary to respond appropriately to situational cognitive requirements of increasing complexity. The competency model served as a basis for developing the WiwiKom test instrument and validating it in qualitative and quantitative studies with a focus on the five key validity aspects, while adhering to the Standards for Educational and Psychological Testing (AERA et al., 2014).

For curricular and content validation, the test content was examined in document analyses and compared with curricula and textbooks from 98 degree courses at 64 institutions of higher education in Germany; it also was evaluated by lecturers of business and economics in expert interviews (N = 32) and online ratings (N = 78). For cognitive validation, the mental processes of 32 students were examined in cognitive labs, where students were asked to think aloud while responding to test items. For item calibration, test standardization, and establishment of the validity of the internal test structure, three field surveys were conducted in the WiwiKom project, assessing approximately 10,000 students of business and economics from all years of study at 57 institutions of higher education in Germany. The data were analyzed using methods such as confirmatory factor analysis and item response theory (IRT) modeling to gather evidence on the dimensionality and gradation of the examined competency (see the sketch below). In addition, surveys were administered to gather data on multiple personal variables (e.g., gender, prior knowledge) and institutional variables (e.g., type of institution) for analyses of the relationship between the construct and other variables.
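To illustrate the kind of IRT analysis referred to above, the following is a minimal sketch, assuming simulated rather than real response data, of fitting a two-parameter logistic (2PL) model by joint maximum likelihood with numpy/scipy. It is not the WiwiKom analysis code, and joint maximum likelihood is used here only for brevity (operational calibrations typically use marginal maximum likelihood).

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(0)

# Simulated stand-in data: 200 examinees answering 8 dichotomous items.
n_persons, n_items = 200, 8
theta_true = rng.normal(size=n_persons)              # abilities
a_true = rng.uniform(0.8, 2.0, size=n_items)         # discriminations
b_true = rng.normal(size=n_items)                    # difficulties
responses = rng.binomial(1, expit(a_true * (theta_true[:, None] - b_true)))

def neg_log_lik(params):
    # Unpack person and item parameters from one flat vector.
    theta = params[:n_persons]
    a = params[n_persons:n_persons + n_items]
    b = params[n_persons + n_items:]
    p = np.clip(expit(a * (theta[:, None] - b)), 1e-9, 1 - 1e-9)
    return -np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))

x0 = np.concatenate([np.zeros(n_persons), np.ones(n_items), np.zeros(n_items)])
bounds = ([(-4, 4)] * n_persons          # abilities
          + [(0.2, 3.0)] * n_items       # keep discriminations positive
          + [(-4, 4)] * n_items)         # difficulties
fit = minimize(neg_log_lik, x0, method="L-BFGS-B", bounds=bounds)
b_hat = fit.x[n_persons + n_items:]
print("estimated item difficulties:", np.round(b_hat, 2))
```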

Further Activities of the KoKoHs Program

In addition to its specific research goals, the KoKoHs program pursued three strategic aims that would define its long-term impact.

National and Global Visibility

A major aim of the KoKoHs program was to achieve national and global visibility through the dissemination of our results. The teams of the KoKoHs collaborative projects were highly productive, publishing articles primarily in high-ranking national and international journals. Moreover, approximately 250 presentations were given at national conferences, and almost 100 at high-profile international conferences, such as the annual meetings of the European Association for Research on Learning and Instruction (EARLI) and the American Educational Research Association (AERA).

In addition to the project teams presenting and publishing the results of individual projects, results related to the entire KoKoHs program were documented and published by the coordination project. The coordination project not only contributed numerous presentations and posters to national and international conferences with a focus on scientific topics or higher-education practice, as shown in Table 3, but also published its own KoKoHs Working Papers series (with seven issues altogether, five of which were in English (1)) as well as seven thematic issues in prestigious national and international journals, some of which were coedited by renowned international cooperation partners. KoKoHs is the only national research program from Germany represented, with an overview paper, in an international edited volume covering research initiatives worldwide in the area of learning outcomes assessment in higher education.

Internationalization

To enhance global visibility and international networking, as well as to ensure compatibility with international research and higher-education practices, KoKoHs researchers established and maintained cooperation with international experts in different research areas. International KoKoHs cooperation partners include experts from various universities, research associations, and public and non-profit higher-education and research institutions, including testing institutes. The KoKoHs program has more than 50 international cooperation partners from 20 countries on four continents. During the first funding phase, cooperation between KoKoHs project teams and international partners included joint events, such as the KoKoHs Affiliated Group Meeting at the 2014 AERA conference in Philadelphia (Kuhn, Toepper, & Zlatkin-Troitschanskaia, 2014); joint publications, such as special issues in Studies in Higher Education (Zlatkin-Troitschanskaia & Shavelson, 2015) and the Peabody Journal of Education (Zlatkin-Troitschanskaia, Blömeke, & Pant, 2015); as well as joint supervision of doctoral and post-doctoral projects of KoKoHs researchers.

Supporting Young Researchers

With approximately 70 doctoral projects and almost 20 post-doctoral projects conducted by KoKoHs researchers, a major focus of the program was to systematically support young researchers in building up a scientific community within empirical higher-education research in Germany. Providing the young researchers with the necessary guidance would enable them to close existing gaps in research on competency assessment in higher education. To this end, the KoKoHs coordination project organized a variety of systematic training opportunities and events for all young researchers throughout the course of the program, including methodology workshops, mentoring, and networking events such as international colloquia. Workshops were held at regular intervals over the course of the program on various topics related to research methodology, including a general introduction to methods of social research, item and test development, scaling and test theory, validation, and longitudinal data analysis. Networking and mentoring events, such as the International Colloquium for Young Researchers in November 2013 and the international Autumn Academy in October 2014, were organized for outstanding young researchers whose submissions had been selected by international experts (Toepper, Zlatkin-Troitschanskaia, Kuhn, Schmidt, & Brückner, 2014).

These events presented young researchers with excellent opportunities for networking internationally, presenting their work to the international scientific community, and receiving feedback from renowned international experts. Further opportunities for international networking and exchange open to all researchers in the program included the international kick-off and closing conferences as part of the cooperation between individual collaborative projects and international partners.

Conclusions and Future Perspectives

During the first funding phase, KoKoHs addressed theoretical, methodological, and empirical challenges, including: systematically designing or adapting tests; considering framework conditions such as time, method, and format; analyzing data with complex psychometric methods; confirming psychometric quality criteria; and undertaking comprehensive validation. The models of competency structures and levels, the assessment designs, and the measuring instruments developed and tested so far provide a solid basis for future in-depth longitudinal and multilevel analyses in randomized field-experimental studies in higher education.

To date, few studies in higher education have employed complex methodological designs, such as longitudinal modeling, multilevel modeling, or (quasi-)experimental designs. Hence, findings on the trajectory of competencies over the course of studies are still scarce. With regard to instruments, there remains a lack of innovative measurement methods such as adaptive computer-based testing (sketched below). Many challenges need to be addressed in order to overcome the unsatisfactory state of having to rely on less direct indicators (i.e., grades, degrees, and students' self-evaluations) and to complement these existing indicators with more direct assessments that allow valid conclusions to be drawn about student competencies (Zlatkin-Troitschanskaia, Shavelson, & Kuhn, 2015).
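As an illustration of the adaptive computer-based testing mentioned above, here is a minimal hypothetical sketch of maximum-information item selection under a 2PL model; the item parameters and the simulated examinee are invented, and a real adaptive test would add exposure control and stopping rules.

```python
import numpy as np
from scipy.special import expit
from scipy.optimize import minimize_scalar

a = np.array([1.2, 0.9, 1.6, 1.1, 1.4])    # discriminations (assumed calibrated)
b = np.array([-1.0, -0.3, 0.2, 0.8, 1.5])  # difficulties
rng = np.random.default_rng(1)
true_theta = 0.5                            # simulated examinee ability

administered, responses = [], []
theta = 0.0                                 # provisional ability estimate
for _ in range(3):
    # Fisher information of each item at the current ability estimate;
    # already-administered items are excluded from selection.
    p = expit(a * (theta - b))
    info = a**2 * p * (1 - p)
    info[administered] = -np.inf
    item = int(np.argmax(info))
    administered.append(item)
    responses.append(rng.binomial(1, expit(a[item] * (true_theta - b[item]))))

    # Maximum-likelihood ability update given the responses so far.
    def nll(t):
        p = np.clip(expit(a[administered] * (t - b[administered])), 1e-9, 1 - 1e-9)
        r = np.array(responses)
        return -np.sum(r * np.log(p) + (1 - r) * np.log(1 - p))
    theta = minimize_scalar(nll, bounds=(-4, 4), method="bounded").x

print("items given:", administered, "final theta estimate:", round(theta, 2))
```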

In 2015, the BMBF launched the second phase of the KoKoHs research program, in which the remaining theoretical, methodological, and empirical challenges will be addressed. These challenges include systematically designing or adapting tests under time, method, and format constraints; analyzing data with complex methods; confirming psychometric quality criteria; and undertaking comprehensive validation (AERA et al., 2014). Due to specific challenges in higher education--reliability issues related to complex models constrained by limited testing time, panel mortality in longitudinal studies, and testing based on students' performance--more complex and innovative methods of analysis need to be considered. These methods include longitudinal and multilevel analyses in randomized field-experimental studies, adaptive computer-based testing, and suitable psychometric techniques. KoKoHs program goals for the second funding phase include maintaining and expanding the networks established thus far, while continuing to support and draw on the expertise of the research community solidified in Germany during the first phase. More systematic international collaboration and exchange of best practices from this field (and related areas, such as competency assessment in the school sector) are needed.

References

Alexander, P. A. (1997). Mapping the multidimensional nature of domain learning: The interplay of cognitive, motivational, and strategic forces. In M. L. Maehr & P. R. Pintrich (Eds.), Advances in motivation and achievement (Vol. 10, pp. 213-250). Greenwich, CT: JAI Press.

Alexander, P. A., Winters, F., Dinsmore, D. L., & Parkinson, M. (2011). The role of domain knowledge in self-regulated learning. In B. Zimmerman & D. Schunk (Eds.), Handbook of self-regulation of learning and performance. New York: Routledge.

American Educational Research Association, American Psychological Association, & National Council on Measurement in Education (AERA, APA, & NCME) (2014). Standards for educational and psychological testing. Washington, DC: American Educational Research Association.

Anderson, L. W., & Krathwohl, D. R. (2001). A taxonomy for learning, teaching, and assessing: A revision of Bloom's taxonomy of educational objectives. New York: Longman.

Ewell, P. T. (2005). Can assessment serve accountability? It depends on the question. In J. C. Burke & Associates (Eds.), Achieving accountability in higher education (pp. 1-24). San Francisco, CA: Jossey-Bass.

Hattie, J., Jaeger, R. M., & Bond, L. (1999). Persistent methodological questions in educational testing. Review of Research in Education, 24, 393-446.

Kane, M. T. (2013). Validating the interpretations and uses of test scores. Journal of Educational Measurement, 50(1), 1-73.

Kuhn, C., & Zlatkin-Troitschanskaia, O. (2011). Assessment of competencies among university students and graduates --Analyzing the state of research and perspectives (Business Education Working Paper No. 59). Mainz: Johannes Gutenberg University.

Kuhn, C., Toepper, M., & Zlatkin-Troitschanskaia, O. (2014). Current international state and future perspectives on competence assessment in higher education--Report from the KoKoHs Affiliated Group Meeting at the AERA conference on April 4, 2014 in Philadelphia (USA) (KoKoHs Working Papers No. 6). Berlin & Mainz: Humboldt University & Johannes Gutenberg University.

Kulikowich, J. M., & Alexander, P. A. (1994). Error patterns on cognitive tasks: Applications of polytomous item response theory and log-linear modeling. In C. Reynolds (Ed.), Cognitive assessment: An interdisciplinary dialogue (pp. 137-154). New York: Plenum.

Kulikowich, J. M., & Alexander, P. A. (2003). Cognitive assessment. In L. Nadel (Ed.), The encyclopedia of cognitive science (Vol. 1, pp. 526-532). London: Nature Publishing Group.

Mislevy, R. J. (1994). Test theory reconceived: CSE technical report 376. Los Angeles: National Center for Research on Evaluation, Standards, and Student Testing.

Mislevy, R. J., & Haertel, G. D. (2006). Implications of evidence-centered design for educational testing. Educational Measurement: Issues and Practice, 25(4), 6-20.

Pant, H. A., Rupp, A. A., Tiffin-Richards, S. P., & Köller, O. (2009). Validity issues in standard-setting studies. Studies in Educational Evaluation, 35(2-3), 95-101.

Pellegrino, J. W., Chudowsky, N., & Glaser, R. (Eds.) (2001). Knowing what students know: The science and design of educational assessment. Washington, DC: The National Academies Press.

Rychen, D. S. (2004). Key competencies for all: An overarching conceptual frame of reference. In D. S. Rychen & A. Tiana (Eds.), Developing key competencies in education: Some lessons from international and national experience (pp. 5-34). Paris: UNESCO.

Shavelson, R. J. (2013). An approach to testing & modeling competence. In S. Blömeke, O. Zlatkin-Troitschanskaia, C. Kuhn, & J. Fege (Eds.), Modeling and measuring competencies in higher education: Tasks and challenges (pp. 29-43). Rotterdam: Sense Publishers.

Toepper, M., Zlatkin-Troitschanskaia, O., Kuhn, C., Schmidt, S., & Brückner, S. (2014). Advancement of young researchers in the field of academic competency assessment--Report from the International Colloquium for Young Researchers from November 14-16, 2013 in Mainz (KoKoHs Working Papers No. 5). Berlin & Mainz: Humboldt University & Johannes Gutenberg University.

Webb, N. M., Shavelson, R. J., & Steedle, J. T. (2012). Generalizability theory in assessment contexts. In C. Secolsky & B. D. Denison (Eds.), Handbook on measurement, assessment, and evaluation in higher education (pp. 132-149). New York, London: Routledge.

Weinert, F. E. (2001). Concept of competence: A conceptual clarification. In D. S. Rychen & L. H. Salganik (Eds.), Defining and selecting key competencies (pp. 45-65). Seattle, WA: Hogrefe and Huber.

Zlatkin-Troitschanskaia, O., Blömeke, S., & Pant, H. A. (2015). Competency research in higher education [Special issue]. Peabody Journal of Education, 90(4).

Zlatkin-Troitschanskaia, O., Förster, M., Brückner, S., & Happ, R. (2014). Insights from a German assessment of business and economics competence. In H. Coates (Ed.), Higher education learning outcomes assessment--International perspectives (pp. 175-197). Frankfurt: Peter Lang.

Zlatkin-Troitschanskaia, O., Kuhn, C., & Toepper, M. (2014). Modelling and assessing higher education learning outcomes in Germany. In H. Coates (Ed.), Higher education learning outcomes assessment--International perspectives (pp. 213-235). Frankfurt: Peter Lang.

Zlatkin-Troitschanskaia, O., & Shavelson, R. J. (2015). Competence assessment in higher education [Special issue]. Studies in Higher Education, 40(3).

Zlatkin-Troitschanskaia, O., Shavelson, R. J., & Kuhn, C. (2015). The international state of research on measurement of competency in higher education. Studies in Higher Education, 40(3), 393-411.

(1) KoKoHs Working Papers can be downloaded at http://www.kompetenzen-im-hochschulsektor.de/617_DEU_HTML.php. See the KoKoHs homepage in English for more in-depth information, including details about KoKoHs events.

Prof. Olga Zlatkin-Troitschanskaia, Johannes Gutenberg University

Prof. Hans Anand Pant, Humboldt University of Berlin

Dr. Christiane Kuhn, Johannes Gutenberg University

Miriam Toepper, Johannes Gutenberg University

Corinna Lautenbach, Humboldt University of Berlin

CORRESPONDENCE

Email: lstroitschanskaia@uni-mainz.de

Table 1

Competency Models Developed and Validation

Theoretical competency models            41

Document analyses
  curricula, regulations, standards      910
  exams, exercises, lecture notes        1,350
  project and lab reports                48

Validation
  expert interviews                      556
  cognitive labs                         459

Table 2

Instruments Developed and Competency Assessments Conducted

Instruments
  paper-pencil tests                       63
  computer-based tests                     36
  video-based tests                        8
  other tests (e.g., critical incidents)   119

Assessment surveys
  institutions                             226
  participants                             49,904

Table 3

Project Results Disseminated

Publications
  national                      134
  international                 65
Presentations
  national                      244
  international                 89