DESIGNING FOR QUALITY: An Analysis of Design and Pedagogical Issues in Online Course Development.

This study investigated the process through which 100 online courses were developed in compliance with a purpose-made rubric designed to bring the courses to a level that would meet the requirements of membership in a state authorization reciprocity agreement. The study identified and analyzed common design and pedagogical issues instructors encountered while working with a teaching and learning coordinator who provided training and feedback. Results indicated that writing measurable objectives posed more difficulties than any other aspect of course development. Other issues emanated from content organization, assessment communication, student participation, active learning incorporation, technology integration, and instructor presence. The study presents implications for faculty, instructional designers, and administrators.

INTRODUCTION

Distance education has existed since as early as the 1700s and continues to grow with the emergence of new technologies. One notable development in that era was the training of prospective clergymen through correspondence (Adams & Olszewski-Kubilius, 2007; Kentnor, 2015). A report prepared for the Council for Higher Education Accreditation by the Institute for Higher Education Policy stated that distance education is growing rapidly, not only as a supplement to traditional institutions and programs, but also as a replacement for them (Council for Higher Education Accreditation, 1998). Moore and Kearsley (2012) outlined five generations of distance education: correspondence, broadcast radio and television, open universities, teleconferencing, and Internet/web. While the last of these, popularly known as online learning or e-learning, may still be regarded as a new phenomenon, it grew radically in the 1990s (Wallace, 2010), and Harasim (2000) observed that the first wholly online course was offered as early as 1981. Although its mode of delivery differs somewhat from earlier forms such as correspondence, the principle is the same: allowing students to access education at a distance. The Institute for Higher Education Policy (2000) prepared a report on assessing the quality of online programs, recommending that institutions evaluate their programs in several areas: institutional support, course development, teaching and learning, course structure, student support, faculty support, and evaluation and assessment. Against this background, there has certainly been time enough to research what makes for quality in online programs and, more specifically, online courses; the notion of quality is not a strange phenomenon to institutions that offer online programs.

QUALITY IN ONLINE LEARNING

Until recently, quality in higher education was measured only by a course's content, pedagogy, and learning outcomes (Bremer, 2012). This approach has since changed to a more process-oriented system in which a combination of activities contributing to the education experience is taken into consideration. These activities include student needs, use of data and information for decision making, department contributions, and improved learning outcomes (Thair, Garnett, & King, 2006). As Mišút and Pribilová (2015) contended, this means that not only the output, but also the quality of the entire process, is subject to evaluation. Because the quality principles of successful technology-supported learning are the same as those in the traditional classroom, this all-inclusive approach to evaluating education experiences is regularly applied to the development and assessment of online learning. Pappas-Rogich and Gehrling (2013) contended that the best way to assure quality in online learning is to increase rigor. According to Boyer (1990, p. 23), "rigor supports the idea that pedagogical procedures must be carefully planned, continuously examined, and related directly to the subject taught." Daggett (2005, p. 2) extended this view by arguing that "rigor and relevance help students engage in the learning process and excel in what they do." Pappas-Rogich and Gehrling (2013) accordingly identified four ways of increasing rigor in online learning: making grading criteria explicit by using the same discussion rubrics for all online courses; providing high-quality readings, texts, and learning activities; supporting critical thinking; and enhancing social interaction among faculty and students within the e-learning environment.

In a study on quality in online learning, Kassop (2013) found several ways in which online courses may exceed traditional classes in quality and rigor: student-centered learning, highly interactive discussions, enriched course materials, on-demand interaction and support services, prompt instructor feedback, time-management flexibility and, surprisingly, an intimate community of learners. In a study on online learning in a nursing program, Wilkinson, Forbes, Bloomfield, and Fincham Gee (2004) found that advanced practice nurses enjoyed online programs' flexibility, appreciated having control over where and when they learned, and liked the quality of learning materials. The students in online courses noted that they were able to proceed at their own pace, download teaching materials, and go through learning modules as many times as needed. Similarly, Bonnel, Wambach, and Connors (2005) found that nurse educators who teach online courses were positively affected by e-learning: they felt empowered, confident in using technology, and free to engage in teaching with it.

STATE AUTHORIZATION RECIPROCITY AGREEMENT

The number of universities offering fully online degree programs continues to increase. Numerous reports, surveys, and studies show that the e-learning industry is gaining speed, with increasing numbers of individuals, corporations, and institutions turning to e-learning as they recognize its effectiveness and convenience. The global e-learning market reportedly reached $107 billion in 2015. More than 6.7 million students (32% of total higher education enrollment) took at least one online course through a university during the fall of 2011 (Cain, 2013; Sheehy, 2013). The State Authorization Reciprocity Agreement (SARA) is an agreement among member states, districts, and territories in the United States that establishes comparable national standards for the interstate offering of postsecondary distance education courses and programs. It is intended to make it easier for students to take online courses offered by postsecondary institutions based in another state. The members of SARA are states, not institutions or students; therefore, a state "joins" or becomes a "member" of SARA, while a college or university "operates under" or "participates in" SARA. States join SARA through their respective regional compacts. One of the solutions SARA provides is making state authorization more effective and uniform across states in dealing with the quality and integrity issues that have arisen in some online/distance education offerings (National Council for State Authorization Reciprocity Agreements, 2016).

A state university in the southern part of the United States had seen a large increase in the total number of online courses from the fall semester of 2007 to the fall of 2015, growing from as few as six online courses in 2007 to as many as 89 in 2015. The total number of distance education courses also increased, reaching 149 in 2015. In order to meet the requirements of membership in SARA, the university instituted training for online instructors and developers and a rubric for demonstrating compliance of online courses. To be certified to teach online, faculty members were required to complete two online workshops. The first prepared faculty members to become effective online instructors by covering best practices and university policies for teaching in online and blended environments within the Canvas learning management system. The second introduced faculty members to a carefully developed rubric that would be used by the Center for Excellence in Teaching and Learning (CETL) to evaluate their compliance with SARA requirements; this workshop walked through the new rubric section by section to familiarize faculty members with it. Upon completion of the two trainings, faculty members were authorized to develop online courses, submit them to CETL for evaluation and feedback, revise the content accordingly, and, once the courses were approved, teach them online. It is from this online course development process that the present study was built.

PURPOSE OF STUDY

The purpose of this study was to analyze design and pedagogical issues that online instructors encountered while developing courses in their areas of expertise in collaboration with an instructional designer who provided training, evaluated the courses, and gave feedback to the course developers. While giving feedback to instructors, the instructional designer employed the strategy of doing it together in instructional design (Dick, Carey, & Carey, 2009). Rather than simply evaluating a course and providing feedback, the evaluator would go through the course together with the developer and identify all areas that were well done and those that needed revision. The evaluator would then suggest ways of addressing the flaws and discuss them with the developer before implementing any changes. Specifically, the study sought to answer the following questions:

* What design and pedagogical issues did faculty members encounter while developing online courses in their area of expertise?

* What implications did these issues have on preparing future training for online course development?

RESEARCH DESIGN AND METHODOLOGY

Context and Participants

In order to meet the requirements of membership in SARA, a state university in the southern part of the United States instituted training for online instructors and a rubric for demonstrating compliance of online courses. First, all faculty members were required to complete a one-hour online course development workshop in order to be certified to teach in online and blended environments. Second, course developers creating an online or blended course were required to complete the Online/Blended Rubric Training Course in Canvas. This course was based on a rubric carefully created by the institution for evaluating online and blended courses. Once instructors successfully completed the Online/Blended Rubric Training Course, responsibility rested with them to design and develop their online/blended courses to meet the requirements of the rubric and to use the rubric to self-evaluate each course before turning it in to a teaching and learning coordinator (TLC; an instructional design specialist) from CETL, who would evaluate it and provide feedback for revisions. Once a course was approved by CETL, the instructor would be able to teach it online.

While developing a course, faculty members were free to work with the TLC on a one-on-one basis for guidance in designing and developing the course. This proved to be a helpful practice because the TLC employed the "doing it together" strategy (Dick et al., 2009), which enabled faculty members to familiarize themselves with best online course design principles while building their own courses. Once course development was completed, the relevant departmental chairperson would review the objectives to confirm that they matched those of the face-to-face version of the course. The course developer would then use the rubric to self-evaluate the course before submitting it to CETL for formal evaluation. Between 2013 and 2015, a total of 140 courses were approved for online teaching. By the summer semester of 2016, 140 full-time and adjunct faculty members were teaching online. The present study analyzed 100 online courses evaluated between 2014 and 2015.

Researcher Stance

The TLC, a faculty member who performs instructional design duties, was the researcher in this study and therefore had an insider's perspective. As an instructional design specialist, the researcher was responsible for evaluating all online courses using a standardized rubric, having been involved in the development of the rubric itself. A course developer would first evaluate a course on his or her own using the set rubric. The developer would then turn the course in to the departmental chairperson by adding the chairperson to the course in the Canvas learning management system and sending an electronic copy of the rubric used to self-evaluate. The departmental chairperson would review the course objectives to validate that they matched those of the corresponding face-to-face course and then forward the course to the Center for Excellence in Teaching and Learning for formal evaluation. The TLC would evaluate the course and indicate where it fell short according to the rubric. Once feedback was compiled, a one-on-one meeting with the course developer would be set where feedback would be discussed and solutions identified. During these meetings, minor corrections would be made on the spot while working together; changes needing more time would be discussed and later made by the course developer. The course developer could also raise any questions at that point, and the course evaluator would address them. While evaluating each course, the TLC, as researcher, systematically recorded all identified issues in a Google document. The researcher subsequently analyzed these data, from which the present study was built.

Data Collection

Qualitative content analysis was used to collect data for the present study. Content analysis, a method that originated in the communication sciences, is an empirical method that can be used to examine text and images in order to identify messages and meanings (Hartley & Morphew, 2008; Krippendorff, 2013). A more contemporary definition describes content analysis as "a research method for the subjective interpretation of the content of text data through the systematic classification process of coding and identifying themes or patterns" (Hsieh & Shannon, 2005, p. 1278). As Krippendorff (2013) observed, content analysis is often used for the exploration of trends, patterns, and differences among similar components. Fraenkel and Wallen (2015) contended that one of the major reasons for using content analysis is to formulate themes out of large amounts of descriptive information and to obtain information useful in dealing with educational problems. In the present study, the content of the 100 online courses under review was analyzed for issues online course developers encountered in the process. These issues or problematic areas were based on various aspects of the rubric for evaluating online courses; design and pedagogical issues could arise, for example, in writing objectives, modularization, active learning organization, assessment communication, student and instructor participation, course technology, and other areas. Content analysis identified common issues that faculty members encountered while developing these aspects of an online course in the Canvas learning management system. A Google document was created in which all identified issues course developers encountered were systematically recorded.

Data Analysis

Data comprising issues course developers encountered while developing online courses were analyzed based on eight main categories: writing measurable objectives, content organization, modularization, active learning organization, student participation expectations, instructor participation expectations, course technology, and assessment communication. The process of identifying and recording issues while evaluating 100 online courses culminated in an enormous amount of data. To make sense of it, the researcher employed whole text analysis, a technique developed by Glaser and Strauss (2012) and Strauss and Corbin (2015) that requires an investigator to properly understand the purpose of the study in order to identify specific codes. From the theme "design and pedagogical issues in online course development," the eight categories were derived; the issues identified across the courses under evaluation are summarized in Tables 1 through 7.

DISCUSSION

Research Question 1: What design and pedagogical issues did faculty members encounter while developing online courses in their area of expertise?

Writing Measurable Objectives

Data collected from content analysis of the 100 online courses indicated that writing measurable objectives (Mager, 1997) posed more challenges than any other aspect of course development. For instance, while the rubric for evaluating online courses required that action verbs be employed in writing course and module objectives, most course developers had difficulty selecting appropriate action verbs that would make objectives measurable. While working with faculty members through these issues, the TLC utilized the list of action verbs from Bloom's revised taxonomy (Cruz, 2003), which proved helpful in assisting course developers to select appropriate action verbs for writing measurable objectives. In addition, many objectives were written from the teacher's perspective rather than the learner's perspective. Some objectives were also written within other objectives, and the TLC had to work with course developers to make sure that objectives were separated and stated singly. Lastly, when it came to writing module objectives, notable issues included omitting module objectives altogether, duplicating course objectives in modules rather than making them different but related, and not mapping module objectives back to relevant course objectives as the rubric required. It must be pointed out that while the TLC worked with faculty members through these issues, the strategy of doing it together in instructional design was employed (Dick et al., 2009), so issues were essentially solved collectively by course developers and the TLC.

Modularization

Issues also arose in developing course modules within the Canvas learning management system. Most notably, some courses did not make use of modules at all: rather than a series of modules accommodating content for each week, a huge amount of content would be placed within the syllabus, overloading it. In certain instances, a single module would be created with the whole semester's content placed in it, rather than several modules separating content by week or unit. In other instances, modules contained only tasks such as graded events and no content for learners. Some modules also lacked variety in presentation media: while the rubric encouraged course developers to vary content presentation across different media, from text to pictures and videos, it was apparently easier for course developers to present content as text. The rubric further required course developers to ensure that all items (assignments, readings, multimedia, and discussion boards) needed for students to complete each module were included within that module. These items would then be hidden on the left navigation bar so that students would not rush into attempting graded events before going through the necessary content. However, it was not uncommon for course developers to make assignments available to students outside the modules, which encouraged students to attempt graded events without properly going through the modules as required in the course. The TLC would address this with course developers, demonstrating the importance of making sure students go through content before attempting any graded event for points.

Active Learning Organization

Central to online learning contexts is the understanding that learners must be active in their online learning experience (Marks, 2016). It is therefore critical in e-learning environments to employ strategies that rigorously engage learners as they interact with content. In the present study, a common issue regarding active learning organization was that no active learning activities were included in a course at all. To address this, the TLC worked with the course developer to add at least discussion topics to the course, along with any other applicable active learning strategies. In certain courses, one or more active learning activities would be included, but only as ungraded activities. While working with the course developer, the TLC explained that students are normally unlikely to invest time and effort in an activity that would not yield them any points; in essence, it was not just a matter of incorporating active learning activities in a course but also of making sure they were graded. In organizing active learning activities, the rubric for evaluating online courses required that links to each active learning activity be created in the relevant modules so that students would access them after going through a module. This way, learners would go through the content first before partaking in any graded activity such as a discussion topic. Issues arose, however, when links to active learning activities were not created in a module; instead, students would access these activities directly from the list of graded events within the learning management system interface. While working with course developers, the TLC clarified that allowing students to access these activities directly would encourage them to attempt them for points without going through the relevant module content.

Assessment Communication

In their examinations of 73 and 60 courses, respectively, Swan (2001) and Arend (2007) both identified online discussions as the most popular of several assessment methods in online learning environments. In the 100 courses analyzed in this study, numerous strategies were used to assess students' work: essay-type written assignments, quizzes and exams, discussions, problem-based assignments, and group work with a presentation delivery format adapted for the online environment. The present study, however, focused on the way the content of assessments was communicated to learners. One issue that arose with assessment communication was that courses presented graded events without including their point value and grade breakdown, information that is helpful for online students wherever they are based. Contrary to the requirements of the rubric, some graded events also did not indicate which course objectives they addressed. Another issue identified in this study was that assignment instructions were not sufficiently clear and detailed. While working with course developers, the TLC explained how important it is to make assignment instructions as clear and detailed as possible, including how and when assignments should be submitted, considering that, unlike the traditional classroom student, the online learner does not have the same opportunity to ask for clarification at any time in class. Finally, some assignments were presented without a grading rubric, a very important component of an e-learning assignment: when presented in detail rather than generically, the rubric guides online learners as they tackle assignments on their own and remotely.

Instructor Participation Expectations

While it is easy to assume that the success of an online course emanates mostly from content, course design, and technology, studies on quality in online learning indicate that teacher presence is even more important (Marks, 2016; Walker, 2008; Welch, Napoleon, Hill, & Roumell, 2014). The rubric for evaluating online courses required that instructor participation expectations be clearly spelled out to learners. Analysis of the 100 courses brought out issues such as courses having no explanation of how often an instructor would be online to participate in class. There were also cases where the syllabus gave no description of how quickly an instructor would respond to messages from students. Another outstanding issue emanated from a lack of proper description in the syllabus of how quickly graded materials would be returned to students. Again, while the rubric required that all course announcements be preset for the semester and delayed by date so that they would appear automatically at particular points over the course of the semester, issues arose where not a single announcement was set prior to the beginning of the course.

Student Participation Expectations

One outstanding issue with student participation was a scenario in which the participation component of the course was not graded. While discussing the issue with course developers, the TLC pointed out that students would normally not invest their time in an activity that would not earn them points; in order to ensure that students actively participated in these activities, the activities had to carry some points. Some courses did not include any discussion topics at all, so the importance of including online discussions had to be clearly explained to course developers. Other issues arose where discussion board instructions were not included; instructions would normally help students contribute properly to discussion topics. Again, in certain cases, an "introductions" discussion, in which the instructor and students would introduce themselves in the first week of class, was not included. The TLC demonstrated to course developers the importance of breaking the ice in a course, which would lead to higher levels of participation throughout the semester. Also, contrary to what the rubric required, some syllabi did not include etiquette expectations. Instructor-set etiquette expectations ensure courtesy and integrity among learners, especially in discussions.

Course Technology

Studies of technology integration in teaching and learning have demonstrated that providing technologically superior tools does not guarantee their use or assure their integration in teaching and learning (Nichols, 2008; Sahin & Thompson, 2007). In online learning environments, it is therefore critical to employ technology not just for its own sake, but in relevant contexts and in such a way that it advances a particular pedagogy and results in authentic learning. Outstanding issues with course technology integration in the present study included the absence of an initial assessment of students' technical skills before they could take a course. Some courses also did not employ a single application proven to advance student engagement. Links intended to help learners explore Canvas were found to be broken in several instances, and links to course videos in modules would not work. While addressing these issues, the TLC would reiterate the importance of using the link validation facility in Canvas before publishing a course. While the plagiarism checker Turnitin was well utilized in courses, a common issue was that no clear instructions were made available to students, resulting in students struggling to have their submissions go through the application. A similar lack of clear instructions applied to the integration of engagement applications such as VoiceThread. Likewise, while Crocodoc, an application that allows instructors to annotate student submissions, worked well for giving instant feedback, students had trouble accessing such comments due to a lack of proper instructions.

Research Question 2: What implications did these issues have on preparing future training for online course development?

Evaluation and approval of the 100 courses led to several new lessons. First, it became clear that more faculty development workshops had to be created to better prepare faculty members to design, develop, and teach online courses. Apart from having faculty members complete an online teaching certification workshop and the Online/Blended Rubric Training Course, workshops were to be introduced in areas such as principles of instructional design, writing measurable objectives, active learning strategies, best practices in e-assessment, classroom management, developing rubrics, and technology integration in e-learning environments. It must be pointed out that the list of new workshops was arrived at using a bottom-up approach: faculty members were asked to suggest what workshops would benefit them, and a list was compiled from their feedback. Second, a new course shell, Faculty Commons, was created in which all faculty members were enrolled. Various resources to help them with online learning projects would be posted in this shell so that, apart from regular faculty workshops, these resources would be available to faculty members at all times. Finally, one of the outstanding lessons learned through the process was that while the rubric for evaluating online courses was detailed and elaborate, it was too long. To that end, a group of faculty representatives and the Center for Excellence in Teaching and Learning drafted a shorter rubric with close reference to the Quality Matters Rubric. The new, shorter rubric would be used to evaluate all online courses from then on.

CONCLUSION

As online learning continues to proliferate, assuring the quality of learning in e-environments becomes a pressing challenge. The present study clearly shows that maintaining quality in an online course requires meticulousness in developing every aspect of the course. For example, while it may be easy to spend a great deal of time incorporating a wide variety of state-of-the-art technologies in an online course, doing so does not necessarily translate into robust learning. Rather, it is critical to carefully select pedagogies that work well in applicable contexts and then to employ relevant technologies to advance those pedagogies. The study analyzed specific aspects of the course development process and identified areas that needed revision for improvement. The study also clearly demonstrated that the strategy of doing it together in instructional design (Dick et al., 2009) yields positive results in the course development process. While giving feedback and direction to online course developers, the TLC did not simply make or suggest corrections on various aspects of a course; rather, the process involved working together to identify aspects that required improvement and then working through them together to rectify flaws. In the end, both parties benefited considerably: course developers improved the quality of their courses, and the Center for Excellence in Teaching and Learning improved its capacity to support quality online course development by providing more resources and by improving the quality of the instrument for evaluating online courses itself.

REFERENCES

Adams, C. M., & Olszewski-Kubilius, P. (2007). Distance learning and gifted students. In J. VanTassel-Baska (Ed.), Serving gifted learners beyond the traditional classroom: A guide to alternative programs and services (pp. 169-188). Waco, TX: Prufrock.

Arend, B. (2007). Course assessment practices and student learning strategies in online courses. Journal of Asynchronous Learning Networks, 11(4), 3-13.

Bonnel, W., Wambach K., & Connors, H. (2005). A nurse educator teaching with technologies course: More than technology on the web. Journal of Professional Nursing, 21(1), 59-65.

Boyer, E. L. (1990). Scholarship reconsidered: Priorities of the professoriate. New York, NY: Carnegie Foundation for the Advancement of Teaching.

Bremer, C. (2012). Enhancing e-learning quality through the application of the AKUE procedure model. Journal of Computer Assisted Learning, 28(1), 15-26.

Cain, G. (2013). E-learning: Six reasons students take online classes. Retrieved from http://www2.humboldt.edu/ceee/sixreasons.html

Council for Higher Education Accreditation. (1998). Assuring quality in distance learning. Washington, DC: Institute for Higher Education Policy.

Cruz, E. (2003). Bloom's revised taxonomy. In B. Hoffman (Ed.), Encyclopedia of educational technology. Retrieved from http://coe.sdsu.edu/eet/Articles/bloomrev/start.htm

Daggett, W. (2005). Achieving academic excellence through rigor and relevance. Retrieved from www.leadered.com/pdf/academic_excellence.pdf

Dick, W., Carey, L., & Carey, J. (2009). The systematic design of instruction (7th ed.). Boston, MA: Pearson.

Fraenkel, J. R., & Wallen, N. E. (2015). How to design and evaluate research in education (9th ed.). New York, NY: McGraw-Hill Education.

Glaser, B. G., & Strauss, A. L. (2012). The discovery of grounded theory: Strategies for qualitative research. Chicago, IL: Aldine.

Harasim, L. M. (2000). Shift happens: Online education as a new paradigm in learning. The Internet and Higher Education, 3, 41-61.

Hartley, M., & Morphew, C. (2008). What's being sold and to what end? A content analysis of college viewbooks. Journal of Higher Education, 79(6), 671-691.

Hsieh, H. F., & Shannon, S. E. (2005). Three approaches to qualitative content analysis. Qualitative Health Research, 15(9), 1277-1288.

Institute for Higher Education Policy. (2000). Quality on the line: Benchmarks for success in Internet-based distance education. Retrieved from http://www.americanbar.org/content/dam/aba/migrated/legaled/distanceeducation/QualityOnTheLine.authcheckdam.pdf

Kassop, M. (2013). Ten ways online education matches, or surpasses, face-to-face learning. Retrieved from http://ts.mivu.org/default.asp?show=article&id=1034

Kentnor, H. (2015). Distance education and the evolution of online learning in the United States. Curriculum and Teaching Dialogue, 17(2), 3-23.

Krippendorff, K. (2013). Content analysis: An introduction to its methodology (3rd ed.). Thousand Oaks, CA: SAGE.

Mager, R. F. (1997). Preparing instructional objectives: A critical tool in the development of effective instruction (3rd ed.). Atlanta, GA: The Center for Effective Performance.

Marks, D. B. (2016). Theory to practice: Quality instruction in online learning environments. National Teacher Education Journal, 9(2), 75-80.

Mišút, M., & Pribilová, K. (2015). Measuring of quality in the context of e-learning. Procedia - Social and Behavioral Sciences, 177, 312-319.

Moore, M., & Kearsley, G. (2012). Distance education: A systems view of online learning (3rd ed.). Belmont, CA: Wadsworth Cengage.

National Council for State Authorization Reciprocity Agreements. (2016). What does my state need to do to join SARA? Retrieved from http://nc-sara.org/

Nichols, M. (2008). Institutional perspectives: The challenges of e-learning diffusion. British Journal of Educational Technology, 39(4), 598-609.

Pappas-Rogich, M., & Gehrling, K. R. (2013). Assessing and maintaining quality and rigor in an online DNP program. Nurse Educator, 38(6), 256-260.

Sahin, I., & Thompson, A. (2007). Analysis of predictive factors that influence faculty members' technology adoption level. Journal of Technology and Teacher Education, 15(2), 167-190.

Sheehy, K. (2013, January 8). Online course enrollment climbs for 10th straight year. U.S. News. Retrieved from http://www.usnews.com/education/online-education/articles/2013/01/08/online-course-enrollment-climbs-for-10th-straight-year

Strauss, A., & Corbin, J. (2015). Basics of qualitative research: Techniques and procedures for developing grounded theory (2nd ed.). London, England: SAGE.

Swan, K. (2001). Virtual interaction: Design factors affecting student satisfaction and perceived learning in asynchronous online courses. Distance Education, 22(2), 306-331. doi:10.1080/0158791010220208

Thair, A., Garnett, P., & King, S. (2006). Quality assurance and change in higher education. In L. Hunt, A. Bromage, & B. Tomkinson (Eds.), The realities of change in higher education: Interventions to promote learning and teaching (pp. 52-63). London, England: Routledge.

Wallace, R. M. (2010). Online learning in higher education: A review of research on interactions among teachers and students. Education, Communication & Information, 3(2), 241-280. doi:10.1080/14636310303143

Walker, R. (2008). Twelve characteristics of an effective teacher: A longitudinal, qualitative, research study of in-service and pre-service teachers' opinions. Educational Horizons, 87(1), 61-68.

Welch, A., Napoleon, L., Hill, B., & Roumell, E. (2014). Virtual Teaching Dispositions Scale (VTDS): A multi-dimensional instrument to assess teaching dispositions in virtual classrooms. Journal of Online Learning & Teaching, 10(3), 446-467.

Wilkinson, A., Forbes, A., Bloomfield, J., & Fincham Gee, C. (2004). An exploration of four Web-based open and flexible learning modules in post-registration nurse education. International Journal of Nursing Studies, 41(4), 411-424.

Mapopa William Sanga

Southwestern Oklahoma State University

* Mapopa William Sanga, Assistant Professor and Teaching and Learning Coordinator, Southwestern Oklahoma State University. Telephone: (580) 774-7128. E-mail: mapopa.sanga@swosu.edu
TABLE 1
Writing Measurable Objectives

Course Includes:
* Nonaction verbs employed
* Objectives teacher-centered
* Objectives written within other objectives
* No module-specific objectives
* Course objectives duplicated as module objectives
* Module objectives not mapped back to course objectives

Rubric Requires:
* Include objectives for each module or unit
* Make objectives learner-centered and use action verbs
* Reference back module/unit objectives to relevant course objectives

TABLE 2
Modularization

Course Includes:
* No module pages included, huge amount of content placed in syllabus
* Course modules set on one page
* No content for learners in modules, just graded events
* Lack of variety in module content organization
* Module pages do not include links to relevant graded events

Rubric Requires:
* Course needs to contain modules and module pages for content presentation
* Include all course content by module
* Vary content presentation style in modules
* Provide links to relevant graded events in module pages

TABLE 3
Active Learning Organization

Course Includes:
* No active learning activities included
* One active learning activity included but not graded
* No links to active learning activities provided in relevant modules

Rubric Requires:
* Clearly articulated learning activities provide opportunities for interaction that support active learning

TABLE 4
Assessment Communication

Course Includes:
* Assignment instructions not clear, more detail needed
* Assignments have no grading rubrics
* Grading rubrics too generic
* Graded events do not include value and breakdown of grades
* Graded events do not include objectives they address

Rubric Requires:
* Each graded activity has a corresponding set of instructions that explains the activity, how and when it should be submitted, and assessment criteria

TABLE 5
Instructor Participation Expectations

Course Includes:
* No explanation in syllabus as to how often instructor will be online to participate with class
* No description in syllabus as to how quickly instructor will respond to messages
* Syllabus does not describe how quickly graded materials will be returned to students
* Course announcements not preset

Rubric Requires:
* A clear explanation of instructor's role in the student's experience in class

TABLE 6
Student Participation Expectations

Course Includes:
* Participation component not graded
* Discussion board instructions not provided
* Course contains no single discussion topic
* Students not given an opportunity to introduce themselves
* Syllabus does not include etiquette expectations

Rubric Requires:
* Student participation and active learning expectations are clearly described

TABLE 7
Course Technology

Course Includes:
* No preassessment of students' technical skills for taking an online course
* No applications that work in Canvas employed to advance student engagement
* Links for exploring Canvas LMS broken
* Links to course videos in modules broken
* No clear instructions for using Turnitin application made available to students
* VoiceThread application well utilized but no clear instructions made available to students
* Students not given instructions on printing Crocodoc feedback comments

Rubric Requires:
* Course tools support the course/unit learning objectives, student engagement, and active learning