2009)
Berge and Mrozowski (2001) reviewed 890 research articles and dissertation abstracts on distance education from 1990 to 1999. The four journals chosen by the authors to represent distance education were the American Journal of Distance Education, Distance Education, Open Learning, and the Journal of Distance Education. This review overlapped in dates with the Tallent-Runnels et al. (2006) study. Berge and Mrozowski (2001) categorized the articles according to Sherry's (1996) ten themes of research issues in distance education: redefining the roles of instructors and students, technologies used, issues of design, strategies to stimulate learning, learner characteristics and support, issues related to operations, policies, and administration, access and equity, and costs and benefits.
In the Berge and Mrozowski (2001) study, more than 100 studies focused on each of three themes: (1) design issues, (2) learner characteristics, and (3) strategies to increase interactivity and active learning. By design issues, the authors meant instructional systems design, covering topics such as content requirements, technical constraints, interactivity, and feedback. The next theme, strategies to increase interactivity and active learning, was closely related to design issues and focused on students' modes of learning. Learner characteristics focused on accommodating various learning styles through customized instructional theory. Fewer than 50 studies focused on each of the three least examined themes: (1) cost-benefit tradeoffs, (2) equity and accessibility, and (3) learner support. Cost-benefit tradeoffs focused on the implementation costs of distance education based on school characteristics. Equity and accessibility focused on the equity of access to distance education systems. Learner support included topics such as teacher-to-teacher support as well as teacher-to-student support.
Tallent-Runnels et al. (2006) reviewed research on online instruction from 1993 to 2004. They reviewed 76 articles focused on online learning by searching five databases: ERIC, PsycINFO, ContentFirst, Education Abstracts, and WilsonSelect. Tallent-Runnels et al. (2006) categorized the research into four themes: (1) course environment, (2) learners' outcomes, (3) learners' characteristics, and (4) institutional and administrative factors. The first theme, course environment (n = 41, 53.9%), is an overarching theme that includes classroom culture, structural assistance, success factors, online interaction, and evaluation.
For their second theme, Tallent-Runnels et al. (2006) found that studies focused on questions involving the process of teaching and learning and on methods to explore cognitive and affective learner outcomes (n = 29, 38.2%). The authors stated that they found the research designs flawed and lacking in rigor. However, the literature comparing traditional and online classrooms found both delivery systems to be adequate. Another research theme focused on learners' characteristics (n = 12, 15.8%) and the synergy of learners, the design of the online course, and the system of delivery. Research findings revealed that online learners were mainly non-traditional and Caucasian, had different learning styles, and were highly motivated to learn. The final theme they reported was institutional and administrative factors (n = 13, 17.1%) in online learning. Their findings revealed a lack of scholarly research in this area; most institutions did not have formal policies in place for course development or for faculty and student support in training and evaluation. Their research confirmed that when universities offered online courses, student enrollment numbers improved.
Zawacki-Richter et al. (2009) reviewed 695 articles on distance education from 2000 to 2008, using the Delphi method to reach consensus on research areas, and classified the literature from five prominent journals. The five journals, selected for their wide scope of research in distance education, were Open Learning, Distance Education, American Journal of Distance Education, the Journal of Distance Education, and the International Review of Research in Open and Distributed Learning. In this review, the reviewers examined the main foci of research and identified gaps in distance education research.
Zawacki-Richter et al. (2009) classified the studies into macro, meso and micro levels focusing on 15 areas of research. The five areas of the macro-level addressed: (1) access, equity and ethics to deliver distance education for developing nations and the role of various technologies to narrow the digital divide, (2) teaching and learning drivers, markets, and professional development in the global context, (3) distance delivery systems and institutional partnerships and programs and impact of hybrid modes of delivery, (4) theoretical frameworks and models for instruction, knowledge building, and learner interactions in distance education practice, and (5) the types of preferred research methodologies. The meso-level focused on seven areas that involve: (1) management and organization for sustaining distance education programs, (2) examining financial aspects of developing and implementing online programs, (3) the challenges and benefits of new technologies for teaching and learning, (4) incentives to innovate, (5) professional development and support for faculty, (6) learner support services, and (7) issues involving quality standards and the impact on student enrollment and retention. The micro-level focused on three areas: (1) instructional design and pedagogical approaches, (2) culturally appropriate materials, interaction, communication, and collaboration among a community of learners, and (3) focus on characteristics of adult learners, socio-economic backgrounds, learning preferences, and dispositions.
The top three research themes in the Zawacki-Richter et al. (2009) review were interaction and communities of learning (n = 122, 17.6%), instructional design (n = 121, 17.4%), and learner characteristics (n = 113, 16.3%). The fewest studies (less than 3% each) examined the following research themes: management and organization (n = 18), research methods in distance education and knowledge transfer (n = 13), globalization of education and cross-cultural aspects (n = 13), innovation and change (n = 13), and costs and benefits (n = 12).
These three systematic reviews provide a broad understanding of distance education and online learning research themes from 1990 to 2008. However, the number of research studies on online learning has increased in this decade, and there is a need to identify the research themes examined recently. Based on the previous systematic reviews ( Berge & Mrozowski, 2001 ; Hung, 2012 ; Tallent-Runnels et al., 2006 ; Zawacki-Richter et al., 2009 ), online learning research in this study is grouped into twelve research themes: Learner Characteristics; Instructor Characteristics; Course or Program Design and Development; Course Facilitation; Engagement; Course Assessment; Evaluation and Quality Assurance; Course Technologies; Access, Culture, Equity, Inclusion, and Ethics; Leadership, Policy and Management; Instructor and Learner Support; and Learner Outcomes. Table 2 below describes each of the research themes; using these themes, a framework is derived in Fig. 1 .
Research themes in online learning.
Research Theme | Description | |
---|---|---|
1 | Learner Characteristics | Focuses on understanding learner characteristics and how online learning can be designed and delivered to meet learners' needs. Online learner characteristics can be broadly categorized into demographic, academic, cognitive, affective, self-regulation, and motivational characteristics. |
2 | Learner Outcomes | Learner outcomes are statements that specify what the learner will achieve at the end of the course or program. Examining learner outcomes such as success, retention, and dropout is critical in online courses. |
3 | Engagement | Engaging the learner in the online course is vitally important as they are separated from the instructor and peers in the online setting. Engagement is examined through the lens of interaction, participation, community, collaboration, communication, involvement and presence. |
4 | Course or Program Design and Development | Course design and development is critical in online learning as it engages and assists the students in achieving the learner outcomes. Several models and processes are used to develop the online course, employing different design elements to meet student needs. |
5 | Course Facilitation | The delivery or facilitation of the course is as important as course design. Facilitation strategies used in delivery of the course such as in communication and modeling practices are examined in course facilitation. |
6 | Course Assessment | Course Assessments are adapted and delivered in an online setting. Formative assessments, peer assessments, differentiated assessments, learner choice in assessments, feedback system, online proctoring, plagiarism in online learning, and alternate assessments such as eportfolios are examined. |
7 | Evaluation and Quality Assurance | Evaluation is making a judgment on a process, product, or program, either during or at its end. There is a need for research on evaluation and quality in online courses. This has been examined through course evaluations, surveys, analytics, social networks, and pedagogical assessments. Quality assessment rubrics such as Quality Matters have also been researched. |
8 | Course Technologies | A number of online course technologies such as learning management systems, online textbooks, online audio and video tools, collaborative tools, social networks to build online community have been the focus of research. |
9 | Instructor Characteristics | With the increase in online courses, there has also been an increase in the number of instructors teaching online courses. Instructor characteristics can be examined through their experience, satisfaction, and roles in online teaching. |
10 | Institutional Support | The support for online learning is examined both as learner support and instructor support. Online students need support to be successful online learners and this could include social, academic, and cognitive forms of support. Online instructors need support in terms of pedagogy and technology to be successful online instructors. |
11 | Access, Culture, Equity, Inclusion, and Ethics | Cross-cultural online learning is gaining importance along with access in global settings. In addition, providing inclusive opportunities for all learners and in ethical ways is being examined. |
12 | Leadership, Policy and Management | Leadership support is essential for the success of online learning. Leaders' perspectives, challenges, and strategies are examined. Research related to policies and governance is also being studied. |
Online learning research themes framework.
The collection of research themes is presented as a framework in Fig. 1 . The themes are organized by domain or level to underscore the nested relationship that exists. As evidenced by the assortment of themes, research can focus on any domain of delivery or associated context. The “Learner” domain captures characteristics and outcomes related to learners and their interaction within the courses. The “Course and Instructor” domain captures elements about the broader design of the course and facilitation by the instructor, and the “Organizational” domain acknowledges the contextual influences on the course. It is important to note as well that due to the nesting, research themes can cross domains. For example, the broader cultural context may be studied as it pertains to course design and development, and institutional support can include both learner support and instructor support. Likewise, engagement research can involve instructors as well as learners.
In this introduction section, we have reviewed three systematic reviews on online learning research ( Berge & Mrozowski, 2001 ; Tallent-Runnels et al., 2006 ; Zawacki-Richter et al., 2009 ). Based on these reviews and other research, we have derived twelve themes to develop an online learning research framework which is nested in three levels: learner, course and instructor, and organization.
In two of the three previous reviews, design, learner characteristics, and interaction were examined in the highest number of studies. On the other hand, cost-benefit tradeoffs, equity and accessibility, institutional and administrative factors, and globalization and cross-cultural aspects were examined in the fewest studies. One explanation may be that this is a function of nesting: studies falling at the Organizational and Course levels may encompass several courses or many more participants within courses. However, while some research themes recur, there are also variations in themes across time, suggesting that the importance of research themes rises and falls over time. Thus, a critical examination of trends in themes is helpful for understanding where research is needed most. Also, since there is no recent study examining online learning research themes in the last decade, this study strives to address that gap by focusing on recent research themes found in the literature, and also by reviewing research methods and settings. Notably, one goal is also to compare findings from this decade with the previous review studies. Overall, the purpose of this study is to examine publication trends in online learning research during the last ten years and compare them with the themes identified in previous review studies. Due to the continued growth of online learning research into new contexts and among new researchers, we also examine the research methods and settings found in the studies of this review.
The following research questions are addressed in this study.
This systematic review used the five-step process described in the U.S. Department of Education, Institute of Education Sciences, What Works Clearinghouse Procedures and Standards Handbook, Version 4.0 ( 2017 ): (a) developing the review protocol, (b) identifying relevant literature, (c) screening studies, (d) reviewing articles, and (e) reporting findings.
The Education Research Complete database was searched for articles published between 2009 and 2018, using both the Title and Keyword functions, with the following search terms:
"online learning" OR "online teaching" OR "online program" OR "online course" OR "online education"
The initial search of online learning research among journals in the database resulted in more than 3000 possible articles. Therefore, we limited our search to journals that focus on publishing peer-reviewed online learning and educational research. Our aim was to capture the journals that published the most articles in online learning. However, we also wanted to incorporate the concept of rigor, so we used expert perception to identify 12 peer-reviewed journals that publish high-quality online learning research. Dissertations and conference proceedings were excluded. To be included in this systematic review, each study had to meet the screening criteria described in Table 3 ; a study that failed any criterion was excluded.
Inclusion/Exclusion criteria.
Criteria | Inclusion | Exclusion |
---|---|---|
Focus of the article | Online learning | Articles that did not focus on online learning |
Journals Published | Twelve identified journals | Journals outside of the 12 journals |
Publication date | 2009 to 2018 | Prior to 2009 and after 2018 |
Publication type | Scholarly articles of original research from peer reviewed journals | Book chapters, technical reports, dissertations, or proceedings |
Research Method and Results | There was an identifiable method and results section describing how the study was conducted and included the findings. Quantitative and qualitative methods were included. | Reviews of other articles, opinion, or discussion papers that do not include a discussion of the procedures of the study or analysis of data such as product reviews or conceptual articles. |
Language | Journal article was written in English | Other languages were not included |
Fig. 2 shows the process flow involved in the selection of articles. The search in the database Education Research Complete yielded an initial sample of 3332 articles. Targeting the 12 journals removed 2579 articles. After reviewing the abstracts, we removed 134 articles based on the inclusion/exclusion criteria. The final sample, consisting of 619 articles, was entered into the computer software MAXQDA ( VERBI Software, 2019 ) for coding.
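The selection funnel reduces to simple arithmetic; the following minimal sketch (with illustrative variable names) reproduces the final sample size from the counts reported above:

```python
# Article-selection funnel reported in the review (counts taken from the text).
initial_search = 3332    # articles returned by the Education Research Complete search
outside_journals = 2579  # removed when restricting to the 12 target journals
failed_screening = 134   # removed after abstract review against the criteria in Table 3

final_sample = initial_search - outside_journals - failed_screening
print(final_sample)  # 619, the number of articles entered into MAXQDA for coding
```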
Flowchart of online learning research selection.
A review protocol was designed as a codebook in MAXQDA ( VERBI Software, 2019 ) by the three researchers. The codebook was developed based on findings from the previous review studies and from the initial screening of the articles in this review. The codebook included the 12 research themes listed earlier in Table 2 (Learner Characteristics; Instructor Characteristics; Course or Program Design and Development; Course Facilitation; Engagement; Course Assessment; Evaluation and Quality Assurance; Course Technologies; Access, Culture, Equity, Inclusion, and Ethics; Leadership, Policy and Management; Instructor and Learner Support; and Learner Outcomes), four research settings (higher education, continuing education, K-12, and corporate/military), and three research designs (quantitative, qualitative, and mixed methods). Fig. 3 below is a screenshot of MAXQDA used for the coding process.
Codebook from MAXQDA.
Research articles were coded by two researchers in MAXQDA. The two researchers independently coded 10% of the articles and then discussed and updated the coding framework. The second author, a doctoral student, coded the remaining studies. The researchers met bi-weekly to address coding questions that emerged. After the first phase of coding, we found that more than 100 studies fell into each of the categories of Learner Characteristics and Engagement, so we pursued a second phase of coding to re-examine these two themes. Learner Characteristics was classified into the subthemes of Academic, Affective, Motivational, Self-regulation, Cognitive, and Demographic Characteristics. Engagement was classified into the subthemes of Collaboration, Communication, Community, Involvement, Interaction, Participation, and Presence.
Frequency tables were generated for each of the variables so that outliers could be examined and narrative data could be collapsed into categories. Once cleaned and collapsed into a reasonable number of categories, descriptive statistics were used to describe each of the coded elements. We first present the frequencies of publications related to online learning in the 12 journals. The total number of articles for each journal (collectively, the population) was hand-counted from journal websites, excluding editorials and book reviews. The publication trend of online learning research from 2009 to 2018 is also depicted. Then, descriptive information for the 12 themes, including the subthemes of Learner Characteristics and Engagement, is provided. Finally, research themes are elaborated by research setting and methodology.
Publication patterns of the 619 articles reviewed from the 12 journals are presented in Table 4 . International Review of Research in Open and Distributed Learning had the highest number of publications in this review. Overall, about 8% of the articles appearing in these twelve journals consisted of online learning publications; however, several journals had concentrations of online learning articles totaling more than 20%.
Empirical online learning research articles by journal, 2009–2018.
Journal Name | Frequency of Empirical Online Learning Research | Percent of Sample | Percent of Journal's Total Articles |
---|---|---|---|
International Review of Research in Open and Distributed Learning | 152 | 24.40 | 22.55 |
Internet & Higher Education | 84 | 13.48 | 26.58 |
Computers & Education | 75 | 12.04 | 18.84 |
Online Learning | 72 | 11.56 | 3.25 |
Distance Education | 64 | 10.27 | 25.10 |
Journal of Online Learning & Teaching | 39 | 6.26 | 11.71 |
Journal of Educational Technology & Society | 36 | 5.78 | 3.63 |
Quarterly Review of Distance Education | 24 | 3.85 | 4.71 |
American Journal of Distance Education | 21 | 3.37 | 9.17 |
British Journal of Educational Technology | 19 | 3.05 | 1.93 |
Educational Technology Research & Development | 19 | 3.05 | 10.80 |
Australasian Journal of Educational Technology | 14 | 2.25 | 2.31 |
Total | 619 | 100.0 | 8.06 |
Note . Journal's Total Article count excludes reviews and editorials.
The publication trend of online learning research is depicted in Fig. 4 . When disaggregated by year, the total frequency of publications shows an increasing trend. Online learning articles increased throughout the decade and hit a relative maximum in 2014. The greatest number of online learning articles ( n = 86) occurred most recently, in 2018.
Online learning publication trends by year.
The publications were categorized into the twelve research themes identified in Fig. 1 . The frequency counts and percentages of the research themes are provided in Table 5 below. A majority of the research falls into the Learner domain; the fewest articles appear in the Organization domain.
Research themes in the online learning publications from 2009 to 2018.
Research Themes | Frequency | Percentage |
---|---|---|
Engagement | 179 | 28.92 |
Learner Characteristics | 134 | 21.65 |
Learner Outcomes | 32 | 5.17 |
Evaluation and Quality Assurance | 38 | 6.14 |
Course Technologies | 35 | 5.65 |
Course Facilitation | 34 | 5.49 |
Course Assessment | 30 | 4.85 |
Course Design and Development | 27 | 4.36 |
Instructor Characteristics | 21 | 3.39 |
Institutional Support | 33 | 5.33 |
Access, Culture, Equity, Inclusion, and Ethics | 29 | 4.68 |
Leadership, Policy, and Management | 27 | 4.36 |
The specific themes of Engagement ( n = 179, 28.92%) and Learner Characteristics ( n = 134, 21.65%) were most often examined in publications. These two themes were further coded to identify sub-themes, which are described in the next two sections. Publications focusing on Instructor Characteristics ( n = 21, 3.39%) were least common in the dataset.
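The percentages in Table 5 follow directly from dividing each theme's frequency by the 619-article sample; a minimal sketch (frequencies copied from the table, dictionary keys are illustrative labels):

```python
# Theme frequencies from Table 5; each percentage is frequency / 619 * 100.
theme_counts = {
    "Engagement": 179,
    "Learner Characteristics": 134,
    "Learner Outcomes": 32,
    "Evaluation and Quality Assurance": 38,
    "Course Technologies": 35,
    "Course Facilitation": 34,
    "Course Assessment": 30,
    "Course Design and Development": 27,
    "Instructor Characteristics": 21,
    "Institutional Support": 33,
    "Access, Culture, Equity, Inclusion, and Ethics": 29,
    "Leadership, Policy, and Management": 27,
}

total = sum(theme_counts.values())  # the 619 articles in the final sample
percentages = {theme: round(n / total * 100, 2) for theme, n in theme_counts.items()}
print(percentages["Engagement"])  # 28.92
```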
The largest number of studies focused on engagement in online learning, which the online learning literature refers to and examines through different terms. Hence, we explore this category in more detail. In this review, we categorized the articles into seven sub-themes, as examined through different lenses: presence, interaction, community, participation, collaboration, involvement, and communication. We use the term "involvement" for studies whose authors broadly used the term engagement to describe their work without further description. Table 6 below provides the description, frequency, and percentage of the studies related to each engagement sub-theme.
Research sub-themes on engagement.
Description | Frequency | Percentage | |
---|---|---|---|
Presence | Learning experience through social, cognitive, and teaching presence. | 50 | 8.08 |
Interaction | Process of interacting with peers, instructors, or content that results in learners' understanding or behavior | 43 | 6.95 |
Community | Sense of belonging within a group | 25 | 4.04 |
Participation | Process of being actively involved | 21 | 3.39 |
Collaboration | Working with someone to create something | 17 | 2.75 |
Involvement | Involvement in learning. This includes articles that focused broadly on engagement of learners. | 14 | 2.26 |
Communication | Process of exchanging information with the intent to share information | 9 | 1.45 |
In the sections below, we provide several examples of the different engagement sub-themes that were studied within the larger engagement theme.
Presence. This sub-theme was the most researched within engagement. With the development of the community of inquiry framework, most of the studies in this sub-theme examined social presence ( Akcaoglu & Lee, 2016 ; Phirangee & Malec, 2017 ; Wei et al., 2012 ), teaching presence ( Orcutt & Dringus, 2017 ; Preisman, 2014 ; Wisneski et al., 2015 ), and cognitive presence ( Archibald, 2010 ; Olesova et al., 2016 ).
Interaction . This was the second most studied theme under engagement. Researchers examined increasing interpersonal interactions ( Cung et al., 2018 ), learner-learner interactions ( Phirangee, 2016 ; Shackelford & Maxwell, 2012 ; Tawfik et al., 2018 ), peer-peer interaction ( Comer et al., 2014 ), learner-instructor interaction ( Kuo et al., 2014 ), learner-content interaction ( Zimmerman, 2012 ), interaction through peer mentoring ( Ruane & Koku, 2014 ), interaction and community building ( Thormann & Fidalgo, 2014 ), and interaction in discussions ( Ruane & Lee, 2016 ; Tibi, 2018 ).
Community. Researchers examined building community in online courses ( Berry, 2017 ), supporting a sense of community ( Jiang, 2017 ), building an online learning community of practice ( Cho, 2016 ), building an academic community ( Glazer & Wanstreet, 2011 ; Nye, 2015 ; Overbaugh & Nickel, 2011 ), and examining connectedness and rapport in an online community ( Bolliger & Inan, 2012 ; Murphy & Rodríguez-Manzanares, 2012 ; Slagter van Tryon & Bishop, 2012 ).
Participation. Researchers examined engagement through participation in a number of studies. Some of the topics include participation patterns in online discussion ( Marbouti & Wise, 2016 ; Wise et al., 2012 ), participation in MOOCs ( Ahn et al., 2013 ; Saadatmand & Kumpulainen, 2014 ), features that influence students' online participation ( Rye & Støkken, 2012 ), and active participation.
Collaboration. Researchers examined engagement through collaborative learning. Specific studies focused on cross-cultural collaboration ( Kumi-Yeboah, 2018 ; Yang et al., 2014 ), how virtual teams collaborate ( Verstegen et al., 2018 ), types of collaboration teams ( Wicks et al., 2015 ), tools for collaboration ( Boling et al., 2014 ), and support for collaboration ( Kopp et al., 2012 ).
Involvement. Researchers examined engaging learners through involvement in various learning activities ( Cundell & Sheepy, 2018 ), student engagement through various measures ( Dixson, 2015 ), how instructors involved students in learning ( O'Shea et al., 2015 ), different strategies to engage the learner ( Amador & Mederer, 2013 ), and the design of emotionally engaging online environments ( Koseoglu & Doering, 2011 ).
Communication. Researchers examined communication in online learning in studies using social network analysis ( Ergün & Usluel, 2016 ), using informal communication tools such as Facebook for class discussion ( Kent, 2013 ), and using various modes of communication ( Cunningham et al., 2010 ; Rowe, 2016 ). Studies have also focused on both asynchronous and synchronous aspects of communication ( Swaggerty & Broemmel, 2017 ; Yamagata-Lynch, 2014 ).
The second largest theme was learner characteristics, which we explored further to identify several sub-themes. We categorized learner characteristics into self-regulation, motivational, academic, affective, cognitive, and demographic characteristics. Table 7 provides the number and percentage of studies examining each of the learner characteristics.
Research sub-themes on learner characteristics.
Learner Characteristics | Description | Frequency | Percentage |
---|---|---|---|
Self-regulation Characteristics | Involves controlling the learner's behavior, emotions, and thoughts to achieve specific learning and performance goals | 54 | 8.72 |
Motivational Characteristics | The learner's goal-directed activity that is instigated and sustained, including beliefs and behavioral change | 23 | 3.72 |
Academic Characteristics | Education characteristics such as educational type and educational level | 19 | 3.07 |
Affective Characteristics | Learner characteristics that describe learners' feelings or emotions such as satisfaction | 17 | 2.75 |
Cognitive Characteristics | Learner characteristics related to cognitive elements such as attention, memory, and intellect (e.g., learning strategies, learning skills, etc.) | 14 | 2.26 |
Demographic Characteristics | Learner characteristics that relate to information such as age, gender, language, socioeconomic status, and cultural background. | 7 | 1.13 |
Online learning has elements that differ from the traditional face-to-face classroom, so the characteristics of online learners also differ. Yukselturk and Top (2013) categorized the online learner profile into ten aspects: gender, age, work status, self-efficacy, online readiness, self-regulation, participation in discussion lists, participation in chat sessions, satisfaction, and achievement. Their categorization shows that online learner characteristics differ in these aspects from those of learners in other settings. Some of these aspects, such as participation and achievement, are discussed under different research themes in this study. The sections below provide examples of the learner characteristics sub-themes that were studied.
Self-regulation. Several researchers have examined self-regulation in online learning. They found that successful online learners are academically motivated ( Artino & Stephens, 2009 ), have academic self-efficacy ( Cho & Shen, 2013 ), have grit and intention to succeed ( Wang & Baker, 2018 ), use time management and elaboration strategies ( Broadbent, 2017 ), set goals and revisit course content ( Kizilcec et al., 2017 ), and persist ( Glazer & Murphy, 2015 ). Researchers also found positive relationships between learners' self-regulation and interaction ( Delen et al., 2014 ) and between self-regulation and communication and collaboration ( Barnard et al., 2009 ).
Motivation. Researchers focused on the motivation of online learners, including different motivation levels of online learners ( Li & Tsai, 2017 ), what motivated online learners ( Chaiprasurt & Esichaikul, 2013 ), differences in motivation among online learners ( Hartnett et al., 2011 ), and motivation compared with that of face-to-face learners ( Paechter & Maier, 2010 ). Hartnett et al. (2011) found that online learner motivation was complex, multifaceted, and sensitive to situational conditions.
Academic. Several researchers have focused on academic aspects of online learner characteristics. Readiness for online learning has been examined as an academic factor by several researchers ( Buzdar et al., 2016 ; Dray et al., 2011 ; Wladis & Samuels, 2016 ; Yu, 2018 ), specifically focusing on creating and validating measures of online learner readiness, including examining students' emotional intelligence as a measure of readiness for online learning. Researchers have also examined other academic factors such as academic standing ( Bradford & Wyatt, 2010 ), course-level factors ( Wladis et al., 2014 ), and academic skills in online courses ( Shea & Bidjerano, 2014 ).
Affective. Anderson and Bourke (2013) describe affective characteristics through which learners express feelings or emotions. Several research studies focused on the affective characteristics of online learners. Learner satisfaction for online learning has been examined by several researchers ( Cole et al., 2014 ; Dziuban et al., 2015 ; Kuo et al., 2013 ; Lee, 2014a ) along with examining student emotions towards online assessment ( Kim et al., 2014 ).
Cognitive. Researchers have also examined cognitive aspects of learner characteristics including meta-cognitive skills, cognitive variables, higher-order thinking, cognitive density, and critical thinking ( Chen & Wu, 2012 ; Lee, 2014b ). Lee (2014b) examined the relationship between cognitive presence density and higher-order thinking skills. Chen and Wu (2012) examined the relationship between cognitive and motivational variables in an online system for secondary physical education.
Demographic. Researchers have examined various demographic factors in online learning, including gender differences (Bayeck et al., 2018; Lowes et al., 2016; Yukselturk & Bulut, 2009), ethnicity and age (Ke & Kwak, 2013), and minority status (Yeboah & Smith, 2016) of online learners.
While engagement and learner characteristics were studied the most, the remaining themes appeared less often in the literature and are presented here in order of frequency, with general descriptions of the types of research examined for each.
Evaluation and Quality Assurance. There were 38 studies (6.14%) published in the theme of evaluation and quality assurance. Some of the studies in this theme focused on course quality standards, using Quality Matters to evaluate quality, using the CIPP (context, input, process, product) model for evaluation, online learning system evaluation, and course and program evaluations.
Course Technologies. There were 35 studies (5.65%) published in the course technologies theme. Some of the studies examined specific technologies such as Edmodo, YouTube, Web 2.0 tools, wikis, Twitter, WebCT, Screencasts, and Web conferencing systems in the online learning context.
Course Facilitation. There were 34 studies (5.49%) published in the course facilitation theme. Some of the studies in this theme examined facilitation strategies and methods, experiences of online facilitators, and online teaching methods.
Institutional Support. There were 33 studies (5.33%) published in the institutional support theme which included support for both the instructor and learner. Some of the studies on instructor support focused on training new online instructors, mentoring programs for faculty, professional development resources for faculty, online adjunct faculty training, and institutional support for online instructors. Studies on learner support focused on learning resources for online students, cognitive and social support for online learners, and help systems for online learner support.
Learner Outcome. There were 32 studies (5.17%) published in the learner outcome theme. Some of the studies that were examined in this theme focused on online learner enrollment, completion, learner dropout, retention, and learner success.
Course Assessment. There were 30 studies (4.85%) published in the course assessment theme. Some of the studies in the course assessment theme examined online exams, peer assessment and peer feedback, proctoring in online exams, and alternative assessments such as eportfolio.
Access, Culture, Equity, Inclusion, and Ethics. There were 29 studies (4.68%) published in the access, culture, equity, inclusion, and ethics theme. Some of the studies in this theme examined online learning across cultures, multi-cultural effectiveness, multi-access, and cultural diversity in online learning.
Leadership, Policy, and Management. There were 27 studies (4.36%) published in the leadership, policy, and management theme. Some of the studies on leadership, policy, and management focused on online learning leaders, stakeholders, strategies for online learning leadership, resource requirements, university policies for online course policies, governance, course ownership, and faculty incentives for online teaching.
Course Design and Development. There were 27 studies (4.36%) published in the course design and development theme. Some of the studies examined in this theme focused on design elements, design issues, design process, design competencies, design considerations, and instructional design in online courses.
Instructor Characteristics. There were 21 studies (3.39%) published in the instructor characteristics theme. Some of the studies in this theme were on motivation and experiences of online instructors, ability to perform online teaching duties, roles of online instructors, and adjunct versus full-time online instructors.
The research methods used in the studies were classified into quantitative, qualitative, and mixed methods (Harwell, 2012, pp. 147–163). The research setting was categorized into higher education, continuing education, K-12, and corporate/military. As shown in Table A in the appendix, the vast majority of the publications used higher education as the research setting (n = 509, 67.6%). Table B in the appendix shows that approximately half of the studies adopted the quantitative method (n = 324, 43.03%), followed by the qualitative method (n = 200, 26.56%). Mixed methods accounted for the smallest portion (n = 95, 12.62%).
Table A shows that the patterns of the four research settings were approximately consistent across the 12 themes except for the themes of Learner Outcome and Institutional Support. Continuing education had a higher relative frequency in Learner Outcome (0.28) and K-12 had a higher relative frequency in Institutional Support (0.33) compared to the frequencies they had across all themes (0.09 and 0.08, respectively). Table B in the appendix shows that the distribution of the three methods was not consistent across the 12 themes. While quantitative and qualitative studies were roughly evenly distributed in Engagement, there was a large discrepancy in Learner Characteristics: 100 quantitative studies but only 18 qualitative studies were published in that theme.
In summary, around 8% of the articles published in the 12 journals focused on online learning. Online learning publications showed an overall upward trend over the past decade, albeit with fluctuations, with the greatest number occurring in 2018. Among the 12 research themes related to online learning, Engagement and Learner Characteristics were studied the most and Instructor Characteristics was studied the least. Most studies were conducted in the higher education setting and approximately half of the studies used the quantitative method. Looking at the 12 themes by setting and method, we found that the patterns of the themes by setting or by method were not consistent across the 12 themes.
The quality of our findings was supported by systematic, thorough searches and coding consistency. The selection of the 12 journals provides evidence of the representativeness and quality of the primary studies. In the coding process, any difficulties and questions were resolved through consultations with the research team at bi-weekly meetings, which strengthened the intra-rater and inter-rater reliability of the coding. Together, these approaches support the transparency and replicability of the process and the quality of our results.
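The bi-weekly consensus meetings described above are the review's stated reliability mechanism. As a purely illustrative sketch (not the authors' actual procedure, and using hypothetical theme labels), agreement between two coders assigning categorical codes is often quantified with Cohen's kappa:

```python
# Illustrative only: Cohen's kappa for two hypothetical coders assigning
# theme labels to abstracts. Kappa corrects raw agreement for the
# agreement expected by chance given each coder's label frequencies.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' categorical codes."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed proportion of items on which the coders agree
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each coder's marginal label frequencies
    counts_a = Counter(rater_a)
    counts_b = Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Ten abstracts coded by two hypothetical coders
a = ["engagement", "engagement", "learner", "design", "engagement",
     "learner", "design", "engagement", "learner", "engagement"]
b = ["engagement", "learner", "learner", "design", "engagement",
     "learner", "design", "engagement", "learner", "engagement"]
print(round(cohens_kappa(a, b), 3))  # → 0.844
```

A kappa near 1 indicates agreement well beyond chance; published reviews typically report it alongside the consensus procedure rather than in place of it.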
This review enabled us to identify the online learning research themes examined from 2009 to 2018. In the section below, we review the most studied research themes, engagement and learner characteristics along with implications, limitations, and directions for future research.
Three out of the four systematic reviews informing the design of the present study found that online learner characteristics and online engagement were examined in a high number of studies. In this review, about half of the studies reviewed (50.57%) focused on online learner characteristics or online engagement, which shows the continued importance of these two themes. In the Tallent-Runnels et al. (2006) study, by contrast, the learner characteristics theme was identified as the least studied; the authors noted that researchers were only beginning to investigate learner characteristics in the early days of online learning.
One difference found in this review is that course design and development was examined in fewer studies than in two prior systematic reviews (Berge & Mrozowski, 2001; Zawacki-Richter et al., 2009). Zawacki-Richter et al. did not use a keyword search but reviewed all the articles in five different distance education journals. Berge and Mrozowski (2001) included a research theme called design issues to cover all aspects of instructional systems design in distance education journals. In our study, in addition to course design and development, we also had themes focused on learner outcomes, course facilitation, course assessment, and course evaluation. These are all instructional design focused topics, and because multiple themes addressed instructional design, the course design and development category might have captured fewer studies. There is still a need for more studies to focus on online course design and development.
Three out of the four systematic reviews discussed in the opening of this study found management and organization factors to be least studied. In this review, at the organizational level, Leadership, Policy, and Management was examined in 4.36% of the studies and Access, Culture, Equity, Inclusion, and Ethics in 4.68%. The theme of Equity and Accessibility was also found to be the least studied theme in the Berge and Mrozowski (2001) study. In addition, instructor characteristics was the least examined research theme among the twelve themes studied in this review, appearing in only 3.39% of the studies. While some studies examined instructor motivation and experiences, instructors' ability to teach online, online instructor roles, and adjunct versus full-time online instructors, there is still a need to examine topics focused on instructors and online teaching. This theme was not included in the prior reviews, as their focus was more on the learner and the course than on the instructor. While it is encouraging to see research evolving on instructor-focused topics, there is still a need for more research on the online instructor.
The research themes from this review were compared with research themes from previous systematic reviews, which targeted prior decades. Table 8 shows the comparison.
Comparison of most and least studied online learning research themes from current to previous reviews.
| Theme | Level | 1990–1999 (Berge & Mrozowski, 2001) | 1993–2004 (Tallent-Runnels et al., 2006) | 2000–2008 (Zawacki-Richter et al., 2009) | 2009–2018 (Current Study) |
|---|---|---|---|---|---|
| Learner Characteristics | L | X | X | | X |
| Engagement and Interaction | L | X | | X | X |
| Design Issues/Instructional Design | C | X | | X | |
| Course Environment | C | | X | | |
| Learner Outcomes | L | | X | | |
| Learner Support | L | X | | | |
| Equity and Accessibility | O | X | | | X |
| Institutional & Administrative Factors | O | | X | | X |
| Management and Organization | O | | | X | X |
| Cost-Benefit | O | X | | | |
L = Learner, C = Course, O = Organization.
In this review there is a greater concentration of studies on Learner domain topics and reduced attention to broader research themes in the Course and Organization domains. Organizational-level topics such as Access, Culture, Equity, Inclusion, and Ethics and Leadership, Policy, and Management need to be researched within the context of online learning. Examining access, culture, equity, inclusion, and ethics is especially important for supporting diverse online learners, particularly with the rapid expansion of online learning across all educational levels. This area was also among the least studied in the Berge and Mrozowski (2001) systematic review.
Topics on leadership, policy, and management were least studied both in this review and in the Tallent-Runnels et al. (2006) and Zawacki-Richter et al. (2009) studies. Tallent-Runnels et al. categorized institutional and administrative aspects into institutional policies, institutional support, and enrollment effects. While we included support as a separate category, in this study leadership, policy, and management were combined. There is still a need for research on the leadership of those who manage online learning, policies for online education, and the management of online programs. In the Zawacki-Richter et al. (2009) study, only a few studies examined management and organization focused topics; the authors also found management and organization to be strongly correlated with costs and benefits. In our study, costs and benefits were included as an aspect of management and organization rather than as a theme by themselves. Such studies would provide research-based evidence for online education administrators.
As with any systematic review, there are limitations to the scope of the review. The search was limited to twelve journals in the field that typically include research on online learning. These manuscripts were identified by searching the Education Research Complete database, which focuses on education students, professionals, and policymakers. Other discipline-specific journals as well as dissertations and proceedings were not included due to the volume of articles. Also, the search was performed using five search terms, "online learning" OR "online teaching" OR "online program" OR "online course" OR "online education", in title and keyword. If authors did not include these terms, their work may have been excluded from this review even if it focused on online learning. While these terms are commonly used in North America, they may not be commonly used in other parts of the world. Additional studies may exist outside this scope.
The search strategy also affected how we presented results and introduced limitations regarding generalization. We identified that only 8% of the articles published in these journals were related to online learning; however, given the use of search terms to identify articles within select journals, it was not feasible to identify the total number of research-based articles in the population. Furthermore, our review focused on the topics and general methods of research and did not systematically consider the quality of the published research. Lastly, some journals may have preferences for publishing studies on a particular topic or that use a particular method (e.g., quantitative methods), which introduces possible selection and publication biases and may skew the interpretation of results through over- or under-representation. Future studies are recommended to include more journals to minimize selection bias and obtain a more representative sample.
Certain limitations can be attributed to the coding process. Overall, the coding process for this review worked well for most articles, as each tended to have an individual or dominant focus as described in the abstract, though several mentioned other categories that were likely considered simultaneously to a lesser degree. In some cases, however, a dominant theme was not apparent, and in an effort to create mutually exclusive groups for clearer interpretation, the coders were occasionally forced to choose between two categories. To facilitate this coding, the full texts were used to identify a study's focus through consensus-seeking discussion among all authors. Likewise, some studies focused on topics that we associated with a particular domain, but the design of the study may have promoted an aggregated examination or integrated factors from multiple domains (e.g., engagement). Due to our reliance on author descriptions, construct validity is likely a concern that requires additional exploration; our final grouping of codes may not have aligned with the original authors' descriptions in the abstracts. Additionally, coding of broader constructs that disproportionately occur in the Learner domain, such as learner outcomes, learner characteristics, and engagement, likely introduced bias towards these codes for studies that involved multiple domains. Additional refinement to explore the intersection of domains within studies is needed.
One of the strengths of this review is the set of research categories we identified. We hope these categories will support future researchers and identify areas and levels of need for future research. Overall, there is some agreement on online learning research themes between previous reviews and this one; at the same time, there are some contradictory findings. We hope the most- and least-researched themes give authors direction on the importance of research and the areas of need to focus on.
The leading theme found in this review is online engagement research. However, presentation of this research was inconsistent and often lacked specificity. This is not unique to online environments, but the nuances of defining engagement in an online environment are unique and therefore need further investigation and clarification. This review points to seven distinct classifications of online engagement. Further research on engagement should indicate which type of engagement is sought. This level of specificity is necessary to establish instruments for measuring engagement and, ultimately, to test frameworks for classifying engagement and promoting it in online environments. It may also be important to examine the relationships among these seven sub-themes of engagement.
Additionally, this review highlights growing attention to learner characteristics, which constitutes a shift in focus away from instructional characteristics and course design. Although this is consistent with the focus on engagement, the roles of the instructor and of course design with respect to these outcomes remain important. Results of learner characteristics and engagement research, paired with course design, will have important ramifications for the teaching and learning professionals who support instruction. The review also points to a concentration of research in higher education. With an immediate and growing emphasis on online learning in K-12 and corporate settings, there is a critical need for further investigation in these settings.
Lastly, because the present review did not focus on the overall effect of interventions, opportunities exist for dedicated meta-analyses. Particular attention to research on engagement and learner characteristics as well as how these vary by study design and outcomes would be logical additions to the research literature.
This systematic review builds upon previous reviews, which tackled the topic of online learning between 1990 and 2010, by extending the timeframe to consider the most recent set of published research. Covering the most recent decade, our review of 619 articles from 12 leading online learning journals points to a more concentrated focus on the learner domain, including engagement and learner characteristics, with more limited attention to topics at the course or organizational level. The review highlights an opportunity for the field to clarify terminology concerning online learning research, particularly in the area of learner outcomes, where there is a tendency to classify research more generally (e.g., engagement). Using this sample of published literature, we provide a possible taxonomy for categorizing this research using subcategories. The field could benefit from a broader conversation about how these categories can shape a comprehensive framework for online learning research. Such efforts will enable the field to effectively prioritize research aims over time and synthesize effects.
Florence Martin: Conceptualization, Writing - original draft, Writing - review & editing, Supervision, Project administration. Ting Sun: Methodology, Formal analysis, Writing - original draft, Writing - review & editing. Carl Westine: Methodology, Formal analysis, Writing - original draft, Writing - review & editing, Supervision.
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
1 Includes articles that are cited in this manuscript and also included in the systematic review. The entire list of 619 articles used in the systematic review can be obtained by emailing the authors.
Appendix B Supplementary data to this article can be found online at https://doi.org/10.1016/j.compedu.2020.104009 .
Research Themes by the Settings in the Online Learning Publications
Research Theme | Higher Ed (n = 506) | Continuing Education (n = 58) | K-12 (n = 53) | Corporate/Military (n = 3)
---|---|---|---|---
Engagement | 153 | 15 | 12 | 0 |
Presence | 46 | 2 | 3 | 0 |
Interaction | 35 | 4 | 4 | 0 |
Community | 19 | 2 | 4 | 0 |
Participation | 16 | 5 | 0 | 0 |
Collaboration | 16 | 1 | 0 | 0 |
Involvement | 13 | 0 | 1 | 0 |
Communication | 8 | 1 | 0 | 0 |
Learner Characteristics | 106 | 18 | 9 | 1 |
Self-regulation Characteristics | 43 | 9 | 2 | 0 |
Motivation Characteristics | 18 | 3 | 2 | 0 |
Academic Characteristics | 17 | 0 | 2 | 0 |
Affective Characteristics | 12 | 3 | 1 | 1 |
Cognitive Characteristics | 11 | 1 | 2 | 0 |
Demographic Characteristics | 5 | 2 | 0 | 0 |
Evaluation and Quality Assurance | 33 | 3 | 2 | 0 |
Course Technologies | 33 | 2 | 0 | 0 |
Course Facilitation | 30 | 3 | 1 | 0 |
Institutional Support | 24 | 0 | 8 | 1 |
Learner Outcome | 24 | 7 | 1 | 0 |
Course Assessment | 23 | 2 | 5 | 0 |
Access, Culture, Equity, Inclusion and Ethics | 26 | 1 | 2 | 0 |
Leadership, Policy and Management | 17 | 5 | 5 | 0 |
Course Design and Development | 21 | 1 | 4 | 1 |
Instructor Characteristics | 16 | 1 | 4 | 0 |
Research Themes by the Methodology in the Online Learning Publications
Research Theme | Mixed Method (n = 95) | Quantitative (n = 324) | Qualitative (n = 200)
---|---|---|---
Engagement | 32 | 78 | 69 |
Presence | 11 | 25 | 14 |
Interaction | 9 | 20 | 14 |
Community | 2 | 9 | 14 |
Participation | 6 | 8 | 7 |
Collaboration | 2 | 5 | 10 |
Involvement | 2 | 6 | 6 |
Communication | 0 | 5 | 4 |
Learner Characteristics | 16 | 100 | 18 |
Self-regulation Characteristics | 5 | 43 | 6 |
Motivation Characteristics | 4 | 15 | 4 |
Academic Characteristics | 1 | 15 | 3 |
Affective Characteristics | 2 | 12 | 3 |
Cognitive Characteristics | 4 | 8 | 2 |
Demographic Characteristics | 1 | 6 | 0 |
Evaluation and Quality Assurance | 5 | 22 | 11 |
Course Technologies | 4 | 20 | 11 |
Course Facilitation | 7 | 14 | 13 |
Institutional Support | 12 | 9 | 12 |
Learner Outcome | 3 | 23 | 6 |
Course Assessment | 5 | 20 | 5 |
Access, Culture, Equity, Inclusion & Ethics | 3 | 13 | 13 |
Leadership, Policy and Management | 5 | 9 | 13 |
Course Design and Development | 2 | 8 | 17 |
Instructor Characteristics | 1 | 8 | 12 |
The following are the Supplementary data to this article:
International Journal of Educational Technology in Higher Education, volume 17, Article number: 53 (2020)
This article reports on a large-scale ( n = 987), exploratory factor analysis study incorporating various concepts identified in the literature as critical success factors for online learning from the students’ perspective, and then determines their hierarchical significance. Seven factors--Basic Online Modality, Instructional Support, Teaching Presence, Cognitive Presence, Online Social Comfort, Online Interactive Modality, and Social Presence--were identified as significant and reliable. Regression analysis indicates the minimal factors for enrollment in future classes—when students consider convenience and scheduling—were Basic Online Modality, Cognitive Presence, and Online Social Comfort. Students who accepted or embraced online courses on their own merits wanted a minimum of Basic Online Modality, Teaching Presence, Cognitive Presence, Online Social Comfort, and Social Presence. Students, who preferred face-to-face classes and demanded a comparable experience, valued Online Interactive Modality and Instructional Support more highly. Recommendations for online course design, policy, and future research are provided.
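The factor-retention step behind an exploratory factor analysis like the one reported here is commonly the Kaiser criterion: retain factors whose correlation-matrix eigenvalues exceed 1. The sketch below is a hedged illustration only, using synthetic responses and a hypothetical 12-item instrument; it is not the study's survey, data, or analysis code.

```python
# Hypothetical sketch of the Kaiser criterion for factor retention.
# Synthetic data: 987 "students" answering 12 items driven by two
# latent factors plus noise (all names and numbers here are illustrative).
import numpy as np

rng = np.random.default_rng(0)
n_students, n_items = 987, 12

latent = rng.normal(size=(n_students, 2))          # two latent factors
loadings = rng.normal(size=(2, n_items))           # random item loadings
noise = rng.normal(scale=2.0, size=(n_students, n_items))
items = latent @ loadings + noise                  # observed item scores

# Eigenvalues of the item correlation matrix, sorted descending
corr = np.corrcoef(items, rowvar=False)
eigenvalues = np.sort(np.linalg.eigvalsh(corr))[::-1]

# Kaiser criterion: keep factors with eigenvalue > 1
n_factors = int(np.sum(eigenvalues > 1.0))
print(n_factors)
```

In practice, published analyses supplement this criterion with scree plots or parallel analysis, then interpret rotated loadings and reliability coefficients before naming factors, as the study appears to have done for its seven factors.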
While there are different perspectives of the learning process, such as learning achievement and faculty perspectives, students’ perspectives are especially critical since they are ultimately the raison d’être of the educational endeavor (Chickering & Gamson, 1987). More pragmatically, students’ perspectives provide invaluable, first-hand insights into their experiences and expectations (Dawson et al., 2019). The student perspective is especially important when new teaching approaches are used and when new technologies are being introduced (Arthur, 2009; Crews & Butterfield, 2014; Van Wart, Ni, Ready, Shayo, & Court, 2020). With the renewed interest in “active” education in general (Arruabarrena, Sánchez, Blanco, et al., 2019; Kay, MacDonald, & DiGiuseppe, 2019; Nouri, 2016; Vlachopoulos & Makri, 2017) and the flipped classroom approach in particular (Flores, del-Arco, & Silva, 2016; Gong, Yang, & Cai, 2020; Lundin et al., 2018; Maycock, 2019; McGivney-Burelle, 2013; O’Flaherty & Phillips, 2015; Tucker, 2012), along with extraordinary shifts in technology, the student perspective on online education is profoundly important. Students’ perceptions of quality integrate their own sense of learning achievement, satisfaction with the support they receive, technical proficiency of the process, intellectual and emotional stimulation, comfort with the process, and sense of learning community. The factors that students perceive as constituting quality online teaching, however, have not been as clear as they might be, for at least two reasons.
First, it is important to note that the overall online learning experience for students is also composed of non-teaching factors, which we briefly mention. Three such factors are (1) convenience, (2) learner characteristics and readiness, and (3) antecedent conditions that may foster teaching quality but are not directly responsible for it. (1) Convenience is an enormous non-quality factor for students (Artino, 2010) which has driven up online demand around the world (Fidalgo, Thormann, Kulyk, et al., 2020; Inside Higher Education and Gallup, 2019; Legon & Garrett, 2019; Ortagus, 2017). This is important since satisfaction with online classes is frequently somewhat lower than with face-to-face classes (Macon, 2011). However, the literature generally supports the relative equivalence of face-to-face and online modes regarding learning achievement criteria (Bernard et al., 2004; Nguyen, 2015; Ni, 2013; Sitzmann, Kraiger, Stewart, & Wisher, 2006; see Xu & Jaggars, 2014 for an alternate perspective). These contrasts are exemplified in a recent study of business students, in which online students using a flipped classroom approach outperformed their face-to-face peers but, ironically, rated instructor performance lower (Harjoto, 2017). (2) Learner characteristics, such as self-regulation in an active learning model, comfort with technology, and age, among others, affect both receptiveness to and readiness for online instruction (Alqurashi, 2016; Cohen & Baruth, 2017; Kintu, Zhu, & Kagambe, 2017; Kuo, Walker, Schroder, & Belland, 2013; Ventura & Moscoloni, 2015). (3) Finally, numerous antecedent factors may lead to improved instruction but are not themselves directly perceived by students, such as instructor training (Brinkley-Etzkorn, 2018) and the sources of faculty motivation (e.g., incentives, recognition, social influence, and voluntariness) (Wingo, Ivankova, & Moss, 2017).
Important as these factors are, mixing them with the perceptions of quality tends to obfuscate the quality factors directly perceived by students.
Second, while student perceptions of quality are used in innumerable studies, our overall understanding still needs to integrate them more holistically. Many studies use student perceptions of the quality and overall effectiveness of individual tools and strategies in online contexts, such as mobile devices (Drew & Mann, 2018), small groups (Choi, Land, & Turgeon, 2005), journals (Nair, Tay, & Koh, 2013), simulations (Vlachopoulos & Makri, 2017), and video (Lange & Costley, 2020). Such studies, however, cannot provide overall context and comparative importance. Some studies have examined the overall learning experience of students with exploratory lists, but have mixed non-quality factors with quality-of-teaching factors, making it difficult to discern the instructor’s versus contextual roles in quality (e.g., Asoodar, Vaezi, & Izanloo, 2016; Bollinger & Martindale, 2004; Farrell & Brunton, 2020; Hong, 2002; Song, Singleton, Hill, & Koh, 2004; Sun, Tsai, Finger, Chen, & Yeh, 2008). The application of technology adoption studies also falls into this category by essentially aggregating all teaching quality into the single category of performance (Al-Gahtani, 2016; Artino, 2010). Some studies have used high-level teaching-oriented models, primarily the Community of Inquiry model (le Roux & Nagel, 2018), but empirical support has been mixed (Arbaugh et al., 2008), and its elegance (i.e., relying on only three factors) has not provided much insight to practitioners (Anderson, 2016; Cleveland-Innes & Campbell, 2012).
Although the number of empirical studies related to student perceptions of quality factors has increased, the integration of the studies and concepts explored continues to be fragmented and confusing. It is important to have an empirical view of what students value in a single comprehensive study and, also, to know whether there is a hierarchy of factors, ranging from students who are least to most critical of the online learning experience. This research study has two research questions.
The first research question is: What are the significant factors in creating a high-quality online learning experience from students’ perspectives? This is important to know because it should have a significant effect on instructors’ design of online classes. The goal of this research question is to identify a more articulated and empirically supported set of factors capturing the full range of student expectations.
The second research question is: Is there a priority or hierarchy of factors related to students’ perceptions of online teaching quality that relate to their decisions to enroll in online classes? For example, is it possible to distinguish which factors are critical for enrollment decisions when students are primarily motivated by convenience and scheduling flexibility (minimum threshold)? Do these factors differ from students with a genuine acceptance of the general quality of online courses (a moderate threshold)? What are the factors that are important for the students who are the most critical of online course delivery (highest threshold)?
This article next reviews the literature on online education quality, focusing on the student perspective and reviews eight factors derived from it. The research methods section discusses the study structure and methods. Demographic data related to the sample are next, followed by the results, discussion, and conclusion.
Online education is much discussed (Prinsloo, 2016 ; Van Wart et al., 2019 ; Zawacki-Richter & Naidu, 2016 ), but its perception is substantially influenced by where you stand and what you value (Otter et al., 2013 ; Tanner, Noser, & Totaro, 2009 ). Accrediting bodies care about meeting technical standards, proof of effectiveness, and consistency (Grandzol & Grandzol, 2006 ). Institutions care about reputation, rigor, student satisfaction, and institutional efficiency (Jung, 2011 ). Faculty care about subject coverage, student participation, faculty satisfaction, and faculty workload (Horvitz, Beach, Anderson, & Xia, 2015 ; Mansbach & Austin, 2018 ). For their part, students care about learning achievement (Marks, Sibley, & Arbaugh, 2005 ; O’Neill & Sai, 2014 ; Shen, Cho, Tsai, & Marra, 2013 ), but also view online education as a function of their enjoyment of classes, instructor capability and responsiveness, and comfort in the learning environment (e.g., Asoodar et al., 2016 ; Sebastianelli, Swift, & Tamimi, 2015 ). It is this last perspective, of students, upon which we focus.
It is important to note that students do not sign up for online classes solely based on perceived quality. Perceptions of quality derive from notions of the capacity of online learning when ideal, relative to both learning achievement and satisfaction/enjoyment, and from perceptions about the likelihood and experience of classes living up to expectations. Students also sign up because of convenience and flexibility, and because of personal notions about the suitability of online learning. Convenience and flexibility are enormous drivers of online registration (Lee, Stringer, & Du, 2017 ; Mann & Henneberry, 2012 ). Even when students say they prefer face-to-face classes to online, many enroll in online classes and re-enroll in the future if the experience meets minimum expectations. This study examines the threshold expectations of students when they are considering taking online classes.
When discussing students’ perceptions of quality, there is little clarity about the actual range of concepts because no integrated empirical studies exist comparing the major factors found throughout the literature. Rather, there are practitioner-generated lists of micro-competencies, such as those of the Quality Matters consortium for higher education (Quality Matters, 2018 ), or broad frameworks encompassing many aspects of quality beyond teaching (Open and Distance Learning Quality Council, 2012 ). While checklists are useful for practitioners and accreditation processes, they do not provide robust theoretical bases for scholarly development. Overarching frameworks are heuristically useful, but less so for pragmatic purposes or theory building. The most prominent theoretical framework in the online literature is the Community of Inquiry (CoI) model (Arbaugh et al., 2008 ; Garrison, Anderson, & Archer, 2003 ), which divides instruction into teaching, cognitive, and social presence. As with many deductive theories, however, the supporting evidence is mixed (Rourke & Kanuka, 2009 ), especially regarding the importance of social presence (Annand, 2011 ; Armellini & De Stefani, 2016 ). Conceptually, the problem is not so much with the narrow articulation of cognitive or social presence; cognitive presence is how the instructor provides opportunities for students to interact with material in robust, thought-provoking ways, and social presence refers to building a community of learning that incorporates student-to-student interactions. However, teaching presence includes everything else the instructor does: structuring the course, providing lectures, explaining assignments, creating rehearsal opportunities, supplying tests, grading, answering questions, and so on. These challenges become even more prominent in the online context.
While the lecture as a single medium is paramount in face-to-face classes, it fades as the primary vehicle in online classes with increased use of detailed syllabi, electronic announcements, recorded and synchronous lectures, 24/7 communications related to student questions, etc. Amassing the pedagogical and technological elements related to teaching under a single concept provides little insight.
In addition to the CoI model, numerous concepts are suggested in single-factor empirical studies focusing on quality from a student’s perspective, with overlapping conceptualizations and nonstandardized naming conventions. Seven distinct factors are derived here from the literature on student perceptions of online quality: Instructional Support, Teaching Presence, Basic Online Modality, Social Presence, Online Social Comfort, Cognitive Presence, and Interactive Online Modality.
Instructional Support refers to students’ perceptions of techniques by the instructor used for input, rehearsal, feedback, and evaluation. Specifically, this entails providing detailed instructions, designed use of multimedia, and the balance between repetitive class features for ease of use, and techniques to prevent boredom. Instructional Support is often included as an element of Teaching Presence, but is also labeled “structure” (Lee & Rha, 2009 ; So & Brush, 2008 ) and instructor facilitation (Eom, Wen, & Ashill, 2006 ). A prime example of the difference between face-to-face and online education is the extensive use of the “flipped classroom” (Maycock, 2019 ; Wang, Huang, & Schunn, 2019 ) in which students move to rehearsal activities faster and more frequently than traditional classrooms, with less instructor lecture (Jung, 2011 ; Martin, Wang, & Sadaf, 2018 ). It has been consistently supported as an element of student perceptions of quality (Espasa & Meneses, 2010 ).
Teaching Presence refers to students’ perceptions about the quality of communication in lectures, directions, and individual feedback, including encouragement (Jaggars & Xu, 2016 ; Marks et al., 2005 ). Specifically, instructor communication is clear, focused, and encouraging, and instructor feedback is customized and timely. If Instructional Support is what an instructor does before the course begins, then Teaching Presence is what the instructor does while the class is conducted and in response to specific circumstances. For example, a course could be well designed but poorly delivered because the instructor is distracted; or a course could be poorly designed but an instructor might make up for the deficit by spending time and energy in elaborate communications and ad hoc teaching techniques. It is especially important to student satisfaction (Sebastianelli et al., 2015 ; Young, 2006 ) and is also referred to as instructor presence (Asoodar et al., 2016 ), learner-instructor interaction (Marks et al., 2005 ), and staff support (Jung, 2011 ). As with Instructional Support, it has been consistently supported as an element of student perceptions of quality.
Basic Online Modality refers to the competent use of basic online class tools—online grading, navigation methods, online grade book, and the announcements function. It is frequently clumped with instructional quality (Artino, 2010 ), service quality (Mohammadi, 2015 ), instructor expertise in e-teaching (Paechter, Maier, & Macher, 2010 ), and similar terms. As a narrowly defined concept, it is sometimes called technology (Asoodar et al., 2016 ; Bollinger & Martindale, 2004 ; Sun et al., 2008 ). The only empirical study that did not find Basic Online Modality significant, as technology, was Sun et al. ( 2008 ). Because Basic Online Modality is addressed with basic instructor training, some studies assert the importance of training (e.g., Asoodar et al., 2016 ).
Social Presence refers to students’ perceptions of the quality of student-to-student interaction. Social Presence focuses on the quality of shared learning and collaboration among students, such as in threaded discussion responses (Garrison et al., 2003 ; Kehrwald, 2008 ). Much emphasized but challenged in the CoI literature (Rourke & Kanuka, 2009 ), it has mixed support in the online literature. While some studies found Social Presence or related concepts to be significant (e.g., Asoodar et al., 2016 ; Bollinger & Martindale, 2004 ; Eom et al., 2006 ; Richardson, Maeda, Lv, & Caskurlu, 2017 ), others found Social Presence insignificant (Joo, Lim, & Kim, 2011 ; So & Brush, 2008 ; Sun et al., 2008 ).
Online Social Comfort refers to the instructor’s ability to provide an environment in which anxiety is low, and students feel comfortable interacting even when expressing opposing viewpoints. While numerous studies have examined anxiety (e.g., Liaw & Huang, 2013 ; Otter et al., 2013 ; Sun et al., 2008 ), only one found anxiety insignificant (Asoodar et al., 2016 ); many others have not examined the concept.
Cognitive Presence refers to the engagement of students such that they perceive they are stimulated by the material and instructor to reflect deeply and critically, and seek to understand different perspectives (Garrison et al., 2003 ). The instructor provides instructional materials and facilitates an environment that piques interest, is reflective, and enhances inclusiveness of perspectives (Durabi, Arrastia, Nelson, Cornille, & Liang, 2011 ). Cognitive Presence includes enhancing the applicability of material for student’s potential or current careers. Cognitive Presence is supported as significant in many online studies (e.g., Artino, 2010 ; Asoodar et al., 2016 ; Joo et al., 2011 ; Marks et al., 2005 ; Sebastianelli et al., 2015 ; Sun et al., 2008 ). Further, while many instructors perceive that cognitive presence is diminished in online settings, neuroscientific studies indicate this need not be the case (Takamine, 2017 ). While numerous studies failed to examine Cognitive Presence, this review found no studies that lessened its significance for students.
Interactive Online Modality refers to the “high-end” usage of online functionality. That is, the instructor uses interactive online class tools—video lectures, videoconferencing, and small group discussions—well. It is often included in concepts such as instructional quality (Artino, 2010 ; Asoodar et al., 2016 ; Mohammadi, 2015 ; Otter et al., 2013 ; Paechter et al., 2010 ) or engagement (Clayton, Blumberg, & Anthony, 2018 ). While individual methods have been investigated (e.g. Durabi et al., 2011 ), high-end engagement methods have not.
Other independent variables affecting perceptions of quality include age, undergraduate versus graduate status, gender, ethnicity/race, discipline, educational motivation of students, and previous online experience. While age effects have been found to be small or insignificant, more notable effects have been reported at the level of study, with graduate students reporting higher “success” (Macon, 2011 ), and community college students having greater difficulty with online classes (Legon & Garrett, 2019 ; Xu & Jaggars, 2014 ). Effects of ethnicity and race have also been small or insignificant. Some situational variations and student preferences can be captured by paying attention to disciplinary differences (Arbaugh, 2005 ; Macon, 2011 ). Motivation levels of students have been reported to be significant in completion and achievement, with better students doing as well across face-to-face and online modes, and weaker students having greater completion and achievement challenges (Clayton et al., 2018 ; Lu & Lemonde, 2013 ).
To examine the various quality factors, we apply a critical success factor methodology, initially introduced to schools of business research in the 1970s. In 1981, Rockhart and Bullen codified an approach embodying principles of critical success factors (CSFs) as a way to identify the information needs of executives, detailing steps for the collection and analysis of data to create a set of organizational CSFs (Rockhart & Bullen, 1981 ). CSFs describe the underlying or guiding principles which must be incorporated to ensure success.
Utilizing this methodology, CSFs in the context of this paper define key areas of instruction and design essential for an online class to be successful from a student’s perspective. Instructors implicitly know and consider these areas when setting up an online class and designing and directing activities and tasks important to achieving learning goals. CSFs make explicit those things good instructors may intuitively know and (should) do to enhance student learning. When made explicit, CSFs not only confirm the knowledge of successful instructors, but tap their intuition to guide and direct the accomplishment of quality instruction for entire programs. In addition, CSFs are linked with goals and objectives, helping generate a small number of truly important matters an instructor should focus attention on to achieve different thresholds of online success.
After a comprehensive literature review, an instrument was created to measure students’ perceptions about the importance of techniques and indicators leading to quality online classes. Items were designed to capture the major factors in the literature. The instrument was piloted during academic year 2017–18 with a 397-student sample, facilitating an exploratory factor analysis that led to important preliminary findings (reference withheld for review). Based on the pilot, survey items were added and refined to include seven groups of quality teaching factors and two groups of items related to students’ overall acceptance of online classes, as well as a variable on their future online class enrollment. Demographic information was gathered to determine its effects on students’ levels of acceptance of online classes based on age, year in program, major, distance from university, number of online classes taken, high school experience with online classes, and communication preferences.
This paper draws evidence from a sample of students enrolled in educational programs at Jack H. Brown College of Business and Public Administration (JHBC), California State University San Bernardino (CSUSB). The JHBC offers a wide range of online courses for undergraduate and graduate programs. To ensure comparable learning outcomes, online classes and face-to-face classes of a certain subject are similar in size—undergraduate classes are generally capped at 60 and graduate classes at 30, and often taught by the same instructors. Students sometimes have the option to choose between both face-to-face and online modes of learning.
A Qualtrics survey link was sent out by 11 instructors to students who were unlikely to be cross-enrolled in classes during the 2018–19 academic year. 1 Approximately 2500 students were contacted, with some instructors providing class time to complete the anonymous survey. All students, whether they had taken an online class or not, were encouraged to respond. Nine hundred eighty-seven students responded, representing a 40% response rate. Although drawn from a single business school, it is a broad sample representing students from several disciplines—management, accounting and finance, marketing, information decision sciences, and public administration, as well as both graduate and undergraduate programs of study.
The sample age of students is young, with 78% being under 30. The sample has almost no lower division students (i.e., freshman and sophomore), 73% upper division students (i.e., junior and senior) and 24% graduate students (master’s level). Only 17% reported having taken a hybrid or online class in high school. There was a wide range of exposure to university level online courses, with 47% reporting having taken 1 to 4 classes, and 21% reporting no online class experience. As a Hispanic-serving institution, 54% self-identified as Latino, 18% White, and 13% Asian and Pacific Islander. The five largest majors were accounting & finance (25%), management (21%), master of public administration (16%), marketing (12%), and information decision sciences (10%). Seventy-four percent work full- or part-time. See Table 1 for demographic data.
To increase the reliability of evaluation scores, composite evaluation variables were formed after an exploratory factor analysis of individual evaluation items. A principal component method with Quartimin (oblique) rotation was applied to explore the factor structure of student perceptions of online teaching CSFs. Items with loading coefficients greater than .30 were retained, a commonly accepted threshold in factor analysis. A simple least-squares regression analysis was applied to test the significance levels of factors on students’ impressions of online classes.
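The extraction step of this procedure can be sketched in a few lines of numpy. The sketch below is illustrative only: it uses hypothetical data rather than the study's survey responses, and it shows only principal-component extraction with the .30 loading threshold; the Quartimin (oblique) rotation used in the study is omitted for brevity.

```python
import numpy as np

def pc_loadings(X, n_factors):
    """Principal-component extraction of factor loadings from survey items.

    X: (n_respondents, n_items) matrix of item responses.
    Returns an (n_items, n_factors) loading matrix; the oblique
    rotation applied in the study is not reproduced here.
    """
    R = np.corrcoef(X, rowvar=False)            # item correlation matrix
    eigvals, eigvecs = np.linalg.eigh(R)        # eigh returns ascending order
    top = np.argsort(eigvals)[::-1][:n_factors]
    # a loading is an eigenvector scaled by the sqrt of its eigenvalue
    return eigvecs[:, top] * np.sqrt(np.maximum(eigvals[top], 0))

# hypothetical data: 100 respondents answering 6 survey items
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 6))
loadings = pc_loadings(X, n_factors=2)
salient = np.abs(loadings) >= 0.30              # the study's inclusion threshold
```

In practice a dedicated package would handle rotation and factor retention; the point here is only that each retained item-factor pairing corresponds to a loading at or above the .30 cutoff.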
Using a threshold loading of 0.3 for items, 37 items loaded on seven factors. All factors were logically consistent. The first factor, with eight items, was labeled Teaching Presence. Items included providing clear instructions, staying on task, clear deadlines, and customized feedback on strengths and weaknesses. Teaching Presence items all related to instructor involvement during the course as a director, monitor, and learning facilitator. The second factor, with seven items, aligned with Cognitive Presence. Items included stimulating curiosity, opportunities for reflection, helping students construct explanations posed in online courses, and the applicability of material. The third factor, with six items, aligned with Social Presence defined as providing student-to-student learning opportunities. Items included getting to know course participants for sense of belonging, forming impressions of other students, and interacting with others. The fourth factor, with six new items as well as two (“interaction with other students” and “a sense of community in the class”) shared with the third factor, was Instructional Support which related to the instructor’s roles in providing students a cohesive learning experience. They included providing sufficient rehearsal, structured feedback, techniques for communication, navigation guide, detailed syllabus, and coordinating student interaction and creating a sense of online community. This factor also included enthusiasm which students generally interpreted as a robustly designed course, rather than animation in a traditional lecture. The fifth factor was labeled Basic Online Modality and focused on the basic technological requirements for a functional online course. Three items included allowing students to make online submissions, use of online gradebooks, and online grading. 
A fourth item is the use of online quizzes, viewed by students as mechanical practice opportunities rather than small tests, and a fifth is navigation, a key component of Basic Online Modality. The sixth factor, loading on four items, was labeled Online Social Comfort. Items here included comfort discussing ideas online, comfort disagreeing, developing a sense of collaboration via discussion, and considering online communication an excellent medium for social interaction. The final factor was called Interactive Online Modality because it included items for “richer” communications or interactions, whether one- or two-way. Items included videoconferencing, instructor-generated videos, and small group discussions. Taken together, these seven factors explained 67% of the variance, which is considered within the acceptable range in social science research for a robust model (Hair, Black, Babin, & Anderson, 2014 ). See Table 2 for the full list.
To test factor reliability, Cronbach’s alpha was calculated for each variable. All produced values greater than 0.7, the standard threshold for reliability, except for system trust, which was therefore dropped. To gauge students’ sense of factor importance, item scores were averaged within each factor. Factor means (lower means indicating higher importance to students) ranged from 1.5 to 2.6 on a 5-point scale. Basic Online Modality was most important, followed by Instructional Support and Teaching Presence. Students deemed Cognitive Presence, Online Social Comfort, and Interactive Online Modality less important. The least important for this sample was Social Presence. Table 3 arrays the critical success factor means, standard deviations, and Cronbach alphas.
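Cronbach's alpha is a standard internal-consistency statistic: the ratio of item covariance to total scale variance, scaled by the number of items. A minimal numpy sketch (with hypothetical data, not the study's responses):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for one factor.

    items: (n_respondents, k) matrix of item scores.
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of respondent totals
    return (k / (k - 1)) * (1 - item_vars / total_var)

# sanity check: three perfectly correlated items yield alpha = 1
base = np.arange(10, dtype=float)
perfect = np.column_stack([base, base, base])
assert abs(cronbach_alpha(perfect) - 1.0) < 1e-9
```

A factor whose items barely covary produces an alpha near zero, which is why the 0.7 threshold serves as a screen for dropping unreliable composites such as the system trust variable.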
To determine whether particular subgroups of respondents viewed factors differently, a series of ANOVAs were conducted using factor means as dependent variables. Six demographic variables were used as independent variables: graduate vs. undergraduate, age, work status, ethnicity, discipline, and past online experience. To determine the strength of association of the independent variables to each of the seven CSFs, eta squared was calculated for each ANOVA. Eta squared indicates the proportion of variance in the dependent variable explained by the independent variable. Eta squared values greater than .01, .06, and .14 are conventionally interpreted as small, medium, and large effect sizes, respectively (Green & Salkind, 2003 ). Table 4 summarizes the eta squared values for the ANOVA tests, with values less than .01 omitted.
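Eta squared is simply the between-groups sum of squares divided by the total sum of squares. A short numpy sketch with hypothetical scores and group labels (the variable names are illustrative, not the study's data):

```python
import numpy as np

def eta_squared(values, groups):
    """Proportion of variance in `values` explained by `groups`:
    eta^2 = SS_between / SS_total."""
    values = np.asarray(values, dtype=float)
    groups = np.asarray(groups)
    grand_mean = values.mean()
    ss_total = ((values - grand_mean) ** 2).sum()
    ss_between = sum(
        (groups == g).sum() * (values[groups == g].mean() - grand_mean) ** 2
        for g in np.unique(groups)
    )
    return ss_between / ss_total

# two groups whose means differ sharply: most variance lies between groups
scores = np.array([1.0, 1.1, 0.9, 3.0, 3.1, 2.9])
labels = np.array(["ugrad"] * 3 + ["grad"] * 3)
eta2 = eta_squared(scores, labels)
assert eta2 > 0.14  # a "large" effect by the conventional cutoffs
```

By this definition an eta squared below .01 means the grouping variable accounts for under 1% of the variance in a factor mean, which is why such values are omitted from Table 4.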
While there were no significant differences in factor means among students in different disciplines in the College, all five other independent variables had some small effect on some or all CSFs. Graduate students tend to rate Online Interactive Modality, Instructional Support, Teaching Presence, and Cognitive Presence higher than undergraduates. Older students value Online Interactive Modality more. Full-time working students rate all factors except Online Social Comfort slightly higher than part-timers and non-working students. Latino and White students rate Basic Online Modality and Instructional Support higher; Asian and Pacific Islander students rate Social Presence higher. Students who have taken more online classes rate all factors higher.
In addition to factor scores, two variables were constructed to identify the resultant impressions, labeled online experience. Both were logically consistent, with a Cronbach’s α greater than 0.75. The first variable, with six items, labeled “online acceptance,” included items such as “I enjoy online learning,” “My overall impression of hybrid/online learning is very good,” and “the instructors of online/hybrid classes are generally responsive.” The second variable, labeled “face-to-face preference,” combines four items, including enjoying, learning, and communicating more in face-to-face classes, as well as perceiving greater fairness and equity. In addition to these two constructed variables, a one-item variable was also used subsequently in the regression analysis: “online enrollment.” That question asked: if hybrid/online classes are well taught and available, how much of your course selection going forward would online education make up?
As noted above, two constructed variables and one item were used as dependent variables for purposes of regression analysis. They were online acceptance, F2F preference, and the selection of online classes. In addition to seven quality-of-teaching factors identified by factor analysis, control variables included level of education (graduate versus undergraduate), age, ethnicity, work status, distance to university, and number of online/hybrid classes taken in the past. See Table 5 .
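The regression step described above can be illustrated with a compact least-squares sketch. Everything here is hypothetical: simulated factor scores stand in for the survey-derived factors, and the coefficients are fabricated for the demonstration; the sketch only shows the mechanics of regressing an acceptance measure on factor scores.

```python
import numpy as np

def ols(X, y):
    """Least-squares coefficients (intercept first) for predicting y from X."""
    X = np.column_stack([np.ones(len(X)), X])  # prepend intercept column
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# simulated example: an "acceptance" score driven by two factor scores
rng = np.random.default_rng(1)
factors = rng.normal(size=(200, 2))             # stand-ins for two CSF scores
accept = 2.0 + 0.5 * factors[:, 0] - 0.3 * factors[:, 1] \
         + rng.normal(0, 0.05, 200)             # small noise term
coef = ols(factors, accept)                     # recovers ~[2.0, 0.5, -0.3]
```

A full replication would also require the control variables (level of education, age, ethnicity, work status, distance, and prior online classes) as additional columns, and significance tests on each coefficient.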
When eta squared values were calculated for the control factors, only one approached a medium effect: graduate versus undergraduate status had a .05 effect on Online Interactive Modality, meaning graduate students were more sensitive to interactive modality than undergraduates. Multiple regression analyses of critical success factors and online impressions were conducted to compare the conditions under which factors were significant. The only consistently significant control factor was the number of online classes taken: the more classes students had taken online, the more inclined they were to take future classes. Level of program, age, ethnicity, and working status did not significantly affect students’ choice or overall acceptance of online classes.
The least restrictive condition was online enrollment (Table 6 ). That is, students might not feel online courses were ideal, but because of convenience and scheduling might enroll in them if minimum threshold expectations were met. When considering online enrollment three factors were significant and positive (at the 0.1 level): Basic Online Modality, Cognitive Presence, and Online Social Comfort. These least-demanding students expected classes to have basic technological functionality, provide good opportunities for knowledge acquisition, and provide comfortable interaction in small groups. Students who demand good Instructional Support (e.g., rehearsal opportunities, standardized feedback, clear syllabus) are less likely to enroll.
Online acceptance was more restrictive (see Table 7 ). This variable captured the idea that students not only enrolled in online classes out of necessity, but with an appreciation of the positive attributes of online instruction, which balanced the negative aspects. When this standard was applied, students expected not only Basic Online Modality, Cognitive Presence, and Online Social Comfort, but expected their instructors to be highly engaged virtually as the course progressed (Teaching Presence), and to create strong student-to-student dynamics (Social Presence). Students who rated Instructional Support higher are less accepting of online classes.
Another restrictive condition was catering to the needs of students who preferred face-to-face classes (see Table 8 ). That is, they preferred face-to-face classes even when online classes were well taught. Unlike students more accepting of, or more likely to enroll in, online classes, this group rates Instructional Support as critical to enrolling, rather than a negative factor when absent. Again different from the other two groups, these students demand appropriate interactive mechanisms (Online Interactive Modality) to enable richer communication (e.g., videoconferencing). Student-to-student collaboration (Social Presence) was also significant. This group also rated Cognitive Presence and Online Social Comfort as significant, but only in their absence. That is, these students were most attached to direct interaction with the instructor and other students rather than specific teaching methods. Interestingly, Basic Online Modality and Teaching Presence were not significant. Our interpretation is that this student group, the most critical of online classes for their loss of physical interaction, is beyond being concerned with basic technical interaction and demands higher levels of interactivity and instructional sophistication.
Some past studies have used robust empirical methods to identify a single factor or a small number of factors related to quality from a student’s perspective, but have not sought to be comprehensive. Others have used longer series of itemized factors, but with less robust methods, and have not tied those factors back to the literature. This study has used the literature to develop a relatively comprehensive list of items focused on quality teaching in a single rigorous protocol. While a pilot test had identified five coherent factors, substantial changes to the current survey sharpened the focus on quality factors rather than antecedent factors, and better articulated the array of factors often lumped under the mantle of “teaching presence.” In addition, the study examined these factors based on threshold expectations: from minimal, such as when flexibility is the driving consideration, to moderate, such as when students want a “good” online class, to high, when students demand an interactive virtual experience equivalent to face-to-face.
Exploratory factor analysis identified seven factors that were reliable, coherent, and significant under different conditions. When considering students’ overall sense of importance, they are, in order: Basic Online Modality, Instructional Support, Teaching Presence, Cognitive Presence, Online Social Comfort, Interactive Online Modality, and Social Presence. Students are most concerned with the basics of a course first, that is, technological functionality and instructor competence. Next they want engagement and virtual comfort. Social Presence, while valued, is the least critical from this overall perspective.
The factor analysis is quite consistent with the range of factors identified in the literature, pointing to the fact that students can differentiate among different aspects of what have been clumped together as larger concepts, such as teaching presence. Essentially, the instructor’s role in quality can be divided into her/his command of basic online functionality, good design, and good presence during the class. The instructor’s command of basic functionality is paramount. Because so much of an online class must be built in advance, the quality of the class design is rated more highly than the instructor’s role in facilitating the class. Taken as a whole, the instructor’s role in traditional teaching elements is primary, as we would expect it to be. Cognitive presence, especially the pertinence of the instructional material and its applicability to student interests, has been found significant whenever studied, and was highly rated here as a single factor. Finally, the degree to which students feel comfortable with the online environment and enjoy the learner-to-learner aspect has had less support in empirical studies; it was found significant here, but was rated lowest among the quality factors by students.
Regression analysis paints a more nuanced picture, depending on student focus. It also helps explain some of the heterogeneity of previous studies, depending on what the dependent variables were. If convenience and scheduling are critical and students are less demanding, the minimum requirements are Basic Online Modality, Cognitive Presence, and Online Social Comfort. That is, students expect an instructor who knows how to use the online platform, delivers useful information, and provides a comfortable learning environment. However, they do not expect to get poor design. They simply do not expect much in terms of teaching presence, learner-to-learner interaction, or interactive teaching.
When students are signing up for critical classes, or they have both F2F and online options, they have a higher standard. That is, they not only expect the factors underlying decisions about enrolling in noncritical classes, but they also expect good Teaching and Social Presence. Students who simply need a class may be willing to teach themselves a bit more, but students who want a good class expect a highly present instructor in terms of responsiveness and immediacy. “Good” classes must not only create a comfortable atmosphere but, in social science classes at least, must provide strong learner-to-learner interactions as well. At the time of the research, most students believed that you can have a good class without high interactivity via pre-recorded video and videoconference. That may, or may not, change over time as the technology thresholds of various video media become easier to use, more reliable, and more commonplace.
The most demanding students are those who prefer F2F classes because of learning style preferences, poor past experiences, or both. Such students (seem to) assume that a worthwhile online class has basic functionality and that the instructor provides a strong presence. They are also critical of the absence of Cognitive Presence and Online Social Comfort. They want strong Instructional Support and Social Presence. But in addition, and uniquely, they expect Online Interactive Modality, which provides the greatest possible verisimilitude to the traditional classroom. More than the other two groups, these students crave human interaction in the learning process, both with the instructor and other students.
These findings shed light on the possible ramifications of the COVID-19 aftermath. Many universities around the world jumped from relatively low levels of online instruction in the beginning of spring 2020 to nearly 100% by mandate by the end of the spring term. The question becomes, what will happen after the mandate is removed? Will demand resume pre-crisis levels, will it increase modestly, or will it skyrocket? Time will be the best judge, but the findings here would suggest that the ability/interest of instructors and institutions to “rise to the occasion” with quality teaching will have as much effect on demand as students becoming more acclimated to online learning. If in the rush to get classes online many students experience shoddy basic functional competence, poor instructional design, sporadic teaching presence, and poorly implemented cognitive and social aspects, they may be quite willing to return to the traditional classroom. If faculty and institutions supporting them are able to increase the quality of classes despite time pressures, then most students may be interested in more hybrid and fully online classes. If instructors are able to introduce high quality interactive teaching, nearly the entire student population will be interested in more online classes. Of course students will have a variety of experiences, but this analysis suggests that those instructors, departments, and institutions that put greater effort into the temporary adjustment (and who resist less), will be substantially more likely to have increases in demand beyond what the modest national trajectory has been for the last decade or so.
There are several study limitations. First, the study does not include a sample of non-respondents, who may have a somewhat different profile. Second, the study draws from a single college and university; the profile derived here may vary significantly by type of student. Third, some survey statements may have led respondents to rate quality based upon experience rather than to assess the general importance of online course elements. For example, “I felt comfortable participating in the course discussions” could be revised to “comfort in participating in course discussions.” The authors judged differences among subgroups (e.g., among majors) to be small and statistically insignificant. However, it is possible that differences between biology and marketing students would be significant, leading factors to be ordered differently. Emphasis and ordering might also vary at a community college versus a research-oriented university (Gonzalez, 2009).
We will make the data available.
Al-Gahtani, S. S. (2016). Empirical investigation of e-learning acceptance and assimilation: A structural equation model. Applied Computing and Informatics , 12 (1), 27–50.
Alqurashi, E. (2016). Self-efficacy in online learning environments: A literature review. Contemporary Issues in Education Research (CIER) , 9 (1), 45–52.
Anderson, T. (2016). A fourth presence for the Community of Inquiry model? Retrieved from https://virtualcanuck.ca/2016/01/04/a-fourth-presence-for-the-community-of-inquiry-model/ .
Annand, D. (2011). Social presence within the community of inquiry framework. The International Review of Research in Open and Distributed Learning , 12 (5), 40.
Arbaugh, J. B. (2005). How much does “subject matter” matter? A study of disciplinary effects in on-line MBA courses. Academy of Management Learning & Education , 4 (1), 57–73.
Arbaugh, J. B., Cleveland-Innes, M., Diaz, S. R., Garrison, D. R., Ice, P., Richardson, J. C., & Swan, K. P. (2008). Developing a community of inquiry instrument: Testing a measure of the Community of Inquiry framework using a multi-institutional sample. Internet and Higher Education , 11 , 133–136.
Armellini, A., & De Stefani, M. (2016). Social presence in the 21st century: An adjustment to the Community of Inquiry framework. British Journal of Educational Technology , 47 (6), 1202–1216.
Arruabarrena, R., Sánchez, A., Blanco, J. M., et al. (2019). Integration of good practices of active methodologies with the reuse of student-generated content. International Journal of Educational Technology in Higher Education , 16 , #10.
Arthur, L. (2009). From performativity to professionalism: Lecturers’ responses to student feedback. Teaching in Higher Education , 14 (4), 441–454.
Artino, A. R. (2010). Online or face-to-face learning? Exploring the personal factors that predict students’ choice of instructional format. Internet and Higher Education , 13 , 272–276.
Asoodar, M., Vaezi, S., & Izanloo, B. (2016). Framework to improve e-learner satisfaction and further strengthen e-learning implementation. Computers in Human Behavior , 63 , 704–716.
Bernard, R. M., et al. (2004). How does distance education compare with classroom instruction? A meta-analysis of the empirical literature. Review of Educational Research , 74 (3), 379–439.
Bollinger, D., & Martindale, T. (2004). Key factors for determining student satisfaction in online courses. International Journal on E-Learning , 3 (1), 61–67.
Brinkley-Etzkorn, K. E. (2018). Learning to teach online: Measuring the influence of faculty development training on teaching effectiveness through a TPACK lens. The Internet and Higher Education , 38 , 28–35.
Chickering, A. W., & Gamson, Z. F. (1987). Seven principles for good practice in undergraduate education. AAHE Bulletin , 3 , 7.
Choi, I., Land, S. M., & Turgeon, A. J. (2005). Scaffolding peer-questioning strategies to facilitate metacognition during online small group discussion. Instructional Science , 33 , 483–511.
Clayton, K. E., Blumberg, F. C., & Anthony, J. A. (2018). Linkages between course status, perceived course value, and students’ preferences for traditional versus non-traditional learning environments. Computers & Education , 125 , 175–181.
Cleveland-Innes, M., & Campbell, P. (2012). Emotional presence, learning, and the online learning environment. The International Review of Research in Open and Distributed Learning , 13 (4), 269–292.
Cohen, A., & Baruth, O. (2017). Personality, learning, and satisfaction in fully online academic courses. Computers in Human Behavior , 72 , 1–12.
Crews, T., & Butterfield, J. (2014). Data for flipped classroom design: Using student feedback to identify the best components from online and face-to-face classes. Higher Education Studies , 4 (3), 38–47.
Dawson, P., Henderson, M., Mahoney, P., Phillips, M., Ryan, T., Boud, D., & Molloy, E. (2019). What makes for effective feedback: Staff and student perspectives. Assessment & Evaluation in Higher Education , 44 (1), 25–36.
Drew, C., & Mann, A. (2018). Unfitting, uncomfortable, unacademic: A sociological reading of an interactive mobile phone app in university lectures. International Journal of Educational Technology in Higher Education , 15 , #43.
Durabi, A., Arrastia, M., Nelson, D., Cornille, T., & Liang, X. (2011). Cognitive presence in asynchronous online learning: A comparison of four discussion strategies. Journal of Computer Assisted Learning , 27 (3), 216–227.
Eom, S. B., Wen, H. J., & Ashill, N. (2006). The determinants of students’ perceived learning outcomes and satisfaction in university online education: An empirical investigation. Decision Sciences Journal of Innovative Education , 4 (2), 215–235.
Espasa, A., & Meneses, J. (2010). Analysing feedback processes in an online teaching and learning environment: An exploratory study. Higher Education , 59 (3), 277–292.
Farrell, O., & Brunton, J. (2020). A balancing act: A window into online student engagement experiences. International Journal of Educational Technology in Higher Education , 17 , #25.
Fidalgo, P., Thormann, J., Kulyk, O., et al. (2020). Students’ perceptions on distance education: A multinational study. International Journal of Educational Technology in Higher Education , 17 , #18.
Flores, Ò., del-Arco, I., & Silva, P. (2016). The flipped classroom model at the university: Analysis based on professors’ and students’ assessment in the educational field. International Journal of Educational Technology in Higher Education , 13 , #21.
Garrison, D. R., Anderson, T., & Archer, W. (2003). A theory of critical inquiry in online distance education. Handbook of Distance Education , 1 , 113–127.
Gong, D., Yang, H. H., & Cai, J. (2020). Exploring the key influencing factors on college students’ computational thinking skills through flipped-classroom instruction. International Journal of Educational Technology in Higher Education , 17 , #19.
Gonzalez, C. (2009). Conceptions of, and approaches to, teaching online: A study of lecturers teaching postgraduate distance courses. Higher Education , 57 (3), 299–314.
Grandzol, J. R., & Grandzol, C. J. (2006). Best practices for online business Education. International Review of Research in Open and Distance Learning , 7 (1), 1–18.
Green, S. B., & Salkind, N. J. (2003). Using SPSS: Analyzing and understanding data (3rd ed.). Upper Saddle River: Prentice Hall.
Hair, J. F., Black, W. C., Babin, B. J., & Anderson, R. E. (2014). Multivariate data analysis: Pearson new international edition . Essex: Pearson Education Limited.
Harjoto, M. A. (2017). Blended versus face-to-face: Evidence from a graduate corporate finance class. Journal of Education for Business , 92 (3), 129–137.
Hong, K.-S. (2002). Relationships between students’ instructional variables with satisfaction and learning from a web-based course. The Internet and Higher Education , 5 , 267–281.
Horvitz, B. S., Beach, A. L., Anderson, M. L., & Xia, J. (2015). Examination of faculty self-efficacy related to online teaching. Innovation Higher Education , 40 , 305–316.
Inside Higher Education and Gallup. (2019). The 2019 survey of faculty attitudes on technology. Author .
Jaggars, S. S., & Xu, D. (2016). How do online course design features influence student performance? Computers and Education , 95 , 270–284.
Joo, Y. J., Lim, K. Y., & Kim, E. K. (2011). Online university students’ satisfaction and persistence: Examining perceived level of presence, usefulness and ease of use as predictor in a structural model. Computers & Education , 57 (2), 1654–1664.
Jung, I. (2011). The dimensions of e-learning quality: From the learner’s perspective. Educational Technology Research and Development , 59 (4), 445–464.
Kay, R., MacDonald, T., & DiGiuseppe, M. (2019). A comparison of lecture-based, active, and flipped classroom teaching approaches in higher education. Journal of Computing in Higher Education , 31 , 449–471.
Kehrwald, B. (2008). Understanding social presence in text-based online learning environments. Distance Education , 29 (1), 89–106.
Kintu, M. J., Zhu, C., & Kagambe, E. (2017). Blended learning effectiveness: The relationship between student characteristics, design features and outcomes. International Journal of Educational Technology in Higher Education , 14 , #7.
Kuo, Y.-C., Walker, A. E., Schroder, K. E., & Belland, B. R. (2013). Interaction, internet self-efficacy, and self-regulated learning as predictors of student satisfaction in online education courses. Internet and Education , 20 , 35–50.
Lange, C., & Costley, J. (2020). Improving online video lectures: Learning challenges created by media. International Journal of Educational Technology in Higher Education , 17 , #16.
le Roux, I., & Nagel, L. (2018). Seeking the best blend for deep learning in a flipped classroom – Viewing student perceptions through the Community of Inquiry lens. International Journal of Educational Technology in Higher Education , 15 , #16.
Lee, H.-J., & Rha, I. (2009). Influence of structure and interaction on student achievement and satisfaction in web-based distance learning. Educational Technology & Society , 12 (4), 372–382.
Lee, Y., Stringer, D., & Du, J. (2017). What determines students’ preference of online to F2F class? Business Education Innovation Journal , 9 (2), 97–102.
Legon, R., & Garrett, R. (2019). CHLOE 3: Behind the numbers . Published online by Quality Matters and Eduventures. https://www.qualitymatters.org/sites/default/files/research-docs-pdfs/CHLOE-3-Report-2019-Behind-the-Numbers.pdf
Liaw, S.-S., & Huang, H.-M. (2013). Perceived satisfaction, perceived usefulness and interactive learning environments as predictors of self-regulation in e-learning environments. Computers & Education , 60 (1), 14–24.
Lu, F., & Lemonde, M. (2013). A comparison of online versus face-to-face students teaching delivery in statistics instruction for undergraduate health science students. Advances in Health Science Education , 18 , 963–973.
Lundin, M., Bergviken Rensfeldt, A., Hillman, T., Lantz-Andersson, A., & Peterson, L. (2018). Higher education dominance and siloed knowledge: a systematic review of flipped classroom research. International Journal of Educational Technology in Higher Education , 15 (1).
Macon, D. K. (2011). Student satisfaction with online courses versus traditional courses: A meta-analysis . Dissertation: Northcentral University, CA.
Mann, J., & Henneberry, S. (2012). What characteristics of college students influence their decisions to select online courses? Online Journal of Distance Learning Administration , 15 (5), 1–14.
Mansbach, J., & Austin, A. E. (2018). Nuanced perspectives about online teaching: Mid-career senior faculty voices reflecting on academic work in the digital age. Innovative Higher Education , 43 (4), 257–272.
Marks, R. B., Sibley, S. D., & Arbaugh, J. B. (2005). A structural equation model of predictors for effective online learning. Journal of Management Education , 29 (4), 531–563.
Martin, F., Wang, C., & Sadaf, A. (2018). Student perception of facilitation strategies that enhance instructor presence, connectedness, engagement and learning in online courses. Internet and Higher Education , 37 , 52–65.
Maycock, K. W. (2019). Chalk and talk versus flipped learning: A case study. Journal of Computer Assisted Learning , 35 , 121–126.
McGivney-Burelle, J. (2013). Flipping Calculus. PRIMUS: Problems, Resources, and Issues in Mathematics Undergraduate Studies , 23 (5), 477–486.
Mohammadi, H. (2015). Investigating users’ perspectives on e-learning: An integration of TAM and IS success model. Computers in Human Behavior , 45 , 359–374.
Nair, S. S., Tay, L. Y., & Koh, J. H. L. (2013). Students’ motivation and teachers’ teaching practices towards the use of blogs for writing of online journals. Educational Media International , 50 (2), 108–119.
Nguyen, T. (2015). The effectiveness of online learning: Beyond no significant difference and future horizons. MERLOT Journal of Online Learning and Teaching , 11 (2), 309–319.
Ni, A. Y. (2013). Comparing the effectiveness of classroom and online learning: Teaching research methods. Journal of Public Affairs Education , 19 (2), 199–215.
Nouri, J. (2016). The flipped classroom: For active, effective and increased learning – Especially for low achievers. International Journal of Educational Technology in Higher Education , 13 , #33.
O’Neill, D. K., & Sai, T. H. (2014). Why not? Examining college students’ reasons for avoiding an online course. Higher Education , 68 (1), 1–14.
O'Flaherty, J., & Phillips, C. (2015). The use of flipped classrooms in higher education: A scoping review. The Internet and Higher Education , 25 , 85–95.
Open & Distance Learning Quality Council (2012). ODLQC standards . England: Author https://www.odlqc.org.uk/odlqc-standards .
Ortagus, J. C. (2017). From the periphery to prominence: An examination of the changing profile of online students in American higher education. Internet and Higher Education , 32 , 47–57.
Otter, R. R., Seipel, S., Graef, T., Alexander, B., Boraiko, C., Gray, J., … Sadler, K. (2013). Comparing student and faculty perceptions of online and traditional courses. Internet and Higher Education , 19 , 27–35.
Paechter, M., Maier, B., & Macher, D. (2010). Online or face-to-face? Students’ experiences and preferences in e-learning. Internet and Higher Education , 13 , 292–329.
Prinsloo, P. (2016). (re)considering distance education: Exploring its relevance, sustainability and value contribution. Distance Education , 37 (2), 139–145.
Quality Matters (2018). Specific review standards from the QM higher Education rubric , (6th ed., ). MD: MarylandOnline.
Richardson, J. C., Maeda, Y., Lv, J., & Caskurlu, S. (2017). Social presence in relation to students’ satisfaction and learning in the online environment: A meta-analysis. Computers in Human Behavior , 71 , 402–417.
Rockhart, J. F., & Bullen, C. V. (1981). A primer on critical success factors . Cambridge: Center for Information Systems Research, Massachusetts Institute of Technology.
Rourke, L., & Kanuka, H. (2009). Learning in Communities of Inquiry: A review of the literature. The Journal of Distance Education / Revue de l'éducation à distance , 23 (1), 19–48. Athabasca University Press. Retrieved August 2, 2020 from https://www.learntechlib.org/p/105542/ .
Sebastianelli, R., Swift, C., & Tamimi, N. (2015). Factors affecting perceived learning, satisfaction, and quality in the online MBA: A structural equation modeling approach. Journal of Education for Business , 90 (6), 296–305.
Shen, D., Cho, M.-H., Tsai, C.-L., & Marra, R. (2013). Unpacking online learning experiences: Online learning self-efficacy and learning satisfaction. Internet and Higher Education , 19 , 10–17.
Sitzmann, T., Kraiger, K., Stewart, D., & Wisher, R. (2006). The comparative effectiveness of web-based and classroom instruction: A meta-analysis. Personnel Psychology , 59 (3), 623–664.
So, H. J., & Brush, T. A. (2008). Student perceptions of collaborative learning, social presence and satisfaction in a blended learning environment: Relationships and critical factors. Computers & Education , 51 (1), 318–336.
Song, L., Singleton, E. S., Hill, J. R., & Koh, M. H. (2004). Improving online learning: Student perceptions of useful and challenging characteristics. The Internet and Higher Education , 7 (1), 59–70.
Sun, P. C., Tsai, R. J., Finger, G., Chen, Y. Y., & Yeh, D. (2008). What drives a successful e-learning? An empirical investigation of the critical factors influencing learner satisfaction. Computers & Education , 50 (4), 1183–1202.
Takamine, K. (2017). Michelle D. Miller: Minds online: Teaching effectively with technology. Higher Education , 73 , 789–791.
Tanner, J. R., Noser, T. C., & Totaro, M. W. (2009). Business faculty and undergraduate students’ perceptions of online learning: A comparative study. Journal of Information Systems Education , 20 (1), 29.
Tucker, B. (2012). The flipped classroom. Education Next , 12 (1), 82–83.
Van Wart, M., Ni, A., Ready, D., Shayo, C., & Court, J. (2020). Factors leading to online learner satisfaction. Business Educational Innovation Journal , 12 (1), 15–24.
Van Wart, M., Ni, A., Rose, L., McWeeney, T., & Worrell, R. A. (2019). Literature review and model of online teaching effectiveness integrating concerns for learning achievement, student satisfaction, faculty satisfaction, and institutional results. Pan-Pacific Journal of Business Research , 10 (1), 1–22.
Ventura, A. C., & Moscoloni, N. (2015). Learning styles and disciplinary differences: A cross-sectional study of undergraduate students. International Journal of Learning and Teaching , 1 (2), 88–93.
Vlachopoulos, D., & Makri, A. (2017). The effect of games and simulations on higher education: A systematic literature review. International Journal of Educational Technology in Higher Education , 14 , #22.
Wang, Y., Huang, X., & Schunn, C. D. (2019). Redesigning flipped classrooms: A learning model and its effects on student perceptions. Higher Education , 78 , 711–728.
Wingo, N. P., Ivankova, N. V., & Moss, J. A. (2017). Faculty perceptions about teaching online: Exploring the literature using the technology acceptance model as an organizing framework. Online Learning , 21 (1), 15–35.
Xu, D., & Jaggars, S. S. (2014). Performance gaps between online and face-to-face courses: Differences across types of students and academic subject areas. Journal of Higher Education , 85 (5), 633–659.
Young, S. (2006). Student views of effective online teaching in higher education. American Journal of Distance Education , 20 (2), 65–77.
Zawacki-Richter, O., & Naidu, S. (2016). Mapping research trends from 35 years of publications in distance Education. Distance Education , 37 (3), 245–269.
No external funding.
Authors and affiliations.
Development for the JHB College of Business and Public Administration, 5500 University Parkway, San Bernardino, California, 92407, USA
Montgomery Van Wart, Anna Ni, Pamela Medina, Jesus Canelon, Melika Kordrostami, Jing Zhang & Yu Liu
All authors contributed equally. The author(s) read and approved the final manuscript.
Correspondence to Montgomery Van Wart .
Competing interests.
We have no competing interests.
Publisher’s note.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .
Cite this article.
Van Wart, M., Ni, A., Medina, P. et al. Integrating students’ perspectives about online learning: a hierarchy of factors. Int J Educ Technol High Educ 17 , 53 (2020). https://doi.org/10.1186/s41239-020-00229-8
Received : 29 April 2020
Accepted : 30 July 2020
Published : 02 December 2020
The first step in any research project is framing the research question. It can be considered the core of any systematic investigation, as the research outcomes are tied to asking the right questions. This primary question thus sets the pace for your research, as it helps you collect relevant and insightful information that ultimately influences your work.
Typically, the research question guides the stages of inquiry, analysis, and reporting. Depending on the use of quantitative or qualitative data, research questions are broadly categorized into quantitative or qualitative research questions. Both types of research questions can be used independently or together, considering the overall focus and objectives of your research.
A research question is a clear, focused, concise, and arguable question on which your research and writing are centered. 1 It states various aspects of the study, including the population and variables to be studied and the problem the study addresses. These questions also set the boundaries of the study, ensuring cohesion.
Designing the research question is a dynamic process where the researcher can change or refine the research question as they review related literature and develop a framework for the study. Depending on the scale of your research, the study can include single or multiple research questions.
A good research question has the following features:
A poorly formulated research question poses several risks. 1
Now that you know “What is a research question?”, let’s look at the different types of research questions.
Depending on the type of research to be done, research questions can be classified broadly into quantitative, qualitative, or mixed-methods studies. Knowing the type of research helps determine the best type of research question that reflects the direction and epistemological underpinnings of your research.
The structure and wording of quantitative 2 and qualitative research 3 questions differ significantly. The quantitative study looks at causal relationships, whereas the qualitative study aims at exploring a phenomenon.
Quantitative research questions can be further categorized into descriptive, comparative, and relationship, as explained in the Table below.
Descriptive research questions | These measure the responses of a study’s population toward a particular question or variable. Common descriptive research questions will begin with “How much?”, “How regularly?”, “What percentage?”, “What time?”, “What is?” Research question example: How often do you buy mobile apps for learning purposes? |
Comparative research questions | These investigate differences between two or more groups for an outcome variable. For instance, the researcher may compare groups with and without a certain variable. Research question example: What are the differences in attitudes towards online learning between visual and Kinaesthetic learners? |
Relationship research questions | These explore and define trends and interactions between two or more variables. These investigate relationships between dependent and independent variables and use words such as “association” or “trends.” Research question example: What is the relationship between disposable income and job satisfaction amongst US residents? |
Qualitative research questions are adaptable, non-directional, and more flexible. They concern broad areas of research or more specific areas of study and aim to discover, explain, or explore a phenomenon. They are further classified as follows:
Exploratory questions | These questions look to understand something without influencing the results. The aim is to learn more about a topic without attributing bias or preconceived notions. Research question example: What are people’s thoughts on the new government? |
Experiential questions | These questions focus on understanding individuals’ experiences, perspectives, and subjective meanings related to a particular phenomenon. They aim to capture personal experiences and emotions. Research question example: What are the challenges students face during their transition from school to college? |
Interpretive Questions | These questions investigate people in their natural settings to help understand how a group makes sense of shared experiences of a phenomenon. Research question example: How do you feel about ChatGPT assisting student learning? |
Mixed-methods studies use both quantitative and qualitative research questions to address the research problem. Mixed methods provide a more complete picture than standalone quantitative or qualitative research, as they integrate the benefits of both approaches. Mixed-methods research is often used in multidisciplinary settings and in complex situational or societal research, especially in the behavioral, health, and social science fields.
A good research question should be clear and focused to guide your research. It should synthesize multiple sources to present your unique argument, and should ideally be something that you are interested in. But avoid questions that can be answered in a few factual statements. The following are the main attributes of a good research question.
The importance of research questions cannot be overstated. When drafting a research question, use the following frameworks to guide the components of your question and ease the process. 4
For those struggling to understand how to write a research question, these simple steps can help simplify the process.
Topic selection | Choose a broad topic, such as “learner support” or “social media influence” for your study. Select topics of interest to make research more enjoyable and stay motivated. |
Preliminary research | The goal is to refine and focus your research question. The following strategies can help: Skim various scholarly articles. List subtopics under the main topic. List possible research questions for each subtopic. Consider the scope of research for each of the research questions. Select research questions that are answerable within a specific time and with available resources. If the scope is too large, repeat looking for sub-subtopics. |
Audience | When choosing what to base your research on, consider your readers. For college papers, the audience is academic. Ask yourself if your audience may be interested in the topic you are thinking about pursuing. Determining your audience can also help refine the importance of your research question and focus on items related to your defined group. |
Generate potential questions | Ask open-ended “how?” and “why?” questions to find a more specific research question. Gap-spotting to identify research limitations, problematization to challenge assumptions made by others, or using personal experiences to draw on issues in your industry can be used to generate questions. |
Review brainstormed questions | Evaluate each question to check their effectiveness. Use the FINER model to see if the question meets all the research question criteria. |
Construct the research question | Multiple frameworks, such as PICOT and PEA, are available to help structure your research question. The frameworks listed below can help you with the necessary information for generating your research question. |
Framework | Attributes of each framework |
FINER | Feasible Interesting Novel Ethical Relevant |
PICOT | Population or problem Intervention or indicator being studied Comparison group Outcome of interest Time frame of the study |
PEO | Population being studied Exposure to preexisting conditions Outcome of interest |
The following are some bad and good research question examples.
Unclear: How does social media affect student growth? |
Clear: What effect does the daily use of Twitter and Facebook have on the career development goals of students? |
Explanation: The first research question is unclear because of the vagueness of “social media” as a concept and the lack of specificity. The second question is specific and focused, and its answer can be discovered through data collection and analysis. |
Simple: Has there been an increase in the number of gifted children identified? |
Complex: What practical techniques can teachers use to identify and guide gifted children better? |
Explanation: A simple “yes” or “no” statement easily answers the first research question. The second research question is more complicated and requires the researcher to collect data, perform in-depth data analysis, and form an argument that leads to further discussion. |
References:
IvyPanda. (2024, February 29). 161 Online Education Topics and Essay Examples. https://ivypanda.com/essays/topic/online-learning-essay-topics/
"161 Online Education Topics and Essay Examples." IvyPanda , 29 Feb. 2024, ivypanda.com/essays/topic/online-learning-essay-topics/.
IvyPanda . (2024) '161 Online Education Topics and Essay Examples'. 29 February.
IvyPanda . 2024. "161 Online Education Topics and Essay Examples." February 29, 2024. https://ivypanda.com/essays/topic/online-learning-essay-topics/.
1. IvyPanda . "161 Online Education Topics and Essay Examples." February 29, 2024. https://ivypanda.com/essays/topic/online-learning-essay-topics/.
Bibliography
IvyPanda . "161 Online Education Topics and Essay Examples." February 29, 2024. https://ivypanda.com/essays/topic/online-learning-essay-topics/.
Since the rise of generative AI (GenAI) in late 2022, many scholars and thought leaders have wondered about its impact on higher education. This study used a survey methodology (three multiple-choice questions and one open-ended question) to explore the perspectives of a nationally representative sample of 1327 US administrators and faculty, asking questions to understand how much change they anticipate as a result of advancements in artificial intelligence (AI) technology, how prepared their institution is for such change, and what aspects of higher education they expect to change. The researchers used Kranzberg’s laws of technology as a lens to interpret the findings and guide the subsequent discussion about how AI might impact higher education. The findings showed that the vast majority of participants expect that AI will change their institution over the next five years and that the majority of participants do not feel that their institution is ready for change. The comments left in response to the open-ended question fell into one of four themes: concerns about academic integrity and rigor, issues related to AI integration (e.g., anticipated benefits, practices in teaching and learning, issues related to preparedness, and the expected scope of change), the feeling that the current AI discourse is merely hype, and feelings of uncertainty. Ultimately, AI has the potential to be both advantageous and disadvantageous to teaching and learning, with the benefits and challenges of its use varying by context.
Copyright (c) 2024 Nicole Johnson, Jeff Seaman, Julia Seaman
Table 1 summarizes the 12 topics in online learning research identified in the current study and compares them to Martin et al.'s (2020) study, as shown in Figure 1. The top research theme in our study was engagement (22.5%), followed by course design and development (12.6%) and course technology (11.0%).
Tallent-Runnels et al. (2006) reviewed research from the late 1990s to the early 2000s, Berge and Mrozowski (2001) reviewed research from 1990 to 1999, and Zawacki-Richter et al. (2009) reviewed research from 2000 to 2008 on distance education and online learning. Table 1 shows the research themes from previous systematic reviews of online learning research.