International Review of Research in Open and Distributed Learning

Revising and Validating the Community of Inquiry Instrument for MOOCs and Other Global Online Courses

Globally, online course enrollments have grown, and English is often used as a lingua franca for instruction. The Community of Inquiry (CoI) framework can inform the creation of more supportive, interaction-rich online learning environments. However, the framework and its accompanying validated instrument were created in North America, limiting researchers’ ability to use the instrument in courses where participants have varying levels of English language proficiency. We revised the CoI instrument so it could be more easily read and understood by individuals whose native language is not English. Using exploratory and confirmatory factor analyses (EFA and CFA) on data obtained from global online courses and MOOCs, we found the revised instrument had good fit statistics once seven items were removed. This study expands the usability of the CoI instrument beyond the original and translated versions, and provides an example of adapting and validating an existing instrument for global courses.


Introduction
Online learning has grown dramatically despite relatively high attrition rates (Bawa, 2016). Garrison et al.'s (2000) Community of Inquiry (CoI) framework highlights how outcomes can improve through meaningful interactions. Arbaugh et al. (2008) developed and validated an instrument that measured the CoI constructs (teaching presence, social presence, and cognitive presence), allowing researchers to better identify factors that impact outcomes. The "overwhelming majority" of research using the instrument has been conducted in North America (Stenbom, 2018, p. 24), and it is important to ensure the instrument is also appropriate for courses with a global audience. Since it is not practical to provide the survey in every language, especially in large global courses such as massive open online courses (MOOCs), it is important to develop an English version of the survey that is easily comprehensible at varying levels of English language proficiency. In this research, we revised the CoI instrument to be comprehensible for culturally and linguistically diverse English language educators and validated it using survey responses following teacher professional development courses offered globally. Specifically, we revised the CoI instrument to the B1 level of the Common European Framework of Reference (CEFR) for English (i.e., lower intermediate English language proficiency).
We sought to answer the following research question:

• How well do revisions of the CoI survey items to the B1 level of the CEFR for English maintain the construct validity of the original CoI survey for a global audience with varying levels of English language proficiency?

Growth of Online Learning
At universities outside the United States, online course enrollments have been growing rapidly (Xiao, 2018), a growth likely to accelerate in the wake of emergency remote teaching during the COVID-19 pandemic (Teräs et al., 2020).
MOOCs have also impacted global online learning in the last decade because they "offer free or low-cost education to anyone, anytime, anywhere, and on a massive scale" (Lowenthal & Hodges, 2015, p. 84).
MOOCs have been categorized based on learning interactions and their dominant learning strategies. Connectivist MOOCs (cMOOCs) emphasize learner-learner interaction and community, while extended MOOCs (xMOOCs) focus on learner-content interaction and a cognitive-behaviorist approach to learning (Anders, 2015). Blended MOOCs (bMOOCs) combine online learning with in-person meetings to discuss and apply learning (Yousef et al., 2015).
MOOCs have the potential to serve as scalable solutions to the challenges and demands of teacher professional learning. For instance, pre-service teachers from Israel expressed positive attitudes towards learning content, pedagogical, and technological knowledge after enrolling in an international MOOC for credit (Donitsa-Schmidt & Topaz, 2018). Both pre-service and in-service teachers in the US have demonstrated personal and professional growth after enrolling and participating in a professional development MOOC (Phan & Zhu, 2020). In-service elementary school teachers participating in a teacher professional development MOOC in Greece enhanced their self-efficacy beliefs compared to teachers who did not participate in the course (Tzovla et al., 2021). As teachers are expected to adjust to rapidly evolving national education policies (Zein, 2019) and meet the increasing demands for flexible and inclusive education for diverse learners, MOOCs can become a tool for open education and teacher professional development for all (Koukis & Jimoyiannis, 2019).
Language MOOCs are dedicated to online instruction in second or foreign languages. They can be effectively used to teach all aspects of language, especially reading and listening skills (Sallam et al., 2020). MOOCs designed to improve teachers' instructional practices in teaching English as a second language are largely offered in English (Finardi & Tyler, 2015). English MOOCs are especially popular with English language learners (ELLs; see Wilson & Gruzd, 2014), who commonly enroll to improve their English language skills as well as their economic, social, and geographic mobility (Uchidiuno et al., 2018). While there are benefits to offering courses in English, those who design and develop MOOCs should take into consideration the English proficiency of their learners and adjust the language level of the MOOC without sacrificing content.

CoI Framework Supporting Online Learning Performance
Online courses tend to have attrition rates 10 to 20% higher than in-person courses (Bawa, 2016). Attrition rates are much worse in MOOCs. Fewer than 5% of participants enrolled in MOOCs offered by MIT and Harvard University passed their course. The pass rate rose to nearly 16% when students indicated they intended to pass the MOOC, and only went as high as 50% when students paid a small fee (Reich & Ruipérez-Valiente, 2019a, 2019b). In order to improve course outcomes, many have attempted to strengthen the three presences highlighted in Garrison et al.'s (2000) CoI framework (see Figure 1).
• Cognitive presence referred to "the extent to which the participants in any particular configuration of a community of inquiry are able to construct meaning through sustained communication" (Garrison et al., 2000, p. 89).
• Social presence was seen as a prerequisite for cognitive presence and defined as "the ability of participants in the Community of Inquiry to project their personal characteristics into the community, thereby presenting themselves to the other participants as 'real people'" (Garrison et al., 2000, p. 89). Social presence was indicated by affective expression, open communication, and group cohesion.
• Teaching presence was seen as the binding element because "appropriate cognitive and social presence, and ultimately, the establishment of a critical community of inquiry, is dependent upon the presence of a teacher" (Garrison et al., 2000, p. 96). Teaching presence was indicated by the design and organization of learning activities, facilitation, and direct instruction.

The CoI framework was created following content analyses of discussion board comments. A decade later, Archer (2010), one of the original CoI authors, suggested that "the time has come to build outwards from the firm base established by the many researchers who have applied this framework in the context of online discussions" (p. 69). Stenbom (2018) identified and analyzed 103 journal articles that used the CoI instrument and found that a primary purpose of using the instrument was to gain insight into a variety of aspects of a learning environment or even compare entire courses. Fiock (2020) also reviewed research using the CoI framework and showed that research has focused on a wide range of aspects related to designing and facilitating online courses. Kumar et al. (2011) even applied the CoI framework to the design of an entire online doctoral program. Xing (2019) summarized that the CoI framework has "been widely applied to the design of online courses" (p. 101), including MOOCs (see Thymniou & Tsitouridou, 2021).

Need for a Validated Global Survey
Using 287 online student responses collected from four North American universities, Arbaugh et al. (2008) developed and validated a widely used survey instrument that measured each of the three presences in the CoI framework. Stenbom (2018) reviewed 103 journal articles using the CoI instrument and found that an "overwhelming majority of the studies" were conducted in North America. At the same time, there has been important work using the instrument internationally. For instance, it has been translated and validated in several languages including Portuguese (Moreira et al., 2013), Arabic (Alaulamie, 2014), Korean (Yu & Richardson, 2015), Swedish (Öberg & Nyström, 2016), Chinese (Ma et al., 2017), Spanish (Gil-Jaurena et al., 2019), and French (Heilporn & Lakhal, 2020). However, considering that many global courses such as MOOCs enroll students with many different native languages, offering a survey in the language of instruction is the most logical approach. The original English CoI survey has been used for research in international contexts such as Singapore (Choy & Quek, 2016), South Korea (Kim, 2017), and China (Zhang, 2020). However, in these studies, comprehension of the survey items may have been limited by respondents' language proficiency. Because English is commonly used in international courses (Finardi & Tyler, 2015; Wilson & Gruzd, 2014), the purpose of this research was to create and validate a version of the CoI survey for use in international courses where English is used but is not students' native or primary language. This study aimed to expand the usability of the CoI instrument beyond the original and translated versions. It also provided an example of adapting and validating an existing instrument without translating it into another language. Factor analytic techniques have been shown to provide evidence for instrument validity (Brown, 2015; Tabachnick & Fidell, 2019).

Research Context and Background
Following a grant from the US Department of State, we developed and freely offered three versions of an online professional development course to teachers of English whose students' ages ranged from 3 to 10, in countries where English was not the dominant language. The first version was a global online course (GOC) with eight weekly modules and an enrollment cap of 25 students, allowing for weekly facilitated discussions and personalized feedback on assignments. In total, we offered 25 sections of the GOC to 609 students from 89 countries. All students who applied were nominated by their local US embassy, and selected and enrolled by the US Department of State. We also offered the GOC's first five modules to students in more than 100 countries as two different versions of a MOOC. The first MOOC maintained a set start and end date with weekly deadlines. The second MOOC provided students with flexibility in their pacing so long as they finished the modules within the 12-week period in which it was offered. In total, the five-week MOOC enrolled 21,232 students (7,221 successfully completed the course); 8,691 students enrolled in the more flexible MOOC (1,494 successfully completed the course). Individualized instructor feedback was not provided on submitted assignments, but module discussions were facilitated by the instructors and 20 top-performing GOC students. Similar to the GOC, the instructor posted regular announcements and reminders to help motivate students. As expected, student engagement and completion varied across the three versions of the course. Table 1 outlines the completion rates both for the total students enrolled and for those students who completed at least one activity; we defined the latter as active students.

Data Collection
Since modules 1 to 5 were nearly identical across all three course formats, all participants were invited to voluntarily complete the CoI instrument in Module 5. A course page provided an invitation to participate in our study, a description of our survey research following IRB requirements, and a link to a Qualtrics survey. The Qualtrics survey included respondents' informed consent to participate in research, demographic information (e.g., gender, age, country, teaching position, number of years teaching), and our revised CoI survey items. The original CoI survey was developed and validated with English-speaking students from North America, so understandably, it was written at a level higher than CEFR B1, the level required for use in the course. As a result, three members of the research team worked collaboratively to revise the items. All three members had previously used the CoI framework in research. Additionally, one team member was an EFL expert and another was a non-native English speaker who had also been trained as an EFL teacher. The revised items were written at the B1 level while still addressing the intended CoI constructs. No changes were made to the response scale (see Table 2).

TP3
Original: The instructor provided clear instructions on how to participate in course learning activities.
Revised: The teacher gave clear instructions on how to complete course activities.

TP4
Original: The instructor clearly communicated important due dates/time frames for learning activities.
Revised: The teacher clearly communicated about important due dates.

TP5
Original: The instructor was helpful in identifying areas of agreement and disagreement on course topics that helped me to learn.
Revised: The teacher helped explain difficult topics to help me learn.

TP6
Original: The instructor was helpful in guiding the class towards understanding course topics in a way that helped me clarify my thinking.
Revised: The teacher helped me understand my thinking about course topics.

TP7
Original: The instructor helped to keep course participants engaged and participating in productive dialogue.
Revised: The teacher helped students be engaged and participate in dialogue.

TP8
Original: The instructor helped keep the course participants on task in a way that helped me to learn.
Revised: The teacher helped keep students on task, and it helped me learn.

TP9
Original: The instructor encouraged course participants to explore new concepts in this course.
Revised: The teacher made me want to learn new things.

TP10
Original: Instructor actions reinforced the development of a sense of community among course participants.
Revised: The teacher made students feel as part of a community.

TP11
Original: The instructor helped to focus discussion on relevant issues in a way that helped me to learn.
Revised: The teacher set up discussions to help me learn.

TP12
Original: The instructor provided feedback that helped me understand my strengths and weaknesses relative to the course's goals and objectives.
Revised: The teacher provided feedback that helped me learn.

TP13
Original: The instructor provided feedback in a timely fashion.
Revised: The teacher provided feedback on time.

SP1
Original: Getting to know other course participants gave me a sense of belonging in the course.
Revised: Getting to know other students made me feel part of the course.

SP2
Original: I was able to form distinct impressions of some course participants.
Revised: I got to know some students.

SP3
Original: Online or Web-based communication is an excellent medium for social interaction.
Revised: Online communication is an excellent way to interact with people.

CP3
Original: I felt motivated to explore content-related questions.
Revised: I felt motivated to explore the questions asked.

CP4
Original: I utilized a variety of information sources to explore problems posed in this course.
Revised: I used many resources to explore questions asked.

CP5
Original: Brainstorming and finding relevant information helped me resolve content-related questions.
Revised: Sharing and finding information with classmates helped me find answers to questions asked.

CP6
Original: Online discussions were valuable in helping me appreciate different perspectives.
Revised: Online discussions helped me see different perspectives.

CP7
Original: Combining new information helped me answer questions raised in course activities.
Revised: Combining all of the new information helped me answer questions asked in course activities.

CP8
Original: Learning activities helped me construct explanations/solutions.
Revised: I can apply the knowledge created in this course to my work.
To achieve the B1 level, items were revised to use more familiar terms and grammatical structures that would also be less ambiguous for participants coming from diverse linguistic and cultural backgrounds. For example, we replaced "instructor" with "teacher," a term more familiar to teachers working in classroom contexts. Some verbs were simplified, such as changing "conversing" to the more familiar "communicating." Some original items had complex sentences, such as "The instructor was helpful in guiding the class towards understanding course topics in a way that helped me clarify my thinking." We adapted this item by making it more personalized and simplifying the sentence structure: "The teacher helped me understand my thinking about course topics." In addition, we avoided using words that have a different meaning in other contexts (e.g., the word "fashion"). These types of revisions preserved the meaning and intent of the original survey items while making them more comprehensible to global course participants.

Data Analysis
We randomly divided the data into two samples. The first half (n = 744) was used to conduct an exploratory factor analysis (EFA). The second half (n = 743) was used to confirm the factor structure with a confirmatory factor analysis (CFA). Gorsuch (1983) explained that EFA determines "factors that best reproduce the variables under the maximum likelihood conditions, [while CFA] tests specific hypotheses regarding the nature of the factors" (p. 129). We first conducted an EFA to determine the items that best described the constructs. EFAs are used to assess the factor structure of a set of variables (data). Whenever these data are measured at a categorical level (e.g., ordinal, polytomous), Brown (2015) proposed the use of a robust weighted least squares (WLSMV) estimator. An oblique rotation method (geomin) was applied, assuming the extracted factors were correlated. Rotating the factor matrix allowed for a more interpretable solution (Tabachnick & Fidell, 2019). The suitability of the correlation matrix and the adequacy of the sample were assessed using Bartlett's test of sphericity and the Kaiser-Meyer-Olkin measure (KMO; Kaiser, 1970). KMO values greater than .5 are acceptable and values greater than .9 are superb (Field, 2009). Model fit was evaluated following Brown (2015). A model is deemed to have good fit if RMSEA ≤ 0.05 (Hu & Bentler, 1999), acceptable fit once the upper bound of its confidence interval is less than or equal to 0.10 (Kline, 2011), and good fit when SRMR values are low (≤ .05; Schreiber et al., 2006). CFI and TLI are goodness-of-fit indices, where values in the range of .90 to .95 generally represent acceptable model fit (Brown, 2015).
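As a rough illustration of the two adequacy checks described above, the sketch below computes Bartlett's test of sphericity and the KMO measure from first principles on synthetic Likert-style data. This is not the authors' analysis pipeline (the EFA itself used a WLSMV estimator with geomin rotation, typically run in specialized software such as Mplus); the simulated data, sample size, and function names are illustrative assumptions.

```python
import numpy as np
from scipy import stats

def bartlett_sphericity(X):
    """Bartlett's test: H0 is that the correlation matrix is an identity
    matrix, i.e., items are uncorrelated and unsuitable for factoring."""
    n, p = X.shape
    R = np.corrcoef(X, rowvar=False)
    chi2 = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
    df = p * (p - 1) / 2
    return chi2, df, stats.chi2.sf(chi2, df)

def kmo(X):
    """Kaiser-Meyer-Olkin sampling adequacy (> .5 acceptable,
    > .9 'superb' per Field, 2009)."""
    R = np.corrcoef(X, rowvar=False)
    inv_R = np.linalg.inv(R)
    # Partial correlations come from the inverse correlation matrix
    d = np.sqrt(np.outer(np.diag(inv_R), np.diag(inv_R)))
    P = -inv_R / d
    np.fill_diagonal(R, 0)
    np.fill_diagonal(P, 0)
    return (R ** 2).sum() / ((R ** 2).sum() + (P ** 2).sum())

rng = np.random.default_rng(42)
# Simulate 500 respondents on 8 items driven by one latent trait,
# so the items are substantially intercorrelated
latent = rng.normal(size=(500, 1))
X = latent + 0.7 * rng.normal(size=(500, 8))

chi2, df, p_val = bartlett_sphericity(X)
print(f"Bartlett chi2({int(df)}) = {chi2:.1f}, p = {p_val:.4f}")
print(f"KMO = {kmo(X):.3f}")
```

Because the simulated items share a common factor, Bartlett's test is highly significant and the KMO value is well above the .5 acceptability threshold, mirroring the kind of preliminary results reported in the Findings section.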
The main premise of factor analysis is to extract common variance among items. As such, reporting the total amount of variance extracted is an important consideration in the factor analytic process. Tabachnick and Fidell (2019) suggested that the final factor solution should explain at least 50% of the total item variance. Additionally, the amount of variance in each item explained by the retained factors (communality) should also be reported (Field, 2009). We were guided by Tabachnick and Fidell (2019), using a .50 cutoff for communality coefficients (h²) and an average of at least .60 across all items (Field, 2009).
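The communality and variance-explained criteria above can be illustrated with a small numeric sketch. The loadings matrix below is hypothetical (not the study's actual results) and assumes an orthogonal solution, where an item's communality is simply the sum of its squared loadings; under an oblique rotation such as geomin, the computation must also account for factor correlations.

```python
import numpy as np

# Hypothetical orthogonal loadings for six items on two factors
# (illustrative numbers only, not the study's reported loadings)
loadings = np.array([
    [0.80, 0.10],
    [0.75, 0.15],
    [0.70, 0.20],
    [0.10, 0.85],
    [0.15, 0.80],
    [0.20, 0.70],
])

# Communality (h^2): variance in each item explained by retained factors
h2 = (loadings ** 2).sum(axis=1)

# Proportion of total (standardized) item variance the solution explains
variance_explained = h2.sum() / loadings.shape[0]

print("h2 per item:", np.round(h2, 3))
print(f"Average communality: {h2.mean():.3f}")
print(f"Variance explained: {variance_explained:.1%}")
```

Both reported criteria (at least 50% of total item variance; an average communality of at least .60) can be checked directly from such a loadings matrix.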
The CFA applied the same fit indices used for EFA; the CFA model was employed to assess the empirical factor structure found through the EFA.

Findings
The WLSMV extraction method was used to conduct the EFA. Preliminary analysis indicated that sampling adequacy was superb (KMO = .965). The correlations were also large enough for factor analysis per Bartlett's test [χ²(946) = 24676.05, p < .001]. We generated four models to determine the best structure for the data. The fit indices for the models are presented in Table 3. The first two models (one-factor and two-factor models) did not meet the preset criteria for model fit, with CFI and TLI below the preferred .95 cutoff. SRMR and RMSEA were also out of range. The third and fourth models showed more acceptable fit and were further examined; although RMSEA values were greater than 0.05 for both models, the upper bounds of their confidence intervals indicated acceptable fit. Overall, the 3-factor and 4-factor models better represented the data. Further assessment of these models found that in the 4-factor model, several items had severe cross-loadings. As a team, we discussed the wording of these items and potential reasons for the cross-loadings. We decided these items were problematic and therefore deleted them. The items remaining in the analysis still covered the theoretical representation of the constructs being measured. In an iterative process, we removed individual items one at a time so we could observe the correlations at each iteration. As a result of these analyses, we removed seven items (i.e., TP2, TP13, SP1, SP2, SP9, CP5, and CP11).
Following the theoretically informed removal of those items, we regenerated the four models. Those fit indices are presented in Table 4. Once again, the 3-factor and 4-factor models were better representations of the data. We tabled the 27-item factor loadings of both models (Table 5). Further inspection of the 4-factor model revealed more cross-loadings between factors as well as items with no factor loadings greater than .40. For example, in the 4-factor solution, item CP10 could be a function of the second and fourth factors. After discussing the two models, we opted for the 3-factor model, as its items better fit the theorized teaching presence (n = 11), social presence (n = 6), and cognitive presence (n = 10). This model had the simplest structure, with loadings all greater than .40 (Meyers et al., 2017) and adequate fit indices, and its three factors explained 73.81% of the variance in all the items (more than the 50% recommended by Tabachnick & Fidell, 2019). Additionally, the average communality across the retained items was .60, suggesting that we had explained, on average, 60% of the variance across all the items included in the three retained factors. Finally, we conducted the CFA to assess the factor structure with a unique sample. First, we assessed the internal consistency of the subscales. We employed Cronbach's (1951) coefficient with the traditional .70 recommendation (Nunnally & Bernstein, 1994). Higher values reflect higher internal consistency (i.e., the items share a large amount of variance). We found that the items for teaching presence (α = .950, 95% CI [.945, .955]), social presence (α = .892, 95% CI [.880, .903]), and cognitive presence (α = .949, 95% CI [.943, .954]) reliably measured the constructs. The results of the CFA revealed that the factor structure from the EFA adequately represented the data: CFI = .974, TLI = .972, and RMSEA = 0.067, 90% CI [0.063, 0.070]. The factor loadings are presented in Table 6. Moderate to high relationships existed across the three factors: teaching presence and social presence (r = .614), teaching presence and cognitive presence (r = .705), and social presence and cognitive presence (r = .679).
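The internal consistency check described above can be sketched as follows. Cronbach's alpha compares the sum of the individual item variances with the variance of the total score. The code below is a minimal illustration on simulated Likert responses, not the authors' actual analysis (which also reported confidence intervals for alpha, a step omitted here); the function name and simulated data are illustrative assumptions.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's (1951) alpha for an (n_respondents, k_items) array."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
# Simulate 300 respondents answering six 5-point Likert items that all
# reflect a single latent trait, so items are substantially correlated
latent = rng.normal(size=(300, 1))
raw = latent + 0.6 * rng.normal(size=(300, 6))
likert = np.clip(np.round(raw + 3), 1, 5)

alpha = cronbach_alpha(likert)
print(f"alpha = {alpha:.3f}")
```

With six strongly intercorrelated items, alpha lands well above the .70 threshold; in the study, this check was applied separately to each of the three presence subscales.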

Table 6
Factor Loadings for the Three-Factor Confirmatory Factor Analysis

Language Considerations with Global Online Research
This study developed through the need for a CoI instrument written for a global audience using English as a lingua franca. The participants in our online courses were from over 80 countries and enrolled in our courses to learn more about English language teaching. Based on our grant-funded program parameters, we developed course materials, including the survey, in English at the CEFR B1 level to ensure participant understanding. This brought to light the importance of language considerations and comprehensibility when conducting global online research. Knowing that more and more students with varying levels of English language proficiency are enrolling in global courses offered in English, such as MOOCs, instructional designers and facilitators should carefully consider the language level required to participate in all aspects of their courses. This is especially true when collecting data from students for evaluative and research purposes, since important decisions are often based on these data and the data should be valid.
This study successfully adapted survey items to the CEFR B1 level, which was high enough to maintain the basic meaning of the survey items while also being more comprehensible to respondents who were not native English speakers. More research is needed to examine processes for lowering the language level of existing survey instruments.

Using the CoI Survey in Global Contexts
Since global courses are most frequently offered in English, the CoI instrument needed to be examined critically and revised to ensure the utility and validity of the data it provided. After revising the CoI instrument to the CEFR B1 level, we administered it to students enrolled in one of three course formats: GOC sections with reasonably low instructor-to-student ratios (1:25), a five-week MOOC, and a flexible MOOC. This study showed success in adapting the survey items in the CoI instrument to the CEFR B1 level, which can be useful for other CoI studies conducted in global contexts. However, more work is needed to examine the validity of this instrument in international online learning environments.
Although there are accepted processes for translating and validating surveys (see Gavriilidou & Mitits, 2016), these are often not feasible for research in global courses or courses in multicultural contexts with a high level of linguistic diversity. For instance, the MOOCs examined in this research included participants from over 100 countries. Therefore, the most practical option was to provide a survey in the language of instruction at a level comprehensible across varying levels of language proficiency. However, we found no studies that investigated methods for, or validation of, instruments adapted from one language into the same language with adjustments based on participants' proficiency level.

Contextual and Cultural Aspects of CoI Survey Item Analysis
Using an EFA and CFA, we found the instrument had good fit statistics once seven items (TP2, TP13, SP1, SP2, SP9, CP5, and CP11) were removed. There are some possible contextual reasons why the removed items did not load as expected. Social presence is a construct that describes student perceptions and attributes of learner-learner interactions (Garrison et al., 2000). The CoI instrument was originally developed for use in small traditional online courses with high levels of interaction within small groups of students (Arbaugh et al., 2008), allowing students to develop a level of familiarity that is unlikely to form in a MOOC. Additionally, we administered the survey to students following only five weeks of participation, and students' perceptions regarding other students may have changed if the course offerings were longer.
Based on these two contextual aspects, class size and length of instruction, that distinguish a traditional online course from a short-term MOOC, it is understandable that the three social presence items that measured students' ability to form relationships or collaborate with others did not fit the model as well as the items that focused on students' comfort communicating online. Interestingly, two of the removed items, SP1 and SP2, align with the social presence subconstruct of affective expression. This finding supports Poquet et al.'s (2018) examination of social presence in three MOOCs, which also found students tended to respond lower to SP1 and SP2. Similarly, in Kovanović et al. (2018), an EFA using student responses from five MOOCs found that the data fit best when affective expression was its own factor. As a result, additional research is needed to examine the development of affective expression in MOOCs.
Furthermore, the data from the two teaching presence items that focused on instructor-provided feedback did not fit the model as well as the other teaching presence items did. One important limitation of MOOCs is the quality of the feedback students receive. In MOOCs with thousands of students, where the content experts typically do not have time to provide much feedback to individual students, it makes sense that the survey items measuring feedback performed differently than the other items, which address activities that can be accomplished in whole-group interactions. However, because feedback is still important to teaching presence, we decided to keep item TP12, which focused on providing quality feedback. Additional research is needed to explore effective ways to provide feedback in MOOCs, and keeping TP12 will help this survey stay tied to the theory, even though its removal would have helped the data fit slightly better.
Similarly, even though SP7's (I felt it was OK to disagree with other students) factor loading was less than .5, this item was kept because it is important to the concept of social presence and was not captured by other items. This was a particularly interesting item due to cultural aspects of online communication and learning. It is possible that for some of the cultures represented in the course, it was not appropriate to overtly disagree with other students. Research in intercultural pragmatics focused specifically on the speech act of disagreement in multicultural online asynchronous discussions using English as a lingua franca has shown a tendency to avoid strong disagreement, particularly among students with lower levels of English language proficiency (Maíz-Arévalo, 2014). Therefore, this item may carry cultural bias in its interpretation, particularly if public disagreement is not considered culturally acceptable. We recommend that additional research examine intercultural perspectives on disagreement as a measure of social presence in global courses, including more qualitative research with culturally and linguistically diverse learners' discourse in online synchronous discussions.

Implications for Future Research
The use of the revised CoI survey could benefit researchers examining global online courses where participants have varying levels of English language proficiency. The revised instrument's simplified language and sentence structure can help collect data that more accurately reflect students' perceived CoI in global courses as well as courses offered in multicultural contexts. We also recommend that others carefully consider the language levels of the research instruments they both create and use, particularly when using instruments in global contexts or within diverse contexts in North America where participants have varying levels of English language proficiency. If respondents are ELLs, survey items that have been written for native speakers may not be comprehensible or could result in survey fatigue due to the heavy linguistic load of each item. Improving comprehensibility by lowering the language level will make instruments accessible to a larger international audience. It is also important to validate surveys when using them with different audiences or when revising for language level. We recommend conducting an EFA and CFA, similar to this study.
Furthermore, researchers should consider the diverse range of cultures represented among survey respondents, which can affect participants' understanding of the survey items. For example, perceptions of disagreement as an indicator of social presence could differ because of culture and language proficiency (Maíz-Arévalo, 2014). Additionally, the length and type of course could affect participants' perceptions of teaching, social, and cognitive presence. For example, without an instructor giving individualized feedback to students in MOOCs, it is expected that the item measuring feedback performed differently across the three different formats, particularly since the original survey was designed and validated in traditional instructor-led courses rather than MOOCs.
A large portion of instructional design and technology (IDT) research has come from English-speaking countries (Bodily et al., 2019). Furthermore, North America is overrepresented in the most highly cited online and blended learning research, especially at the K-12 level (Hu et al., 2019). The opportunity to design, develop, facilitate, and research global online offerings has never been greater due to improving telecommunication infrastructures and increasing support from all levels of government throughout the world (Palvia et al., 2018). The COVID-19 crisis has accelerated the growth and acceptance of online learning throughout the world. As we move into this new normal, it is important that the IDT field maintain a global perspective in its research efforts. The revised CoI instrument shared in this research can aid in those efforts.

Figure 1
Figure 1. Model for the Community of Inquiry Framework

Table 1
Participants Across Three Course Formats

The purpose of this program was for experts in the field to provide research-based professional development opportunities to English as a foreign language (EFL) teachers and teacher educators around the world who may not otherwise have access. Since the participating teachers were largely ELLs themselves, the US Department of State required that all course materials be developed at the B1 level of the CEFR for English, meaning a participant "can read straightforward factual texts on subjects related to his/her field and interests with a satisfactory level of comprehension" (Council of Europe, 2018, p. 60).

Table 2
Comparing the Original and Revised Items

We concentrated on five fit indices: (a) the χ² goodness-of-fit statistic; (b) the root mean square error of approximation (RMSEA; Steiger & Lind, 1980); (c) the standardized root mean square residual (SRMR); (d) the comparative fit index (CFI); and (e) the Tucker-Lewis index (TLI). SRMR, RMSEA, and χ² are badness-of-fit indices; therefore, values of zero indicate perfect fit, and values closer to zero reflect better fit. Decisions were based on (a) item-factor correlations (loadings); (b) goodness of model fit; (c) percent of variance explained by the factors; and (d) theoretical explanations. Meyers et al. (2017) recommended factor loadings of .40 and higher with sample sizes in excess of 200 participants; however, results in the high .3s may also be acceptable.

Table 5
Factor Loadings for the Three- and Four-Factor Solutions
Note. Factor loadings greater than .40 are in boldface. h² is the communality for the 3-factor model only.
Note. Standard errors are in parentheses. Items with loadings = 1 represent items used as the scaling constant.