Abstract
This article analyzes the changes introduced in international comparative large-scale studies (specifically those organized by the OECD) as a consequence of the shift to computer-assisted testing. We review the various computer-administered measurement instruments used in the successive cycles of these studies and discuss whether their introduction provided real added value, notably by giving access to competencies that cannot be measured with classical paper-and-pencil tests. We also analyze the specific relationships that computer-assisted measures show with background variables such as gender. Finally, the article points out the challenges and risks associated with the increasingly extensive use of computer-assisted measurement instruments, especially regarding two major aspects: the need to maintain continuity in the competencies measured in order to report trends, and the lack of adequate theoretical frameworks for the efficient use of computer-assisted measures.
Keywords:
- large-scale comparative studies
- computer-assisted testing
- edumetrics