
语言运用考试中的评分员误差研究 (A Study of Rater Error in Language Performance Tests)

Price: ¥35.00

Author: 张洁 (Zhang Jie)
Publisher: 浙江大学出版社 (Zhejiang University Press)
Series: 外语、文化、教学论丛 (Foreign Language, Culture, and Teaching Series)
Tags: 大学英语 (College English), 大学英语教材 (College English textbooks), 外语 (foreign languages)

ISBN: 9787308157438  Publication date: 2016-05-01  Binding: paperback
Format: 16开 (16mo)  Pages:  Word count:

About the Book

This monograph by Dr. Zhang Jie addresses rater error in language performance tests. It examines the impact of rater error on test reliability and validity, the principal types of error, and the cognitive factors that may give rise to it, discussing rater variability in language performance assessment comprehensively and from multiple perspectives. The book (written in English) not only offers a systematic review of research on rater error in the field of language testing and introduces the qualitative and quantitative methods commonly used in this line of inquiry, but also shows in concrete detail, through two empirical studies, how different research methods can be applied to analyze, measure, and investigate the errors that may arise in subjective scoring and their causes, making it a useful guide at both the theoretical and the practical level. The two empirical studies present the core of the author's master's and doctoral research, respectively. The first follows the quantitative statistical paradigm, using the Many-Facet Rasch Model to systematically investigate the sources of score variance in the spoken English test of the College English Test (CET-SET); the second follows a qualitative, process-oriented paradigm, exploring how raters' cognitive processes affect rating accuracy in the scoring of CET-4 essays. The main findings of the two studies offer valuable lessons for controlling and measuring rater error in large-scale language performance tests, and for conducting rater training and rating-scale revision more effectively.

About the Author

An author biography for 《语言运用考试中的评分员误差研究》 is not yet available.

Table of Contents

Chapter 1 Introduction
  1.1 Rationales for studying rater variability
  1.2 Status quo of studies on rater variability
  1.3 An overview of this book
  1.4 Definition of key terms
Chapter 2 Literature review: Studies on rater variability in language performance assessment
  2.1 Rater variability in language performance assessment
  2.2 Exploring rater variability using statistical analysis
    2.2.1 Introduction
    2.2.2 Rater reliability in Classical Test Theory
    2.2.3 Rater facet as variance component in Generalizability Theory
    2.2.4 Rater calibration in Many-Facet Rasch Model
    2.2.5 Summary
  2.3 Process-oriented approach to investigating rater variability
    2.3.1 Raters' decision-making: the "black box" behind the final ratings
    2.3.2 Indirect evidence
    2.3.3 Direct investigation of rating process: insights from verbal protocols
  2.4 Factors accounting for rater variability
    2.4.1 External factors
    2.4.2 Internal factors
    2.4.3 Situational factors
  2.5 A framework for comparison between rater groups
  2.6 Summary
Chapter 3 Study 1: Investigating the scoring reliability of CET-SET using Many-Facet Rasch Model
  3.1 Issues in second language speaking assessment
  3.2 Challenges in test validation
  3.3 The context of the study
  3.4 Objectives of the study
  3.5 Methods
    3.5.1 Data
    3.5.2 Instrument (MFRM)
  3.6 Data analyses and findings
    3.6.1 Facet map
    3.6.2 Candidates
    3.6.3 Tasks
    3.6.4 Items
    3.6.5 Rating scales
    3.6.6 Raters
    3.6.7 Bias analysis
  3.7 Conclusions
  3.8 Implications
  3.9 Further research efforts to be made
Chapter 4 Study 2: Exploring how raters' cognitive and meta-cognitive strategies influence rating accuracy in essay scoring
  4.1 Subjective scoring: A matter of reliability or validity?
  4.2 Exploring rating process: Looking into rater variability
  4.3 Rater cognition studies in writing assessment
  4.4 Methodology
    4.4.1 The context of the study
    4.4.2 Participants
    4.4.3 Materials
    4.4.4 Data collection
    4.4.5 Data analysis
  4.5 Results and discussion
    4.5.1 General patterns of differences in broad categories
    4.5.2 In-depth investigation of differences in the major sub-categories
  4.6 Summary and further discussion
  4.7 Conclusion
Chapter 5 Conclusions
  5.1 Summary of findings
  5.2 Comparison of the two studies
  5.3 Limitations
  5.4 Further research efforts to be made
Appendix I CET-SET rating scale
Appendix II CET4 rating rubrics for the writing task
Appendix III The writing task of the Dec. 2006 administration of CET4 and range finders
Appendix IV Sample essays
Appendix V Instructions and training tasks for think-aloud session
Appendix VI Sample transcripts of raters' thinking aloud
Appendix VII Coding protocols for think-aloud verbal reports
Appendix VIII The coding scheme for raters' cognitive and meta-cognitive strategies
References
Index
