*Validating Text-Based Writing Test Tasks in English: The Case of the National Matriculation English Test (Guangdong Version)* draws on multiple research instruments to collect theoretical and empirical evidence from multiple perspectives. The analyses reveal that candidates' writing processes differ between the summary and the response argumentation sub-tasks: because content construction in summary writing is more complex than in response argumentation, planning is relatively more important in summary writing; correspondingly, response argumentation makes higher demands on language production, so candidates attach greater importance to monitoring when completing it.
Grounded in Messick's unitary conception of validity, the book collects validity evidence for the text-based writing task in the National Matriculation English Test (Guangdong Version) from two aspects of construct validity: the substantive aspect and the generalizability aspect.
The present study, within Messick's unitary validity conception, collects theoretical and empirical evidence for the substantive and generalizability aspects of construct validity of the text-based writing task in the National Matriculation English Test (Guangdong Version), a newly designed large-scale high-stakes test. It adopted a constructivist reading-to-write model specifying the metacognitive (planning and monitoring) and cognitive (selecting, organizing, and integrating) operations elicited in text-based writing. Three general research questions were generated: 1) whether the theoretical processes are actually tapped by the assessment task; 2) whether the two sub-tasks manifest the text-based writing construct differently; and 3) whether performance regularities entail the suitability of the text-based writing task for the NMET (GD).
Data were drawn from different sources via instruments constructed for this study. In response to the first two general research questions, questionnaire data were collected from experts (N = 25), instructors (N = 150), and the target candidates (N = 532). In addition, interview data from students (n = 36) complemented the questionnaire data qualitatively. The aggregation of the foregoing qualitative data, together with the coding and rating results of 189 compositions, addressed the third research question.
Chapter 1 Introduction
1.1 Rationale of the Present Study
1.1.1 General Background of the Present Study
1.1.2 Relevant Studies on Text-Based Writing Tasks
1.2 Key Research Questions
1.3 Definitions of Key Terms
1.4 Contents of the Book
1.5 Summary
Chapter 2 Literature Review
2.1 Introduction
2.2 Writing Assessment: A Historical View
2.3 Orientation to English Teaching Objectives and NMET(GD)
2.3.1 The Curriculum and the Teaching Objectives
2.3.2 An Introduction to Writing Tests in NMET(GD)
2.4 Theoretical Conceptualization of Text-Based Writing
2.4.1 The Reading Perspective
2.4.2 The Writing Perspective
2.4.3 The Integrated Perspective
2.5 Conceptualization of Text-Based Writing Test Tasks
2.5.1 Advantages of Text-Based Writing Tasks
2.5.2 Problems with Text-Based Writing Tasks
2.6 Factors Influencing Students' Text-Based Writing Performance
2.6.1 The Task Factors
2.6.2 The Individual Factors
2.6.3 The Writing Process
2.7 Textual Measures of Text-Based Writing
2.7.1 Content
2.7.2 Organization
2.7.3 Language
2.8 Important Validation Research on Text-Based Writing Tests
2.9 Theory of Validity and Validation
2.9.1 Messick's Unitary Validity Concept
2.9.2 Validation Procedures in General
2.10 Summary
Chapter 3 Theoretical Framework
3.1 Introduction
3.2 Text-Based Writing Revisited
3.2.1 Definition of Text-Based Writing Construct
3.2.2 The Text-Based Writing Process Model
3.3 Task Complexity of Summary and Response Argumentation
3.4 Measures of Writing Products
3.4.1 Language
3.4.2 Content and Coherence
3.5 Validation Process of the Present Study
3.5.1 The Blend of Validity Evidence of the Present Study
3.5.2 Validation Procedures in Action
3.6 Restatement of Research Questions
3.7 Summary
Chapter 4 Methodology
4.1 Introduction
4.2 Participants
4.2.1 Student Participants
4.2.2 Instructors and Experts
4.2.3 Raters
4.3 Research Design
4.4 Instruments and Materials
4.4.1 Instructors' Attitude Questionnaire
4.4.2 Experts' Questionnaires
4.4.3 The Writing Task
4.4.4 The Coding Scheme for Students' Writings
4.4.5 The Rating Rubrics
4.4.6 Pilot Studies of Test Administration, Rating and Textual Coding
4.4.7 Students' Writing Process Questionnaire
4.4.8 The Interview
4.4.9 Instructors' Test-Preparation Questionnaire
4.5 Data Collection Procedures
4.5.1 Data Collection Procedures of the Instructors' Questionnaires
4.5.2 Data Collection Procedures of the Experts' Questionnaires
4.5.3 Collection of Student Data
4.6 Data Preparations
4.6.1 Numeric Data Generation
4.6.2 Data Entry and Missing Data Handling
4.7 Data Analyses
4.8 Summary
Chapter 5 Results
5.1 Introduction
5.2 Preliminary Data Analyses
5.2.1 Reliability Concerns of the Instruments
5.2.2 Validity of Questionnaires
5.2.3 Tests of Normality of the Rating Scores
5.3 Results for Research Question 1
5.3.1 Findings of Experts' Questionnaire Data
5.3.2 Findings of Instructors' Questionnaire Data
5.3.3 Findings of Students' Questionnaire Data
5.3.4 Sub-Conclusion
5.4 Results for Research Question 2
5.4.1 Results for Research Question 2: the Writing Process Perspective
5.4.2 Results for Research Question 2: the Writing Product Perspective
5.4.3 Sub-Conclusion
5.5 Results for Research Question 3
5.5.1 Results for Research Question 3: Attitude and Acceptability Aspects
5.5.2 Results for Research Question 3: Reliability and Rasch Analysis Aspects
5.5.3 Results for Research Question 3: Test Score Aspect
5.5.4 Results for Research Question 3: Textual Coding Data Aspect
5.5.5 Sub-Conclusion
5.6 Summary
Chapter 6 Discussion
6.1 Introduction
6.2 Evidence for the Substantive Aspect of Construct Validity
6.2.1 Match of the Construct: Explanation of Reliable Variance
6.2.2 An Anatomy of the Construct in Relation to the Task: Further Explanation of Reliable Variance
6.3 Evidence for the Generalizability Aspect of Construct Validity
6.3.1 Generalizability: Suitability
6.3.2 Generalizability: More Evidence
6.4 General Discussion
6.5 Summary
Chapter 7 Conclusions
7.1 Introduction
7.2 Major Findings of the Present Study
7.3 Implications of the Present Study
7.3.1 Theoretical Implications
7.3.2 Assessment and Pedagogical Implications
7.4 Limitations of the Present Study
7.5 Directions for Further Studies
7.6 Summary
References
Appendices
Appendix A The Text-Based Writing Task
Appendix B The Cited Phrases for Textual Coding
Appendix C Cohesive Ties for Textual Coding
Appendix D Experts' Questionnaire One
Appendix E Instructors' Attitude Questionnaire
Appendix F Experts' Questionnaire Two
Appendix G The Rating Rubrics
Appendix H Instructors' Test-Preparation Questionnaire
Appendix I The Rating Plan
Appendix J Students' Writing Process Questionnaire
Appendix K Students' Interview Outline
Appendix L Examinee Measurement Report of Rasch Analysis
Appendix M Communalities of Factor Analysis
Appendix N Excerpts of the Interview
Appendix O Graphic Examples of Textual Coding