Ch6 p. 161 #1: If you were asked to take part in a mini-debate about the respective virtues of selected-response items versus constructed-response items, what do you think would be your major points if you were supporting the use of selected-response test items?
To start, I would say that selected-response items leave more time for students to engage with the test material. If more students can complete more of a test in the time given, the results will be more reliable. Selected-response test items also create an opportunity for students to apply their knowledge by comparing and contrasting information. This not only furthers students’ understanding of the subject matter but also provides the teacher with a clear assessment of where students might be confused. For example, if many students mark the same wrong answer on a multiple-choice test, the teacher can see where their misconceptions occurred. Finally, selected-response test items allow a great deal of content to be covered in a single assessment. The more material students are tested on, the more results a teacher has to learn from.
#3: Why do you think that multiple-choice tests have been so widely used in nationally standardized norm-referenced achievement tests during the past half-century?
I think multiple-choice tests have been used in nationally standardized norm-referenced achievement tests because they are a highly efficient way of collecting a large amount of student data. As the reading explained, this format is also more reliable than other selected-response items. I realize there are multiple opportunities for holes in multiple-choice tests, but they remain the most dependable option that currently exists for mass distribution. Another reason I believe multiple-choice tests have been so widely used in nationally standardized norm-referenced achievement tests is that they create a consistent grading system. As long as a multiple-choice test has been effectively constructed with no accidental hints, it will be a reliable basis for grading.
Ch7 p. 184 #1: What do you think has been the instructional impact, if any, of the widespread incorporation of student writing samples in the high-stakes educational achievement tests used in numerous states?
I would guess that the incorporation of student writing samples in high-stakes educational achievement tests has been effective for some students but has had a negative impact on others. The reason I bring this up is that some schools may not focus on writing samples in their instruction. If students have not been given adequate practice and instruction in what a good essay looks like in a testing situation, that would affect their test results. With that said, the inclusion of student writing samples in high-stakes achievement tests would cause teachers to see a need that must be addressed. Teachers would then likely focus a great deal of classroom instruction on how to produce a good writing sample on a test. Writing-sample assessments would be incorporated into lesson plans and daily instruction so that students are prepared for the achievement tests.
#2: How would you contrast short-answer items and essay items with respect to their elicited levels of cognitive behavior? Are there differences in the kinds of cognitive demands called for by the two item types? If so, what are they?
After the reading described the different challenges that short-answer items and essay items create, I feel that there is more room for error with essay items. This is not to say that essay items are not an excellent tool, but rather that they must be implemented carefully. I feel that essay items place a wider range of cognitive demands on students because they ask students not only to use their knowledge of the subject matter but also to use their writing skills (or lack thereof). Should good writing and correct English be expected from seniors? Yes. Should it be expected from freshmen? No, because freshmen have not yet gone through the three years of English they are required to take in high school. Are some students inherently gifted in writing and thus likely to perform better on an essay test? Possibly. A short-answer item does not require the student to display good sentence structure; its main focus is the information being assessed. This is not to say that I do not expect students to learn and apply English writing skills in school, but rather that there are real differences in the kinds of cognitive demands called for by short-answer items and essay items.