I hate standardized testing. I hate it more than students do. Why? Because it affects everything I am allowed to do in my classroom. As a freshman teacher, I have more leeway than teachers in other grades. Still, a large portion of our curriculum is built around preparing for the eventual CATS test and the smaller tests throughout the year. Freshman year is the only non-tested year in high school, but some of the things students learn with me still appear on their CATS tests in the next few years.
Here's the problem, though. The test? It's broken. There are a couple of issues. First, there is a large amount of multiple choice. Why? Because it's the fastest way to get students to answer as many skill-related questions as possible. The problem with subjects like English is that sometimes there is more than one defensible answer to a question. I can't tell you anything about the questions on the test this year because we don't even see them until the students do. However, the ORQ district tests that we give every six weeks are like smaller versions of the eventual reading test students take as sophomores. One of the questions on a district assessment this year asked which text feature would "best enhance" the text. The problem? If it had been short answer instead of multiple choice, the kids could have explained why they picked B instead of C. Instead, any kid who picked the "wrong" answer was simply marked wrong.
Some questions do have one ultimate "right" answer. "What is the main character's name?" can only be answered one way. However, that question is not very challenging at all. The more challenging a question is, the more likely the answer choices are to be confusing, even to teachers.
So the obvious answer is to add more ORQ or short answer questions, right? Wrong. These questions are broken as well. On the smaller tests, teachers do the grading, so you can see where a student was going with a response. For the CATS test ORQs, hired test scorers grade this portion of the test, and you don't need teaching credentials to be one. There's also subjectivity: one scorer might rate an answer Proficient while another marks it Apprentice. That's not even mentioning the fact that the scoring rubric rewards things the question doesn't specifically ask for. On our most recent ORQ, students were asked to "Identify two distinguishing character traits." Knowing that there are external character traits (how you look) and internal character traits (personality), which do you think it's asking for? I can't tell my kids which one the test wants, but the only correct answers on the rubric are internal character traits.
I gave my students a checklist, like I did with the last ORQ. My checklist pointed out that it should be internal character traits, but some students still took "distinguishing" to mean how a character looks. Students who identified external character traits could only receive a score of Novice or zero, despite the fact that they actually made their answers work. The students were supposed to use the traits they identified to explain how those traits affected the plot. The text was Langston Hughes's "Thank You, M'am," the story of a young man, Roger, who tries to steal the purse of a woman, Mrs. Jones. He doesn't succeed, and the students who chose external traits said it was because she was a large woman (which is correct; it's stated in the text) and because she was physically strong (also correct; they can infer it because she drags Roger back to her house and he can't break her grip). Despite the fact that they could defend their answers, and that they technically answered the question correctly by explaining how the two traits affected the plot, they were marked wrong. Many of them even told me how the plot would have been different if she'd been a smaller or weaker person. Still wrong.
Now, the majority of my students did understand the question to mean internal character traits, but what if this weren't just a district assessment? What if this were the actual CATS test? What if this were the difference between my school being selected for an audit based on test scores or not? What if this were the difference between those students graduating or not? I can't imagine that the tests I receive are not indicative of the eventual CATS tests. If that's the case, how many of our students received low or failing scores because of the test questions and not because they actually lacked a skill?
There has to be some ultimate assessment before a student can move on from high school. Without one, I'm sure a few bad teachers would take it as an opportunity to goof off and have fun, and I'm sure some students would suffer because no one HAS to teach them anything. However, the tests right now? They don't work, and students are suffering because of it. Students are being held back from graduation. Students' time is being wasted on learning how to best answer multiple choice questions. Students aren't being taught to think outside the box or question the world around them. Students are being shortchanged in an effort to make test scores go up. Students are already suffering because of the tests. They've been suffering because of the tests.