Tuesday, March 18, 2008

Foundation Skills Assessment tests: a useless exercise

The Boundary District and British Columbia Teachers' Associations' opposition to the Foundation Skills Assessment (FSA), scheduled to test all Grade 4 and Grade 7 students in February, may seem puzzling. What, after all, is wrong with testing children to determine the level of their basic literacy and numeracy skills?
The problem many teachers see with the FSA is not the test itself but how the test results are used. The FSA is a typical standardized test: a test administered to a large number of students that compares their individual performances to a pre-established standard. Important information about individual students can be gleaned from such a test by comparing a particular student's achievement to that of the entire target group, in this case about 40,000 students in each of the two grades.
The FSA test results, however, are not primarily used in that way. Instead, conclusions are drawn from them not about individual students but about subgroups: the 15 or 50 or 100 students who form a particular class or attend a particular school. Such a use of test results is not valid, because no small group possesses the same full range of ability as the target group of 40,000. Small groups are always anomalies, containing a higher percentage of low-achieving or high-achieving students than the target group does. When the statistical analysis of a small group's performance is generated, its numbers therefore rarely match those of the huge target population. Yet those who do the small-group analyses assume that the small-group numbers should mirror the target population's. If they don't, if the average success rate of a small group is lower, the assumption is that the teacher or the school must be doing something wrong; if it is higher, the assumption is that the programs or practices of the school or teacher must be unusually good. Neither is necessarily true.
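The sampling argument above can be sketched with a short simulation. Everything in it is assumed purely for illustration: a hypothetical province of 40,000 students with an overall 80 per cent success rate, divided into classes of 25.

```python
import random

random.seed(42)

# Hypothetical province of 40,000 students: 32,000 "passes" and
# 8,000 "fails", for an overall success rate of exactly 80%.
population = [1] * 32000 + [0] * 8000
random.shuffle(population)

# The province-wide average is fixed by construction.
provincial_rate = 100 * sum(population) / len(population)

# Carve the same population into 1,000 classes of 25 and record
# each class's average success rate.
class_size = 25
class_rates = []
for start in range(0, 1000 * class_size, class_size):
    group = population[start:start + class_size]
    class_rates.append(100 * sum(group) / len(group))

print(f"provincial rate: {provincial_rate:.0f}%")
print(f"lowest class rate:  {min(class_rates):.0f}%")
print(f"highest class rate: {max(class_rates):.0f}%")
```

Even though every class is drawn from the same pool, the class averages swing far above and below the provincial figure purely by chance; those swings, not teaching quality, are the "anomalies" described above.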
Making those assumptions is analogous to a teacher analyzing a class's test results row by row, expecting each row to contain the same mix of students and therefore to have the same rate of success as every other row. Having determined which rows have the lowest rates, the teacher then berates them for not working as hard or paying as much attention as the other rows, or, better yet, blames him- or herself for not teaching those rows as well as the others.
Our schools, like those rows, contain subgroups that vary greatly from one another based on socio-economic factors: wealth and poverty, ethnic background and immigrant status, family educational background and so on. From one part of the province to another, or one part of Metro Vancouver to another, those great variations in socio-economic factors have an impact on achievement in schools and on standardized tests. The differences are particularly evident when private schools, which admit only high-achieving students in the first place, are compared to public schools, which of course welcome students of all abilities.
Teachers and administrators are well aware of the level of achievement of their students without being reminded by a standardized test. The irony is that after spending vast sums of money on these tests, their marking and analysis, and the hiring of expensive Superintendents of Achievement to oversee the process, the Ministry will eventually realize that it is a fruitless exercise. Most jurisdictions in North America already have realized it (Google "standardized testing" for more information). The average scores generated by individual classes and schools will fluctuate from year to year by several percentage points depending on the abilities of the student population for that year. For example, over the last five years the Boundary District success rate (the percentage of students "meeting or exceeding expectations") on the Grade 7 Reading Comprehension portion of the FSA has varied considerably: 79% in 2003, 80% in 2004, 76% in 2005, 85% in 2006 and back to 79% in 2007. If one were to accept the Ministry's rationale for the validity of these tests, those results would point to a serious failure of teachers and schools in 2005 and a remarkable improvement in the same schools and teachers in 2006. Such conclusions are obviously illogical and point out the clear weaknesses in using test results like these as a basis for evaluating school performance. The average scores for the entire province also vary because, try as they might, the people making up the tests can never make them of equal difficulty.
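The Boundary District figures quoted above can be checked with a few lines of arithmetic; the percentages are taken directly from this article.

```python
# Grade 7 Reading Comprehension success rates, Boundary District,
# as reported above (percentage "meeting or exceeding expectations").
rates = {2003: 79, 2004: 80, 2005: 76, 2006: 85, 2007: 79}

mean = sum(rates.values()) / len(rates)
spread = max(rates.values()) - min(rates.values())

print(f"five-year average:   {mean:.1f}%")      # 79.8%
print(f"year-to-year spread: {spread} points")  # 9 points
```

A nine-point swing across five years, in a district whose schools and teachers were essentially the same throughout, is exactly the fluctuation attributed here to year-to-year differences in the student population rather than to teaching quality.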
Meanwhile the Ministry requires principals and teachers to chase after additional percentage points on their schools’ average FSA scores in order to create the illusion that steady progress is being made. Teachers know that this takes time away from the many other tasks that they are required to perform, most of which are of far greater importance.
That’s why teachers all over the province are opposed to the FSA.

Copyright Grand Forks Gazette 2008
