The right way to use test data to improve an English language program
Read this article as it originally appeared on iTEP Executive Vice President Dan Lesho’s LinkedIn page. This is part 1 of a two-part series on the quantitative approach to IEP evaluation. In this part, we discuss the motivations for, and potential pitfalls of, quantitative evaluation of student performance. In part 2, we will look at illustrative sample data provided by iTEP, and I will discuss what that data may reveal and the sorts of changes it could motivate administrators and teachers to make.
“It seems like our students plateau at a certain level and have trouble making tangible progress after that.”
“Students usually exit our program before the last level and so are not as motivated to do well on tests and quizzes.”
If you’re an administrator or a teacher at an intensive English language program (IEP), the above statements probably sound familiar. Though common, such sentiments are all too often vaguely expressed and based on anecdote. This can create tension between administrators, teachers, and students. An IEP administrator might have the sense that students plateau at a certain level, but teachers of that level might feel that characterization is unfair. One teacher might feel that students are unmotivated on tests and quizzes while another might have had the opposite experience. A counselor might see several students struggling with writing and view this as a trend, while the bulk of the students feel that their writing is mostly fine. How to settle such disputes?
A trend we have seen across all areas of education over the last decade is the increasing use of standardized testing. IEPs are no exception, with companies like iTEP International providing extensive testing services. Since the tests are standardized, the results can be pooled into large datasets that can reveal what happens to students as they move through the program or how the program itself is changing over time. In the context of an appropriate analysis, such data can help to confirm or undercut the vague sentiments with which we began.
The analysis of quantitative data can also raise questions that otherwise might not have arisen. For example, “Why does there seem to be more variation in the scores at our lower levels and less variation at the higher levels? Is that a good thing?”
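To make a question like this concrete: the comparison amounts to computing the mean and standard deviation of scores at each level. The sketch below does this with entirely hypothetical scores on a 0–6 scale (similar to iTEP’s scoring range); the level names and numbers are illustrative, not real program data.

```python
# Minimal sketch of a per-level score analysis, using hypothetical data.
from statistics import mean, stdev

# Hypothetical overall scores grouped by program level (illustrative only).
scores_by_level = {
    "Level 1": [2.1, 2.8, 1.9, 3.2, 2.4, 3.5],
    "Level 2": [3.3, 3.6, 2.9, 4.1, 3.0, 3.8],
    "Level 3": [4.2, 4.4, 4.1, 4.5, 4.3, 4.6],
}

for level, scores in scores_by_level.items():
    # A larger standard deviation at lower levels would support the
    # impression that scores there are more spread out.
    print(f"{level}: mean={mean(scores):.2f}, sd={stdev(scores):.2f}")
```

Even a simple summary like this turns a vague impression ("scores seem more spread out at the lower levels") into a claim that can be checked and tracked over time.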
The potential benefits are clear, but to successfully incorporate quantitative measures into your program, there are several things to keep in mind.
Trust and Communication
When incorporating quantitative data from standardized test scores into program evaluation, it is critical that administrators, teachers, and students trust one another. IEP administrators must trust that teachers are willing to change in light of data, and teachers must trust that administrators will not use data to paint a misleading picture. Most importantly, students must trust that both IEP administrators and teachers will evaluate their proficiency holistically, rather than reducing it to a single standardized test score.
The best way to establish trust is by taking the time to have conversations with students and teachers about the benefits of standardized test score analysis. Students and teachers respond positively when they are assured that the data will mainly be used to deepen the conversation about how best to serve students (rather than, say, as a way of finding places to reduce funding or punish poor performers).
Not All Data Are Created Equal
It is important to remember that just because all data from standardized tests can be viewed quantitatively does not mean they are all equally important or revealing. In addition, some tests might seem more important to students than others. In some programs, iTEP is used to inform matriculation into credit programs or advancement to higher levels. In these cases, the IEP can be more confident that the test-takers performed to the best of their abilities. If a particular test carries no stakes for students, however, there may be reason to suspect that they did not perform at the top of their game. When there is consensus that students did their best, the scores can be used in program assessment with greater confidence. Otherwise, caution is advised.
The Holistic Approach
Certainly, many educators have raised concerns about the effects of high-stakes testing and its links to funding and administrative decision-making. Specifically, educational administrators must be cognizant that the data may tell only one part of a much bigger story about their students and their schools. Still, quantitative data are useful for exposing and tracking trends and can also reveal aspects of a program that are not readily seen with a qualitative approach. In this way, quantitative data can be one aspect of programmatic evaluation that helps to shape the narrative of how well programs are delivering for students.
Next month, we’ll look in detail at sample data from a program using iTEP exams and see what it may reveal about the program. In particular, we’ll consider average overall scores and also standard deviation of scores across and within proficiency levels.
Dan Lesho is Executive Vice President of iTEP International. Prior to joining iTEP (International Test of English Proficiency), he was director of Cal Poly Pomona English Language Institute, and a professor at Pitzer College.