The Mindset US Schools Need to Recruit International Students

International students have long been an integral part of American colleges and universities. Their opinions challenge the assumptions and beliefs expressed by U.S. students and faculty members both in and outside of the classroom. Their lifestyles bring cultural diversity and new ways of doing things to campus life. For many institutions, they also help keep the school afloat, providing billions of dollars to the economy every year and helping subsidize the education of American students. In the last few years, however, the number of students coming from overseas to study in the States has dwindled—due in large part to changes in visa and immigration policy. Reuters reports that new enrollments for the 2017-2018 school year slumped 6.6 percent compared with the previous year. What can universities do to recruit international students after a rough few years? Here are a few ways to make the process easier and increase international enrollment in 2019.

Need for speed

For many years, the availability of space could not keep up with the influx of international students in the U.S. This is no longer the case. International student recruitment has become an increasingly competitive field, with both large public universities and small liberal arts colleges trying to attract—and retain—students from all over the world who want to study in another country. Now, more than ever, schools need to change their approach to international recruiting and be more proactive—and that includes getting the results from an English proficiency test as quickly and efficiently as possible.

iTEP is now accepted by over 800 schools, a number that continues to climb each year as institutions count on iTEP’s on-demand English language evaluation to bring in more international applicants. Graded within 24 hours and backed by native English-speaking ESL professionals, iTEP Academic has everything post-secondary institutions need to make informed admissions and assessment decisions quickly.

No roadblocks to apply

The math is easy—the harder it is to apply, the fewer applicants you will get. While this concept is nothing new, many institutions that recruit international students didn’t always notice because the demand was so high. The easiest way to skirt this problem is to streamline the application process and make it simple for prospective students to schedule and take an exam.

Think about it this way: If applicants have to wait weeks for a pre-set test date, and schools have to wait even longer to receive a score report, disillusionment and discouragement can easily follow. You can’t really sell the merits of your university if the application and enrollment process is needlessly difficult. That’s why iTEP makes it easy to take an English proficiency test and get the results back the next day.

In addition to on-demand scheduling at test centers in 51 countries around the world, iTEP now offers remote proctoring with Examity, an online proctoring service that helps ensure the academic integrity of online testing.

New avenues for testing

While in-person testing isn’t going away just yet, more and more students want the flexibility to take an English language proficiency test online, from any location. Many of Examity’s 500-plus higher education clients are moving aggressively away from in-person proctoring. This technology opens new doors for attracting remote applicant pools and enabling them to apply to study in the United States.

We often hear from schools—including highly prestigious, recognizable institutions—that they get applications from international students they want to admit, but aren’t able to get the English proficiency test squared away in time. This can happen for a number of reasons. The most frustrating situation is when students have such a command of English that they don’t realize the English assessment test score requirement applies to them. This happens more often than you would think. With on-demand scheduling and remote proctoring, iTEP eliminates the wait for a test date—a wait that can push a student to choose another school or delay enrollment by a semester.

Fast turnaround should be mandatory

In our fast-paced world, there is really no excuse for long English test turnaround times. One foolproof way to recruit international students is to let them know right away if they’ve passed their English evaluation. iTEP gets scores to institutions within 24 hours, which lets you respond to students faster and spend less time jumping through administrative hoops.

Virginia Tech Language and Culture Institute Chooses User-Friendly English Test iTEP

Since 2013, the Virginia Tech Language and Culture Institute has been using iTEP Academic-Plus to help accurately place incoming students, administering the test upon the students’ arrival to get an instant baseline score of their English language proficiency. Testing and Assessment Coordinator Eric Moore says the relationship has been highly beneficial, praising iTEP as a user-friendly English test.

“The quickness of scoring is great, the database is great, and to be able to go in and locate a student’s test score right away is very valuable,” Moore says. “iTEP has really made things simple and very user-friendly.” 

Instant grading

iTEP Academic-Plus is broken down into five segments: grammar, listening, reading, writing, and speaking. The first three parts of the test are multiple-choice, and are scored electronically in real-time. “The quick turnaround is really helpful,” Moore says. “I can log in and see if someone is done and go in and look at the scores right away to get an idea of where the incoming students are with grammar or listening.”

The speaking and writing sections of iTEP are graded by native English-speaking ESL professionals. Institutions can also opt to have their own staff grade the test if needed. Virginia Tech uses this option: during a semester break, the same ESL-trained instructors who teach the university’s courses come in to read through the essays and listen to the speaking sections of the test, which Moore says gives them a good idea of the students’ abilities before classes start. (See iTEP Business Development Manager Cerise Santoro’s explanation of why iTEP uses human graders.)

Small company, big heart

A large institution like Virginia Tech needs a user-friendly English test that is also flexible. Before each semester, the university has to get all the incoming international students to take a placement exam, but managing the schedules of so many college students can be very difficult. The school values iTEP because of the personal, hands-on customer service it offers.

“We don’t have to go schedule something with iTEP and say we’d like to have a test on a certain day, and then have to jump through hoops if we want to add another test,” Moore says. “If I call iTEP, I get someone helpful right away. The ease of scheduling is great—if we have one in the morning, we can schedule another in the afternoon and it’s no problem.”

Responsive listening

Every school, business, or organization that uses a test has its own unique set of circumstances and obstacles to overcome to get the best results from an English proficiency test. iTEP understands that every situation is different, and works with institutions to customize the exam to different settings or to test different skills.

For example, Virginia Tech found that a few essay topics for its placement test were appearing more frequently than others. Moore contacted iTEP, and the randomization of essay prompts was quickly improved. Each individual iTEP test is now assembled from a live, rotating item bank of thousands of questions, served randomly to decrease the chances of seeing the same question twice.

Troubleshooting made easy

It’s been said that excellent customer service helps strengthen your brand. At iTEP, we strive to offer our customers the best experience possible, and treat everyone with total respect. Moore repeatedly mentions iTEP’s excellent support team and how they’ve always been there, no matter the time, to help resolve problems. “When we’ve had issues, we’ve been able to reach a tech person with iTEP and been able to receive great customer service to help walk us through the issue,” he says. “The troubleshooting has always been handled very well.”

Created by education professionals

iTEP was founded in 2002 by two individuals with deep roots in the international education field. They wanted to create a user-friendly English test that addressed the needs of the international education community. The company wasn’t created as a business ploy, but as a true labor of love, something that Moore says makes it easy to believe in iTEP. “It’s nice knowing the background of a lot of the individuals that are a part of the organization,” he says. “What [iTEP Executive Vice President] Dan Lesho says, I trust. They have what’s important for the students in mind. Being smaller than other test companies, they have the ability to offer really tremendous support to any school or organization they work with.”

Artificial Intelligence (AI) vs. the ESL Teacher

English language assessment tools have come a long way since the introduction of TOEFL, the first English proficiency test, in 1964. Back then, everything was done by pen and paper, without a computer in sight. But as technology has advanced, so has the way these tests are administered, designed, and used. While computer grading is now the norm for many companies, artificial intelligence doesn’t eliminate the importance of using human graders—especially English as a second language (ESL) teachers—in assessing the writing and speaking sections of an exam. There’s no doubt AI plays an important role in the future of English language assessment tests, but there are many advantages to using ESL professionals to judge the competency of test-takers.

Computers Can’t Detect Nuance 

We communicate in a subjective world. The purpose of language is to pass information from one human being to another. Artificial intelligence is not yet advanced enough to answer complicated questions or place a response in the context of what surrounds it. An ESL teacher has been trained to think about the big picture and ask questions like “Is the idea expressed here complete or incomplete?” and “Does this response make sense?” when grading an exam.


Language is highly dependent on context and on the different denotations and connotations of words and word combinations. The important thing to understand about AI evaluation of language is that the evaluation does not happen directly. Instead, AI evaluation depends on ratings arrived at through algorithms that compare speaking and writing to set models, and statistical analysis is often employed to “predict” the likely proficiency of the test taker. AI evaluates the characteristics of writing and speech, not the communicative quality and critical analysis of a response. Human beings are still needed for that.

 

ESL-trained graders can detect this kind of critical thinking, along with complex sentence structures. An exam grader, whether human or computer, needs to understand the language at several different levels, including the appropriate meaning of words in a sentence and how they interact grammatically to convey meaning, as well as the situations and contexts in which the words are used. Human graders are much more adept at this than any computer, no matter how good the AI is.

Washback Matters

Oftentimes, tests that use AI simply ask the test taker to repeat a short utterance or read directly from a displayed text. In some cases, test-takers are asked to transcribe a short listening passage. Again, statistical analysis is used to “predict” the likely proficiency level of the test taker. Many have raised concerns that this invites test-takers to craft their responses with the automated grading system in mind, incorporating elements in their speech or writing that will trigger the best rating by the algorithm. This raises larger concerns about the washback effect of AI-graded testing, especially in terms of promoting language learning. Specifically, these concerns involve sending a misbegotten message to language learners that successful communication means checking off component features of formulaic speech or writing rather than focusing on the effectiveness with which a unified idea is communicated to another person. Shifting the focus to the effective and authentic communication of an idea between human beings has been a theme in the field of language learning and teaching for many decades now.

Certified Teachers Make a Difference

The typical iTEP grader is an active ESL teacher who works in a classroom in addition to grading language assessment tests for us. Our graders have years of experience working with students at different levels of English comprehension and a strong knowledge of the psychology behind language learning. In addition to their qualifications, each of our graders is trained on a grading rubric we provide and has completed an extensive iTEP Grader Certification program.

Once they begin grading exams for us, our graders are required to participate in regular norming exercises that show how their evaluations compare to both their peers and the standards we set out for them. Our graders don’t live in a vacuum—this essential and frequent training gives them the skills to recalibrate and avoid internal bias. iTEP also employs a master grader who is frequently in contact with our graders to offer feedback, answer any questions, and help eliminate human error.

An Extra Level of Care

At iTEP, we sincerely care about what we do. iTEP International was founded in 2002 by former TESOL professionals with deep roots in the international education field. The goal was to create an English proficiency test that addressed the needs of the international education community, and the members of this community are iTEP’s constant partners in the continual development and improvement of iTEP tests. The company and its offerings have their roots in the communities they serve.
 

We don’t have outside investors looking to make easy money through a sale or public offering. We value the importance of language learning and how transformational it can be for people to become competent in a foreign language. We hire and train educators who apply their expertise to designing exams that provide reliable results. Without reliability, an English assessment tool has little value. While AI is more reliable than humans at many tasks, evaluating the skill level of spoken and written language is not one of them, and isn’t likely to be any time soon.

6 Ways English Testing for Companies Makes Business Sense

English is the global language of international business, making English testing for companies vital for assessing the language skills of your employees. When colleagues, clients, and partners come from different countries, English is often the default form of communication. No matter where an organization is located, an English proficiency test is a powerful tool that can help improve decision making, streamline communications, and provide data to justify hiring practices.

Hiring

Every company needs to hire qualified workers, but it can be hard to accurately judge the English language proficiency of a potential employee during the interview process. English testing can be used to help differentiate between candidates with similar resumes, and it is an effective tool for screening candidates before the hiring process begins. But what type of testing works best?

iTEP helps determine the ideal test for your industry and needs. Hotels, restaurants, cruise ships, and other companies in the tourism industry that need a quick turnaround can use iTEP Hospitality, which is designed for fast-paced environments, tests listening and speaking, and is graded within 24 hours.

Say, for instance, you need to hire 1,000 people very quickly and don’t have the time to conduct individual interviews with every candidate. An English language test can be used to set a baseline level of competency needed for the job. Testing can be administered on any modern computer, so companies can test applicants either on-site or with a remote test—a key accessibility feature that could help increase the diversity of your candidates. Results are provided fast in a simple format anyone can understand.

Promotions

Even if you’re not doing a massive round of hiring, English language tests can help companies promote from within. Hiring is often a subjective exercise, but when deciding to promote an employee to a new position, it helps to have a quantifiable reason to make a decision. Administering an English language test provides a score and benchmark executives can use to promote someone who has particularly strong English language skills, especially in non-English-speaking companies.

Testing the specific language skills used on the job

It’s true that English testing for companies can help build a more skilled workforce. Yet not every industry needs its employees to converse in fluent English about complicated academic topics. A catering company needs to hire people who are knowledgeable about food and can make pleasant conversation, but it doesn’t need someone who can ace the writing section of an English test. iTEP has created English assessment tools for specific groups, such as au pairs and interns. At just 30 minutes, iTEP Conversation is a test that doesn’t feel like a test, and it is perfect for companies that just need to evaluate the speaking and listening skills of their workforce. The convenience doesn’t end there—iTEP works with individual clients to design specific, branded English tests for any industry, such as an exam created specifically to evaluate the English skills needed to work in the Japanese real estate market.

Using English skills of staff as a selling point

A recent LinkedIn survey revealed that 90% of HR Directors, CEOs and CMOs say that having English-speaking employees is beneficial to their businesses. Having qualified English speakers on staff is attractive to new clients, business partners, and potential employees. If your entire staff is fluent in English, you can point this out to customers as a way to distinguish your brand from competitors.

Evaluating ROI in English training

Companies that hire a large number of English as a Second Language (ESL) speakers often choose to provide English language instruction to their employees. To see whether the provided courses are effective, a company can administer an English language test before new employees start and after their instruction is complete. These results can help an employee understand their strengths and weaknesses, and also help an organization measure the ROI of its English instruction.

Shaping English training curriculum

English testing for companies can help shape the curriculum to best fit the needs of the organization. A good English language program should have clear goals, measurable outcomes, and metrics for success that the HR manager can easily track. If test takers are scoring high on grammar but lower in other areas, these results can be used to shape how English is taught in your organization. iTEP is calibrated to show details that other tests might miss.

Administrators can also use the results to help improve the quality of instruction. Spotting patterns among test takers and making teachers aware of the outcomes of their courses can help them adjust their classes to focus on improving the students’ weaknesses.

What makes an English assessment test effective?

English proficiency testing is crucial for educational institutions looking to admit qualified international students, and for companies that employ speakers of English as a second language. There are all sorts of English assessment tests out there, so what distinguishes a great English assessment test from a weak one? Here are a few things to look for when deciding on a test for your school or organization:

Comprehensive:

To get an overall picture of someone’s English language abilities, it’s important to test all of the language skills relevant to the test-taker’s study or work. For many industries, a simple overview of a prospect’s grammar skills is not enough. iTEP offers comprehensive exams that measure a test-taker’s command of the English language both formally and informally, through the verbal and written communication that occurs naturally in the workplace and in the classroom.

The proliferation of smartphones and the internet has given rise to a number of quick online tests that purport to give a baseline picture of a person’s English abilities. However, these tests typically don’t evaluate skills in depth, and they often fail to measure speaking and writing abilities—both crucial skills to consider when deciding on job prospects or potential students. Testing both speaking and writing helps showcase a test-taker’s command of voice and tone, among the hardest things to master in a written language.

The flagship iTEP exams, iTEP Academic, iTEP SLATE, and iTEP Business, all have five sections that assess speaking, writing, listening, reading, and grammar. The score reports are intricately detailed, allowing for data gathering that tracks even the smallest improvements in a test-taker’s English proficiency. These reports are also very useful in helping identify areas that need more work.

Graded by man or machine?

Some English language tests seek to evaluate all language skills using artificial intelligence or non-native English speakers to grade the tests. Of course, there’s no reason multiple-choice sections of an English test shouldn’t be graded automatically and instantly. The difficulty arises in grading the active skills of speaking and writing, in which the test-taker generates organic content. Of course, it would be very fast and inexpensive to grade these sections automatically using artificial intelligence, but our research has found there to be no substitute for ESL-trained native English speakers.

Grading is an extremely complex task. Proponents of automatic grading argue that it’s more objective than human grading. To eliminate subjectivity, iTEP graders go through “norming” exercises, which function as a type of calibration: all the graders score the same test, then compare and adjust their standards based on community consensus, grading history, and expected performance per question. This ensures that results are consistent whenever the test is administered. Someday, AI technology may advance to the point of being able to provide accurate scores, but presently, only trained humans can reliably judge the intricacies and quirks that distinguish one level of English speaker from the next.

The test should speak for itself

The nature of an English assessment test demands that the structure be sufficiently intuitive to the test taker so the questions can be understood without any extra explanation in the local language. All iTEP exams have a similar structure, a convenient administration procedure, and a standardized scoring rubric. Each type of question is formatted to be easily understood at first glance, even to a beginner English speaker.

Secure and convenient

Online English proficiency assessments are convenient, affordable, and accessible, but how do we know they are secure?

Naturally, the most secure environment in which to administer an English assessment test is a staffed test center. However, even in this setting, the top English tests on the market have seen imposters taking the test on behalf of others. iTEP’s answer to this is a feature called FotoSure, software that makes cheating by impersonation virtually impossible. FotoSure snaps and stores digital photographs of the test-taker throughout the exam period. Institutions can then match the photos with the student arriving on campus.

In addition, iTEP utilizes a live, rotating item bank that serves test questions randomly. Each individual test is assembled from hundreds of randomized questions, decreasing the chances of ever seeing the same question twice. iTEP graders also conduct plagiarism scans, check testing history, and analyze speaking samples for security breaches.

Not all settings require a maximally secure test. For placement purposes, for instance, intensive English programs often find it acceptable for test-takers to take iTEP on their home computers. When both the convenience of an at-home test and security are needed, iTEP has partnered with Examity to offer remote proctoring, during which both the test-taker and his or her screen are monitored via webcam throughout the course of the exam.

Just the right amount of time:

Reports show that anxiety among test takers, especially students, is on the rise. Taking a long, taxing English test can be exhausting for any non-native speaker, and this type of stress can skew results and have negative impacts on test takers. In an effort to help combat fatigue, iTEP conducted years of research and found that a 90-minute test was the ideal length. At 90 minutes, an English test can be comprehensive without being unnecessarily long, while still collecting enough data to provide reliable, detailed scores.

Evaluates a range of levels

Perhaps the most crucial aspect of an effective English assessment test is that it can accurately evaluate the skills of a wide range of people. iTEP’s exams are laid out so that even someone with a very minimal grasp of English can answer at least a few questions. The writing section is open-ended, giving fluent or near-fluent students the chance to flex their muscles and really show how much they know. The graders will recognize the use of complex structures, difficult verb tenses, and other language nuances.

The norming effect of using a standardized English proficiency test within an IEP

This is part 3 of a four-part series on the quantitative approach to IEP evaluation. In this part, we will examine questions that may arise in choosing an exam to use in data-driven evaluation of an IEP described in parts 1 and 2 of this series. Part 4 will look at how standard deviation among scores can be used to evaluate the effectiveness of student achievement systems and how statistics software can be used to do analysis and comparisons.

So far in this series, I’ve endeavored to show how administering a standardized test to all students in an IEP—particularly over time—can reveal a lot about a program. A key component of this process is giving all of the students in the program the same test.

Undoubtedly, IEP academic administrators and teachers will be concerned about using one test for students across all proficiency levels. Of course, teachers and administrators will need to be sure that students have the minimum proficiency to follow the test. They will also want to know that there are test items that allow lower level students an opportunity to score. IEPs using iTEP for this purpose have discovered that lower level students can follow the straightforward instructions and approach of the test. In addition, each iTEP skill section has lower level test items and tasks that give lower proficiency students an opportunity to score on the test.

Whatever assessment is used, one more basic question must be addressed when employing a norm-based test such as iTEP to evaluate students’ proficiency across levels: “Should we use a test that is not based solely and directly on our own student learning outcomes and does not use our rubrics or evaluators?” Interestingly enough, there is a good deal of variation in the field in response to this question. While some may readily answer no to this question, others have decided that a close reading and understanding of a test’s proficiency descriptors and scores allows them to align their levels with outcomes on the chosen proficiency test.

In other words, the IEP is reasonably confident that students who gained the out-going skills of a particular level should be able to attain a determined score on the proficiency test. In addition, most IEP administrators and teachers know the pressure to help students perform on other proficiency tests commonly used for university admissions. Indeed, savvy teachers help students draw clear lines between language skills attained in class and success on these tests, using a positive washback effect. Lastly, some regard this third-party, independent testing as an exercise that promotes the good pedagogical health of the institution by ensuring that institutional concepts of proficiency and advancement are not formed in an organizational vacuum.

Certainly, there is much to consider before embarking on standardized proficiency testing across the board in an IEP. But once the decision is made, the data can be viewed from many different perspectives and can be a useful tool for programmatic evaluation and improvement. For instance, a benefit of using an independent proficiency test closely aligned with learning outcomes is the ability to view scores in juxtaposition to pass/fail rates, especially within particular sections of a level. The chart below shows a side-by-side comparison of overall iTEP scores and the percentage of students who passed a particular level and section. In this hypothetical six-level program, sections within a level are distinguished by an assigned letter, such as 2A, 3B, or 4C.

Those who have worked long enough in IEPs will recognize this chart and the issue that it highlights. Recalling the first article in this series, the review of quantitative data can “quantify” a phenomenon that one knows to be true only in an anecdotal or subjective sense. In this case, there might be a sense that the teacher or teachers in 3A are applying the rubrics in a way that allows for an inflated pass rate. Conversely, the teacher or teachers in 3D might be applying the rubrics in a way that does not reflect the students’ true proficiency.

While there is a great deal of variation between these two sections in terms of pass rate, the standardized iTEP score shows much less variation. The students in level/section 3D are scoring in line with their peers in other sections; however, they are not passing the class at the same rate. IEP administrators know that such conditions are not sustainable in an IEP, where students will want to be confident that there is fairness in how they are evaluated across sections. However, administrators sometimes do not know how to start the conversation with teachers who believe, from an anecdotal perspective, that there is no problem. In the case of level/section 3D, demonstrating to teachers that their students perform on par with their peers in other sections might encourage thoughtful participation in norming exercises designed to mitigate such discrepancies. For level/section 3A, one might ask why these students were not able to score significantly above the average given the high rate at which students passed the class. Of course, pass rates do not tell the whole story on skill achievement. If percentage grades are calculated, that could be another factor to consider when examining this possible case of inflated grades.
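For readers who want to try this comparison with their own data, here is a minimal sketch in Python using pandas. The section names, average scores, and pass rates below are entirely hypothetical, and the 0.10 and 0.15 thresholds are arbitrary illustrations rather than iTEP recommendations; the point is simply to show how pass rates and proficiency scores can be lined up side by side and scanned for mismatches.

import pandas as pd

# Hypothetical per-section results: average overall iTEP score and pass rate.
sections = pd.DataFrame({
    "section":   ["3A", "3B", "3C", "3D"],
    "mean_itep": [3.9, 4.0, 3.8, 3.9],
    "pass_rate": [0.95, 0.78, 0.80, 0.55],
})

# How far does each section sit from the level-wide averages?
sections["pass_gap"] = sections["pass_rate"] - sections["pass_rate"].mean()
sections["score_gap"] = sections["mean_itep"] - sections["mean_itep"].mean()

# Flag sections whose pass rates diverge noticeably even though their
# proficiency scores sit close to the level average (3A and 3D in this data).
flagged = sections[(sections["pass_gap"].abs() > 0.10) & (sections["score_gap"].abs() < 0.15)]
print(flagged[["section", "mean_itep", "pass_rate"]])

A table like this is not a verdict on any teacher; it is simply a concrete starting point for the norming conversation described above.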

The analysis of percentage grades is what we will be looking at in greater depth in the next article. In essence, we will be seeking to discover if the percentage grade issued in an IEP class is closely associated with skill achievement. In other words, do higher grade percentages represent higher proficiency? Conversely, do lower grade percentages reflect lower proficiency? Using a proficiency test across the board in IEP can help to answer these types of questions.
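As a preview of that analysis, the sketch below (again in Python with pandas, using invented numbers) shows one simple way to pose the question: compute the correlation between course percentage grades and overall proficiency scores for the same students.

import pandas as pd

# Invented data: each row is one student's course percentage grade and
# overall proficiency score from the same term.
students = pd.DataFrame({
    "course_grade_pct": [72, 85, 91, 68, 88, 77, 95, 81],
    "overall_score":    [3.2, 3.9, 4.3, 3.0, 4.1, 3.4, 4.6, 3.7],
})

# A correlation near 1 suggests grades and proficiency move together;
# a value near 0 suggests they are telling different stories.
r = students["course_grade_pct"].corr(students["overall_score"])
print(f"Correlation between percentage grades and proficiency scores: {r:.2f}")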

Dan Lesho is Executive Vice President of  iTEP International. Prior to joining iTEP (International Test of English Proficiency), he was director of Cal Poly Pomona English Language Institute and a professor at Pitzer College.

See this article as it originally appeared on LinkedIn

Using data to visualize and evaluate placement and procedures in an IEP

This article, which originally appeared on LinkedIn, is part 2 of a four-part series on the quantitative approach to IEP evaluation. In this part, we will look at some illustrative sample data and discuss what that data may reveal. We will also examine the sorts of changes it could motivate administrators and teachers to make. Part 3 of this series will address questions surrounding choosing a test, and part 4 will look at how standard deviation and other statistical analysis of scores can be used to evaluate the effectiveness of student achievement systems.

First, thank you to those of you who responded to the first article in this series. I was pleasantly surprised by the amount of feedback. In particular, many of you responded to the section about the important role that trust plays in an IEP when considering quantitative data. Many leaders in education have written about how trust is a common denominator in successful schools, and IEPs are no different. Indeed, no productive dialogue about the analysis of quantitative data occurs without first establishing a common understanding about how the data will be used.

In most cases when considering quantitative data, it is simple descriptive statistics that usually tell the most compelling story. In this article, we will look at example results that an IEP might get by administering a standardized test such as iTEP to students across all levels of the program.
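To make that concrete, here is a minimal sketch in Python using pandas of the kind of descriptive summary behind the examples that follow. The levels and scores are invented for illustration; an actual program would load its own score data instead.

import pandas as pd

# Invented scores: one row per student, with the student's program level
# and scores in two of the skill areas.
scores = pd.DataFrame({
    "level":     [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6],
    "listening": [2.1, 2.4, 2.8, 3.0, 3.3, 3.1, 4.0, 4.2, 4.3, 4.5, 4.6, 4.7],
    "speaking":  [2.0, 2.2, 2.6, 2.9, 3.2, 3.0, 3.9, 4.1, 4.2, 4.4, 4.5, 4.6],
})

# Mean and standard deviation per level, per skill: does proficiency step up
# from one level to the next, and where is the variation widest?
summary = scores.groupby("level").agg(["mean", "std"])
print(summary)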

The three examples below represent three possible outcomes of a proficiency test given to all students in an IEP with six proficiency levels. Example one represents a perfect, but non-existent world. Examples two and three represent real-world IEPs where the data might elicit some interesting conversations.

While Example 1 represents an idealized progression of proficiency, it is useful to consider its characteristics for points of later comparison. What do we like about this example? First, overall scores and specific skill area scores increase between each level. Also, proficiency, as measured by the test, increases uniformly from one level of test-takers to the next level.

Example 2 is much more characteristic of what a typical IEP might find as a result of giving all students a standard proficiency test.

Of course, the most interesting characteristic of Example 2 is the significant increase in proficiency between levels three and four. In addition, there seems to be relatively little increase in levels subsequent to level 4. Still, this chart likely represents a successful IEP. In fact, many IEPs seem to have a particular level progression where proficiency increases substantially, and this increase could have a variety of pedagogical and/or practical explanations.

In Example 3, the IEP administrator might notice the decline in speaking and writing scores in level 6 and decide to keep an eye on the subsequent test administrations to confirm if this is a trend. Another trend to keep an eye on here might be the lagging scores of reading and listening skills. Those in the IEP might question if this confirms a sentiment about lagging skill achievement in receptive language skills.

Of course, these charts represent just one possible administration of a standardized proficiency test at one point in time. There is much more to learn through multiple administrations of a test. In some cases, IEP administrators recognize trends based on the term or season of the year. The spring quarter might show a fall off of scores in the higher levels as students are readying to exit the program. The fall and winter sessions might be characterized by more solid and consistent increases as students are very motivated to advance within the program and achieve on other standardized tests. IEP administrators might view these data in the context of the most common level at which students exit the program, determining if this in some way affects the results.
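A small extension of the same sketch (again with invented numbers) shows how term-by-term trends like these can be made visible: pivot average overall scores by level and term, then read across the rows.

import pandas as pd

# Invented average overall scores for the two highest levels across three terms.
results = pd.DataFrame({
    "term":    ["fall", "fall", "winter", "winter", "spring", "spring"],
    "level":   [5, 6, 5, 6, 5, 6],
    "overall": [4.3, 4.6, 4.4, 4.7, 4.1, 4.2],
})

# Levels down the rows, terms across the columns: a spring fall-off in the
# higher levels, or steadier fall and winter gains, shows up at a glance.
by_term = results.pivot_table(index="level", columns="term", values="overall", aggfunc="mean")
print(by_term[["fall", "winter", "spring"]])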

Whatever the conclusion, data and simple analysis such as these three example results can help guide the conversation at an IEP when it comes to student achievement and the length and structure of the program. It is important to reiterate the limitations of quantitative data: it is good for uncovering what is happening, but not necessarily why it is happening. Again, the data might best serve as a jumping-off point for a more in-depth conversation.

Dan Lesho is Executive Vice President of  iTEP International. Prior to joining  iTEP (International Test of English Proficiency), he was director of Cal Poly Pomona English Language Institute, and a professor at Pitzer College.

The right way to use test data to improve an English language program

Read this article as it originally appeared on iTEP Executive Vice President Dan Lesho’s LinkedIn page. This is part 1 of a four-part series on the quantitative approach to IEP evaluation. In this part, we discuss the motivations for, and potential pitfalls of, quantitative evaluation of student performance. In part 2, we will look at some illustrative sample data provided by iTEP, and I will discuss what that data may reveal and the sorts of changes it could motivate administrators and teachers to make.

“It seems like our students plateau at a certain level and have trouble making tangible progress after that.”

“Students usually exit our program before the last level and so are not as motivated to do well on tests and quizzes.”

If you’re an administrator or a teacher at an intensive English language program (IEP), the above statements probably sound familiar. Though common, such sentiments are all too often vaguely expressed and based on anecdote. This can create tension between administrators, teachers, and students. An IEP administrator might have the sense that students plateau at a certain level, but teachers of that level might feel that that is an unfair characterization. One teacher might feel that students are unmotivated on tests and quizzes while another might have had the opposite experience. A counselor might see several students struggling with writing and view this as a trend, while the bulk of the students feel that they are mostly fine at writing. How to settle such disputes?

A trend we have seen across all areas of education over the last decade is the increasing use of standardized testing. IEPs are no exception, with companies like iTEP International providing extensive testing services. Since the tests are standardized, the results can be pooled into large datasets that can reveal what happens to students as they move through the program or how the program itself is changing over time. In the context of an appropriate analysis, such data can help to confirm or undercut the vague sentiments with which we began.

The analysis of quantitative data can also raise questions that otherwise might not have arisen. For example, “Why does there seem to be more variation in the scores at our lower levels and less variation at the higher levels? Is that a good thing?”

The potential benefits are clear, but to successfully incorporate quantitative measures into your program, there are several things to keep in mind.

Trust and Communication

When incorporating quantitative data from standardized test scores into program evaluation, it is critical that administrators, teachers, and students share a mutual feeling of trust for one another. IEP administrators must trust that teachers are willing to change in light of data, and teachers must trust that administrators will not use data to paint a misleading picture. Most importantly, students must trust that both IEP administrators and teachers will evaluate their proficiency holistically, rather than reducing it to a single standardized test score.

The best way to establish trust is by taking the time to have conversations with students and teachers about the benefits of standardized test score analysis. Students and teachers respond positively when they are assured that the data will mainly be used to deepen the conversation about how best to serve students (rather than, say, as a way of finding places to reduce funding or punish poor performers).

Not All Data Are Created Equal

It is important to remember that just because all data from standardized tests can be viewed quantitatively, this does not mean it is all equally important or revealing. In addition, some tests might seem more important to students than others. In some programs, iTEP is used to inform matriculation into credit programs or advancement to higher levels. In these cases, the IEP can be more confident that the test-takers performed to the best of their abilities. If there are no stakes for students on a particular test, however, there may be reason to suspect that students did not perform at the top of their game. When there is consensus that students did their best, the scores can be more confidently used in program assessment. Otherwise, caution may be advised.

The Holistic Approach

Certainly, many educators have raised concerns about the effects of high-stakes testing and its linkages with funding and administrative decision making. Specifically, educational administrators must be cognizant that the data may only tell one part of a much bigger story about their students and their schools. Still, quantitative data is useful for exposing and tracking trends and could also reveal an aspect of the program that is not readily seen with a qualitative approach. In this way, quantitative data can be one aspect of programmatic evaluation that helps to shape the narrative as to how well programs are delivering for students.

Next month, we’ll look in detail at sample data from a program using iTEP exams and see what it may reveal about the program. In particular, we’ll consider average overall scores and also standard deviation of scores across and within proficiency levels.

Dan Lesho is Executive Vice President of iTEP International. Prior to joining iTEP (International Test of English Proficiency), he was director of Cal Poly Pomona English Language Institute, and a professor at Pitzer College.