Thursday, January 12, 2017 by David Bong
Avant STAMP 4Se was developed especially to assess the language proficiency of elementary students.
Although STAMP 4Se is a real-world proficiency assessment, its questions (items) and content are based on the daily school and home life of students attending elementary school in the U.S. To keep the assessment appropriate for elementary students, and to ensure that we are not inadvertently assessing their English reading ability, written directions are very limited and are always provided aurally as well, so test takers can listen to them. All critical instructions are given in English for every language tested. For three languages with a large number of heritage-learner test takers (Spanish, Cantonese, and Mandarin Chinese), the remaining instructions and the speaking and writing tasks are given in both English and the target language. For all other languages, these instructions and tasks are given in English only.
The STAMP 4Se project was initially developed through a Foreign Language Assistance Program (FLAP) grant to the state of Wyoming, which assembled a consortium with five states (South Carolina, New Jersey, Georgia, Kentucky, and Virginia). The Center for Applied Second Language Studies (CASLS) at the University of Oregon and the Center for Applied Linguistics (CAL) in Washington, D.C., developed content for the assessments and piloted the tests in collaboration with elementary schools in the cooperating states.
Monday, December 12, 2016 by David Bong and David Jahner
The United States is a country of immigrants, but historically the second generation of these immigrants has lost their parents’ language, making our country poorer for it economically, linguistically, and culturally. Why has this happened repeatedly throughout our history? One reason could be the pressures of acculturation and the need to fit quickly into America's English-speaking society. But in today’s interconnected world, does it make sense to allow these valuable skills to simply vanish? Fortunately, states across the country have finally begun to recognize the value of maintaining this treasure of linguistic and cultural heritage.
Friday, December 02, 2016 by David Bong
Avant is pleased to release the first report on National Avant STAMP Data since 2010. The report breaks down the data by individual language and program type: Elementary Immersion, Secondary, and Higher Ed. We welcome your feedback and questions.
Wednesday, November 30, 2016 by Randy Barrette, Instructor and CAEP Co-Coordinator for the College of Education at Morehead State University (KY) and Avant Professional Development Consultant
Which comes first in your language classroom, the student or the data tied to that student? Fortunately, the answer to that question is not an either/or response. In fact, do an Internet search for “Teachers and Data” and you will quickly find multiple approaches for gathering data about our students. You will see a growing body of literature about the need for teachers to have not just ‘assessment literacy’ but ‘data literacy’ as well. To highlight this point, the National Board for Professional Teaching Standards will unveil a new component this fall, which will ask teachers to show how they incorporate data about and from their students, families, and communities into their decision-making processes.
Thursday, November 03, 2016 by David Jahner, Executive Director, Southern Conference on Language Teaching and David Bong, CEO of Avant Assessment
Some state legislatures have proposed that students who learn to code should earn world language credits. Is learning to code equivalent to learning a world language? The very idea seems absurd to a language professional. Coding is certainly a valuable skill in today’s world. It is also understandable that legislators and school administrators need to find space in an already crowded curriculum to squeeze in more learning. However, we believe it is a mistake to replace world language study, the one place in the curriculum where students can learn crucial skills for success in our increasingly connected global society and economy.
Tuesday, October 25, 2016 by David Bong
Reliability and validity are two of the most misunderstood terms in language testing. Both are very important in determining whether a particular test is appropriate in a given situation.
Simply stated, reliability means that if you give the same test to the same student, they will get the same score. This is not easy to accomplish. For computer-scored questions (items) in reading and listening, a test developer needs to conduct a statistical analysis of the items, a process called psychometric analysis. The analysis is conducted on data from a number of test takers, who ideally span a wide range of skill levels. If an item is a good one, the analysis will confirm that it consistently discerns the true level of the test taker. In other words, if it is an intermediate-low item, novice-level test takers will consistently get it wrong, and intermediate and above test takers will get it right. The more consistently an item performs this way, the better it is at differentiating test takers’ language skill. The analysis places each item on a spectrum from easy to hard, and the result will show that not all intermediate-low items are created equal: some items at the same level are harder than others. That degree of difficulty within a level needs to be taken into account when building the test. A computer-scored test built from a well-laid-out set of items that have been psychometrically identified as good items should be a highly reliable test of those skills.
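To make the item analysis above concrete, here is a minimal sketch using invented response data. Real psychometric calibration (e.g., Item Response Theory) is far more involved; this shows two classic statistics that capture the same idea: difficulty (the proportion of test takers who answer correctly) and discrimination (whether stronger test takers are the ones getting it right).

```python
import statistics

def item_statistics(responses, total_scores):
    """responses: 1/0 for one item, per test taker;
    total_scores: each test taker's total test score.
    Returns (difficulty, point-biserial discrimination)."""
    n = len(responses)
    p = sum(responses) / n  # difficulty: proportion answering correctly
    # Point-biserial correlation between this item and the total score:
    # high values mean strong test takers tend to get the item right.
    mean_correct = statistics.mean(
        s for r, s in zip(responses, total_scores) if r
    )
    mean_all = statistics.mean(total_scores)
    sd = statistics.pstdev(total_scores)
    r_pb = (mean_correct - mean_all) / sd * (p / (1 - p)) ** 0.5
    return p, r_pb

# Hypothetical data: the three highest scorers got this item right.
p, r_pb = item_statistics(
    responses=[1, 1, 1, 0, 0, 0],
    total_scores=[90, 80, 85, 40, 50, 45],
)
print(f"difficulty={p:.2f}, discrimination={r_pb:.2f}")
```

An item with a discrimination near 1.0, as here, is doing exactly what the post describes: consistently separating test takers above its level from those below it.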
Monday, October 17, 2016 by David Bong
There are two fundamental test designs in language testing: Fixed Form and Adaptive. In a fixed form test, every test taker receives the same items (aka questions), and the test can be delivered on paper or computer. An adaptive test can only be delivered on a computer. The computer algorithm scores each item as the test taker answers it; then, based on the score of that item or cluster of items, the algorithm adjusts the level of the next item or cluster of items to the next “challenge” level. Until the era of computers, it was not possible to do adaptive testing on a large scale. It is still not possible to apply computerized adaptive test design to speaking and writing tests on a large scale. But it is now quite easy and common to use adaptive testing for reading and listening tests.
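The adaptive loop described above can be sketched in a few lines. This is a deliberately simplified illustration with invented item levels: it steps up one level after a correct answer and down one level after an incorrect one, whereas production adaptive engines typically select items using a statistical ability estimate rather than a simple step rule.

```python
def run_adaptive_test(item_bank, answer_fn, n_items=4, start_level=2):
    """item_bank: dict mapping difficulty level -> list of unused items;
    answer_fn(item) -> True if the test taker answers correctly.
    Returns the path taken and the final level reached."""
    level = start_level
    path = []
    for _ in range(n_items):
        item = item_bank[level].pop(0)   # next unused item at this level
        correct = answer_fn(item)
        path.append((item, level, correct))
        # Adjust the challenge level, staying within the bank's range.
        if correct and level + 1 in item_bank:
            level += 1
        elif not correct and level - 1 in item_bank:
            level -= 1
    return path, level

# Hypothetical bank with three difficulty levels, four items each.
bank = {
    1: ["a1", "a2", "a3", "a4"],
    2: ["b1", "b2", "b3", "b4"],
    3: ["c1", "c2", "c3", "c4"],
}
# A test taker who answers everything correctly climbs to the top level.
path, final_level = run_adaptive_test(bank, lambda item: True)
print(final_level)  # 3
```

Even this toy version shows the key properties the post goes on to discuss: different test takers follow different paths through the bank, and the algorithm quickly concentrates items near the test taker's actual level.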
There are many advantages to adaptive tests. Because the difficulty of the items adjusts to the level demonstrated by the test taker, they are consistently challenged without being overwhelmed by overly difficult items or bored by easy ones. With an adaptive test, the test taker can rise to the top of the scale based on their performance. Because test takers follow different paths through the test, cheating is much more difficult (yes, it happens everywhere). And because the adaptive algorithm quickly homes in on the actual level of the test taker, it can deliver more items at or close to the test taker’s skill across a wide range of levels, increasing confidence that the final score is accurate, all in a shorter amount of time than a fixed form test. In Avant STAMP, we also use the adaptive reading score to determine the level of the writing prompts we deliver, and the adaptive listening score in the same way for the speaking prompts, thus making the productive sections semi-adaptive and more appropriately leveled for each test taker.
Thursday, September 29, 2016 by David Bong
Language proficiency assessment can seem complex or even mysterious – a black box that just spits out a score. However, it isn’t that complicated once you get familiarized with a few basic concepts. I will be describing some of these basic concepts in the next few blog entries. Here is the first.
Standards-Based (or Criterion-Referenced) vs. Norm-Based (or Norm-Referenced)
Today, most proficiency assessments are standards-based, meaning that the test measures what test takers can do against a set of fixed standards. Avant STAMP (STAndards-based Measure of Proficiency) assessments use standards that are aligned with the ACTFL Proficiency Guidelines. The world language standards of virtually all U.S. states describe proficiency levels based on the ACTFL Guidelines. Often these standards are operationalized by testing organizations into benchmarks or rubrics. Avant has done this by adding some granularity to the standards so they can be used to score learner responses to test questions. Here are the Benchmarks and Rubrics for Avant STAMP. Other standards-based language tests use rubrics as well to identify specific standards. Here, for example, are the rubrics for the AP Spanish Language and Culture assessments.
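The distinction above can be made concrete with a small sketch. The cut scores and level names below are invented for illustration, not Avant's actual benchmarks: a standards-based score maps performance onto fixed criteria, while a norm-referenced score reports the same performance relative to a peer group.

```python
def standards_based_level(score, cut_scores):
    """Map a raw score onto fixed benchmark levels (criterion-referenced).
    cut_scores: (minimum score, level name) pairs in ascending order."""
    level = None
    for cut, name in cut_scores:
        if score >= cut:
            level = name
    return level

def norm_referenced_percentile(score, peer_scores):
    """Report the same score relative to a peer group instead."""
    below = sum(1 for s in peer_scores if s < score)
    return 100 * below / len(peer_scores)

# Hypothetical cut scores loosely modeled on ACTFL-style level names.
cuts = [(0, "Novice"), (40, "Intermediate"), (70, "Advanced")]
print(standards_based_level(55, cuts))                    # Intermediate
print(norm_referenced_percentile(55, [30, 40, 55, 60, 80]))  # 40.0
```

The key point: the standards-based result for a score of 55 is "Intermediate" no matter who else takes the test, while the norm-referenced result changes whenever the peer group does.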
Wednesday, August 24, 2016 by David Bong
It is definitely NOT what I learned in my college Japanese courses.
Proficiency is a much-discussed word in the language field. Our company delivers Avant STAMP and other “proficiency” tests. But what proficiency are we assessing? Proficiency at conjugating verbs or reproducing dialogues memorized in the classroom? After all, someone who does these well is probably proficient at memorizing grammar rules and set phrases. But do assessments of these skills tell us anything meaningful about the ability to function with that language in the real world? The problem with real language is that it is messy and unpredictable, not neat and orderly like a verb conjugation table or a classroom dialogue.
I studied Japanese for two years in college before heading off to Tokyo to study in earnest. In college we learned the famous “Jorden Method”, rich in set phrases and constructions to memorize. If A, then B. If B, then C, and so on. When I got to Japan, I quickly realized that even when I remembered A, instead of B coming next, it was C or K or Z that I heard, and I was completely lost.
Monday, July 18, 2016 by David Bong
Simply stated, Proficiency is the ability to use language in a real-world situation; Performance is the ability to use language in a limited and controlled situation, such as a classroom or a scripted, situation-based exchange; and Achievement is the ability to repeat language elements that have been taught and mastered at some level. Each has a role in language learning, but only proficiency is what people use to communicate in the real world.
Language learners at the novice level need to focus on memorizing vocabulary and the basic building blocks of language, so achievement exercises and tests are particularly appropriate and important at this level. You can safely ignore those ads for language learning programs that claim you won’t need to do any of that needless memorization. Memorization is important, but it is only part of the picture. Without beginning to apply those memorized words and phrases in an unscripted, proficiency-based learning environment, learners will not learn how to improvise and respond to the unexpected and incompletely understood situations they will experience in the real world. Instead, they risk becoming fearful of any situation in which they do not know every single word. This fear of the uncertain is the experience that many learners have sadly been given in traditional classrooms that focus on rote memorization and regurgitation. So, achievement assessments are important, but used excessively they can produce learners who are incapable of real-world use of the language elements they have learned in class.