An Introduction to Language Assessment Literacy

What is Language Assessment Literacy?

In its original sense, the term “literacy” is still commonly defined as the ability to read and write, but in its wider sense teaching professionals prefer to view it as a concept that brings together knowledge and competences in a given area of learning. We are all becoming increasingly familiar with terms such as Digital Literacy or Research Literacy, as well as Assessment Literacy, which will be the subject of a series of posts we will be sharing with you in the coming months, focusing specifically on the theme of Language Assessment Literacy, or LAL. Since the term first appeared at the start of the 1990s, there have been many attempts to define it, but we will use Pill and Harding’s simple yet concise definition from 2013, which describes LAL as a set of “competences that enable the individual to understand, evaluate and in some cases create language tests and analyse test data”.

Our focus for the coming months

In these posts, we will be covering these and other aspects of Testing, Evaluation and Assessment (TEA), looking at the challenges and choices we have, as well as teacher roles and responsibilities. Very often, TEA seems to be almost exclusively associated with high-stakes proficiency tests and the increasing need for students to show the world their level of English through formal certification. This is humorously summed up by that now famous adaptation of one of the great lines from English literature – “B2 or not B2, that is the question” – and teachers sometimes complain about finding themselves increasingly having to “teach to the test”, and about being “slaves to the rhythm” of this certification boom. But of course good assessment practice should be just as much about Assessment for Learning (AfL) as Assessment of Learning (AoL), and most teachers regularly tell us they would also value more training and guidance on when and how to use more informal, formative classroom assessment, how to assess specific skills, what different ready-made options they have, or indeed how they can create and use their own tests.

What makes a good test and what choices do we have?

In the coming months, then, we will be looking at exactly what makes a good test and the key questions we all need to ask ourselves: does the test do what it’s supposed to do, what it says it does, or what we want it to do? Does it adequately reflect real-life scenarios, or “target language use domains”, as they are called? There are many different approaches and options, different task types, and different modes of delivery. What matters is that we choose or create (and interpret) any test in terms of whether it is fair, reliable and practical, and, more generally, whether it is actually valid and useful for its intended purpose.

Where to start?

How prepared are we to make the right assessment decisions, and how can we be even better prepared? Until recently, even the most prestigious international teacher training programmes and qualifications only touched on the basic concepts of assessment. But over the last decade, TEA has gradually taken centre stage, and specific training and resources have become more easily available to teachers. At the international, national or regional events of organisations such as IATEFL and TESOL there is now normally a range of talks and workshops on offer, while different awarding bodies and organisations also now offer webinars, videos or MOOCs to contribute to better test preparation. In Spain and Portugal, the conference programmes of major teacher associations such as ACEIA or APPI now regularly feature specific assessment-focused sessions too, and we should not forget the contributions of organisations such as EALTA or special interest groups such as the recently founded GIELE in Spain, or IATEFL TEASIG, both of which have interesting conferences coming up in the next few weeks, in Valencia and London respectively (the second actually hosted by Pearson at our offices in the Strand).


The aim of this series of posts is to complement and build on these teacher training events and resources, by helping teachers become more consciously involved in making appropriate assessment decisions and encouraging them to reflect on the skills and practices that can shape their own teaching and testing and successfully influence their students’ learning journeys. While we would certainly not go as far as Popham, who in 2004 was already provocatively declaring that “assessment illiteracy is professional suicide”, it is undeniably now an important part of our role (albeit not a matter of life and death!), and we hope that by sharing some ideas and thoughts with you this year, we can make Language Assessment Literacy a slightly less formidable and more engaging proposition. If you’re interested in Testing, Evaluation and Assessment, why not come on in and join us for some TEA in the next few months?!


Pill, J., & Harding, L. (2013). Defining the language assessment literacy gap: Evidence from a parliamentary inquiry. Language Testing, 30(3), 381-402. DOI: 10.1177/0265532213480337

Popham, W. J. (2004). Why assessment illiteracy is professional suicide. Educational Leadership, 62(1), 82-83.
