Better together? Assessing reading and writing

Written texts: A thing of the past?

The advent of digital technologies and the rise of the internet have altered the way we read and write considerably over the past few decades, but they have also increased access to written texts and made them easier to produce, share and publish. And we are not just talking about posting on social media either. The rising popularity of English as a Medium of Instruction (EMI) and Content and Language Integrated Learning (CLIL), as well as the use of English in the workplace, means both students and professionals are increasingly exposed to written English. So despite the general perception that we are witnessing the demise of these skills, they very much remain a central part of how we study, how we work and how we interact. Consequently, assessing these skills is as important as ever for us as language teaching professionals.

Valid, reliable… and real

Key to ensuring the validity and reliability of any reading or writing assessment is the suitability of the text we are asking the test-taker to read or produce. Obviously this means making sure it is at the level we are testing. Appropriate word and time limits also ensure that test candidates are given a fair and equal opportunity.

But it is no minor concern that the text should also be one that test-takers would encounter in the real world. Blog posts, emails, or reviews for an online audience (think Tripadvisor) are all examples of texts that did not exist a few decades ago but which are commonplace today. And while one may make the case that, depending on its content or purpose, a blog post closely resembles an article or essay, and that an email is really little more than a letter sent electronically, the medium does often affect the message, and conventions for these types of texts have changed over time. What we are assessing, and indeed how we are assessing it, needs to reflect these new realities.

Assessment criteria and task types

So how might we assess the skills necessary to read or write at a particular level of proficiency? Most teachers will be familiar with descriptors or learning objectives from sources like the Common European Framework of Reference (CEFR) or the Global Scale of English (GSE). Though these statements, written as descriptions of what students should be able to do at different stages of their learning, are extremely useful for planning lessons and designing courses, they have traditionally been most closely associated with assessment, as they provide a clear rationale and focus for evaluation.

Some tasks may be better suited than others to determining whether test-takers are capable of fulfilling particular descriptors. When evaluating reading, a selective cloze test, in which specific words are deleted and must then be supplied by the test-taker, can be useful for determining whether they can identify cohesive devices or know language used to perform certain functions (persuading, ordering, etc.). On the other hand, inferring attitudes, making predictions or interpreting main ideas may be better assessed using matching activities, short written answers or even multiple choice. But once again, the degree to which the task mirrors a real-world skill is also of interest. In everyday life people are often asked to write about what they have read, but seldom (perhaps never) are they given a list of multiple-choice questions (though multiple choice obviously has its benefits when it comes to marking quickly and efficiently).

Integrating the skills of reading and writing

In previous posts we have discussed the difficulty that often arises when attempting to isolate the skills for assessment purposes. Communication almost always entails both productive and receptive skills, and it often crosses media: we speak about things we have read, write about things we have heard, and carry on conversations in which active listening is fundamental to participating effectively. And though we tend to report exam results by individual skill, the exam format itself can seldom keep the skills so separate. To take just one example, test-takers are normally required to read written instructions and prompts in order to complete writing, listening and speaking tasks. And dictations, which test listening, are also normally used to test writing at the same time. So should an effort be made to develop exams in which the skills are more integrated?

As we mentioned earlier, one way we might integrate the skills of reading and writing is by using note-taking or short written answers as tasks on a reading text, mirroring the type of activity a student might do in real life. Another way is simply to give test-takers a writing task which asks them to take into consideration the ideas in a text they have seen previously in the reading section of the exam. This approach, which is a key feature of one of the two writing tasks on the Pearson Test of English General (see sections 7 and 8 on the sample tests here), will be familiar to most teachers, who often assign writing based on the input from a lesson. Giving students the opportunity to relate their written work to something they have read fairly reflects activities they have done in class, and indeed may often do in work or other contexts.

Summing up

Reading and writing continue to be crucial skills that allow us to interpret the world around us, communicate important messages and take action. The way we use these skills has changed, and good assessment practice takes this into account by making sure texts are up to date and reflect those found in today's world, as well as by asking test-takers to perform tasks which are realistic approximations of what they may do in their everyday lives.

Other blog posts in this series:

An Introduction to Language Assessment Literacy
Assessing listening: challenges and choices
Assessing speaking: an ‘inexact science’

You may also be interested in:

How to develop writing skills for PTE General
