Ibrayeva E.S., Master of Pedagogy,
Sultan A., student of the Pedagogical Faculty,
Buketov Karaganda State University

Features of assessment in English for Specific Purposes
Assessment in English for specific purposes (ESP) is in principle no different from other areas of
language assessment. Language assessment practitioners must take account of
test purpose, test taker characteristics, and the target language use
situation. All language assessment specialists adhere to accepted principles of
measurement, including providing evidence for test reliability, validity, and
impact. Finally, professional language testers are bound by international
standards of ethics which require, among other considerations, respect for the
humanity and dignity of test takers, not knowingly allowing the misuse of test
scores, and considering the effects of their tests on test takers, teachers,
score users, and society in general. ESP assessment is held to these same
principles. The traditional needs analysis in ESP covers the purpose of the
assessment, the personal, educational, and knowledge characteristics of the
test takers, and the context of specific
purpose language use. Test developers must offer evidence that the tests they
design provide consistent measurements of specific
purpose language ability, that the inferences and decisions based on test
performance are warranted, and that the consequences of the test are the
intended ones and are beneficial for test takers.
They are equally bound by professional ethical standards. If all this is true,
in what ways can we reasonably distinguish assessment in ESP from other areas
of language assessment? The simple answer,
of course, is that ESP assessment takes place in ESP programs.
Assessment instruments are needed in specific
purpose courses, as in all language programs, first, to give learners an opportunity to show what they have learned and what
they can do with the language they have learned by being given the same
instructions and the same input under the same conditions. Tests are needed
secondly, to get a “second opinion” about students’ progress and to help confirm teachers’ own assessments and inform their decisions about students’ needs. Thirdly, tests are needed to provide for some standardization by
which teachers and other stakeholders judge performance and progress, allowing
for comparisons of students with each other and against performance criteria
generated either within the ESP program or externally. Finally, tests help to ensure that student progress is judged in the same way from one time to the next; in other words, that the assessments are reliable. These are reasons for formal testing in any language-teaching program, and ESP programs are no
different in their need for assessment instruments that reflect the
content and methodology of the courses, which we assume are themselves based on
an analysis of the target language use situation. Traditionally, ESP courses
and assessments have been contrasted with
“general English” courses and
assessments, though this distinction has been somewhat blurred in recent years,
particularly since the publication of Bachman and Palmer’s book, Language Testing in Practice [1]. All
language tests require the developers to define the purpose of the test,
conduct a needs analysis, collect language use data in context, analyze the
target communicative tasks and language, and develop test tasks that
reflect the target tasks. ESP assessment instruments are usually
defined fairly narrowly to reflect a specific area of
language use, such as English for academic writing, English for nursing, Aviation English, or Business English. Thus, ESP tests are based
on our understanding of three qualities of specific purpose language:
first, that language use varies with context, second, that specific
purpose language is precise, and third, that there is an interaction between
specific purpose language and specific purpose background
knowledge. With regard to contextual variation, it is well known that
physicians use language differently from air traffic controllers,
university students in economics use language differently from students in
chemistry, and football/soccer players use language differently on the field than do ice hockey players on the rink. Furthermore, physicians use English
differently when talking with other medical practitioners than when talking
with patients, though both contexts would be categorized under the heading of
Medical English. Cotos showed, by means
of corpus analyses of published research article introductions in 50 different
academic disciplines, that discourse conventions exhibit both similarities and differences across disciplines [2]. Context has been defined
variously over the years, but the classic features of context proposed by
Hymes are still useful today:
situation, participants, ends (purposes), act sequence (organization, content),
key (tone), instrumentalities (language, medium), norms of interaction, and
genre [3]. The manipulation of these aspects of context in ESP tests challenges
test takers to respond to differences in communicative context in ways perhaps
more finely tuned than in more general language assessments. Secondly,
regarding the notion that specific purpose language is precise, what
outsiders refer to as unnecessary “gobbledygook” in academic, vocational, and technical fields in fact reflects the desire of practitioners in those fields to be more precise and accurate in their communication.
Legal language, or “legalese,” is the most often cited example of such precision. Although the language of such legal texts might seem unnecessarily obscure to non-lawyers, we would suggest that the reason for its jargonish tone is the legal mind’s desire for precision, covering all possible contingencies and mitigating the possibility of misinterpretation or ambiguity. This is the second distinguishing feature of
specific purpose language.
Finally, an important distinction between assessment in ESP and
assessment in other areas of language teaching/learning is the relationship
between language ability and background knowledge. In traditional, non-specific purpose assessment, content or background knowledge has often been viewed as a confounding factor, masking “true” language ability and producing “construct-irrelevant variance.”
References:
1. Douglas, D. (2005). Testing languages for specific purposes. In E. Hinkel (ed.), Handbook of Research in Second Language Teaching and Learning, 857–68. Mahwah, NJ: Lawrence Erlbaum.
2. Cotos, E. (2011). Potential of automated writing evaluation feedback. CALICO Journal 28(2): 420–59.
3. Hymes, D. (1974). Foundations in Sociolinguistics: An Ethnographic Approach. Philadelphia, PA: University of Pennsylvania Press.