Language, Content and Skills in the Testing of English for Academic Purposes


Raphael Gamaroff, South African Journal of Higher Education, 12 (1), 109-116, 1998.

 

Abstract

Introduction

Review of the literature

Reliability and validity

Method

Results

Discussion

Conclusion

References

Appendix

Abstract

In modern education there is a critical need to understand better the relationship between language, (organisational) skills and content/subject matter. Nowhere is this more evident than in the teaching of academic and scientific discourse. An examination is made of the evaluations of second language lecturers of English for Academic Purposes (EAP) and of the evaluations of Science lecturers at a South African historically black university (HBU). Comparisons are made, first, within each group of raters (lecturers) and, second, between the two groups. Both groups of raters were asked to evaluate first-year students’ essays on the “Greenhouse Effect”. The findings show a wide range of scores and judgements within each group as well as between the two groups of raters, which affects the reliability of the scores. Reasons are offered for this wide variability in judgements and scores.

Introduction

In education there is a critical need to understand the relationship between language skills, organisational skills and content/world knowledge/subject matter. Nowhere is this more evident than in the testing of academic and scientific discourse. Language teachers and subject teachers (e.g. biology and history teachers) can learn from one another with regard to the testing of academic discourse. To test, and accordingly to teach, academic and scientific discourse, science teachers need a knowledge of language and language teachers need a knowledge of science.

A comparison is made between the evaluations of second language lecturers of English for Academic Purposes (EAP) and the evaluations of Science lecturers at a historically black university in South Africa, where the mother tongue of students is a Bantu language. In South Africa the vast majority of learners are not mother-tongue users of English; thus the EAP with which educators and researchers in South Africa are mostly concerned is English as a second language.

The article consists of four parts: 1. The problematic distinction between language, content/subject knowledge and organisational skills/argument; 2. Definitions of the terms academic and scientific discourse, as well as of the two key terms in testing, namely reliability and validity; 3. Method of the investigation; 4. Results of the investigation and discussion.

Review of the literature

One of the major findings in the South African Committee of University Principals Report (HSRC, 1981) was that the lack of proficiency in the second language as medium of instruction, namely English, was the main cause of poor achievement among black learners. And one of the preliminary findings of the “Programme for the educationally disadvantaged pupils in South Africa” (Botha & Cilliers, 1993) was that there are three major areas of concern in black schools in South Africa, namely, cognitive deprivation, language inadequacies and consequent scholastic backlogs.

For Young (1987:164) this scholastic backlog, manifested in the poor pass rate of Standard 10 pupils in the Department of Education and Training (DET), “is often rooted in language incompetence, the causes of which cannot be found in the English subject classroom alone, but across the curriculum in every subject taught through English as a medium.” (The DET, now defunct, was the controlling body in South African black education until 1994, the year in which South Africa’s first democratic elections were held.)

Young’s opinion is a common one, e.g. Mcintyre (1992:10), for whom the problem in English as the medium of instruction for disadvantaged black learners “lies with language competence and not with subject competence”. Contrary to Mcintyre, other researchers are loath to make such a clear distinction between “language” and “content” (Saville-Troike, 1984; Snow & Brinton, 1988; Spack, 1988; Bradbury, Damerell, Jackson & Searle, 1990; Murray, 1990; Starfield, 1990; Starfield & Kotechka, 1991; Angelil-Carter, 1994). These authors believe that language cannot be taught without content and skills. Consider some of the issues in the problematic distinction between language, content and (organisational) skills:

Mcintyre’s “language competence” could mean linguistic competence or general language proficiency. Linguistic competence is only one part of language proficiency.

Linguistic competence is the knowledge of how to relate sounds to meaning. It is this relationship that we study in pure linguistics, whereas language proficiency is concerned with all of the following (Bialystok, 1978:71-75):

– Language input – exposure to the language.

– Knowledge in language use – storage of input. This knowledge is of three kinds: (1) “explicit linguistic knowledge” (conscious knowledge of the language); (2) “implicit linguistic knowledge” (unconscious knowledge of the language); and (3) “other knowledge” such as mother tongue, other languages, and knowledge of the world. I equate the latter with content knowledge.

– Output – the product of comprehension and production.

One of the great mysteries remains the relationship between input, information-processing and output (Mandler, 1984), which is closely tied up with the often deceptive relationship between the notions of “language” knowledge and “content” (or “subject”) knowledge: deceptive in that researchers (e.g. Hughes, 1989:82) often assume that there is a clear distinction between these two notions, which is not the case. On the one hand, it is recommended (Hughes, 1989:82) that we test language proficiency and nothing else, because language testers, according to Hughes, are “not normally interested in knowing whether students are creative, imaginative, or even intelligent, have wide general knowledge, or have good reasons for the opinions they happen to hold”. Much of the research into language learning has concentrated largely on linguistic knowledge, because it is widely believed that linguistic knowledge can be separated from content knowledge and skills, as in Hughes above (Saville-Troike, 1984:199).

It is difficult, perhaps impossible, to separate the language-specific cognitive structures of language proficiency from content knowledge (Langacker, 1987; Taylor, 1989:81ff) or from problem-solving abilities (Vollmer, 1983:22; Bley-Vroman, 1990) which, according to Bialystok above, are components of language proficiency.

With regard to language and content, it is difficult to sort “knowledge into two discrete epistemic sets: knowledge of language [i.e. linguistic knowledge] and knowledge of the world” (Biggs, 1982:112). As a result of this difficulty, it is often not possible to decide which of these two kinds of knowledge – not neglecting other possible factors – causes academic failure. Bolinger (1965) refers to the attempts to distinguish between these two kinds of knowledge as the “atomisation of meaning”. One might also describe this attempt as a splintering of culture, where culture “as a whole may be characterizable as a vast integrated semiotic in which can be recognized a number of subsemiotics, one of which is language” (Lamb, 1984:96).

With regard to content and skills, content can be learnt by rote (where rote memory does not require any higher-order skills) or it can be learnt so that it becomes assimilated into cognitive structure, which requires higher-order cognitive skills, e.g. comparison and transfer. Content may not be understood because the higher-order skills are not developed enough to make content part of cognitive structure. So if content is a problem, so also are skills, because the one is enmeshed in the other.

We have a dilemma. On the one hand, it is recommended (Hughes, 1989:82) that we test language ability and nothing else, but, on the other hand, it is difficult, perhaps impossible, to separate language-specific cognitive structures from general problem-solving abilities (Vollmer, 1983:22; Bley-Vroman, 1990) or from content (Taylor, 1989:81ff). What Mathews (1964:89) states with regard to reading applies to EAP as a whole:

As one cannot ignore the importance of human variabilities and the situational and existential resonance of every reading [EAP] experience, so he cannot think long about reading (EAP) without thinking about content…Nevertheless we speak of “reading [EAP] teachers” in contradistinction to “content area instructors”. Whom do we fool in asserting such a hopelessly arbitrary dichotomy…For our present purpose, reading [EAP] may be defined as a civilized means of enlarging the mind in one or more identifiable ways: and the character of that enlargement will be controlled by the material – the content. Nowhere in the world do means exist totally isolated from materials and objects; then how can we so often act as if the act of reading [EAP] can be abstracted from content? I am saying that the study of reading [EAP] as something divorced from content is hopelessly limited, and ultimately false, in its approach.

Reliability and validity

Two key concepts in evaluation are reliability and validity. The reliability of a test is concerned with the accuracy of scoring and the accuracy of the administration procedures of the test (e.g. the way the test is compiled and conducted). I focus on interrater reliability, which has to do with the equivalence of scores and of judgements between raters. In the discussion of raters’ judgements I deal with the three components/criteria of language (grammar, vocabulary and punctuation), topic (content knowledge) and organisation.

If reliability is concerned with how we measure, validity is concerned with what we are supposed to measure, i.e. with the “purposes of a test” (Carmines & Zeller, 1979). In this investigation we are measuring the three criteria of language, content and organisation. A distinction is often made between content and skills (where the latter refers to the organisation of content). EAP is meant to teach academic skills. But as the data will show, it is not easy to distinguish between content and skills. Nor is it easy to distinguish between these two and “language”.

It is possible for a test to have high reliability, where raters give equivalent scores, but this does not necessarily mean that the tests are valid, i.e. that these scores represent what they are supposed to measure. For instance, if all raters of an essay believe that language is the most important criterion in academic discourse and accordingly give equivalent scores in terms of language, the interrater reliability will be high. But the question is whether language should indeed be the most important criterion in academic discourse. If teachers believe that language should play second fiddle to other criteria such as content and organisation, then the test is not being used for the intended purposes of measuring academic discourse, because academic discourse should involve high proficiency in all three of the abovementioned criteria. If one of these criteria is underplayed, the use of the test would be invalid. To summarise the problem in reliability and validity: raters may choose whatever they fancy to measure and in so doing may obtain equivalent scores, i.e. high interrater reliability, but if they do not measure what they are supposed to measure, then the use of the test would be invalid.

[Oller (1979:272) defines reliability in terms of correlation (the way in which two sets of scores vary together) and rejects equivalence (in scores between raters) as a factor in reliability. I prefer Ebel and Frisbie’s (1991:76) view that equivalence in scores is also an important aspect of reliability].
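The bracketed point above, that correlation-based and equivalence-based views of reliability can come apart, lends itself to a small computational sketch. The scores below are invented for illustration only; they are not the study's data. Two raters whose marks track each other perfectly but differ by a constant gap show perfect correlation (Oller's criterion) while being far from equivalent (Ebel and Frisbie's concern):

```python
def pearson(xs, ys):
    """Pearson correlation between two lists of scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def mean_abs_diff(xs, ys):
    """Average gap between the two raters' marks: equivalence."""
    return sum(abs(x - y) for x, y in zip(xs, ys)) / len(xs)

# Hypothetical marks: Rater B scores every essay exactly 20 points below Rater A.
rater_a = [80, 70, 58, 65]
rater_b = [60, 50, 38, 45]

print(pearson(rater_a, rater_b))        # approx. 1.0: perfect correlation
print(mean_abs_diff(rater_a, rater_b))  # 20.0: far from equivalent scores
```

On a correlation-only definition these two raters are perfectly reliable; on an equivalence definition they disagree by two letter grades on every script, which is precisely the kind of disagreement the present study reports.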

Method

The research was carried out at the University of Fort Hare, where English is the medium of instruction. The vast majority of the students are mother tongue speakers of Xhosa and South Sotho.

Data were collected from two sources at Fort Hare on two occasions, one year apart: 1. the English Department and 2. the Science departments. The Science departments comprise the biology, zoology, geography and agriculture departments. The data from the English Department were collected from an EAP workshop on evaluation held in 1990. The following year I presented the same protocols used in the EAP workshop to a group of Science lecturers for evaluation. I then compared the evaluations of the English lecturers with the evaluations of the Science lecturers. The English group is dealt with first, followed by the Science group.

English group

The EAP evaluation workshop was held for the English Department in the second semester of 1990. The workshop consisted of 11 raters: 10 from the English Department and the workshop leader from another university. I also participated in the workshop, and am designated in this article as English Rater 3. Five of these English lecturers were literature lecturers who, to my knowledge, had little or no training in EAP.

The essays that were evaluated belonged to students from the University of the North West (formerly the University of Bophuthatswana) who had taken a course in Special English (SPEN) based on Murray and Johansen’s “EAP in South Africa Series” (1989), which is now used in several universities in South Africa. The purpose of the SPEN course was to help students to read and write for academic purposes. The following academic skills were taught in three consecutive sections:

1. Reading for a specific purpose to find out the answer to a question, and writing definitions, notes and summaries. These skills prepare the student to write an argument. (Book 1, “Reading for meaning”).

2. Writing for an academic audience, which consists of planning, drafting and editing. (Book 2, “Writing for meaning”).

3. Writing to improve, which focuses on lexis, grammar, structure and style. (Book 3, “Write to improve”).

The final exam essay of the SPEN course, which I deal with in this article, was a response to a question based on Appendix 6 of Murray and Johansen’s Book 3. The title of the essay question was: “Discuss how climatic changes brought about by the Greenhouse Effect are likely to affect the world’s plant and animal species.”

During the SPEN course, students studied Appendix 6 of Murray and Johansen (Book 3), which consisted of 16 readings on different aspects of the Greenhouse Effect. They were instructed to write mind maps and notes on the readings, and to think about the kind of questions that could be set in the exams. The idea was that students would transfer the reading and writing skills practised on other topics in the course work to the examination topic, namely, the Greenhouse Effect. The students had no previous teaching on the background material for the essay question, but were expected – after they had spent a large part of the SPEN course learning about organisational skills – to prepare for the exam during the latter part of the course by studying the 16 readings at the back of their textbook. In the exam they were expected to apply the study skills they had learned to the prepared material. There was only one question, which was presented for the first time at the exam. The fact that notes and reading material were not allowed into the exam room and that the question was not accompanied by any helpful texts needs to be taken into account, because, in my opinion, the absence of these supports made the exam very difficult for a first-year EAP student.

The EAP workshop consisted of the following procedures:

1. A discussion on the criteria of academic essay writing. The following main criteria were listed by the lecturers.

– Knowledge of topic/content/subject matter

– Clear expression

– Confident handling of material

– Argumentation

– Appropriate selection of subject matter (content)

– Cohesion/coherence/clarity.

2. In the main part of the workshop, each rater (lecturer) was presented with photocopies of four protocols, one from each of four students, and was given approximately half an hour to evaluate them. Raters had ample time to study the background reading material to the topic (the readings in Murray and Johansen’s Book 3), which was provided prior to the workshop.

3. Each of the 11 raters provided a score for each of the four protocols; 44 scores in all.

4. Each rater supplied reasons for their respective scores.

I deal mainly with Protocol 1 (see the appendix for a copy of Protocol 1), but also deal briefly with Protocol 2. I have only selected the opinions of seven of the 11 English raters on Protocol 1, because four of them did not make direct comments on this particular protocol.

Science Group

I gave 18 Science lecturers copies of two protocols (Protocols 1 and 2) for evaluation, together with instructions containing guidelines on the criteria to be evaluated, and received eight responses. The data of these eight lecturers are used in this article. The lecturers were requested to comment on the following: 1. content, 2. organisation, and 3. language.

The Science lecturers were not informed that the protocols belonged to EAP students, and assumed that the essays were written by first-year Science students. The instructions to the Science lecturers did not contain the background information that was supplied to the English workshop group. The Science group, therefore, did not know whether the students had previously been exposed to the background knowledge of the examination essay through explicit teaching or through the students’ own private research. This lack of background information should not have had any significant effect on their judgements, although it is possible that the scores could have been affected. For example, if a rater judged the student to be hopeless on all three criteria, but was aware that the student had not been taught the material beforehand, the rater is likely to be more lenient and accordingly award a higher score. But I emphasise that we must keep judgements and scores as two distinct issues, as I shall show shortly.

 

Results

The table below shows the scores and summaries of the judgements of the English group (seven raters) and Science group (eight raters) on Protocol 1. The three criteria are content/topic, organisation and language. The blank spaces indicate that a rater (lecturer) has apparently ignored a specific criterion. This could either mean that little or no importance was attached to this criterion or that raters who left blank spaces thought that the criteria overlapped.

[Table: scores and summarised judgements of the English raters (seven) and Science raters (eight) on Protocol 1, under the criteria content/topic, organisation and language]

Discussion

With regard to the judgements and scores of both groups, the following observations are important:

1. Of the 14 raters who commented on organisation, seven raters (two English, five Science) were positive, and seven raters (four English, three Science) were negative. Thus within the two groups combined there is an equal split in opinion; seven positive and seven negative.

2. Raters emphasise different criteria, and apparently ignore others. If a rater comments on either organisation or content but not both, this does not necessarily mean that the rater ignores one or the other, but perhaps that content is implicit in organisation. The reason is that it is difficult to separate content from organisation. Nor do I think that one can separate language (as defined above) from the other two criteria, except in Du Toit and Orr’s superficial sense of language as “clothes covered with food stains and dirt, with perhaps a few missing buttons” (Du Toit & Orr, 1987:199).

3. Similar scores between raters do not necessarily mean similar judgements. For example, English Raters 2 and 6 have the same score (58%) but they have opposite judgements concerning the strength of argument (organisation) of the essay. They agree that the content is on the topic, which strengthens my argument, because if they agree that the content is on the topic it is possible to isolate the criterion that is causing the disagreement (i.e. organisation), and thus to infer that radically different views on organisation (strength of argument) yield identical scores.

4. If similar scores between raters do not necessarily mean similar judgements (as in 3 above), it is also true that different scores between raters do not necessarily mean different judgements. For example, English Raters 3 and 6 have radically different scores (80% and 58%) but similar judgements. Here is an example from the Science group: Science Raters 2 and 5 have radically different scores (30% and zero respectively) yet similar judgements on organisation and language. Science Rater 5 said the content was “poor”. I would think that Science Rater 2, who does not say anything about content, thought along similar lines.

5. Some of the descriptions are vague; for example, it is not clear whether comments such as “General and broad” (English Rater 4) and “A few major errors” (English Rater 6) are favourable or unfavourable judgements. In the case of the latter example, it is possible that English Rater 6, who is a second language speaker and sometimes makes a few grammatical faux pas in conversation with his colleagues, has confused “There are a few major errors” with “There are few major errors”. The first remark is negative, the second positive.

6. What is of ultimate interest is the different academic perspectives behind these judgements. In this regard, I quote English Rater 2 (verbatim from the videotape of the workshop, hence the conversational style):

On the topic, but argument seems to be weaker. It didn’t have the strength of argument in it. It is effect-centred rather than cause-and-effect centred. Some bits of logic in paragraphs 1 and 2 I could follow – incoherence.

English Rater 2 maintains that Protocol 1 is more a literary discourse (i.e. “effect-centred”) than a scientific discourse (i.e. “cause-and-effect centred”), where the emphasis in scientific discourse, he claims, should be on logic. This implies that literary discourse does not require logic. Rater 1 did make the point in the English workshop discussion that logic is also required in literary discourse. I would add that logic is not a unitary entity but rather consists of a cluster of diverse “logics”, just as discourse consists of a cluster of diverse “discourses”.

Contrast English Rater 2’s comment quoted above with English Rater 3’s (myself) comment on Protocol 1 below:

Is very well sorted out – the intro is there. In the next paragraph, the student actually discusses animals and plant reproduction. So he [she] takes one aspect, he takes reproduction of animals and plants and gives a general idea about it and then in the next paragraph we have supporting ideas; some animals produce their offspring and so on. So he is taking the reproduction side of animals and plants, and shows how that is affected by the Greenhouse Effect. In the next paragraph, he discusses the effects. He is getting more specific.

English Rater 2 is looking for the right things (e.g. logic) in EAP, but does not find them. His claim is that the student indulges in “effects” (he means literary devices) instead of the cause-and-effect of scientific discourse. On the contrary, Rater 3 (myself) finds little evidence of literary “effects” and much evidence of logical progression. In fact, academic discourse, whether literary discourse (the creation of a literary work as well as the analysis of one) or scientific discourse, requires logical thought.

It is important to note that the essay question was unaccompanied by any textual aids, which means that the student, during the EAP course, had to ingest, chew, digest and assimilate 16 diverse texts on global warming, some of them quite long and complicated, ranging from the causes of global warming to its effect on plants and animals. My judgement above (English Rater 3) takes this into account, as well as the fact that the student had no choice of question in the exam. For these reasons I thought that Protocol 1 deserved high marks. I concede that after considering (being swayed by?) the scores and judgements of the other participants in the English workshop, I thought that my 80% was a bit too generous, and that about 70% would have been a fairer score. Having conceded this point, I nevertheless do not share the general opinion of the Science group that the language of Protocol 1 was poor. On the contrary, I do not think that the quality of the language was spoiled by a few “missing buttons” (Du Toit and Orr above).

It is interesting that Rater 2’s main training is in literature, while Rater 3 is trained in Applied Linguistics as well as in the teaching of General Science. Of course there is no necessary causal link between the two raters’ different judgements and their different training.

Here is a glaring example of different perspectives that shows the differences between a rater who has been trained in the hard sciences (e.g. biology and chemistry) and a rater who has been trained in the humanities (e.g. literature and philosophy). Consider the title of the essay again.

Discuss how climatic changes brought about by the Greenhouse Effect are likely to affect the world’s plant and animal species

To illustrate my argument I need to refer to Protocol 2, which I introduce into the discussion. Here is the relevant sentence in Protocol 2: “The danger of the use of these gases is the climatic changes which affect the plant,man and animal species.” The student devoted a large part of the essay to the effects of these gases on man.

I introduce English Rater 10 (the leader of the workshop), who was not included in the discussion of Protocol 1 but whose comment on Protocol 2 is of interest: “At the end of the 2nd paragraph to last paragraph about people that was not really relevant.” Accordingly, English Rater 10 judged the student to be completely off the topic and awarded the student a score of 10%.

The Science raters had a different interpretation of this sentence. Three of them circled the word “man” in Protocol 2, where one of them commented in the margin of the student’s protocol that “man is an animal species”. These three Science raters did not think it necessary to distinguish between man and animal in this particular sentence, because in science “man” is subsumed under “animal”. The Science raters are not saying, as English Rater 10 does, that it is illegitimate for the student to include “man”. On the contrary, they think it is quite legitimate to do so. Their criticism is that it is incorrect in the biological sciences to say “man and animal” as if man were separate from the animal kingdom.

A (biological) scientist regards a human as an animal, while an English teacher (in this case, Rater 10) does not usually regard man as an animal. So, for an English teacher (e.g. our English Rater 10), if students specify the effects on “man” in this essay, they could be heavily penalised for writing off the topic. Recall English Rater 10’s score of 10% on this protocol.

7. The specialised background knowledge required was too difficult for first-year EAP students to learn (on their own). Even some of the Science raters differed, quite markedly in some cases, on the truth of the facts presented. The point is that this topic should not have been given to the SPEN students to study, especially without any assistance from the lecturer. The fact is that it is not possible to separate discourse skills from factual knowledge. Therefore it would not be correct, especially in EAP, to expect students to be able to transfer skills from one kind of text to another (e.g. a sociology text to a biology text) without taking into account the content knowledge of the different texts. With regard to the relationship between language skills and content knowledge, consider the following comments of two Science raters on Protocol 1:

Science Rater 6

Comment A is addressed to this researcher; comment B to the student.

A. `I’m far more concerned with the underlying processes than the detailed language problems. Grammar and spelling on borderline.’

B. `Your understanding of the processes is shaky. Vocabulary problems hinder your expression of ideas. Overall sequence of essay is sound.’

There are two problems here: the first is whether it is possible to have a sound sequence of ideas when there are vocabulary problems which hinder the expression (sequence) of ideas. The second problem is that it is not certain whether this rater regards vocabulary as part of language and/or as part of organisation.

The point here is that in comment A Science Rater 6 underplays language, revealing a lack of appreciation of the full meaning of academic discourse, which involves not only a knowledge of the subject matter and the ability to argue, but also involves academic language proficiency. However, EAP teachers as well often pay scant attention to language, i.e. grammar, spelling and punctuation, which I think should be a crucial criterion of academic discourse.

Science Rater 8

Comment A is addressed to this researcher about Protocols 1 and 2; comment B to the Protocol 1 student.

A. Both essays show glaring spelling and grammatical errors. However as this is a science exercise and the student is presumably being tested on his knowledge of the subject, this should not affect his mark. However, if his ability to write is being judged, then the approach would be different and should be penalised.

Science Rater 8, in Comment A above, distinguishes between “ability to write” and “knowledge of the subject”. These distinctions are very vague. I am not censuring Rater 8, but rather pointing out the problems in trying to distinguish between the three criteria of content, skills and language.

And Comment B of Science Rater 8, which is directed to the student:

B. A good attempt to answer the question. You have related your answer to what is actually asked for.

If we compare Science Rater 8’s score of 70% with the scores of English Rater 5 and English Rater 3 (68% and 80%, respectively), we notice that both English raters thought the student’s language was good, whereas Science Rater 8 described the spelling and grammatical errors as “glaring”, and yet did not take them into account.

Conclusion

The findings show that there is a wide range of scores and judgements within each group and between the two groups of raters, which affects the reliability of the scores. This wide range of opinion shows the difficulties in the testing/teaching of academic discourse, where EAP teachers, if not ignorant of the double connotation of the term academic (scientific) discourse, often do not consider this double connotation to be a problem. The radically different scores and judgements within the Science group are also cause for concern. It is noteworthy that seven of the eight Science lecturers gave negative comments about language in Protocol 1. In my view the language of Protocol 1 was good, and was not significantly affected by a few “missing buttons”.

The fact remains that to ensure high interrater reliability there should only be a narrow range of scores and judgements between raters. This article discussed the reasons for this wide range of variability in terms of the three criteria language, organisation and topic. With regard to these criteria, the following observations stand out in the evaluations:

1. Language. All the Science lecturers, except one, think the language is bad. Of the four English lecturers who gave their opinions on language, three of them thought that the language was good.

2. Organisational skills. English lecturers as well as Science lecturers are radically divided on the issue.

3. Content/Topic. The science content was a problem for EAP lecturers. However, the Science lecturers, in several instances, also did not agree on the accuracy of the content in the protocols.

Scientists and EAP teachers can help one another to improve the quality of the teaching and testing of academic discourse. Both groups need to understand that the successful evaluation of scientific discourse can only be achieved with an adequate understanding of the relationship between language, skills and content.

References

Alderson, J.C. & Clapham, C. 1992. Applied linguistics and language testing: A case study of the ELTS test. Applied Linguistics, 13(2):149-167.

Angelil-Carter, S. 1994. The adjunct model of content-based language learning. South African Journal of Higher Education (SAJHE), 3(2):9-14.

Bialystok, E. 1978. A theoretical model of second language learning. Language Learning, 28(1):69-84.

Biggs, C. 1982. In a word, meaning. In: Crystal, D. (ed.). Linguistic controversies. London: Edward Arnold.

Bley-Vroman, R. 1990. The logical problem of foreign language learning. Linguistic Analysis, 20(1-2):3-49.

Bolinger, D. 1965. The atomization of language. Language, 41:555-573.

Botha, H.L. & Cilliers, C.D. 1993. Programme for educationally disadvantaged pupils in South Africa: A multi-disciplinary approach. South African Journal of Education, 13(2):55-60.

Bradbury, J., Damerell, C., Jackson, F. & Searle, R. 1990. ESL issues arising from the “Teach-test-teach” programme. In: Chick, K. (ed.). Searching for relevance: Contextual issues in applied linguistics. South African Applied Linguistics Association.

Brown, K. 1984. Linguistics today. Suffolk: Fontana Paperbacks.

Carmines, E.G. & Zeller, R.A. 1979. Reliability and validity assessment. Beverly Hills, California: Sage Publications.

Child, J. 1993. Proficiency and performance in language testing. Applied Linguistic Theory, 4(1/2):19-54.

Chomsky, N. 1965. Aspects of the theory of syntax. Cambridge, Massachusetts: M.I.T. Press.

Du Toit, A. & Orr, M. 1989. Achiever’s Handbook. Johannesburg: Southern Book Publishers.

Ebel, R.L. & Frisbie, D.A. 1991. Essentials of educational measurement. (5th ed.) Englewood Cliffs, New Jersey: Prentice Hall.

HSRC. 1981. Language teaching: Report of the Committee of University Principals. Pretoria: Human Sciences Research Council (HSRC).

Hughes, A. 1989. Testing for language teachers. Cambridge: Cambridge University Press.

Hutchinson, T. & Waters, A. 1987. English for specific purposes: A learning-centred approach. Cambridge: Cambridge University Press.

Lamb, S.M. 1984. Semiotics of language and culture. In: Fawcett, R.P., Halliday, M.A.K., Lamb, S.M. & Makkai, A. 1984. The semiotics of culture and language, Vol. 2. London: Frances Pinter (Publishers).

Langacker, R. 1987. Foundations of cognitive grammar, Vol. 1: Theoretical preliminaries. Stanford: Stanford University Press.

Leech, G. 1981. Semantics. Harmondsworth, Middlesex: Penguin.

Mandler, J.M. 1984. Stories, scripts and scenes: Aspects of schema theory. Hillsdale, New Jersey: Lawrence Erlbaum Associates.

Mathews, J.H. 1964. The need for enrichment of concept and methodology in the study of reading. In: Thurstone, E.L. & Hafner, L.E. (eds.). New concepts in college-adult reading: Thirteenth yearbook of the National Reading Conference. Milwaukee, Wisconsin: The National Reading Conference, Inc.

McIntyre, S.P. 1992. Language learning across the curriculum: A possible solution to poor results. Popagano, 9-10 June, Mmabatho.

Murray, S. 1990. Teaching English for Academic Purposes (EAP) at the University of Bophuthatswana. South African Journal of Higher Education, Special Edition.

Murray, S. & Johanson, L. 1989. EAP series for Southern Africa. Randburg: Hodder & Stoughton.

Porter, D. 1983. Assessing communicative proficiency: The search for validity. In: Johnson, K. & Porter, D. (eds.). Perspectives in communicative language teaching. London: Academic Press.

Snow, M.A. & Brinton, D.M. 1988. The adjunct model of language instruction: An ideal EAP framework. In Benesch, S. (ed.). Ending remediation: Linking ESL and content in higher education. PLACE: PUBLISHER.

Spack, R. 1984. Invention strategies and the ESL college composition student. TESOL Quarterly, 18(4):649-670.

Starfield, S. 1990. Contextualising language and study skills. South African Journal of Higher Education (SAJHE), Special Edition.

Starfield, S. & Kotecha, P. 1991. Language and learning: The Academic Support Programme’s intervention at the University of the Witwatersrand. Paper presented at the South African Applied Linguistics Association Conference, July.

Taylor, J.R. 1989. Linguistic categorization. Oxford: Oxford University Press.

Vollmer, H.J. 1983. The structure of foreign language competence. In: Hughes, A. & Porter, D. (eds.). Current developments in language testing. London: Academic Press.

Young, D. 1987. A priority in language education: Language across the curriculum in black education. In: Young, D. & Burns, R. (eds.). Education at the crossroads. Rondebosch: University of Cape Town.

Appendix

Title of essay

“Discuss how climatic changes brought about by the Greenhouse Effect are likely to affect the world’s plant and animal species.”

Protocol 1

Climatic changes are true result of the greenhouse gases. These gases are concentrated in the atmosphere and they block the solar heat that travels back to space. They cause climatic changes and even changes in pattern of rainfall. These gases are carbon-dioxide, methane, chlorofluorocarbons, nitrous oxide and water vapour. The consequence of this gases also caused disastrous events like flood, drought and hurricanes.

The rise in temperatures are likely to affect the manner in which plants and animals reproduce. It will disrupt the way in which plants flowers and fruit. In dry seasons, some plants might not be able to fruit. This will cause shortage of food to animals and they eventually die because of hunger. Some of the plants do not grow under extremely high temperatures this will also lead to the destruction of the forest trees.

Some animals reproduce their offsprings according to how the climate is. The turtle for example, produce females when it is warmer and males when it is cooler. If the climate has changed, it will bring about the imbalance of sexes among certain animals. Animals like elephants alter their behaviour according to how wet or dry it is. If it is wet they gather together and the superior bull have a tendency to mate with all females thus transmitting its superior genes to those females. When it is dry they scatter and begin to live in swampy areas. The subordinate bull can now mate with female and transmitting its inferior genes.

The first dangerous effect of climatic change is over the Arctic. The sea ice of the Arctic is essential for animals that live and feed of it, e.g. walruses and the animals that migrate across it. Animal like the corals live near the coast of the seas. If the sea-level rises, the corals might not be able to cope up and this will bring to the total destruction of the corals.

According to the scientists warming is likely to occur more at latitudes near the poles but the tropics which are already hot will warm slightly.. Many animals will migrate to the north where they will not experience severe greenhouse effect e.g. prevailing winds, drought, hurricanes and floods. People living in the north will be able to have more crop yield. And they will be able to help people who are experiencing the changing of the climate.

In order to stop this, people should stop producing gases that contribute to the greenhouse. Trees should be planted in order to reduce certain amount of carbon dioxide. They should introduce a special tax for the emission of the carbon dioxide. In order to eliminate warming that can cause climate change deforestation and pollution should be stopped. The production should be stopped. People should make use of the low CO2 emitting sources.

 
