Native and Non-native Teachers’ Pragmatic Criteria for Rating Request Speech Act: The Case of American and Iranian EFL Teachers

Document Type: Research Article

Authors

1 West Tehran Branch, Islamic Azad University

2 Sharif University of Technology

Abstract

Over the last few decades, several aspects of pragmatic knowledge and its effects on teaching and learning a second language (L2) have been explored in many studies. However, the area of interlanguage pragmatics (ILP) assessment is still quite new, and many of its features have remained unexamined. As ILP assessment has received more attention recently, investigating EFL teachers’ criteria for rating various speech acts has become important. In this respect, the present study aimed to investigate native and non-native EFL teachers’ rating scores and criteria regarding the speech act of request. To this end, 50 American ESL teachers and 50 Iranian EFL teachers rated EFL learners’ responses to video-prompted Discourse Completion Tests (DCTs) targeting the speech act of request. Raters were asked to score the EFL learners’ responses and to state the criteria they used in their assessment. Content analysis of the raters’ comments revealed nine criteria that they considered in their assessments. Moreover, t-test and chi-square analyses of the raters’ scores and criteria showed significant differences between the native and non-native EFL teachers’ rating patterns. The results also shed light on the importance of sociopragmatic and pragmalinguistic features in native and non-native teachers’ pragmatic rating, which has several implications for L2 teachers, learners, and materials developers.
Keywords

Interlanguage pragmatic rating, rating criteria, request, discourse completion test


Introduction

Generally, the study of second language learners’ pragmatic ability is called interlanguage pragmatics (ILP). The prevalence of speech act studies in pragmatics is undeniable. Such studies are carried out both in acquisitional research, which deals with EFL learners’ developmental issues, and in comparative research, which is dominated by cross-cultural studies (Alcon-Soler & Martinez-Flor, 2008). Another categorization associated with pragmatic studies is Leech’s (1983) and Thomas’s (1983) division of language knowledge into pragmalinguistics and sociopragmatics, which deal with the linguistic and social dimensions of pragmatic knowledge respectively. For example, in conceptualizing a speech act, knowing the various linguistic resources that can be employed to produce it reflects pragmalinguistic understanding, while attending to social norms and politeness concerns in a given situation reflects sociopragmatic knowledge.

Among the many speech acts, request is one of the most prominent, owing to its important role in everyday life; as Blum-Kulka, House, and Kasper (1989) put it:

By making a request, the speaker infringes on the recipient’s freedom from imposition. The recipient may feel that the request is an intrusion on his/her freedom of action or even a power play. As for the requester, s/he may hesitate to make requests for fear of exposing a need or out of fear of possibly making the recipient lose face. (p. 11)

As can be seen, request is a critical speech act: it is directive in nature according to Searle’s (1976) taxonomy and usually creates face-threatening conditions; therefore, it should be exchanged between interlocutors with great care (Brown & Levinson, 1987; Searle, 1976).

Various studies (e.g., Blum-Kulka, House, & Kasper, 1989; Jalilifar, Hashemian, & Tabatabaee, 2011; Woodfield, 2008) have analyzed different aspects of the production of this speech act; however, few studies have addressed its assessment, and specifically ILP rating (e.g., Taguchi, 2011). Therefore, this study aimed to shed light on the ILP rating patterns of native and non-native EFL teachers, which has various implications for EFL teachers, materials developers, and learners.

 

Review of the Related Literature

A request usually occurs when the speaker (requester) expresses his or her wants to the hearer (requestee) and wants the hearer to do something for the speaker’s benefit (Trosborg, 1995). According to Yule (1996), like other speech acts, request can be realized through both off-record and on-record means. Other categorizations have been proposed for the request speech act; it can be either direct or indirect, and three levels are commonly distinguished: (a) direct, (b) conventionally indirect, and (c) non-conventionally indirect (Blum-Kulka et al., 1989). Indirect requests are associated with more polite situations, while direct request forms denote impoliteness (Brown & Levinson, 1987; Leech, 1983). Table 1, based on the works of Blum-Kulka and Olshtain (1984), Blum-Kulka (1989), Blum-Kulka, House, and Kasper (1989), Trosborg (1995), and Van Mulken (1996), presents this request categorization.

Table 1. Request Categorization (individual strategies in declining directness)

Direct Requests
1. Imperatives
2. Hedged/Unhedged performatives
3. Locution derivables
4. Want statements

Conventionally Indirect Requests
1. Suggestory formula
2. Temporal availability
3. Prediction
4. Permission
5. Willingness and ability

Non-Conventionally Indirect Requests
1. Strong hints
2. Mild hints
 

Moreover, it is important to note that direct and indirect forms of request are realized through various means and, despite some universal characteristics, these means are not consistent across languages (see Blum-Kulka et al., 1989; García, 1993; Takahashi & DuFon, 1989; Walters, 1979; Wierzbicka, 2003). Blum-Kulka and Olshtain (1984) analyzed the similarities and differences across several languages (English, French, Danish, German, Hebrew, Japanese, and Russian). They divided the request into segments such as address terms, the head act, and adjuncts to the head act; moreover, they found support for three hypothesized universal features, namely (a) the existence of non-central parts such as internal and external modification, (b) the choice between direct and indirect requests, and (c) the three degrees of indirectness mentioned above.

Studies on the request speech act are not limited to cross-cultural comparisons; there are also a number of interlanguage pragmatic studies on EFL learners’ competence in performing this very speech act (see Barron, 2012; Economidou-Kogetsidis, 2011; Kasper & Schmidt, 1996). For example, Woodfield’s (2008) comparison of Japanese and German EFL learners with native speakers revealed that, like native speakers, the learners tended to employ indirect strategies more than direct ones. Unlike native speakers, however, they used fewer internal modifications and more direct strategies in their requests.

Moreover, various interlanguage pragmatic studies on the speech act of request have been conducted in the Iranian context. Jalilifar, Hashemian, and Tabatabaee (2011), in their study of Iranian EFL learners’ request strategies, highlighted some of the learners’ habits in requesting, such as high-level learners’ overuse of indirect forms and low-level learners’ overuse of direct forms.

With the growing role of pragmatic competence in L2 teaching, some insights into pragmatic assessment have been shared and several methods of assessing speech acts, including request, have been recommended. Role plays, discourse completion tasks (DCTs), multiple-choice discourse completion tasks (MDCTs), video and picture prompts, and self-assessment are among the testing techniques developed for the pragmatic assessment of the speech acts of request, apology, and refusal, built around contextual factors such as power, social distance, and imposition (Brown, 2001; Brown & Levinson, 1987; Hudson, Detmer, & Brown, 1995). Roever (2005, 2006, 2007, 2008, 2010) developed various pragmatic testing techniques for this set of speech acts; unlike previous techniques, which dealt mainly with politeness, his methods focused on implicatures and routine formulas and were free from bias toward test takers’ L1 backgrounds.

Among assessment considerations, rater-related concerns are important, since rater facets can lead to unreliable evaluations and cause (a) severity or leniency effects, (b) the halo effect, (c) the central tendency effect, (d) inconsistencies, and (e) test bias (Bachman, 1990; Eckes, 2005; Myford & Wolfe, 2003). One interesting line of research on rater behavior concerns the variation between native English speaker (NES) and non-native English speaker (NNES) teachers’ ratings of L2 learners. Studies in this area have produced contradictory results: for example, Barnwell (1989) found that native raters are more severe, while others, such as Fayer and Krasinski (1987), claimed the opposite.

Moreover, based on observations of foreign language classes and raters’ behavior, several assumptions about language teachers’ considerations in L2 rating have been proposed. For instance, Alcon-Soler and Martinez-Flor (2008) noted that most studies and teachers focus on the pragmalinguistic aspect of language while neglecting or undermining sociopragmatic features. However, Alemi, Eslami-Rasekh, and Rezanejad (2014) observed that EFL teachers attended to both pragmalinguistic and sociopragmatic aspects of language.

Regarding rater variation in pragmatic assessment, Taguchi (2011) investigated native Japanese teachers’ assessment of request and opinion speech acts. Several similarities and differences were found in the raters’ use of pragmatic and social norms in evaluating: some of them mainly considered linguistic forms, whereas others focused on non-linguistic factors. Raters also differed in their degree of sensitivity to different types of errors. In addition, Youn (2007) analyzed native Korean teachers’ assessment of non-native Korean learners’ pragmatic knowledge, and the results showed inconsistent severity in the Korean raters’ ratings of the intended speech act. Alemi and Tajeddin’s (2013) study of NES and NNES teachers’ rating of EFL learners’ refusal production revealed some major differences between these two groups of raters, and similar results were found in an analysis of Iranian EFL learners’ compliments and compliment responses (Rezanejad, 2013). However, none of the previous studies on interlanguage pragmatic rating targeted NES and NNES teachers’ assessment of the request speech act; therefore, the present investigation aimed to do so by addressing the following research questions:

1. What are the criteria used in NES and NNES rating of EFL learners’ production of the speech act of request?

2. Is there any significant difference between NES and NNES raters in the rating scores and criteria they mention for the speech act of request produced by EFL learners?

 

Methods

Participants

Participants of this study included 50 Iranian NNES and 50 American NES raters who rated the DCTs. The NNES raters were EFL teachers teaching English in various language institutes and universities in Iran, and the NES raters were American teachers of English teaching in different parts of the world. Since access to NES raters was nearly impossible within the context of Iran, they were recruited through LinkedIn, a professional social network. The raters included both genders, with different teaching backgrounds and levels of experience. Another group in this study consisted of 12 Iranian EFL learners from a language institute in Tehran, who produced answers to the prepared video prompts. The EFL learners were chosen from the upper-intermediate level, since they had to be capable of understanding the request video prompts and producing the intended speech act.

Instrumentation

Video-prompted DCTs were the main instrument of this study. Compared with written methods, video-prompted DCTs are a superior means of eliciting pragmatic production, as they capture various aspects of interlanguage interaction: they include both verbal and non-verbal properties of the conversation and elicit the intended speech act from the EFL learners more naturally (Yamashita, 2008). The six video prompts employed in this study were extracted from American movies and TV series and covered various contextual variables, such as different degrees of power, familiarity, and imposition among the interlocutors. Twelve upper-intermediate EFL learners who could comprehend the video prompts were asked to watch them and respond to each situation. Finally, some of the EFL learners’ responses were selected on the basis of their relevance to the purpose of the study and were transcribed along with their specific situations, similar to completed WDCTs. The selected responses were typical instances of Iranian EFL learners’ request productions, with their characteristic pragmatic failures and deficiencies. Moreover, a five-point Likert scale and a blank space were placed under each answer for the raters to assign a score and state their criteria.

Data collection procedure

The EFL learners were asked to watch six request video prompts and respond to each of them. The video-prompt situations and the responses were later transcribed by the authors, and each situation was then paired with a single answer from the EFL learners. The final completed video-prompted DCTs were distributed among 50 American and 50 Iranian EFL teachers, who rated each answer for pragmatic appropriateness on a 1-5 Likert scale and stated their rating criteria for each answer.

Data analysis

The study was intended to find out the differences between NES and NNES raters’ rating criteria in their pragmatic assessment of the request speech act; therefore, a mixed-method analysis was employed, consisting of both qualitative and quantitative phases. In the qualitative phase, the raters’ responses were subjected to content analysis to identify the various criteria they had mentioned in their comments. In the quantitative phase, the frequency of each identified criterion was calculated in order to identify the major criteria, and then a t-test on the rating scores and a chi-square test on the criteria frequencies were computed to estimate the differences between NES and NNES raters’ judgments.
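For readers who want a concrete picture of the quantitative phase, the short sketch below illustrates the two tests with SciPy. It is illustrative only: the score and frequency arrays are placeholders rather than the study’s data, and the exact tabulation of the criteria (and therefore the degrees of freedom) depends on how the frequency table is laid out.

```python
# Illustrative sketch of the quantitative phase (not the authors' code).
# The arrays below are placeholders, not the study's data.
import numpy as np
from scipy import stats

# Hypothetical 1-5 Likert rating scores, one value per rater.
nes_scores = np.array([5, 4, 4, 5, 3, 4, 5, 4, 4, 3])
nnes_scores = np.array([4, 3, 3, 4, 2, 3, 4, 3, 3, 2])

# Independent-samples t-test comparing the two groups' mean scores.
t_stat, t_p = stats.ttest_ind(nes_scores, nnes_scores)
print(f"t = {t_stat:.2f}, p = {t_p:.3f}")

# Hypothetical contingency table of criterion mentions:
# rows = rater group (NES, NNES), columns = rating criteria.
criterion_counts = np.array([
    [20, 95, 30, 15, 20, 55, 40, 80, 45],  # NES
    [25, 90, 10,  5, 25, 45, 12, 40, 95],  # NNES
])
chi2, chi_p, dof, expected = stats.chi2_contingency(criterion_counts)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {chi_p:.3f}")
```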

 

Results

Request rating criteria

To address the first research question, the native and non-native raters’ comments on the EFL learners’ responses to the video-prompted DCTs were analyzed carefully, and the following nine criteria, covering both sociopragmatic and pragmalinguistic aspects, were found to be prominent in the raters’ judgments. Each criterion, its definition, and sample NES and NNES comments on the EFL learners’ requests are given in Table 2.

 

Table 2. Native and Non-Native Raters’ Rating Criteria Regarding the Assessment of Request Speech Act

Criterion: Directness and Indirectness
Definition: Based on the situation and its contextual requirements, speakers might prefer to request directly or indirectly, or sometimes only through hints, without uttering the request explicitly.
NES: The comment should also refer to the music directly rather than as ‘it’, since the manager may not understand immediately.
NNES: this way is not acceptable, especially because of asking indirectly.

Criterion: Politeness
Definition: Due to its variability across cultures, politeness is an extremely controversial issue. Politeness considerations have to take into account the social distance, dominance, and degree of imposition between the interlocutors in a specific situation.
NES: Maybe, a more careful and polite request would be appropriate here - it is your superior you are talking to!
NNES: It is a polite request. I would not ask her rudely either.

Criterion: Linguistic Appropriacy
Definition: This criterion refers to grammatical and vocabulary mistakes in EFL learners’ productions.
NES: The sentence fragment and is poor grammar.
NNES: There are some grammatical errors. There is no need to use question mark at the end of sentence.

Criterion: Authenticity and Cultural Anomalies
Definition: This refers to utterances that are linguistically correct but pragmatically wrong; such responses seem odd and unnatural according to the target culture’s social norms.
NES: Americans generally are not so aggressive in their requests. The Speakers draw attention to themselves by apologizing (this is less common in North America, where speakers are as likely to say ‘Hello’ or some equivalent greeting), which implicates an awareness that the speaker is interrupting or taking the hearer away from some other task.
NNES: it's near to native speaker's use when facing such situations.

Criterion: Explanation
Definition: In order to make a request comprehensible, or to indicate that it is not out of place or illegitimate, it is often necessary to give a brief explanation before requesting. Vague and abrupt explanations are considered rude in some circumstances and might not lead to the addressee’s approval.
NES: It would be better to explain why you want the music turned down. While it’s not always necessary to offer a reason, when you’re dealing with your boss, you want him to know that you’re not being picky just because you can.
NNES: The response could have been made more appropriate by explaining the reason behind this request.

Criterion: Appropriate Alternatives
Definition: This represents comments in which raters give examples of an appropriate form of the speech act, highlighting its essential moves and strategies.
NES: Say something like “I would really appreciate your kindness because it would really help right now if you would do me the favor of lending me the book. I will return it as soon as I am done”.
NNES: she can have an introduction to her talk to the manager. Something like, “Sorry sir, I’m studying and I’ll have an exam tomorrow. I talked about it today. Loud music normally irritates me. Do you mind turning it a bit down?” I will highly appreciate it.

Criterion: Query Preparatory and Softeners
Definition: To decrease the intensity of the request, it is appropriate to employ modal softeners that query the listener’s ‘ability’ or ‘willingness’, or to add ‘please’ or expressions such as ‘would you mind’.
NES: The speaker gives a reason for the request. The request is appropriately softened by ‘wondering’ and ‘could’.
NNES: The word ‘please’ can be used at the end of the sentence as an alternative.

Criterion: Formality and Social Status
Definition: Different social statuses and relationships among interlocutors demand different degrees of formality. For example, using colloquial and informal language with a friend is acceptable, but employing such language with a professor, or someone in a higher position than you, is inappropriate.
NES: If the friend is close the phrase sounds too formal.
NNES: Totally appropriate because you understand the social status of the manager and the distance between each other. It is formal and polite.

Criterion: Register Choice
Definition: Each situation demands specific word choices and conventions, and it is sometimes hard for EFL learners to realize which register should be used.
NES: It is an appropriate register for the situation.
NNES: I think it is inappropriate to talk like that in this situation. The register should be in tuned with the context.

 

As Table 3 shows, there are some differences between NNES and NES raters’ use of the nine criteria. The most dominant criterion for NES raters was politeness, while for NNES raters the major criterion was formality and social status. The occurrence and dominance of the other criteria across the six situations were not fully compatible between the two groups, and neither was the length of the raters’ comments. In descending order of frequency, the American NES raters drew on politeness, query preparatory and softeners, explanation, formality and social status, appropriate alternatives, linguistic appropriacy, directness and indirectness, register choice, and authenticity and cultural anomalies. The Iranian NNES raters showed a different pattern: formality and social status, politeness, explanation, query preparatory and softeners, directness and indirectness, register choice, appropriate alternatives, linguistic appropriacy, and, with the lowest percentage, authenticity and cultural anomalies.

Evidently, the use of query preparatory devices and softeners in requests was important for the NES raters; the presence of ‘please’ (or, as some of them called it, the “magic word”) and of modal softeners was highly frequent in their comments and examples, whereas the NNES raters did not consider this factor as much. Another point worth mentioning is the NNES raters’ insistence on attending to the degree of formality and the social status of the interlocutors; the social status of the addressee seemed significant for them, as they frequently mentioned it in their comments.

Table 3. Frequency of Request Criteria Mentioned by NES and NNES Raters

Situation    Raters' Group   DI      Pol      LA      ACA     RC      Exp     AA      QPS      FSS      Total
1            NES             4       15       7       3       4       18      6       8        8        73
             NNES            2       9        3       3       1       8       1       2        29       58
2            NES             5       18       6       4       3       23      10      15       13       97
             NNES            3       16       2       1       3       15      3       7        23       73
3            NES             2       16       4       3       4       2       3       14       11       59
             NNES            2       14       2       1       7       0       1       10       11       48
4            NES             1       27       8       2       9       6       12      14       11       90
             NNES            5       23       1       1       4       4       0       5        26       69
5            NES             5       13       6       3       1       8       6       28       5        75
             NNES            2       12       3       0       6       19      4       10       4        60
6            NES             8       27       3       4       2       7       6       14       2        73
             NNES            14      29       1       0       7       3       6       9        12       81
Total        NES             25      116      34      19      23      64      43      93       50       467
             NNES            28      103      12      6       28      49      15      43       105      389
Percentage   NES             5.35%   24.84%   7.28%   4.06%   4.92%   13.7%   9.2%    19.91%   10.7%    100%
             NNES            7.19%   26.47%   3.08%   1.5%    7.19%   12.6%   3.85%   11.05%   26.99%   100%

Note. DI = Directness and Indirectness; Pol = Politeness; LA = Linguistic Appropriacy; ACA = Authenticity and Cultural Anomalies; RC = Register Choice; Exp = Explanation; AA = Appropriate Alternatives; QPS = Query Preparatory and Softeners; FSS = Formality and Social Status.

 

 

In the next phase, Table 4 below displays the two groups’ descriptive statistics in order to identify the variation in rating scores. As depicted in Table 4, the NES raters awarded more lenient scores than their counterparts in the first, second, third, and last situations. The highest mean for both groups of raters belonged to the third situation, while the lowest mean was for the fourth situation among NES raters and for the sixth situation among NNES raters. Table 4 also shows the degree of variation in the raters’ scores: the standard deviations are higher among NNES raters in all situations except the last, which indicates a lack of convergence among the NNES raters.

Table 4. Descriptive Statistics of Ratings by NES and NNES Raters for Request

Situation    Raters' Group   N     Mean    SD
1            NES             50    4.58    0.75
             NNES            50    3.74    1.32
2            NES             50    3.82    0.94
             NNES            50    2.92    1.2
3            NES             50    4.74    0.52
             NNES            50    4.1     1.18
4            NES             50    1.8     0.9
             NNES            50    2.14    1.04
5            NES             50    3.3     0.9
             NNES            50    3.6     0.96
6            NES             50    1.9     0.98
             NNES            50    1.76    0.98
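As a side note, the per-situation, per-group means and standard deviations of the kind reported in Table 4 amount to a routine grouped aggregation; the sketch below illustrates this with a small, purely hypothetical set of ratings (not the study’s data).

```python
# Minimal sketch (hypothetical data, not the study's) of how descriptive
# statistics like those in Table 4 can be computed: mean and SD of rating
# scores per situation and per rater group.
import pandas as pd

# Each row: one rating of one situation by one rater.
ratings = pd.DataFrame({
    "situation": [1, 1, 1, 1, 2, 2, 2, 2],
    "group":     ["NES", "NES", "NNES", "NNES", "NES", "NES", "NNES", "NNES"],
    "score":     [5, 4, 4, 3, 4, 4, 3, 2],
})

descriptives = (
    ratings
    .groupby(["situation", "group"])["score"]
    .agg(N="count", Mean="mean", SD="std")
    .round(2)
)
print(descriptives)
```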

 

NES and NNES raters’ different rating scores and criteria use

The second research question was posed to discover the differences between the Iranian NNES raters and the American NES raters in rating the requests. Therefore, an independent-samples t-test was run to compare the two groups’ request rating scores. As illustrated in Table 5, the results of the t-test indicate a significant difference between the rating scores of the American and Iranian raters for the EFL learners’ request productions (t(97) = 2.94, p = .004).

 

Table 5. Independent-Samples t-test of NES and NNES Raters' Scores (Total)

                               Levene's Test for Equality of Variances    t-test for Equality of Means
                               F          Sig.                            t         df     Sig. (2-tailed)
Equal variances assumed        2.027      .158                            2.944     97     .004

 

Finally, to determine whether there was any significant difference in the criteria employed by NES and NNES raters, a chi-square test was run. As Tables 6 and 7 indicate, the result of the chi-square test (χ2(17) = 210.84, p < .05) shows a significant difference between NES and NNES raters’ use of criteria in rating the requests.

Table 6. Chi-Square Test of NES and NNES Raters' Criteria

                                  Value       df    Asymp. Sig. (2-sided)
Pearson Chi-Square                210.843a    17    .000
Likelihood Ratio                  252.002     17    .000
Linear-by-Linear Association      40.741      1     .000
N of Valid Cases                  849

a. 6 cells (16.7%) have expected count less than 5. The minimum expected count is .92.

 

Table 7. Symmetric Measures for the Chi-Square of NES and NNES Raters' Criteria

                                   Value    Approx. Sig.
Nominal by Nominal   Phi           .498     .000
                     Cramer's V    .498     .000
N of Valid Cases                   849
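As a quick arithmetic check on Table 7 (illustrative only, not part of the original analysis), Phi and Cramér’s V follow directly from the chi-square value and the number of valid cases in Table 6, using the standard formulas Phi = √(χ²/N) and V = √(χ²/(N(k − 1))), where k is the smaller dimension of the contingency table; with two rater groups, k − 1 = 1 and the two measures coincide.

```python
# Illustrative check: recover the effect sizes in Table 7 from the
# chi-square value and N of valid cases reported in Table 6.
import math

chi_square = 210.843  # Pearson chi-square (Table 6)
n_valid = 849         # N of valid cases (Table 6)
k_min = 2             # smaller dimension of the contingency table (two rater groups)

phi = math.sqrt(chi_square / n_valid)
cramers_v = math.sqrt(chi_square / (n_valid * (k_min - 1)))

print(f"Phi = {phi:.3f}")               # ~ .498, matching Table 7
print(f"Cramer's V = {cramers_v:.3f}")  # equal to Phi here since k_min - 1 = 1
```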

 

 

Discussion

Similar to other types of assessment, pragmatic assessment is influenced by several factors such as the test task, rater characteristics, and rating criteria. The present investigation was conducted to explore the ILP rating criteria that American and Iranian EFL teachers apply while rating the request speech act. In answer to the first research question, nine criteria were extracted from the raters’ comments.

The EFL teachers’ criteria included both sociopragmatic and pragmalinguistic categories for both NES and NNES raters. For example, criteria such as formality and social status or politeness belong to the sociopragmatic category, while linguistic appropriacy or query preparatory and softeners fit into the pragmalinguistic aspect of language. Contrary to what might have been expected, the majority of the criteria belonged to sociopragmatic features. Some of the criteria, such as politeness, explanation, and directness, have been mentioned in previous studies (Alemi, Eslami-Rasekh, & Rezanejad, 2014; Alemi & Tajeddin, 2013), since they are important not only for producing the request speech act but also for performing other speech acts, unlike a criterion such as query preparatory and softeners, which is more specific to requests.

Obviously, there were some differences between NES and NNES teachers’ scores and preferred criteria, which might be due to various reasons such as their different cultural and social backgrounds. NNES raters also mentioned fewer criteria than NES raters, as observed in previous studies (see Alemi & Tajeddin, 2013; Rezanejad, 2013), which could stem from gaps in NNES raters’ pragmatic knowledge. For example, the Iranian EFL teachers were inclined to mention one or two criteria in each situation, while the American EFL teachers stated their criteria more precisely and in more detail. Another noticeable discrepancy between the NES and NNES raters concerns the Iranian raters’ failure to draw on softeners and query preparatory devices; according to Blum-Kulka and Olshtain’s (1984) categorization, query preparatory, which was mentioned abundantly by the NES raters, is one of the major strategies in requesting.

Moreover, the analysis of the mean rating scores indicated that the NNES raters rated the EFL learners’ answers more severely than the NES raters in four of the six situations. This supports the claim about NNES teachers’ severity made previously by Barnwell (1989), although Alemi and Tajeddin’s (2013) study of NES and NNES raters’ assessment of the refusal speech act found the opposite. It was also shown, based on the standard deviations of each group in each situation presented in Table 4, that the variation in ratings among NNES raters is more noticeable than among NES raters, which indicates greater divergence among the NNES teachers.

The deficiencies in the NNES raters’ evaluations and the significant difference between the NNES and NES raters highlight the inadequacy of some NNES teachers’ pragmatic knowledge and the necessity of developing teacher training courses, especially pragmatic training for NNES teachers (see Alemi, 2012; Bardovi-Harlig & Hartford, 1997; Eslami-Rasekh, 2005; Rose, 2005). Of course, it has to be mentioned that such inconsistencies between NES and NNES raters are not limited to Iranian teachers, as Taguchi (2011) has pointed out.

The importance of teaching pragmatic knowledge to Iranian EFL learners is also quite clear, as most of their teachers are not trained to teach and rate English pragmatic performance and cannot transfer this knowledge to their students. In addition, both groups of raters showed sensitivity to the pragmatic deficiencies of the EFL learners’ responses rather than to their linguistic mistakes. As Eslami-Rasekh and Eslami-Rasekh (2008) claimed in their study, the effect of pragmatic instruction and awareness is undeniable in L2 teaching programs.

 

Conclusion

This study revealed nine different criteria that NNES and NES EFL teachers considered while rating Iranian EFL learners’ DCT responses for the request speech act. The derived criteria were employed to different extents by the raters, with politeness being the leading criterion for NES raters and formality and social status for NNES raters.

Furthermore, it was revealed that the raters’ use of each criterion was not consistent across the different situations: some criteria were present in all the situations, while others were not mentioned at all in some of them. Moreover, the results indicated a significant difference between NES and NNES raters’ patterns of scoring and of employing the criteria, which points to NNES raters’ deficiencies in rating L2 pragmatic output. Furthermore, the importance of both sociopragmatic and pragmalinguistic awareness, besides linguistic accuracy, in producing pragmatic output became quite obvious, although perceptions of these features were not consistent across all raters.

Generally, in this study, the NNES teachers were more severe in their ratings, assigning lower scores to the EFL learners’ request productions than the NES teachers did. They also mentioned some of the important criteria noted by the NES raters less often, and they showed more variation in their ratings of the DCTs. The inconsistencies observed in the Iranian NNES raters’ ratings, evident in the standard deviations, together with the inappropriacy of the EFL learners’ responses, point to the need to teach L2 pragmatic conventions and norms to both EFL learners and teachers. Appropriate pragmatic training courses for EFL teachers, who are often deprived of authentic material and input, could lead to more convergent and consistent evaluation and teaching and, consequently, to pragmatically knowledgeable language learners.

Furthermore, the present investigation has several implications that underline the necessity of pragmatic knowledge in EFL curricula. Given the raters’ scores and the observed pragmatic shortcomings of the EFL learners’ productions, it is urgent to teach L2 pragmatics in language classes. Such teaching can only be influential when it is supported by appropriate textbooks and syllabi that are rich in authentic samples of the target language in various situations, so materials developers should be cognizant of the different social norms of the target society and the learners’ culture. As a result, inserting pragmatic knowledge into the L2 educational system of Iran, where English is taught as a foreign language and the culture is socially distinct from the American context, will require several changes in educational policy.

References

Alcon-Soler, E., & Martinez-Flor, A. (Eds.). (2008). Investigating pragmatics in foreign language learning, teaching and testing. Clevedon: Multilingual Matters.
Alemi, M. (2012). Patterns in interlanguage pragmatic rating: Effects of rater training, intercultural proficiency, and self-assessment (Unpublished doctoral dissertation). Allameh Tabataba’i University, Tehran, Iran.
Alemi, M., Eslami, Z. R., & Rezanejad, A. (2014). Rating EFL learners’ interlanguage pragmatic competence by non-native English speaking teachers. Procedia-Social and Behavioral Sciences, 98, 171-174.
Alemi, M., & Tajeddin, Z. (2013). Pragmatic rating of L2 refusal: Criteria of native and non-native English teachers. TESL Canada Journal, 30, 63–81.
Bachman, L. F. (1990). Fundamental considerations in language testing. Oxford: Oxford University Press.
Bardovi-Harlig, K., & Hartford, B. S. (1997). Beyond methods: Components of second language teacher education. The McGraw-Hill Second Language Professional Series: Directions in second language learning. New York: McGraw Hill.
Barnwell, D. (1989). “Naive” native speakers and judgments of oral proficiency in Spanish. Language Testing, 6(2), 152–163.
Barron, A. (2012). Interlanguage pragmatics: From use to acquisition to second language pedagogy. Language Teaching, 45(01), 44-63.
Blum-Kulka, S., & Olshtain, E. (1984). Requests and apologies: A cross-cultural study of speech act realization patterns (CCSARP). Applied Linguistics, 5(3), 196-213.
Blum-Kulka, S., House, J., & Kasper, G. (1989). Cross-cultural pragmatics: Requests and apologies. Norwood, NJ: Ablex.
Brown, J. D. (2001). Pragmatics tests: Different purposes, different tests. In K. R. Rose & G. Kasper (Eds.), Pragmatics in language teaching (pp.301-325). New York: Cambridge University Press.
Brown, P., & Levinson, S. (1987). Politeness: Some universals in language usage. Cambridge: Cambridge University Press.
Eckes, T. (2005). Examining rater effects in TestDaF writing and speaking performance assessments: A many-facet Rasch analysis. Language Assessment Quarterly, 2(3), 197-221.
Economidou-Kogetsidis, M. (2011). “Please answer me as soon as possible”: Pragmatic failure in non-native speakers’ e-mail requests to faculty. Journal of Pragmatics, 43(13), 3193-3215.
Eslami-Rasekh, Z. (2005). Raising the pragmatic awareness of language learners. ELT Journal, 59(2), 199-208.
Eslami-Rasekh, Z., & Eslami-Rasekh, A. (2008). Enhancing the pragmatic competence of non-native English-speaking teacher candidates (NNESTCs) in an EFL context. In E. Alcon-Soler & A. Martinez-Flor (Eds.), Investigating pragmatics in foreign language learning, teaching and testing (pp. 178-197). Clevedon: Multilingual Matters.
Fayer, J. M., & Krasinski, E. (1987). Native and nonnative judgments of intelligibility and irritation. Language Learning, 37(3), 313-326.
García, C. (1993). Making a request and responding to it: A case study of Peruvian Spanish speakers. Journal of Pragmatics, 19, 127-152.
Hudson, T., Detmer, E., & Brown, J. D. (1995). Developing prototypic measures of cross-cultural pragmatics (Technical Report 7). Honolulu: University of Hawai‘i, Second Language Teaching and Curriculum Center.
Jalilifar, A., Hashemian, M., & Tabatabaee, M. (2011). A cross-sectional study of Iranian EFL learners' request strategies. Journal of Language Teaching and Research, 2(4), 790-803.
Kasper, G., & Schmidt, R. (1996). Developmental issues in interlanguage pragmatics. Studies in Second Language Acquisition, 18(2), 149-169.
Leech, G. (1983). Principles of pragmatics. London: Longman.
Myford, C. M., & Wolfe, E. W. (2003). Detecting and measuring rater effects using many-facet Rasch measurement: Part I. Journal of Applied Measurement, 4(4), 386-422.
Rezanejad, A. (2013). Rating EFL learners’ interlanguage pragmatic competence by native and non-native English speaking teachers (Unpublished master’s thesis). Sharif University of Technology, Tehran, Iran.
Roever, C. (2005). Testing ESL pragmatics. Frankfurt: Peter Lang.
Roever, C. (2006). Validation of a web-based test of ESL pragmalinguistics. Language Testing, 23(2), 229–256.
Roever, C. (2007). DIF in the assessment of second language pragmatics. Language Assessment Quarterly, 4(2), 165–189.
Roever, C. (2008). Rater, item, and candidate effects in discourse completion tests: A FACETS approach. In A. Martinez-Flor & E. Alcon (Eds.), Investigating pragmatics in foreign language learning, teaching, and testing (pp. 249-266). Clevedon, UK: Multilingual Matters.
Roever, C. (2010). Effects of native language in a test of ESL pragmatics: A DIF approach. In G. Kasper, H. thi Nguyen, D. R. Yoshimi, & J. Yoshioka (Eds.), Pragmatics and language learning (Vol. 12, pp. 187-212). Honolulu, HI: National Foreign Language Resource Center.
Rose, K. R. (1997). Pragmatics in the classroom: Theoretical concerns and practical possibilities. Pragmatics and language learning, 8, 267-292.
Rose, K. (2005). On the effects of instruction in second language pragmatics. System, 33(3), 385-399.
Searle, J. R. (1976). A classification of illocutionary acts. Language in Society, 5(1), 1-23.
Taguchi, N. (2011). Teaching pragmatics: Trends and issues. Annual Review of Applied Linguistics, 31, 289-310.
Takahashi, S., & DuFon, P. (1989). Cross-linguistic influence in indirectness: The case of English directives performed by native Japanese speakers. Unpublished manuscript, University of Hawai’i at Manoa, Honolulu. (ERIC Document Reproduction Service No. ED 370 439)
Thomas, J. (1983). Cross-cultural pragmatic failure. Applied Linguistics, 4(2), 91-112.
Trosborg, A. (1995). Interlanguage pragmatics: Requests, complaints, and apologies. Berlin, Germany: Mouton de Gruyter.
Walters, J. (1979). Strategies for requesting in Spanish and English: Structural similarities and pragmatic differences. Language Learning, 29(2), 277-293.
Wierzbicka, A. (2003). Cross-cultural pragmatics: The semantics of human interaction (2nd ed.). Berlin, Germany: Mouton de Gruyter.
Woodfield, H. (2008). Interlanguage requests: A contrastive study. Developing Contrastive Pragmatics: Interlanguage and Cross-Cultural Perspectives, 31, 231-264.
Yamashita, S. (2008). Investigating interlanguage pragmatic ability. In E. Alcon-Soler & A. Martinez-Flor (Eds.), Investigating pragmatics in foreign language learning, teaching and testing (pp. 201-223). Bristol: Multilingual Matters.
Youn, S. J. (2007). Rater bias in assessing the pragmatics of KFL learners using facets analysis. Second Language Studies, 26(1), 85–163.
Yule, G. (1996). Pragmatics. Oxford: Oxford University Press.