Document Type: Research Article
Authors
1 Ph.D. Candidate in TEFL, Department of English, Faculty of Foreign Languages and Literatures, University of Tehran, Tehran, Iran
2 Associate Professor of Applied Linguistics, Department of English, Faculty of Foreign Languages and Literatures, University of Tehran, Tehran, Iran
Introduction
Teacher cognition refers to the complex mental processes teachers engage in, encompassing their beliefs, knowledge, thoughts, and decision-making related to teaching and learning. By examining and refining their cognitive processes, teachers can better understand their teaching strategies, identify areas for growth, and make informed decisions to improve learners’ learning and academic success. Borg (2003) defined teacher cognition as the knowledge, thoughts, beliefs, and actions of language teachers that include “complex, practically oriented, personalized, and context-sensitive networks of knowledge, thoughts, and beliefs” (p. 81) integral to their teaching. Teachers tend to integrate their personal beliefs and knowledge into their teaching; this integration influences their decision-making and pedagogical approaches within the multifaceted social, cultural, and historical context in which they operate (Johnson, 2006). Despite the extensive investigation into teacher cognition regarding grammar instruction (e.g., Borg & Burns, 2008; Nishimuro & Borg, 2013; Phipps & Borg, 2009), studies focusing on teacher cognition in the EFL writing context, particularly on teacher cognition about writing assessment literacy (WAL), are scarce. This suggests that more studies are needed to understand how teachers’ cognitive processes influence their assessment of learners’ writing ability and what types of assessment strategies and feedback they provide to improve their learners’ writing performance.
Stiggins (1991) advocated for the development of assessment literacy among teachers. Since then, there has been growing interest in exploring the definition, emergence, and application of assessment literacy in language education (Hamp-Lyons, 2016). Assessment literacy refers to stakeholders’ ability to employ assessment tools effectively for learning improvement and grading purposes (Taylor, 2009). Teachers’ assessment literacy involves their knowledge of designing, applying, and evaluating assessment tasks to improve learning and program effectiveness (Webb, 2002). Teachers’ assessment literacy is regarded as a bridge that connects the quality of assessment procedures with the academic accomplishments of learners (Mertler, 2002). When teachers are familiar with diverse assessment types, they are better equipped to select the most suitable and effective assessment tools that align with their learning objectives (Siegel & Wissehr, 2011).
Despite the influence of teachers’ WAL on learners’ outcomes, there is a dearth of research in this area. In particular, research into teachers’ knowledge, beliefs, and practices remains limited (Crusan, Plakans, & Gebril, 2016; Sultanpour & Valizadeh, 2019; Valizadeh, 2019). Furthermore, critical elements of WAL, including assessment strategies and feedback practices, have not been adequately examined. These gaps become even more evident in Iraqi Kurdistan, where, to the best of our knowledge, no study has yet investigated teachers’ WAL. To address these research gaps, the present study employed an exploratory mixed-methods approach to examine teachers’ stated knowledge, beliefs, and practices regarding writing assessment. The study also explored the common assessment strategies employed by teachers and the feedback they provide on learners’ writing.
Review of the Related Literature
Language Assessment Literacy
Assessing learners’ performance is one of the critical responsibilities of language teachers since it impacts all aspects of teachers’ work (Mertler, 2009). Teachers are required to collect reliable evidence of learners’ knowledge to support the learning process (Stiggins, 2014).
In order to effectively assess learners’ performance and support their learning, teachers should develop assessment literacy. The concept of ‘assessment literacy’ was first developed by Stiggins (1991) to refer to the skills and knowledge teachers need to possess to measure and support learning. Popham (2009) considers assessment literacy as “the didactic knowledge a teacher needs to have and an important component of teacher cognition” (p. 4). It refers to teachers’ ability to analyze, understand, and use assessment data to promote learning and instruction; it encompasses the skills, knowledge, and principles that stakeholders need to possess to conduct effective assessment practices (Inbar-Lourie, 2008).
Teachers who are literate in assessment practices know what they assess, why they assess, and how to assess effectively. These teachers are cognizant of the potential negative impacts of assessment and are equipped with strategies to mitigate these adverse effects and prevent their recurrence (Stiggins, 1995). They are familiar with basic concepts of assessment; have acquired skills related to test production, test score interpretation and use, and test evaluation; and understand the roles and functions of assessment within education (Taylor, 2009). This understanding is critical for ensuring that assessment is used to enhance learning rather than serving merely as a measurement tool.
Language assessment literacy has received considerable attention from scholars due to its impact on learners’ academic achievement (Black & Wiliam, 1998). Teachers’ inadequate knowledge of assessment can negatively impact the quality of education (Popham, 2009) and lead to ineffective instruction, inaccurate assessment of learners’ progress, and, ultimately, a failure to provide the targeted support learners need to reach their full potential. Teachers’ familiarity with assessment types helps them make informed decisions about the most appropriate and practical tools that align with their students’ learning objectives (Siegel & Wissehr, 2011).
Despite the growing recognition of the centrality of AL, numerous studies have consistently revealed that AL has not received adequate attention from teachers (Mertler, 2003; Popham, 2009). Teachers’ insufficient attention to AL is particularly evident in the construction of effective assessment methods that align with desired learning outcomes. Without an understanding of assessment principles and techniques, teachers may resort to generic testing methods that do not effectively measure the abilities of their learners (Volante & Fazio, 2007).
Teachers’ Writing Assessment Literacy
In second/foreign language education, the assessment literacy of language teachers has been deemed a critical competency (Scarino, 2013). This ability not only shapes desirable learning outcomes (Mertler, 2009; Weigle, 2007; White, 2009) but also equips teachers with the tools to comprehend, analyze, and utilize data on learners’ performance and improve their instructional strategies (Falsgraf, 2005). To develop assessment literacy, second/foreign language teachers need guidance on essential aspects of assessment, such as scoring, grading, and evaluating learners’ performance (Taylor, 2009; Weigle, 2007; White, 2009). They need to master the skills necessary to select appropriate assessment methods, develop valid assessment tasks, provide constructive feedback, and evaluate teaching and learning outcomes. Developing these skills is crucial for teachers to accurately assess their learners’ performance (Boyles, 2006; DeLuca & Klinger, 2010).
Despite the pivotal role of assessment in influencing learning outcomes, teachers may lack sufficient knowledge in this area (Popham, 2009; Scarino, 2013; Taylor, 2013; Weigle, 2007; White, 2009). White (2009) remarks, “while research indicates that teachers spend as much as one quarter to one-third of their professional time on assessment-related activities, almost all do so without the benefit of having learned the principles of sound assessment” (p. 6). Teachers’ inadequate knowledge may stem from several sources. One is the inherent complexity of writing assessment: evaluating learners’ writing requires careful consideration of multiple factors, including content knowledge, organization, grammar, and style, and this multifaceted process can leave some teachers feeling overwhelmed or unsure of their abilities (Weigle, 2007). Second, teachers may have an aversion to writing assessment (Weigle, 2007) or fail to recognize it as a fundamental component of their teaching responsibilities (Hamp-Lyons, 2003). Consequently, they may not adopt a systematic approach to assessment and may even avoid assessment activities or carry out assessments without reflection. Finally, many teachers believe that assessment is not part of teaching practice and should be handled by external stakeholders.
It is critical to challenge the beliefs teachers hold about writing assessment, since teachers are actively involved in assessing their learners’ performance and providing them with a fair evaluation (Hamp-Lyons, 2003). One major advantage of teacher involvement in the assessment process is their ability to align assessments with the learning objectives and the teaching methods used in the classroom (McMillan, 2001). This alignment can ensure that assessment goes beyond simply measuring learning: it evaluates learners’ understanding of what they have learned and their ability to apply that knowledge in various contexts (Brookhart, 2003). Teacher-led assessments can also promote a culture of continuous feedback and improvement. Teachers can use assessment results to identify the areas where learners are struggling and modify their teaching strategies accordingly (Hattie & Timperley, 2007), creating a more responsive and effective learning environment.
Writing assessment literacy enables teachers to use effective assessment strategies. When teachers have adequate knowledge of assessment, they are better equipped to offer constructive feedback (Brookhart, 2001; Popham, 2009; Stiggins, 2002) essential for the development of students’ writing skills. Murray (1980) emphasized that writing is essentially rewriting, highlighting its messy and intricate nature. This is where the assessment of writing becomes important. Without the capacity to assess writing, recognize good writing, and comprehend our intended message, both as educators and writers, we lose a valuable means of communication (Crusan, 2010). Consequently, teachers should design tasks that allow for multiple drafts and revisions, enabling learners to improve their writing based on the feedback provided (Ferris, 2006).
Several studies have examined teachers’ WAL and its components. In an early study, DeLuca and Klinger (2010) showed that Canadian teacher candidates were more comfortable with summative assessment, which evaluates students’ final performance, than with formative assessment, which focuses on ongoing feedback. In another study, Naghdipour (2016) found that writing teachers adhered to conventional methods and that formative assessment, collaborative tasks, and portfolio writing were notably absent from their assessment of learners’ writing performance. In their survey, Crusan et al. (2016) reported an essential interplay between teachers’ knowledge, beliefs, and practices of writing assessment. They suggested that these three elements are not isolated; rather, they interact and influence each other to shape teachers’ overall approach to writing assessment. In another study, Lam (2019), investigating the knowledge, perceptions, and practices of secondary school teachers in Hong Kong, found that the majority of teachers possessed a basic level of knowledge of writing assessment and held positive views regarding alternative assessment in writing.
The review of the literature indicates that previous studies have not sufficiently examined the interplay between the components of teachers’ WAL, such as their knowledge, beliefs, and practices. Furthermore, there is a lack of exploration of teachers’ assessment strategies and feedback practices. Therefore, the present study delves into the underexplored dynamics between the knowledge, beliefs, and practices of teachers’ WAL in the Kurdish EFL context and provides an analysis of the assessment strategies and feedback practices of Kurdish EFL teachers. It addresses the following questions:

1. What knowledge, beliefs, and practices do Kurdish EFL teachers report regarding writing assessment?
2. What assessment strategies and feedback practices do Kurdish EFL teachers employ when assessing learners’ writing?
Method
Participants
A total of 80 teachers volunteered to participate in the study. The participants were selected through convenience sampling. Teachers introduced the researchers to their colleagues who would be willing to participate in the study. Ten teachers (male= 7, female= 3) from those who filled out the questionnaire were recruited for semi-structured interviews.
Table 1. Distribution of the Participants
| Gender | Age | Experience | Major |
|---|---|---|---|
| Male: 31 | Twenties: 40 | 0-2 years: 24 | ELT (TEFL & TESOL): 16 |
| Female: 49 | Thirties: 29 | 3-5 years: 23 | English Language and Literature: 41 |
| | Forties: 10 | 6-10 years: 18 | Translation: 9 |
| | Fifties: 1 | 11-20 years: 11 | Applied Linguistics: 8 |
| | | Over 20 years: 4 | Basic Education: 2 |
| | | | Other: 4 |
| Total | | | 80 |
Instruments
To collect quantitative data, an adapted version of Crusan et al.’s (2016) questionnaire was employed. The questionnaire, which is a valid and reliable measure of teachers’ knowledge, beliefs, and practices of writing assessment, was distributed among teachers. It consists of 49 items, including multiple-choice and Likert-scale items that explore teachers’ backgrounds and perspectives on assessment, focusing on what teachers know about writing assessment. The items were categorized into five sections: demographic information about the participating teachers, teachers’ assessment strategies, knowledge of WAL, beliefs about WAL, and practices concerning WAL. Three items (items 1, 3, and 15) deemed irrelevant to the context were removed; these items pertained to specific writing programs that were not available in the context of the study, as the current study focused on general English writing programs rather than the specific programs referenced in the original questionnaire. Two other items (items 5 and 6) were modified according to the context. Item 5, which originally inquired about teachers’ degrees and included majors not available in Kurdistan, was modified to reflect fields of study relevant to the region. Item 6 pertained to training programs in assessing writing; references to programs not pertinent to the Kurdish educational context were removed so that the item accurately represented the training experiences available to teachers in the area. Cronbach’s alpha for the questionnaire in the current study was 0.81, indicating high internal consistency. In addition, semi-structured interviews were conducted with ten teachers. The interview focused on teachers’ writing assessment strategies and feedback practices, with each section containing multiple questions exploring teachers’ knowledge and beliefs about WA strategies and feedback practices. Two experts in the field evaluated the relevance of the questions.
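The reliability figure reported above is standard Cronbach’s alpha, computed as α = k/(k−1) × (1 − Σ item variances / variance of total scores). As a minimal, purely illustrative sketch of this calculation outside SPSS, the following Python snippet assumes a respondents-by-items matrix of numeric Likert codes; the simulated 80 × 21 matrix is hypothetical and does not represent the actual survey data:

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix of numeric item scores."""
    k = scores.shape[1]                               # number of items
    item_variances = scores.var(axis=0, ddof=1)       # variance of each item
    total_variance = scores.sum(axis=1).var(ddof=1)   # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Illustrative run with simulated 5-point Likert responses (80 respondents, 21 hypothetical items);
# with real questionnaire data, `scores` would hold the coded responses (e.g., 1-5) instead.
rng = np.random.default_rng(42)
scores = rng.integers(1, 6, size=(80, 21)).astype(float)
print(f"alpha = {cronbach_alpha(scores):.2f}")
```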
Data Collection
The initial step involved distributing the modified version of the WAL questionnaire developed by Crusan et al. (2016) among participating teachers. The questionnaire was distributed in both hard copy and Google Form formats to facilitate data collection. To reach a large number of EFL teachers, the researchers employed a variety of strategies. Firstly, one of the researchers and a colleague visited different universities, language institutes, and schools and distributed the questionnaire. In cases where physical visits were not feasible, the researchers obtained information from the teachers’ workplace and sent the questionnaire via Google Forms to those teachers who could not be reached in person.
Ten teachers who had completed the questionnaire were interviewed face-to-face. Each interview session lasted 30-40 minutes. The interviewees had different levels of teaching experience, ranging from novice to experienced, and held various educational qualifications. Teachers’ understanding of and beliefs about WAL were discussed, and the sessions were audio-recorded. The interview questions were composed in a manner that allowed additional questions to be asked based on the teachers’ responses. The interviewees were guaranteed anonymity and informed that their responses would be used only for the study.
Data Analysis
The questionnaire data were analyzed using SPSS software (version 26). Descriptive statistics, namely frequencies and percentages, were used to summarize and describe the features of the dataset. Regarding the qualitative data, all the interview sessions were transcribed manually by one of the researchers with great attention to detail. The transcriptions were then subjected to thematic analysis, during which patterns and themes were identified and coded. The thematic analysis began with a careful reading of the transcribed interviews. This initial reading was followed by a systematic coding process, in which segments of the text were labeled with codes that succinctly summarized their core content. The codes were then collated into potential themes that captured the salient aspects of the data relevant to the research questions.
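Since the quantitative analysis consists of frequencies and percentages per response category, the same summaries can be reproduced with a few lines of code. The following sketch uses pandas with hypothetical response counts for a single five-point Likert item (the labels and counts are illustrative, not the actual dataset):

```python
import pandas as pd

# Hypothetical responses of 80 teachers to one five-point Likert item
levels = ["definitely", "probably", "possibly", "probably not", "definitely not"]
responses = pd.Series(
    ["definitely"] * 32 + ["probably"] * 18 + ["possibly"] * 13
    + ["probably not"] * 13 + ["definitely not"] * 4
)

# Frequencies and percentages, ordered by scale level rather than by count
freq = responses.value_counts().reindex(levels, fill_value=0)
pct = (freq / len(responses) * 100).round(1)
print(pd.DataFrame({"frequency": freq, "percentage": pct}))
```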
Results
Teachers’ Knowledge, Beliefs, and Practices of WAL
Teachers’ knowledge of several common components of WA was examined through the WAL questionnaire. The results in Table 2 show that over 81% of the teachers were confident in their ability to develop writing tasks (item 5). When asked about the meaning of the concept of scoring rubrics, 71% indicated that they understood what the term meant (item 3). However, when questioned about their ability to design scoring rubrics, 72% were uncertain (item 6). Regarding teachers’ knowledge of alternative assessment, 67% claimed to understand its meaning (item 4). Concerning integrated writing tasks, 65% of the teachers indicated they knew what these tasks entail (item 2). Portfolio assessment was the least understood concept among the teachers, with only 62% acknowledging their familiarity with it (item 1).
Table 2. Teachers’ Knowledge of Basic Concepts of Classroom Writing Assessment: Frequencies and Percentages
| Items | D | PR | PS | PN | DN |
|---|---|---|---|---|---|
| 1. I understand the concept of portfolio assessment | 32 (40.0%) | 18 (22.5%) | 13 (16.3%) | 13 (16.3%) | 4 (5.0%) |
| 2. I know what is meant by integrated writing tasks | 35 (43.8%) | 17 (21.3%) | 17 (21.3%) | 9 (11.3%) | 2 (2.5%) |
| 3. I comprehend the concept of scoring rubrics | 40 (50.0%) | 17 (21.3%) | 16 (20.0%) | 4 (5.0%) | 3 (3.8%) |
| 4. I understand the concept of alternative assessment | 32 (40.0%) | 22 (27.5%) | 18 (22.5%) | 4 (5.0%) | 4 (5.0%) |
| 5. I know how to design good writing tasks | 29 (36.3%) | 36 (45.0%) | 13 (16.3%) | 1 (1.3%) | 1 (1.3%) |
| 6. I am unsure about how to design a scoring rubric | 25 (31.3%) | 33 (41.3%) | 17 (21.3%) | 3 (3.8%) | 2 (2.5%) |

(D = Definitely, PR = Probably, PS = Possibly, PN = Probably not, DN = Definitely not; cells show frequency (percentage), n = 80)
Ten items in the questionnaire were analyzed to examine teachers’ beliefs about different aspects of WA. The findings indicated that teachers value a variety of writing assessment methods and tasks (Table 3). While 75% of the teachers agreed that self-assessment can be a good technique for assessing learners’ writing (item 6), only 57% agreed that self-assessment provides an accurate picture of writing abilities (item 10). When teachers were asked about integrated writing tasks, 69% acknowledged their importance in assessing writing (item 5). This was closely followed by essay exams, with 65% of teachers recognizing their significance (item 4). Portfolio assessment was considered a good tool by 47% of teachers (item 9). Multiple-choice questions were the least favored technique for assessing writing, with 43% of the teachers favoring them (item 1). Regarding the scoring of writing tasks, 40% of the respondents viewed the scoring process as subjective (item 3), and 35% agreed that the scoring of writing is always inaccurate (item 2). The majority of the participants (80%) agreed that writing assessment yields valuable feedback for writing instruction (item 8), and 75% of the respondents indicated that writing exams provide a reasonable estimate of their students’ writing ability (item 7).
Table 3. Teachers’ Beliefs of Classroom Writing Assessment: Frequencies and Percentages
| Items | SA | A | NS | D | SD |
|---|---|---|---|---|---|
| 1. Multiple-choice questions can be used to assess writing | 9 (11.3%) | 26 (32.5%) | 15 (18.8%) | 18 (22.5%) | 12 (15.0%) |
| 2. Scoring of writing is always inaccurate | 5 (6.3%) | 23 (28.7%) | 29 (36.3%) | 23 (28.7%) | -- |
| 3. Scoring of writing is subjective | 5 (6.3%) | 27 (33.8%) | 26 (32.5%) | 22 (27.5%) | -- |
| 4. Essay exams are best when it comes to assessing writing | 18 (22.5%) | 34 (42.5%) | 14 (17.5%) | 12 (15.0%) | 2 (2.5%) |
| 5. Writing is best assessed when integrated with other skills | 21 (26.3%) | 34 (42.5%) | 14 (17.5%) | 9 (11.3%) | 2 (2.5%) |
| 6. Self-assessment can be a good technique for assessing writing | 15 (18.8%) | 45 (56.3%) | 14 (17.5%) | 5 (6.3%) | 1 (1.3%) |
| 7. A writing exam provides a good estimate of writing ability | 13 (16.3%) | 47 (58.8%) | 14 (17.5%) | 5 (6.3%) | 1 (1.3%) |
| 8. Writing assessment provides good feedback for writing instruction | 17 (21.3%) | 47 (58.8%) | 15 (18.8%) | 1 (1.3%) | -- |
| 9. A portfolio is a good tool for assessing writing | 7 (8.8%) | 31 (38.8%) | 38 (47.5%) | 2 (2.5%) | 2 (2.5%) |
| 10. Self-assessment provides an accurate picture of writing abilities | 7 (8.8%) | 39 (48.8%) | 21 (26.3%) | 12 (15.0%) | 1 (1.3%) |

(SA = Strongly agree, A = Agree, NS = Not sure, D = Disagree, SD = Strongly disagree; cells show frequency (percentage), n = 80)
The final section of the questionnaire investigated teachers’ WA practices, including their use of assessment methods, scoring rubrics, rater training, and assessment activities. The findings indicated that 56% of the respondents regularly incorporated integrated writing assessment tasks (item 4), and 51% of the teachers reported frequently encouraging learners to engage in self-assessment (item 5). Portfolio assessment was less common among the participating teachers, with only 34% reporting its use (item 3). Half of the participants (50%) regularly used scoring rubrics (item 1). The data revealed that only 16% of the participants had attended rater training sessions to enhance their writing assessment skills (item 2). Regarding typical writing assessment activities, most teachers (76%) indicated that they use final exams, 71% reported utilizing out-of-class essay assignments, and 70% preferred timed in-class writing tasks.
Table 4. Teachers’ Practices of Classroom Writing Assessment: Frequencies and Percentages
| Items | A | VF | O | R | N |
|---|---|---|---|---|---|
| 1. I use scoring rubrics when grading essays | 14 (17.5%) | 26 (32.5%) | 27 (33.8%) | 10 (12.5%) | 3 (3.8%) |
| 2. We do rater training in our program | 5 (6.3%) | 8 (10.0%) | 36 (45.0%) | 21 (26.3%) | 10 (12.5%) |
| 3. I use portfolios in my writing classes | 11 (13.8%) | 17 (21.3%) | 26 (32.5%) | 19 (23.8%) | 7 (8.8%) |
| 4. I integrate writing assessment tasks with other skills | 13 (16.3%) | 32 (40.0%) | 26 (32.5%) | 6 (7.5%) | 3 (3.8%) |
| 5. I ask learners to do self-assessments in writing classes | 10 (12.5%) | 31 (38.8%) | 33 (41.3%) | 4 (5.0%) | 2 (2.5%) |

(A = Always, VF = Very frequently, O = Occasionally, R = Rarely, N = Never; cells show frequency (percentage), n = 80)
Assessment Strategies and Feedback Practices
The second research question aimed at identifying the assessment strategies and feedback practices of the participating teachers. To this end, semi-structured interviews were conducted with ten teachers. The findings indicated that teachers reported using both summative and formative assessment methods in assessing learners’ writing abilities. Initially, six of the teachers were unfamiliar with terms such as ‘formative’ and ‘summative’ assessment and ‘scoring rubrics’; however, as the interviews progressed, it became clear that they implemented these strategies in their classrooms and were simply unaware of the terminology. We also found that teachers encouraged the use of scoring rubrics to offer valid and fair assessments of learners’ writing abilities. Concerning the feedback practices of the teachers, the interview results indicated that teachers provided positive and corrective feedback on learners’ writing; they were aware of the significance of offering frequent and personalized feedback for improving learners’ writing (Table 5).
As Table 5 indicates, teachers used formative and summative assessment tailored to learners’ proficiency levels to improve their writing; however, they preferred formative assessment through peer review, revision tasks, and online discussions. Teachers believed that, through these collaborative and reflective practices, learners can identify their strengths and areas for improvement. Supporting the effectiveness of formative assessment, one teacher stated:
We follow different formative assessment strategies, such as peer and self-assessments. I always encourage my learners to check their writing before submitting their assignments. They check their assignments for grammar and spelling mistakes. I also ask them to check the writing of their classmates and provide them with feedback.
Initially, we found that most teachers were unfamiliar with the concept of scoring rubrics. However, once the term was defined, it became clear that they had been using scoring rubrics in their assessment of learners’ writing abilities. These rubrics, often recommended by the institutes or suggested in course materials, specified standard criteria for assessing writing and were employed by teachers to promote objective and fair assessment. Teachers adapted the prescribed scoring rubrics to align assessment criteria with learners’ proficiency levels and the goals of the writing tasks (Table 5).
Table 5. Teachers’ Writing Assessment Strategies and Feedback Practices
| Components of WA | Assessment Themes | Teachers’ Views of Assessment Strategies and Feedback Practices |
|---|---|---|
| Assessment Strategies | Formative and summative assessment methods | 1. Teachers used formative and summative assessments depending on learners’ levels (seven teachers). 2. Teachers encouraged the incorporation of integrated writing tasks, peer and self-assessment, revision tasks, and online discussion, all of which are forms of formative assessment (six teachers). 3. Teachers preferred using portfolios (a summative assessment tool) (three teachers). |
| | Use of rubrics and criteria | 1. Teachers used prescribed assessment criteria and rubrics (six teachers). 2. Teachers modified the prescribed scoring rubrics to align with learners’ proficiency levels and the purpose of the writing tasks (six teachers). 3. Teachers emphasized objective and fair assessment through the use of scoring rubrics (eight teachers). 4. Teachers expressed that the use of scoring rubrics plays a pivotal role in ensuring grading consistency (five teachers). |
| Feedback Practices | Use of positive corrective feedback | 1. Teachers emphasized the provision of positive feedback, such as offering suggestions and motivating learners, taking learners’ feelings into account while giving feedback (four teachers). 2. Teachers delivered corrective feedback using symbols, comments on students’ writing, and tracked changes in Microsoft Word (nine teachers). 3. Teachers supported the practice of peer feedback (five teachers). |
| | Frequency of feedback | 1. Teachers provided regular feedback (three teachers). 2. Teachers underscored the importance of ongoing feedback throughout the learning process (six teachers). 3. Teachers advocated summative assessment at the end of each unit (five teachers). |
| | Personalization of feedback | 1. Teachers maintained confidentiality in teacher-learner interactions to build a good rapport with learners (three teachers). 2. Teachers expressed the necessity of considering learners’ emotions while delivering feedback; in providing personalized feedback, they identified common errors, modified feedback based on specific task requirements, and addressed individual needs (seven teachers). |
Heacox (2018) emphasized the necessity of tailoring rubrics to learners’ proficiency levels; by doing so, teachers can provide scaffolded support, such as additional examples or instructions for less proficient learners, while incorporating different criteria or extension tasks to challenge more proficient learners. This approach ensures that all learners receive appropriate guidance, have opportunities for growth within their individual skill levels, and are evaluated fairly. Such a personalized assessment framework enables teachers to accurately assess learners’ progress and achievement. As the interviews progressed and more questions were asked regarding the advantages of scoring rubrics, teachers noted that scoring rubrics are important in writing assessment. Echoing this view, one teacher stated:
Before scoring each writing assignment, I typically establish a set of criteria. These criteria encompass several key elements. Firstly, grammatical acceptability is assessed, evaluating the learners’ command of syntax and grammar rules. Secondly, vocabulary use is examined as it reflects their language proficiency. Thirdly, I look at how well they communicate the message in the text. Finally, I assess the coherence and cohesion of the text to ensure that the ideas are logically connected.
The insights derived from teacher interviews in this study emphasized that teachers were aware of the impact of positive and corrective feedback on learners’ writing assignments. The interviewed teachers indicated that feedback helps learners improve their writing from the early stages of their learning. They noted that while providing feedback they considered learners’ feelings and offered suggestions for the improvement of their writing.
In fact, the teachers in this study primarily provided general positive and corrective feedback on learners’ writing. Their feedback was not specific and, as such, did not target writing development. They were mostly concerned with learners’ emotions while providing feedback on their assignments. This is evident in the following statement by one of the interviewed teachers:
When I provide feedback, I always consider the feelings of learners. This is because an overload of feedback can sometimes lead to disappointment. My approach is to motivate learners; for instance, if they have used a less frequent vocabulary, I acknowledge it in my feedback by saying, “Thank you, try to use more adjectives. You can even search for similar adjectives to expand your vocabulary knowledge”. This way, I encourage my learners.
Teachers occasionally employed various techniques such as correcting errors, using symbols, tracking changes in Microsoft Word, and providing comments on learners’ essays. In this regard, one teacher noted, “I use highlighting, commenting, underlining, and symbols to identify errors”. Another participating teacher noted, “We use Google Classroom and Word documents in our teaching. When I see errors, I highlight them. I use abbreviations such as 'S' for spelling mistakes and 'G' for grammar issues”.
In the qualitative data analysis, several key themes emerged regarding how teachers approached personalized feedback. Firstly, maintaining confidentiality within the teacher-learner interaction was regarded as crucial for fostering a strong rapport between teachers and learners. Teachers emphasized the significance of creating a safe and trusting environment in which learners feel comfortable and have no fear of judgment. Secondly, a prevalent concern among teachers was the acknowledgment of learners’ emotions during the feedback process. The majority of the teachers believed that learners’ emotional states should be taken into consideration while delivering feedback and argued that different strategies should be implemented when providing feedback to different learners. These strategies included identifying common errors, tailoring feedback according to specific task requirements, and addressing individual needs. This can contribute to the development of a supportive learning environment in which each learner’s unique strengths and challenges are duly recognized and addressed.
When teachers were interviewed about feedback uptake, they noted that learners most often overlook the feedback. This finding underscores the concern expressed by one of the teachers, who stated, “generally, around 20% of learners correct their errors. Approximately 40% of learners, when exposed to repeated corrections, learn to avoid these errors. A significant number of learners, about 40%, tend to make the same errors”. Supporting this perspective, another teacher indicated that learners display a concerning level of indifference toward the feedback provided by their teachers and often ignore it.
In relation to learners’ disengagement with the feedback provided by teachers, one of the teachers stated:
The learners’ reactions are quite disheartening despite our efforts. We often spend 4-5 hours preparing for a 2-hour class, but the learners do not seem to focus on the feedback we give. We persist in our efforts to guide them and provide corrections and feedback, yet we find that they make the same mistakes in subsequent essays.
Discussion
The primary objective of the present study was to explore Iraqi Kurdish EFL teachers’ knowledge, beliefs, and writing assessment practices. The findings pointed to teachers’ inadequate knowledge of writing assessment and highlighted a disparity between their stated beliefs and practices. Several factors may contribute to teachers’ limited knowledge of writing assessment, such as inadequate professional development opportunities, curricular limitations, limited teaching experience, and personal beliefs and attitudes that could lead teachers to underestimate the significance of formal assessment practices (Fulcher & Davidson, 2007; Weigle, 2002).
The results from the self-reported questionnaire showed that many of the teachers held positive beliefs towards formative assessment, including self- and peer assessment. However, their self-reported practice revealed that, despite recognizing the value of formative assessment, teachers relied on summative assessment methods, such as final exams, out-of-class essay assignments, and timed in-class essay writing, to assess learners’ writing. They preferred summative assessment methods for several reasons. Firstly, inadequate assessment literacy is a key reason why teachers still use traditional summative methods, as it results in a limited understanding and implementation of diverse formative assessment methods. Secondly, teachers may perceive assessment primarily as a grading tool rather than a means to facilitate learning. Additionally, formative assessment methods require more time and effort to implement. Finally, inadequate resources, including training and support, hinder the effective implementation of formative assessment methods, leading teachers to stick to familiar summative assessment practices despite recognizing the benefits of formative assessment.
The qualitative examination of teachers’ common assessment strategies yielded results incongruent with those derived from the quantitative data analysis. Despite their reported preference for summative assessment, in the interviews the teachers noted that they used formative assessment in their classes. We found that summative assessment was less favored by the interviewed teachers because it offers a snapshot of learners’ performance at a single point in time and does not fully reflect learners’ development and learning. The participating teachers further stated that summative assessments can demotivate and discourage learners when they perceive the emphasis to be on grades rather than on the development of their writing skills. This finding is contrary to what DeLuca and Klinger (2010) reported regarding Canadian teacher candidates, who preferred summative assessments. The present findings also differ from those of Naghdipour (2016), who reported that Iranian teachers tend to adhere to summative assessment methods in L2 writing classrooms.
In terms of using rubrics in assessing writing, we found that teachers considered scoring rubrics essential tools for unbiased evaluation, acknowledging their role in providing clear assessment guidelines and criteria. These rubrics not only outline specific expectations for learners, thereby mitigating subjectivity in grading, but also facilitate effective communication of expectations, fostering transparency in the evaluation process. While some teachers strictly adhered to prescribed assessment criteria and rubrics to ensure consistency and fairness across evaluations, others preferred to modify rubrics based on learners’ proficiency levels and task demands.
Regarding feedback practices, the participating teachers acknowledged the significant role of positive and corrective feedback in improving the writing ability of learners. However, their feedback was primarily general, lacking the specific guidance needed to help learners develop their writing skills. The findings further indicate that peer feedback is a valued assessment method employed by the participating teachers to evaluate learners’ writing abilities. Teachers believed that by engaging in peer feedback, learners take an active role in the assessment process, and this practice can enhance their understanding of assessment criteria and cultivate critical thinking skills. The importance of peer assessment is highlighted in Vygotsky’s concept of the Zone of Proximal Development, which suggests that learners benefit from scaffolding that includes timely and specific feedback from more knowledgeable others, including peers. A similar conclusion is drawn by Hawe and Dixon (2014), who suggested that peer feedback fosters a sense of responsibility among learners and promotes their autonomy. However, teachers should be aware that learners may not possess the necessary skills, knowledge, and objectivity to effectively provide feedback on their peers’ writing. Teachers should train learners and specify the factors that should be considered while providing feedback. They need to familiarize learners with scoring rubrics and assessment criteria and ask them to provide feedback on the basis of these criteria.
The findings also pointed to learners’ limited responsiveness to teacher-provided feedback. Teachers were concerned that learners often overlooked the feedback they provided and made similar mistakes in their subsequent writings. This suggests that the current approach to feedback provision is not effective. The lack of explicit instruction accompanying feedback and learners’ limited capacity to interpret and act upon it are likely contributing factors to this disconnect.
Conclusions and Implications
The present study provided valuable insights into teachers’ knowledge, beliefs, and practices of writing assessment. The findings revealed that Iraqi Kurdish EFL teachers possess limited knowledge of writing assessment. Despite positive beliefs towards formative assessment, teachers tended to use summative assessment methods such as final exams and timed essay writing, potentially due to a lack of assessment literacy, perceiving assessment primarily as a grading tool, and inadequate resources that hindered the effective implementation of formative assessment methods. The feedback teachers provided on the writings of learners lacked the specificity needed for writing development, and learners often disregarded that feedback. Peer feedback was considered a valued assessment method, aligning with theories emphasizing collaborative learning and development of learner autonomy.
This study has important implications for teacher education programs and language teachers. Professional development programs that familiarize teachers with assessment theories can inform teachers’ classroom practices. Teacher education programs should prioritize workshops and training programs that focus on the application of different assessment theories in diverse classroom contexts and encourage teachers to use varied assessment methods. Language teachers should also learn to provide specific feedback on learners’ writing. They should use techniques such as conferencing, which allows teachers to provide immediate, individualized feedback to learners in a face-to-face setting. Conferencing creates a dialogue between teachers and learners, fostering an environment in which learners can discuss how to improve the quality of their writing.
This study is not without limitations; convenience sampling and sample size are clear constraints. The participants were selected based on their availability, which can limit the generalizability of the findings to other contexts. Another limitation relates to the data collection procedures, which included a self-report questionnaire and interviews. Questionnaires with closed-ended questions may limit participant responses and are subject to self-report biases, as participants might exaggerate their abilities. Interviews may suffer from inaccuracies due to participants’ recollections of past experiences. Thus, including other data collection methods, such as observation, could provide more in-depth information about how teachers put their knowledge and beliefs about writing assessment into practice. Through observation, researchers can identify key factors contributing to the disparity between teachers’ beliefs about assessment and their assessment practices. Finally, future research can examine the effect of professional development interventions on teachers’ assessment literacy and assessment practices.
Declaration of Conflicting Interest
This research did not receive any funding or research grants. The authors hereby affirm that they have no conflicts of interest.
All methods employed in this study were in strict compliance with ethical guidelines.