Operationalization of Formative Assessment in Writing: An Intuitive Approach to the Development of an Instrument

Document Type: Original Article

Authors

1 Hakim Sabzevar University; University of South-Eastern Norway

2 Hakim Sabzevar University

3 University of South-Eastern Norway

Abstract

The current study aimed to develop a Formative Assessment of Writing (FAoW) instrument by operationalizing Black and Wiliam’s (2009) Formative Assessment (FA) framework and Hattie and Timperley’s (2007) feedback model. Following the intuitive approach to scale construction (Hase & Goldberg, 1967), a comprehensive review of the literature was undertaken and 30 Likert scale items were devised. The items tapped students’ experiences of FA practices in writing classrooms and their attitudes towards the helpfulness of each practice. In a focus group interview, the items were intuitively classified by three experts in writing and assessment according to the five components of FA (clarifying criteria, evidence on students’ learning, feedback to move learners forward, peer assessment and autonomy) and the three stages of feedback (“Where the learner is going”/pre-writing, “Where the learner is right now”/writing and “How to get there”/post-writing). The expert interviews resulted in revisions and 20 additional items. The experts also agreed that the items in the FAoW instrument corresponded with the theoretical frameworks of FA as well as the three stages of feedback.



Introduction

Historically, students’ writing performance has been tested summatively, usually in the form of a grade, which indicates retrospectively how much they have achieved (Lee, 2007). Formative Assessment (FA), however, is prospective: it sends subtle messages to the learner and the teacher about what needs to improve and how. Research on second language (L2) writing over the past two decades has not seen teachers enact a wide range of FA practices, especially in the English as a Foreign Language (EFL) context (Burner, 2015). Much of the research on writing has focused on summative assessment, peer assessment or the effectiveness of teachers’ feedback (Lee, 2003, 2011a). Far too little attention has been paid to operationalizing theoretical FA frameworks and writing models by accumulating a comprehensive list of formative feedback practices in writing. Carless (2007) similarly referred to this gap and to the challenges of implementing the theoretical insights of FA reported in the literature.

In higher education, Yorke (2003) agrees that FA is weakly understood and insufficiently theorized. Similarly, Bennett (2011, p. 5) admits that “the term, ‘formative assessment’, does not yet represent a well-defined set of artefacts or practices.” In his view, a meaningful definition of FA requires a theory of action and concrete instantiations. His example of the Keeping Learning on Track (KLT) Program (ETS, 2010) offers such a definition, as it revolves around ‘one big idea and five key strategies’ (Bennett, 2011, p. 9). More specifically, it is based on Black and Wiliam (1998b, 2009) and the idea that in FA students and teachers should use learning evidence to adapt teaching and learning to meet learning needs (ETS, 2010). Bennett (2011) calls for more work of this kind to push the field of FA forward.

The current study, therefore, aimed to develop a FAoW instrument for higher education based on a FAoW theoretical framework developed by integrating Black and Wiliam’s (2009) FA framework with Hattie and Timperley’s (2007) model of feedback. In doing so, the researchers decided to address EFL learners, as their role in FA has been underexplored in comparison with the teachers’.

Despite the quintessential role of the learners, FA has mainly been studied from the perspective of teachers, since teachers are usually identified as a more reliable source of judgment on students’ learning (Brookhart, 2001). For instance, when elaborating on pre-emptive FA, Carless (2007) focused on teachers as the key mediators in enhancing students’ learning. Students’ experiences of assessment practice, however, are an important source of information for learning that is not sufficiently documented in the literature (Yorke, 2003). Some studies have examined students’ experience and perception of FA through questionnaires built on key principles of FA, such as establishing learning targets, sharing assessment criteria, questioning, feedback, self-assessment and peer assessment (e.g. Black & Wiliam, 1999; Brookhart, 2001; Clarke, Timperley, & Hattie, 2003). However, students’ experiences of the FA practices their teachers implement in writing classrooms have been investigated to only a limited extent. The results of the present study, hence, contribute to the understanding of teachers’ FAoW by providing an instrument to measure FA practices before, while, and after writing tasks. The instrument also measures students’ attitudes towards the helpfulness of these practices.

Given the insufficient research on the construct of FAoW, particularly in EFL contexts, it was initially essential to define the construct theoretically and operationalize it by developing an instrument. This was the aim of the present study, which was conducted concurrently with a factor-structure validation study (Tavakoli, Amirian & Burner, under review).

 

Writing and Formative Assessment Instruments

Some educational scholars have targeted the role of FA in promoting students’ writing performance through interviews and/or surveys (e.g. Burner, 2015; Keen, 2005; Lee, 2011b; Lee & Coniam, 2013; Mak & Lee, 2014; Naghdipour, 2016; 2017). Many of these studies, however, are case studies with a narrow focus on students’ attitudes towards some FA practices in a writing course. They introduce a set of FA practices to investigate the impact of FA on students’ learning (Burner, 2015; Lee, 2011b) or teachers’ perception of FA (Keen, 2005).

Keen (2005), for instance, suggested a sequence for teaching writing based on students’ engagement, reflection approach and FA. His article was mainly a discussion of his work with a group of 26 postgraduate trainee English teachers at the University of Manchester who were asked to provide their analysis of students’ writing assessment through the following suggested sequences of FA framework:

1. Enable students to respond to a particular writing challenge with appropriate supports such as task specifications; peer discussion; sharing assessment criteria with students; providing opportunities for students to apply them to their own or their peers’ writing;

2. Identify strengths and achievements in students’ writing (text, word or phrase level);

3. Enable students to share strengths and achievements by reading out sections of their writing to the whole class or small groups;

4. Enable students to identify strengths as well as shortcomings in their own writing and in that of others, and to discuss how to develop strengths and how to overcome weaknesses;

5. Enable students to apply their learning, for example through redrafting or teacher/student modeling;

6. Provide new writing challenges with carefully phrased assessment criteria;

7. Enable students’ increasing independence as writers, for example through continuing peer assessment and self-assessment.

The teachers deployed the findings from assessment at the preparation stage as well as when devising the follow-up lesson. They used peer assessment and student self-assessment to enable students to address the strengths, achievements and shortcomings of their own and their peers’ writing. Although this list provided a thorough collection of FA practices and an overall framework for teaching writing, Keen (2005) did not unify them under a theoretical framework similar to Black and Wiliam’s (2009).

In another study, limited to error feedback in a writing program, Lee (2003) designed a questionnaire and piloted it with a small group of secondary English teachers. The questionnaire mainly asked about (1) teachers’ existing error feedback practice, (2) their perspectives on error feedback and (3) their perceived problems. It consisted of two open-ended and thirteen closed-ended items, all on error feedback practices such as selective error feedback, coded marking, and underlining for direct error correction or indirect hints. Although her questionnaire also included activities after marking students’ writing, such as conferencing, reviewing common errors in class and keeping an error log, it did not tap a unified framework of how feedback could be utilized formatively to promote students’ autonomy in writing.

Four years later, Lee (2007) provided a framework for teaching writing formatively through illustrating the following writing instructional unit based on FA principles:

- Planning writing (identifying goals and designing pedagogical materials to realize them)

- Teaching writing according to the goals and applying assessment criteria in self- and peer assessment

- Multiple drafting, with every draft followed by students’ self- and peer assessment, and subsequent drafts followed by teacher assessment, all based on the assessment criteria. The teacher provides written and oral feedback to the whole class or to individual students at conferences, identifies students’ strengths and weaknesses, and suggests ways to improve. Students revise drafts by acting upon the teacher’s feedback.

- Assessing the final draft (by the teacher), with students reflecting upon the teacher’s feedback and setting goals for further development

- Planning the next writing on the basis of information gathered from the assessment and the areas where students need help.

Lee’s earlier studies (2003, 2007) did not devise an instrument to measure FA in the EFL writing classroom; however, she used an innovative questionnaire in her case study (2011b) which was subsequently used by many other researchers in the field of writing assessment (e.g. Naghdipour, 2017). To identify the attitudes of a teacher’s students towards FA before and after a writing course based on FA, Lee (2011b) developed a questionnaire consisting of 25 items with five-point scales. The teacher’s FA practices encompassed multiple drafting, longer pre-writing, feedback forms based on pre-established assessment criteria, self/peer evaluation and withholding scores/grades. Although Lee (2011b) described these FA practices in detail and embedded many of them in her questionnaire, she did not expand on its construction or report validity or reliability indices, probably due to the limited number of students (n = 14).

Most recently, Burner (2015) developed another questionnaire to investigate how four female teachers and their 100 students in Norway responded to FA in EFL writing classes. Although he maintained that there was no single definition of FAoW in the literature, to arrive at an overall picture of EFL students’ perceptions of this construct, he formulated the items in his questionnaire within a holistic approach to FAoW based on FA (useful feedback, negative effects of grades, self-assessment, student involvement) and writing assessment in EFL (text revision/multiple drafting, writing practice). In contrast to the typical “I can do” statements, which focus on individual cognitive domains, he used relational/interactional phrases. A Cronbach’s alpha of 0.71 indicated acceptable internal consistency, suggesting that the scale items measured the construct of FAoW.
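For readers less familiar with the statistic, the Cronbach’s alpha figures reported in these studies can be computed directly from a respondents-by-items score matrix. The sketch below is illustrative only: the data are invented, not taken from Burner (2015) or any other study reviewed here.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) matrix of Likert scores."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # per-item sample variances
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of respondents' totals
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Invented responses: 6 students x 4 items on a 1-4 Likert scale
scores = np.array([
    [4, 3, 4, 4],
    [2, 2, 3, 2],
    [3, 3, 3, 4],
    [1, 2, 1, 2],
    [4, 4, 3, 4],
    [2, 1, 2, 2],
])
print(round(cronbach_alpha(scores), 2))  # about 0.93 for this toy sample
```

By convention, values of 0.70 and above are read as acceptable internal consistency for research instruments, which is how the 0.71 and 0.82 figures mentioned in this review are interpreted.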

James and Pedder’s (2006) factor analysis was conducted on section A of the Secondary School Staff questionnaire in the Learning how to Learn (L2L) project developed by Black, McCormick, James and Pedder (2006). The section measured teachers’ classroom assessment practices through 30 double-scaled Likert items with three underlying dimensions, namely making learning explicit, promoting learning autonomy and performance orientation. It was later employed and validated in international contexts by the Assessment for Learning in International Contexts (ALIC) project (Warwick, Shaw, & Johnson, 2015).

The full questionnaire had initially been developed from operationalized conceptual and empirical insights in the literature on classroom assessment practices and values, teachers’ professional learning and school management. The items in the first section were constructed with a particular interest in the relationship between assessment and learning and in teachers’ perceptions of and views on assessment practices. The items asked teachers to indicate, with reference to their own practice, whether particular practices were never true, rarely true, often true or mostly true. They were also asked to respond to a value scale, indicating how important they felt any given practice was in creating opportunities for students to learn. The elements of classroom assessment were identified, with a focus on purposes, goals and functions, as any classroom activity intended or actually used to meet learning needs (Black, Harrison, Lee, Marshall, & Wiliam, 2003). Item construction was specifically based on the four major categories of questioning, feedback, sharing criteria and self-assessment, identified in the review of research by Black and Wiliam (1998a, 1998b). As section A of the Secondary School Staff questionnaire had been designed in line with the 10 principles of assessment for learning (ARG, 2002) and Torrance and Pryor’s (1998, p. 153) typology of ‘convergent’ and ‘divergent’ assessment, it was utilized in the construction of the FAoW instrument in this research.

It seems that existing accounts have failed to develop a comprehensive instrument for FAoW that covers FA practices in all three stages of writing: pre-, while-, and post-writing. This gap is felt even more noticeably when the literature in Iran is reviewed.

Most of the scales in the abovementioned studies measure teachers’ classroom assessment, FA or assessment of writing in general. A number of researchers in the Iranian EFL context (e.g., Elahinia, 2004; Ghoorchaei, Tavakoli, & Nejad Ansari, 2010; Javaherbakhsh, 2010; Moradan & Hedayati, 2011; Mosmery & Barzegar, 2015; Nezakatgoo, 2005; Sharifi & Hassaskhah, 2011) investigated assessment of writing, but not FAoW. Most of these studies employed the available assessment questionnaires in the literature without validating them in the new context. Although none of these studies tapped FAoW as a unified construct, the consensus was on the beneficial effect that alternative forms of assessment had on improving students’ writing ability. When the implementation of various forms of FA is explored for writing classrooms, however, the most frequent methods among teachers of adult and young adult learners are found to be limited to writing essays and dictation. Other forms of FA, such as portfolios, journals, and self/peer-assessment, are reported by Iranian teachers to be never or rarely used (Ketabi, 2015).

Naghdipour (2017) is one of the few scholars who incorporated FA in a university EFL writing course in Iran and, through a pre- and post-study questionnaire mainly developed after Lee (2011b), investigated the changes in students’ attitudes and beliefs toward writing and formative assessment. The questionnaire had 20 Likert scale statements and was translated into Persian. It was administered together with the English version to ensure understanding, and its reliability analysis showed a relatively high degree of internal consistency (Cronbach’s alpha of 0.82). Students completed and returned the surveys in the classroom. Comparison of the participants’ scores prior to and after the FA intervention revealed an improvement in various aspects of Iranian undergraduate students’ writing and the development of positive attitudes towards writing as well as FA.

Reviewing the above-mentioned studies revealed that most researchers opted for case studies and incorporated some FA strategies into a writing course. The literature on classroom FA lacks reliable instruments that target the common FA practices in writing in an EFL context like Iran. It seems that the available instruments and questionnaires have not been informed by strong theoretical frameworks of FA, writing and feedback. Given the crucial need to develop a FAoW instrument and operationalize its constructs, the researchers in this study aimed to design a FAoW instrument through the intuitive approach (Hase & Goldberg, 1967), which is elaborated in the following section.

 

Intuitive Approach

For the development of the FAoW instrument (see Appendix I), of the four common approaches introduced by Hase and Goldberg (1967), i.e. (a) internal (e.g., factor analytic), (b) external (e.g., group discriminative), (c) intuitive rational and (d) intuitive theoretical, we utilized both the intuitive rational and the intuitive theoretical approaches. In the intuitive rational approach, the researcher has some dimensions of the construct in mind and, based on an intuitive understanding of these dimensions, selects items believed to relate to them. In the intuitive theoretical approach, on the other hand, item selection is conducted on the basis of a formal psychological theory.

In the current study, Black and Wiliam’s (2009) theory of FA along with Hattie and Timperley’s (2007) model of feedback were initially utilized to define and operationalize the new theoretical construct of FAoW, with five underlying FA components and three stages of writing feedback. After this theoretical step, writing domain experts were asked to judge the correspondence between the five FAoW components and the operationalized FAoW practices drawn from the literature. This was conducted on the basis of the experts’ intuitive judgments. In the following sections, the five components of Black and Wiliam’s (2009) FA model, the three stages of writing feedback (Hattie & Timperley, 2007) and the experts’ matching of FAoW items with these two theoretical frameworks are elaborated in detail.

 

Components of Formative Assessment of Writing

The construction of the FAoW instrument was guided first by item selection from the literature and then by Black and Wiliam’s (2009) theory of FA, since their model outlined the cyclic nature of FA and targeted both teachers and learners (particularly by including peer assessment). It was introduced to compensate for the shortcomings of earlier models (Shirley, 2009), which, according to Shirley (2009), did not outline a cycle of formative assessment but rather described certain elements of effective formative assessment. Although earlier models of FA (e.g. Black & Wiliam, 1998a, 1998b, 2006) emphasized that assessment would not be formative without leading to changes in instruction and without enhancing learning, they did not address this need (Black & Wiliam, 2009). Black and Wiliam (2009, p. 5) wrote that their earlier works,

did not start from any pre-defined theoretical base but instead drew together a wide range of research findings relevant to the notion of formative assessment. Work with teachers to explore the practical applications of lessons distilled therefrom (Black et al. 2002, 2003) led to a set of advisory practices that were presented on a pragmatic basis, with a nascent but only vaguely outlined underlying unity.

Hence, Black and Wiliam’s unified theoretical framework (2009) defined FA within broader theories of pedagogy by relating FA to cognitive acceleration and dynamic assessment, and to some models of self-regulated learning and classroom discourse. They drew together five FA activities introduced in earlier works (Black et al., 2003; Wiliam, 2007), namely sharing success criteria with learners, classroom questioning, comment-only marking, peer and self-assessment and, finally, the formative use of summative tests. These five activities revolve around three key questions in FA: where are the learners going, where are they now, and how can the gap between the two be closed (Wiliam & Thompson, 2007)? Table 1 illustrates how these researchers crossed the three processes with the three agents: teacher, peer and learner.

Table 1. Aspects of Formative Assessment (Black & Wiliam, 2009, from Wiliam & Thompson, 2007)

 

Where the learner is going.

Where the learner is right now.

How to get there.

Teacher

1. Clarifying learning intentions and criteria for success

2. Engineering effective classroom discussions and other learning tasks that elicit evidence of students’ understanding

3. Providing feedback that moves learners forward

Peer

Understanding and sharing learning intentions and criteria for success

4. Activating students as instructional resources for one another

Learner

Understanding learning intentions and criteria for success

5. Activating students as the owners of their own learning

 

Hattie and Timperley’s (2007) model of feedback was also used in developing the FAoW construct, since their description of effective feedback through the notions of ‘feed up, feedback and feed forward’ taps the main function of FA. They identified the main purpose of feedback as reducing “discrepancies between current understandings and performance and a goal” (p. 86) and introduced several agents who can provide feedback and fill this gap (e.g., teacher, peer, book, parent, self, experience).

 

Design of FAoW Instrument

Firstly, by reviewing the literature and identifying the salient dimensions of FA, a theoretical model (based on Black and Wiliam, 2009) was selected as the foundation of instrument development. Secondly, an item pool of teachers’ formative assessment practices was drawn from the available instruments in the literature. Since the construct of FAoW had not been operationalized in any instrument with a unified theoretical framework, the researchers had to utilize the existing instruments on FA practices, assessment of writing and writing feedback separately (e.g., Lee, 2003, 2007, 2011b; Keen, 2005; James & Pedder, 2006; Bremner, 2014; Mak & Lee, 2014). Articles with questionnaires on classroom assessment (e.g. James & Pedder, 2006), writing feedback (e.g. Lee, 2003), FA (Black & Wiliam, 1998a, 1998b) and FAoW (Burner, 2015; Lee, 2011b) were identified. The item pool was compiled by the first author by accumulating all the items and removing redundant, similar practices. The initial list of potential items to measure FAoW consisted of 30 items. In the focus group interview session, the three experts read the items and decided whether each could be associated with the components of the FAoW framework or the feedback model.

In this study, we used the terms experience and item interchangeably as every item in the instrument was a different FAoW practice which was reported by the participants to have been experienced in their writing classes always, often, rarely or never.

The next stage of developing the instrument was a three-hour focus group interview with three domain experts, who were asked to judge the redundancy, face validity, content validity and language clarity of each experience. Survey specialists (e.g. Dörnyei, 2003) recommend that the questionnaire design phase be preceded by a small-scale qualitative study (e.g., focus group interviews) to provide more reliable information on the relevant points and issues. The experts in this study were selected through purposeful sampling; all of them held a PhD in Teaching English as a Foreign Language (TEFL) and had at least ten years of experience in teaching and assessing writing at universities and language schools. They had all published scholarly articles in the field of language assessment and writing. The experts were also asked to suggest the construct underlying each experience based on Black and Wiliam’s (2009) framework and Hattie and Timperley’s (2007) feedback model.

At this stage, the experiences on which the three experts did not reach consensus, based on the five FA constructs and the three stages of feedback in the aforementioned theoretical frameworks, were identified and modified. Multi-dimensional experiences were broken into several separate experiences, which resulted in an instrument with 50 statements, each with two scales (experience and attitude). The experience scale was a four-point Likert scale (never, rarely, often, always) requiring the students to indicate how often they had experienced each FA practice in their writing classrooms. The attitude scale required them to indicate how much they believed each practice could improve their writing by choosing one of five Likert points (very helpful, helpful, neither helpful nor unhelpful, unhelpful, very unhelpful).
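The dual-scale format described above can be made concrete with a small sketch. The encodings below are our own illustrative assumptions (the item number, field names and numeric score mappings are hypothetical conveniences), not part of the published instrument.

```python
from dataclasses import dataclass

# Assumed numeric codings for the two scales (illustrative only)
EXPERIENCE = {"never": 1, "rarely": 2, "often": 3, "always": 4}
ATTITUDE = {
    "very unhelpful": 1,
    "unhelpful": 2,
    "neither helpful nor unhelpful": 3,
    "helpful": 4,
    "very helpful": 5,
}

@dataclass
class ItemResponse:
    """One student's answer to a single FAoW experience statement."""
    item_no: int      # 1..50
    experience: str   # how often the practice was experienced
    attitude: str     # perceived helpfulness of the practice

    def scores(self):
        """Return the (experience, attitude) numeric pair for this item."""
        return EXPERIENCE[self.experience], ATTITUDE[self.attitude]

response = ItemResponse(item_no=14, experience="often", attitude="helpful")
print(response.scores())  # (3, 4)
```

Keeping the two scales as a pair per item makes it straightforward to analyse experience and attitude separately, as the instrument intends.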

To base the design of the instrument on a writing model in addition to a FA model, the researchers implemented the feedback component of Hattie and Timperley’s (2007) model. The selected experiences were organized in three classes based on the feedback model, i.e. ‘feed up, feedback and feed forward’. Figure 1 shows the writing practices in these three stages.

 

Feed up/ Pre-writing Stage

Thirteen experiences tapped pre-writing activities such as model writing, pre-writing planning, organizing and developing writing ideas, free writing, setting writing goals and reflecting on them, and setting writing assessment criteria. These activities relate to ‘feed up’, defined as the goals one lays down to achieve: in the ‘feed up’ stage, students set attainable goals so that they understand what they are working towards (i.e., where they are going) (Hattie & Timperley, 2007, p. 86).

 

Feedback/ While Writing Stage

In line with Mak and Lee (2014) and based on Hattie and Timperley’s model, the ‘feedback’/while-writing stage guided the second set of writing activities, which specified what progress was being made towards the goal. They included writing practices such as process writing/multiple drafting, feedback on writing progress, peer feedback on writing, keeping a writing error log, computer feedback, autonomous writing revision, writing reflection and self-assessment. Thirty experiences were placed under this construct, tapping a variety of feedback types (e.g. graded, focused, indirect, direct and descriptive) from various sources (e.g. peers, teachers and the learners themselves). This stage corresponded with the ‘where the learner is right now’ principle of FA and reflected the learners’ progress so far.

 

Feed Forward/ Post-writing Stage

The ‘feed forward’/post-writing stage encompassed writing practices that could guide students towards future improvement, such as reflection for future progress, teacher-oriented feedback and portfolio assessment. As Mak and Lee (2014) confirm, this stage covers writing practices that give students a direction for what they are to achieve in the future and provide them with a blueprint of where they are going.

 

Figure 1. Writing constructs tapped by the FAoW instrument in three stages

 

To develop the item bank, the researchers collected as many practices as possible for assessing students’ writing tasks and classified them based on the aforementioned theoretical models. Black and Wiliam’s (2009) five salient dimensions of FA (see Table 2) and Hattie and Timperley’s (2007) feedback model (see Figure 1) consequently guided the researchers in extracting the experiences, modifying them and finally classifying them into five FA constructs and three writing stages.

The FAoW instrument tapped various writing practices that teachers should consider when assessing students’ writing assignments. Weigle (2007, p. 200) asserts that the process of writing “involves reflection, discussion, reading, feedback, and revision, and one’s best work is usually not produced in a single draft within 30 or 60 minutes.” In addition to the reflection, questioning, feedback, revision and multiple-drafting practices in the FAoW instrument, in-class/timed writing and out-of-class writing (experiences 7 and 13) were highlighted by Weigle (2007, p. 201) as “complementary sources of information about students’ writing abilities.” She also referred to portfolio assessment (experience 47 in the FAoW instrument) as a helpful practice which integrates assessment and instruction and which teachers can utilize to identify students’ progress.

Overall, the final FAoW instrument had 13 experiences tapping pre-writing FA practices such as clarifying goals and assessment criteria, model writing, free writing, brainstorming and planning for writing. The writing stage encompassed 13 experiences of FA practices such as multiple drafting, various forms of feedback from peers and the teacher, self-assessment and the use of non-human sources such as computer software. These experiences target teachers’ assessment of students’ current level of writing and are more in line with the summative function of assessment. To assess writing more formatively, and to identify the practices that could move learners forward in their writing and make them autonomous in self- and peer assessment, 24 experiences were finalized, focusing mainly on identifying and reflecting on strengths and weaknesses and planning for progress.

 

Experts’ Interview

After the focus group interviews with the domain experts, half of the initial experience items (4, 8, 17, 18, 25, 26, 27, 28, 29, 30, 31, 33, 35, 39, 48 in Appendix I) remained intact, but the other 15 underwent three kinds of changes. Firstly, seven experiences were judged to be multidimensional and in need of being broken down into distinct experiences. For instance, “I spend much time on pre-writing activities (e.g. asking questions, making notes, mind-mapping, free-writing, brainstorming, sharing ideas orally in small groups)” was broken down into six experiences, each targeting a distinct pre-writing activity (experiences 5, 6, 7, 10, 11 and 12 in Appendix I).

Secondly, eight experiences were fine-tuned in terms of wording, and some ambiguous key words were clarified (experiences 1, 2, 3, 9, 16, 22, 30 and 38). Two of the experts, for instance, insisted that the term qualitative feedback in the original draft, “My teacher gave me comparative qualitative feedback on my progress in writing”, needed clarification, which led to the inclusion of two examples: “compared to your last writing, this one is better/worse” and “You did better/worse in this writing because…” (see item 43 in Appendix I).

Finally, the experts concurred that two components (4 and 5 in Table 1) had not been adequately targeted in the original draft and that it was crucial to include more experiences to measure recent FAoW practices in line with computer-assisted assessment. They consequently added 12 more experiences (13, 19, 21, 24, 32, 34, 42, 45, 46, 47, 49 and 50 in Appendix I) to the original draft, based on the literature and particularly on their own knowledge and experience. The experiences introduced as a result of the interview with the experts mainly comprised assessment through keeping an error log, a frequency chart, computer software and applications, and learning autonomy (the fifth component in Table 1). Hence, the original draft of 30 experiences was ultimately extended to 50 experiences associated with the five dimensions in the following framework of FAoW (Table 2).

Table 2. FAoW framework, experience and construct matching by experts

 

Where the learner is going

Pre-writing

Where the learner is right now

Writing

How to get there

Post-writing

Teacher

Experiences 1, 2, 3, 4, 5, 6, 7, 8, 12

clarifying criteria

Experiences 14, 15, 18, 20, 22, 23, 29, 30, 31, 36, 40, 43, 48

evidence on students’ learning

Experiences 32, 33, 37, 39, 41, 44, 45, 47, 50

feedback to move learners forward

Peer

Experiences 9, 10

clarifying criteria

Experiences 16, 17, 25,26, 28

peer-assessment

Learner

Experiences 11, 13

clarifying criteria

Experiences 19, 21, 24, 27, 34, 35, 38, 42, 46, 49

Autonomy

 

The domain experts also agreed that the three stages (pre-, while- and post-writing) corresponded with the three key questions of “where are the learners going”, “where are they now” and “how can the gap between the first two be filled” (Wiliam & Thompson, 2007), and that they had to be practiced in a cycle for implementing FAoW. In other words, they agreed that for practicing FAoW, assessment criteria and goals should be clarified in the pre-writing stage, and feedback should be given at various points and on several drafts by teachers and peers during the writing stage and even after it. The teachers’ and learners’ resulting understanding should then inform the objectives of subsequent instruction. In this way, learners can become more independent and the ultimate goal of FA can be more easily achieved.

 

Discussion

In the current study, the first priority was to develop an instrument based on Black and Wiliam’s (2009) model of FA and Hattie and Timperley’s (2007) feedback model to operationalize the construct of FAoW. EFL domain experts came to a consensus on the five dimensions underlying all the items in the instrument, though they extended the instrument from 30 experiences to 50, partly by breaking down experiences judged to be multidimensional and partly by adding new ones. Implementation of these practices in a writing program can probably best be clarified through comparison with the empirical studies in the literature that were conducted based on the three stages of pre-, while- and post-writing. Naghdipour (2017) is the basis for this comparison, for reasons discussed below.

As pointed out earlier, not much has been done to operationalize the construct of FAoW in EFL contexts (Lee & Coniam, 2013). The literature has not documented what practices the construct of FAoW actually entails; however, some studies have implemented FA in writing classrooms, particularly across the three stages of pre-writing, writing and post-writing (e.g. Lee, 2011b; Lee & Coniam, 2013; Mak & Lee, 2014; Naghdipour, 2017).

The development of the FAoW instrument based on feed-up, feedback and feed-forward practices is probably consistent with Naghdipour’s (2017) case study with a three-session modular instruction. Although Naghdipour (2017) did not aim to develop an instrument to operationalize the construct of FAoW, he used an FA attitude questionnaire based on Lee (2011a), which revealed undergraduates’ positive attitudes towards the three-session modular instruction of writing based on Black and Wiliam’s (2009) five FA strategies. His empirical research is similar to the present research in many ways.

His pre-writing stage of instructional tasks based on model essays, brainstorming and pooling of ideas (see Naghdipour & Koç, 2015, for an overview) is in accordance with many of the FAoW practices in the feed-up stage under the construct of clarifying criteria. In the writing stage and over the course of the semester, his research participants wrote three typed drafts for each essay (descriptive, narrative, expository and argumentative); the second draft responded to peer assessment, and the third incorporated revisions in response to teacher assessment. At this stage in his research, some of the FAoW practices under the second and third constructs of our FAoW framework, such as multiple drafting and process writing, and peer and self-assessment, were utilized. The FAoW instrument in this research contains additional practices in the second stage of FA writing programs that were not employed in Naghdipour’s (2017) intervention. These encompassed keeping an error log, employing applications and software, and various forms of feedback on students’ writing progress, such as graded feedback, focused and unfocused feedback, direct and indirect feedback, and descriptive feedback. Whole-class feedback (termed overall feedback/feedback in plenum in our FAoW instrument), conferencing and discussion of the most common issues in students’ papers were the FA practices that Naghdipour employed in the post-writing stage of his intervention.

The construct of FAoW is inseparable from Wiliam’s (2001) teaching-learning-evaluation cycle and Ruiz-Primo and Furtak’s (2006) ESRU (Elicit question, Student response, Recognition by teacher, Use of information) model. In both models of FA, information is collected about students’ learning, compared to teachers’ expectations, and ultimately acted upon to move students towards those goals. What most FA models have in common is their cyclic nature, in which information from feedback is injected back into the instruction process, and the main goal is to close the gap between students’ current level and the expected goals. The construction of the FAoW instrument based on the aforementioned models in three stages probably corresponds with task representation, as refined by Wolfersberger (2013) in the construct of classroom-based writing assessment. He defines task representation as the “writer’s conceptualization of the requirements of the assessment task … a mental model of the finished written product” (p. 50). He asserts that it is a process in which writers take the necessary steps to create a final written product that meets the assessment criteria set prior to writing and the assessment feedback on which they acted. FAoW is similarly a construct which starts with setting criteria for success in the pre-writing stage, incorporates a set of feedback received through different sources during writing, and ultimately leads to achieving the expected goals autonomously by knowing the future direction for learning in the post-writing stage. This is a cyclic process with feedback as its central component, repeatable for every writing task, that can lead to autonomy in writing (Hattie & Timperley, 2007).

 

Limitations of the FAoW Instrument and Pedagogical Implications

FAoW is a broad construct and can include any classroom activity as long as it aims to improve future performance. The multidimensionality of FAoW, stemming from its encompassing both writing skills and FA practices, was an inevitable problem for the researchers, who aimed to develop the instrument so that each experience taps a single dimension. The researchers drew on both the feedback model and FA theory as theoretical foundations and sought to connect writing with FA, which made the task demanding.

The FAoW instrument in this study was developed as part of a Ph.D. project whose main aim was to investigate teachers’ implementation of FAoW practices in EFL classrooms, and it has been tested for its factor structure. Moreover, the instrument has the potential to be utilized by other teachers in writing classrooms as an operationalized model that can contribute to the uptake of FA. The developed instrument can serve as a guideline for teachers in both EFL and international contexts on how to practice FA. The results of this study can additionally raise the awareness of teachers who are not practicing writing assessment formatively and are mainly concerned with showing learners their current state of learning rather than future goals; the developed instrument can pave the way for them to implement FA in their writing classrooms using the strategies before, during and after the writing stage. Finally, the FAoW instrument is a collection of FA practices which can be utilized by writing program developers in addition to teachers and students.

Assessment Reform Group. (2002). Assessment for learning: 10 principles. Research based principles to guide classroom practice. London, UK: Author. Available from http://languagetesting.info/features/afl/4031afl principles.pdf

Bennett, R. E. (2011). Formative assessment: A critical review. Assessment in Education: Principles, Policy & Practice, 18(1), 5-25.

Black, P., Harrison, C., Lee, C., Marshall, B., & Wiliam, D. (2003). Assessment for learning: Putting it into practice. Maidenhead: Open University Press.

Black, P., McCormick, R., James, M., & Pedder, D. (2006). Learning how to learn and assessment for learning: A theoretical inquiry. Research Papers in Education, 21, 119-132.

Black, P., & Wiliam, D. (1998a). Assessment and classroom learning. Assessment in Education, 5(1), 7-73.

Black, P., & Wiliam, D. (1998b). Inside the black box: Raising standards through classroom assessment. Phi Delta Kappan, 80, 139-148.

Black, P., & Wiliam, D. (1999). Assessment for learning: Beyond the black box. Cambridge: University of Cambridge, Assessment Reform Group.

Black, P. & Wiliam, D. (2006). Assessment for learning in the classroom. In J. Gardner (Ed.), Assessment and learning (pp. 9-25). London: Sage.

Black, P., & Wiliam, D. (2009). Developing the theory of formative assessment. Educational Assessment, Evaluation and Accountability, 21(1), 5-31.

Bremner, A. L. (2014). Teachers’ knowledge of formative assessment: Initial instrument validation study (Doctoral dissertation, Boise State University).

Brookhart, S. M. (2001). Successful students' formative and summative uses of assessment information. Assessment in Education: Principles, Policy & Practice, 8(2), 153-169.

Burner, T. (2015). Formative assessment of writing in English as a foreign language. Scandinavian Journal of Educational Research, 1-23.

Carless, D. (2007). Conceptualizing pre‐emptive formative assessment. Assessment in Education: Principles, Policy & Practice, 14(2), 171-184.

Clarke, S., Timperley, H., & Hattie, J. (2003). Unlocking Formative Assessment – Practical strategies for enhancing pupils' learning in the primary & intermediate classroom (NZ ed.). Auckland, New Zealand: Hodder Moa Beckett.

Elahinia, H. (2004). Assessment of writing through portfolios and achievement tests. Unpublished Master thesis, Teacher Training University, Iran.

ETS (Educational Testing Service). (2010). About the KLT program. Princeton, NJ: Author. http://www.ets.org/Media/Campaign/12652/about.html (accessed December 17, 2010).

Ghoorchaei, B., Tavakoli, M., & Nejad Ansari, D. (2010). The impact of portfolio assessment on Iranian EFL students’ essay writing: A process-oriented approach. GEMA Online Journal of Language Studies, 10(3), 35-51.

Hase, H. D., & Goldberg, L. G. (1967). Comparative validity of different strategies of constructing personality inventory scales. Psychological Bulletin, 67, 231-248.

Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81-112.

James, M., & Pedder, D. (2006). Beyond method: Assessment and learning practices and values. The Curriculum Journal, 17(2), 109-138.

Javaherbakhsh, M. R. (2010). The impact of self-assessment on Iranian EFL learners’ writing skill. English Language Teaching, 3(2), 213-218.

Keen, J. (2005). Assessment for writing development: Trainee English teachers’ understanding of formative assessment. Teacher Development, 9(2), 237-254.

Ketabi, S. (2015). Different methods of assessing writing among EFL teachers in Iran. International Journal of Research Studies in Language Learning, 5(2), 3-15.

Lee, I. (2003). L2 writing teachers’ perspectives, practices and problems regarding error feedback. Assessing Writing, 8(3), 216-237.

Lee, I. (2007). Assessment for learning: integrating assessment, teaching, and learning in the ESL/EFL writing classroom. The Canadian Modern Language Review, 64(1), 199-213.

Lee, I. (2011a). Bringing innovation to EFL writing through a focus on assessment for learning, Innovation in Language Learning and Teaching, 5(1), 19-33.

Lee, I. (2011b). Formative Assessment in EFL Writing: An Exploratory Case Study, Changing English: Studies in Culture and Education, 18(1), 99-111.

Lee, I., & Coniam, D. (2013). Introducing assessment for learning for EFL writing in an assessment of learning examination-driven system in Hong Kong. Journal of Second Language Writing, 22(1), 34-50.

Mak, P., & Lee, I. (2014). Implementing assessment for learning in L2 writing: An activity theory perspective. System, 47, 73-87.

Moradan, A., & Hedayati, N. (2011). The impact of portfolios and conferencing on Iranian EFL writing skill. Journal of English Language Teaching and Learning, 8, 115-141.

Mosmery, P., & Barzegar, R. (2015). The effects of using peer, self, and teacher-assessment on Iranian EFL learners’ writing ability at three levels of task complexity. International Journal of Research Studies in Language Learning, 4(4), 15-27.

Naghdipour, B., & Koç, S. (2015). The evaluation of a teaching intervention in Iranian EFL writing. The Asia-Pacific Education Researcher, 24, 389-398.

Naghdipour, B. (2016). English writing instruction in Iran: Implications for second language writing curriculum and pedagogy. Journal of Second Language Writing, 32, 81-87.

Naghdipour, B. (2017). Incorporating formative assessment in Iranian EFL writing: A case study. The Curriculum Journal, 28 (2), 283-299.

Nezakatgoo, B. (2005). The effects of writing and portfolio on final examination scores and mastering mechanics of writing of EFL students. Unpublished Master thesis, Allameh Tabataba’i University, Tehran, Iran.

Ruiz-Primo, M. A., & Furtak, E. M. (2006). Informal formative assessment and scientific inquiry: Exploring teachers’ practices and student learning. Educational Assessment, 11(3-4), 237-263.

Sharifi, A., & Hassaskhah, J. (2011). The role of portfolios assessment and reflection on process writing. Asian EFL Journal, 13(1), 192-229.

Shirley, M. L. (2009). A model of formative assessment practice in secondary science classrooms using an audience response system, (Unpublished doctoral thesis). College of Education and Human Ecology, the Ohio State University. U.S.

Tavakoli, E., Amirian, M. R., & Burner, T. (under review). Factor structure of formative assessment of writing: Confirmatory study on Iranian EFL students. Manuscript submitted for publication.

Torrance, H., & Pryor, J. (1998). Investigating formative assessment: Teaching, learning and assessment in the classroom. Philadelphia, PA: Open University Press.

Warwick, P., Shaw, S., & Johnson, M. (2015). Assessment for learning in international contexts: Exploring shared and divergent dimensions in teacher values and practices. The Curriculum Journal, 26(1), 39-69.

Weigle, S. C. (2007). Teaching writing teachers about assessment. Journal of Second Language Writing, 16(3), 194-209.

Wiliam, D. (2001). Level best? Levels of attainment in national curriculum assessment. London: Association of Teachers and Lecturers.

Wiliam, D. (2007b). Content then process: Teacher learning communities in the service of formative assessment. Solution Tree Press.

Wiliam, D., & Thompson, M. (2007). Integrating assessment with instruction: What will it take to make it work? In C. A. Dwyer (Ed.), The future of assessment: Shaping teaching and learning (pp. 53-82). Mahwah, NJ: Erlbaum.

Wolfersberger, M. (2013). Refining the construct of classroom-based writing-from-readings assessment: The role of task representation. Language Assessment Quarterly, 10(1), 49-72.

Yorke, M. (2003). Formative assessment in higher education: Moves towards theory and enhancement of pedagogic practice. Higher Education, 45, 477-501.