Introduction
Deciding on priorities in teaching L2 writing
is a pedagogical necessity recognized by
many TEFL (Teaching English as a Foreign
Language) experts (e.g. Collins, 1998;
Crossley & McNamara, 2011; DeVillez,
2003; Ferris, 2004). Numerous studies have
been concerned with identification (e.g.
Crossley & McNamara, 2011) or re-evaluation (Weston, Crossley, McCarthy, &
McNamara, 2011) of the factors affecting
L2 learners’ writing performance. Various
factors such as lexical proficiency
(Nakamaru, 2011), syntactic proficiency
(Truscott, 1999), cohesion (McNamara,
Louwerse, McCarthy, & Graesser, 2010),
coherence (McNamara, Kintsch, Butler-Songer, & Kintsch, 1996), cognitive
mechanisms (Bourke & Adams, 2010), and
higher-order processes (Sparks & Ganschow, 2001) have been considered in
debates over the primary predictors of
success in achieving L2 writing proficiency.
In addition, another line of research in applied linguistics has grown over the last decade, focusing on the cognitive aspects of language perception and production. Cognitive science is considered a legitimate interdisciplinary paradigm that can cover and re-examine many research problems in applied linguistics and TEFL (Segalowitz, 2010). The study of
intelligence is a prolific research paradigm
in cognitive psychology. One factor that seems to be of great importance for writing ability is narrative intelligence (Pishghadam, Baghaei, Shams, & Shamsaee, 2011). As the name implies, narrative intelligence concerns individuals' narrative capabilities, which may contribute to effective writing.
Another type of intelligence which seems to
be relevant to writing is verbal intelligence.
It is defined as the ability to express what
one has in mind. There is evidence that
verbal intelligence has a meaningful
relationship with academic achievement
(Fahim & Pishghadam, 2007), writing
achievement (Abiodun & Folaranmi, 2007),
and writing fluency (Pishghadam, 2009).
All in all, two dimensions bear on the nature of writing ability: linguistic and cognitive. With this in mind, this paper attempts to connect the literature on linguistic features of L2 writing with studies concerned with higher-order processes or intelligence factors in language production. The linguistic features under investigation are knowledge of grammar and breadth and depth of vocabulary; the higher-order capacities included in the study are verbal and narrative intelligence.
A set of discriminant function analyses
(DFA) has been used to explore the relative
validity of the above five variables and the
five sub-abilities of narrative intelligence for
classifying Iranian English learners’ writing
performance. The research questions of the
study are:
1. Which of the language or intelligence
factors can classify the L2 writers into
low and high groups more significantly?
2. Which of the sub-abilities of narrative
intelligence can classify the L2 writers
into low and high groups more
significantly?
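For readers unfamiliar with DFA as a classification tool, the following minimal sketch (in Python, with the scikit-learn library) illustrates the kind of single-predictor discriminant classification used to address these questions. The file name scores.csv and the column names are hypothetical placeholders; the study's own analyses were run in SPSS.

import pandas as pd
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical data file: one row per participant, one column per test score,
# plus a "writing_group" column ("low"/"high") derived from the L2 writing scores.
df = pd.read_csv("scores.csv")

# A single-predictor DFA, e.g. using narrative intelligence to classify writers.
X = df[["narrative_intelligence"]]
y = df["writing_group"]
lda = LinearDiscriminantAnalysis().fit(X, y)

# Proportion of cases assigned to their original group: the "classification percentage".
print("Classification accuracy:", lda.score(X, y))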
The review of the related literature is meant
to provide a brief introduction to the
sequence of studies and insights that led to
the present research. A combination of theoretical frameworks and empirical findings is presented to set the ground for analyzing the data and discussing the results.
Theoretical background
A purposeful review of the literature
accumulated in writing research can unveil
the evidence pointing to the possible role of
intelligences and their interaction with the cognitive mechanisms involved in L2 writing. Concepts such as syntactic and
lexical processing, coherence, and
organizational skills are frequently discussed
in L2 writing research. It can be argued that
these concepts overlap with cognitive
abilities which are labeled as multiple
intelligences. A deeper look into the nature
of language-related intelligences and
cognitive processes involved in L2 writing
can shed more light on the possible
interactions between the two. It is fair to
assume that one’s ability to express oneself
through language (verbal intelligence) can
have a role in managing and directing
language-related cognitive abilities. The
most recent type of intelligence is dubbed
narrative intelligence. It is the ability to
perceive and reproduce narrative patterns
(Randall, 1999). If a broad interpretation of
narrative is adopted (see Bruner, 1987), it is
no longer limited to stories and recounting
but will cover a wide range of organizational
skills in the human mind. Many scholars regard narrative intelligence as a very important cognitive ability that governs many mental processes (Bers, 2002; Bruner, 1987, 1991; 1998). Some even consider it the main evolutionary advantage of humans over animals (e.g. Dautenhahn, 2002).
Given the prominent place assumed for
narrative intelligence in human mental
activities, it is hard to resist the idea that
narrative intelligence might have a
meaningful role in developing one’s writing
ability. The possible relationships between
intelligence and language skills can be
reviewed under two major headings: micro and macro factors in L2 writing. Micro factors refer to writing components that are usually learned, produced, and assessed in isolation from other parts of the text, whereas macro factors refer to more general abilities that govern L2 writing in a scope that goes well beyond words and sentences and is manifested throughout the whole text.
Micro factors in L2 writing
Two main micro factors frequently referred to in the L2 writing literature are knowledge of vocabulary and knowledge of grammar. Lexical and syntactic processing have long been considered two cornerstones of language proficiency, and their mastery is often believed to play a vital role in language production. This is evident from the bulk of
studies on the role of vocabulary and
grammar in L2 writing (e.g. Ferris, 2004,
2010; Nakamaru, 2011; Truscott, 1996,
1999). The controversy arises when it comes
to prioritization. While some scholars
emphasize the primary importance of lexical
knowledge in language learning (de la
Fuente, 2002; Ellis, 1995), another group
considers grammatical range and accuracy
as the best predictor of successful L2
production (see Kenkel & Vates, 2009).
According to Crossley and McNamara
(2009), there are two ways to study the role
of lexical items in L2 writing. Most of the
studies only focus on surface indexes such
as lexical density, accuracy, and diversity
(e.g. Polio, 2001) while there is a smaller
group of researchers who look into deeper
measures of L2 lexical proficiency such as
lexical networks (e.g. Schmitt, 1998). Polio
(2001) studied lexical diversity as one of the
measures of breadth of vocabulary. The
disadvantage of such studies is that they do
not consider depth of vocabulary
knowledge. Breadth of vocabulary is concerned with how many words a learner knows, whereas depth of vocabulary is concerned with how well a learner knows a word. The latter is usually examined through collocation tests (e.g. Schmitt, Schmitt, & Clapham, 2001). Some
of the other measures of depth of vocabulary
which are mostly based on connectionist
models of lexical acquisition are conceptual
knowledge, sense relations, word
associations, and word correctness
(Haastrup & Henriksen, 2000).
Macro factors in L2 writing
Coherence and cohesion are two central themes in evaluations of L2 writing (see Crossley & McNamara, 2010). According to
many scholars (e.g. Collins, 1998; DeVillez,
2003) both cohesion and coherence are
significantly correlated with writing quality.
However, McNamara, Crossley, and
McCarthy (2010) found no evidence that
cohesion cues are positively related to
writing quality. In a later study, Crossley
and McNamara (2010) investigated the role
of coherence and cohesion in the evaluations
of writing quality; they found that expert
raters evaluate coherence based on the
absence of cohesive ties not their presence.
As they emphasize, this finding makes important contributions to our understanding of the dynamics of coherence and how they are realized in a text.
Another important macro factor in L2
writing quality is learners’ higher-order
processing. This has been reflected in many
studies describing the ways in which L2
learners’ L1 can influence their written
production (e.g. Connor, 1984; Jarvis, 2010;
Reid, 1992). A group of these cross-linguistic studies focus on higher-order
processes involved in L2 writing including
planning and text evaluation (Cumming,
1990). Crossley and McNamara (2011)
believe that these high-order processes are
strongly linked to one’s L1 and must be
incorporated into any explanation of L2
writing proficiency. Stallard (1974) believed
that successful writers are not overwhelmed
by syntactic and lexical features of L2 and
stay focused on the general organization of
their writing. The study of cognitive aspects
of writing covers one aspect of the role of
higher-order processes in language
production. Hall (1990) found evidence for
the existence of the same cognitive
behaviors in L1 and L2. Kobayashi and
Rinnert’s (2008) findings show that non-linguistic cognitive factors play an important
role in writing and transfer from L1 to L2 or
even vice versa. The study of cognitive processes involved in L2 writing gained greater momentum as the process-oriented paradigm in writing research flourished (see Pennington & So, 1993).
Intelligence and organizational writing
skills
To organize written discourse properly, L2
writers must rely on their cognitive
capabilities. Multiple intelligences (Gardner,
1983) cover various aspects of cognitive
processing. As the most recent type of
intelligence proposed by Randall (1999),
narrative intelligence is defined as the ability
to perceive and reproduce narrative
constructions and consists of five sub-abilities, namely emplotment, characterization, narration, genre-ation, and
thematization. Randall argues that narrative
intelligence is a complex cognitive capacity
which includes elements from interpersonal,
intrapersonal, and verbal intelligence. The
interpersonal aspect of narrative intelligence
is concerned with communicative skills and
is related to genre-ation and thematization;
the intrapersonal aspect deals with the
ability to express one’s thoughts and
feelings and is manifested via narration and
characterization; the verbal aspect deals with
the linguistic articulation of concepts and
their relationships and is mostly reflected in
the dynamics of emplotment.
Verbal intelligence was introduced long
before narrative intelligence (see Wechsler,
1981) and has an independent measurement
scale which examines one’s ability to
explain the meaning of lexical items (see
Wechsler, 1997). Although verbal
intelligence is manifested via linguistic
performance, its nature goes beyond
measures of vocabulary. While breadth and depth of vocabulary examine one's receptive knowledge of the target words, verbal intelligence reflects one's productive knowledge when dealing with various
concepts. The productive nature of verbal
intelligence makes it relevant to the
cognitive processes involved in language
production. The place of verbal intelligence in language learning has recently received more attention from scholars. For example, Fahim and Pishghadam (2007) found a significant relationship between verbal intelligence and the academic
achievement of university students majoring
in English; L2 writing was one of the
components of academic achievement in
their study. Abiodun and Folaranmi (2007)
found that verbal intelligence has a
meaningful effect on L2 learners’ writing
performance. Pishghadam (2009) found
causal relationships between verbal
intelligence and L2 writing ability. These
results show that the place of verbal
intelligence in L2 writing should not be
overlooked.
Classifiers of L2 writers
Crossley and McNamara (2010) used DFA
to study the classifying effect of cohesion
indices versus complexity indices for low
quality and high quality L2 writings. Their
results show that cohesion indices cannot
classify the writers into low and high groups, whereas complexity indices do so well
above chance. In other words, lexical
diversity, word frequency, and syntactic
complexity of the produced language can
predict the quality of the writings, as
perceived by expert raters and reflected
through writing scores, better than cohesion
scores. In a later study (see Crossley &
McNamara, 2011), they delved more deeply
into the nature of the raters’ understanding
of coherence and the rubrics based on which
they operationalize it. They found out that
raters’ perception of coherence is
considerably different from many intuitive
notions of it. This was reflected in the
significant relationship found between the
absence of cohesive devices and a more
coherent representation of the text in the
raters’ mind. They argued that as advanced
readers with high topical and background
knowledge, the raters develop a more
coherent mental representation of the text
when it includes fewer cohesive devices such
as word overlap, resolved anaphors, causal
cohesion, and connectives. This is because
advanced readers are inclined to make
inferences that connect different parts of the
text to each other as well as to bits of their
background knowledge; therefore, the
overuse of explicit cohesive connectors does
not contribute to the coherence of their
mental image of the text.
Another discriminant study was conducted
by McNamara, Crossley, and McCarthy
(2010) to explore the linguistic differences
between L2 writings rated as high or low by
experts. They examined four linguistic
indices: 1) cohesion; 2) syntactic
complexity; 3) diversity of words; and 4)
characteristics of words. According to the DFA results, the three most predictive
indices of writing quality are syntactic
complexity, lexical diversity, and word
frequency. None of the 26 validated indices
of cohesion used in this study showed any
meaningful difference between low and high
ability L2 writers. Those writings rated by
the experts to be of higher quality were more
difficult to read and used sophisticated
language.
Method
Participants
Participants of the present study comprised
346 Iranian learners of English as a foreign
language from four cities of Iran: Mashhad,
Kashan, Lahijan, and Tehran. The age of the
participants ranged from 17 to 33. The sample included 267 university students majoring in English Language and Literature, Engineering, and Basic Sciences; the rest were high school students. Of the total sample, 201 participants were female and 145 were male. All the participants were learners of English attending private English institutes (223 participants) or taking university ESP courses (123 participants).
Each participant attended six test sessions. All the participants were informed about the general objectives of the project, gave their consent to participate in the study, and were assured of the confidentiality of any personal information they revealed during the study. It should be mentioned that sampling was based on accessibility and that major was not controlled.
Instrumentation
The measures utilized in this study consist
of scales for measuring narrative
intelligence, verbal intelligence, knowledge
of grammar, depth and breadth of
knowledge of vocabulary, and writing skill.
Pishghadam et al. (2011) developed and validated (using Rasch analysis) a scale of narrative intelligence. This scale, which comprises 23 items assessing performance on the dynamics of narrative intelligence (Randall, 1999), was employed to measure participants' narrative intelligence. The scale has five subsections: emplotment, characterization, narration, genre-ation, and thematization. The reliability (internal consistency) of this measure is 0.72 (Pishghadam et al., 2011), and the inter-rater reliability of the scale was 0.83. Cronbach's alpha for this instrument in the present study was 0.85.
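Cronbach's alpha, reported for most of the instruments in this section, can be computed directly from an item-score matrix. The sketch below (Python/NumPy) shows the standard formula; the variable items is a hypothetical participants-by-items array, not the study's data.

import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    # items: 2-D array, rows = participants, columns = scale items
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)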
To measure verbal intelligence of the
subjects, the verbal scale of Wechsler’s
Adult Intelligence Scale (III) (1981) was
used. The Farsi version of the WAIS
Vocabulary subsection used in the present
study consists of 40 words. This translated
version was developed by Azmoon Padid
institute (1993) in Tehran, Iran. Cronbach's alpha for the vocabulary subsection in the present study was 0.68. The reliability coefficient (internal consistency) for the Verbal IQ is .97, and the vocabulary subtest correlates highly (.91-.95) with the Verbal scale of the WAIS-III. The concurrent validity of the WAIS-III was established on the basis of high correlations with other valid intelligence scales, ranging from .78 to .89 (Silva, 2008).
The structure module of the TOEFL PBT (ETS, 2005b) was used to measure participants' knowledge of English grammar. Since the validity of this scale has already been established, the researchers found it appropriate for use in the present study. The module contains 40 items. Fifteen items present a sentence with one part replaced by a blank; in the remaining 25 items, each sentence has four underlined words or phrases, and the participants were required to identify the incorrect part and mark it on the answer sheet. Cronbach's alpha for this instrument in the present study was 0.80.
To measure the depth of participants' vocabulary knowledge, the Depth of Vocabulary Knowledge (DVK) test was used. The test contains 40 items. Each item consists of a stimulus word (an adjective) and eight choices. In each item, the first four choices (A-D) are in one box and the second four choices (E-H) are in another box. Among the choices in the left box, one to three choices can be synonymous with the stimulus, whereas among the four choices in the right box, one to three words can co-occur with the stimulus (collocations). The overall reliability of this test is .91 (Qian, 1999); Cronbach's alpha for this study was 0.76.
The second version of the Vocabulary Levels Test (VLT) was used to measure the breadth of participants' vocabulary knowledge. The validity of the five sections of this test, reported as Rasch ability estimates, is as follows: 42.5 (2000), 45.9 (3000), 51.0 (5000), 55.2 (Academic), and 61.7 (10000).
It measures the meaning of content words by having test-takers match definitions with the choices. For every three definitions, six choices are available, but each definition should be matched with only one choice.
The measure is composed of five frequency
levels (2000, 3000, 5000, academic, 10000)
and thus is called the levels test. The first
two levels (2000 and 3000) are composed of
high frequency words. The 5000 level is
considered a boundary level and the next
two levels consist of words that generally
appear in university texts (academic) and
low frequency words (10000). The reliability of the different levels of this test was reported as follows: 2000 (.92), 3000 (.92), 5000 (.92), academic (.92), and 10000 (.96) (Schmitt et al., 2001). Cronbach's alpha for this instrument in the present study was 0.81. Schmitt et al. (2001)
estimated the validity of the Levels Test by
“establishing whether learners do better on
the higher frequency sections than on the
lower frequency ones” (p. 67). They found that, out of a maximum of 30, the means for the frequency levels were as follows: 25.29 (SD 5.80) for the 2000 level, 21.39 (7.17) for the 3000 level, 18.66 (7.79) for the 5000 level, and 9.34 (7.01) for the 10,000 level. According to them, analysis of variance plus Scheffé tests showed that the differences were all statistically significant (p < .001).
The validity of the Academic level section
needs more explanation. The mean score of
this section in the profile research done by
Schmitt et al. (2001) was found to be 22.65
which apparently places it somewhere
between the 2000 level and 3000 level.
However, they argue that the words in this
section are different from the other levels,
and therefore should not be included in the
profile comparison. The validity of this
section is then justified by analyzing the
facility values of individual items and Rasch
item difficulty measures. According to
Schmitt et al. (2001), “the figures suggest
that the words in the academic level fit in a
broad range between the 2000 level and the
10 000 level.” (p. 68).
To measure the participants' writing ability, the researchers used an original specimen of the writing module of the IELTS exam (ETS, 2005a). Half-band scores were
included. Task 2 of the General Training
Writing Module was assessed based on 1)
coherence and cohesion; 2) lexical resource;
and 3) grammatical range and accuracy. The
task requires the candidates to formulate and
develop a position in relation to a given
prompt in the form of a question or
statement. The inter-rater reliability of the
scale was 0.87.
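The inter-rater reliabilities reported here (0.83 for the NIS and 0.87 for the writing module) are not tied to a named statistic in the text; a Pearson correlation between the two raters' scores is one common way to obtain such a coefficient, sketched below with illustrative (not actual) band scores.

import numpy as np
from scipy.stats import pearsonr

rater1 = np.array([5.0, 6.5, 4.0, 7.0, 5.5])  # illustrative IELTS band scores, rater 1
rater2 = np.array([5.5, 6.0, 4.5, 7.0, 5.0])  # illustrative IELTS band scores, rater 2
r, p = pearsonr(rater1, rater2)
print(f"Inter-rater correlation: r = {r:.2f}, p = {p:.3f}")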
Procedure
The data collection phase comprised the
administration of six tests; this phase started
in July, 2010 and ended in May, 2011.
During this period, the samples were gathered across the four cities used as the sampling pool. Apart from the narrative intelligence test, which was administered via a movie session with participants' voices recorded, the other five tests were given in a traditional paper-and-pencil setting. In the first phase of the study, the
participants took the writing test and their
performance was rated based on IELTS
scoring criteria. This produced a set of
writing scores on a scale of 1 to 9 with half-band scores. Then, the test of grammar was
taken by participants and each person
received a score out of 40. In the next step, the depth of vocabulary test was administered, and the participants were asked to mark four choices altogether for each item; this test produced a set of scores ranging from 0 to 100. Then the breadth of vocabulary test was given to the participants, whose scores on this test were reported on a scale of 0 to 160. After
that, the Verbal Intelligence Test was administered, during which each participant was presented with one word at a time and asked to explain its meaning orally. The examiner rated each response 0, 1, or 2, depending on how well the participant defined the word; the scores can therefore range from 0 to 80 (Wechsler, 1997). The last phase was the administration
of the narrative intelligence test. The
participants watched the first 10 minutes of a movie (Defiance) and were then asked to recount the story. They were also asked to tell the story of their first day of elementary school. The two narratives produced by each participant were then rated by two raters using the NIS (Narrative Intelligence Scale). The average scores for the five sub-abilities of narrative intelligence across the two narrative tasks were taken as the participants' narrative intelligence scores.
First, the internal reliability of the tests used in the study was calculated using Cronbach's alpha. After ensuring the reliability of the scores, all the data were imported into SPSS 18.0 and linked to AMOS 16.0 to be analyzed through DFA.
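The data set-up implied by this procedure can be sketched as follows: six score columns per participant, with the L2 writing scores converted into the low/high grouping variable used in the DFAs. The file and column names are hypothetical, and the median split shown is an assumption; the paper reports 173 cases per group but does not state the cut-off rule.

import pandas as pd

# Hypothetical file: writing, grammar, depth_vocab, breadth_vocab,
# verbal_iq, and narrative_iq scores, one row per participant.
df = pd.read_csv("scores.csv")

# Assumed median split on the IELTS writing band scores (not stated in the paper).
df["writing_group"] = (df["writing"] >= df["writing"].median()).map({True: "high", False: "low"})
print(df["writing_group"].value_counts())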
Results
In the present study, six sets of data were
collected through the administration of the
tests. The descriptive statistics of the scores obtained by all 346 participants on these tests are presented in Table 1.
The standard deviations show that “breadth
of vocabulary” scores have the highest
diversity, whereas verbal intelligence scores are the most homogeneous. In general, the macro factors, namely verbal and narrative intelligence, show less deviation from the mean than the micro factors do.
The widest range is found in “breadth of
vocabulary” scores while the narrowest
range is associated with verbal intelligence.
Breadth of vocabulary has the highest
standard error of measurement.
Classifying L2 writers based on language
and intelligence factors
To answer the first research question, a set of DFAs was run with L2 writing ability as the grouping variable and the language and intelligence factors as model predictors. The statistics in Table 2 reflect the viability of running DFA for analyzing the classifying validity of language and intelligence factors.
Box's M is non-significant for all predictors except verbal intelligence; for those predictors, the null hypothesis of equal population covariance matrices is not rejected. In other words, there is no significant difference between the covariances of the model predictors across the low and high groups, which ensures the validity of the comparisons made between the statistics of the two groups.
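Box's M tests whether the group covariance matrices can be treated as equal, which is the assumption checked above before interpreting the DFAs. The sketch below implements the textbook chi-square approximation in Python/NumPy; it is a general multivariate version (for the single-predictor DFAs reported here, each "covariance matrix" is simply a variance), and the function name and inputs are illustrative.

import numpy as np
from scipy.stats import chi2

def box_m(groups):
    # groups: list of 2-D arrays (cases x variables), one array per group
    k = len(groups)
    p = groups[0].shape[1]
    ns = np.array([g.shape[0] for g in groups])
    N = ns.sum()
    covs = [np.atleast_2d(np.cov(g, rowvar=False)) for g in groups]
    pooled = sum((n - 1) * S for n, S in zip(ns, covs)) / (N - k)
    M = (N - k) * np.log(np.linalg.det(pooled)) - sum(
        (n - 1) * np.log(np.linalg.det(S)) for n, S in zip(ns, covs))
    c = ((2 * p**2 + 3 * p - 1) / (6 * (p + 1) * (k - 1))) * (
        np.sum(1 / (ns - 1)) - 1 / (N - k))
    stat = M * (1 - c)                   # approximately chi-square distributed
    df = p * (p + 1) * (k - 1) / 2
    return stat, df, chi2.sf(stat, df)   # non-significant p: equal covariances not rejected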
The eigenvalues provide information about
the relative efficacy of each discriminant
function. As can be seen, the efficacy of depth of vocabulary and narrative intelligence as model predictors is considerably higher than that of the other measures.
This means that one’s depth of vocabulary
(collocational knowledge) and narrative
intelligence (discourse management ability)
can predict one’s membership in low or high
groups of L2 writing ability more efficiently
than one’s knowledge of grammar, breadth
of vocabulary (vocabulary size), and verbal
intelligence. In other words, predictions about learners' L2 writing ability are more likely to be correct when based on these two variables than when based on the other variables. The canonical correlation is the
most useful measure in the table, and it is
equivalent to Pearson's correlation between
the discriminant scores and the groups (low
and high). Here the results show that the correlation between the discriminant scores and membership in the low and high L2 writing groups is 0.37 for depth of vocabulary and 0.46 for narrative intelligence.
Therefore the predictions made based on
these two variables for L2 writing ability are
more realistic than the predictions made
based on the scores obtained for the other
three predictors.
Wilks' Lambda shows how well the model
predictors separate cases and assign them
into groups. This measure is actually equal
to the proportion of the variance in the
discriminant scores which cannot be
explained by differences among the groups.
Smaller values of Wilks' Lambda indicate
greater discriminatory power of the function.
The chi-square statistic tests the hypothesis
that the means of the functions listed are
equal across groups. As can be seen, the discriminatory power of two model predictors, depth of vocabulary and narrative intelligence, is greater (smaller Lambdas: 0.67 and 0.79, respectively) when predicting L2 writing ability than that of the other three predictors, grammar, breadth of vocabulary, and verbal intelligence (larger Lambdas: 0.95, 0.96, and 0.95, respectively). The main discriminant function coefficients are shown in Table 3.
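For a two-group, single-function DFA such as those summarized in Table 2, the eigenvalue, canonical correlation, Wilks' Lambda, and the chi-square test are simple transforms of one another. The sketch below shows the standard relationships; the eigenvalue used is illustrative, while N = 346 participants, one predictor, and two groups follow the design described above.

import numpy as np
from scipy.stats import chi2

eigenvalue = 0.25                                      # illustrative value, not the study's output
canonical_r = np.sqrt(eigenvalue / (1 + eigenvalue))   # correlation with group membership
wilks_lambda = 1 / (1 + eigenvalue)                    # proportion of variance NOT explained by groups

N, p, k = 346, 1, 2                                    # cases, predictors, groups
chi_sq = -(N - 1 - (p + k) / 2) * np.log(wilks_lambda) # Bartlett's chi-square approximation
df = p * (k - 1)
print(canonical_r, wilks_lambda, chi_sq, chi2.sf(chi_sq, df))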
The participants of the study were divided
into low and high ability groups based on
their L2 writing scores. The statistics
presented in Table 3 show how well the
scores obtained on language and intelligence
tests can classify the participants into low
and high ability groups. The frequencies
represent overlapping areas between original
and predicted L2 writing scores. The
number of cases in each of the Low and High groups is 173. The closer a predicted-Low frequency is to the total of the Low group, or a predicted-High frequency to the total of the High group, the higher the probability of making correct predictions about L2 writing ability. For example, the frequency
intelligence” means that a function
extrapolated from narrative intelligence scores can predict 137 out of 173 cases in
the low ability group correctly. That is to
say, 79.2% of the participants predicted as
having low L2 writing ability based on their
narrative intelligence overlap with the
participants who were put into that category based on their original L2 writing scores. In other words, predictions of membership in the low L2 writing ability group based on narrative intelligence are correct in 79.2 percent of cases. The same explanation applies to all
of the frequencies shown in Table 3.
However, none of these numbers and
percentages can show the total
discrimination power of the model
predictors. This is captured by the overall classification percentages.
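Concretely, the classification percentage is the overall hit rate of a cross-tabulation of original against predicted group membership. The sketch below uses the worked example for narrative intelligence (137 of 173 low-group cases predicted correctly, 79.2%); the high-group count of 107 is inferred from the reported overall rate of 70.5% and is therefore an assumption rather than a figure read from Table 3.

import numpy as np

#                      predicted low, predicted high
confusion = np.array([[137,  36],     # original low group (173 cases)
                      [ 66, 107]])    # original high group (107 correct is inferred, see above)
per_group_hit_rate = confusion.diagonal() / confusion.sum(axis=1)      # about [0.792, 0.618]
overall_classification = confusion.diagonal().sum() / confusion.sum()  # about 0.705
print(per_group_hit_rate, overall_classification)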
The numbers shown in the last column of
Table 3 indicate how well each of the
predictors can discriminate between high
and low L2 writing ability learners.
According to the results of DFA, the highest
classification coefficient is produced by
narrative intelligence, with 70.5 percent. This means that predictions about a participant's membership in the low or high L2 writing ability group are correct 70.5 percent of the time. The second best classifier is depth of vocabulary, with 64.5 percent. Verbal intelligence, breadth of vocabulary, and grammar have similar classifying validity: 59.0, 57.8, and 57.2 percent, respectively.
Classifying L2 writers based on sub-abilities
of narrative intelligence
To answer the second research question, another set of DFAs was run with L2 writing ability as the grouping variable and the five sub-abilities of narrative intelligence as model predictors. Having found narrative intelligence to be the best classifier of L2
writing ability, the researchers then explored
it further by looking into the classifying
coefficients of the sub-abilities to see which
of the dynamics defined for narrative
intelligence by Randall (1999) plays a
greater role in predicting low or high L2
writing ability. The statistics in Table 4
reflect the viability of running DFA for
analyzing the classifying validity of the sub-abilities of narrative intelligence.
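This follow-up analysis can be sketched as a loop of single-predictor DFAs, one per sub-ability, comparing their classification accuracy. As in the earlier sketch, the data file and column names are hypothetical; the actual analysis was carried out in SPSS.

import pandas as pd
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

df = pd.read_csv("scores.csv")  # hypothetical file: one row per participant
sub_abilities = ["emplotment", "characterization", "narration", "genre_ation", "thematization"]

for col in sub_abilities:
    lda = LinearDiscriminantAnalysis().fit(df[[col]], df["writing_group"])
    accuracy = lda.score(df[[col]], df["writing_group"])
    print(f"{col:16s} classification accuracy: {accuracy:.3f}")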
As can be seen in Table 4, among the sub-abilities of narrative intelligence,
emplotment has the highest relative efficacy
since it has the biggest eigenvalue (0.20);
however, the significance of 0.00 in Box’s
test shows that the validity of the
comparisons made between low and high
groups based on emplotment scores cannot
be ensured. The significance levels of the
Box’s test for the other four sub-abilities
show that there is no significant difference
between the covariances of model predictors
across low and high groups; therefore, the
validity of all the comparisons related to
them can be ensured. The minimum relative efficacy is reported for characterization, with an eigenvalue of 0.04. As already mentioned, smaller Wilks' Lambdas signal greater discriminatory power. On this index, after emplotment, thematization and genre-ation assign cases to the low and high groups better than characterization and narration do. The main DFA results for the sub-abilities of narrative intelligence are shown
in Table 5. The results show that the correlation between the discriminant scores and membership in the low and high L2 writing groups is 0.41 for emplotment, 0.37 for thematization, and 0.34 for genre-ation.
Therefore the predictions made based on
these three sub-abilities of narrative
intelligence are more realistic than the
predictions made based on the scores
obtained for the other two predictors.
As Table 5 shows, L2 writing ability group memberships predicted on the basis of emplotment scores (67.6 percent) are more valid than those based on the other sub-abilities of narrative intelligence. Genre-ation and thematization
have identical classifying validity; however,
they differ from each other in the number of
cases they can correctly assign to low and
high groups. In fact, thematization can
assign more correct cases to the low L2
writing ability group (126 > 118) while
genre-ation can predict high group
membership more efficiently than
thematization (115 > 107). The lowest classifying power for both the low and high L2 writing ability groups is reported for characterization (57.8%).
Discussion
The present study was launched to see how
well language and intelligence factors can
classify L2 writers. Language factors
include knowledge of grammar, depth of
vocabulary (collocational knowledge), and
breadth (size) of vocabulary. Intelligence
factors include verbal and narrative
intelligence. The secondary aim of the study
was to see how well each of the sub-abilities
of narrative intelligence can do the
classification.
According to the results, among the micro
factors, depth of vocabulary is the best
classifier of L2 writers. It can predict a
learner as a low or high ability L2 writer
better than grammar and breadth of
vocabulary. That is to say, in producing L2 writing, knowing word collocations is more important than vocabulary size or knowledge of grammar. This is in
accordance with the results of some previous
studies. For example, the results of the study
conducted by Crossley and McNamara
(2009) show that indexes of vocabulary
dealing with the depth of knowledge provide
a more meaningful insight into the lexical
aspects of L2 writing. Appropriate
collocations can have a positive effect on the
cohesion and coherence of writing which are
both important markers of writing quality.
This finding can be used to promote the idea that teaching word collocations in the L2 writing classroom is more important than expanding vocabulary size or focusing on grammar.
Among all the model predictors, narrative
intelligence has the highest classifying
validity when it comes to L2 writing ability.
It even surpasses depth of vocabulary. This
finding can be analyzed against the
background literature available on the role
of micro and macro factors in second
language writing. For example, our results
are in accordance with Hirose’s (2006)
emphasis on the role of mental macro
processes in determining the organizational
patterns in L2 writing. In the present study
verbal and narrative intelligence represent
macro organizational skills used in writing.
The fact that narrative intelligence is even a
better predictor than collocational
knowledge supports the view that favors the
superiority of macro skills over micro
components. The prominent role of narrative
intelligence in predicting L2 writing ability
was analyzed further by looking into the classifying power of its five dimensions.
Among the sub-abilities of narrative
intelligence, emplotment is the most valid
classifier of L2 writers. This finding has
useful implications for the study of factors
affecting L2 writing ability from another
perspective. To understand the nature of the
role played by emplotment in increasing the
quality of writing, one has to look into the
dynamics of this sub-ability as defined by
Randall (1999) and operationalized by
Pishghadam et al. (2011). Emplotment entails skills such as recognizing the difference between important and trivial points and maintaining a solid line of argument throughout the produced discourse. These are higher-order mental skills that contribute mostly to the organization of written discourse. There is a solid literature on the
place of higher-order processes in L2
writing (e.g. Bitchener & Knoch, 2010;
Murphy & Roca de Larios, 2010).
It is interesting to note that depth of vocabulary, a micro factor, is even stronger than verbal intelligence, a macro factor, in classifying L2 writers. One
reason for this may lie in the mode of
testing. The test used for measuring depth of
vocabulary is written while the test of verbal
intelligence was administered orally. In
addition, the assumption that a translated
version of the verbal intelligence test is as
reliable as the original test might be
problematic. Of course, it should be noted that verbal intelligence is still the third best classifier of L2 writers, after narrative intelligence and depth of vocabulary. This supports Randall's (1999) proposal, which emphasizes the proximity between narrative and verbal intelligence.
These findings have useful applications in
teaching English as a foreign or second
language. One of the controversial issues in
L2 writing research is the problem of
prioritization. Identifying and attending to
the highest teaching priorities in writing
courses have concerned many scholars
(Ferris, 2004; Nakamaru, 2011; Truscott,
1999). According to the results of the
present study, collocational knowledge and
narrative intelligence must receive the focal
attention from L2 writing teachers. Syllabi
designed based on this finding can help L2
learners improve the quality of their writings
more efficiently. Of course, paying attention
to the role of collocation in writing is not
new; however, combining this with a focus
on narrative competence is another matter
that can lead to a better framework for
managing writing classrooms. From another perspective, this finding can contribute to the testing of L2 writing and increase the construct validity of writing modules designed for language proficiency exams. The definitions provided by Randall
(1999) for the dynamics of narrative
intelligence can be used to reformulate and
revise the rating criteria of the writing tests.
Raters need clear instructions for examining L2 writings: whereas identifying the lexical and syntactic aspects of writing is an objective and traceable process for expert raters, unraveling the complexities of their understanding of notions such as coherence and writing fluency is a very demanding task. It can be argued that incorporating the
concept of narrative intelligence into the
rating frameworks used by the experts sheds
more light on the unexplored aspects of the
testing of L2 writing.
The results of this study generated a number
of questions which can be investigated in
further research. The impact on L2 learners' writing performance of a narrative intervention program embedded in an L2 writing course could be investigated through an experimental study. Since depth of
vocabulary and narrative intelligence were
found to be the best classifiers of L2 writers,
it would be useful to explore the relationship
of these two variables via qualitative
research. This study can also be extended by
using a more diverse set of writing topics, which may affect the interaction between narrative intelligence and language factors, especially collocational knowledge. Another
line of research to pursue can deal with the
rating processes and the possible role of the
dynamics of narrative intelligence for
developing the mental representations of
coherence in the mind of raters. Last but not
least, the neuroimaging techniques offered
by cognitive scientists can be used to
complement the instruments of the present
study with neural correlates of lexical
processing and narrative intelligence in L2
writing.