Second Language Acquisition & Testing in Europe



SLATE is mainly a research network promoting joint projects among its members.

Here is a list of current activities:



MERLIN - written learner productions in Czech, Italian and German

MERLIN is an EU-funded project (running 2012-2014) co-ordinated by the Technical University of Dresden (partners: University of Tübingen, University of Prague, European Academy of Bolzano, telc Language Testing, Berufsförderungsinstitut Oberösterreich).

In the MERLIN project, roughly 2,500 written learner productions stemming from standardized language tests in three target languages (Czech, Italian and German) were re-rated in order to relate them as directly as possible to the CEFR. They were then transcribed in an XML-based format and are now being annotated for a large number of linguistic features. MERLIN adopts a multi-dimensional approach to annotation, taking into account many characteristics of L2 learning that are relevant from an SLA research perspective, but also including operationalised CEFR level descriptions in order to analyse aspects of empirical scale validity. In this way, we hope to avoid the circularity pitfall of basing all analyses on ratings. We also conducted a needs analysis with potential future users, the results of which are now integrated in the annotation scheme, and inductive text analyses were carried out to further inform the annotation process. Manual annotations are supported by up-to-date NLP tools developed by our computational linguistics team in Tübingen (Prof. Detmar Meurers).

The complete dataset will be made freely available online, including test tasks and full texts. Texts will be searchable for the whole variety of features mentioned. Thus, apart from further illustrating the meaning of rated CEFR levels, MERLIN will offer an opportunity for cross-linguistic validation of CEFR scales, and it will enhance the automatic analysis of learner language.



Cross-linguistic influence and second language acquisition: corpus-based research
(Annekatrin Kaivapalu; Estonian Science Foundation 2010-2013)


Our project addresses the fundamental question of how cross-linguistic influence, especially first language (L1) influence, shapes second and foreign language (L2) acquisition and learning (SLA). The project examines these questions on the basis of learner Estonian and learner Finnish corpora.
At the first stage of the project, the first languages involved are Estonian, Finnish and Russian. Many studies of L1 influence ignore the possibility that it may differ in both quantity and nature at different levels of L2 proficiency. This relationship between L1 influence and L2 proficiency is the focus of the study. L2 proficiency is determined in terms of the CEFR levels. The project has the following goals:

  1. to examine morphological, morphosyntactical and lexical cross-linguistic influence and to find out differences between closely related and unrelated L1 influence on SLA;
  2. to specify the role of the first and the second language influence in the acquisition of a third language, while one of the source languages is related to the target language and the other one is not;
  3. to investigate relationships between the first language influence and second or foreign language proficiency;
  4. to find out factors interacting and competing with L1 influence in the acquisition and processing of L2;
  5. to investigate the processing strategies of the learners with closely related and unrelated L1.

Data and method

The project focuses on learners' written performances, which are compared with the performance of native speakers in written-language corpora. The psycholinguistic reality of cross-linguistic similarity is ascertained by tests, questionnaires and interviews. In order to examine learners' production processes, some of the data are gathered using the ScriptLog keystroke-logging programme combined with retrospective interviews and the think-aloud method.


The project started in 2010 and is still ongoing. One of the main aims in 2010 and 2011 was to diversify the existing L1 subcorpora of the Estonian Interlanguage Corpus (EIC) and to assess the texts according to the CEFR proficiency levels, but also to contribute to enlarging the Estonian subcorpus of the International Corpus of Learner Finnish (ICLFI). In addition, new subcorpora of the EIC were created: a subcorpus of Estonian proficiency level exams, a ScriptLog subcorpus and a subcorpus of Estonian learners of Russian. The texts of the Finnish and Russian subcorpora of the EIC, as well as the Estonian and Russian subcorpora of the ICLFI, have been rated by three raters according to the CEFR proficiency levels, and this process continues.

So far, we have carried out preliminary studies focusing on the case usage of Estonian and Russian learners of Finnish, as well as Finnish and Russian learners of Estonian, across the CEFR levels. The preliminary results suggest that the positive morphological influence of a closely related first language on the production of Finnish and Estonian is symmetrical and acts in both directions. The inflectional process is mainly based on convergent inflectional patterns; the use of divergent patterns grows across the proficiency levels. The results also indicate that the frequency of case forms is highest at level A2, while accuracy grows across the levels. Some mixing of the morphological formatives and paradigms of Estonian and Finnish is observed at levels A2 and B1. The influence of an unrelated L1 on two closely related L2s is quite similar in terms of accuracy, but not in terms of frequency.

Both Estonian learners of Finnish and Finnish learners of Estonian clearly prefer to use inflectional forms of maximum similarity, i.e. of phonological, morphological and semantic convergence between L1 and L2. Morphological convergence, the similarity of inflectional patterns, turns out to be a more critical factor in morphological cross-linguistic influence than semantic convergence, the similarity of word meaning.

The similarity of inflectional patterns is more salient for Estonian learners of Finnish than for Finnish learners of Estonian, whose perception of inflectional pattern similarity needs to be supported by phonological and semantic similarity.
In 2012, the main scientific aim is to explore the psycholinguistic reality of similarities and differences between closely related languages, the relationship between objective and perceived/assumed similarities and differences, as well as the relationship between the linguistic systems of related and unrelated languages.

For more information on the project, contact Annekatrin Kaivapalu.



KPG English corpus
(Prof. Bessie Dendrinos, Voula Gotsoulia; English Studies, University of Athens)

Our goal is to analyse learner production in English in terms of the lexical, semantic and grammatical properties that characterize the expression of meaning at different proficiency levels (as specified by the CEFR). Our work is thus closely related to that of the English Profile Project, but focuses on an extensive, systematic documentation of the linguistic profile of the Greek learner of English. Furthermore, it implements a model of interrelated linguistic features (lexical, semantic, syntactic and functional/genre-based features) as a basis for the development of level characterizations. Note that the collections of learner texts in the KPG English corpus have been systematically described in terms of bundles of features encoding essential genre information.
We are keen to join forces with the other projects in SLATE.

KPG English corpus:
The Greek Foreign Language Learner Profile Project:
The Task Analysis Project:


Norwegian profile
(Project leader: Dr. Cecilie Carlsen, Norsk språktest, University of Bergen/Folkeuniversitetet)

The project Norwegian Profile, as described below, has been finished. A book is available from Novus forlag:
C. Carlsen (Ed.) 2013. Norsk profil. Det europeiske rammeverket spesifisert for norsk. Et første steg [Norwegian Profile. The European Framework specified for Norwegian: a first step]. Oslo: Novus forlag.

Norwegian Profile is a collaborative project between language assessment specialists at Norsk språktest (University of Bergen/Folkeuniversitetet) and SLA-researchers at the University of Bergen.
The aim of the project is firstly to validate the linguistic scales of the CEFR against authentic learner data, and secondly to develop language-specific reference level descriptors (RLD) for Norwegian, as recommended by the Council of Europe in 2005.

The project is funded by Vox (Norwegian Agency for Lifelong Learning) and supported by the Ministry of Children, Equality and Social Inclusion.
The CEFR holds a strong position in Norway and is used as the basis for the curricula of Norwegian for adult immigrants. The standardized tests of Norwegian for adult immigrants developed at Norsk språktest are also based on the CEFR. It is therefore of great importance to specify and further exemplify the general linguistic scales of the CEFR for Norwegian. In the project, we look at different linguistic scales and the extent to which certain linguistic traits are mastered at different levels of proficiency, mainly from level A2 to B2.

Our common source of data is an electronic learner corpus (ASK) developed in 2003 as a cooperation between three parties: a) Norsk språktest, b) AKSIS and c) the Department of Linguistic, Literary and Aesthetic Studies (LLE) at the University of Bergen (UiB). The construction of the corpus was financed by the Research Council of Norway and led by Professor Kari Tenfjord at LLE, UiB (Tenfjord, Meurer and Hofland 2004). In a later project, led by Dr. Cecilie Carlsen, the majority of the corpus texts in ASK were reassessed on the CEFR proficiency scales by a group of 5-10 experienced raters as part of the ASKeladden project at UiB, also financed by the Research Council of Norway and led by Professor Tenfjord (Carlsen forthcoming). The corpus texts are automatically tagged for grammar and manually coded for errors, which allows us to investigate what learners can and cannot do at different levels of proficiency.

The planned outcome of the project is an anthology of articles focusing on different linguistic scales. The document will include an introduction describing the central documents upon which the project builds (the CEFR, the Manual for linking examinations to the CEFR, and the Guide for the production of RLD). It will also present the learner corpus ASK and the linking of corpus texts to the CEFR, which is of crucial importance to the project. The document will contain around 10 articles focusing on different linguistic CEFR scales validated against Norwegian learner data. Finally, it will contain Norwegian examples of the non-language-specific scales of the CEFR. The document will be a supplement to the Threshold Level for Norwegian (Svanes et al. 1987) and the Norwegian translation of the CEFR (Norwegian Directorate for Education and Training 2011).


ASKeladden homepage:
Carlsen, Cecilie. Forthcoming. Proficiency level – a fuzzy variable in computer learner corpora.
Norwegian Directorate for Education and Training 2011. Felles europeisk rammeverk for språk (Norwegian translation of the CEFR).
Svanes et al. 1987. Et terskelnivå for norsk (A Threshold Level for Norwegian). Oslo: Cappelen.
Tenfjord, K., Meurer, P. and Hofland, K. 2004. The ASK corpus – a language learner corpus of Norwegian as a second language. Proceedings of the 5th International Conference on Language Resources and Evaluation (LREC 2006), Genoa.


Linguistic features of the communicative CEFR-levels in written L2 French
(Fanny Forsberg Lundell and Inge Bartning, Stockholm University)

Our project aims at matching communicative competence as described in the CEFR scales with the development of linguistic proficiency. The main goal of the project is the investigation of linguistic features of L2 French (CEFR levels A1-C1) in terms of morpho-syntax, discourse organisation and the use of formulaic sequences. In earlier research on acquisitional orders in oral L2 French, the linguistic phenomena under investigation had already been found to be ‘criterial' for French. In the study published in the Eurosla monograph (Forsberg & Bartning 2010), written data were collected from 42 Swedish university students of L2 French, using two different tasks per student. The students were placed at all CEFR levels, with B1 the most represented. The first results show that morpho-syntactic accuracy measures yield significant differences between the CEFR levels up to B2. The use of lexical formulaic sequences also increases at higher CEFR levels, but with significant differences only between A2, B2 and C2.
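Level differences of this kind can be checked with a simple two-proportion z-test on accuracy counts (correct uses out of obligatory contexts). The sketch below uses invented counts and is only an illustration of the kind of comparison involved, not the project's actual analysis:

```python
from math import erf, sqrt

def two_proportion_z(correct1, total1, correct2, total2):
    """Two-tailed two-proportion z-test: do accuracy rates differ
    between two groups (e.g. learners at two CEFR levels)?"""
    p1, p2 = correct1 / total1, correct2 / total2
    pooled = (correct1 + correct2) / (total1 + total2)
    se = sqrt(pooled * (1 - pooled) * (1 / total1 + 1 / total2))
    z = (p1 - p2) / se
    # Standard normal CDF built from erf; two-tailed p-value.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Invented counts: 140/200 correct at the lower level vs 180/200 at the higher.
z, p = two_proportion_z(140, 200, 180, 200)
```

With real data the counts would come from obligatory-context coding of the learner texts; here the figures are made up purely to show the mechanics.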

At present, the aim is to collect more data, especially from the higher levels, viz. B2, C1 and C2. Performances at these CEFR levels are more difficult to evaluate and, furthermore, have not been subject to as many studies as the lower levels.



Diagnosing reading in a second or foreign language
(Research Project co-funded by the Economic and Social Research Council, UK)

Overall aim

  • To increase our understanding of how proficiency in reading in a second or foreign language develops.
  • To develop an approach to the diagnosis of SFL reading ability based on empirical research and applied linguistic theory.

Overall objectives

  • To identify task and text features that contribute to the difficulty of assessment items and tasks in tests of reading in one's first language (L1) and in a second or foreign language (SFL).
  • To identify those subskills and cognitive processes that contribute to the ability to perform well on tests of L1 reading ability and reading ability in an SFL.
  • To examine the relationship between the overall ability to read in L1 and in an SFL.
  • To examine how well diagnostic measures of L1 reading difficulties relate to difficulties in SFL reading.
  • To modify L1 diagnostic measures for the diagnosis of strengths and weaknesses in SFL reading.
  • To devise new diagnostic measures of SFL reading ability.

This will be achieved through three sub-projects, which have the same objectives and related research questions, findings from which will be combined and compared in order to develop an empirically based model of the diagnosis of reading ability in a second or foreign language.

The ACER/ESRC Reading Sub-Project

Overall aim

This project aims to enhance our understanding of what affects item difficulty in tests of reading in one's first language or in the language of instruction. It will investigate methods for improving the reliability and validity of expert judgements of those features that contribute to the difficulty of reading tests for 15 year olds, as developed for the 2009 PISA study. In addition, the project will compare the ability of English native speakers to read in English with their ability to read in French, German or Spanish.

Research questions

L1 and Second and Foreign Language reading, based on the PISA 2000 and 2009 tests of reading

  1. What features of task demands and texts best predict item and task difficulty?
  2. What process of describing item and task content and reaching agreement among judges will result in the greatest reliability of judges?
  3. What model of reading processes and text variables will be most helpful for test developers, response coders and teachers, to predict difficulty and to use pedagogically?
  4. How does ability to read in L1 relate to SFL reading ability?

The Finnish Academy of Sciences / ESRC Reading Sub-Project

Overall aim

This project aims to enhance our understanding of difficulty in reading and learning to read in one's second or foreign language. The project will investigate diagnostic tools for assessing learners' strengths and weaknesses in their first language and adapt these tools for the diagnosis of SFL reading ability, using Finnish learners of English and immigrants learning Finnish in Finland.

Research questions

Diagnosis of Second or Foreign Language reading, based on the Finnish National Certificates

  1. Which diagnostic L1 reading tasks and other diagnostic measures are most promising for SFL learning?
  2. How do L1 and SFL reading skills relate to each other?
  3. How might diagnostic L1 reading tasks best be modified for use in SFL reading assessment?
  4. Which linguistic and non-linguistic skills characterise different reading ability levels on the Common European Framework of Reference (CEFR)?
  5. How does the development of SFL reading ability relate to various potential diagnostic measures?

The PEARSON/ESRC Reading Sub-Project

Overall aim

This project aims to enhance our understanding of what affects item difficulty in tests of reading in one's second or foreign language. The project will investigate the content and construct validity of a test of English for Academic Purposes, the diagnostic value of the resulting profiles of reading abilities, and the relationship between test developers' intentions and test outcomes.

Research questions

Second and Foreign Language (SFL) reading, based on the Pearson Test of English (Academic)

  1. Which aspects of the constructs underlying SFL reading tests can expert judges agree upon?
  2. Which reported SFL reading skills have the greatest predictive validity and diagnostic utility?
  3. Which learner performance variables best predict item and task difficulty, and measures of learner ability?
  4. Which background learner variables best predict item and task difficulty, and measures of learner ability?





Linking a learner corpus to the CEFR
(Postdoctoral research fellow Dr. Cecilie Carlsen & Dr. Felianka Kaftandjieva)

Part of the ASKeladden project, Department of Linguistic, Literary and Aesthetic studies, University of Bergen


The aim of this project was to link an electronic learner corpus of Norwegian as a second language (ASK) to the Common European Framework of Reference for Languages (ASK is an acronym for the three constituent morphemes of Norwegian AndreSpråksKorpus (SecondLanguageCorpus), Tenfjord, 2007, p. 207).

Data and method

ASK contains texts written by adult learners of Norwegian representing 10 different first languages (Albanian, Dutch, English, German, Polish, Russian, Serbian/Bosnian/Croatian, Spanish, Somali, and Vietnamese). The corpus texts were selected from two standardized tests of Norwegian as a second language at two broad levels of proficiency: intermediate and advanced. As part of my postdoctoral research project, I wanted to carry out a reassessment of the corpus texts to obtain a more fine-grained scaling of the texts on the one hand, and a more reliable level assignment on the other.

Dr. Felianka Kaftandjieva at the University of Sofia was responsible for the design and the statistical analysis of the project. All texts from seven L1 groups, 1222 texts or two thirds of the corpus, were rated by a group of 10 experienced raters familiar with the CEFR. The design of the linking project was as follows: all 10 raters scored the same 200 texts, selected to represent the seven L1s and the two tests from which the texts were drawn. Ratings were done on the CEFR levels A1-C2 and the in-between levels A1/A2, A2/B1, etc. Based on rater reliability and rater severity, the raters were divided into two groups, which then scored the rest of the texts, 511 texts per group. Raters received training before rating the first 200 texts. The rater reliability based on the rating of the 200 texts was more than satisfactory: the homogeneity index H was statistically significant (p < 0.01) with a mean of 0.84, and the inter-rater correlations (Pearson's correlation coefficients between any pair of raters i and j) were statistically significant at p < 0.01 with a mean of 0.82. The linking project was finished in August 2009, and the new level placement of the texts was incorporated in the corpus during autumn 2009.
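The mean pairwise inter-rater correlation reported above can be sketched in a few lines. The rater names and the half-step coding of in-between levels below are illustrative assumptions, not the project's actual setup:

```python
from itertools import combinations
from math import sqrt

def pearson(x, y):
    """Pearson correlation between two equally long lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def mean_interrater_r(ratings):
    """Mean Pearson correlation over all rater pairs.
    ratings: {rater_name: [numeric scores for the same texts]}.
    In-between levels can be coded as half steps, e.g. A1=1, A1/A2=1.5, ... C2=6."""
    pairs = list(combinations(sorted(ratings), 2))
    return sum(pearson(ratings[a], ratings[b]) for a, b in pairs) / len(pairs)

# Hypothetical example: three raters scoring the same four texts.
scores = {"rater1": [2, 3, 4, 5], "rater2": [2.5, 3, 4, 5.5], "rater3": [2, 3.5, 4, 5]}
r_mean = mean_interrater_r(scores)
```

In the actual project the same computation would run over 10 raters and 200 shared texts.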


An electronic learner corpus reliably linked to the CEFR is a useful tool for investigating language learning and proficiency levels. For SLA research, it allows quasi-longitudinal studies of interlanguage development, letting us investigate the developmental sequences of syntax and morphology, vocabulary and discourse patterns. Secondly, it allows us to study L1 influence and rule out the possibility that observed differences between two or more L1 groups are due to level differences rather than L1 differences.

For language testing, it allows us to test empirically what learners, or test candidates, can and can’t do at different CEFR-levels. This is particularly interesting in a country like Norway where the teaching curriculum and standardized tests for adult immigrants are based on the CEFR.

Finally, it allows an empirical validation of the level descriptors of the CEFR, and the development of language specific level descriptors for the Norwegian language.



Discourse connectives across CEFR-levels: A corpus based study
(Dr. Cecilie Carlsen, Postdoctoral research fellow, ASKeladden project, Department of Linguistic, Literary and Aesthetic Studies, University of Bergen, Norway)


This study focuses on the use of discourse connectives, such as and, but, so, then, and however, in written learner texts of Norwegian as a second language. The Common European Framework of Reference for Languages (CEFR) makes specific predictions about the use of such discourse connectives in learner language, elaborated in the illustrative scale of Coherence and Cohesion on p. 125. The CEFR predicts that the range of different connectives increases across proficiency levels, that more advanced learners make more use of low-frequency connectives than learners at lower levels, and that learners gain increased control of connectives as they progress.

Data and method

The overall research question of the study is whether the predictions made in the CEFR about learners’ use of discourse connectives are supported by authentic learner data. The predictions are tested against a computer learner corpus of written Norwegian (ASK) developed at the University of Bergen, Norway and linked to the CEFR. Correspondence analysis was used in order to explore the relation between proficiency levels and the use of the 36 connectives included in the study.
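The distance measure underlying correspondence analysis is the chi-square distance between row profiles, here the connective-usage profile of each proficiency level. A minimal sketch of that first step, with toy counts rather than the study's data:

```python
from math import sqrt

def chi_square_distance(table, level_a, level_b):
    """Chi-square distance between the connective-usage profiles of two
    proficiency levels, the distance on which correspondence analysis is
    built. table: {level: [count per connective]} with a fixed column order."""
    grand = sum(sum(row) for row in table.values())
    n_cols = len(next(iter(table.values())))
    # Column masses: each connective's share of the whole table.
    col_mass = [sum(row[j] for row in table.values()) / grand for j in range(n_cols)]

    def profile(row):
        total = sum(row)
        return [c / total for c in row]

    pa, pb = profile(table[level_a]), profile(table[level_b])
    return sqrt(sum((pa[j] - pb[j]) ** 2 / col_mass[j] for j in range(n_cols)))

# Toy table: counts of two connectives at three levels.
toy = {"B1": [10, 5], "B2": [20, 10], "C1": [5, 20]}
```

A full correspondence analysis would go on to decompose these distances into a low-dimensional map of levels and connectives; the real study did this for 36 connectives across the ASK proficiency levels.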


The results of the study largely support the predictions made in the CEFR. More advanced learners use a greater variety of connectives than less advanced learners, they use more low-frequency connectives, and they show greater control of the connectives used. The results also imply, however, that learners start to use a variety of different connectives, including medium- and low-frequency ones, earlier than is assumed in the CEFR. Even at the B1/B2 level, learners use a range of different connectives to make their texts cohere, and they use them largely correctly.

The study is reported in an article to appear in Martin, M., Bartning, I. and Vedder, S. C. (eds.) 2010.



English Profile
(Angeliki Salamoura, Nick Saville)

English Profile is a long-term, collaborative programme to enhance the learning, teaching and assessment of English worldwide. The aim is to create a set of Reference Level Descriptions linked to the Common European Framework of Reference. These will provide detailed information about the features of learner English at each CEFR level, thus informing both theory (second language acquisition and learning) and practice (curricula, course and test material development) to support the learning, teaching and assessment of English.

The founding partners are: the University of Cambridge (Cambridge ESOL, Cambridge University Press, Research Centre for English and Applied Linguistics (RCEAL), Cambridge Computer Lab), British Council, English UK, and University of Bedfordshire Centre for Research in English Language Learning and Assessment (CRELLA). A growing number of researchers and educationists make up the English Profile Network.

The EPP's research programme is the latest stage in a process dating back to the 1970s, when John Trim and Jan van Ek developed the original Threshold series (T-series), which remains a cornerstone of research and materials development in language teaching and testing and contributed to the development of the CEFR. English Profile builds on the T-series, working with a functional approach to the CEFR level descriptions. Innovative features of the EPP are its empirical, corpus-based dimension; the incorporation of second language acquisition and psycholinguistic considerations in addition to the more traditional linguistic features; and a strong focus on the impact of L1 transfer effects.

English Profile is built around three major research strands. In the Corpus Linguistics strand, headed by Professor John Hawkins (RCEAL), (second language) linguists and computer scientists are investigating the language which learners actually produce at each level. The Pedagogy strand, headed by Professor Cyril Weir (CRELLA), focuses on curricula and materials, with particular attention to the higher levels. The Assessment strand, headed by Dr Nick Saville (Cambridge ESOL), focuses on how language knowledge and use develop. A major undertaking is the development of the Cambridge English Profile Corpus, a principled collection of learner written and spoken data aligned to CEFR levels, which complements the current Cambridge Learner Corpus, an extensive corpus of CEFR-aligned exam scripts.

From a second language acquisition point of view, a number of hypotheses investigated within English Profile derive from Hawkins' theory of Efficiency and Complexity in Grammars (Hawkins 2004). Other hypotheses tested are based on SLA theories of sentence-level features – the acquisition of the morpho-syntax of verbs (e.g. Parodi 1998, 2000; White 2003) – and discourse-level features – the acquisition of reference to space and person (e.g. Slobin; Hickmann and Hendriks, in press). English Profile research is also investigating effects of “transfer” from the L1, which are widely attested in the SLA literature (from Lado 1957 to recent studies, e.g. Schwartz and Sprouse 1994, among many others).

For more information on specific sub-projects and Programme outcomes, contact Angeliki Salamoura.



Levels in the acquisition of Italian and Direct Profil
(Jonas Granfeldt, Eva Wiberg & Petra Bernardini)

This project aims to determine levels in the acquisition of Italian as a second language and to develop an Italian version of the Direct Profil software for learners' Italian. Direct Profil already exists for French learner language (see the work by Granfeldt and colleagues, who invented and developed it). Written and oral texts of L2 Italian are analyzed to determine levels of Italian learner language, and the results are implemented in the Italian version of Direct Profil. The software can be used to assess the level of a learner's language production and gives a profile consisting of various morpho-syntactic phenomena, such as subject-verb agreement and noun-phrase agreement.

Petra Bernardini
Eva Wiberg
Jonas Granfeldt



Accuracy across proficiency levels and L1 backgrounds: Insights from an error-tagged EFL learner corpus
(PhD project: Professor Sylviane Granger, supervisor, and Jennifer Thewissen, student)


One of the practical aims of this PhD is to investigate how an error-tagged learner corpus, i.e. a learner corpus that has been annotated for errors, can be used to flesh out the existing Common European Framework (CEF) descriptors for linguistic competence, more specifically the descriptors for grammatical accuracy, vocabulary control, vocabulary range, orthographic control (spelling and punctuation), as well as those for cohesion and coherence (CEF, 2001: 108-118). We suggest here that the ‘cannot do's' that result from error-tagged learner corpus analysis will usefully supplement the ‘can do' approach currently adopted by the CEF, thereby making the descriptors more transparent for users.

Research questions

  • Do the L1 corpora studied present error profiles that are homogeneous enough to enable a detailed refinement of the CEF descriptors?
  • Do the L1 corpora studied present error profiles that are heterogeneous and therefore force the CEF descriptors to remain fairly general?

Data and method

The research data used have been taken from the International Corpus of Learner English (ICLE), a learner corpus that comprises writing by learners from 16 different mother tongue backgrounds (Granger et al. 2009). A total of c. 50,000 tokens were selected from the French, German and Spanish components of ICLE, amounting to an overall corpus of c. 150,000 tokens. Each text in the corpus has been (1) fully error-tagged following the Louvain error tagging manual (Dagneaux et al. 2008) and (2) professionally rated according to the CEF descriptors for linguistic competence. The raters were asked to assign a CEF grade (from B1 to C2) to all of the linguistic competences as well as to each text as a whole. Two raters were initially called upon to grade the texts. In cases of major disagreement (i.e. cases where the judges disagreed by more than one band score and attributed, say, an overall score of B1 and C1 to the same text), a third rater was called upon to assess the problematic texts.
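The adjudication rule described above can be sketched as follows; the band coding and function name are illustrative, not taken from the project:

```python
BANDS = ["B1", "B2", "C1", "C2"]  # the CEF grades used by the raters

def needs_third_rater(grade_a, grade_b):
    """Flag a text for adjudication when two raters disagree by more than
    one band score (e.g. B1 vs C1, but not B2 vs C1)."""
    return abs(BANDS.index(grade_a) - BANDS.index(grade_b)) > 1
```

Texts flagged this way would be passed to the third rater; adjacent-band disagreements are tolerated in this design.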

Preliminary findings

We have so far carried out a number of preliminary studies which focus more heavily on levels B2 to C2. In Granger and Thewissen 2005a/b, we first showed that, as might be expected, the average number of errors was lower in C1 than in B2 and lower in C2 than in C1. However, there was a much steadier downward trend between B2 and C1 than between C1 and C2, where the error density levelled off somewhat. We also analysed the number of formal, grammatical, lexical and punctuation errors committed by B2, C1 and C2 level learners. Two error domains stood out as having discriminating power: grammatical errors made it possible to distinguish between B2 and the C levels, and punctuation errors helped to distinguish between levels C1 and C2.
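The error-density comparison behind these findings can be illustrated with invented counts, normalising errors per 100 tokens so that levels with different amounts of text remain comparable:

```python
def errors_per_100_tokens(n_errors, n_tokens):
    """Normalised error density, comparable across subcorpora of different sizes."""
    return 100 * n_errors / n_tokens

# Hypothetical counts for three levels (not the study's figures):
densities = {
    "B2": errors_per_100_tokens(400, 10000),  # 4.0 errors per 100 tokens
    "C1": errors_per_100_tokens(250, 10000),  # 2.5
    "C2": errors_per_100_tokens(230, 10000),  # 2.3
}
# The drop from B2 to C1 (1.5) is much larger than from C1 to C2
# (roughly 0.2): the levelling off described above.
```

The same normalisation can be applied per error domain (formal, grammatical, lexical, punctuation) to see which domains discriminate between levels.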

In Thewissen et al. (2006) we concretely showed how the current CEF descriptors for grammatical accuracy and vocabulary control could be improved on the basis of the errors that learners make in these areas. It was for instance argued that the higher level descriptors (C1-C2) were somewhat overoptimistic regarding learners' actual can do's.

In Thewissen (2006), I carried out a detailed analysis of article errors at B2, C1, C2 and showed that the three proficiency groups mainly experienced problems using articles with generic reference. Within this category, learners' main area of difficulty was located in the use of the zero article with generic uncountable nouns. I then illustrated how these results could practically be implemented in the CEF descriptors for grammatical accuracy.

Finally, in Thewissen (2008), I analysed learners' phraseological errors, thereby attempting to flesh out the descriptors for vocabulary range. For instance, the current B1 descriptor says that a learner at this level “Has a sufficient vocabulary to express him/herself with some circumlocutions on most topics pertinent to his/her everyday life such as family, hobbies and interests, work, travel, and current events”. On the basis of the numerous errors found in the error-tagged corpus, the following rewording was proposed: “A high number of unidiomatic word combinations are found, many of which are strongly influenced by the learner's mother tongue.”

In the near future, we wish to provide learner-corpus-based suggestions for the improvement of each of the CEF descriptors for linguistic competences, thereby giving them a stronger empirical basis.


The PhD will be completed in 2010/2011.


Dagneaux E., Denness S., Granger S., Meunier F., Neff J. and Thewissen J. (2008). Error Tagging Manual Version 1.3. Centre for English Corpus Linguistics. Université Catholique de Louvain, Louvain-la-Neuve. Unpublished manual.

Granger S. and Thewissen J. (2005a). Towards a reconciliation of a ‘Can Do' and ‘Can't Do' approach to language assessment. Paper presented at the Second Annual Conference of EALTA, 2nd-5th June 2005, Voss, Norway.

Granger S. and Thewissen J. (2005b). The contribution of error-tagged learner corpora to the assessment of language proficiency. Paper presented at the 2005 Language Testing Research Colloquium, July 20th-22nd 2005, Ottawa, Canada.



Observing interlanguage development at school
(Principal investigator: Gabriele Pallotti)


The aim of the project is to enable teachers and testers to assess learners' interlanguage development in everyday school contexts. In order to do so, it is necessary to identify easy and time-effective procedures for data collection and analysis that still guarantee the validity and reliability of observations.

Data and method

The project began in 2007 and is still under way. It involves 10 kindergarten, 7 primary and 2 middle school classes in different parts of Italy, with a total of about 40 participating teachers, who are actively involved in data collection and in the creation, selection and fine-tuning of procedures for data elicitation and analysis. Altogether, 120 NNS children and 40 NS children have been included, aged between 5 and 13. Some children have also been observed longitudinally for two or three years.


Several data elicitation procedures have been piloted, based on previous SLA studies and past research projects. A small number of them have been retained because of their practicality and their capacity to yield data samples with substantial numbers of contexts for the production of diagnostic structures. A feature with high diagnostic value is one with a relatively slow development: it appears early but continues to be challenging even for more advanced learners. In this way, the same procedure can provide useful information from a variety of learners at different proficiency levels.

After being collected, data are transcribed by teachers or research assistants and are then subjected to a kind of interlanguage analysis that is feasible in ordinary school contexts. Several analytical solutions have been tried out, ranging from very general grids to fine-grained coding sheets for specific structures, all of which have been used successfully by practitioners.

Teachers have greatly appreciated the experience as part of their in-service training. Looking carefully at the systematicities of their students' linguistic systems provided them with many insights into how to make their teaching more effective, and a number of original didactic activities have been produced as a consequence. The value of the approach for formative assessment and for teacher training is thus undeniable.

Many issues remain unresolved for its use in summative assessment. Data transcription is the main obstacle to conducting a systematic assessment based on interlanguage analysis. Although the procedures developed in the project drastically reduced the amount of data to be collected, and the analytical grids helped teachers conduct a systematic analysis in a reasonable amount of time, the usability of the results for placement and certification remains problematic and points to the need to train assessors in interlanguage analysis. The collected data can, however, be profitably used as a form of systematic, interlanguage-oriented portfolio evaluation.



CEFLING - Linguistic Basis of the Common European Framework for L2 English and L2 Finnish
(Research project funded by the Academy of Finland 2007-2009)


The CEFLING project addresses fundamental questions about how second language proficiency develops from one level to the next. These proficiency levels, or scales, are a central component of the Common European Framework of Reference for Languages (CEFR). The results of the study will provide a new theoretical model for connecting the CEFR “can do” type proficiency level descriptions with linguistic characteristics of actual language data.

What are the CEFR levels about?

  • The CEFR scale describes what language learners can do in a foreign language at different levels, ranging from beginners to advanced learners. The CEFR is currently being adopted throughout Europe as the international yardstick for curricula, examinations, materials and courses.
  • Finland has been a pioneer in using the CEFR: it has been adapted for the new National Core Curricula for schools and for the National Certificates language examination.

Pulling together language testing and second language acquisition expertise

Describing language learners and their abilities – as is done in the CEFR – requires theoretical and practical knowledge of both language acquisition and language assessment. Rarely, however, do these two well-established but independent areas of study communicate, and it is therefore uncertain to what extent the CEFR, or other scales, reflect actual language learning. This project, which is part of a wide European network of researchers, brings second/foreign language acquisition and language testing experts together to investigate common concerns about the CEFR.

Assumptions about language and learning

The underlying principle of the project is a usage-based view of language learning, which combines cognitive views from Processability Theory with a more detailed structural analysis of developmental stages based on Conceptual Semantics and Construction Grammar.

The theories of measurement and communicative competence underlying the CEFR itself also form part of the conceptual background of this study.

Research questions

The study addresses the following questions of both theoretical and practical importance:

  • What combinations of linguistic features characterise learners' performance at the proficiency levels defined in the Common Framework and its Finnish adaptations?
  • To what extent do adult and young learners who engage in the same communicative tasks, at a given level, perform in the same way linguistically? To what extent are the adult-oriented CEFR levels and their Finnish adaptations for young learners equivalent?
  • To what extent are the pedagogical tasks found in the teaching materials in the Finnish comprehensive school comparable with the tasks defined in the CEFR and the new curriculum?
  • What are the linguistic and communicative features that teachers (or National Certificates raters) pay attention to when assessing learners with the help of the Finnish adaptations of the CEFR scales? How do these features relate to the linguistic and communicative analysis of the same performances?

Project data

The project will study learners of both English and Finnish and will focus on their performance in writing tasks. The data for adults come from the National Certificates test performance corpus, and the data for children will be collected during the project.

This study focuses only on writing skills but the approach can be modified for the study of other skills (speaking, listening, reading).

Application of findings

The results have strong potential for practical application in curriculum development, teacher training, diagnostic and other language skill assessment, and the production of teaching materials.



Tasks and assessing L2 listening comprehension
(Tineke Brunfaut and Andrea Révész, Lancaster University, UK)


In this funded research project, our aim was to investigate the relationships between text characteristics of task input, task difficulty, and test takers' perceptions of difficulty in L2 listening assessment. In particular, we examined how linguistic complexity (i.e., phonological, lexical, morphosyntactic, and discourse complexity), speech rate, and explicitness affect the difficulty of an L2 listening test task.


The participants were 77 EAP students at a UK university. Out of the 77 students, 68 were randomly assigned to four groups. Each group performed the same 18 versions of a listening test task. The task required participants to listen to a short passage and to provide a suitable ending for it. Participants were presented with the tasks in a split-block design to avoid sequence effects. Immediately after completing a version of the task, students also completed a brief perception questionnaire, which assessed their perception of overall task difficulty and of the linguistic complexity, speed, and explicitness of the text. The remaining 9 students were asked, through a process of stimulated recall, to describe their thought processes during task performance. The texts were analysed in terms of speed, explicitness, and a range of linguistic complexity measures. The participants’ responses were also examined for linguistic complexity as a function of task version. Rasch analysis was used to gauge the comparability of the various versions of the test task, and regression analyses to examine the impact of text characteristics on the relative difficulty of the test versions. The data obtained via the stimulated recalls were subjected to qualitative analysis.


Three main findings emerged from the study. First, only a limited number of variables were found to predict task difficulty: 1) the proportion of function words amongst the thousand most frequently used English words (K1 words), 2) the lexical density of the text, and 3) the causal content of the text. A higher proportion of K1 function words in the texts resulted in easier tasks, whereas more lexically dense texts and listening texts with higher causal content were more difficult. Other text characteristics of the task input – phonological complexity, syntactic complexity, other lexical and discourse complexity measures, speech rate, and explicitness – did not predict task difficulty. Second, the analysis of the relationship between linguistic characteristics of the responses and task difficulty showed that neither the lexical complexity nor the syntactic complexity of the response had an impact on task difficulty. Third, our analyses revealed that perceptions of task difficulty, task performance, linguistic complexity of task input, and response difficulty correlated strongly and significantly with actual task difficulty. In addition, perceptions were found to be strong predictors of task difficulty.
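The regression step can be illustrated with a small ordinary-least-squares sketch. All numbers below are invented: the difficulty scores are constructed from a known linear relation purely so that the fit is checkable, and do not reproduce the study's estimates.

```python
import numpy as np

# Hypothetical per-task-version predictors (invented for illustration):
# proportion of K1 function words, lexical density, causal content.
X = np.array([
    [0.55, 0.40, 0.2],
    [0.60, 0.50, 0.3],
    [0.50, 0.55, 0.5],
    [0.45, 0.40, 0.6],
    [0.65, 0.35, 0.1],
    [0.40, 0.65, 0.7],
])
X1 = np.column_stack([np.ones(len(X)), X])  # add an intercept column

# Construct "difficulty" values from a known relation so the fit is
# verifiable: easier with more K1 function words (negative weight),
# harder with higher lexical density and causal content (positive weights).
true_coefs = np.array([0.5, -2.0, 1.5, 1.0])
y = X1 @ true_coefs

# Ordinary least squares recovers the coefficients.
coefs, *_ = np.linalg.lstsq(X1, y, rcond=None)
intercept, b_k1, b_density, b_causal = coefs
```

The coefficient signs mirror the finding above: negative for the K1 function-word proportion, positive for lexical density and causal content.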



Investigating Lexical Bundles Across Learner Writing Development
(PhD project: Yu-Hua Chen, supervised by Dr Paul Baker, Lancaster University; examination panel: Professor Charles Alderson & Professor David Oakey, 2009)

This doctoral thesis combined quantitative and qualitative analyses of corpora of L2 English writing by L1 Chinese learners and two L1 English corpora, seeking to identify the differences and similarities in the use of lexical bundles across learner proficiency levels as well as between native and non-native writing. Lexical bundles (Biber et al., 1999, 2003, 2004, 2007) are recurrent word sequences selected with specified frequency and dispersion thresholds. Because they function as building blocks of discourse, lexical bundles offer a frequency-driven way of looking into the discourse aspect of learner language development. Five subcorpora, each of approximately 150,000 words and representing different proficiency groups, were carefully chosen and compared.
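The frequency-and-dispersion selection of bundles can be sketched in a few lines. This is a toy illustration: the 4-word window and the thresholds are assumptions for the example, not the thesis's actual cut-offs.

```python
from collections import Counter, defaultdict

def extract_bundles(texts, n=4, min_freq=3, min_texts=2):
    """Return n-grams meeting a frequency threshold (min_freq total
    occurrences) and a dispersion threshold (occurring in at least
    min_texts different texts)."""
    freq = Counter()
    dispersion = defaultdict(set)
    for i, text in enumerate(texts):
        tokens = text.lower().split()
        for j in range(len(tokens) - n + 1):
            gram = tuple(tokens[j:j + n])
            freq[gram] += 1
            dispersion[gram].add(i)
    return {g: c for g, c in freq.items()
            if c >= min_freq and len(dispersion[g]) >= min_texts}

# Toy corpus (invented sentences for illustration).
texts = [
    "on the other hand the results show that on the other hand",
    "on the other hand there are a lot of problems",
    "as a matter of fact there are a lot of issues",
]
bundles = extract_bundles(texts, n=4, min_freq=3, min_texts=2)
# Only "on the other hand" meets both thresholds in this toy corpus.
```

The dispersion threshold is what distinguishes a genuinely recurrent bundle from a sequence repeated many times by a single writer.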

In Study 1, academic learner essays written by L2 students were compared with two groups of L1 academic writing: one referring to native expert writing and the other native peer writing. The native expert writing was extracted from the component of academic prose in the FLOB corpus (FLOB-J). The two groups of student writing, L2 writing of L1 Chinese students (BAWE-CH) and L1 peer writing of British students (BAWE-EN), both come from the BAWE corpus, which compiled proficient assessed student writing from British universities.

In Study 2, argumentative and expository essays chosen from the Longman Learner Corpus were rated by at least two experienced raters. Adopting a rigorous rating procedure, from benchmarking and rater training through to statistical analyses (including descriptive statistics and multi-faceted analyses), learner proficiency was determined with reference to the Common European Framework of Reference (CEFR). Two sizeable subcorpora representing two CEFR levels, B2 and C1, were selected for investigation.

Through several types of comparison – structural and functional categorisation as well as keyness analysis – a number of developmental patterns in the use of lexical bundles were identified. The results show that at the lower proficiency levels, learner language tends to be more simplistic (e.g. the immoderate use of ‘there is/are' and ‘it is' structures), colloquial (e.g. ‘there are a lot of'), clichéd (e.g. ‘as a matter of fact'), verbose (i.e. overuse of discourse organisers), categorical (i.e. underuse of hedging expressions), and overstating (e.g. ‘as we all know'). The more proficient writing demonstrates the opposite pattern and is thereby more native-like in this regard.

A few methodological issues, such as the use of chi-square tests and the determination of a cut-off frequency in lexical-bundle studies, are also addressed. The interpretation of the results and the implications for L2 writing pedagogy and psycholinguistics are discussed as well. With regard to the CEFR, it was found that the notion of ‘formulaicity' or ‘idiomaticity' is rarely addressed in its assessment criteria grid, and that the discourse aspect of writing is insufficiently represented. It is hoped that the present thesis can provide some empirical underpinning for learner proficiency descriptors.
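The kind of chi-square comparison discussed above, applied to a bundle's frequency in two corpora, can be sketched as a 2x2 contingency test. The counts below are hypothetical, chosen only to show the computation.

```python
def chi_square_2x2(freq_a, size_a, freq_b, size_b):
    """Pearson chi-square (no continuity correction) for a bundle
    occurring freq_a times in a corpus of size_a tokens and freq_b
    times in a corpus of size_b tokens."""
    observed = [[freq_a, size_a - freq_a],
                [freq_b, size_b - freq_b]]
    total = size_a + size_b
    row_totals = [size_a, size_b]
    col_totals = [freq_a + freq_b, total - freq_a - freq_b]
    chi2 = 0.0
    for r in range(2):
        for c in range(2):
            expected = row_totals[r] * col_totals[c] / total
            chi2 += (observed[r][c] - expected) ** 2 / expected
    return chi2

# Hypothetical counts: a bundle occurring 20 times per 1,000 tokens in
# one corpus versus 5 times per 1,000 tokens in another.
chi2 = chi_square_2x2(20, 1000, 5, 1000)
# chi2 > 3.84 indicates a difference significant at p < .05 (df = 1).
```

Low expected cell counts for rare bundles are one reason the choice of test and frequency cut-off matters methodologically.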

For further information, contact Yu Hua at



Research projects focussing on the concept of language proficiency
(Prof. Dr. J.H. Hulstijn)

For descriptions of these projects, follow this link.
