Research Achievements
Basic Information
- Affiliation: Associate Professor, Center for Language Education and Research, Sophia University
- Degrees: B.A. (March 2004, Akita University); M.A. in Education (March 2006, Akita University); M.A. in Linguistics (March 2010, Sophia University); Ph.D. in Linguistics (December 2014, University of Melbourne)
- Contact: taka-sato sophia.ac.jp
- Researcher Number: 60758506
- J-GLOBAL ID: 201501041756341280
- researchmap Member ID: 7000011115
Research Fields (1)
Education (4)
- October 2010 - December 2014
- April 2008 - March 2010
- April 2004 - March 2006
- April 1999 - March 2004
Awards (2)
Papers (16)
- Journal of Immersion and Content-Based Language Education 12(2), 221-248, June 17, 2024. Refereed.
- Language Testing 41(2), 316-337, April 2024. Refereed.
- Language Learning in Higher Education 12(1), 309-326, June 2022. Refereed. Lead author.
- Language Testing in Asia 12(1), 1-19, April 2022. Refereed.
  Abstract: Although some second language (L2) pedagogical approaches recognize critical thinking (CT) as an important skill, its assessment is challenging because CT is not a well-defined construct and has varying definitions. This study aimed to identify the relevant and salient features of argumentative essays that allow for the assessment of L2 students' CT skills. The study implemented a convergent mixed-methods research design, collecting and analyzing both quantitative and qualitative data to collate the results. Five raters assessed 140 causal argumentative essays written by Japanese university students attending Content and Language Integrated Learning courses based on five criteria: Task Achievement, Coherence and Cohesion, Lexical Resource, Grammatical Range and Accuracy, and CT Skills. A standard multiple regression was conducted to examine the relationships among these criteria. Additionally, raters' written verbal protocols were collected to identify the essay features to be considered when assessing students' CT skills. The results indicated that raters' judgments of students' CT were closely linked to Task Achievement. Furthermore, their assessments were affected by the essay's relevance to the question, content development, logicality, and quality of ideas. This study's findings help to conceptualize CT as a construct that should be incorporated into the assessment criteria of various L2 educational contexts.
- Applied Linguistics 40(6), 894-916, 2019. Refereed. Lead author.
- Papers in Language Testing and Assessment 8(1), 69-95, 2019. Refereed.
- Journal of English as a Lingua Franca 8(1), 9-35, 2019. Refereed. Lead author.
- Papers in Language Testing and Assessment 7(1), 1-31, 2018. Refereed.
- Language & Communication 57, 14-21, November 2017. Refereed. Last author.
- Language Testing in Asia 5(10), 1-16, 2015. Refereed. Lead author.
- Japan Association of College English Teachers (JACET) Journal 56(56), 39-56, 2013. Refereed.
  Abstract: It has been claimed that language tests have maintained relatively conservative views on what to test and have mostly attended to test-takers' language-specific features. This concern needs to be taken seriously, since a narrowly defined, language-oriented construct undermines inferences about test-takers' communicative performance in the real world. This study investigated the aspects of communication that are often included in and excluded from the construct definitions of current English oral proficiency tests. The study analyzed publicly available handbooks, official websites, and reviews of 14 English oral proficiency tests. Content analysis was conducted on the assessment criteria used in the tests. The results showed that the tests' construct definitions focus too tightly on the components of language proficiency (grammatical knowledge, sociolinguistic knowledge, and fluency). On the other hand, features closely related to the fulfillment of communicative tasks (content, communication strategies, and non-verbal behaviors) are not necessarily assessed by the tests. Given these results, it is recommended that stakeholders of oral proficiency tests be careful about test score interpretation and take into account the non-linguistic features underlying communication in classroom assessment.
- Japan Language Testing Association (JLTA) Journal 16, 107-126, 2013. Refereed.
  Abstract: The construct of general-purpose oral proficiency tests has been defined on the basis of theoretical models of second language (L2) communication established by language specialists. In contrast, the perspectives of linguistic laypersons (non-specialists in language testing and teaching) on L2 communication ability have not been incorporated into language assessment. However, it is important to understand how linguistic laypersons conceptualize L2 communication ability because they are the eventual interlocutors of L2 speakers in most real-world contexts. This study explores the features that influence linguistic laypersons' evaluative judgments of L2 oral communication ability. Four graduate students in disciplines other than applied linguistics and Teaching English to Speakers of Other Languages (TESOL) participated in the study. They watched 10 speakers' performances on the College English Test-Spoken English Test and indicated their impressions of each test-taker's communication ability. Three of the participants' ratings were moderately correlated with the test scores, whereas the ratings of one participant were weakly correlated. Their retrospective verbal protocols were also collected and analyzed. Fluency appeared to affect rater impressions the most, whereas grammar and vocabulary were peripheral factors in their judgments. The protocols also revealed that the participants attended to various non-linguistic features, which implies that language proficiency does not guarantee a positive evaluation from linguistic laypersons. The study also showed individual differences in the ratings and protocols, a sign of the linguistic laypersons' complex subjective judgments. It is suggested that their unique criteria can be used to supplement conventional linguistically oriented assessment criteria and to accurately predict linguistic laypersons' impressions in real-life contexts.
- Language Testing 29(2), 223-241, April 2012. Refereed.
- Annual Review of English Language Education in Japan (ARELE) 22, 17-32, 2011. Refereed.
  Abstract: The present study examined how Japanese and native English-speaking (NS) teachers assess the overall effectiveness of Japanese students' oral English performance. Four Japanese teachers and four NS teachers were asked to rate monologues performed by 30 undergraduate students. First, the raters were asked to assign a single score to each monologue on the basis of their intuitive judgment of the performance. Following this, the teachers were asked to assess five analytic criteria: Grammatical accuracy, Fluency, Vocabulary range, Pronunciation, and Content elaboration/development. The scores and the raters' written comments were analyzed to identify differences in scoring and to examine which criteria contribute to their intuitive judgments. The results showed that the Japanese raters assigned significantly higher scores for all the analytic criteria except Content elaboration/development, although their overall judgments of the monologues were almost the same. In addition, the scores assigned by the Japanese raters showed that only Fluency and Content elaboration/development significantly predicted their intuitive judgments, whereas the scores assigned by the NS raters revealed that all five criteria significantly predicted the overall score. The raters' written comments indicated that the Japanese raters paid more attention to features that were not included in the given analytic criteria.
- Tohoku English Education Society Bulletin 30(30), 65-79, 2010. Refereed.
- Japan Language Testing Association (JLTA) Journal 13, 1-20, 2010. Refereed.
  Abstract: The purpose of the present study was to examine the validity of 16 can-do items taken from the EIKEN can-do list (STEP, 2008). A total of 2,571 Japanese junior high school students were asked to assess their degree of confidence in the 16 can-do statements: four items each from EIKEN Grade 5, Grade 4, Grade 3, and Grade Pre-2. The present study employed the Rasch model to investigate whether (a) the items are unidimensional, (b) their item difficulty is appropriate, (c) item difficulty correlates with the items' EIKEN grades, and (d) the students' confidence levels correlate with their proficiency levels. The results showed that the can-do items are highly reliable and unidimensional. However, the students tended to feel that the items were unchallenging, especially the speaking and listening items.
MISC (5)
- JLTA Journal Vol. 19 Supplementary: 20th Anniversary Special Issue, 73-76, 2016
- Book Review: B. Marshall, Testing English: Formative and summative approaches to English assessment. Papers in Language Testing and Assessment 2(2), 114-117, 2013. Invited.
Books and Other Publications (4)
Lectures and Oral Presentations (13)
- 10th Anniversary Conference of English as a Lingua Franca, June 13, 2017
- JACET International Annual Conference, 2014
- Korean English Language Testing Association Annual Conference, 2014
Research Projects (Joint Research and Competitive Funding) (1)
- Japan Society for the Promotion of Science (JSPS), Grants-in-Aid for Scientific Research (KAKENHI), April 2024 - March 2027