Research Achievements

佐藤 敬典

Sato Takanori

Basic Information

Affiliation
Associate Professor, Center for Language Education and Research, Sophia University
Degrees
B.A. (March 2004, Akita University)
M.A. (Education) (March 2006, Akita University)
M.A. (Linguistics) (March 2010, Sophia University)
Ph.D. (Linguistics) (December 2014, University of Melbourne)

Contact
taka-satosophia.ac.jp
Researcher Number
60758506
J-GLOBAL ID
201501041756341280
researchmap Member ID
7000011115

Papers

 16
  • Takanori Sato
    Journal of Immersion and Content-Based Language Education June 2024  Peer-reviewed
  • Takanori Sato, Chantal Hemmi
    Language Learning in Higher Education 12(1) 309-326 June 2022  Peer-reviewed, Lead author
  • Takanori Sato
    Language Testing in Asia 12(1) 1-19 April 2022  Peer-reviewed
    Abstract: Although some second language (L2) pedagogical approaches recognize critical thinking (CT) as an important skill, its assessment is challenging because CT is not a well-defined construct and its definitions vary. This study aimed to identify the relevant and salient features of argumentative essays that allow for the assessment of L2 students' CT skills. This study implemented a convergent mixed-methods research design, collecting and analyzing both quantitative and qualitative data to collate the results. Five raters assessed 140 causal argumentative essays written by Japanese university students attending Content and Language Integrated Learning courses based on five criteria: Task Achievement, Coherence and Cohesion, Lexical Resource, Grammatical Range and Accuracy, and CT Skills. A standard multiple regression was conducted to examine the relationships among these criteria. Additionally, raters' written verbal protocols were collected to identify the essay features to be considered when assessing students' CT skills. The results indicated that raters' judgments of students' CT were closely linked to Task Achievement. Furthermore, their assessments were affected by the essay's relevancy to the question, content development, logicality, and quality of ideas. This study's findings help to conceptualize CT as a construct that can be incorporated into the assessment criteria of various L2 educational contexts.
  • Takanori Sato, Tim McNamara
    Applied Linguistics 40(6) 894-916 2019  Peer-reviewed, Lead author
  • Takanori Sato
    Papers in Language Testing and Assessment 8(1) 69-95 2019  Peer-reviewed
  • Takanori Sato, Yuri Jody Yujobo, Tricia Okada, Ethel Ogane
    Journal of English as a Lingua Franca 8(1) 9-35 2019  Peer-reviewed, Lead author
  • Takanori Sato
    Papers in Language Testing and Assessment 7(1) 1-31 2018  Peer-reviewed
  • Catherine Elder, Tim McNamara, Hyejeong Kim, John Pill, Takanori Sato
    Language & Communication 57 14-21 November 2017  Peer-reviewed, Last author
    Models of communicative competence in a second language invoked in defining the construct of widely used tests of communicative language ability have drawn largely on the work of language specialists. The risk of exclusive reliance on language expertise to conceptualize, design and administer language tests is that test scores may carry meanings that are misaligned with the values of non-language specialists, that is, those without language expertise but perhaps with expert knowledge in the domain of concern. Neglect of the perspective of lay (i.e., non-linguistic) judges on language and communication is a serious validity concern, since they are the ultimate arbiters of what matters for effective communication in the relevant context of language use. The paper reports on three research studies exploring the validity of rating scales used to assess speaking performance on a number of high-stakes English-language tests developed for professional or general proficiency assessment purposes in Korea, Australia, China, and the UK. Drawing on Jacoby and McNamara's (1999) notion of "indigenous assessment", each project attempted to identify the values underlying non-language specialists' judgements of spoken communication as they rated test performance or participated in focus-group workshops where they viewed and commented on video- or audio-recorded samples of performance in the relevant real-world domain. The findings of these studies raise the question of whether language can or should be assessed as object independently of the content which it conveys or without regard for the goal and context of the communication. 
    The studies' findings also cast doubt on the notion that the native speaker should always serve as a benchmark for judging communicative effectiveness, especially with tests of language for specific purposes, where native speakers and second-language learners alike may lack the requisite skills for the kind of effective interaction demanded by the context.
  • Takanori Sato, Naoki Ikeda
    Language Testing in Asia 5(10) 1-16 2015  Peer-reviewed, Lead author
  • Takanori Sato
    Japan Association of College English Teachers (JACET) Journal 56(56) 39-56 2013  Peer-reviewed
    It has been claimed that language tests have maintained relatively conservative views on what to test and have been mostly attending to test-takers' language-specific features. This concern needs to be taken seriously since a narrowly defined, language-oriented construct undermines the inference about test-takers' communicative performance in the real world. This study investigated the aspects of communication that are often included in and excluded from the construct definitions of current English oral proficiency tests. This study analyzed publicly available handbooks, official websites, and reviews of 14 English oral proficiency tests. Content analysis was conducted on the assessment criteria used in the tests. The results showed that the tests' construct definitions focus too tightly on the components of language proficiency (grammatical knowledge, sociolinguistic knowledge, and fluency). On the other hand, features closely related to the fulfillment of communicative tasks (content, communication strategies, and non-verbal behaviors) are not necessarily assessed by the tests. Given these results, it is recommended that stakeholders of oral proficiency tests be careful about test score interpretation and take into account the non-linguistic features underlying communication in classroom assessment.
  • Takanori Sato
    Japan Language Testing Association (JLTA) Journal 16 107-126 2013  Peer-reviewed
    The construct of general-purpose oral proficiency tests has been defined on the basis of the theoretical models of second language (L2) communication established by language specialists. In contrast, the perspectives on L2 communication ability held by linguistic laypersons (non-specialists in language testing and teaching) have not been incorporated into language assessment. However, it is important to understand how linguistic laypersons conceptualize L2 communication ability because they are the eventual interlocutors of L2 speakers in most real-world contexts. This study explores the features that influence linguistic laypersons' evaluative judgments of L2 oral communication ability. Four graduate students majoring in disciplines other than applied linguistics and Teaching English to Speakers of Other Languages (TESOL) participated in the study. They watched 10 speakers' performances on the College English Test-Spoken English Test and indicated their impressions of each test-taker's communication ability. Three of the participants' ratings were moderately correlated with the test scores, whereas the ratings of one participant were weakly correlated. Their retrospective verbal protocols were also collected and analyzed. Fluency appeared to affect rater impressions the most, whereas grammar and vocabulary were shown to be peripheral factors in their judgments. Their protocols also revealed that the participants attended to various non-linguistic features, which implies that language proficiency does not guarantee a positive evaluation from linguistic laypersons. This study also showed individual differences in the ratings and protocols - a sign of the linguistic laypersons' complex subjective judgments. It is suggested that their unique criteria can be used to supplement conventional linguistically oriented assessment criteria and accurately predict linguistic laypersons' impressions in real-life contexts.
  • Takanori Sato
    Language Testing 29(2) 223-241 April 2012  Peer-reviewed
    The content that test-takers attempt to convey is not always included in the construct definition of general English oral proficiency tests, although some English-for-academic-purposes (EAP) speaking tests and most writing tests tend to place great emphasis on the evaluation of the content or ideas in the performance. This study investigated the relative contribution of linguistic criteria and the elaboration of speech content to scores on a test of speaking proficiency. A speaking test was designed and administered to Japanese undergraduates to determine what criteria English teachers associate with general oral proficiency. Nine raters were recruited to rate 30 students' monologues on three topics, using intuitive judgments of oral proficiency (referred to as Overall communicative effectiveness). Following this, they assigned scores to the monologues using five criteria: Grammatical accuracy, Fluency, Vocabulary range, Pronunciation, and Content elaboration/development. The raters were also asked to provide open-ended written comments on the factors contributing to their intuitive judgments. Statistical analyses of the scores - Rasch measurement, multiple regression, and multivariate generalizability (G) theory analysis - revealed that Content elaboration/development made a substantive contribution to the intuitive judgments and composite score. The present study enriches our understanding of general oral proficiency and the construct definition of proficiency tests.
  • Takanori Sato
    Annual Review of English Language Education in Japan (ARELE) 22 17-32 2011  Peer-reviewed
    The present study examined how Japanese and native English-speaking (NS) teachers assess the overall effectiveness of Japanese students' oral English performance. Four Japanese teachers and four NS teachers were asked to rate monologues performed by 30 undergraduate students. First, the raters were asked to assign a single score for each monologue on the basis of their intuitive judgments of the performance. Following this, the teachers were asked to assess five analytic criteria: Grammatical accuracy, Fluency, Vocabulary range, Pronunciation, and Content elaboration/development. The scores and the raters' written comments were analyzed to identify the differences in scoring and to examine which criteria contribute to their intuitive judgments. The results showed that the Japanese raters assigned significantly higher scores for all the analytic criteria with the exception of Content elaboration/development, although their overall judgment of the monologues was almost the same. In addition, the scores assigned by the Japanese raters showed that only Fluency and Content elaboration/development significantly predicted their intuitive judgments, whereas the scores assigned by the NS raters revealed that all five criteria significantly predicted the overall score. The raters' written comments indicated that the Japanese raters paid more attention to features that were not included in the given analytic criteria.
  • Takanori Sato
    Tohoku English Education Society Bulletin 30(30) 65-79 2010  Peer-reviewed
  • Takanori Sato
    Japan Language Testing Association (JLTA) Journal 13 1-20 2010  Peer-reviewed
    The purpose of the present study was to examine the validity of 16 can-do items taken from the EIKEN can-do list (STEP, 2008). A total of 2,571 Japanese junior high school students were asked to assess their degree of confidence in the 16 can-do statements: four items each from EIKEN Grade 5, Grade 4, Grade 3, and Grade Pre-2. The present study employed the Rasch model to investigate whether (a) the items are unidimensional, (b) their item difficulty is appropriate, (c) item difficulty correlates with the items' EIKEN grades, and (d) the students' confidence levels correlate with their proficiency levels. The results showed that the can-do items are highly reliable and unidimensional. However, the students tended to feel that the items were unchallenging, especially the speaking and listening items.

MISC

 5

Books and Other Publications

 3

Research Projects (Joint Research, Competitive Funding, etc.)

 1