Researcher Achievements

矢入 郁子

ヤイリ イクコ  (Yairi Ikuko)

Basic Information

Affiliation
Professor, Department of Information and Communication Sciences, Faculty of Science and Technology, Sophia University
Degree
Doctor of Engineering (March 1999, The University of Tokyo)

Other name(s)
矢入(江口)郁子
Researcher Number
10358880
ORCID ID
https://orcid.org/0000-0001-7522-0663
J-GLOBAL ID
200901082419968115
researchmap Member ID
6000011105

External Links

Graduated from the Faculty of Engineering, The University of Tokyo, in 1994; completed the master's program of the Graduate School of Engineering in 1996 and the doctoral program in 1999, receiving the degree of Doctor of Engineering. In the same year she became a researcher at the Communications Research Laboratory of the Ministry of Posts and Telecommunications (now the National Institute of Information and Communications Technology), and in 2008 she became an associate professor at Sophia University. She has been engaged in research and development on spatio-temporal information processing for ubiquitous pedestrian ITS, interfaces for elderly and disabled people, the Future Internet, deep learning applied to human behavior data analysis, and brain information processing. Former board member of the Japanese Society for Artificial Intelligence and of the Human Interface Society.

 


Papers

 136
  • 藤吉弘亘, 岡田稔, 小村剛史, 矢入郁子, 香山健太郎, 吉水宏
    情報科学リサーチジャーナル 11 51-60 March 2004
  • Fusako Kusunoki, Ikuko Eguchi Yairi, Takuichi Nishimura
    Proceedings of the 2004 ACM SIGCHI International Conference on Advances in Computer Entertainment Technology 67-73 2004
  • 小山 慎哉, 西村 拓一, 矢入 江口 郁子, 猪木 誠二
    人工知能学会全国大会論文集 4 214-214 2004
    Because visually impaired people perceive their environment through sound, providing audio information via CoBIT is expected to encourage independent outings and activities. We therefore report on a CoBIT system designed for use by visually impaired people and on experiments on the use of the system.
  • 常盤 拓司, 楠 房子, 矢入(江口) 郁子, 西村 拓一, 岩竹 徹
    人工知能学会全国大会論文集 4 213-213 2004  Peer-reviewed
    In an event space equipped with various sensors as well as display, communication, and computing devices for users, participants can exchange information casually. Because a large amount of participant information can also be obtained, such a space has the potential to realize learning support and interactive media art that tightly couple a sensor-equipped interactive board with the real-world space. This presentation considers the potential of this synergy.
  • S Oyama, IE Yairi, S Igi, T Nishimura
    COMPUTERS HELPING PEOPLE WITH SPECIAL NEEDS: PROCEEDINGS 3118 468-475 2004  Peer-reviewed
    We developed a voice guidance system that increases mobility for visually impaired people. We use infrared communication technology called Compact Battery-less Information Terminals. The user-friendly information terminal of this system provides guidance as well as instructions for the system, which can be installed at various locations. We also developed a bone conduction headphone for the system's information terminal, which helps visually impaired users hear other sounds in their surroundings without being disturbed by the audio information generated by the system. To evaluate the usability of this system, we conducted an experiment in which visually impaired people used the system to be guided to a destination.
  • F Kusunoki, T Muramatsu, Yairi, I
    2004 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN & CYBERNETICS, VOLS 1-7 2 1198-1202 2004  Peer-reviewed
    We have developed the MultiAudible system using a touch panel screen and compact battery-less information terminals (CoBIT) for interactive support. We created two systems that generate sounds from different spots on a screen in an attempt to promote user interaction. This paper discusses the analyzed results of questionnaires and shows the effectiveness of individual audio feedback for collaborative work.
  • 矢入郁子, 吉岡 裕, 小松 正典, 猪木 誠二
    ヒューマンインタフェース学会論文誌 5(4) 17-24 November 25, 2003  Peer-reviewed
  • YAIRI IKUKO, Yairi, T, Igi, S
    Proc. of The 11th International Conference on Advanced Robotics, Portugal 434-430 June 30, 2003  Peer-reviewed
  • Yairi Ikuko, Yairi, T, Igi, S
    Proc. of IASTED International Conference AI2003, Austria 166-171 February 10, 2003  Peer-reviewed
  • 小山 慎哉, 香山 健太郎, 矢入(江口) 郁子, 西村 拓一, 猪木 誠二
    人工知能学会全国大会論文集 3 105-105 2003
    In this study, to support elderly and disabled people who have difficulty getting around because of visual impairment, we developed a route guidance system based on a portable, easy-to-operate mobility support terminal. The development uses CoBIT, a battery-less audio receiving terminal that works over infrared light; because infrared light travels in straight lines and stays local, guidance speech can be provided seamlessly at the appropriate places and in the appropriate directions. A terminal using a bone-conduction transducer was also developed so as not to interfere with the user's hearing. To evaluate whether voice guidance with this system is effective, we conducted indoor and outdoor route guidance experiments in which visually impaired people used the developed terminal.
  • 市原 梢, 村松 泰起, 矢入 郁子, 楠 房子, 西村 拓一
    人工知能学会全国大会論文集 3 174-174 2003
    The main character is a girl who is experiencing a dream world that repeats forever. You can step into her dream and enjoy mysterious experiences through sound and images. Touch the images directly to draw out the hidden experiences. Only people wearing CoBIT, a small battery-less information terminal integrated with headphones, can share the sound and image experience.
  • Ikuko Eguchi Yairi, Seiji Igi
    Transactions of the Japanese Society for Artificial Intelligence 18(1) 29-35 2003  Peer-reviewed
    This is the third paper on Robotic Communication Terminals (RCT), which was selected as a theme of Challenge for Realizing Early Profits (CREP) at the 2002 annual conference of JSAI. RCT is a mobility support system for elderly and disabled people, which assists with their impaired elements of mobility - recognition, actuation, and information access. The RCT consists of three types of terminals: "environment-embedded terminal", "user-carried mobile terminal", and "user-carrying mobile terminal". These terminals communicate with one another to provide the users with a comfortable means of mobility. This paper introduces our recent research progress in three parts: developing RCT prototypes in section 3, interfaces and servers for user navigation in section 4, and user surveys about daily mobility problems with 3,503 responses in section 5. Copyright © 2002 JSAI.
  • K Kayama, IE Yairi, S Igi
    2003 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN AND CYBERNETICS, VOLS 1-5, CONFERENCE PROCEEDINGS 5 4631-4636 2003  Peer-reviewed
    This paper describes a method for constructing an environment description using a stereo camera. The method is intended for the safe running of a user-carried semi-autonomous mobile robot in outdoor environments that contain steps and slopes. It builds an environment map as a three-dimensional occupancy grid whose cells are cuboids that are fine in the height direction. Three-dimensional flow is used for self-localization. An elevation map is constructed from the three-dimensional occupancy grid, and dangerous areas are calculated from the elevation map. This makes it possible to distinguish passable slopes from impassable steps. Moreover, the system was installed on an outdoor semi-autonomous electric scooter and operated in the real world.
  • K Kayama, IE Yairi, S Igi
    IROS 2003: PROCEEDINGS OF THE 2003 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4 3 2606-2611 2003  Peer-reviewed
    We have been developing the Robotic Communication Terminals (RCTs), which are integrated into a mobility support system to assist elderly or disabled people who suffer from impaired mobility. The RCT system consists of three types of terminals and one server: an environment-embedded terminal, a user-carried mobile terminal, a user-carrying mobile terminal, and a barrier-free map server. The RCT is an integrated system that can be used to cope with various problems of mobility, and provide suitable support to a wide variety of users. This paper provides an in-depth description of the user-carrying mobile terminal. The system itself is a kind of intelligent wheeled vehicle. It can recognize the surrounding 3D environment through infrared sensors, sonar sensors, and a stereo vision system with three cameras, and avoid hazards semi-autonomously. It also can provide adequate navigation by communicating with the geographic information system (GIS) server and detect vehicles appearing from the blind side by communicating with environment-embedded terminals in the real-world.
  • 矢入郁子, 猪木誠二
    人工知能学会論文誌 18(1) 29-35 2003  Peer-reviewed
  • Yairi Ikuko, Nagou N, Yairi, T, Igi, S
    Proc. of The Second International Conference on Computational Intelligence, Robotics and Autonomous Systems (CIRAS 2003) PS09-5-04 2003  Peer-reviewed
  • Yairi, I. E, Yairi, T, Igi, S
    IASTED International Conference AI2003 2003  Peer-reviewed
  • IE Yairi, T Yairi, S Igi
    PROCEEDINGS OF THE 11TH INTERNATIONAL CONFERENCE ON ADVANCED ROBOTICS 2003, VOL 1-3 435-440 2003  Peer-reviewed
    Real-time intention recognition by sensing a user and its environment is an important function for man-machine interaction. However, when an action cannot be divided into a clear sequence of action fractions because of the duration uncertainty, conventional methods of intention recognition are not applicable to the recognition of partial and short term intentions relating to the action fractions, i.e. micro-level intentions. Our approach to this micro-level intention recognition is to apply a pattern learning function which discovers and utilizes synchronous and temporal relations among the multi-dimensional time-series data of both user and environment. In this paper, we deal with a vehicle driving task as a typical application of the proposed intention recognition method, and examine the realizability of the pattern learning function by Support Vector Machine (SVM). Appropriate pattern learning methods for this problem are discussed as well as our future plan.
  • IE Yairi, N Nagou, T Yairi, S Igi
    SICE 2003 ANNUAL CONFERENCE, VOLS 1-3 1502-1507 2003  Peer-reviewed
    We have been concerned with a recognition of partial and short term intentions, "i.e. micro-level intention recognition," to which conventional methods are not applicable because of action duration uncertainty. Our approach to micro-level intention recognition is to deal with a vehicle driving task as a typical application and to apply a pattern learning function which discovers and utilizes synchronous and temporal relations among the multi-dimensional time-series data of both user and environment. In this paper, appropriate sensor settings of the vehicle for intention recognition are examined by Support Vector Machine (SVM).
  • Kentaro KAYAMA, IKUKO YAIRI, Seiji Igi, Hiroshi Yoshimizu
    Proc. of the IAPR Workshop on Machine Vision Applications, Japan 534-537 December 11, 2002  Peer-reviewed
  • YAIRI I. E.
    Proc. of 4th Asia-Pacific Conference on Simulated Evolution And Learning, 2002 811-815 November 18, 2002  Peer-reviewed
  • 矢入郁子, 佐藤知正, 森武俊
    日本ロボット学会誌 20(4) 437-445 May 15, 2002  Peer-reviewed
    This paper proposes a medical care support system for computational behavioral data accumulation and interactive display based on behavior understanding. The objective of this system is to save doctors the labor of recording their clinical behavior for diagnosis and treatment, and to help doctors use medical information more efficiently. The support system has the following three functions: (1) understanding of a doctor's behavior by visual and audio processing, (2) accumulation of the results of understanding as the doctor's behavioral data, and (3) display of clinical information for decision making, with its content and timing judged from the results of understanding. In this paper, an overview of the proposed support system is introduced, together with the development of a prototype system.
  • K Kayama, IE Yairi, S Igi
    IEEE 5TH INTERNATIONAL CONFERENCE ON INTELLIGENT TRANSPORTATION SYSTEMS, PROCEEDINGS 347-352 2002  Peer-reviewed
    This paper describes the Robotic Communication Terminals (RCT), a mobility support system developed to assist elderly and disabled people who suffer from impaired mobility. The RCT system consists of three types of terminals and one server: an "environment-embedded terminal", a "user-carried mobile terminal", a "user-carrying mobile terminal", and a GIS server. The RCT is an integrated system that can cope with various problems of mobility, and provide suitable support to a wide variety of users according to their physical status, such as the types, levels, combinations and histories of impairment to their eyes, ears, and legs. This paper provides an in-depth description of the "environment-embedded terminal". In this system a camera is located a few meters above ground on the sides of roads in residential areas, railway stations, etc. Moving objects and obstacles are constantly monitored, and the data compiled are presented to users. We implemented the recognition part of this system using plug-in modules in the Windows DLL format, thus making it easy to change the algorithm depending on the time of day and the weather conditions. Moreover, it is shown by experiments in the real world that this system can detect speeding vehicles and send proper warning messages to alert users to the impending danger.
  • 矢入郁子, 猪木 誠二
    人工知能学会論文誌 17(2) 170-176 2002  Peer-reviewed
  • Seiji IGI, Shang LU, YAIRI IKUKO
    Journal of the Communications Research Laboratory 48(3) 71-80 September 2001  Invited
  • YAIRI IKUKO, Seiji IGI
    The 5th World Multiconference on Systemics, Cybernetics and Informatics, Orlando, USA 36-41 July 22, 2001  Peer-reviewed
  • Yairi Ikuko, Seiji IGI
    2001 IEEE Intelligent Vehicles Symposium, Tokyo, Japan 255-260 May 15, 2001  Peer-reviewed
  • Igi, S., Lu, S., Yairi, I.E.
    Journal of the Communications Research Laboratory 48(3) 2001
  • YAIRI I. E.
    Proc. 7th International Workshop on Mobile Multimedia Communications, 2000 3B-3, October 23, 2000  Peer-reviewed
  • IE Yairi, S Igi
    INTELLIGENT AUTONOMOUS SYSTEMS 6 692-697 2000  Peer-reviewed
    In this article, we propose a self-sustained moving support system for aged and disabled persons that comprehensively assists with their impaired abilities of recognition, actuation, and information access. It consists of two types of subsystems. One is the "environment-embedded system," which is fixed in the real world, for example on walkways, in shopping malls, and at stations. The other is the "mobile system," which accompanies the user. These systems communicate with each other and connect the real world, computer networks, and users to provide physically impaired people with mobility and freedom. This article discusses the difficulty of self-sustained moving for aged and disabled persons. Their real-life situations of self-sustained moving are surveyed by a mail questionnaire sent to 359 aged and disabled persons, and by roundtable discussions with 13 persons. An idea for developing and providing the support system is also described.
  • 矢入郁子, 佐藤知正, 森武俊
    日本ロボット学会誌 15(5) 766-772 July 15, 1997  Peer-reviewed
    This paper deals with computerized description of medical care, a new way to support medical information processing, and shows its feasibility through an experiment on visual understanding of medical care given by a doctor. Computerized description of medical care supports the collection of medical information by sensing the doctor's behavior during medical care and inputting it automatically into a computer. The computer senses the doctor's behavior using visual, aural and force sensors, and uses these sensor outputs to understand what care is being performed. Visual understanding of medical care is a core function of the computerized description of medical care. Taking the application of medicine to the ear, nose and throat in otolaryngology clinics as a typical example, visual understanding of medical care is successfully performed. The experiment showed that visual behavior understanding is more easily realized by monitoring the objects used for the behavior than by the conventional method of monitoring the persons. The computerized description of medical care presented in this paper will contribute to improving the quality of medical service by reducing the burden of medical information inputting and recording on doctors.
  • Eguchi, I, T Sato, T Mori
    IROS 96 - PROCEEDINGS OF THE 1996 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS - ROBOTIC INTELLIGENCE INTERACTING WITH DYNAMIC WORLDS, VOLS 1-3 3 1573-1578 1996年  査読有り
    This paper proposes a system for computerized description of medical care, and shows its possibility through an experiment on visual understanding of medical care given by a doctor. The proposed system supports the creation of medical documents by sensing the doctor's behavior during medical care and inputting it into a computer. In the system, the computer senses the doctor's behavior using visual, aural and force sensors, and uses these sensor outputs to understand what care is being performed. Visual understanding of medical care is a core function of the system. Taking the application of medicine to the ear, nose and throat in otolaryngology clinics as a typical example of medical care, visual understanding of medical care is successfully performed. The procedure of visual understanding in the experiment is as follows: first, CCD cameras monitor the objects (tools, medicine and the patient's affected body parts) whose states change in accordance with the doctor's actions; second, the doctor's action is identified from the change in the state of these objects; finally, the doctor's behavior is understood as the sequence of these identified actions. In this procedure, visual behavior understanding is more easily realized by monitoring the objects used for the behavior than by the conventional method of monitoring the persons. The medical care description in this paper will contribute to improving the quality of medical service by reducing the burden of medical information recording on doctors.
  • Yairi Ikuko, Masayuki Nakao, Yotaro Hatamura
    Annual meeting of ASPE 61-64 1994  Peer-reviewed

MISC

 121

Lectures and Oral Presentations

 202

Research Projects (Joint Research and Competitive Funding)

 15

Academic Contribution Activities

 1

Social Contribution Activities

 22