Research Achievements

亀田 裕介

Yusuke Kameda

Basic Information

Affiliation
Assistant Professor, Department of Information and Communication Sciences, Faculty of Science and Technology, Sophia University
Degrees
Doctor of Engineering (September 2012, Chiba University)
Master of Engineering (March 2008, Chiba University)

Contact
kameda@sophia.ac.jp
Researcher Number
50711553
ORCID iD
https://orcid.org/0000-0001-8503-4098
J-GLOBAL ID
200901044504595965
researchmap Member ID
6000014798

External Links

This is a teaching I inherited from my mentors: natural science (pure science) is the means of clarifying what can and cannot be done, while engineering is the scientific making of things. Among these, information science and engineering is the discipline that regards every phenomenon and matter as a computational process. Why not join us in study, acquiring the specialized knowledge and skills to deliver new value to the world?

My specialty is image processing, in particular the estimation of motion and flow in video. Motion and flow information estimated from video is expected to find wide application, including object recognition by computers, self-localization and obstacle detection for robots and automobiles, and the measurement and analysis of fluids. I conduct research on fast, high-accuracy motion estimation. Beyond research, I also teach practical ICT skills such as programming and server administration.


Papers

 89
  • Takaki Fushimi, Yusuke Kameda
    International Workshop on Advanced Imaging Technology (IWAIT) 2024 13164(131642Q) 1-5 May 2, 2024  Peer-reviewed, Last author
  • Takumi Eda, Yusuke Kameda
    International Workshop on Advanced Imaging Technology (IWAIT) 2024 13164(1316428) 1-5 May 2, 2024  Peer-reviewed, Last author
  • Keisuke Kaji, Yasuyo Kita, Ichiro Matsuda, Susumu Itoh, Yusuke Kameda
    2022 Picture Coding Symposium (PCS) 79-83 December 7, 2022  Peer-reviewed
    An autoregressive image generative model that estimates the conditional probability distributions of image signals pel-by-pel is a promising tool for lossless image coding. In this paper, a generative model based on a convolutional neural network (CNN) was combined with a locally trained adaptive predictor to improve its accuracy. Furthermore, sets of parameters that adjust the estimated probability distribution were numerically optimized for each image to minimize the resulting coding rate. Simulation results indicate that the proposed method improves the coding efficiency obtained by the CNN-based model for most of the tested images.
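As a toy illustration of the rate objective behind such probability-model optimization (this is not the authors' CNN-based implementation; the discretized Gaussian model and all values below are assumptions), the ideal code length of a pel is -log2 of the probability assigned to its actual value, so a sharper, better-centred distribution lowers the coding rate:

```python
import math

def discretized_gaussian_pmf(mean, scale, levels=256):
    """Toy probability model over 8-bit pel values (assumed, not the CNN model)."""
    weights = [math.exp(-0.5 * ((v - mean) / scale) ** 2) for v in range(levels)]
    total = sum(weights)
    return [w / total for w in weights]

def code_length_bits(pels, predictions, scale):
    """Ideal total code length: -log2 of the probability of each actual pel."""
    bits = 0.0
    for x, mu in zip(pels, predictions):
        pmf = discretized_gaussian_pmf(mu, scale)
        bits += -math.log2(pmf[x])
    return bits

pels = [100, 102, 101, 99, 103]
accurate = code_length_bits(pels, [100, 101, 101, 100, 102], scale=2.0)
poor = code_length_bits(pels, [128] * 5, scale=2.0)  # mis-centred predictor
```

Numerically optimizing the distribution parameters (here, `scale`) per image to minimize this total is the same idea as the per-image parameter optimization described in the abstract.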
  • Kodai Ogawa, Yasuyo Kita, Ichiro Matsuda, Susumu Itoh, Yusuke Kameda
    International Workshop on Advanced Imaging Technology (IWAIT) 2022 (1217718) 1-5 May 1, 2022  Peer-reviewed
  • Yusuke Kameda
    International Workshop on Advanced Imaging Technology (IWAIT) 2022 (121770E) 1-4 May 1, 2022  Peer-reviewed, Lead author, Last author, Corresponding author
  • Hiroki Kojima, Yasuyo Kita, Ichiro Matsuda, Susumu Itoh, Yusuke Kameda, Kyohei Unno, Kei Kawamura
    International Workshop on Advanced Imaging Technology (IWAIT) 2022 (1217705) 1-6 May 1, 2022  Peer-reviewed
  • Shuichi Namiki, Shunichi Sato, Yusuke Kameda, Takayuki Hamamoto
    International Workshop on Advanced Imaging Technology (IWAIT) 2022 (1217702) 1-6 May 1, 2022  Peer-reviewed, Corresponding author
  • Taiki Kure, Haruka Danil Tsuchiya, Yusuke Kameda, Hiroki Yamamoto, Daisuke Kodaira, Junji Kondoh
    Energies 15(8) 2855-2855 April 13, 2022  Peer-reviewed
    The power-generation capacity of grid-connected photovoltaic (PV) power systems is increasing. As output power forecasting is required by electricity market participants and utility operators for the stable operation of power systems, several methods have been proposed using physical and statistical approaches for various time ranges. A short-term (30 min ahead) forecasting method had previously been proposed for multiple PV systems using motion estimation. This method forecasts PV power generation a short time ahead by estimating the motion between two geographical images of the distributed PV power systems. In this method, the parameter λ, which controls the smoothness of the resulting motion vector field, strongly affects the forecasting accuracy. This study focuses on the parameter λ and evaluates the effect of changing it on forecasting accuracy. Forecasting was conducted on 101 PV systems during periods with drastic power output changes. The results indicate that the mean absolute error of the proposed method with the best parameter is 10.3%, whereas that of the persistence forecasting method is 23.7%. Therefore, the proposed method is effective in forecasting periods when PV output changes drastically within a short time interval.
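A minimal sketch of the role of such a smoothness weight, using a 1-D quadratic model rather than the paper's motion estimator (the data values, step size, and iteration count below are made up): minimizing Σ(u−d)² + λ·Σ(u[i+1]−u[i])², a larger λ yields a smoother vector field at the cost of data fidelity.

```python
def smooth_motion(data, lam, iters=2000, lr=0.01):
    """Minimise sum((u - d)^2) + lam * sum((u[i+1] - u[i])^2) by gradient descent."""
    u = list(data)
    n = len(u)
    for _ in range(iters):
        grad = [2.0 * (u[i] - data[i]) for i in range(n)]
        for i in range(n - 1):
            g = 2.0 * lam * (u[i + 1] - u[i])
            grad[i] -= g       # d/du[i]   of the smoothness term
            grad[i + 1] += g   # d/du[i+1] of the smoothness term
        u = [u[i] - lr * grad[i] for i in range(n)]
    return u

def roughness(u):
    """Sum of squared first differences of the motion field."""
    return sum((u[i + 1] - u[i]) ** 2 for i in range(len(u) - 1))

data = [0.0, 4.0, 0.0, 4.0, 0.0, 4.0]   # noisy 1-D motion estimates (made up)
weak = smooth_motion(data, lam=0.1)      # weak regularization: follows data
strong = smooth_motion(data, lam=10.0)   # strong regularization: nearly flat
```

The paper's point is that forecasting accuracy hinges on choosing λ between these two extremes.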
  • Kurumi Kataoka, Yusuke Kameda, Shunichi Sato, Takayuki Hamamoto
    Proc. of The 11th International Workshop on Image Media Quality and its Applications, IMQA2022 82-85 March 2022  Peer-reviewed, Corresponding author
  • Yuma Masui, Yusuke Kameda, Shunichi Sato, Takayuki Hamamoto
    Proc. of The 11th International Workshop on Image Media Quality and its Applications, IMQA2022 78-81 March 2022  Peer-reviewed, Corresponding author
  • Shinnosuke Kurata, Toshinori Otaka, Yusuke Kameda, Takayuki Hamamoto
    IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences E105.A(1) 82-86 January 1, 2022  Peer-reviewed, Corresponding author
    We propose an HDR (high dynamic range) reconstruction method for an image sensor with a pixel-parallel ADC (analog-to-digital converter) that non-destructively reads out intermediate exposure images. We report the circuit design for such an image sensor and an evaluation of the basic HDR reconstruction method.
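A simplified sketch of why non-destructive intermediate readouts help HDR reconstruction (toy radiances and exposure times; the paper's circuit-level method is not reproduced here): each pel's radiance can be estimated from its longest non-saturated readout.

```python
FULL_WELL = 255  # saturation level of the pel value (assumed)

def reconstruct_hdr(readouts, times):
    """Estimate per-pel radiance from non-destructive readouts at several
    exposure times, using the longest non-saturated readout of each pel."""
    num_pels = len(readouts[0])
    radiance = [0.0] * num_pels
    for i in range(num_pels):
        for k in reversed(range(len(times))):   # longest exposure first
            if readouts[k][i] < FULL_WELL:
                radiance[i] = readouts[k][i] / times[k]
                break
        else:
            radiance[i] = FULL_WELL / times[0]  # saturated even at t_min
    return radiance

times = [0.125, 1.0]                # intermediate and full exposure (made up)
scene = [10.0, 2000.0]              # true radiances: one dark, one bright pel
readouts = [[min(r * t, FULL_WELL) for r in scene] for t in times]
radiance = reconstruct_hdr(readouts, times)
```

The bright pel saturates at the full exposure, but its intermediate readout still recovers the correct radiance.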
  • Yuya Kamataki, Yusuke Kameda, Yasuyo Kita, Ichiro Matsuda, Susumu Itoh
    IEICE Transactions on Information and Systems E104.D(10) 1572-1575 October 1, 2021  Peer-reviewed
    This paper proposes a lossless coding method for HDR color images stored in a floating-point format called Radiance RGBE. In this method, the three mantissa parts and a common exponent part, each represented at 8-bit depth, are encoded using a block-adaptive prediction technique with some modifications that account for the data structure.
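For reference, the Radiance RGBE format mentioned above packs three 8-bit mantissas with one shared 8-bit exponent; one common decoding convention (used here as background, not taken from the paper) is m · 2^(e−136), i.e. mantissa/256 scaled by 2^(e−128):

```python
import math

def rgbe_to_float(r, g, b, e):
    """Decode one Radiance RGBE pel to floating-point RGB radiance
    (common convention: mantissa * 2**(e - 136); e == 0 means zero)."""
    if e == 0:
        return (0.0, 0.0, 0.0)
    scale = math.ldexp(1.0, e - 136)  # == 2**(e - 128) / 256
    return (r * scale, g * scale, b * scale)
```

Because the exponent is shared by all three channels, the channels scale together, which is why a coder for this format treats the mantissa planes and the exponent plane as structurally different data.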
  • Kyohei Unno, Yusuke Kameda, Yasuyo Kita, Ichiro Matsuda, Susumu Itoh, Kei Kawamura
    2021 IEEE International Conference on Image Processing (ICIP) 1-5 September 19, 2021  Peer-reviewed
    We previously proposed a novel lossless coding method that utilizes example search and adaptive prediction within a framework of probability model optimization for monochrome video. In this paper, we improve the adaptive prediction in terms of coding performance and processing time. More precisely, we made modifications to the following three items: (a) reference pel arrangements, (b) motion vector derivation, and (c) optimal selection of predictors. Experimental results show that the proposed method certainly improves the coding performance and the processing time compared to our previous method, and achieves better coding performance than the VVC-based lossless video coding scheme.
  • Yutaka Fukuchi, Yusuke Kameda, Joji Maeda
    The 12th International Conference on Optics-photonics Design and Fabrication (ODF 2020) 02PS2-06 1-2 June 2021  Peer-reviewed
  • Misaki Shikakura, Yusuke Kameda, Takayuki Hamamoto
    IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences E104.A(6) 907-911 June 1, 2021  Peer-reviewed, Corresponding author
    This paper reports the evolution and application potential of image sensors with high-speed brightness gradient sensors. We propose an adaptive exposure time control method using the apparent motion estimated by this sensor, and evaluate the results for changes in illuminance and global/local motion.
  • Kurumi Kataoka, Yusuke Kameda, Takayuki Hamamoto
    ITE Transactions on Media Technology and Applications 9(2) 128-135 April 2021  Peer-reviewed, Corresponding author
    We propose an adaptive exposure-time-control method for image sensors, which can control the exposure time for each pixel to reconstruct a high-dynamic-range image, while suppressing blown-out highlights and blocked-up shadows, according to the luminance and contrast of the scene. First, the proposed method determines the exposure time that maximizes the entropy of the entire image, as an image with high entropy contains more object details. In order to estimate the exposure time appropriate for the light and dark areas in the scene, the proposed method divides the image into blocks and estimates the exposure time that maximizes the entropy for each block. Because the proposed method captures and estimates several exposure times simultaneously, the time required for adjusting the exposure time is reduced. Simulation experiments show the effectiveness of the proposed method.
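The entropy criterion above can be sketched as follows (a toy 1-D scene with made-up radiances and exposure times, not the authors' sensor model): among candidate exposure times, pick the one whose captured image has maximum entropy.

```python
import math
from collections import Counter

def entropy(pels):
    """Shannon entropy (bits) of the pel-value histogram."""
    counts = Counter(pels)
    n = len(pels)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def capture(scene, exposure, max_val=255):
    """Toy sensor: linear response with saturation at max_val."""
    return [min(int(r * exposure), max_val) for r in scene]

def best_exposure(scene, candidates):
    """Exposure time whose captured image has the highest entropy."""
    return max(candidates, key=lambda t: entropy(capture(scene, t)))

scene = [1, 5, 20, 80, 200, 800]            # made-up per-pel radiances
best = best_exposure(scene, [0.01, 0.3, 100])
```

Too short an exposure crushes everything into the shadows and too long an exposure saturates the highlights; the intermediate exposure separates the most values and wins. The paper applies this per block to handle mixed bright/dark scenes.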
  • Hiroki Kojima, Yusuke Kameda, Yasuyo Kita, Ichiro Matsuda, Susumu Itoh
    International Workshop on Advanced Imaging Technology (IWAIT) 2021 (1176605) 1-5 March 13, 2021  Peer-reviewed
  • Kodai Ogawa, Yusuke Kameda, Yasuyo Kita, Ichiro Matsuda, Susumu Itoh
    International Workshop on Advanced Imaging Technology (IWAIT) 2021 (117660M) 1-4 March 13, 2021  Peer-reviewed
  • Kiyotaka Iwabuchi, Yusuke Kameda, Takayuki Hamamoto
    IEEE Access 9 30080-30094 February 2021  Peer-reviewed, Corresponding author
    Photon counting imaging can be used to capture clearly photon-limited scenes. In photon counting imaging, information on incident photons is obtained as binary frames (bit-plane frames), which are transformed into a multi-bit image in the reconstruction process. In this process, it is necessary to apply a deblurring method to enable the capture of dynamic scenes without motion blur. In this article, a deblurring method for the high-quality bit-plane frame reconstruction of dynamic scenes is proposed. The proposed method involves the deblurring of units of object motion within a scene through the application of motion compensation to pixels sharing the same motions. This method achieves more efficient motion blur suppression than the application of simple deblurring to pixel block or spatial region units. It also applies a novel technique for accurate motion estimation from the bit-plane frame even in photon-limited situations through the statistical evaluation of the temporal variation of photon incidence. In addition to deblurring, our experimental results also revealed that the proposed method can be applied for denoising, which improves the peak signal-to-noise ratio by 1.2 dB. In summary, the proposed method for bit-plane reconstruction achieves high quality imaging even in photon-limited dynamic scenes.
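A static-scene sketch of the bit-plane reconstruction step (the paper's motion compensation and deblurring are omitted, and the photon rates below are made up): each binary frame records whether at least one photon arrived, so with Poisson arrivals the rate can be recovered from the fraction of ones as λ = −ln(1 − p).

```python
import math
import random

random.seed(0)  # deterministic toy data

def simulate_bitplanes(rates, frames):
    """Binary frames: a pel reads 1 iff at least one photon arrived
    (Poisson arrivals with a per-pel rate), as in photon counting imaging."""
    return [[1 if random.random() < 1.0 - math.exp(-lam) else 0 for lam in rates]
            for _ in range(frames)]

def reconstruct(bitplanes):
    """Maximum-likelihood per-pel photon rate from the fraction of ones:
    P(bit = 1) = 1 - exp(-rate), hence rate = -ln(1 - p)."""
    frames = len(bitplanes)
    rates = []
    for i in range(len(bitplanes[0])):
        p = sum(frame[i] for frame in bitplanes) / frames
        p = min(p, 1.0 - 1e-9)  # guard: pel that reads 1 in every frame
        rates.append(-math.log(1.0 - p))
    return rates

true_rates = [0.05, 0.5, 2.0]   # photons per frame (made-up values)
est = reconstruct(simulate_bitplanes(true_rates, 5000))
```

For dynamic scenes this simple temporal accumulation blurs moving objects, which is exactly the problem the paper's motion-compensated reconstruction addresses.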
  • Yusuke Kameda, Ichiro Matsuda, Susumu Itoh
    2020 28th European Signal Processing Conference (EUSIPCO) 605-609 January 24, 2021  Peer-reviewed, Lead author, Corresponding author
    Scene flow is a three-dimensional (3D) vector field with velocity in the depth direction and optical flow that represents the apparent motion, which can be estimated from RGB-D videos. Scene flow can be used to estimate the 3D motion of objects with a camera; thus, it is used for obstacle detection and self-localization. It can potentially be applied to inter prediction in 3D video coding. The scene-flow estimation method based on the variational method requires numerical computations of nonlinear equations that control the regularization strength to prevent excessive smoothing due to scene-flow regularization. Because numerical stability depends on multi-channel images and computational parameters such as regularization weights, it is difficult to determine appropriate parameters that satisfy the stability requirements. Therefore, we propose a numerical computation method to derive a numerical stability condition that does not depend on the color of the image or the weight of the regularization term. This simplifies the traditional method and facilitates the setting up of various regularization weight functions. Finally, we evaluate the performance of the proposed method.
  • Kyohei Unno, Koji Nemoto, Yusuke Kameda, Ichiro Matsuda, Susumu Itoh, Sei Naito
    Proceedings of The 27th IEEE International Conference on Image Processing (ICIP 2020) 1103-1107 October 2020  Peer-reviewed
  • M. Shimamoto, Y. Kameda, T. Hamamoto
    IEICE Transactions on Information and Systems E103D(10) 2067-2071 October 2020  Peer-reviewed, Corresponding author
  • Yutaka Fukuchi, Yusuke Kameda
    2020 IEEE Photonics Conference (IPC) 1-2 September 2020  Peer-reviewed
  • Taiki Kure, Yosui Miyazaki, Junji Kondoh, Yusuke Kameda
    2020 4th International Conference on Smart Grid and Smart Cities (ICSGSC) 24-28 August 18, 2020  Peer-reviewed
  • 園山 隼,前田 慶博,亀田 裕介,浜本 隆之
    3D Image Conference 2020 (5-2) 1-4 July 2020  Peer-reviewed
  • Kyohei Unno, Yusuke Kameda, Ichiro Matsuda, Susumu Itoh, Sei Naito
    ITE Transactions on Media Technology and Applications 8(3) 132-139 July 2020  Peer-reviewed
  • Yusuke Kameda, Yoshihiro Maeda, Takayuki Hamamoto
    Proc. SPIE, The 23rd International Workshop on Advanced Image Technology (IWAIT 2020) 115150K 1-5 June 1, 2020  Peer-reviewed, Lead author, Corresponding author
  • Yuya Kamataki, Yusuke Kameda, Ichiro Matsuda, Susumu Itoh
    Proc. SPIE, The 23rd International Workshop on Advanced Image Technology (IWAIT 2020) 115150W 1-5 June 1, 2020  Peer-reviewed
  • Koji Nemoto, Yusuke Kameda, Ichiro Matsuda, Susumu Itoh, Kyohei Unno, Sei Naito
    Proc. SPIE, The 23rd International Workshop on Advanced Image Technology (IWAIT 2020) 115150U 1-5 June 1, 2020  Peer-reviewed
  • Takumi Owada, Yusuke Kameda, Ichiro Matsuda, Susumu Itoh
    Proc. SPIE, The 23rd International Workshop on Advanced Image Technology (IWAIT 2020) 115150X 1-5 June 1, 2020  Peer-reviewed
  • Shinnosuke Kurata, Toshinori Ootaka, Yusuke Kameda, Takayuki Hamamoto
    The Tenth International Workshop on Image Media Quality and its Applications, IMQA2020 33-36 March 2020  Peer-reviewed, Corresponding author
  • Misaki Shikakura, Yusuke Kameda, Takayuki Hamamoto
    The Tenth International Workshop on Image Media Quality and its Applications, IMQA2020 37-40 March 2020  Peer-reviewed, Corresponding author
  • Yosui Miyazaki, Yusuke Kameda, Junji Kondoh
    Energies 12(24) 4815-4828 December 17, 2019  Peer-reviewed
  • Kyohei Unno, Yusuke Kameda, Ichiro Matsuda, Susumu Itoh, Sei Naito
    IEICE Transactions on Information and Systems (Japanese Edition) J102-D(10) 619-627 October 2019  Peer-reviewed
  • Kyohei Unno, Yusuke Kameda, Ichiro Matsuda, Susumu Itoh, Sei Naito
    Proceedings of the 27th European Signal Processing Conference (EUSIPCO 2019) 1-5 September 2019  Peer-reviewed
  • Koji Nemoto, Yusuke Kameda, Ichiro Matsuda, Susumu Itoh
    Proc. SPIE, The 22nd International Workshop on Advanced Image Technology (IWAIT 2019) 11049(48) 1-5 March 22, 2019  Peer-reviewed
  • Jun Sakurai, Tomokazu Ishikawa, Yusuke Kameda, Ichiro Matsuda, Susumu Itoh
    Proceedings of SPIE - The International Society for Optical Engineering 11049(31) 1-5 March 22, 2019  Peer-reviewed
    In general, "drawing collapse" is a term used when very low-quality animated content is broadcast: for example, the perspective of a scene is unnaturally distorted, or the sizes of people and buildings are abnormally unbalanced. In our research, the possibility of automatically discriminating drawing collapse is explored with the aim of reducing the workload of content checks typically done by the animation director. In this paper, we focus only on the faces of animated characters as a preliminary task, and distances as well as angles between several feature points on facial parts are used as input data. By training a support vector machine (SVM) on input data extracted from both positive and negative example images, about 90% discrimination accuracy is obtained when the same character is tested.
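The feature pipeline can be sketched as follows, with a nearest-centroid classifier standing in for the SVM and a hypothetical four-point facial layout (none of this is the paper's data; it only illustrates distances-and-angles features):

```python
import math

def features(points):
    """Pairwise distances and angles between feature points (toy version)."""
    feats = []
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            dx = points[j][0] - points[i][0]
            dy = points[j][1] - points[i][1]
            feats.append(math.hypot(dx, dy))   # distance
            feats.append(math.atan2(dy, dx))   # angle
    return feats

def train_centroids(xs, ys):
    """Per-class mean feature vector (stand-in for SVM training)."""
    groups = {}
    for x, y in zip(xs, ys):
        groups.setdefault(y, []).append(x)
    return {y: [sum(c) / len(v) for c in zip(*v)] for y, v in groups.items()}

def classify(centroids, x):
    return min(centroids,
               key=lambda y: sum((a - b) ** 2 for a, b in zip(centroids[y], x)))

# Hypothetical reference layout of four facial feature points
ref = [(0.0, 0.0), (2.0, 0.0), (1.0, 1.0), (1.0, 2.0)]

def warp(points, sx, sy):
    return [(x * sx, y * sy) for x, y in points]

train_x = [features(warp(ref, 1.0, 1.0)), features(warp(ref, 1.05, 0.95)),
           features(warp(ref, 2.0, 0.5)), features(warp(ref, 1.8, 0.6))]
train_y = ["on-model", "on-model", "collapsed", "collapsed"]
centroids = train_centroids(train_x, train_y)
```

Faces close to the reference proportions land near the "on-model" centroid; strongly distorted ones land near "collapsed".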
  • Yuya Yamaki, Yusuke Kameda, Ichiro Matsuda, Susumu Itoh
    International Workshop on Advanced Image Technology (IWAIT) 2019 11049(32) 1-5 March 22, 2019  Peer-reviewed
  • Naoaki Kataoka, Tomokazu Ishikawa, Yusuke Kameda, Ichiro Matsuda, Susumu Itoh
    Proceedings of SPIE - The International Society for Optical Engineering 11049(41) 1-5 March 2019  Peer-reviewed
    This paper describes a method for creating cel-style CG animations of waving hair. In this method, gatherings of air are considered as virtual circles moving at a constant velocity, and hair bundles are modeled as elastic bodies. Deformation of the hair bundles is then calculated by simulating collision events between the virtual circles and the hair bundles. Since the method is based on the animator's technique used in creation of the traditional cel animations, it is expected to suppress a feeling of strangeness that is often introduced by the conventional procedural animation techniques.
  • Tomokazu Ishikawa, Yusuke Kameda, Masanori Kakimoto, Ichiro Matsuda, Susumu Itoh
    IIEEJ Transactions on Image Electronics and Visual Computing 6(2) 82-88 December 15, 2018  Peer-reviewed
  • Naoaki Kataoka, Tomokazu Ishikawa, Yusuke Kameda, Ichiro Matsuda, Susumu Itoh
    SIGGRAPH Asia 2018 Posters 73:1-73:2 December 4, 2018  Peer-reviewed
  • Ichiro Matsuda, Tomokazu Ishikawa, Yusuke Kameda, Susumu Itoh
    Proceedings of the 26th European Signal Processing Conference (EUSIPCO-2018) 151-155 September 2018  Peer-reviewed
  • Yosui Miyazaki, Junji Kondoh, Yusuke Kameda
    Grand Renewable Energy 2018 Proceedings (O-Pv-8-5) 1-4 June 2018  Peer-reviewed
  • Yuta Ishida, Yusuke Kameda, Tomokazu Ishikawa, Ichiro Matsuda, Susumu Itoh
    IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences E101A(6) 992-996 June 1, 2018  Peer-reviewed, Corresponding author
    This paper proposes a lossy image coding method for still images. In this method, recursive and non-recursive type intra prediction techniques are adaptively selected on a block-by-block basis. The recursive-type intra prediction technique applies a linear predictor to each pel within a prediction block in a recursive manner, and thus typically produces smooth image values. In this paper, the non-recursive type intra prediction technique is extended from the angular prediction technique adopted in the H.265/HEVC video coding standard to enable interpolative prediction to the maximum possible extent. The experimental results indicate that the proposed method achieves better coding performance than the conventional method that only uses the recursive-type prediction technique.
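The recursive-type prediction can be sketched as follows (the coefficients and boundary values are made up; the actual predictors are specified in the paper): a linear predictor is applied to each pel in raster order, reusing previously predicted pels inside the block, which propagates smooth values.

```python
def recursive_intra_predict(left_col, top_row, size, a=0.5, b=0.5):
    """Fill a block pel-by-pel with a linear predictor applied recursively:
    each pel = a * left neighbour + b * top neighbour (coefficients assumed)."""
    block = [[0.0] * size for _ in range(size)]
    for y in range(size):
        for x in range(size):
            left = block[y][x - 1] if x > 0 else left_col[y]
            top = block[y - 1][x] if y > 0 else top_row[x]
            block[y][x] = a * left + b * top
    return block

# Flat boundary pels propagate into a flat block
flat = recursive_intra_predict([100.0] * 4, [100.0] * 4, 4)
```

Flat boundaries yield a flat prediction, illustrating why this mode suits smooth regions and why an angular, non-recursive mode is the better choice for directional structures.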
  • Ryota Nakazato, Hiroyuki Funakoshi, Tomokazu Ishikawa, Yusuke Kameda, Ichiro Matsuda, Susumu Itoh
    Proceedings of the 21st International Workshop on Advanced Image Technology (IWAIT 2018) (80) 1-4 January 2018  Peer-reviewed
  • Idomu Fujita, Tomokazu Ishikawa, Yusuke Kameda, Ichiro Matsuda, Susumu Itoh
    Proceedings of the 21st International Workshop on Advanced Image Technology (IWAIT 2018) (86) 1-4 January 2018  Peer-reviewed
  • Akihiro Miyazawa, Yusuke Kameda, Tomokazu Ishikawa, Ichiro Matsuda, Susumu Itoh
    Proceedings of the 21st International Workshop on Advanced Image Technology (IWAIT 2018) (84) 1-4 January 2018  Peer-reviewed, Corresponding author
  • Toru Sumi, Yuta Inamura, Yusuke Kameda, Tomokazu Ishikawa, Ichiro Matsuda, Susumu Itoh
    IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences E100A(11) 2351-2354 November 1, 2017  Peer-reviewed, Corresponding author
    We previously proposed a lossless image coding scheme using example-based probability modeling, wherein the probability density function of image signals was dynamically modeled pel-by-pel. To appropriately estimate the peak positions of the probability model, several examples, i.e., sets of pels whose neighborhoods are similar to the local texture of the target pel to be encoded, were collected from the already encoded causal area via template matching. This scheme primarily makes use of non-local information in image signals. In this study, we introduce a prediction technique into the probability modeling to offer a better trade-off between the local and non-local information in the image signals.
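A 1-D toy version of the example search described above (the real scheme works on 2-D causal neighbourhoods and feeds the examples into an optimized probability model): pels whose preceding samples match the target's template are collected from the already encoded part of the signal.

```python
def find_examples(signal, pos, tlen=3, k=3):
    """Collect the k pels whose preceding tlen-sample template best matches
    the template of the pel at `pos`, searching only the causal area."""
    template = signal[pos - tlen:pos]
    candidates = []
    for q in range(tlen, pos):  # already encoded positions only
        cand = signal[q - tlen:q]
        dist = sum((a - b) ** 2 for a, b in zip(cand, template))
        candidates.append((dist, signal[q]))
    candidates.sort()
    return [value for _, value in candidates[:k]]

# A repeating toy signal; the pel at index 11 is about to be encoded
signal = [10, 11, 12, 50, 10, 11, 12, 50, 10, 11, 12, 99]
examples = find_examples(signal, 11)
```

The two exact template matches both return the value 50, so the probability model's peak would be placed at 50 for a pel following the pattern 10, 11, 12.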
  • Shota Kasai, Yusuke Kameda, Tomokazu Ishikawa, Ichiro Matsuda, Susumu Itoh
    IEICE Transactions on Information and Systems E100D(9) 2039-2043 September 2017  Peer-reviewed, Corresponding author
    We propose a method of interframe prediction in depth map coding that uses pixel-wise 3D motion estimated from encoded textures and depth maps. By using the 3D motion, an approximation of the depth map frame to be encoded is generated and used as a reference frame of block-wise motion compensation.
  • Ichiro Matsuda, Tomokazu Ishikawa, Yusuke Kameda, Susumu Itoh
    Proceedings of the 25th European Signal Processing Conference (EUSIPCO-2017) 1485-1489 August 2017  Peer-reviewed

MISC

 169

Presentations

 192

Major Courses Taught

 32

Professional Memberships

 14

Works

 1

Major Joint Research and Competitive Funding Projects

 13