Curriculum Vitae

Yusuke Kameda

  (亀田 裕介)

Profile Information

Affiliation
Assistant Professor, Department of Information and Communication Sciences, Faculty of Science and Technology, Sophia University
Degree
Doctor of Philosophy in Computer Science (Sep 2012, Chiba University)
Master of Engineering (Mar 2008, Chiba University)

Contact information
kamedasophia.ac.jp
Researcher number
50711553
ORCID ID
 https://orcid.org/0000-0001-8503-4098
J-GLOBAL ID
200901044504595965
researchmap Member ID
6000014798


Natural science is a way of clarifying what is possible and what is impossible, and engineering is scientific manufacturing. Within these fields, computer science is a discipline that regards all phenomena and things as computational processes. Let's study together to acquire specialized knowledge and skills and provide new value to the world.

My specialty is image processing, in particular motion and flow estimation in video. Motion and flow information estimated from images is expected to be widely applied to object recognition by computers, self-localization and obstacle detection for robots and cars, and fluid measurement and analysis. We are conducting research to estimate motion at high speed and with high accuracy. In addition to research, we also teach practical ICT skills such as programming and server administration.


Papers

 89
  • Takaki Fushimi, Yusuke Kameda
    International Workshop on Advanced Imaging Technology (IWAIT) 2024, 13164(131642Q) 1-5, May 2, 2024  Peer-reviewed, Last author
  • Takumi Eda, Yusuke Kameda
    International Workshop on Advanced Imaging Technology (IWAIT) 2024, 13164(1316428) 1-5, May 2, 2024  Peer-reviewed, Last author
  • Keisuke Kaji, Yasuyo Kita, Ichiro Matsuda, Susumu Itoh, Yusuke Kameda
    2022 Picture Coding Symposium (PCS), 79-83, Dec 7, 2022  Peer-reviewed
    An autoregressive image generative model that estimates the conditional probability distributions of image signals pel-by-pel is a promising tool for lossless image coding. In this paper, a generative model based on a convolutional neural network (CNN) was combined with a locally trained adaptive predictor to improve its accuracy. Furthermore, sets of parameters that adjust the estimated probability distribution were numerically optimized for each image to minimize the resulting coding rate. Simulation results indicate that the proposed method improves the coding efficiency obtained by the CNN-based model for most of the tested images.
  • Kodai Ogawa, Yasuyo Kita, Ichiro Matsuda, Susumu Itoh, Yusuke Kameda
    International Workshop on Advanced Imaging Technology (IWAIT) 2022, (1217718) 1-5, May 1, 2022  Peer-reviewed
  • Yusuke Kameda
    International Workshop on Advanced Imaging Technology (IWAIT) 2022, (121770E) 1-4, May 1, 2022  Peer-reviewed, Lead author, Last author, Corresponding author
  • Hiroki Kojima, Yasuyo Kita, Ichiro Matsuda, Susumu Itoh, Yusuke Kameda, Kyohei Unno, Kei Kawamura
    International Workshop on Advanced Imaging Technology (IWAIT) 2022, (1217705) 1-6, May 1, 2022  Peer-reviewed
  • Shuichi Namiki, Shunichi Sato, Yusuke Kameda, Takayuki Hamamoto
    International Workshop on Advanced Imaging Technology (IWAIT) 2022, (1217702) 1-6, May 1, 2022  Peer-reviewed, Corresponding author
  • Taiki Kure, Haruka Danil Tsuchiya, Yusuke Kameda, Hiroki Yamamoto, Daisuke Kodaira, Junji Kondoh
    Energies, 15(8) 2855-2855, Apr 13, 2022  Peer-reviewed
The power-generation capacity of grid-connected photovoltaic (PV) power systems is increasing. As output power forecasting is required by electricity market participants and utility operators for the stable operation of power systems, several methods have been proposed using physical and statistical approaches for various time ranges. A short-term (30 min ahead) forecasting method had been proposed previously for multiple PV systems using motion estimation. This method forecasts PV power generation a short time ahead by estimating the motion between two geographical images of the distributed PV power systems. In this method, the parameter λ, which controls the smoothness of the resulting motion vector field and affects forecasting accuracy, is important. This study focuses on the parameter λ and evaluates the effect of changing this parameter on forecasting accuracy. Forecasting was conducted on 101 PV systems during periods with drastic power output changes. The results indicate that the mean absolute error of the proposed method with the best parameter is 10.3%, whereas that of the persistence forecasting method is 23.7%. Therefore, the proposed method is effective for forecasting periods in which PV output changes drastically within a short time interval.
  • Kurumi Kataoka, Yusuke Kameda, Shunichi Sato, Takayuki Hamamoto
    Proc. of The 11th International Workshop on Image Media Quality and its Applications, IMQA2022, 82-85, Mar, 2022  Peer-reviewed, Corresponding author
  • Yuma Masui, Yusuke Kameda, Shunichi Sato, Takayuki Hamamoto
    Proc. of The 11th International Workshop on Image Media Quality and its Applications, IMQA2022, 78-81, Mar, 2022  Peer-reviewed, Corresponding author
  • Shinnosuke KURATA, Toshinori OTAKA, Yusuke KAMEDA, Takayuki HAMAMOTO
    IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, E105.A(1) 82-86, Jan 1, 2022  Peer-reviewed, Corresponding author
    We propose an HDR (high dynamic range) reconstruction method for an image sensor with a pixel-parallel ADC (analog-to-digital converter) that non-destructively reads out intermediate exposure images. We report the circuit design for such an image sensor and the evaluation of the basic HDR reconstruction method.
  • Yuya KAMATAKI, Yusuke KAMEDA, Yasuyo KITA, Ichiro MATSUDA, Susumu ITOH
    IEICE Transactions on Information and Systems, E104.D(10) 1572-1575, Oct 1, 2021  Peer-reviewed
    This paper proposes a lossless coding method for HDR color images stored in a floating-point format called Radiance RGBE. In this method, the three mantissa parts and a common exponent part, each of which is represented in 8-bit depth, are encoded using a block-adaptive prediction technique with some modifications that account for the data structure.
  • Kyohei Unno, Yusuke Kameda, Yasuyo Kita, Ichiro Matsuda, Susumu Itoh, Kei Kawamura
    2021 IEEE International Conference on Image Processing (ICIP), 1-5, Sep 19, 2021  Peer-reviewed
    We previously proposed a novel lossless coding method that utilizes example search and adaptive prediction within a framework of probability model optimization for monochrome video. In this paper, we improve the adaptive prediction in terms of coding performance and processing time. More precisely, we made modifications to the following three items: (a) reference pel arrangements, (b) motion vector derivation, and (c) optimal selection of predictors. Experimental results show that the proposed method certainly improves the coding performance and the processing time compared to our previous method, and achieves better coding performance than the VVC-based lossless video coding scheme.
  • Yutaka Fukuchi, Yusuke Kameda, Joji Maeda
    The 12th International Conference on Optics-photonics Design and Fabrication (ODF 2020), 02PS2-06 1-2, Jun, 2021  Peer-reviewed
  • Misaki SHIKAKURA, Yusuke KAMEDA, Takayuki HAMAMOTO
    IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, E104.A(6) 907-911, Jun 1, 2021  Peer-reviewed, Corresponding author
    This paper reports the evolution and application potential of image sensors with high-speed brightness gradient sensors. We propose an adaptive exposure time control method using the apparent motion estimated by this sensor, and evaluate the results for changes in illuminance and global/local motion.
  • Kurumi Kataoka, Yusuke Kameda, Takayuki Hamamoto
    ITE Transactions on Media Technology and Applications, 9(2) 128-135, Apr, 2021  Peer-reviewed, Corresponding author
    We propose an adaptive exposure-time-control method for image sensors, which can control the exposure time for each pixel to reconstruct a high-dynamic-range image, while suppressing blown-out highlights and blocked-up shadows, according to the luminance and contrast of the scene. First, the proposed method determines the exposure time that maximizes the entropy of the entire image, as an image with high entropy contains more object details. In order to estimate the exposure time appropriate for the light and dark areas in the scene, the proposed method divides the image into blocks and estimates the exposure time that maximizes the entropy for each block. Because the proposed method captures and estimates several exposure times simultaneously, the time required for adjusting the exposure time is reduced. Simulation experiments show the effectiveness of the proposed method.
  • Hiroki Kojima, Yusuke Kameda, Yasuyo Kita, Ichiro Matsuda, Susumu Itoh
    International Workshop on Advanced Imaging Technology (IWAIT) 2021, (1176605) 1-5, Mar 13, 2021  Peer-reviewed
  • Kodai Ogawa, Yusuke Kameda, Yasuyo Kita, Ichiro Matsuda, Susumu Itoh
    International Workshop on Advanced Imaging Technology (IWAIT) 2021, (117660M) 1-4, Mar 13, 2021  Peer-reviewed
  • Kiyotaka Iwabuchi, Yusuke Kameda, Takayuki Hamamoto
    IEEE Access, 9 30080-30094, Feb, 2021  Peer-reviewed, Corresponding author
    Photon counting imaging can be used to clearly capture photon-limited scenes. In photon counting imaging, information on incident photons is obtained as binary frames (bit-plane frames), which are transformed into a multi-bit image in the reconstruction process. In this process, it is necessary to apply a deblurring method to enable the capture of dynamic scenes without motion blur. In this article, a deblurring method for the high-quality bit-plane frame reconstruction of dynamic scenes is proposed. The proposed method involves the deblurring of units of object motion within a scene through the application of motion compensation to pixels sharing the same motions. This method achieves more efficient motion blur suppression than the application of simple deblurring to pixel block or spatial region units. It also applies a novel technique for accurate motion estimation from the bit-plane frame even in photon-limited situations through the statistical evaluation of the temporal variation of photon incidence. In addition to deblurring, our experimental results also revealed that the proposed method can be applied for denoising, which improves the peak signal-to-noise ratio by 1.2 dB. In summary, the proposed method for bit-plane reconstruction achieves high quality imaging even in photon-limited dynamic scenes.
  • Yusuke Kameda, Ichiro Matsuda, Susumu Itoh
    2020 28th European Signal Processing Conference (EUSIPCO), 605-609, Jan 24, 2021  Peer-reviewed, Lead author, Corresponding author
    Scene flow is a three-dimensional (3D) vector field with velocity in the depth direction and optical flow that represents the apparent motion, which can be estimated from RGB-D videos. Scene flow can be used to estimate the 3D motion of objects with a camera; thus, it is used for obstacle detection and self-localization. It can potentially be applied to inter prediction in 3D video coding. The scene-flow estimation method based on the variational method requires numerical computations of nonlinear equations that control the regularization strength to prevent excessive smoothing due to scene-flow regularization. Because numerical stability depends on multi-channel images and computational parameters such as regularization weights, it is difficult to determine appropriate parameters that satisfy the stability requirements. Therefore, we propose a numerical computation method to derive a numerical stability condition that does not depend on the color of the image or the weight of the regularization term. This simplifies the traditional method and facilitates the setting up of various regularization weight functions. Finally, we evaluate the performance of the proposed method.
  • Kyohei Unno, Koji Nemoto, Yusuke Kameda, Ichiro Matsuda, Susumu Itoh, Sei Naito
    Proceedings of The 27th IEEE International Conference on Image Processing (ICIP 2020), 1103-1107, Oct, 2020  Peer-reviewed
  • Shimamoto, M., Kameda, Y., Hamamoto, T.
    IEICE Transactions on Information and Systems, E103D(10) 2067-2071, Oct, 2020  Peer-reviewed, Corresponding author
  • Yutaka Fukuchi, Yusuke Kameda
    2020 IEEE Photonics Conference (IPC), 1-2, Sep, 2020  Peer-reviewed
  • Taiki Kure, Yosui Miyazaki, Junji Kondoh, Yusuke Kameda
    2020 4th International Conference on Smart Grid and Smart Cities (ICSGSC), 24-28, Aug 18, 2020  Peer-reviewed
  • 園山 隼,前田 慶博,亀田 裕介,浜本 隆之
    3D Image Conference 2020, (5-2) 1-4, Jul, 2020  Peer-reviewed
  • Kyohei Unno, Yusuke Kameda, Ichiro Matsuda, Susumu Itoh, Sei Naito
    ITE Transactions on Media Technology and Applications, 8(3) 132-139, Jul, 2020  Peer-reviewed
  • Yusuke Kameda, Yoshihiro Maeda, Takayuki Hamamoto
    Proc. SPIE, The 23rd International Workshop on Advanced Image Technology (IWAIT 2020), 115150K 1-5, Jun 1, 2020  Peer-reviewed, Lead author, Corresponding author
  • Yuya Kamataki, Yusuke Kameda, Ichiro Matsuda, Susumu Itoh
    Proc. SPIE, The 23rd International Workshop on Advanced Image Technology (IWAIT 2020), 115150W 1-5, Jun 1, 2020  Peer-reviewed
  • Koji Nemoto, Yusuke Kameda, Ichiro Matsuda, Susumu Itoh, Kyohei Unno, Sei Naito
    Proc. SPIE, The 23rd International Workshop on Advanced Image Technology (IWAIT 2020), 115150U 1-5, Jun 1, 2020  Peer-reviewed
  • Takumi Owada, Yusuke Kameda, Ichiro Matsuda, Susumu Itoh
    Proc. SPIE, The 23rd International Workshop on Advanced Image Technology (IWAIT 2020), 115150X 1-5, Jun 1, 2020  Peer-reviewed
  • Shinnosuke Kurata, Toshinori Ootaka, Yusuke Kameda, Takayuki Hamamoto
    The Tenth International Workshop on Image Media Quality and its Applications, IMQA2020, 33-36, Mar, 2020  Peer-reviewed, Corresponding author
  • Misaki Shikakura, Yusuke Kameda, Takayuki Hamamoto
    The Tenth International Workshop on Image Media Quality and its Applications, IMQA2020, 37-40, Mar, 2020  Peer-reviewed, Corresponding author
  • Miyazaki, Y., Kameda, Y., Kondoh, J.
    Energies, 12(24) 4815-4828, Dec 17, 2019  Peer-reviewed
  • Kyohei Unno, Yusuke Kameda, Ichiro Matsuda, Susumu Itoh, Sei Naito
    IEICE Transactions on Information and Systems (Japanese Edition), J102-D(10) 619-627, Oct, 2019  Peer-reviewed
  • Kyohei Unno, Yusuke Kameda, Ichiro Matsuda, Susumu Itoh, Sei Naito
    Proceedings of the 27th European Signal Processing Conference (EUSIPCO 2019), 1-5, Sep, 2019  Peer-reviewed
  • Koji Nemoto, Yusuke Kameda, Ichiro Matsuda, Susumu Itoh
    Proc. SPIE, The 22nd International Workshop on Advanced Image Technology (IWAIT 2019), 11049(48) 1-5, Mar 22, 2019  Peer-reviewed
  • Jun Sakurai, Tomokazu Ishikawa, Yusuke Kameda, Ichiro Matsuda, Susumu Itoh
    Proceedings of SPIE - The International Society for Optical Engineering, 11049(31) 1-5, Mar 22, 2019  Peer-reviewed
    In general, "drawing collapse" is a term used when very low-quality animated content is broadcast: for example, the perspective of a scene is unnaturally distorted, or the sizes of people and buildings are abnormally unbalanced. In our research, the possibility of automatically discriminating drawing collapse is explored with the aim of reducing the workload of content checks typically performed by the animation director. In this paper, we focus only on the faces of animated characters as a preliminary task, and distances as well as angles between several feature points on facial parts are used as input data. By training a support vector machine (SVM) on input data extracted from both positive and negative example images, about 90% discrimination accuracy is obtained when the same character is tested.
  • Yuya Yamaki, Yusuke Kameda, Ichiro Matsuda, Susumu Itoh
    International Workshop on Advanced Image Technology (IWAIT) 2019, 11049(32) 1-5, Mar 22, 2019  Peer-reviewed
  • Naoaki Kataoka, Tomokazu Ishikawa, Yusuke Kameda, Ichiro Matsuda, Susumu Itoh
    Proceedings of SPIE - The International Society for Optical Engineering, 11049(41) 1-5, Mar, 2019  Peer-reviewed
    This paper describes a method for creating cel-style CG animations of waving hair. In this method, gatherings of air are considered as virtual circles moving at a constant velocity, and hair bundles are modeled as elastic bodies. Deformation of the hair bundles is then calculated by simulating collision events between the virtual circles and the hair bundles. Since the method is based on the animator's technique used in creation of the traditional cel animations, it is expected to suppress a feeling of strangeness that is often introduced by the conventional procedural animation techniques.
  • Tomokazu Ishikawa, Yusuke Kameda, Masanori Kakimoto, Ichiro Matsuda, Susumu Itoh
    IIEEJ Transactions on Image Electronics and Visual Computing, 6(2) 82-88, Dec 15, 2018  Peer-reviewed
  • Naoaki Kataoka, Tomokazu Ishikawa, Yusuke Kameda, Ichiro Matsuda, Susumu Itoh
    SIGGRAPH Asia 2018 Posters, 73:1-73:2, Dec 4, 2018  Peer-reviewed
  • Ichiro Matsuda, Tomokazu Ishikawa, Yusuke Kameda, Susumu Itoh
    Proceedings of the 26th European Signal Processing Conference (EUSIPCO-2018), 151-155, Sep, 2018  Peer-reviewed
  • Yosui Miyazaki, Junji Kondoh, Yusuke Kameda
    Grand Renewable Energy 2018 Proceedings, (O-Pv-8-5) 1-4, Jun, 2018  Peer-reviewed
  • Yuta Ishida, Yusuke Kameda, Tomokazu Ishikawa, Ichiro Matsuda, Susumu Itoh
    IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, E101A(6) 992-996, Jun 1, 2018  Peer-reviewed, Corresponding author
    This paper proposes a lossy image coding method for still images. In this method, recursive and non-recursive type intra prediction techniques are adaptively selected on a block-by-block basis. The recursive-type intra prediction technique applies a linear predictor to each pel within a prediction block in a recursive manner, and thus typically produces smooth image values. In this paper, the non-recursive type intra prediction technique is extended from the angular prediction technique adopted in the H.265/HEVC video coding standard to enable interpolative prediction to the maximum possible extent. The experimental results indicate that the proposed method achieves better coding performance than the conventional method that only uses the recursive-type prediction technique.
  • Ryota Nakazato, Hiroyuki Funakoshi, Tomokazu Ishikawa, Yusuke Kameda, Ichiro Matsuda, Susumu Itoh
    Proceedings of the 21st International Workshop on Advanced Image Technology (IWAIT 2018), (80) 1-4, Jan, 2018  Peer-reviewed
  • Idomu Fujita, Tomokazu Ishikawa, Yusuke Kameda, Ichiro Matsuda, Susumu Itoh
    Proceedings of the 21st International Workshop on Advanced Image Technology (IWAIT 2018), (86) 1-4, Jan, 2018  Peer-reviewed
  • Akihiro Miyazawa, Yusuke Kameda, Tomokazu Ishikawa, Ichiro Matsuda, Susumu Itoh
    Proceedings of the 21st International Workshop on Advanced Image Technology (IWAIT 2018), (84) 1-4, Jan, 2018  Peer-reviewed, Corresponding author
  • Toru Sumi, Yuta Inamura, Yusuke Kameda, Tomokazu Ishikawa, Ichiro Matsuda, Susumu Itoh
    IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, E100A(11) 2351-2354, Nov 1, 2017  Peer-reviewed, Corresponding author
    We previously proposed a lossless image coding scheme using example-based probability modeling, wherein the probability density function of image signals was dynamically modeled pel-by-pel. To appropriately estimate the peak positions of the probability model, several examples, i.e., sets of pels whose neighborhoods are similar to the local texture of the target pel to be encoded, were collected from the already encoded causal area via template matching. This scheme primarily makes use of non-local information in image signals. In this study, we introduce a prediction technique into the probability modeling to offer a better trade-off between the local and non-local information in the image signals.
  • Shota Kasai, Yusuke Kameda, Tomokazu Ishikawa, Ichiro Matsuda, Susumu Itoh
    IEICE Transactions on Information and Systems, E100D(9) 2039-2043, Sep, 2017  Peer-reviewed, Corresponding author
    We propose a method of interframe prediction in depth map coding that uses pixel-wise 3D motion estimated from encoded textures and depth maps. By using the 3D motion, an approximation of the depth map frame to be encoded is generated and used as a reference frame of block-wise motion compensation.
  • Ichiro Matsuda, Tomokazu Ishikawa, Yusuke Kameda, Susumu Itoh
    Proceedings of the 25th European Signal Processing Conference (EUSIPCO-2017), 1485-1489, Aug, 2017  Peer-reviewed

Misc.

 169

Presentations

 192

Major Teaching Experience

 32

Major Research Projects

 13

Media Coverage

 1