Research Achievements

Yusuke Kameda (亀田 裕介)

Basic Information

Affiliation
Assistant Professor, Department of Information and Communication Sciences, Faculty of Science and Technology, Sophia University
Degrees
Doctor of Engineering (September 2012, Chiba University)
Master of Engineering (March 2008, Chiba University)

Contact
kameda@sophia.ac.jp
Researcher Number
50711553
ORCID ID
 https://orcid.org/0000-0001-8503-4098
J-GLOBAL ID
200901044504595965
researchmap Member ID
6000014798

External Links

This is a lesson passed down from my mentor: natural science is the means by which we clarify what can and cannot be done, and engineering is the scientific making of things. Within these, information science and engineering is, I believe, the field that regards every phenomenon and every thing as a computational process. Why not join us in study, acquiring specialized knowledge and skills so as to offer new value to the world?

My specialty is image processing, in particular the estimation of motion and flow in video. Motion and flow information estimated from video is expected to find broad application, including object recognition by computers, self-localization and obstacle detection for robots and automobiles, and the measurement and analysis of fluids. My research aims at fast and highly accurate motion estimation. Besides research, I also teach practical ICT skills such as programming and server administration.


Papers (89)

  • Tomokazu Ishikawa, Yusuke Kameda, Masanori Kakimoto, Ichiro Matsuda, Susumu Itoh
    Proceedings of the 5th International Workshop on Image Electronics and Visual Computing (IEVC 2017) (1A-2) 1-4, March 2017. Peer-reviewed.
  • Shota Kasai, Yusuke Kameda, Tomokazu Ishikawa, Ichiro Matsuda, Susumu Itoh
    Proceedings of the 20th International Workshop on Advanced Image Technology 2017 (IWAIT 2017) (P.2B-29) 1-4, January 2017. Peer-reviewed, corresponding author.
  • Hiroyuki Kishi, Yusuke Kameda, Tomokazu Ishikawa, Ichiro Matsuda, Susumu Itoh
    Proceedings of the 20th International Workshop on Advanced Image Technology 2017 (IWAIT 2017) (P.2A-6) 1-4, January 2017. Peer-reviewed, corresponding author.
  • Yusuke Kameda, Hiroyuki Kishi, Tomokazu Ishikawa, Ichiro Matsuda, Susumu Itoh
    2016 IEEE International Symposium on Signal Processing and Information Technology (ISSPIT) 300-304, December 2016. Peer-reviewed, lead author, corresponding author.
    We propose an efficient motion compensation method based on a temporally extrapolated frame, using pel-wise motion (optical flow) estimation. In traditional motion compensation methods, motion vectors are generally detected on a block-by-block basis and sent to the decoder as side information. However, such block-wise motions are not always suitable for motions such as local scaling, rotation, and deformation. On the other hand, pel-wise motion can be estimated on both the encoder and decoder sides from two successive frames that were previously encoded, without side information. The use of pel-wise motion enables the extrapolated frame to be generated under the assumption of linear uniform motion within a short time period. This frame is an approximation of the frame to be encoded. The proposed bi-prediction method uses the extrapolated frame as one of the reference frames. The experimental results indicate that the prediction performance of the proposed method is higher than that of the traditional method. (A minimal sketch of the extrapolation step appears after this publication list.)
  • Shu Tajima, Yusuke Kameda, Ichiro Matsuda, Susumu Itoh
    IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences E99A(11) 2016-2018, November 2016. Peer-reviewed, corresponding author.
    This paper proposes an efficient lossless coding scheme for color video in RGB 4:4:4 format. For the R signal that is encoded before the other signals at each frame, we employ a block-adaptive prediction technique originally developed for monochrome video. The prediction technique used for the remaining G and B signals is extended to exploit inter-color correlations as well as inter- and intra-frame ones. In both cases, multiple predictors are adaptively selected on a block-by-block basis. For the purpose of designing a set of predictors well suited to the local properties of video signals, we also explore an appropriate setting for the spatiotemporal partitioning of a video volume.
  • Shu Tajima, Yusuke Kameda, Ichiro Matsuda, Susumu Itoh
    IEICE Transactions on Information and Systems (Japanese Edition) J99-D(9) 815-822, September 2016. Peer-reviewed, corresponding author.
  • Yusuke Kameda, Junpei Takeichi, Masaki Ishibashi, Ichiro Matsuda, Susumu Itoh
    IEICE Transactions on Information and Systems (Japanese Edition) J99-D(9) 861-864, September 2016. Peer-reviewed, lead author, corresponding author.
  • Toru Sumi, Yuta Inamura, Yusuke Kameda, Tomokazu Ishikawa, Ichiro Matsuda, Susumu Itoh
    2016 International Workshop on Smart Info-Media Systems in Asia (SISA2016) 183-185, September 2016. Peer-reviewed.
  • Takaaki Yokomizo, Tomokazu Ishikawa, Yusuke Kameda, Ichiro Matsuda, Susumu Itoh
    2016 International Workshop on Smart Info-Media Systems in Asia (SISA2016) 180-182, September 2016. Peer-reviewed.
  • Takatoshi Tamura, Tomokazu Ishikawa, Yusuke Kameda, Ichiro Matsuda, Susumu Itoh
    2016 International Workshop on Smart Info-Media Systems in Asia (SISA2016) 201-203, September 2016. Peer-reviewed.
  • Taira Komatsuzaki, Yusuke Kameda, Ichiro Matsuda, Susumu Itoh
    Proceedings of the 19th International Workshop on Advanced Image Technology 2016 (IWAIT 2016) (P.2B-3) 9-12, January 2016. Peer-reviewed.
  • Yuta Ishida, Yusuke Kameda, Ichiro Matsuda, Susumu Itoh
    Proceedings of the 19th International Workshop on Advanced Image Technology 2016 (IWAIT 2016) (P.2A-3) 7-10, January 2016. Peer-reviewed.
  • Hayata Tsukitani, Yusuke Kameda, Ichiro Matsuda, Susumu Itoh
    Proceedings of the 19th International Workshop on Advanced Image Technology 2016 (IWAIT 2016) (P.1B-8) 23-25, January 2016. Peer-reviewed.
  • Yusuke Kameda, Junpei Takeichi, Masaki Ishibashi, Ichiro Matsuda, Susumu Itoh
    2015 International Conference on Systems, Signals and Image Processing (IWSSIP 2015) 145-148, September 2015. Peer-reviewed, lead author, corresponding author.
    This paper proposes a method to improve the prediction accuracy of motion compensation (MC) in video coding using block-wise and pixel-wise motion with little additional information. In general MC, block-wise motion vectors (MVs) are computed by block matching (BM), but these are not always suitable for motions such as local scaling and deformation. To improve the accuracy of such motions, the proposed method estimates the pixel-wise motion precisely from two decoded frames with little additional information, and extrapolates the frame to be encoded using the pixel-wise motion and an interpolation method. This is first-stage MC. In second-stage MC, the encoder adaptively chooses the extrapolated or previous frames as a reference frame for each MC block. The block-wise MVs are then computed using BM to prevent loss of accuracy. Experimental results show that the proposed method outperforms BM for local scaling and deformation motions.
  • Yuki Nakamura, Ichiro Matsuda, Yusuke Kameda, Susumu Itoh
    Proceedings of the 1st International Conference on Advanced Imaging (ICAI2015) 300-303, June 2015. Peer-reviewed.
  • Takayuki Shikakura, Ichiro Matsuda, Yusuke Kameda, Susumu Itoh, Shin-ichi Satake
    Proceedings of the 1st International Conference on Advanced Imaging (ICAI2015) 261-264, June 2015. Peer-reviewed.
  • Hironao Abe, Yusuke Kameda, Ichiro Matsuda, Susumu Itoh
    2014 International Workshop on Smart Info-Media Systems in Asia (SISA2014) 99-102, October 2014. Peer-reviewed.
  • Shu Tajima, Yusuke Kameda, Ichiro Matsuda, Susumu Itoh
    2014 International Workshop on Smart Info-Media Systems in Asia (SISA2014) 91-94, October 2014. Peer-reviewed.
  • Naoya Nakajima, Yusuke Kameda, Ichiro Matsuda, Susumu Itoh
    2014 International Workshop on Smart Info-Media Systems in Asia (SISA2014) 95-98, October 2014. Peer-reviewed.
  • Yusuke Kameda, Ichiro Matsuda, Susumu Itoh
    Proceedings of the 22nd European Signal Processing Conference (EUSIPCO) 1068-1072, August 2014. Peer-reviewed, lead author, corresponding author.
    In video images, apparent motions can be computed using optical flow estimation. However, estimation of the depth directional velocity is difficult using only a single viewpoint. Scene flows (SF) are three-dimensional (3D) vector fields with apparent motion and a depth directional velocity field, which are computed from stereo video. The 3D motion of objects and a camera can be estimated using SF, thus it is used for obstacle detection and self-localization. SF estimation methods require the numerical computation of nonlinear equations to prevent over-smoothing due to the regularization of SF. Since the numerical stability depends on the image and regularizer weights, it is impossible to determine appropriate values for the weights. Thus, we propose a method that is independent of the images and weights, which simplifies previous methods and derives the numerical stability conditions, thereby facilitating the estimation of suitable weights. We also evaluated the performance of the proposed method.
  • Ichiro Matsuda, Yusuke Kameda, Susumu Itoh
    2014 Proceedings of the 22nd European Signal Processing Conference (EUSIPCO) 196-200, August 2014. Peer-reviewed.
    This paper proposes an adaptive intra prediction method for DCT-based image coding. In this method, predicted values in each block are generated in spatial domain like the conventional intra prediction methods. On the other hand, prediction residuals to be encoded are separately calculated in DCT domain, i.e. differences between the original and predicted values are calculated after performing DCT. Such a prediction framework allows us to change the coding process from block-wise order to coefficient-wise one. When the coefficient-wise order is adopted, a block to be predicted is almost always surrounded by partially reconstructed image signals, and therefore, efficient interpolative prediction can be performed. Simulation results indicate that the proposed method is beneficial for removing inter-block correlations of high-frequency components.
  • Yusuke Kameda, Ichiro Matsuda, Susumu Itoh
    The Journal of the Institute of Image Information and Television Engineers 68(7) J292-J298, July 2014. Peer-reviewed, lead author, corresponding author.
  • Riku Tsuto, Ichiro Matsuda, Yusuke Kameda, Susumu Itoh
    2013 International Workshop on Smart Info-Media Systems in Asia (SISA2013) 180-183, October 2013. Peer-reviewed.
  • Nobutoshi Sugai, Ichiro Matsuda, Yusuke Kameda, Susumu Itoh
    2013 International Workshop on Smart Info-Media Systems in Asia (SISA2013) 177-179, October 2013. Peer-reviewed.
  • Yusuke Kameda
    Graduate School of Advanced Integration Science, Chiba University (doctoral dissertation) 1-127, September 2012. Lead author, last author, corresponding author.
  • Yusuke Kameda, Atsushi Imiya, Tomoya Sakai
    The IEICE Transactions on Information and Systems (Japanese Edition) 95(8) 1644-1653, August 1, 2012. Peer-reviewed, lead author, corresponding author.
    Within the framework of variational optical flow computation, this work aims to reduce the given parameters, namely the weight coefficients of the terms of the energy function. To this end, we propose a method that adaptively estimates these coefficients within the framework of the method of Lagrange multipliers. To show that the proposed method is feasible, this paper first treats a basic Horn-Schunck-type energy function as an example. Under the Lagrange multiplier framework, the weight coefficient of each term, which is a constant over the whole image in the conventional formulation, becomes a function of image position and time, which is expected to make apparent motion boundaries easier to represent. We show that the method does not require solving a nonlinear optimization problem, and we give a condition under which its numerical computation remains stable. Error analysis shows that the error of the proposed method is smaller than that of a conventional method with poorly chosen weight coefficients, and that apparent motion boundaries are better represented. With the proposed method, the weight coefficient function is obtained simultaneously with the optical flow within the optimization problem, reducing the given weight-coefficient parameters in the variational computation framework.
  • Yusuke Kameda, Atsushi Imiya, Tomoya Sakai
    Computer Vision - ECCV 2012, Part II 7584 576-585, 2012. Peer-reviewed.
    Most of the methods to compute optical flows are variational-technique-based methods, which assume that image functions have spatiotemporal continuities and appearance motions are small. From the viewpoint of the discrete errors of spatial and temporal differentials, the appropriate resolution for optical flow depends on both the resolution and the frame rate of images, since there is a problem with the accuracy of the discrete approximations of derivatives. Therefore, for low frame-rate images, the appropriate resolution for optical flow should be lower than the resolution of the images. However, many traditional methods estimate optical flow with the same resolution as the images. Therefore, if the resolution of images is too high, down-sampling the images is effective for the variational-technique-based methods. In this paper, we analyze the appropriate resolutions for optical flows estimated by variational optical-flow computations from the viewpoint of the error analysis of optical flows. To analyze the appropriate resolutions, we use hierarchical structures constructed from the multi-resolutions of images. Numerical results show that decreasing image resolutions is effective for computing optical flows by variational optical-flow computations in low frame-rate sequences.
  • Yoshihiko Mochizuki, Yusuke Kameda, Atsushi Imiya, Tomoya Sakai, Takashi Imaizumi
    Signal Processing 91(7) 1535-1567, July 2011. Peer-reviewed.
    The motion fields in an image sequence observed by a car-mounted imaging system depend on the positions in the imaging plane. Since the motion displacements in the regions close to the camera centre are small, for accurate optical flow computation in this region, we are required to use super-resolution of optical flow fields. We develop an algorithm for super-resolution optical flow computation. Super-resolution of images is a technique for recovering a high-resolution image from a low-resolution image and/or image sequence. Optical flow is the appearance motion of points on the image. Therefore, super-resolution optical flow computation yields the appearance motion of each point on the high-resolution image from a sequence of low-resolution images. We combine variational super-resolution and variational optical flow computation in super-resolution optical flow computation. Our method directly computes the gradient and spatial difference of high-resolution images from those of low-resolution images, without computing any high-resolution images used as intermediate data for the computation of optical flow vectors of the high-resolution image.
  • Koji Kashu, Yusuke Kameda, Masaki Narita, Atsushi Imiya, Tomoya Sakai
    Biomedical Image Registration 6204 48-+, 2010. Peer-reviewed.
    We introduce a method for volumetric cardiac motion analysis using variational optical flow computation involving a prior with fractional order differentiations. The order of the differentiation of the prior controls the continuity class of the solution. Fractional differentiation is a typical tool for edge detection in images. As a sequel to image analysis by fractional differentiation, we apply the theory of fractional differentiation to temporal image sequence analysis. Using fractional order differentiations, we can estimate the orders of local continuities of optical flow vectors. Therefore, we can obtain the optical flow vector with the optimal continuity at each point.
  • Yoshihiko Mochizuki, Yusuke Kameda, Atsushi Imiya, Tomoya Sakai, Takashi Imaizumi
    Proceedings - International Conference on Pattern Recognition 2270-2273, 2010. Peer-reviewed.
    Superresolution is a technique to recover a high-resolution image from a low resolution image. We develop a variational superresolution method for the subpixel accurate optical flow computation using variational optimisation. We combine variational superresolution and the variational optical flow computation for the superresolution optical flow computation.
  • K. Kashu, Y. Kameda, A. Imiya, T. Sakai, Y. Mochizuki
    Energy Minimization Methods in Computer Vision and Pattern Recognition, Proceedings 5681 154-+, 2009. Peer-reviewed.
    We introduce variational optical flow computation involving priors with fractional order differentiations. Fractional order differentiations are typical tools in signal processing and image analysis. The zero-crossing of a fractional order Laplacian yields better performance for edge detection than the zero-crossing of the usual Laplacian. The order of the differentiation of the prior controls the continuity class of the solution. Therefore, using the square norm of the fractional order differentiation of the optical flow field as the prior, we develop a method to estimate the local continuity order of the optical flow field at each point. The method detects the optimal continuity order of optical flow and the corresponding optical flow vector at each point. Numerical results show that the Horn-Schunck type prior involving the n + epsilon order differentiation for 0 < epsilon < 1 and an integer n is suitable for accurate optical flow computation. (The general form of such an energy is sketched after this publication list.)
  • Yoshihiko Mochizuki, Yusuke Kameda, Atsushi Imiya, Tomoya Sakai, Takashi Imaizumi
    Advances in Visual Computing, Part 2, Proceedings 5876 1109-+, 2009. Peer-reviewed.
    We develop an algorithm for super-resolution optical flow computation by combining variational super-resolution and variational optical flow computation. Our method first computes the gradient and the spatial difference of a high-resolution image directly from those of low-resolution images, without computing any high-resolution images. Second, the algorithm computes the optical flow of the high-resolution image using the results of the first step.
  • Yusuke Kameda, Naoya Ohnishi, Atsushi Imiya, Tomoya Sakai
    Advances in Visual Computing, Part 1, Proceedings 5875 403-+, 2009. Peer-reviewed.
    We develop a method for optical flow computation from a zooming image sequence. The synchronisation of image resolution for a pair of successive images in an image sequence is a fundamental requirement for optical flow computation. In real applications, however, we are required to deal with zooming and dezooming image sequences; that is, we are required to compute optical flow from a multiresolution image sequence whose resolution dynamically increases and decreases. As an extension of multiresolution optical flow computation, which computes the optical flow vectors using coarse-to-fine propagation of the computation results across the layers, we develop an algorithm for the computation of optical flow from a zooming image sequence.
  • Yusuke Kameda
    Master's Thesis, Graduate School of Science and Technology (Intelligent Information Engineering), Chiba University, 1-107, February 2008. Lead author, last author, corresponding author.
  • Yusuke Kameda, Atsushi Imiya
    Human Motion: Understanding, Modelling, Capture, and Animation, Computational Imaging and Vision 36 81-104, 2008. Peer-reviewed.
  • Naoya Ohnishi, Yusuke Kameda, Atsushi Imiya, Leo Dorst, Reinhard Klette
    Robot Vision, Proceedings 4931 1-+, 2008. Peer-reviewed.
    This paper introduces a new algorithm for computing multiresolution optical flow, and compares this new hierarchical method with the traditional combination of the Lucas-Kanade method with a pyramid transform. The paper shows that the new method promises convergent optical flow computation. Aiming at accurate and stable computation of optical flow, the new method propagates results of computations from low-resolution images to those of higher resolution. In this way, the resolution of the images used in those calculations increases. The given input sequence of images defines the maximum possible resolution.
  • Atsushi Imiya, Yusuke Kameda, Naoya Ohnishi
    Discrete Geometry for Computer Imagery, Proceedings 4992 69-+, 2008. Peer-reviewed.
    In this paper, we introduce a method to express a local linear operation in the neighbourhood of each point in the discrete space as a matrix transform. To derive matrix expressions, we develop a decomposition and construction method for the neighbourhood operations using algebraic properties of the noncommutative matrix ring. This expression of the transforms in image analysis clarifies analytical properties, such as the norm of the transforms. We show that the symmetry kernels for the neighbourhood operations have symmetric matrix expressions.
  • Yusuke Kameda, Atsushi Imiya, Naoya Ohnishi
    Combinatorial Image Analysis 4958 262-+, 2008. Peer-reviewed.
    In this paper, we prove the convergence property of the Horn-Schunck optical-flow computation scheme. Horn and Schunck derived a Jacobi-method-based scheme for the computation of optical-flow vectors of each point of an image from a pair of successive digitised images. The basic idea of the Horn-Schunck scheme is to separate the numerical operation into two steps: the computation of the average flow vector in the neighborhood of each point and the refinement of the optical flow vector by the residual of the average flow vectors in the neighborhood. Mitiche and Mansouri proved the convergence property of the Gauss-Seidel- and Jacobi-method-based schemes for the Horn-Schunck-type minimization using algebraic properties of the matrix expression of the scheme and some mathematical assumptions on the system matrix of the problem. In this paper, we derive an alternative proof for the original Horn-Schunck scheme. To prove the convergence property, we develop a method of expressing shift-invariant local operations for digital planar images in the matrix forms. These matrix expressions introduce the norm of the neighborhood operations. The norms of the neighborhood operations allow us to prove the convergence properties of iterative image processing procedures.
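    (The classical update equations behind this scheme are sketched after this publication list.)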
  • Yusuke Kameda, Atsushi Imiya
    Computer Analysis of Images and Patterns, Proceedings 4673 61-68, 2007. Peer-reviewed.
    In this paper, we analyse mathematical properties of a spatial optical-flow computation algorithm. First, by numerical analysis, we derive the convergence property of the variational optical-flow computation method used for cardiac motion detection. From the convergence property of the algorithm, we clarify the condition for the scheduling of the regularisation parameters. This condition shows that, for accurate and stable computation with scheduled regularisation coefficients, we are required to control the sampling interval of the numerical computation.
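
Supplementary sketches for selected papers

The bi-prediction paper above (ISSPIT 2016) builds a temporally extrapolated reference frame from pel-wise motion (optical flow) estimated between two previously decoded frames. The following is a minimal sketch of only that extrapolation step, assuming a dense flow field is already available from some optical flow estimator; the function name, array layout, and nearest-neighbour sampling are illustrative assumptions, not taken from the paper's implementation.

    import numpy as np

    def extrapolate_frame(prev1, flow):
        """Warp the most recent decoded frame forward by one frame interval.

        prev1 : H x W array, decoded frame at time t-1
        flow  : H x W x 2 array, pel-wise motion (x, y displacements in pixels)
                estimated between the decoded frames at t-2 and t-1

        Under the linear uniform motion assumption, the same displacement is
        applied again from t-1 to t, so the extrapolated frame at time t is
        prev1 pushed forward by `flow` (implemented here as backward sampling).
        """
        h, w = prev1.shape
        ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
        # Backward mapping: the value at (x, y) in the extrapolated frame is
        # taken from (x - u, y - v) in prev1. Nearest-neighbour sampling keeps
        # the sketch short; a codec would use sub-pel interpolation instead.
        src_x = np.clip(np.rint(xs - flow[..., 0]), 0, w - 1).astype(int)
        src_y = np.clip(np.rint(ys - flow[..., 1]), 0, h - 1).astype(int)
        return prev1[src_y, src_x]

Because the flow is estimated from frames that are already decoded, the decoder can construct the same extrapolated frame, which is why the abstract notes that no side information is needed for it; the extrapolated frame then serves as one of the reference frames in the bi-prediction.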
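
Several of the optical flow papers above (EMMCVPR 2009, Biomedical Image Registration 2010) regularise the flow with the squared norm of a fractional-order differentiation. As a rough sketch, assuming the standard Horn-Schunck data term and writing the flow as (u, v), such an energy has the form

    E(u, v) = \int_{\Omega} \left( I_x u + I_y v + I_t \right)^2 \, d\mathbf{x}
              + \alpha \int_{\Omega} \left( \lVert D^{\beta} u \rVert^2 + \lVert D^{\beta} v \rVert^2 \right) d\mathbf{x}

where I_x, I_y, I_t are the spatiotemporal image derivatives, \alpha is the regularisation weight, and D^{\beta} is a fractional-order differential operator; \beta = 1 recovers the usual Horn-Schunck prior, and the papers above estimate an appropriate, possibly non-integer, order locally. The exact discretisation and notation in the papers may differ; this is only meant to indicate the structure of the energy.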
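
The Combinatorial Image Analysis 2008 entry above analyses the convergence of the Horn-Schunck scheme, which alternates, at every pixel, a neighbourhood averaging step and a residual correction. For reference, the classical Jacobi-type update (Horn and Schunck, 1981) is

    u^{k+1} = \bar{u}^{k} - \frac{I_x \left( I_x \bar{u}^{k} + I_y \bar{v}^{k} + I_t \right)}{\alpha^2 + I_x^2 + I_y^2},
    \qquad
    v^{k+1} = \bar{v}^{k} - \frac{I_y \left( I_x \bar{u}^{k} + I_y \bar{v}^{k} + I_t \right)}{\alpha^2 + I_x^2 + I_y^2}

where \bar{u}^{k} and \bar{v}^{k} are neighbourhood averages of the current flow estimate, I_x, I_y, I_t are image derivatives, and \alpha weights the smoothness term. This textbook form is quoted only as background for the convergence analysis; the IEICE Transactions 2012 paper above further replaces the constant weight by a function of image position and time, estimated jointly with the flow.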

MISC (169)

Presentations (192)

Major Teaching Experience (Courses) (32)

Professional Memberships (14)

Works (1)

Major Research Projects (Joint Research, Competitive Funding, etc.) (13)