Faculty of Science and Technology

吳 里奈

Rina Oh

Basic Information

Affiliation
Specially Appointed Assistant Professor, Faculty of Science and Technology, Sophia University

Alternative Name
小松 里奈 (Rina Komatsu)
ORCID iD
https://orcid.org/0000-0002-7412-1249
J-GLOBAL ID
202201003919925639
researchmap member ID
R000034689

Research Areas (1)

Education (2)

Papers (4)
  • Rina Komatsu, Tad Gonsalves
    Advances in Intelligent Systems and Computing 57-68, Feb 26, 2022
  • Rina Oh, Tad Gonsalves
    AI 3(1) 37-52, Jan 24, 2022
    In CycleGAN, an image-to-image translation architecture was established without the use of paired datasets by employing both adversarial and cycle consistency loss. The success of CycleGAN was followed by numerous studies that proposed new translation models. For example, StarGAN works as a multi-domain translation model based on a single generator–discriminator pair, while U-GAT-IT aims to close the large face-to-anime translation gap by adapting its original normalization to the process. However, constructing robust and conditional translation models requires tradeoffs when the computational costs of training on graphics processing units (GPUs) are considered. This is because, if designers attempt to implement conditional models with complex convolutional neural network (CNN) layers and normalization functions, the GPUs will need to secure large amounts of memory when the model begins training. This study aims to resolve this tradeoff issue via the development of Multi-CartoonGAN, which is an improved CartoonGAN architecture that can output conditional translated images and adapt to large feature gap translations between the source and target domains. To accomplish this, Multi-CartoonGAN reduces the computational cost by using a pretrained VGGNet to calculate the consistency loss instead of reusing the generator. Additionally, we report on the development of the conditional adaptive layer-instance normalization (CAdaLIN) process for use with our model to make it robust to unique feature translations. We performed extensive experiments using Multi-CartoonGAN to translate real-world face images into three different artistic styles: portrait, anime, and caricature. An analysis of the visualized translated images and GPU computation comparison shows that our model is capable of performing translations with unique style features that follow the conditional inputs and at a reduced GPU computational cost during training.
  • Rina Komatsu, Tad Gonsalves
    SN Computer Science 2(6), Nov 2021
  • Rina Komatsu, Tad Gonsalves
    AI, Oct 12, 2020
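The Multi-CartoonGAN abstract above builds on CycleGAN's cycle-consistency loss: an input should be recoverable after a round trip through both generators. A minimal sketch of that term, assuming hypothetical stand-in mappings `G` and `F` and NumPy arrays in place of real images and networks (not the authors' implementation):

```python
import numpy as np

def cycle_consistency_loss(x, G, F):
    """Mean absolute error between x and its round trip F(G(x)),
    i.e. the cycle-consistency term added to the adversarial loss."""
    return float(np.mean(np.abs(F(G(x)) - x)))

# Hypothetical stand-in "generators": a perfect inverse pair.
G = lambda t: 2.0 * t   # source domain -> target domain
F = lambda t: 0.5 * t   # target domain -> source domain

x = np.linspace(0.0, 1.0, 16).reshape(4, 4)  # toy "image"
loss = cycle_consistency_loss(x, G, F)       # 0.0 for a perfect round trip
```

When `F` fails to undo `G`, the term grows with the reconstruction error, which is what pushes the two generators toward mutually consistent translations; Multi-CartoonGAN replaces the second generator pass in this computation with features from a frozen pretrained VGGNet to cut GPU cost.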

Major Lectures and Oral Presentations (9)

Courses Taught (5)