Information and Communication Sciences

Rina Oh (吳 里奈)

Profile Information

Affiliation
Project Assistant Professor, Faculty of Science and Technology, Sophia University

Other name(s)
Rina Komatsu
ORCID ID
https://orcid.org/0000-0002-7412-1249
J-GLOBAL ID
202201003919925639
researchmap Member ID
R000034689

Research Areas (1)

  • Information and Communication Sciences

Papers (4)

  • Rina Komatsu, Tad Gonsalves
    Advances in Intelligent Systems and Computing, 57-68, Feb 26, 2022  
  • Rina Oh, Tad Gonsalves
    AI, 3(1) 37-52, Jan 24, 2022  
    In CycleGAN, an image-to-image translation architecture was established without paired datasets by employing both adversarial and cycle consistency losses. The success of CycleGAN was followed by numerous studies proposing new translation models. For example, StarGAN performs multi-domain translation with a single generator–discriminator pair, while U-GAT-IT closes the large face-to-anime translation gap by adapting its original normalization to the process. However, constructing robust conditional translation models involves tradeoffs once the computational cost of training on graphics processing units (GPUs) is considered: if designers implement conditional models with complex convolutional neural network (CNN) layers and normalization functions, the GPUs must reserve large amounts of memory when training begins. This study aims to resolve this tradeoff through Multi-CartoonGAN, an improved CartoonGAN architecture that outputs conditional translated images and adapts to translations with large feature gaps between the source and target domains. To accomplish this, Multi-CartoonGAN reduces the computational cost by using a pretrained VGGNet to calculate the consistency loss instead of reusing the generator. Additionally, we report on the development of the conditional adaptive layer-instance normalization (CAdaLIN) process, which makes our model robust to unique feature translations. We performed extensive experiments using Multi-CartoonGAN to translate real-world face images into three artistic styles: portrait, anime, and caricature. An analysis of the translated images and a comparison of GPU computation show that our model performs translations with unique style features that follow the conditional inputs, at a reduced GPU computational cost during training.
  • Rina Komatsu, Tad Gonsalves
    SN Computer Science, 2(6), Nov, 2021  
  • Rina Komatsu, Tad Gonsalves
    AI, Oct 12, 2020  
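The conditional adaptive layer-instance normalization (CAdaLIN) mentioned in the Multi-CartoonGAN abstract builds on AdaLIN-style normalization, which blends instance-normalized and layer-normalized statistics of a feature map. The following is a minimal plain-Python sketch of that blending; the function name, the nested-list representation, and the assumption that gamma and beta are produced from the conditional input are illustrative, not taken from the paper's code.

```python
import math

def adalin(x, rho, gamma, beta, eps=1e-5):
    """Blend instance and layer normalization on a feature map x of
    shape (C, H, W), given as nested lists. rho in [0, 1] weights the
    instance-normalized term; gamma and beta are per-channel affine
    parameters (in CAdaLIN these would be produced from the conditional
    input). Illustrative sketch only, not the paper's implementation."""
    # Layer statistics: computed over all channels and spatial positions.
    flat = [v for ch in x for row in ch for v in row]
    mu_ln = sum(flat) / len(flat)
    var_ln = sum((v - mu_ln) ** 2 for v in flat) / len(flat)
    out = []
    for c in range(len(x)):
        # Instance statistics: per channel, over spatial positions only.
        vals = [v for row in x[c] for v in row]
        mu_in = sum(vals) / len(vals)
        var_in = sum((v - mu_in) ** 2 for v in vals) / len(vals)
        out.append([
            [gamma[c] * (rho * (v - mu_in) / math.sqrt(var_in + eps)
                         + (1 - rho) * (v - mu_ln) / math.sqrt(var_ln + eps))
             + beta[c]
             for v in row]
            for row in x[c]
        ])
    return out
```

With rho = 1 this reduces to instance normalization and with rho = 0 to layer normalization; learning rho per layer lets the network choose the mixture per translation task.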

Major Presentations (9)

Teaching Experience (5)