Information and Communication Sciences
Profile Information
- Affiliation
 - Project Assistant Professor, Faculty of Science and Technology, Sophia University
- Degree
 - Bachelor (Engineering) (Mar, 2017, Sophia University)
 - Master (Engineering) (Mar, 2019, Sophia University)
 - Doctor (Engineering) (Mar, 2022, Sophia University)
- Other name(s) (e.g. nickname)
 - Rina Komatsu
- ORCID ID
 - https://orcid.org/0000-0002-7412-1249
- J-GLOBAL ID
 - 202201003919925639
- researchmap Member ID
 - R000034689
Research Interests

Research Areas

Research History
- Apr, 2022 - Mar, 2023
- Apr, 2019 - Jan, 2022
- May, 2016 - Mar, 2019
Education
- Apr, 2019 - Mar, 2022
- Apr, 2017 - Mar, 2019
Awards

Papers
- Electronics, 14(4), 676, Feb 10, 2025. Peer-reviewed, Lead author.
	Automatic medical segmentation is crucial for assisting doctors in identifying disease regions effectively. As a state-of-the-art (SOTA) approach, generative AI models, particularly diffusion models, have surpassed GANs in generating high-quality images for tasks such as segmentation. However, most diffusion-based architectures rely on U-Net designs with multiple residual blocks and convolutional layers, resulting in high computational costs and limited applicability on general-purpose devices. To address this issue, we propose an enhanced denoising diffusion implicit model (DDIM) that incorporates lightweight depthwise convolution layers within its residual networks and self-attention layers. This approach significantly reduces computational overhead while maintaining segmentation performance. We evaluated the proposed DDIM on distinct medical imaging tasks: X-ray, skin lesion, and polyp segmentation. Experimental results demonstrate that our model achieves accuracy comparable to standard DDIMs in both visual representation and region-based scoring, with reduced resource requirements. The proposed lightweight DDIM offers a promising solution for medical segmentation tasks, enabling implementation on general-purpose devices without expensive high-performance computing resources.
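The cost saving described in this abstract comes from swapping standard convolutions for depthwise (separable) convolutions. As a rough illustration of why this helps, the parameter counts of the two layer types can be compared directly; the sketch below is a generic back-of-the-envelope comparison, not the paper's implementation, and the channel and kernel sizes are hypothetical.

```python
def conv_params(c_in, c_out, k):
    # Standard 2D convolution: one k x k filter per (input, output) channel pair.
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    # Depthwise step: one k x k filter per input channel (no channel mixing),
    # followed by a pointwise 1 x 1 convolution that mixes channels.
    return c_in * k * k + c_in * c_out

# Hypothetical residual-block layer: 128 -> 128 channels, 3 x 3 kernel.
std = conv_params(128, 128, 3)                 # 147,456 parameters
dws = depthwise_separable_params(128, 128, 3)  # 17,536 parameters
print(std, dws, round(std / dws, 1))           # roughly an 8x reduction
```

The same ratio (roughly k^2 for large channel counts) carries over to multiply-accumulate operations, which is why depthwise designs suit memory-constrained, general-purpose devices.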
- IEEE Open Journal of the Computer Society, 5, 624-635, 2024. Peer-reviewed, Lead author.
- Advances in Intelligent Systems and Computing, 57-68, Feb 26, 2022. Peer-reviewed, Lead author.
- AI, 3(1), 37-52, Jan 24, 2022. Peer-reviewed, Lead author.
	In CycleGAN, an image-to-image translation architecture was established without the use of paired datasets by employing both adversarial and cycle-consistency loss. The success of CycleGAN was followed by numerous studies that proposed new translation models. For example, StarGAN works as a multi-domain translation model based on a single generator–discriminator pair, while U-GAT-IT aims to close the large face-to-anime translation gap by adapting its original normalization to the process. However, constructing robust and conditional translation models requires tradeoffs when the computational costs of training on graphics processing units (GPUs) are considered. This is because, if designers attempt to implement conditional models with complex convolutional neural network (CNN) layers and normalization functions, the GPUs will need to secure large amounts of memory when the model begins training. This study aims to resolve this tradeoff via the development of Multi-CartoonGAN, an improved CartoonGAN architecture that can output conditionally translated images and adapt to large feature-gap translations between the source and target domains. To accomplish this, Multi-CartoonGAN reduces the computational cost by using a pretrained VGGNet to calculate the consistency loss instead of reusing the generator. Additionally, we report on the development of the conditional adaptive layer-instance normalization (CAdaLIN) process for use with our model to make it robust to unique feature translations. We performed extensive experiments using Multi-CartoonGAN to translate real-world face images into three different artistic styles: portrait, anime, and caricature. An analysis of the visualized translated images and a GPU computation comparison show that our model is capable of performing translations with unique style features that follow the conditional inputs, at a reduced GPU computational cost during training.
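CAdaLIN builds on adaptive layer-instance normalization (AdaLIN, introduced in U-GAT-IT), which blends instance-normalized and layer-normalized features with a learned ratio rho. A minimal NumPy sketch of that underlying computation is shown below; in the conditional variant described in the abstract, the affine parameters would additionally be derived from the condition input, and all names and shapes here are illustrative, not the paper's code.

```python
import numpy as np

def adalin(x, gamma, beta, rho, eps=1e-5):
    """Blend instance norm and layer norm of a feature map x with shape (C, H, W)."""
    in_mean = x.mean(axis=(1, 2), keepdims=True)   # instance-norm stats: per channel
    in_var = x.var(axis=(1, 2), keepdims=True)
    ln_mean = x.mean(keepdims=True)                # layer-norm stats: over all of C, H, W
    ln_var = x.var(keepdims=True)
    x_in = (x - in_mean) / np.sqrt(in_var + eps)
    x_ln = (x - ln_mean) / np.sqrt(ln_var + eps)
    x_hat = rho * x_in + (1.0 - rho) * x_ln        # rho in [0, 1], learned in practice
    return gamma * x_hat + beta                    # affine params; condition-derived in CAdaLIN
```

With rho = 1 the layer behaves as pure instance normalization (style-sensitive), with rho = 0 as pure layer normalization, which is what lets the model interpolate between the two regimes per layer.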
- SN Computer Science, 2(6), Nov, 2021. Peer-reviewed, Lead author.
 
Major Presentations
- Proceedings of the Annual Conference of JSAI, 35th Annual Conference
Teaching Experience
- Apr, 2024 - Present: HOW CAN WE LIVE WITH AI? (Sophia University)
- Sep, 2023 - Present: INFORMATION AND COMMUNICATION SCIENCES LAB. 1 (Sophia University)
- Sep, 2023 - Present: PROGRAMING FUNDAMENTALS (C Language) (Sophia University)
- Sep, 2023 - Present: DATA SCIENCE (Sophia University)
- Sep, 2023 - Present: BASIC INFORMATICS (Sophia University)
Works

Research Projects
- 2025 - 2027
- Grants-in-Aid for Scientific Research (KAKENHI) Incentive Program, Sophia University, 2025
- Advanced Cognitive Systems and Data Science Research Group (ACSAD), 2020