
Few-Shot Image Generation Using Diffusion Models
– Published Date : TBD
– Category : Image Generation
– Place of publication : The 35th Workshop on Image Processing and Image Understanding (IPIU)
Diffusion models have recently gained significant attention in image generation, outperforming traditional Generative Adversarial Networks (GANs) on many tasks. While they can generate high-quality samples, they require large amounts of training data to produce diverse outputs: when training samples are scarce, the model tends to simply replicate the training data rather than generate new samples, resulting in poor quality and low diversity. To overcome this limitation, we propose a method for adapting diffusion models to few-shot scenarios in which only a handful of training samples (e.g., 10 or fewer) are available. Our approach fine-tunes the diffusion model with the standard diffusion loss together with a modified cross-domain distance consistency loss to prevent overfitting. To preserve the overall structure of samples produced by the pre-trained source model, we also introduce a source-model-guided sampling method that incorporates the source model into the sampling process. Furthermore, unlike previous studies, we extend the proposed framework to source-to-target image translation by using the pre-trained source model during sampling. We evaluate the adaptation capability of the framework through experiments on various data domains. The results show that our method generates high-quality, diverse samples from only a few training examples, and that the framework also applies to source-to-target image translation.
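As a rough illustration of the fine-tuning objective described above, the sketch below combines the standard denoising (noise-prediction) loss with a distance consistency term that encourages the adapted model to preserve the source model's relative pairwise distances across a batch, in the spirit of cross-domain consistency losses used in few-shot generator adaptation. This is a minimal sketch under several assumptions: the linear noise schedule, the use of noise predictions as the feature space for the consistency term, and the names `tau` and `lambda_dc` are illustrative choices, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

# Hypothetical DDPM-style linear schedule (assumption, not from the paper).
T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)


def pairwise_similarity_distribution(feats, tau=0.1):
    """Softmax over pairwise cosine similarities within the batch (self-pairs removed)."""
    feats = F.normalize(feats.flatten(1), dim=-1)                 # (B, D)
    sim = feats @ feats.t()                                       # (B, B)
    mask = ~torch.eye(len(feats), dtype=torch.bool, device=feats.device)
    sim = sim[mask].view(len(feats), -1)                          # (B, B-1)
    return F.softmax(sim / tau, dim=-1)


def distance_consistency_loss(src_feats, tgt_feats):
    """KL divergence pulling the adapted model's pairwise-similarity distribution
    toward the (frozen) source model's distribution."""
    p_src = pairwise_similarity_distribution(src_feats).detach()
    p_tgt = pairwise_similarity_distribution(tgt_feats)
    return F.kl_div(p_tgt.log(), p_src, reduction="batchmean")


def few_shot_adaptation_loss(target_model, source_model, x0, lambda_dc=1.0):
    """Loss for one step on a few-shot batch x0 of shape (B, C, H, W).
    target_model is being fine-tuned; source_model is the frozen pre-trained model."""
    B = x0.size(0)
    t = torch.randint(0, T, (B,), device=x0.device)               # random timesteps
    noise = torch.randn_like(x0)
    a_bar = alphas_bar.to(x0.device)[t].view(B, 1, 1, 1)
    x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise        # forward diffusion

    eps_tgt = target_model(x_t, t)                                # adapted model's noise prediction
    with torch.no_grad():
        eps_src = source_model(x_t, t)                            # frozen source model's prediction

    l_diff = F.mse_loss(eps_tgt, noise)                           # standard diffusion loss
    l_dc = distance_consistency_loss(eps_src, eps_tgt)            # consistency regularizer
    return l_diff + lambda_dc * l_dc
```

The consistency term only constrains relative distances between samples, so the target model remains free to shift toward the few-shot domain while the batch-level diversity of the source model is retained; the source-model-guided sampling step mentioned in the abstract is not shown here, since its exact formulation is not described in this summary.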