Please use this identifier to cite or link to this item:
http://cmuir.cmu.ac.th/jspui/handle/6653943832/74774
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Aniwat Phaphuangwittayakul | en_US |
dc.contributor.author | Fangli Ying | en_US |
dc.contributor.author | Yi Guo | en_US |
dc.contributor.author | Liting Zhou | en_US |
dc.contributor.author | Nopasit Chakpitak | en_US |
dc.date.accessioned | 2022-10-16T06:49:04Z | - |
dc.date.available | 2022-10-16T06:49:04Z | - |
dc.date.issued | 2022-01-01 | en_US |
dc.identifier.issn | 01782789 | en_US |
dc.identifier.other | 2-s2.0-85134585341 | en_US |
dc.identifier.other | 10.1007/s00371-022-02566-3 | en_US |
dc.identifier.uri | https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85134585341&origin=inward | en_US |
dc.identifier.uri | http://cmuir.cmu.ac.th/jspui/handle/6653943832/74774 | - |
dc.description.abstract | Traditional deep generative models rely on enormous training data to generate images of a given class. However, they face the challenges of expensive and time-consuming data acquisition as well as the requirement to learn quickly from limited data of new categories. In this study, a contrastive meta-learning generative adversarial network (CML-GAN) is proposed to generate novel images of unseen classes from a few images by applying a self-supervised contrastive learning strategy to a fast adaptive meta-learning framework. By introducing a meta-learning framework into a GAN-based model, our model can efficiently learn feature representations and quickly adapt to new generation tasks with only a few samples. The proposed model takes the original input and the images generated by the GAN-based model as inputs and evaluates both a contrastive loss and a distance loss on the feature representations of the inputs extracted by the encoder. The original input image and its generated version from the generator are treated as a positive pair, while the remaining generated images in the same batch are treated as negative samples. The model then learns to differentiate positive samples from negative ones and to produce distinct representations for different samples, which prevents overfitting. Thus, our model can generalize to generate diverse images from only a few samples of unseen categories while adapting quickly to new image generation tasks. Furthermore, the effectiveness of our model is demonstrated through extensive experiments on three datasets. | en_US |
dc.subject | Computer Science | en_US |
dc.title | Few-shot image generation based on contrastive meta-learning generative adversarial network | en_US |
dc.type | Journal | en_US |
article.title.sourcetitle | Visual Computer | en_US |
article.stream.affiliations | The State Key Laboratory of Bioreactor Engineering | en_US |
article.stream.affiliations | Dublin City University | en_US |
article.stream.affiliations | East China University of Science and Technology | en_US |
article.stream.affiliations | Chiang Mai University | en_US |
article.stream.affiliations | Shanghai Engineering Research Center of Big Data and Internet Audience | en_US |
article.stream.affiliations | National Engineering Laboratory for Big Data Distribution and Exchange Technologies | en_US |
Appears in Collections: | CMUL: Journal Articles |
Files in This Item:
There are no files associated with this item.
Items in CMUIR are protected by copyright, with all rights reserved, unless otherwise indicated.
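The abstract describes a positive-pair/negative-batch scheme: each original image and its generated counterpart form a positive pair, and the other generated images in the batch serve as negatives. The record does not give the paper's exact loss formulation; as a rough illustration only, that scheme resembles an InfoNCE-style contrastive loss over encoder features, sketched below (the function name, temperature value, and NumPy implementation are all illustrative assumptions, not the authors' code):

```python
import numpy as np

def contrastive_loss(z_real, z_gen, temperature=0.1):
    """Illustrative InfoNCE-style contrastive loss (not the paper's exact loss).

    z_real: (N, D) encoder features of the original input images.
    z_gen:  (N, D) encoder features of their generated versions.
    Row i of z_real and row i of z_gen form the positive pair; every
    other generated image in the batch acts as a negative sample.
    """
    # L2-normalize so dot products become cosine similarities
    z_real = z_real / np.linalg.norm(z_real, axis=1, keepdims=True)
    z_gen = z_gen / np.linalg.norm(z_gen, axis=1, keepdims=True)
    sim = (z_real @ z_gen.T) / temperature        # (N, N) similarity matrix
    # Cross-entropy with the diagonal (positive pairs) as the targets
    sim = sim - sim.max(axis=1, keepdims=True)    # numerical stability
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))
```

Minimizing this loss pulls each positive pair together in feature space while pushing the negatives apart, which is the mechanism the abstract credits with producing distinct representations and preventing overfitting in the few-shot setting.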