Please use this identifier to cite or link to this item: http://cmuir.cmu.ac.th/jspui/handle/6653943832/74774
Full metadata record
DC Field | Value | Language
dc.contributor.author | Aniwat Phaphuangwittayakul | en_US
dc.contributor.author | Fangli Ying | en_US
dc.contributor.author | Yi Guo | en_US
dc.contributor.author | Liting Zhou | en_US
dc.contributor.author | Nopasit Chakpitak | en_US
dc.date.accessioned | 2022-10-16T06:49:04Z | -
dc.date.available | 2022-10-16T06:49:04Z | -
dc.date.issued | 2022-01-01 | en_US
dc.identifier.issn | 01782789 | en_US
dc.identifier.other | 2-s2.0-85134585341 | en_US
dc.identifier.other | 10.1007/s00371-022-02566-3 | en_US
dc.identifier.uri | https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85134585341&origin=inward | en_US
dc.identifier.uri | http://cmuir.cmu.ac.th/jspui/handle/6653943832/74774 | -
dc.description.abstract | Traditional deep generative models rely on enormous amounts of training data to generate images of a given class. However, they face challenges associated with expensive and time-consuming data acquisition, as well as the need for fast learning from limited data of new categories. In this study, a contrastive meta-learning generative adversarial network (CML-GAN) is proposed to generate novel images of unseen classes from only a few examples by applying a self-supervised contrastive learning strategy within a fast-adaptive meta-learning framework. By introducing a meta-learning framework into a GAN-based model, our model can efficiently learn feature representations and quickly adapt to new generation tasks with only a few samples. The proposed model takes the original input and the images generated by the GAN-based model as inputs and evaluates both a contrastive loss and a distance loss on the feature representations extracted from the encoder. The original input image and its generated version from the generator are treated as a positive pair, while the remaining generated images in the same batch serve as negative samples. The model then learns to distinguish positive samples from negative ones and to produce distinct representations for different samples, which prevents overfitting. Thus, our model generalizes to generate diverse images from only a few samples of unseen categories while adapting quickly to new image generation tasks. Furthermore, the effectiveness of our model is demonstrated through extensive experiments on three datasets. | en_US
dc.subject | Computer Science | en_US
dc.title | Few-shot image generation based on contrastive meta-learning generative adversarial network | en_US
dc.type | Journal | en_US
article.title.sourcetitle | Visual Computer | en_US
article.stream.affiliations | The State Key Laboratory of Bioreactor Engineering | en_US
article.stream.affiliations | Dublin City University | en_US
article.stream.affiliations | East China University of Science and Technology | en_US
article.stream.affiliations | Chiang Mai University | en_US
article.stream.affiliations | Shanghai Engineering Research Center of Big Data and Internet Audience | en_US
article.stream.affiliations | National Engineering Laboratory for Big Data Distribution and Exchange Technologies | en_US
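The contrastive pairing described in the abstract (an original image and its generated version form a positive pair; the other generated images in the batch act as negatives) can be illustrated with a minimal InfoNCE-style loss. This is a sketch under stated assumptions: the function name, the temperature value, and the NumPy formulation are illustrative, not the paper's actual implementation, whose exact loss may differ.

```python
import numpy as np

def contrastive_loss(z_real, z_gen, temperature=0.1):
    """Illustrative InfoNCE-style contrastive loss (not the paper's code).

    z_real: (N, d) encoder features of the original input images.
    z_gen:  (N, d) encoder features of their generated versions.
    Row i of z_real and row i of z_gen form the positive pair; the
    other generated images in the batch are the negatives.
    """
    # L2-normalise so that dot products are cosine similarities.
    z_real = z_real / np.linalg.norm(z_real, axis=1, keepdims=True)
    z_gen = z_gen / np.linalg.norm(z_gen, axis=1, keepdims=True)
    # sim[i, j] = cos(z_real[i], z_gen[j]) scaled by the temperature.
    sim = (z_real @ z_gen.T) / temperature
    # Row-wise log-softmax; the positive sits on the diagonal.
    log_softmax = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    # Minimising this pulls each positive pair together while pushing
    # the batch's other generated images away.
    return -np.mean(np.diag(log_softmax))
```

With perfectly matched pairs the loss is near zero; mismatched pairings (e.g. pairing each image with a different sample's generation) yield a larger loss, which is the signal that drives the encoder toward distinct per-sample representations.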
Appears in Collections:CMUL: Journal Articles

Files in This Item:
There are no files associated with this item.


Items in CMUIR are protected by copyright, with all rights reserved, unless otherwise indicated.