Please use this identifier to cite or link to this item: http://cmuir.cmu.ac.th/jspui/handle/6653943832/74764
Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Paravee Maneejuk | en_US |
| dc.contributor.author | Torben Peters | en_US |
| dc.contributor.author | Claus Brenner | en_US |
| dc.contributor.author | Vladik Kreinovich | en_US |
| dc.date.accessioned | 2022-10-16T06:49:00Z | - |
| dc.date.available | 2022-10-16T06:49:00Z | - |
| dc.date.issued | 2022-01-01 | en_US |
| dc.identifier.issn | 2198-4190 | en_US |
| dc.identifier.issn | 2198-4182 | en_US |
| dc.identifier.other | 2-s2.0-85135508602 | en_US |
| dc.identifier.other | 10.1007/978-3-030-97273-8_14 | en_US |
| dc.identifier.uri | https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85135508602&origin=inward | en_US |
| dc.identifier.uri | http://cmuir.cmu.ac.th/jspui/handle/6653943832/74764 | - |
| dc.description.abstract | In many practical situations, several representations of the same data coexist, each convenient for some operations, and many data processing algorithms involve transforming back and forth between these representations. Many such transformations are computationally time-consuming when performed exactly. Since input data is usually only 1–10% accurate anyway, it makes sense to replace time-consuming exact transformations with faster approximate ones. A natural way to obtain a fast approximation of a transformation is to train a corresponding neural network. The problem is that if we train the A-to-B and B-to-A networks separately, the resulting approximate transformations are only approximately inverse to each other. As a result, each time we transform back and forth, we add new approximation error, and the accumulated error may become significant. In this paper, we show how to avoid this accumulation: we show how to train A-to-B and B-to-A neural networks so that the resulting transformations are (almost) exact inverses. | en_US |
| dc.subject | Computer Science | en_US |
| dc.subject | Decision Sciences | en_US |
| dc.subject | Economics, Econometrics and Finance | en_US |
| dc.subject | Engineering | en_US |
| dc.subject | Mathematics | en_US |
| dc.title | How to Train A-to-B and B-to-A Neural Networks So That the Resulting Transformations Are (Almost) Exact Inverses | en_US |
| dc.type | Book Series | en_US |
| article.title.sourcetitle | Studies in Systems, Decision and Control | en_US |
| article.volume | 429 | en_US |
| article.stream.affiliations | The University of Texas at El Paso | en_US |
| article.stream.affiliations | Gottfried Wilhelm Leibniz Universität Hannover | en_US |
| article.stream.affiliations | Chiang Mai University | en_US |
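The abstract above describes training paired A-to-B and B-to-A networks so that their composition stays close to the identity. The paper's exact construction is not reproduced in this record; one common way to encourage such (almost) inverse pairs is to fit both directions jointly with added cycle-consistency penalties, as in the minimal PyTorch sketch below. The network sizes, the stand-in exact transformation `W`, and the weight `lam` are all illustrative assumptions, not taken from the paper.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
dim_a, dim_b = 8, 8

# Stand-in for the slow "exact" A-to-B transformation: a fixed orthogonal
# linear map (invertible by construction). Purely illustrative.
W = torch.linalg.qr(torch.randn(dim_a, dim_b))[0]

def exact_a_to_b(a):
    return a @ W

def mlp(d_in, d_out, hidden=64):
    return nn.Sequential(nn.Linear(d_in, hidden), nn.Tanh(),
                         nn.Linear(hidden, d_out))

f = mlp(dim_a, dim_b)  # fast approximate A-to-B network
g = mlp(dim_b, dim_a)  # fast approximate B-to-A network
opt = torch.optim.Adam(list(f.parameters()) + list(g.parameters()), lr=1e-3)
mse = nn.MSELoss()
lam = 1.0  # assumed weight of the cycle-consistency terms

for step in range(5000):
    a = torch.randn(256, dim_a)   # samples in representation A
    b = exact_a_to_b(a)           # their exact images in representation B
    # Fit both directions, and penalize the round trips so that
    # g(f(a)) ~ a and f(g(b)) ~ b, i.e. f and g become (almost) inverses.
    loss = (mse(f(a), b) + mse(g(b), a)
            + lam * (mse(g(f(a)), a) + mse(f(g(b)), b)))
    opt.zero_grad()
    loss.backward()
    opt.step()
```

After training, a round trip a → f → g should return (nearly) the original a; the cycle terms directly penalize the accumulated round-trip error that the abstract warns about when the two networks are trained separately.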
Appears in Collections: CMUL: Journal Articles
Files in This Item:
There are no files associated with this item.
Items in CMUIR are protected by copyright, with all rights reserved, unless otherwise indicated.