Please use this identifier to cite or link to this item:
http://cmuir.cmu.ac.th/jspui/handle/6653943832/74764
Title: | How to Train A-to-B and B-to-A Neural Networks So That the Resulting Transformations Are (Almost) Exact Inverses |
Authors: | Paravee Maneejuk; Torben Peters; Claus Brenner; Vladik Kreinovich |
Keywords: | Computer Science;Decision Sciences;Economics, Econometrics and Finance;Engineering;Mathematics |
Issue Date: | 1-Jan-2022 |
Abstract: | In many practical situations, there exist several representations, each of which is convenient for some operations, and many data processing algorithms involve transforming back and forth between these representations. Many such transformations are computationally time-consuming when performed exactly. So, taking into account that input data is usually accurate only to within 1–10% anyway, it makes sense to replace time-consuming exact transformations with faster approximate ones. One natural way to get a fast-to-compute approximation to a transformation is to train a corresponding neural network. The problem is that if we train the A-to-B and B-to-A networks separately, the resulting approximate transformations are only approximately inverse to each other. As a result, each time we transform back and forth, we add new approximation error, and the accumulated error may become significant. In this paper, we show how to avoid this accumulation. Specifically, we show how to train A-to-B and B-to-A neural networks so that the resulting transformations are (almost) exact inverses. |
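
Note: the record carries no code, but the idea in the abstract can be illustrated. Below is a minimal, hypothetical PyTorch sketch of one way to make two approximating networks (almost) exact inverses: train them jointly, adding cycle-consistency penalties on g(f(a)) vs. a and f(g(b)) vs. b alongside the ordinary fitting losses. The toy exact transformation (a Cartesian-to-polar map), the network sizes, and the weight lam are illustrative assumptions only; the paper's actual construction may differ.

# Hypothetical sketch: jointly train an A-to-B network f_net and a B-to-A
# network g_net so that each matches its exact transformation AND the pair
# is (almost) a pair of mutual inverses. All names here are illustrative.
import torch
import torch.nn as nn

def make_mlp(dim_in, dim_out, hidden=64):
    # Small fully connected network; architecture is an assumption.
    return nn.Sequential(
        nn.Linear(dim_in, hidden), nn.Tanh(),
        nn.Linear(hidden, hidden), nn.Tanh(),
        nn.Linear(hidden, dim_out),
    )

f_net = make_mlp(2, 2)   # approximates the exact A-to-B transformation
g_net = make_mlp(2, 2)   # approximates the exact B-to-A transformation

opt = torch.optim.Adam(
    list(f_net.parameters()) + list(g_net.parameters()), lr=1e-3
)
mse = nn.MSELoss()
lam = 10.0  # weight of the cycle-consistency terms (assumed value)

def exact_a_to_b(a):
    # Toy "exact but expensive" transformation: Cartesian -> polar.
    x, y = a[:, 0], a[:, 1]
    return torch.stack([torch.sqrt(x**2 + y**2), torch.atan2(y, x)], dim=1)

for step in range(2000):
    a = torch.rand(256, 2) + 0.5   # sample inputs away from the origin
    b = exact_a_to_b(a)            # ground-truth targets

    b_hat = f_net(a)               # approximate A-to-B
    a_hat = g_net(b)               # approximate B-to-A

    # Fitting terms: each network matches the exact transformation.
    fit_loss = mse(b_hat, b) + mse(a_hat, a)
    # Cycle terms: a round trip should return (almost) the original point.
    cycle_loss = mse(g_net(b_hat), a) + mse(f_net(a_hat), b)

    loss = fit_loss + lam * cycle_loss
    opt.zero_grad()
    loss.backward()
    opt.step()

With a large lam, the optimizer trades a little accuracy on each individual map for much tighter round-trip consistency, which is consistent with the effect the abstract describes: approximation error no longer accumulates each time the data is transformed back and forth.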
URI: | https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85135508602&origin=inward |
| http://cmuir.cmu.ac.th/jspui/handle/6653943832/74764 |
ISSN: | 2198-4190; 2198-4182 |
Appears in Collections: | CMUL: Journal Articles |
Files in This Item:
There are no files associated with this item.