Neural-space generalization of a topological transformation

Authors: G. Josin

Affiliation: (1) Neural Systems Incorporated, 2827 West 43rd Avenue, Vancouver, British Columbia V6N 3H9, Canada

Abstract: An investigation is performed to assess the capability of neural network paradigms to generalize, specifically to approximate a two-dimensional coordinate (topological) transformation. The strategy developed here uses this example to give a physical meaning to what is meant by generalization. The example shows how a neural network paradigm can generalize a two-degree-of-freedom topological transformation from Cartesian end-point coordinates to the corresponding joint-angle coordinates based only on examples of the mapping. The importance of this example is that it provides a clear understanding of how and what a neural network is actually computing and brings a theoretical idea to a useful understanding. When the examples characterize the topology, a collective generalization property begins to emerge and the network learns the topology. When the network is then presented with additional examples of the transformation, it can generate the corresponding joint angles for any unseen position, that is, by generalization. It is also significant that the network's generalization property emerges from very few training examples, and that this power is obtained with very few neurons. The results suggest using the paradigm's generalization capability to provide solutions for applications whose algorithms are unknown or intractable.
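The following is a minimal, self-contained sketch (not code from the paper) of the idea described in the abstract: a small feed-forward network is trained by gradient descent on a handful of Cartesian-to-joint-angle examples for a planar two-link arm, then queried at end-point positions it has never seen. The link lengths, joint-angle ranges, network size, and training parameters are illustrative assumptions, not values from the paper.

import numpy as np

rng = np.random.default_rng(0)
L1, L2 = 1.0, 1.0  # assumed link lengths of the two-degree-of-freedom arm

def forward_kinematics(theta):
    # Cartesian end point (x, y) for joint angles (theta1, theta2).
    t1, t2 = theta[..., 0], theta[..., 1]
    x = L1 * np.cos(t1) + L2 * np.cos(t1 + t2)
    y = L1 * np.sin(t1) + L2 * np.sin(t1 + t2)
    return np.stack([x, y], axis=-1)

# Very few training examples, as emphasized in the abstract.  Restricting the
# elbow angle to one sign keeps the inverse mapping single-valued.
train_angles = rng.uniform([0.0, 0.2], [np.pi / 2, np.pi / 2], size=(20, 2))
X_train = forward_kinematics(train_angles)   # inputs: Cartesian end points
Y_train = train_angles                       # targets: joint angles

# Tiny two-layer network: 2 inputs -> 8 hidden units (tanh) -> 2 outputs.
W1 = rng.normal(0.0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 2)); b2 = np.zeros(2)
lr = 0.05

for _ in range(20000):
    h = np.tanh(X_train @ W1 + b1)           # forward pass
    pred = h @ W2 + b2
    err = pred - Y_train                     # gradient of mean-squared error
    dW2 = h.T @ err / len(X_train); db2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h ** 2)       # backpropagate through tanh
    dW1 = X_train.T @ dh / len(X_train); db1 = dh.mean(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

# Generalization test: end-point positions the network has never seen.
test_angles = rng.uniform([0.0, 0.2], [np.pi / 2, np.pi / 2], size=(5, 2))
X_test = forward_kinematics(test_angles)
pred = np.tanh(X_test @ W1 + b1) @ W2 + b2
print("end-point error of predicted joint angles:",
      np.linalg.norm(forward_kinematics(pred) - X_test, axis=1))

The printed values indicate how closely the joint angles predicted for the unseen positions reproduce those positions when passed back through the arm's forward kinematics.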

Keywords: