20 similar documents found (search time: 0 ms)
1.
We developed an efficient neural network algorithm for solving the Multiple Traveling Salesmen Problem (MTSP). A new transformation of the N-city, M-salesmen MTSP to the standard Traveling Salesman Problem (TSP) is introduced. The transformed problem is represented by an expanded version of Hopfield-Tank's neuromorphic city-position map with (N + M - 1) cities and a single fictitious salesman. The dynamic model associated with the problem is based on the Basic Differential Multiplier Method (BDMM) [26], which evaluates Lagrange multipliers simultaneously with the problem's state variables. The algorithm was successfully tested on many problems with up to 30 cities and five salesmen, and it converged to valid solutions in every test case. The great advantage of this kind of algorithm is that it can provide solutions to complex decision-making problems directly by solving a system of ordinary differential equations. No learning steps, logical if-statements, or parameter adjustments are required during the computation. The algorithm can therefore be implemented in hardware to solve complex constraint satisfaction problems such as the MTSP at the speed of analog silicon VLSI devices or possibly future optical neural computers.
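The core of the BDMM dynamics is easy to illustrate: gradient descent on the state variables runs simultaneously with gradient ascent on the Lagrange multipliers, so the constraint is satisfied in the limit without any if-statements. The following minimal sketch applies the idea to a toy equality-constrained problem, not to the paper's (N + M - 1)-city network; all names and parameter values are illustrative.

```python
# Minimal sketch of the Basic Differential Multiplier Method (BDMM):
# descend on the state variables while ascending on the Lagrange
# multiplier. Toy problem (not the paper's MTSP network):
# minimize x1^2 + x2^2 subject to x1 + x2 = 1.
import numpy as np

def bdmm(steps=20000, dt=0.01):
    x = np.zeros(2)      # state variables
    lam = 0.0            # Lagrange multiplier
    for _ in range(steps):
        grad_f = 2.0 * x                      # gradient of the objective
        grad_g = np.ones(2)                   # gradient of the constraint
        g = x.sum() - 1.0                     # constraint residual
        x += dt * (-(grad_f + lam * grad_g))  # descend on x
        lam += dt * g                         # ascend on the multiplier
    return x, lam

x, lam = bdmm()
print(x, lam)  # -> approx [0.5 0.5] and -1.0
```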
2.
A statistical correlation technique (SCT) and two variants of a neural network are presented to solve the motion correspondence problem. Solutions of the motion correspondence problem aim to maintain the identities of individuated elements as they move. In a pre-processing stage, two snapshots of a moving scene are convolved with two-dimensional Gabor functions, which yields the orientations and spatial frequencies of the snapshots at every position. In this paper these properties are used to extract the attributes of line segments: orientation, size, and position. The SCT uses cross-correlations to find the correct translation components, angle of rotation, and scaling factor. These parameters are then used in combination with the positions of the line segments to calculate the centre of motion. When all of these parameters are known, the new positions of the line segments from the first snapshot can be calculated and compared to the features in the second snapshot. This yields the solution of the motion correspondence problem. Since the SCT is an indirect way of solving the problem, the principles of the technique are implemented in interactive activation and competition neural networks. In the presence of boundary effects and noise, these networks perform better than the SCT. They also have the advantage that at every stage of the calculation the best candidates for corresponding pairs of line segments are known.
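As an illustration of the correlation step, the translation component between two snapshots can be recovered from the peak of their cross-correlation. The sketch below shows only that step, computed with the FFT; the Gabor pre-processing, rotation, and scaling estimation of the SCT are omitted, and the arrays are synthetic.

```python
# Illustrative sketch of the translation-recovery step only: the peak
# of the cross-correlation between two snapshots gives the shift.
import numpy as np

rng = np.random.default_rng(0)
a = rng.random((64, 64))                    # first "snapshot"
b = np.roll(a, shift=(5, -3), axis=(0, 1))  # second snapshot, translated

# Circular cross-correlation via the FFT.
corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
# Indices above the midpoint correspond to negative shifts.
dy = dy - a.shape[0] if dy > a.shape[0] // 2 else dy
dx = dx - a.shape[1] if dx > a.shape[1] // 2 else dx
print(dy, dx)  # -> 5 -3
```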
3.
Based on the analysis and comparison of several annealing strategies, we present a flexible annealing chaotic neural network which offers flexible control and a quick convergence rate on optimization problems. The proposed network exhibits rich, adjustable chaotic dynamics at the beginning and then converges quickly to stable states. We test the network on the maximum clique problem using graphs from the DIMACS clique instances as well as p-random and k-random graphs. The simulations show that the flexible annealing chaotic neural network obtains satisfactory solutions in very little time and in few steps. A comparison between our proposed network and other chaotic neural networks shows that the proposed network has superior execution efficiency and a better ability to find optimal or near-optimal solutions.
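The annealing idea behind such networks can be seen in a single transiently chaotic neuron of the Chen-Aihara type: a self-feedback term z(t) injects chaotic search early on and is annealed away so the dynamics settle. This is a generic sketch with textbook parameter values, not the paper's flexible annealing schedule.

```python
# Generic transiently chaotic neuron (after Chen & Aihara); a sketch of
# the annealing idea, not the paper's specific flexible schedule: the
# self-feedback z(t) decays, so the dynamics pass from chaotic wandering
# toward a stable state.
import numpy as np

k, eps, I0, alpha, beta = 0.9, 0.004, 0.65, 0.015, 0.001
y, z = 0.3, 0.08          # internal state and chaotic self-feedback gain
xs = []
for t in range(3000):
    x = 1.0 / (1.0 + np.exp(-y / eps))   # neuron output
    y = k * y - z * (x - I0) + alpha     # damped state + self-feedback
    z *= (1.0 - beta)                    # anneal the chaos away
    xs.append(x)

print("early outputs:", np.round(xs[:5], 3))    # irregular
print("late outputs: ", np.round(xs[-5:], 3))   # settled
```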
4.
In this paper, based on the maximum neural network, we propose a new parallel algorithm for the bipartite subgraph problem that helps the maximum neural network escape from local minima by including transient chaotic neurodynamics. The goal of the bipartite subgraph problem, which is NP-complete, is to remove the minimum number of edges from a given graph such that the remaining graph is bipartite. Lee et al. presented a parallel algorithm using the maximum neural model (winner-take-all neuron model) for this NP-complete problem. The maximum neural model always guarantees a valid solution and greatly reduces the search space without a parameter-tuning burden. However, the model tends to converge to a local minimum easily because it is based on the steepest descent method. By adding a negative self-feedback to the maximum neural network, we propose a new parallel algorithm that introduces richer and more flexible chaotic dynamics and can prevent the network from getting stuck at local minima. After the chaotic dynamics vanish, the proposed algorithm is fundamentally governed by the gradient descent dynamics and usually converges to a stable equilibrium point. The proposed algorithm has the advantages of both the maximum neural network and the chaotic neurodynamics. A large number of instances have been simulated to verify the proposed algorithm. The simulation results show that our algorithm finds optimum or near-optimum solutions for the bipartite subgraph problem that are superior to those of the best existing parallel algorithms.
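The winner-take-all idea is simple to sketch: give each vertex one neuron per side of the partition and let the winner be the side with fewer same-side neighbours, so every state encodes a valid two-colouring. The toy local search below shows only that deterministic core; the paper's negative self-feedback and chaotic dynamics are omitted, and the random graph is illustrative.

```python
# Minimal winner-take-all sketch for the bipartite subgraph problem:
# each vertex keeps the side with fewer same-side neighbours. The
# paper's transient chaotic self-feedback term is omitted here.
import random

random.seed(1)
n = 12
edges = [(i, j) for i in range(n) for j in range(i + 1, n)
         if random.random() < 0.4]
side = [random.randint(0, 1) for _ in range(n)]   # partition label per vertex

def same_side_neighbours(v):
    return sum(1 for (i, j) in edges
               if (i == v or j == v) and side[i] == side[j])

for sweep in range(20):
    changed = False
    for v in range(n):
        before = same_side_neighbours(v)
        side[v] ^= 1                      # try the other side
        if same_side_neighbours(v) >= before:
            side[v] ^= 1                  # revert: no improvement
        else:
            changed = True
    if not changed:
        break

removed = sum(1 for (i, j) in edges if side[i] == side[j])
print("edges removed to make the graph bipartite:", removed)
```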
5.
6.
By analyzing the dynamic behaviors of the transiently chaotic neural network and a greedy heuristic for the maximum independent set (MIS) problem, we present an improved transiently chaotic neural network for the MIS problem in this paper. Extensive simulations are performed, and the results show that the proposed network yields better solutions on p-random graphs than other existing algorithms. The efficiency of the new model is also confirmed by the results on the complement graphs of some DIMACS clique instances from the second DIMACS challenge. Moreover, the improved model uses fewer steps to converge to a stable state in comparison with the original transiently chaotic neural network.
7.
In this paper we conjecture that neuronal networks develop following an optimality principle. We point out that neuronal outgrowth in culture may be seen as the solution of a classical optimization problem: the "Steiner Problem". A neuron might grow minimizing a "cost", which may be determined by the viscoelastic properties of the neuron cytoplasm. We then discuss the role of chemotactic factors such as Nerve Growth Factor (NGF) in optimized neuronal development in vivo. Finally we suggest, with some mathematical arguments, that the optimization of the elastic forces in the growing neuron may give rise to a "fractal" structure.
8.
A neural network architecture for data classification
Lezoray O. International Journal of Neural Systems, 2001, 11(1): 33-42
This article presents a neural network architecture designed for the classification of data distributed among a large number of classes. A significant gain in the global classification rate can be obtained by using this architecture, which is based on a set of several small neural networks, each one discriminating only two classes. The specialization of each neural network simplifies its structure and improves the classification. Moreover, the learning step automatically determines the number of hidden neurons. The discussion is illustrated by tests on databases from the UCI Machine Learning Repository. The experimental results show that this architecture can achieve faster learning, simpler neural networks, and improved classification performance.
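The decomposition described here, one small two-class network per pair of classes, corresponds to a one-vs-one ensemble. The sketch below reproduces the scheme with scikit-learn's OneVsOneClassifier wrapped around a small MLP; it illustrates the decomposition, not the author's exact networks or training procedure.

```python
# One-vs-one decomposition in the spirit of the described architecture
# (a sketch using scikit-learn, not the author's exact networks): one
# small two-class MLP per pair of classes, votes combined at the end.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsOneClassifier
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)           # a 10-class dataset
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# 45 tiny pairwise networks instead of one large 10-class network.
pairwise = MLPClassifier(hidden_layer_sizes=(8,), max_iter=500,
                         random_state=0)
clf = OneVsOneClassifier(pairwise).fit(X_tr, y_tr)
print("test accuracy:", round(clf.score(X_te, y_te), 3))
```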
9.
Wayne M. Getz. Bulletin of Mathematical Biology, 1991, 53(6): 805-823
Several critical issues associated with the processing of olfactory stimuli in animals (but focusing on insects) are discussed with a view to designing a neural network which can process olfactory stimuli. This leads to the construction of a neural network that can learn and identify the quality (direction cosines) of an input vector or extract information from a sequence of correlated input vectors, where the latter corresponds to sampling a time-varying olfactory stimulus (or other generically similar pattern recognition problems). The network is constructed around a discrete-time content-addressable memory (CAM) module which basically satisfies the Hopfield equations with the addition of a unit time delay feedback. This modification improves the convergence properties of the network and is used to control a switch which activates the learning or template formation process when the input is "unknown". The network dynamics are embedded within a sniff cycle which includes a larger time delay (i.e., an integer t_s > 1) that is also used to control the template formation switch. In addition, this time delay is used to modify the input into the CAM module so that the more dominant of two mingling odors, or an odor increasing against a background of odors, is more readily identified. The performance of the network is evaluated using Monte Carlo simulations, and numerical results are presented.
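The CAM module described here is a Hopfield-style memory augmented with a unit time-delay feedback term. The sketch below adds such a delayed term to a standard discrete Hopfield recall loop; the feedback coefficient d is an illustrative assumption, and the paper's learning switch and sniff-cycle machinery are not modelled.

```python
# Sketch of a discrete content-addressable memory with an added unit
# time-delay feedback term (coefficient d is an illustrative choice;
# the template-formation switch is not modelled here).
import numpy as np

rng = np.random.default_rng(0)
patterns = rng.choice([-1, 1], size=(3, 32))        # stored templates
W = (patterns.T @ patterns) / patterns.shape[1]     # Hebbian weights
np.fill_diagonal(W, 0.0)

def recall(probe, d=0.3, steps=20):
    x_prev = probe.copy()
    x = probe.copy()
    for _ in range(steps):
        # Hopfield field plus unit-delay feedback from the previous state.
        x_new = np.sign(W @ x + d * x_prev)
        x_new[x_new == 0] = 1
        x_prev, x = x, x_new
    return x

noisy = patterns[0].copy()
noisy[:6] *= -1                                     # corrupt 6 of 32 bits
out = recall(noisy)
print("overlap with stored pattern:", int(out @ patterns[0]))  # near 32
```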
10.
We studied the dynamics of a neural network that has both recurrent excitatory and random inhibitory connections. Neurons started to become active when a relatively weak transient excitatory signal was presented, and the activity was sustained due to the recurrent excitatory connections. The sustained activity stopped when a strong transient signal was presented or when neurons were disinhibited. The random inhibitory connections modulated the activity patterns of neurons so that the patterns evolved over time without recurring. Hence, the passage of time between the onsets of the two transient signals was represented by the sequence of activity patterns. We then applied this model to represent trace eyeblink conditioning, which is mediated by the hippocampus. We treated this model as CA3 of the hippocampus and considered an output neuron corresponding to a neuron in CA1. The activity pattern of the output neuron was similar to the experimentally observed activity of CA1 neurons during trace eyeblink conditioning.
11.
A model of texture discrimination in visual cortex was built using a feedforward network with lateral interactions among relatively realistic spiking neural elements. The elements have various membrane currents, equilibrium potentials, and time constants, with action potentials and synapses. The model is derived from the modified programs of MacGregor (1987). Gabor-like filters are applied to overlapping regions in the original image; the neural network with lateral excitatory and inhibitory interactions then compares and adjusts the Gabor amplitudes in order to produce the actual texture discrimination. Finally, a combination layer selects and groups various representations in the output of the network to form the final transformed image. We show that both texture segmentation and detection of texture boundaries can be represented in the firing activity of such a network for a wide variety of synthetic and natural images. Performance details depend most strongly on the global balance of strengths of the excitatory and inhibitory lateral interconnections; the spatial distribution of lateral connective strengths has relatively little effect. Detailed temporal firing activities of single elements in the laterally connected network were examined under various stimulus conditions. Results show (as in area 17 of cortex) that a single element's response to image features local to its receptive field can be altered by changes in the global context.
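The Gabor front end of such a model is straightforward to sketch: oriented Gabor kernels respond selectively to texture regions whose dominant orientation matches theirs. The example below builds a two-orientation filter bank and applies it to a synthetic two-texture image; the spiking network and lateral interactions are beyond a short example, and all parameter values are illustrative.

```python
# Sketch of the Gabor front end only: filter an image with a small bank
# of oriented Gabor kernels and compare response amplitudes per region.
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(theta, freq=0.25, sigma=3.0, size=15):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)      # rotated coordinate
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * freq * xr)

# Synthetic two-texture image: vertical stripes left, horizontal right.
img = np.zeros((64, 64))
img[:, :32] = np.sin(2 * np.pi * 0.25 * np.arange(32))[None, :]
img[:, 32:] = np.sin(2 * np.pi * 0.25 * np.arange(64))[:, None]

for theta in (0.0, np.pi / 2):
    resp = np.abs(convolve2d(img, gabor_kernel(theta), mode="same"))
    print(f"theta={theta:.2f}: left={resp[:, :32].mean():.2f} "
          f"right={resp[:, 32:].mean():.2f}")
```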
12.
We propose a new multilayered neural network model which has the ability of rapid self-organization. This model is a modified version of the cognitron (Fukushima, 1975). It has modifiable inhibitory feedback connections, as well as conventional modifiable excitatory feedforward connections, between the cells of adjoining layers. If a feature-extracting cell in the network is excited by a stimulus which is already familiar to the network, the cell immediately feeds back inhibitory signals to its presynaptic cells in the preceding layer, which suppresses their response. On the other hand, the feature-extracting cell does not respond to an unfamiliar feature, and the responses from its presynaptic cells are therefore not suppressed because they do not receive any feedback inhibition. Modifiable synapses in the new network are reinforced in a way similar to those in the cognitron, and synaptic connections from cells yielding a large sustained output are reinforced. Since familiar stimulus features do not elicit a sustained response from the cells of the network, only circuits which detect novel stimulus features develop. The network therefore quickly acquires favorable pattern selectivity by the mere repetitive presentation of a set of learning patterns.
13.
The simultaneous recurrent neural network for addressing the scaling problem in static optimization.
A trainable recurrent neural network, the Simultaneous Recurrent Neural Network, is proposed to address the scaling problem faced by neural network algorithms in static optimization. The proposed algorithm derives its computational power to address the scaling problem from its ability to "learn", in contrast to existing recurrent neural algorithms, which are not trainable. The recurrent backpropagation algorithm is employed to train the recurrent, relaxation-based neural network in order to associate fixed points of the network dynamics with locally optimal solutions of static optimization problems. Performance of the algorithm is tested on the NP-hard Traveling Salesman Problem in the range of 100 to 600 cities. Simulation results indicate that the proposed algorithm is able to consistently locate high-quality solutions for all problem sizes tested. In other words, the proposed algorithm scales demonstrably well with problem size with respect to solution quality, at the expense of increased computational cost for large problem sizes.
14.
Using computer simulations, this paper investigates how input codes affect a minimal computational model of the hippocampal region CA3. Because encoding context seems to be a function of the hippocampus, we have studied problems that require learning context for their solution. Here we study a hippocampally dependent, configural learning problem called transverse patterning. Previously, we showed that the network does not produce long local context codings when the sequential input patterns are orthogonal, and it fails to solve many context-dependent problems in such situations. Here we show that this need not be the case if we assume that the input changes more slowly than a processing interval. Stuttering, i.e., repeating inputs, allows the network to create long local context firings even for orthogonal inputs. With these long local context firings, the network is able to solve the transverse patterning problem. Without stuttering, transverse patterning is not learned. Because stuttering is so useful, we investigate the relationship between the stuttering repetition length and relative context length in a simple, idealized sequence prediction problem. The relative context length, defined as the average length of the local context codes divided by the stuttering length, interacts with activity levels and has an optimal stuttering repetition length. Moreover, the increase in average context length can reach this maximum without loss of relative capacity. Finally, we note that stuttering is an example of maintained or introduced redundancy that can improve neural computations.
Received: 17 April 1997 / Accepted in revised form: 22 June 1998
15.
A neural network which models multistable perception is presented. The network consists of sensor and inner neurons. The dynamics are established by a stochastic neuronal dynamics, a formal Hebb-type coupling dynamics, and a resource mechanism that corresponds to saturation effects in perception. From this, a system of coupled differential equations is derived and analyzed. Single stimuli are bound to exactly one percept, even in ambiguous situations where multistability occurs. The network exhibits discontinuous as well as continuous phase transitions and models various empirical findings, including the percepts of succession, alternative motion, and simultaneity; the percept of oscillation is explained by oscillating percepts at a continuous phase transition. Received: 13 September 1995 / Accepted: 3 June 1996
16.
A neural network for computing eigenvectors and eigenvalues
A dynamic method which produces estimates of real eigenvectors and eigenvalues is presented. More generally, the technique can be applied to estimate the eigenspectra of real n-dimensional k-forms. The proposed approach is based on a spectral splicing property of the line manifolds often found in solutions of polynomial differential equations. As such, it defines an artificial continuous-time neural network with stored memories determined by the eigenspectrum locations. This paradigm provides good insight into the analog behavior of large-scale neural structures which provide auto- or hetero-associative memories. Consequently, it has applications not only in the computational sciences but also as an information processor.
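As a generic illustration of estimating eigen-pairs with continuous-time neural dynamics (not the paper's spectral-splicing construction), a Rayleigh-quotient flow of the form x' = Ax - (x^T A x) x drives a unit vector toward the dominant eigenvector of a symmetric matrix. The sketch below integrates this flow with Euler steps; all values are illustrative.

```python
# Illustrative continuous-time dynamics for eigen-pair estimation (a
# generic Rayleigh-quotient / Oja-type flow, not the paper's method):
# x' = Ax - (x^T A x) x converges to the dominant eigenvector of A.
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = (M + M.T) / 2                      # symmetric test matrix

x = rng.standard_normal(5)
x /= np.linalg.norm(x)
dt = 0.01
for _ in range(20000):
    rayleigh = x @ A @ x               # current eigenvalue estimate
    x += dt * (A @ x - rayleigh * x)   # Euler step of the flow
    x /= np.linalg.norm(x)             # keep on the unit sphere

print("estimated eigenvalue:", round(x @ A @ x, 4))
print("largest true eigenvalue:", round(np.linalg.eigvalsh(A).max(), 4))
```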
17.
An artificial neural network with a two-layer feedback topology and generalized recurrent neurons, for solving nonlinear discrete dynamic optimization problems, is developed. A direct method to assign the weights of the neural network is presented. The method is based on Bellman's Optimality Principle and on the interchange of information which occurs during the synaptic chemical processing among neurons. The neural-network-based algorithm is an advantageous approach for dynamic programming due to the inherent parallelism of neural networks; further, it reduces the severity of computational problems that can occur in conventional methods. Some illustrative application examples, including shortest-path and fuzzy decision-making problems, are presented to show how this approach works.
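The recursion the network encodes is Bellman's optimality principle: the cost-to-go of a state is the minimum over successors of the edge cost plus the successor's cost-to-go. The sketch below applies plain value iteration (not the neural implementation) to a small hypothetical shortest-path instance.

```python
# Sketch of the Bellman optimality recursion (plain dynamic programming,
# not the neural implementation): value iteration for shortest paths.
INF = float("inf")
# edge list: (from, to, cost); node 3 is the goal (hypothetical graph)
edges = [(0, 1, 2.0), (0, 2, 5.0), (1, 2, 1.0), (1, 3, 6.0), (2, 3, 2.0)]
n, goal = 4, 3

value = [INF] * n
value[goal] = 0.0
for _ in range(n - 1):                          # Bellman-Ford style sweeps
    for (u, v, c) in edges:
        value[u] = min(value[u], c + value[v])  # Bellman optimality step

print(value)  # cost-to-go from each node: [5.0, 3.0, 2.0, 0.0]
```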
18.
This paper deals with the problem of representing and generating unconstrained aiming movements of a limb by means of a neural network architecture. The network produced time trajectories of a limb from a starting posture toward targets specified by sensory stimuli; thus the network performed a sensory-motor transformation. The experimenters trained the network using a bell-shaped velocity profile on the trajectories, the type of profile characteristic of most movements performed by biological systems. We investigated the generalization capabilities of the network as well as its internal organization. Experiments performed during learning and on the trained network showed that: (i) the task could be learned by a three-layer sequential network; (ii) the network successfully generalized in trajectory space and adjusted the velocity profiles properly; (iii) the same task could not be learned by a linear network; (iv) after learning, the internal connections became organized into inhibitory and excitatory zones and encoded the main features of the training set; (v) the model was robust to noise on the input signals; (vi) the network exhibited attractor-dynamics properties; (vii) the network was able to solve the motor-equivalence problem. A key feature of this work is the fact that the neural network was coupled to a mechanical model of a limb in which muscles are represented as springs. With this representation the model solved the problem of motor redundancy.
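A standard way to generate such bell-shaped velocity profiles is the minimum-jerk trajectory of Flash and Hogan; the paper does not specify this exact form, so the sketch below is one conventional choice, with speed peaking at the movement midpoint.

```python
# One standard generator of a bell-shaped velocity profile, the
# minimum-jerk trajectory (a conventional choice; not necessarily the
# paper's exact form): position and speed between x0 and xf.
import numpy as np

def minimum_jerk(x0, xf, T, steps=101):
    t = np.linspace(0.0, T, steps)
    tau = t / T
    pos = x0 + (xf - x0) * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)
    vel = (xf - x0) / T * (30 * tau**2 - 60 * tau**3 + 30 * tau**4)
    return t, pos, vel

t, pos, vel = minimum_jerk(x0=0.0, xf=0.3, T=1.0)
print("peak speed at t =", round(t[np.argmax(vel)], 2))  # midpoint: 0.5
```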
19.
A novel neural network approach using the maximum neuron model is presented for N-queens problems. The goal of the N-queens problem is to find a set of locations of N queens on an N×N chessboard such that no pair of queens attacks each other. The maximum neuron model proposed by Takefuji et al. has been applied to two optimization problems in which objective functions are optimized without constraints. This paper demonstrates the effectiveness of the maximum neuron model for constraint satisfaction problems through the N-queens problem. The performance is verified through simulations on problems with up to 500 queens in the sequential mode, the N-parallel mode, and the N²-parallel mode, where our maximum neural network shows far better performance than the existing neural networks. Received: 4 June 1996 / Accepted in revised form: 13 November 1996
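The row-wise maximum neuron idea can be sketched as follows: each row hosts N neurons, and a winner-take-all picks the column with the fewest conflicts, so each row always contains exactly one queen. The simplified sequential-mode sketch below (min-conflicts with random tie-breaking, not Takefuji's exact update rule) usually solves small instances quickly.

```python
# Simplified sequential-mode sketch of the row-wise maximum neuron idea
# (not Takefuji's exact update): each row hosts N neurons and a
# winner-take-all picks the column with the fewest conflicts.
import random

def solve_n_queens(n, sweeps=200, seed=0):
    random.seed(seed)
    col = [random.randrange(n) for _ in range(n)]   # queen column per row

    def conflicts(r, c):
        return sum(1 for r2 in range(n) if r2 != r and
                   (col[r2] == c or abs(col[r2] - c) == abs(r2 - r)))

    for _ in range(sweeps):
        if all(conflicts(r, col[r]) == 0 for r in range(n)):
            return col
        for r in range(n):
            # Winner-take-all over the row's N neurons.
            scores = [conflicts(r, c) for c in range(n)]
            best = min(scores)
            col[r] = random.choice(
                [c for c, s in enumerate(scores) if s == best])
    return None

print(solve_n_queens(8))   # e.g. a valid 8-queens placement, or None
```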
20.
A hierarchical neural network model for associative memory
Kunihiko Fukushima. Biological Cybernetics, 1984, 50(2): 105-113
A hierarchical neural network model with feedback interconnections, which has the function of associative memory and the ability to recognize patterns, is proposed. The model consists of a hierarchical multi-layered network to which efferent connections are added, so as to make positive feedback loops in pairs with afferent connections. The cell-layer at the initial stage of the network is the input layer, which receives the stimulus input and at the same time works as an output layer for associative recall. The deepest layer is the output layer for pattern recognition. Pattern recognition is performed hierarchically by integrating information along converging afferent paths in the network. For the purpose of associative recall, the integrated information is again distributed to lower-order cells by diverging efferent paths. These two operations progress simultaneously in the network. If a fragment of a training pattern is presented to the network which has completed its self-organization, the entire pattern will gradually be recalled in the initial layer. If a stimulus consisting of a number of superposed training patterns is presented, one pattern gradually becomes predominant in the recalled output after competition between the patterns, and the others disappear. At about the same time as the recalled pattern reaches a steady state in the initial layer, a response is elicited in the deepest layer of the network from the cell corresponding to the category of the finally recalled pattern. Once a steady state has been reached, the response of the network is automatically extinguished by inhibitory signals from a steadiness-detecting cell. If the same stimulus is still presented after inhibition, a response for another pattern, formerly suppressed, will now appear, because the cells of the network have adaptation characteristics which make the same response unlikely to recur. Since inhibition occurs repeatedly, the superposed input patterns are recalled one by one in turn.