Similar Literature (20 related records)
1.
Random field models for fitness landscapes
In many cases fitness landscapes are obtained as particular instances of random fields by randomly assigning a large number of parameters. Models of this type are often characterized reasonably well by their covariance matrices. We characterize isotropic random fields on finite graphs in terms of their Fourier series expansions and investigate the relation between the covariance matrix of the random field model and the correlation structure of the individual landscapes constructed from this random field. Correlation measures are a good characteristic of "rugged landscape" models, as they are closely related to quantities such as the number of local optima or the length of adaptive walks. Our formalism suggests approximating a landscape with a known autocorrelation function by a random field model that has the same correlation structure. Received: 10 November 1995 / Revised version: 19 February 1996
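A toy illustration of the link between landscape structure and correlation along walks (a sketch, not the authors' model): assign fitness values to the nodes of a ring graph and estimate the autocorrelation of the fitness sampled along a simple random walk. All names and parameter values below are illustrative.

```python
import math
import random

def walk_autocorrelation(fitness, steps, lag, seed=0):
    """Autocorrelation at `lag` of fitness values sampled along a simple
    random walk on a ring with len(fitness) nodes."""
    rng = random.Random(seed)
    n = len(fitness)
    pos, samples = 0, []
    for _ in range(steps):
        samples.append(fitness[pos])
        pos = (pos + rng.choice((-1, 1))) % n   # step to a random neighbour
    mean = sum(samples) / steps
    var = sum((x - mean) ** 2 for x in samples) / steps
    cov = sum((samples[i] - mean) * (samples[i + lag] - mean)
              for i in range(steps - lag)) / (steps - lag)
    return cov / var

n = 200
smooth = [math.sin(2 * math.pi * i / n) for i in range(n)]  # smooth landscape
gen = random.Random(42)
rugged = [gen.random() for _ in range(n)]                   # uncorrelated ("rugged") landscape

r_smooth = walk_autocorrelation(smooth, 5000, 1)
r_rugged = walk_autocorrelation(rugged, 5000, 1)
```

On the smooth landscape the one-step autocorrelation is close to 1; on the uncorrelated landscape it is close to 0, mirroring how correlation measures separate smooth from rugged landscapes.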

2.
Wireless sensor networks (WSNs) have found more and more applications in a variety of pervasive computing environments, serving as the data-acquisition layer for pervasive applications. However, achieving good performance for data acquisition over WSNs remains a nontrivial and challenging task. Network lifetime and application requirements are two fundamental, yet conflicting, design objectives in wireless sensor networks for tracking mobile objects. The application requirement is often expressed as the delay time within which the application can send its sensing data back to the users in tracking networks. In this paper we study the network lifetime maximization problem and the delay time minimization problem together. To make both problems tractable, we assume that each sensor node keeps working once it turns on. We then formulate the network lifetime maximization problem as maximizing the number of sensor nodes that never turn on, and the delay time minimization problem as minimizing the routing path length, subject to achieving the required tracking tasks. We prove that the problems are NP-complete and APX-complete, and propose three heuristic algorithms to solve them. We present several experiments comparing the network lifetime and delay time achieved by these three algorithms on three topologies: random graphs, grids, and hypercubes. Furthermore, we implement a distributed version of these algorithms.

3.
The tortuosity of an animal's path is a key parameter in orientation and searching behaviours. The tortuosity of an oriented path is inversely related to the efficiency of the orientation mechanism involved, the best mechanism being assumed to allow the animal to reach its goal along a straight-line movement. The tortuosity of a random search path controls the local searching intensity, allowing the animal to adjust its search effort to the local profitability of the environment. This paper shows that (1) the efficiency of an oriented path can be reliably estimated by a straightness index computed as the ratio between the distance from the starting point to the goal and the path length travelled to reach the goal; such a simple index, ranging between 0 and 1, cannot however be applied to random search paths; (2) the tortuosity of a random search path, ranging between straight-line movement and Brownian motion, can be reliably estimated by a sinuosity index which combines the mean cosine of changes of direction with the mean step length; and (3) in the current state of the art, the fractal analysis of animals' paths, which may appear to be an alternative and promising way to measure the tortuosity of a random search path as a fractal dimension ranging between 1 (straight-line movement) and 2 (Brownian motion), is only liable to generate artifactual results. This paper also provides some help for distinguishing between oriented and random search paths, and depicts a general, comprehensive framework for analysing individual animals' paths in a two-dimensional space.
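The straightness index defined in (1) above, beeline distance over travelled length, is straightforward to compute for a discretely sampled path; a minimal sketch (the coordinates are illustrative):

```python
import math

def path_length(points):
    """Total travelled length of a polyline path given as (x, y) points."""
    return sum(math.dist(p, q) for p, q in zip(points, points[1:]))

def straightness_index(points):
    """Beeline distance from start to goal divided by travelled path length.
    Ranges in (0, 1]; 1 means a perfectly straight path."""
    return math.dist(points[0], points[-1]) / path_length(points)
```

For example, `straightness_index([(0, 0), (1, 0), (2, 0)])` is 1.0, while the detour `[(0, 0), (0, 1), (1, 1), (1, 0)]` travels 3 units for a beeline of 1, giving 1/3.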

4.
Mapping the detailed connectivity patterns (connectomes) of neural circuits is a central goal of neuroscience. The best quantitative approach to analyzing connectome data is still unclear, but graph theory has been used with success. We present a graph theoretical model of the posterior lateral line sensorimotor pathway in zebrafish. The model includes 2,616 neurons and 167,114 synaptic connections. Model neurons represent known cell types in zebrafish larvae, and connections were set stochastically following rules based on the biological literature. Thus, our model is a uniquely detailed computational representation of a vertebrate connectome. The connectome has low overall connection density, with 2.45% of all possible connections, a value within the physiological range. We used graph theoretical tools to compare the zebrafish connectome graph to small-world, random, and structured random graphs of the same size. For each type of graph, 100 randomly generated instantiations were considered. Degree distribution (the number of connections per neuron) varied more in the zebrafish graph than in same-size graphs with less biological detail. There was high local clustering and a short average path length between nodes, implying a small-world structure similar to other neural connectomes and complex networks. The graph was found not to be scale-free, in agreement with some other neural connectomes. An experimental lesion was performed that targeted three model brain neurons, including the Mauthner neuron, known to control fast escape turns. The lesion decreased the number of short paths between sensory and motor neurons, analogous to the behavioral effects of the same lesion in zebrafish. This model is expandable and can be used to organize and interpret a growing database of information on the zebrafish connectome.
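The two small-world diagnostics used above, mean shortest path length and mean clustering coefficient, can be computed with plain breadth-first search; a self-contained sketch on an adjacency-set graph (the example graphs below are illustrative, not the zebrafish connectome):

```python
from collections import deque

def avg_shortest_path(adj):
    """Mean shortest-path length over all ordered reachable pairs,
    via a BFS from each node. `adj` maps node -> set of neighbours."""
    total, pairs = 0, 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(d for node, d in dist.items() if node != s)
        pairs += len(dist) - 1
    return total / pairs

def avg_clustering(adj):
    """Mean local clustering coefficient: for each node, the fraction of
    its neighbour pairs that are themselves connected."""
    coeffs = []
    for u in adj:
        nbrs = list(adj[u])
        k = len(nbrs)
        if k < 2:
            coeffs.append(0.0)
            continue
        links = sum(1 for i in range(k) for j in range(i + 1, k)
                    if nbrs[j] in adj[nbrs[i]])
        coeffs.append(2 * links / (k * (k - 1)))
    return sum(coeffs) / len(coeffs)
```

A small-world graph shows short average paths (comparable to a random graph) together with clustering well above that of a random graph.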

5.
We study the probability distribution of the distance d = n + χ − κ − ψ between two genomes with n markers distributed on χ chromosomes and with breakpoint graphs containing κ cycles and ψ "good" paths, under the hypothesis of random gene order. We interpret the random-order assumption in terms of a stochastic method for constructing the bicolored breakpoint graph. We show that the limiting expectation is E[d] = n − χ/2 − (1/2) log((n+χ)/(2χ)). We also calculate the variance, the effect of different numbers of chromosomes in the two genomes, and the number of plasmids, or circular chromosomes, generated by the random breakpoint graph construction. A more realistic model allows intra- and interchromosomal operations to have different probabilities, and simulations show that for a fixed number of rearrangements, κ and d depend on the relative proportions of the two kinds of operation.
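The distance itself is a direct function of the breakpoint-graph counts; a trivial sketch (the numeric counts below are made-up illustrative values, not data from the paper):

```python
def genome_distance(n_markers, n_chromosomes, n_cycles, n_good_paths):
    """Genomic distance d = n + chi - kappa - psi, where n is the number of
    markers, chi the number of chromosomes, kappa the number of cycles and
    psi the number of "good" paths in the breakpoint graph."""
    return n_markers + n_chromosomes - n_cycles - n_good_paths
```

More cycles and good paths in the breakpoint graph mean a smaller rearrangement distance between the two genomes.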

6.
The transport of water and of macromolecules across the glomerular membrane of the kidney depends on the membrane parameters (radius, length and number of pores) as well as on the hydrostatic and oncotic pressures on either side of the membrane. The filtration pressure decreases along the capillary loops from the afferent to the efferent end. Water and solute flows are thus given by a system of two differential equations. The sieving coefficient of a macromolecule is the ratio of solute flow to water flow. In the program described, the differential equations are solved by the fourth-order Runge-Kutta method. Rosenbrock's minimization method is used to fit the theoretical sieving coefficients to the experimental ones. The pore radius, the total pore area per unit of path length, the conductance of the membrane, and the intracapillary hydrostatic pressure and its gradient can thus be determined.
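The fourth-order Runge-Kutta scheme mentioned above, applied to a system of coupled first-order ODEs, can be sketched generically (this is the textbook method, not the glomerular model itself; the harmonic-oscillator check is purely illustrative):

```python
def rk4_step(f, x, y, h):
    """One classic 4th-order Runge-Kutta step for y' = f(x, y),
    where y is a list of state variables."""
    def add(a, b, s):  # componentwise a + s*b
        return [ai + s * bi for ai, bi in zip(a, b)]
    k1 = f(x, y)
    k2 = f(x + h / 2, add(y, k1, h / 2))
    k3 = f(x + h / 2, add(y, k2, h / 2))
    k4 = f(x + h, add(y, k3, h))
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

def integrate(f, x0, y0, x1, steps):
    """Integrate y' = f(x, y) from x0 to x1 with a fixed number of RK4 steps."""
    h = (x1 - x0) / steps
    x, y = x0, list(y0)
    for _ in range(steps):
        y = rk4_step(f, x, y, h)
        x += h
    return y
```

In the glomerular model, `f` would return the derivatives of water and solute flow along the capillary loop, integrated from the afferent to the efferent end.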

7.
Ecological Complexity, 2005, 2(3): 287–299
Individuals in a population susceptible to a disease may be represented as vertices in a network, with the edges that connect vertices representing social and/or spatial contact between individuals. Networks incorporating six different patterns of connection between vertices were created. Both scale-free networks and random graphs showed a different path-length response to increasing levels of clustering than regular lattices. Clustering promoted short path lengths in all network types, but randomly assembled networks displayed a logarithmic relationship between degree and path length, whereas this response was linear in regular lattices. In all cases, small-world models, generated by rewiring the connections of a regular lattice, displayed properties that spanned the gap between random and regular networks. Simulation of a disease in these networks showed a strong response to connectance pattern, even when the numbers of edges and vertices were approximately equal. Epidemic spread was fastest, and reached the largest size, in scale-free networks, followed by random graphs. Regular lattices were the slowest to be infected, and rewired lattices were intermediate between these two extremes. Scale-free networks displayed the capacity to produce an epidemic even at a likelihood of infection too low to produce an epidemic in the other network types. The interaction between the statistical properties of the network and the results of epidemic spread provides a useful tool for assessing the risk of disease spread in more realistic networks.
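A minimal epidemic simulation of the kind described, in which each infected vertex gets one chance to infect each susceptible neighbour, can be sketched as follows (an SIR-like percolation sketch under assumed parameters, not the authors' simulation code):

```python
import random

def simulate_epidemic(adj, p_infect, seed_node=0, rng_seed=0):
    """Discrete-time epidemic on a graph: each newly infected node attempts,
    once, to infect each susceptible neighbour with probability p_infect,
    then plays no further role (SIR-like). Returns the final outbreak size."""
    rng = random.Random(rng_seed)
    infected = {seed_node}
    frontier = [seed_node]
    while frontier:
        new = []
        for u in frontier:
            for v in adj[u]:
                if v not in infected and rng.random() < p_infect:
                    infected.add(v)
                    new.append(v)
        frontier = new
    return len(infected)
```

Running this over ensembles of scale-free, random, lattice, and rewired-lattice graphs, while sweeping `p_infect`, reproduces the kind of comparison made in the abstract.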

8.
The size and complexity of real networked systems hinders access to global knowledge of their structure. This pushes the navigation problem toward suboptimal solutions, one of which is the extraction of a coherent map of the topology on which navigation takes place. In this paper, we present a Markov chain based algorithm to tag networked terms according only to their topological features. The resulting tagging is used to compute similarity between terms, providing a map of the networked information. This map supports local navigation techniques driven by similarity. We evaluate the efficiency of the resulting paths by comparing their length to that of the shortest path. Additionally, we claim that the path steps towards the destination are semantically coherent. To illustrate the algorithm's performance we provide results from the Simple English Wikipedia, which amounts to several thousand pages. The simplest greedy strategy yields an average success rate of over 80%. Furthermore, the resulting content-coherent paths most often have a cost between one and three times the shortest-path length.

9.
The development of kernel-based inhomogeneous random graphs has provided models that are flexible enough to capture many observed characteristics of real networks, and that are also mathematically tractable. We specify a class of inhomogeneous random graph models, called random kernel graphs, that produces sparse graphs with tunable graph properties, and we develop an efficient generation algorithm to sample random instances from this model. As real-world networks are usually large, it is essential that the run time of generation algorithms scales better than quadratically in the number of vertices n. We show that for many practical kernels our algorithm runs in time at most O(n (log n)^2). As a practical example we show how to generate samples of graphs with a power-law degree distribution and tunable assortativity.
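The key to subquadratic generation is skipping over non-edges rather than testing every vertex pair. For the homogeneous special case G(n, p), this is the well-known Batagelj-Brandes skipping sampler, sketched below; the paper's algorithm generalizes this skipping idea to arbitrary kernels (this sketch assumes 0 < p < 1):

```python
import math
import random

def sample_gnp(n, p, seed=0):
    """Sample the edges of an Erdos-Renyi graph G(n, p) in O(n + m) expected
    time: rather than flipping a coin for all O(n^2) pairs, jump between
    successful pairs using geometrically distributed skip lengths.
    Pairs (v, w) are enumerated in lexicographic order with w < v."""
    rng = random.Random(seed)
    edges = []
    v, w = 1, -1
    while v < n:
        r = rng.random()
        # geometric skip: number of pairs to pass over before the next edge
        w += 1 + int(math.log(1 - r) / math.log(1 - p))
        while w >= v and v < n:  # carry the skip over to the next row
            w -= v
            v += 1
        if v < n:
            edges.append((v, w))
    return edges
```

For sparse graphs (p of order (log n)/n or smaller) this touches only the edges that actually exist, which is what makes large-scale generation practical.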

10.
This paper presents the architecture, implementation, and performance evaluation of an adaptive message-passing system for a heterogeneous wide-area ATM cluster, which we call the Adaptive Communication System (ACS). ACS uses multithreading to provide efficient techniques for overlapping computation and communication in wide-area computing. By separating control and data activities, ACS eliminates unnecessary control transfers over the data path, which streamlines the data path and improves performance. ACS supports several different flow control, error control, and multicasting algorithms. Furthermore, ACS allows programmers to select suitable communication schemes at runtime, on a per-connection basis, to meet the requirements of a given application. ACS provides three application communication interfaces to support various classes of applications: the Socket Communication Interface (SCI), the ATM Communication Interface (ACI), and the High Performance Interface (HPI). The SCI is provided mainly for applications that must be portable across many different computing platforms. The ACI provides services compatible with ATM connection-oriented services, where each connection can be configured to meet its Quality of Service (QoS) requirements; this allows programmers to fully utilize the benefits of the ATM network. The HPI supports applications that demand low-latency, high-throughput communication services. In this interface, ACS uses read/write trap routines to reduce latency and data transfer time, and to avoid traditional communication protocols. We analyze and compare the performance of ACS with that of other message-passing systems such as p4, PVM, and MPI in terms of point-to-point, multicasting, and application performance. The benchmarking results show that ACS outperforms the other message-passing systems and provides flexible communication services for various classes of applications.
This revised version was published online in July 2006 with corrections to the Cover Date.

11.
Many authors have claimed to observe animal movement paths that appear to be Lévy walks, i.e. random walks in which the distribution of move lengths follows an inverse power law. A Lévy walk is known to be the optimal search strategy of a particular class of random walks in certain environments; hence, it is important to distinguish correctly between Lévy walks and other types of random walks in observed animal movement paths. Evidence of a power-law distribution of step lengths in observed animal movement paths is often used to classify a particular movement path as a Lévy walk. However, there is some doubt about the accuracy of early studies that apparently found Lévy walk behaviour. A recently accepted method to determine whether a movement path truly exhibits Lévy walk behaviour is based on an analysis of move lengths with a maximum likelihood estimate using Akaike weights. Here, we show that simulated (non-Lévy) random walks representing different types of animal movement behaviour (a composite correlated random walk, pooled data from a set of random walks with different levels of correlation, and three-dimensional correlated random walks projected into one dimension) can all show the apparent power-law behaviour typical of Lévy walks when using the maximum likelihood estimation method. The probability of a movement path being identified as having a power-law step distribution is related to both the sampling rate used by the observer and the way that 'turns' or 'reorientations' in the movement path are designated. However, identification also depends on the nature and properties of the simulated path, and there is currently no standard method of observation and analysis that is robust for all cases. Our results indicate that even apparently robust maximum likelihood methods can lead to a mismatch between pattern and process, as paths arising from non-Lévy walks exhibit Lévy-like patterns.
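The model-selection step referred to above, maximum likelihood fits compared via Akaike weights, can be sketched as follows (generic textbook formulas, not the authors' code; the numeric examples are illustrative):

```python
import math

def mle_power_law(steps, xmin):
    """MLE of the exponent mu of a continuous power law p(x) ~ x^-mu for
    step lengths x >= xmin:  mu = 1 + n / sum(log(x / xmin))."""
    xs = [x for x in steps if x >= xmin]
    return 1 + len(xs) / sum(math.log(x / xmin) for x in xs)

def akaike_weights(log_likelihoods, n_params):
    """Akaike weights for a set of candidate models, from their maximized
    log-likelihoods and parameter counts: AIC = 2k - 2*logL."""
    aics = [2 * k - 2 * ll for ll, k in zip(log_likelihoods, n_params)]
    best = min(aics)
    rel = [math.exp(-(a - best) / 2) for a in aics]
    s = sum(rel)
    return [r / s for r in rel]
```

Fitting, say, a power law and an exponential to the same step-length data and comparing their Akaike weights is exactly the classification step whose robustness the abstract calls into question.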

12.
Wheeler (2012) stated that minimization of ad hoc hypotheses as emphasized by Farris (1983) always leads to a preference for trivial optimizations when analysing unaligned sequence data, leaving no basis for tree choice. That is not correct. Farris's framework can be expressed as maximization of homology, a formulation that has been used to overcome the problems with inapplicables (it leads to the notion of subcharacters as a quantity to be co-minimized in parsimony analysis) and that is known not to lead to a preference for trivial optimizations when analysing unaligned sequence data. Maximization of homology, in turn, can be formulated as a minimization of ad hoc hypotheses of homoplasy in the sense of Farris, as shown here. These issues are not just theoretical but have empirical relevance. It is therefore also discussed how maximization of homology can be approximated under various weighting schemes in heuristic tree alignment programs, such as POY, that do not take into account subcharacters. Empirical analyses that use the so-called 3221 cost set (gap opening cost three, transversion and transition costs two, and gap extension cost one), the cost set that is known to be an optimal approximation under equally weighted homology in POY, are briefly reviewed. From a theoretical point of view, maximization of homology provides the general framework to understand such cost sets in terms that are biologically relevant and meaningful. Whether or not embedded in a sensitivity analysis, this is not the case for minimization of a cost that is defined in operational terms only. Neither is it the case for minimization of equally weighted transformations, a known problem that is not addressed by Kluge and Grant's (2006) proposal to invoke the anti-superfluity principle as a rationale for this minimization.

13.
It is difficult to watch wild animals while they move, so biologists often analyse characteristics of animal movement paths. One common path characteristic is tortuosity, measured using the fractal dimension (D). The typical method for estimating fractal D, the divider method, is biased and imprecise. The bias occurs because the path length is truncated. I present a method for minimising the truncation error. The imprecision occurs because the divider steps sometimes land inside the bends of curves and sometimes miss the curves. I present three methods for minimising this variation and test the methods with simulated correlated random walks. The traditional divider method significantly overestimates fractal D when paths are short and the range of spatial scales is narrow. The best method to overcome these problems consists of walking the dividers forwards and backwards along the path, and then estimating the path length remaining at the end of the last divider step.
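A minimal version of the divider method reads as follows (a sketch under simplifying assumptions: the path is densely resampled and walked in one direction only, without the forward-and-backward correction or the truncation fix proposed in the abstract):

```python
import math

def resample(points, ds):
    """Resample a polyline at approximately spacing ds along its length."""
    out = [points[0]]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        seg = math.dist((x0, y0), (x1, y1))
        k = max(1, int(seg / ds))
        for j in range(1, k + 1):
            out.append((x0 + (x1 - x0) * j / k, y0 + (y1 - y0) * j / k))
    return out

def divider_length(points, step, ds=None):
    """Approximate divider (caliper) length: walk from the start, repeatedly
    jumping to the first resampled path point at straight-line distance
    >= step from the current anchor. Truncates the leftover tail."""
    pts = resample(points, ds or step / 50)
    anchor, i, n_steps = pts[0], 0, 0
    while True:
        j = i + 1
        while j < len(pts) and math.dist(anchor, pts[j]) < step:
            j += 1
        if j >= len(pts):
            break  # leftover tail shorter than one divider step is dropped
        anchor, i, n_steps = pts[j], j, n_steps + 1
    return n_steps * step

def fractal_d(points, s1, s2):
    """Divider estimate of fractal D from two step sizes, using
    log L = (1 - D) log s + c, i.e. D = 1 - slope."""
    l1, l2 = divider_length(points, s1), divider_length(points, s2)
    return 1 - (math.log(l1) - math.log(l2)) / (math.log(s1) - math.log(s2))
```

For a perfectly straight path the measured length is nearly independent of the divider step, so the estimated D is close to 1; the truncation of the leftover tail is exactly the source of bias the abstract describes.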

14.
An algorithm is proposed for the conversion of a virtual-bond polypeptide chain (connected C alpha atoms) to an all-atom backbone, based on determining the most extensive hydrogen-bond network between the peptide groups of the backbone, while maintaining all of the backbone atoms in energetically feasible conformations. Hydrogen bonding is represented by aligning the peptide-group dipoles. These peptide groups are not necessarily contiguous in the amino acid sequence. The first dipoles to be aligned are those that are sufficiently close in space to be arranged in approximately linear arrays termed dipole paths. The criteria used in the construction of dipole paths are: to assure good alignment of the greatest possible number of dipoles that are close in space; to optimize the electrostatic interactions between dipoles that belong to different paths close in space; and to avoid locally unfavorable amino acid residue conformations. The equations for dipole alignment are solved separately for each path, and then the remaining single dipoles are aligned optimally with the electrostatic field from the dipoles that belong to the dipole-path network. A least-squares minimizer is used to keep the geometry of the alpha-carbon trace of the resulting backbone close to that of the input virtual-bond chain. This procedure is sufficient to convert the virtual-bond chain to a real chain; in applications to real systems, however, the final structure is obtained by minimizing the total ECEPP/2 (empirical conformational energy program for peptides) energy of the system, starting from the geometry resulting from the solution of the alignment equations. When applied to model alpha-helical and beta-sheet structures, the algorithm, followed by ECEPP/2 energy minimization, resulted in an energy and backbone geometry characteristic of these alpha-helical and beta-sheet structures.
Application to the alpha-carbon trace of the backbone of the crystallographic 5PTI structure of bovine pancreatic trypsin inhibitor, followed by ECEPP/2 energy minimization with C alpha-distance constraints, led to a structure with almost as low an energy and root mean square deviation as the ECEPP/2 geometry analog of 5PTI, the best agreement between the crystal and reconstructed backbones being observed for the residues involved in the dipole-path network.

15.
We describe a new distance measure for comparing DNA sequence profiles. For this measure, columns in a multiple alignment are treated as character frequency vectors (with frequencies summing to one). The distance between two vectors is based on the minimum path length along an entropy surface. Path length is estimated using a random graph generated on the entropy surface and Dijkstra's algorithm for all shortest paths to a source. We use the new distance measure to analyze similarities within families of tandem repeats in the C. elegans genome and show that this new measure gives a more accurate refinement of family relationships than a method based on comparing consensus sequences.

16.
The microarray layout problem is a generalization of the border length minimization problem: it asks how to distribute oligonucleotide probes on a microarray and determine their embeddings in the deposition sequence in such a way that the overall quality of the resulting synthesized probes is maximized. Because of its inherent computational complexity, it is traditionally attacked in several phases: partitioning, placement, and re-embedding. We present the first algorithm, Greedy+, that combines placement and embedding, and that results in improved layouts in terms of border length and conflict index (a more realistic measure of probe quality), both on arrays of random probes and on existing Affymetrix GeneChip arrays. We also present a detailed study of the layouts of the latest GeneChip arrays, and show how Greedy+ can further improve layout quality by as much as 12% in terms of border length and 35% in terms of conflict index.

17.
Numerous studies have concluded that primates move about their environments in a nonrandom manner, frequently traveling between consecutive foraging sites along relatively straight-line paths. However, primates do not always take the most direct path between resources, and a number of species have been observed to travel repeatedly along a network of the same arboreal pathways. In this study, I used spatially explicit techniques to examine quantitatively what mantled howler monkey groups on Barro Colorado Island, Panama, accomplish by selecting nonlinear paths between resources and by repeatedly using the same paths within an arboreal network. Results show that chosen arboreal paths between sites where foraging occurred have higher levels of resource availability and canopy connectivity than comparable straight-line paths between the same sites. When comparing the relative importance of these factors, autologistic models of pathway choice indicate that though canopy connectivity is related to the location of repeatedly used arboreal pathway networks, the most statistically significant predictor is resource availability (both on a path and within a visual detection distance of a path). These results provide support for the hypothesis that repeated use of arboreal pathway networks aids in resource monitoring and acquisition. In addition, statistical models developed from one primary focal group's travel patterns had high predictive value when employed to generate likely locations for arboreal pathways in the home ranges of three neighboring groups. This finding has important implications for studies of primate habitat use and seed dispersal, as it suggests that different groups consistently use similar characteristics when deciding on travel paths.

18.
Daily MD, Upadhyaya TJ, Gray JJ. Proteins, 2008, 71(1): 455–466
Allosteric proteins bind an effector molecule at one site, resulting in a functional change at a second site. We hypothesize that networks of contacts that are altered, formed, or broken are a significant contributor to allosteric communication in proteins. In this work, we identify which interactions change significantly between the residue-residue contact networks of two allosteric structures, and then organize these changes into graphs. We perform the analysis on 15 pairs of allosteric structures, with effector and substrate each present in at least one of the two structures. Most proteins exhibit large, dense regions of contact rearrangement, and the graphs form connected paths between allosteric effector and substrate sites in five of these proteins. In the remaining 10 proteins, large-scale conformational changes such as rigid-body motions are likely required in addition to contact rearrangement networks to account for substrate-effector communication. On average, clusters which contain at least one substrate or effector molecule comprise 20% of the protein. These allosteric graphs are small worlds: they typically have mean shortest path lengths comparable to those of corresponding random graphs and average clustering coefficients enhanced relative to those of random graphs. The networks capture 60-80% of known allostery-perturbing mutants in three proteins, and the metrics degree and closeness are statistically good discriminators of mutant residues from nonmutant residues within the networks in two of these three proteins. For two proteins, coevolving clusters of residues which have been hypothesized to be allosterically important differ from the regions with the most contact rearrangement. Residues and contacts which modulate normal-mode fluctuations also often participate in the contact rearrangement networks.
In summary, residue-residue contact rearrangement networks provide useful representations of the portions of allosteric pathways resulting from coupled local motions.

19.
The interpretation of large-scale protein network data depends on our ability to identify significant substructures in the data, a computationally intensive task. Here we adapt and extend efficient techniques for finding paths and trees in graphs to the problem of identifying pathways in protein interaction networks. We present linear-time algorithms for finding paths and trees in networks under several biologically motivated constraints. We apply our methodology to search for protein pathways in the yeast protein-protein interaction network. We demonstrate that our algorithm is capable of reconstructing known signaling pathways and identifying functionally enriched paths and trees in an unsupervised manner. The algorithm is very efficient, computing optimal paths of length 8 within minutes and paths of length 10 in about three hours.

20.
A large body of experimental evidence suggests that the hippocampal place field system is involved in reward-based navigation learning in rodents. Reinforcement learning (RL) mechanisms have been used to model this, associating the state space of an RL algorithm with the place-field map of a rat. The convergence properties of RL algorithms are affected by the exploration patterns of the learner. Therefore, we first analyzed the path characteristics of freely exploring rats in a test arena. We found that straight path segments, with a mean length of 23 cm and up to a maximal length of 80 cm, take up a significant proportion of the total paths. Thus, rat paths are biased as compared to random exploration. Next we designed an RL system that reproduces these specific path characteristics. Our model arena is covered by overlapping, probabilistically firing place fields (PFs) of realistic size and coverage. Because the convergence of RL algorithms is also influenced by the characteristics of the state space, different PF sizes and densities, leading to different degrees of overlap, were also investigated. The model rat learns to find a reward opposite its starting point. We observed that the combination of biased straight exploration, overlapping coverage, and probabilistic firing strongly impairs the convergence of learning. When the degree of randomness in the exploration is increased, convergence improves, but the distribution of straight path segments becomes unrealistic and paths become 'wiggly'. To mend this situation without affecting the path characteristics, two additional mechanisms were implemented: a gradual decay of the learned weights (weight decay) and path length limitation, which prevents learning if the reward is not found after some expected time. Both mechanisms limit the memory of the system and thereby counteract the effects of getting trapped on a wrong path.
When these strategies are used individually, divergent cases are substantially reduced, and for some parameter settings no divergence is found at all. When weight decay and path length limitation are used together, convergence is not much improved; instead, time to convergence increases, as the memory-limiting effect becomes too strong. The degree of improvement also depends on the size and degree of overlap (coverage density) of the place field system, and the combination of these two parameters leads to a trade-off between convergence and speed of convergence. Thus, this study suggests that the role of the PF system in navigation learning cannot be considered independently of the animal's exploration pattern.

