Similar literature
Found 20 similar records (search time: 15 ms)
1.
Various bioinformatics problems require optimizing several different properties simultaneously. For example, in the protein threading problem, a scoring function combines the values of different parameters of possible sequence-to-structure alignments into a single score to allow for unambiguous optimization. In this context, an essential question is how each property should be weighted. Since the native structures are known for some sequences, a partial ordering on optimal alignments to other structures, e.g., derived from structural comparisons, may be used to adjust the weights. To resolve the resulting interdependence of weights and computed solutions, we propose a heuristic approach: iterating the computation of solutions (here, threading alignments) given the weights and the estimation of optimal weights of the scoring function given these solutions via systematic calibration methods. For our application (i.e., threading), this iterative approach yields structurally meaningful weights that significantly improve performance on both the training and the test data sets. In addition, the optimized parameters significantly improve the recognition rate on a greatly enlarged comprehensive benchmark, under a modified recognition protocol, and with modified alignment types (local instead of global, and profiles instead of single sequences). These results demonstrate the general validity of the optimized weights for the given threading program and the associated scoring contributions.
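A minimal sketch of the alternating heuristic described above, on invented toy data: candidate alignments are reduced to feature vectors, a reference (native-like) candidate is known for each target, and the weights are nudged whenever the current best-scoring candidate disagrees with the reference. The perceptron-style update is an illustrative stand-in for the paper's systematic calibration methods, not its actual procedure.

```python
# Toy sketch: alternate between (a) picking the best-scoring candidate under the
# current weights and (b) re-estimating the weights from the mismatch between
# that candidate and the known reference. All feature values are invented.

def score(w, feats):
    return sum(wi * fi for wi, fi in zip(w, feats))

def best(w, cands):
    # index of the highest-scoring candidate under the current weights
    return max(range(len(cands)), key=lambda i: score(w, cands[i]))

def calibrate(targets, w, lr=0.1, sweeps=100):
    # targets: list of (candidates, reference_index); each candidate is a
    # feature tuple (e.g. hypothetical contact and burial contributions).
    for _ in range(sweeps):
        changed = False
        for cands, ref in targets:
            b = best(w, cands)
            if b != ref:  # nudge weights toward the reference candidate
                w = [wi + lr * (fr - fb)
                     for wi, fr, fb in zip(w, cands[ref], cands[b])]
                changed = True
        if not changed:  # weights now rank every reference candidate first
            break
    return w

targets = [
    ([(1, 0), (0, 2), (2, 1)], 2),
    ([(0, 1), (1, 2), (3, 0)], 2),
    ([(1, 1), (2, 3)], 1),
]
weights = calibrate(targets, [1.0, 1.0])
```

On this separable toy data the loop converges in a few sweeps; the real setting replaces `best` with a threading alignment computation.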

2.
The growth of males sampled from two mouse lines selected long-term, over 86 generations, for body weight (DU6) or protein amount (DU6P) was analysed from birth to 120 days of age and compared with the growth of an unselected control line (DUKs). Animals from the selected lines are already approximately 40 to 50% heavier at birth than the controls, and this divergence increases to about 210 to 240% by 120 days of age. With birth weights of 2.2 and 2.4 g and weights of 78 and 89 g at day 120, these selection lines are the heaviest known mouse lines.

The fits of three modified non-linear growth functions (Gompertz, Logistic and Richards) were compared and the effects of three different data inputs elucidated. The modification was undertaken so that the parameters have a direct biological meaning: A, theoretical final body weight; B, maximum weight gain; C, age at maximum weight gain; D (Richards function only), which determines the position of the inflection point relative to the final weight. All three models fit the observed data very well (r² = 0.949–0.998), with a slight advantage for the Richards function. There were no substantial effects of the data input (averages, single values, or fitting a curve for every animal with subsequent averaging of the parameters).

The rapid growth of the selected mice is connected with very substantial changes in the final weight and in the maximum weight gain, whereas the changes in the age at the point of inflection, although partially significant, were relatively small and dependent on the model used.
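The reparameterized Gompertz curve described above can be written down directly: with k = B·e/A, the slope at the inflection point t = C equals B, and the asymptote equals A. A minimal sketch; the numerical values are illustrative only, not estimates from the paper.

```python
import math

# Reparameterized Gompertz: A = theoretical final weight, B = maximum weight
# gain, C = age at maximum gain. W(t) = A*exp(-exp(-k*(t - C))) with
# k = B*e/A, so that dW/dt at the inflection point t = C is exactly B.
def gompertz(t, A, B, C):
    k = B * math.e / A
    return A * math.exp(-math.exp(-k * (t - C)))

# Illustrative values only (roughly the scale of the heaviest line):
A, B, C = 89.0, 1.1, 35.0
weight_at_inflection = gompertz(C, A, B, C)   # equals A/e by construction
```

The Logistic and Richards variants differ only in the shape of the curve around the inflection point; fitting would proceed by non-linear least squares on observed weights.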


3.
A statistical mechanical theory of the helix-coil transition in sequential polypeptides is developed, assuming that the statistical weights of the Zimm-Bragg parameters of a given residue depend on the type of the adjacent residues. In the case of a sequential polypeptide consisting of two kinds of residues, the theory describes the helix-coil transition of the polypeptide in terms of the Zimm-Bragg parameters associated with the corresponding residues. The theory is then used to determine this parameter, as a function of temperature, from experimental data for the transition temperature as a function of solvent composition for a series of sequential polypeptides consisting of Glu(OBzl) and Lys(Cbz) residues in mixtures of dichloroacetic acid and 1,2-dichloroethane. This parameter is then combined with the Zimm-Bragg parameters for the parent homopolypeptides, and the theory is used to predict helix-coil transition curves that are in good agreement with the experimental ones for the sequential polypeptides studied.
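A sketch of the homopolymer Zimm-Bragg machinery such theories build on (the sequential-copolymer theory above generalizes the transfer matrix to residue-type-dependent parameters): a 2×2 transfer matrix gives the partition function, and the fractional helicity follows from a logarithmic derivative. Parameter values below are illustrative assumptions.

```python
import math

# Homopolymer Zimm-Bragg sketch. State 0 = helix, state 1 = coil; every
# helical residue carries a factor s, helix nucleation an extra factor sigma.

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def partition(N, s, sigma):
    M = [[s, 1.0], [sigma * s, 1.0]]   # rows: previous state, cols: next state
    P = [[1.0, 0.0], [0.0, 1.0]]
    for _ in range(N):
        P = matmul(P, M)
    return P[1][0] + P[1][1]           # chain preceded by a virtual coil

def helicity(N, s, sigma, h=1e-6):
    # fractional helix content: theta = (1/N) * d ln Z / d ln s
    lp = math.log(partition(N, s * math.exp(h), sigma))
    lm = math.log(partition(N, s * math.exp(-h), sigma))
    return (lp - lm) / (2.0 * h * N)

theta_low = helicity(100, 0.8, 1e-3)   # below the transition midpoint
theta_high = helicity(100, 1.2, 1e-3)  # above the transition midpoint
```

Since each helical residue contributes exactly one factor of s, the logarithmic derivative is the mean helical fraction and lies strictly between 0 and 1.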

4.
Summary: A theoretical comparison between two multiple-trait selection methods, index and tandem selection, after several generations of selection was carried out. An infinite number of loci determining the traits, directional and truncation selection, discrete generations and infinite population size were assumed. Under these assumptions, changes in genetic parameters over generations are due to linkage disequilibrium generated by selection. Changes continue for several generations until equilibrium is approached. Algebraic expressions for asymptotic responses to index selection can be derived if the index weights are held constant across generations. Expressions at equilibrium for genetic parameters and responses are given for the index and its component traits. The loss in response from using the initial index weights throughout all generations, instead of updating them to account for changes in genetic parameters, was analysed. The benefit of using optimum weights was very small, ranging from 0% to about 1.5% in all cases studied. Recurrence formulae to predict genetic parameters and responses at each generation of selection are given for both index and tandem selection. A comparison of the expected response in the aggregate genotype at equilibrium under index and tandem selection is made for two traits of economic importance. The results indicate that although index selection is more efficient for improving the aggregate breeding value, its efficiency relative to tandem selection decreases after repeated cycles of selection. The reduction in relative efficiency is greatest with high selection intensity and heritabilities and with negative correlations between the two traits.
The advantage of index over tandem selection might be further reduced if changes in genetic parameters due to gene-frequency changes produced by selection, random fluctuations due to finite population size, and errors in parameter estimation were also considered.
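One way to make the index concrete: the classical (Smith-Hazel) selection-index weights b = P⁻¹Ga, with P the phenotypic covariance matrix, G the genetic covariance matrix and a the economic weights. This is the textbook starting point for such comparisons, not code from the paper, and the 2×2 covariance values are invented for illustration.

```python
# Classical selection-index weights b = P^-1 G a for two traits.

def inv2(m):
    (p, q), (r, s) = m
    det = p * s - q * r
    return [[s / det, -q / det], [-r / det, p / det]]

def matvec(m, v):
    return [m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1]]

P = [[1.0, 0.2], [0.2, 1.0]]       # phenotypic (co)variances, illustrative
G = [[0.4, -0.1], [-0.1, 0.25]]    # genetic (co)variances, illustrative
a = [1.0, 1.0]                     # economic weights
b = matvec(inv2(P), matvec(G, a))  # index weights
```

Under selection, linkage disequilibrium changes P and G each generation, which is why the paper asks whether b should be updated; its answer is that the gain from updating is small.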

5.
We have developed a variable gap penalty function for use in the comparison program COMPARER, which aligns protein sequences on the basis of their 3-D structures. For deletions and insertions, the components are a function of structural features of individual amino acid residues (e.g. secondary structure and accessibility). We have also obtained relative weights for the different features used in the comparison by examining the equivalent residues in weight matrices and in alignments for pairs of 3-D structures where the equivalences are relatively unambiguous. We have used the new parameters and the variable gap penalty function in COMPARER to align protein structures from the Brookhaven Data Bank. The variable gap penalty function is especially useful in avoiding gaps in secondary structure elements, and the new feature weights give improved alignments. Alignments of azurins with plastocyanins and of the N- and C-terminal lobes of aspartic proteinases are discussed.

6.
We propose a framework for constructing and training a radial basis function (RBF) neural network. The structure of the Gaussian functions is modified using a pseudo-Gaussian function (PG) in which two scaling parameters σ are introduced; this eliminates the symmetry restriction and gives the neurons in the hidden layer greater flexibility for function approximation. We propose a modified PG-BF (pseudo-Gaussian basis function) network in which regression weights replace the constant weights in the output layer. For this purpose, a sequential learning algorithm is presented that adapts the structure of the network, making it possible to create new hidden units and also to detect and remove inactive ones. A salient feature of the network is that the overall output is calculated as the weighted average of the outputs associated with each receptive field. The superior performance of the proposed PG-BF system over the standard RBF is illustrated on the problem of short-term prediction of chaotic time series.
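A minimal sketch of the two ingredients named above: a pseudo-Gaussian basis function with separate widths on either side of the centre, and a weighted-average (normalized) output in which each unit contributes a local regression model a + b·x instead of a constant weight. Function names, the one-dimensional setting, and all values are illustrative assumptions, not the authors' implementation.

```python
import math

def pseudo_gaussian(x, c, sigma_left, sigma_right):
    # Two width parameters remove the symmetry restriction of a plain Gaussian.
    s = sigma_left if x < c else sigma_right
    return math.exp(-((x - c) ** 2) / (2.0 * s ** 2))

def pg_network(x, centres, widths, reg_weights):
    # widths: per-unit (sigma_left, sigma_right); reg_weights: per-unit (a, b).
    phi = [pseudo_gaussian(x, c, sl, sr)
           for c, (sl, sr) in zip(centres, widths)]
    num = sum(p * (a + b * x) for p, (a, b) in zip(phi, reg_weights))
    return num / sum(phi)   # weighted average over receptive fields
```

With a single unit the network output reduces exactly to that unit's regression model, which is the point of replacing constant output weights.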

7.
Gray RJ. Biometrics 2000, 56(2): 571-576.
An estimator of the regression parameters in a semiparametric transformed linear survival model is examined. The estimator consists of a single Newton-like update, from an initial consistent estimator, of the solution to a rank-based estimating equation. An automated penalized likelihood algorithm is proposed for estimating the optimal weight function for the estimating equations and the error hazard function needed in the variance estimator. In simulations, the estimated optimal weights are found to give reasonably efficient estimators of the regression parameters, and the variance estimators are found to perform well. The methodology is applied to an analysis of prognostic factors in non-Hodgkin's lymphoma.

8.
Summary: The standard estimator of the cause-specific cumulative incidence function in a competing risks setting with left-truncated and/or right-censored data can be written in two alternative forms: one is a weighted empirical cumulative distribution function, the other a product-limit estimator. This equivalence suggests an alternative view of the analysis of time-to-event data with left truncation and right censoring: individuals who are still at risk or who experienced an earlier competing event receive weights derived from the censoring and truncation mechanisms. As a consequence, inference on the cumulative scale can be performed using weighted versions of standard procedures. This holds for estimation of the cause-specific cumulative incidence function as well as for estimation of the regression parameters in the Fine and Gray proportional subdistribution hazards model. We show that, with the appropriate filtration, a martingale property holds that allows asymptotic results for the proportional subdistribution hazards model to be derived in the same way as for the standard Cox proportional hazards model. Estimation of the cause-specific cumulative incidence function and regression on the subdistribution hazard can be performed with standard survival-analysis software, provided it allows for the inclusion of time-dependent weights. We show the implementation in the R statistical package. The proportional subdistribution hazards model is used to investigate the effect of calendar period, as a deterministic external time-varying covariate that can be seen as a special case of left truncation, on AIDS-related and non-AIDS-related cumulative mortality.

9.
The principles by which networks of neurons compute, and how spike-timing-dependent plasticity (STDP) of synaptic weights generates and maintains their computational function, are unknown. Preceding work has shown that soft winner-take-all (WTA) circuits, in which pyramidal neurons inhibit each other via interneurons, are a common motif of cortical microcircuits. We show through theoretical analysis and computer simulations that Bayesian computation is induced in these network motifs by STDP in combination with activity-dependent changes in the excitability of neurons. The fundamental components of this emergent Bayesian computation are priors, which result from the adaptation of neuronal excitability, and implicit generative models for hidden causes, which are created in the synaptic weights through STDP. A surprising result is that STDP is able to approximate a powerful principle for fitting such implicit generative models to high-dimensional spike inputs: Expectation Maximization. Our results suggest that the experimentally observed spontaneous activity and trial-to-trial variability of cortical neurons are essential features of their information-processing capability, since their functional role is to represent probability distributions rather than static neural codes. Furthermore, they suggest networks of Bayesian computation modules as a new model for distributed information processing in the cortex.

10.
Experimental investigations have revealed that synapses possess interesting and, in some cases, unexpected properties. We propose a theoretical framework that accounts for three of these properties: typical central synapses are noisy, the distribution of synaptic weights among central synapses is wide, and synaptic connectivity between neurons is sparse. We also comment on the possibility that synaptic weights vary in discrete steps. Our approach is based on maximizing the information storage capacity of neural tissue under resource constraints. Based on previous experimental and theoretical work, we use volume as the limited resource and utilize the empirical relationship between volume and synaptic weight. The solutions of our constrained optimization problems are not only consistent with existing experimental measurements but also make nontrivial predictions.

11.
Belovsky's (1978) model for optimal body size in moose is applied to red deer, and criticisms of the model are raised. In particular, the equation he derives relating net energy intake to body weight is shown to be fundamentally invalid. Modifications are suggested using existing published information and an analysis of previously unpublished data on red deer stomach size as a function of body weight. I conclude that quantitative predictions of optimal body size are likely to be very difficult, as slight changes in the relevant parameters lead to huge changes in the predicted optimal weights.
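The sensitivity conclusion above can be illustrated numerically under a generic net-energy model E(W) = a·W^p − c·W (allometric intake minus linear cost; these are illustrative assumptions, not Belovsky's actual equations). Setting dE/dW = 0 gives the optimum W* = (a·p/c)^(1/(1−p)), and a small shift in the exponent p moves W* severalfold.

```python
# Illustrative net-energy model and its analytic optimum; all parameter values
# are invented, chosen only to show parameter sensitivity.

def net_energy(W, a, c, p):
    return a * W ** p - c * W

def optimal_weight(a, c, p):
    # from d/dW (a*W**p - c*W) = 0  =>  W* = (a*p/c)**(1/(1-p))
    return (a * p / c) ** (1.0 / (1.0 - p))

w1 = optimal_weight(10.0, 1.0, 0.70)
w2 = optimal_weight(10.0, 1.0, 0.75)   # small change in the allometric exponent
```

With these numbers the optimum moves from about 656 to about 3164, a nearly fivefold change from shifting the exponent by only 0.05.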

12.
In neuronal networks, the changes in synaptic strength (or weight) produced by spike-timing-dependent plasticity (STDP) are hypothesized to give rise to functional network structure. This article investigates how this occurs for the excitatory recurrent connections of a network with fixed input weights that is stimulated by external spike trains. We develop a theoretical framework based on the Poisson neuron model to analyse the interplay between neuronal activity (firing rates and spike-time correlations) and the learning dynamics when the network is stimulated by correlated pools of homogeneous Poisson spike trains. STDP can lead both to a stabilization of all the neuronal firing rates (homeostatic equilibrium) and to robust weight specialization. The pattern of specialization of the recurrent weights is determined by the relationship between the input firing-rate and correlation structures, the network topology, the STDP parameters and the synaptic response properties. We find conditions under which feed-forward pathways or areas with strengthened self-feedback emerge in an initially homogeneous recurrent network.

13.
Dunson DB, Park JH. Biometrika 2008, 95(2): 307-323.
We propose a class of kernel stick-breaking processes for uncountable collections of dependent random probability measures. The process is constructed by first introducing an infinite sequence of random locations. Independent random probability measures and beta-distributed random weights are assigned to each location. Predictor-dependent random probability measures are then constructed by mixing over the locations, with stick-breaking probabilities expressed as a kernel multiplied by the beta weights. Some theoretical properties of the process are described, including a covariate-dependent prediction rule. A retrospective Markov chain Monte Carlo algorithm is developed for posterior computation, and the methods are illustrated using a simulated example and an epidemiological application.
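The construction described above can be sketched directly: predictor-dependent probabilities π_h(x) = K(x, Γ_h)·V_h·∏_{l<h}(1 − K(x, Γ_l)·V_l), truncated at H sticks. The choice of a Gaussian kernel and all numerical values are illustrative assumptions.

```python
import math, random

# Truncated kernel stick-breaking weights at predictor value x.
def ksbp_weights(x, locations, V, bandwidth=1.0):
    weights, remaining = [], 1.0
    for loc, v in zip(locations, V):
        k = math.exp(-((x - loc) ** 2) / (2.0 * bandwidth ** 2))  # kernel K(x, loc)
        weights.append(k * v * remaining)   # mass broken off at this stick
        remaining *= 1.0 - k * v            # mass left for later sticks
    return weights

random.seed(0)
H = 50
locations = [random.uniform(0.0, 10.0) for _ in range(H)]   # random locations
V = [random.betavariate(1.0, 1.0) for _ in range(H)]        # beta stick weights
w_near = ksbp_weights(3.0, locations, V)
w_far = ksbp_weights(7.0, locations, V)   # different predictor, different weights
```

Each weight equals the mass removed from the remaining stick, so the weights are nonnegative and sum to at most one; dependence on x enters only through the kernel, giving the predictor-dependent measures described.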

14.
A method for the simultaneous determination of molar weights (M) and lateral diffusion constants (D) of particles in three- and two-dimensional systems is described. Spontaneous concentration fluctuations in space and time are analysed by monitoring fluctuations in the fluorescence from fluorescein-labelled molecules (1 dye/molecule is sufficient) excited by a rotating laser spot. For particles in solution, M values can be determined over the range 3 × 10^2 to 3 × 10^11 daltons, and D values from approximately 10^-7 to 10^-10 cm^2/s. The time for a determination is approximately 1 min. Aggregation can be followed through changes in either M or D. The method is used to study the calcium dependence of vesicle aggregation or fusion, and the time course of aggregate formation of porin (an Escherichia coli outer membrane protein) in lipid monolayers. The parameters essential for the development of the method are described. Equations to estimate the signal-to-noise ratio and to find the optimal free parameters for a specific application are derived. The theoretical predictions for the correlation function of the signal and for the signal-to-noise ratio are compared with observed values.

15.
Regression on a B-spline basis has been advocated as an alternative to orthogonal polynomials in random regression analyses. The basic theory of splines in mixed model analyses is reviewed, and estimates from analyses of weights of Australian Angus cattle from birth to 820 days of age are presented. The data comprised 84 533 records on 20 731 animals in 43 herds, with a high proportion of animals having 4 or more weights recorded. Changes in weight with age were modelled through B-splines of age at recording. Thirteen analyses, considering different combinations of linear, quadratic and cubic B-splines and up to six knots, were carried out. Results showed good agreement for all ages with many records but fluctuated where data were sparse. On the whole, analyses using B-splines appeared more robust against "end-of-range" problems and yielded more consistent and accurate estimates of the first eigenfunctions than previous, polynomial analyses. A model fitting quadratic B-splines, with knots at 0, 200, 400, 600 and 821 days and a total of 91 covariance components, appeared to be a good compromise between the level of detail of the model, the number of parameters to be estimated, the plausibility of the results, and the fit, measured as the residual mean square error.
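The quadratic B-spline covariables of the preferred model can be sketched with the standard Cox-de Boor recursion on the knot sequence {0, 200, 400, 600, 821} (end knots repeated for a clamped basis). This is the textbook construction, not the authors' code.

```python
# Cox-de Boor recursion for B-spline basis function i of the given degree.
def bspline_basis(i, degree, t, knots):
    if degree == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = 0.0
    if knots[i + degree] != knots[i]:
        left = ((t - knots[i]) / (knots[i + degree] - knots[i])
                * bspline_basis(i, degree - 1, t, knots))
    right = 0.0
    if knots[i + degree + 1] != knots[i + 1]:
        right = ((knots[i + degree + 1] - t)
                 / (knots[i + degree + 1] - knots[i + 1])
                 * bspline_basis(i + 1, degree - 1, t, knots))
    return left + right

deg = 2
interior = [0.0, 200.0, 400.0, 600.0, 821.0]
knots = [interior[0]] * deg + interior + [interior[-1]] * deg  # clamped ends
n_basis = len(knots) - deg - 1   # 6 basis functions (covariables) per trait
basis_at_300 = [bspline_basis(i, deg, 300.0, knots) for i in range(n_basis)]
```

Inside the age range the basis functions sum to one (partition of unity), and each is nonzero over at most three adjacent knot intervals, which is what makes the fit local and robust against "end-of-range" problems.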

16.
Relative weighting of the characters used in taxonomic decisions is detected by comparison with taxonomic models in which the characters are given equal weights. Classifications are analysed for the implied distance inequalities between triplets of taxa, and the minimal weights applied to the distances between taxa that are necessary to satisfy these constraints are estimated. Weights acting as multipliers on interacting characters are compared with these weight estimates: geometric parameters that depend upon the relative locations of the taxa in taxonomic space.

17.
In population-based case-control studies, it is of great public-health importance to estimate the disease incidence rates associated with different levels of risk factors. This estimation is complicated by the fact that in such studies the selection probabilities for the cases and controls are unequal. A further complication arises when the subjects selected into the study do not participate (i.e. become nonrespondents) and nonrespondents differ systematically from respondents. In this paper, we show how to account for unequal selection probabilities as well as differential nonresponse in the incidence estimation. We use two logistic models: one relating the disease incidence rate to the risk factors, and one modelling the predictors that affect the nonresponse probability. After estimating the regression parameters in the nonresponse model, we estimate the regression parameters in the disease incidence model by a weighted estimating function that weights a respondent's contribution to the likelihood score function by the inverse of the product of his/her selection probability and his/her model-predicted response probability. The resulting estimators of the regression parameters and the corresponding estimators of the incidence rates are shown to be consistent and asymptotically normal with easily estimated variances. Simulation results demonstrate that the asymptotic approximations are adequate for practical use and that failure to adjust for nonresponse could result in severe biases. An illustration with data from a cardiovascular study that motivated this work is presented.

18.
MacNeil D, Eliasmith C. PLoS ONE 2011, 6(9): e22885.
A central criticism of standard theoretical approaches to constructing stable recurrent model networks is that the synaptic connection weights need to be finely tuned. This criticism is severe because the rules proposed for learning these weights have been shown to have various limitations to their biological plausibility, so it is unlikely that such rules are used to continuously fine-tune the network in vivo. We describe a learning rule that is able to tune synaptic weights in a biologically plausible manner. We demonstrate and test this rule in the context of the oculomotor integrator, showing that only known neural signals are needed to tune the weights. We demonstrate that the rule appropriately accounts for a wide variety of experimental results and is robust under several kinds of perturbation. Furthermore, we show that the rule achieves stability as good as or better than that provided by the linearly optimal weights often used in recurrent models of the integrator. Finally, we discuss how this rule can be generalized to tune a wide variety of recurrent attractor networks, such as those found in head-direction and path-integration systems, suggesting that it may be used to tune a wide variety of stable neural systems.

19.
A scaling serves to determine a certain characteristic as a function of a set of variables. It is usually represented in power-law form, in which a constant factor and the exponents are the scaling parameters. If there is no theoretical basis for choosing the values of the scaling parameters, they are determined empirically by fitting them to a database using ordinary least squares regression. For various purposes it has been proposed to replace individual primary variables with a power-law combination of these variables when determining the scaling parameters. It is shown that the standard procedure for constructing an empirical scaling in the new combined variables gives a scaling equivalent to the primary one. Without additional modifications to the procedure for determining the scaling parameters, this way of combining the variables is therefore fruitless.

20.
An adaptive estimator model of human spatial orientation is presented. The adaptive model dynamically weights sensory error signals: more specifically, it weights the difference between expected and actual sensory signals as a function of environmental conditions. The model does not require any changes in model parameters. It differs from existing models of spatial orientation in that: (1) environmental conditions are not specified but estimated; (2) the sensor noise characteristics are the only parameters supplied by the model designer; (3) history-dependent effects and mental resources can be modelled; and (4) vestibular thresholds are not included in the model; instead, vestibular-related threshold effects are predicted by it. The model was applied to human stance control and evaluated against the results of a visually induced sway experiment. From such experiments it is known that the amplitude of visually induced sway reaches a saturation level as the stimulus level increases; this saturation level is higher when the support base is sway-referenced, and for subjects with vestibular loss the saturation effects do not occur. Unknown sensory noise characteristics were found by matching model predictions to these experimental results. Using only five model parameters, far more than five data points were successfully predicted. Model predictions showed that both saturation levels are vestibular-related, since removal of the vestibular organs in the model removed the saturation effects, as was also shown in the experiments. The nature of these vestibular-related threshold effects appears not to be physical, since no threshold is included in the model; the results suggest that vestibular-related thresholds arise from the processing of noisy sensory and motor output signals.

Model analysis suggests that, especially for slow and small movements, postural orientation relative to the environment cannot be estimated optimally, which causes sensory illusions. The model also confirms the experimental finding that postural orientation is history-dependent and can be shaped by instruction or mental knowledge. In addition, the model predicts that: (1) vestibular-loss patients cannot handle sensory-conflict situations and will fall; (2) during sinusoidal support-base translations, vestibular function is needed to prevent falling; (3) loss of somatosensory information from the feet results in larger postural sway during sinusoidal support-base translations; and (4) loss of vestibular function results in falling during large support-base rotations with the eyes closed. These predictions are in agreement with experimental results. Received: 12 November 1999 / Accepted in revised form: 30 June 2000

