Similar Literature
20 similar documents found.
1.
Multivariate analysis is a branch of statistics that successfully exploits the powerful tools of linear algebra to obtain a fairly comprehensive theory of estimation. The purpose of this paper is to explore to what extent a linear theory of estimation can be developed in the context of coalescent models used in the analysis of DNA polymorphism. We consider a large class of coalescent models, of which the neutral infinite sites model is one example. In the process, we discover several limitations of linear estimators that are quite distinct from those in the classical theory. In particular, we prove that there does not exist a uniformly BLUE (best linear unbiased estimator) for the scaled mutation parameter, under the assumptions of the neutral model of evolution. In fact, we show that no linear estimator performs uniformly better than the Watterson (1975) method based on the total number of segregating sites. For certain coalescent models, the segregating-sites estimator is actually optimal. The general conclusion is the following. If genealogical information is useful for estimating the rate of evolution, then there is no optimal linear method. If there is an optimal linear method, then no information other than the total number of segregating sites is needed. Received: 29 July 1998 / Revised version: 9 October 1998
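As a quick illustration of the segregating-sites approach referenced above, here is a minimal sketch of the Watterson (1975) estimator. It assumes only the sample size n and the observed number of segregating sites S; under the neutral infinite sites model, E[S] equals theta times the harmonic number a_n.

```python
# Minimal sketch of the Watterson (1975) estimator discussed above.
# Assumes a sample of n sequences with S observed segregating sites;
# under the neutral infinite sites model, E[S] = theta * a_n.

def watterson_theta(n: int, S: int) -> float:
    """Estimate the scaled mutation parameter theta from S segregating sites."""
    a_n = sum(1.0 / i for i in range(1, n))  # harmonic number a_n = sum_{i=1}^{n-1} 1/i
    return S / a_n

# Example: 10 sequences with 16 segregating sites.
print(watterson_theta(10, 16))  # ~5.66
```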

2.
Fleck LM 《New biotechnology》2012,29(6):757-768
In the age of genomic medicine we can often now do the genetic testing that will permit more accurate personal tailoring of medications to obtain the best therapeutic results. This is certainly a medically and morally desirable result. However, in other areas of medicine pharmacogenomics is generating consequences that are much less ethically benign and much less amenable to a satisfactory ethical resolution. More specifically, we will often find ourselves left with 'wicked problems,' 'ragged edges,' and well-disguised ethical precipices. This will be especially true with regard to these extraordinarily expensive cancer drugs that generally yield only extra weeks or extra months of life. Our key ethical question is this: Does every individual faced with cancer have a just claim to receive treatment with one or more of these targeted cancer therapies at social expense? If any of these drugs literally made the difference between an unlimited life expectancy (a cure) and a premature death, that would be a powerful moral consideration in favor of saying that such individuals had a strong just claim to that drug. However, what we are beginning to discover is that different individuals with different genotypes respond more or less positively to these targeted drugs, with some in a cohort gaining a couple of extra years of life while others gain only extra weeks or months. Should only the strongest responders have a just claim to these drugs at social expense when there is no bright line that separates strong responders from modest responders from marginal responders? This is the key ethical issue we address. We argue that no ethical theory yields a satisfactory answer to this question, and that we need instead fair and respectful processes of rational democratic deliberation.

3.
The potential and limitations of life cycle assessment and environmental systems analysis tools in general are evaluated. More specifically, this is done by exploring the limits of what can be shown by LCA and other tools, from several perspectives. First, experiences from current LCAs and methodology discussions are used, including a discussion of the types of impacts typically included, the quality of inventory data, methodological choices relating to time aspects, allocation, characterisation and weighting methods, and uncertainties in describing the real world. Second, conclusions from the theory of science are applied. It is concluded that, in general, it cannot be shown that one product is environmentally preferable to another, even if this happens to be the case. This conclusion has important policy implications. If policy decisions require that one product be shown to be environmentally preferable before any action can be taken, then it is likely that no action will ever take place. If we want changes to be made, decisions must be taken on a less rigid basis. It is expected that in this decision-making process, LCA can be a useful input. Since it is the only tool that can be used for product comparisons over the whole life cycle, it cannot be replaced by any other tool and should be used. Increased harmonisation of LCA methodology may increase the acceptability of the chosen methods and the usefulness of the tool.

4.
Matrifocality is a feature of Caribbean communities in which mothers and adult daughters often form the household core. I argue that daughter-biased parental care underlies matrifocality. Parental investment (PI) theory predicts sex-biased care, but factors promoting daughter preference are not always clearly specified. If sons are more likely than daughters to experience unpredictable hazards, then parents may bias their efforts toward daughters. In this study, I examine gender differences in rural Dominica and test PI predictions. Men were more likely to be poor and develop alcoholism and less likely to migrate or attend high school than women were. Educational outcomes showed a Trivers-Willard effect: Boys from unfavorable family environments were less likely to receive secondary education than were other boys, but there was no association for girls. PI variables generally accounted for less variance in men's outcomes than women's, suggesting that unpredictable hazards for sons may promote daughter preference and matrifocality.

5.
This article examines how perceptual illusions become reliable markers of truth in the context of experimental psychology. As laboratory tools, illusions travel across time, place, and media: from the Torres Strait expedition at the close of the nineteenth century to a contemporary psychology lab that utilizes virtual reality. In this historical and ethnographic study, something that might otherwise be considered misleading (illusion) becomes an epistemological guide. Illusion takes on a reality and in so doing raises questions about what ought and ought not to be considered real. This research thus joins other anthropologies that expand what counts as ‘the real’. These anthropologies of the unreal are proliferating of late and breathe hope back into human and nonhuman futures by reconsidering what constitutes being in the world. The reality of illusion highlights a phenomenological position in which reality is the world as perceptually experienced. Further, as this investigation unfolds in the laboratory, it becomes clear that the unreal is not set apart from but incorporated into knowledge systems.

6.
This article discusses whether “sustainability” has a physical meaning in applied thermodynamics. If it has, then it should be possible to derive general principles and rules for devising “sustainable systems.” If not, then other sides of the issue retain their relevance, but thermodynamic laws are not appropriate by themselves to decide whether a system or a scenario is sustainable. Here, we make use of a single axiom: that final consumption (material or immaterial) can be quantified solely in terms of equivalent primary exergy flows. On this basis, we develop a system theory that shows that if “simple” systems are based solely on the exploitation of fossil resources, they cannot be thermodynamically “sustainable.” But as renewable resources are brought into the picture and the system complexity grows, there are thresholds below or beyond which the system exhibits an ability to maintain itself (perhaps through fluctuations), in a self‐preserving (i.e., a sustainable) state. It appears that both complexity and the degree of nonlinearity of the transfer functions of the systems play a major role and—even for some of the simplest cases—lead to nontrivial solutions in phase space. Therefore, even if the examples presented in the article can be considered rather crude approximations to real, complex systems at best, the results show a trend that is worth further consideration.

7.
S R Kristensen, M Hørder 《Enzyme》1988,39(4):205-212
The association between ATP depletion and enzyme release from cells has been described in two different ways: as a more or less linear dependence, or with a threshold value below which the enzyme release will start. We have investigated the association between ATP depletion caused by various metabolic inhibitors and enzyme release in quiescent fibroblasts. We found that the enzyme release never started before the ATP had decreased to a critically low level. Addition of glucose to cells while ATP was still above this critical level led to regeneration of ATP, and enzyme release did not occur. If ATP was lowered to 35-40% and kept there for 24 h, the enzyme release was minimal. These results support the threshold theory for release of enzymes from cells.

8.
We consider a two-species competition model in which the species have the same population dynamics but different dispersal strategies. Both species disperse by a combination of random diffusion and advection along environmental gradients, with the same random dispersal rates but different advection coefficients. Regarding these advection coefficients as movement strategies of the species, we investigate their course of evolution. By applying invasion analysis we find that if the spatial environmental variation is less than a critical value, there is a unique evolutionarily singular strategy, which is also evolutionarily stable. If the spatial environmental variation exceeds the critical value, there can be three or more evolutionarily singular strategies, one of which is not evolutionarily stable. Our results suggest that the evolution of conditional dispersal of organisms depends upon the spatial heterogeneity of the environment in a subtle way.
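The abstract does not give the underlying model in closed form, so the following is only a generic adaptive-dynamics sketch of how evolutionarily singular strategies are located numerically; the invasion_fitness function is a hypothetical toy stand-in, not the paper's reaction-diffusion-advection model.

```python
# Generic adaptive-dynamics sketch of invasion analysis. The fitness
# function below is a HYPOTHETICAL toy stand-in used only to illustrate
# how singular strategies are found as roots of the selection gradient.
import numpy as np
from scipy.optimize import brentq

def invasion_fitness(a_mut, a_res, env_variation=1.0):
    # Toy fitness: mutants are penalized for deviating from an optimum that
    # depends on the resident's strategy and the environmental variation.
    optimum = env_variation * a_res / (1.0 + a_res**2)
    return -(a_mut - optimum)**2 + 0.1 * (a_mut - a_res)

def selection_gradient(a_res, h=1e-6):
    # d(invasion fitness)/d(a_mut) evaluated at a_mut = a_res
    return (invasion_fitness(a_res + h, a_res)
            - invasion_fitness(a_res - h, a_res)) / (2 * h)

# A singular strategy is a root of the selection gradient.
a_star = brentq(selection_gradient, 1e-3, 10.0)
# Evolutionary stability: curvature in the mutant direction must be negative.
h = 1e-4
curvature = (invasion_fitness(a_star + h, a_star)
             - 2 * invasion_fitness(a_star, a_star)
             + invasion_fitness(a_star - h, a_star)) / h**2
print(a_star, "ESS" if curvature < 0 else "not ESS")
```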

9.
Sparse methods offer a significant advantage in making gene expression data more interpretable and comprehensible. Many sparse methods, such as penalized matrix decomposition (PMD) and sparse principal component analysis (SPCA), have been applied to extract plant core genes. Supervised algorithms, especially the support vector machine-recursive feature elimination (SVM-RFE) method, generally perform well in gene selection. In this paper, we incorporate class information via the total scatter matrix and propose a class-information-based penalized matrix decomposition (CIPMD) method to improve the gene identification performance of PMD-based methods. Firstly, the total scatter matrix is obtained from the different samples of the gene expression data. Secondly, a new data matrix is constructed by decomposing the total scatter matrix. Thirdly, the new data matrix is decomposed by PMD to obtain the sparse eigensamples. Finally, the core genes are identified according to the nonzero entries in the eigensamples. Results on simulated data show that the CIPMD method reaches higher identification accuracy than conventional gene identification methods. Moreover, results on real gene expression data demonstrate that CIPMD identifies more core genes closely related to abiotic stresses than the other methods.
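The four steps above translate into a short pipeline. The sketch below makes two explicit assumptions: the "decomposition" of the total scatter matrix is taken to be an eigendecomposition (the abstract does not specify it), and the PMD step is a standard rank-1 penalized matrix decomposition with soft-thresholding.

```python
# Rough sketch of the CIPMD pipeline described above, under assumptions
# noted in the comments; it is an illustration, not the authors' code.
import numpy as np

def soft_threshold(x, delta):
    return np.sign(x) * np.maximum(np.abs(x) - delta, 0.0)

def rank1_pmd(X, delta=0.1, n_iter=100):
    """Rank-1 penalized matrix decomposition with an L1 penalty on v."""
    u = np.random.default_rng(0).standard_normal(X.shape[0])
    u /= np.linalg.norm(u)
    for _ in range(n_iter):
        v = soft_threshold(X.T @ u, delta)
        nv = np.linalg.norm(v)
        if nv == 0:
            break  # penalty too strong: no nonzero loadings survive
        v /= nv
        u = X @ v
        u /= np.linalg.norm(u)
    return u, v

def cipmd_core_genes(X, delta=0.1):
    # Step 1: total scatter matrix (rows of X = samples, columns = genes).
    Xc = X - X.mean(axis=0)
    St = Xc.T @ Xc
    # Step 2: build a new data matrix from the scatter matrix (assumption:
    # St = Q L Q^T, new matrix = sqrt(L) Q^T).
    L, Q = np.linalg.eigh(St)
    L = np.maximum(L, 0.0)
    new_X = np.sqrt(L)[:, None] * Q.T
    # Step 3: sparse eigensample via rank-1 PMD.
    _, v = rank1_pmd(new_X, delta=delta)
    # Step 4: core genes are the nonzero entries.
    return np.flatnonzero(v)
```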

10.
MOTIVATION: Significance analysis of differential expression in DNA microarray data is an important task. Much of the current research is focused on developing improved tests and software tools. The task is difficult not only owing to the high dimensionality of the data (number of genes), but also because of the often non-negligible presence of missing values. There is thus a great need to reliably impute these missing values prior to the statistical analyses. Many imputation methods have been developed for DNA microarray data, but their impact on statistical analyses has not been well studied. In this work we examine how missing values and their imputation affect significance analysis of differential expression. RESULTS: We develop a new imputation method (LinCmb) that is superior to the widely used methods in terms of normalized root mean squared error. Its estimates are convex combinations of the estimates of existing methods. We find that LinCmb adapts to the structure of the data: if the data are heterogeneous or if there are few missing values, LinCmb puts more weight on local imputation methods; if the data are homogeneous or if there are many missing values, LinCmb puts more weight on global imputation methods. Thus, LinCmb is a useful tool for understanding the merits of different imputation methods. We also demonstrate that missing values affect significance analysis. Two datasets, different amounts of missing values, different imputation methods, the standard t-test, the regularized t-test, and ANOVA are employed in the simulations. We conclude that good imputation alleviates the impact of missing values and should be an integral part of microarray data analysis. The most competitive methods are LinCmb, GMC and BPCA. Popular imputation schemes such as SVD, row mean, and KNN all exhibit high variance and poor performance. The regularized t-test is less affected by missing values than the standard t-test. AVAILABILITY: Matlab code is available on request from the authors.
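A minimal sketch of the convex-combination idea behind LinCmb follows. The fitting procedure shown (non-negative least squares on artificially hidden entries, normalized to sum to one) is an assumption for illustration, and row_mean_impute / col_mean_impute are toy base methods, not the methods compared in the paper.

```python
# Minimal sketch of the convex-combination idea behind LinCmb; the exact
# weight-fitting procedure here is an illustrative assumption.
import numpy as np
from scipy.optimize import nnls

def fit_lincmb_weights(X, base_imputers, mask_frac=0.05, seed=0):
    rng = np.random.default_rng(seed)
    obs = ~np.isnan(X)
    # Hide a random subset of observed entries to create a training target.
    hide = obs & (rng.random(X.shape) < mask_frac)
    X_train = X.copy()
    X_train[hide] = np.nan
    # Each base method imputes the hidden entries.
    preds = np.column_stack([imp(X_train)[hide] for imp in base_imputers])
    w, _ = nnls(preds, X[hide])
    return w / w.sum()  # convex weights

# Toy base imputers: row mean and column mean.
def row_mean_impute(X):
    Y = X.copy()
    rm = np.nanmean(Y, axis=1)
    idx = np.where(np.isnan(Y))
    Y[idx] = rm[idx[0]]
    return Y

def col_mean_impute(X):
    Y = X.copy()
    cm = np.nanmean(Y, axis=0)
    idx = np.where(np.isnan(Y))
    Y[idx] = cm[idx[1]]
    return Y
```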

11.
One of the remarkable features of networks is modularity, which can provide useful insights not only into network organization but also into the functional behavior of network components. Comprehensive efforts have been devoted to investigating cohesive modules in the past decade. However, it is still not clear whether there are important structural characteristics of the nodes that do not belong to any cohesive module. In order to answer this question, we performed a large-scale analysis of 25 complex networks of different types and scales using our recently developed BTS (bintree seeking) algorithm, which is able to detect both cohesive and sparse modules in a network. Our results reveal that sparse modules, composed of cohesively isolated nodes, widely co-exist with cohesive modules. Detailed analysis shows that both types of modules characterize the division of a network into functional units better than cohesive modules alone, because sparse modules can re-organize nodes assigned to so-called cohesive modules that lack obvious modular significance into meaningful groups. Compared with cohesive modules, sparse modules are generally smaller. Sparse modules are also found to be more prevalent in social and biological networks than in other types.

12.
Face recognition is challenging, especially when images from different persons are similar to each other owing to variations in illumination, expression, and occlusion. If we have sufficient training images of each person, spanning the facial variations of that person under the testing conditions, sparse representation based classification (SRC) achieves very promising results. However, in many applications face recognition encounters the small-sample-size problem arising from the small number of available training images per person. In this paper, we present a novel face recognition framework utilizing low-rank and sparse error matrix decomposition together with sparse coding techniques (LRSE+SC). Firstly, the low-rank matrix recovery technique is applied to decompose the face images of each class into a low-rank matrix and a sparse error matrix. The low-rank matrix of each individual is a class-specific dictionary that captures the discriminative features of that individual. The sparse error matrix represents intra-class variations, such as illumination and expression changes. Secondly, we combine the low-rank part (representative basis) of each person into a supervised dictionary and integrate the sparse error matrices of all individuals into a within-individual variation dictionary, which can represent the possible variations between the testing and training images. These two dictionaries are then used to code the query image. The within-individual variation dictionary is shared by all subjects and serves only to explain the lighting conditions, expressions, and occlusions of the query image, rather than to discriminate. Finally, a reconstruction-based scheme is adopted for face recognition. Since the within-individual dictionary is introduced, LRSE+SC can handle corrupted training data and the situation in which not all subjects have enough samples for training. Experimental results show that our method achieves state-of-the-art results on the AR, FERET, FRGC and LFW databases.
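A crude sketch of the pipeline follows. Two substitutions are made for brevity: the low-rank recovery step is approximated by a truncated SVD with a thresholded residual (a stand-in for a proper robust-PCA solver), and the sparse coding step uses an off-the-shelf lasso; names such as lowrank_sparse_split are illustrative.

```python
# Crude sketch of an LRSE+SC-style pipeline; the low-rank recovery and
# sparse coding steps are simplified stand-ins, not the authors' solvers.
import numpy as np
from sklearn.linear_model import Lasso

def lowrank_sparse_split(A, rank=3, thresh=0.1):
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    L = U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank]  # low-rank part
    E = A - L
    E[np.abs(E) < thresh] = 0.0                      # sparse error part
    return L, E

def build_dictionaries(class_matrices, rank=3):
    # class_matrices: list of (n_pixels, n_images) arrays, one per person
    lows, errs = zip(*(lowrank_sparse_split(A, rank) for A in class_matrices))
    D_class = np.hstack(lows)  # supervised, class-specific dictionary
    D_var = np.hstack(errs)    # shared within-individual variation dictionary
    return D_class, D_var

def classify(query, D_class, D_var, class_sizes, alpha=0.01):
    D = np.hstack([D_class, D_var])
    coef = Lasso(alpha=alpha, max_iter=5000).fit(D, query).coef_
    # Reconstruction-based rule: score each class with its own atoms
    # plus the shared variation part.
    shared = D_var @ coef[D_class.shape[1]:]
    residuals, start = [], 0
    for size in class_sizes:
        part = D_class[:, start:start + size] @ coef[start:start + size]
        residuals.append(np.linalg.norm(query - part - shared))
        start += size
    return int(np.argmin(residuals))
```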

13.
For clonal lineages of finite size that differ in their deleterious mutational effects, the probability of fixation is investigated by mathematical theory and Monte Carlo simulations. If these fitness effects are sufficiently small in one or both lineages, then the lineage with the less deleterious effects will become fixed with high probability. If, however, the deleterious effects in both lineages are larger than a threshold s_c, then the probability of fixation is independent of the fitness effects and depends only on the initial frequencies of the lineages. This threshold decreases with decreasing genomic mutation rate U and increases with population size N (for N = 10^5, s_c ≈ 0.1 if U = 1, and s_c ≈ 0.015 if U = 0.1). Above the threshold, the competition is not driven by the ratio of the mean fitnesses of the lineages, but by the relative sizes of their zero-mutation classes, which are independent of the fitness effects of the mutations. After the loss of the zero-mutation class of one lineage, the other lineage will spread to fixation with high probability and within a short time span. If the mutation rates of the lineages differ substantially, the lineage with the lower mutation rate is fixed with very high probability unless the lineage with the larger mutation rate has very slightly deleterious mutational effects. If the mutation rates differ by no more than a few percent, then the lineage with the higher mutation rate and the more deleterious effects can become fixed with appreciable probability for a certain range of parameters. The independence of the fixation probability from the fitness effects in a single population leads to dramatic effects in metapopulations: lineages with more deleterious effects have a much higher fixation probability. The critical value s_c, above which this phenomenon occurs, decreases as the migration rate between the subpopulations decreases.
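A bare-bones Wright-Fisher-style version of such a Monte Carlo simulation might look like the following; all parameter values are illustrative assumptions, not those of the paper.

```python
# Crude Wright-Fisher-style sketch of the Monte Carlo setup described above:
# two clonal lineages with the same genomic mutation rate U but different
# deleterious effects s1, s2; fixation probability of lineage 1 is estimated
# by repeated simulation. Parameter choices are illustrative assumptions.
import numpy as np

def simulate_fixation(N=1000, U=0.1, s1=0.02, s2=0.05, p0=0.5, reps=100, seed=0):
    rng = np.random.default_rng(seed)
    fixed_1 = 0
    for _ in range(reps):
        lineage = (rng.random(N) >= p0).astype(int)  # 0 -> effect s1, 1 -> s2
        k = np.zeros(N, dtype=int)                   # deleterious mutation counts
        while True:
            s = np.where(lineage == 0, s1, s2)
            w = (1.0 - s) ** k                       # multiplicative fitness
            parents = rng.choice(N, size=N, p=w / w.sum())
            lineage, k = lineage[parents], k[parents]
            k = k + rng.poisson(U, size=N)           # new deleterious mutations
            if (lineage == 0).all():
                fixed_1 += 1
                break
            if (lineage == 1).all():
                break
    return fixed_1 / reps

print(simulate_fixation())
```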

14.
K Berk 《Biometrics》1987,43(2):385-398
Repeated-measures experiments involve two or more intended measurements per subject. If the within-subjects design is the same for each subject and no data are missing, then the analysis is relatively simple and there are readily available programs that do the analysis automatically. However, if the data are incomplete, and do not have the same arrangement for each subject, then the analysis becomes much more difficult. Beginning with procedures that are not optimal but are comparatively simple, we discuss unbalanced linear model analysis and then normal maximum likelihood (ML) procedures. Included are ML and REML (restricted maximum likelihood) estimators for the mixed model and also estimators for a model that allows arbitrary within-subject covariance matrices. The objective is to give procedures that can be implemented with available software.
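For the ML and REML mixed-model fits mentioned above, a minimal sketch using statsmodels' MixedLM (which handles unbalanced, incomplete per-subject data) could look like this; the toy data frame is an assumption.

```python
# Minimal sketch of ML/REML mixed-model fits for unbalanced repeated
# measures, using statsmodels. The toy long-format data (columns "y",
# "time", "subject") are an illustrative assumption.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "subject": [1, 1, 1, 2, 2, 3, 3, 3, 3],  # unequal measurements per subject
    "time":    [0, 1, 2, 0, 2, 0, 1, 2, 3],
    "y":       [5.1, 5.9, 6.8, 4.2, 6.1, 5.5, 6.0, 6.9, 7.4],
})

model = smf.mixedlm("y ~ time", df, groups=df["subject"])
fit_reml = model.fit(reml=True)   # REML estimation
fit_ml = model.fit(reml=False)    # maximum likelihood estimation
print(fit_reml.summary())
```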

15.
Addition of a suspension of a surface-membrane-enriched fraction prepared from confluent 3T3 cells to sparse 3T3 cells in culture results in a concentration-dependent and saturable decrease in the rate of DNA synthesis. The inhibition of cell growth by membranes resembles the inhibition of cell growth observed at confluent cell densities by a number of criteria: (1) in both cases the cells are arrested in the G1 portion of the cell cycle; (2) the inhibition by membranes or by high local cell density can to a large extent be compensated for by raising the serum concentration or by addition of fibroblast growth factor plus dexamethasone. Membranes prepared from sparse cultures inhibit less well than membranes from confluent cultures, in a manner which suggests that binding of membranes to cells is not by itself sufficient to cause inhibition of cell growth. The inhibitory activity has a subcellular distribution similar to phosphodiesterase (a plasma membrane marker) and appears to reside in one or more intrinsic membrane components. Maximally, membranes can arrest about 40% of the cell population in each cell cycle. Plasma membranes obtained from sparse 3T3 cells are less inhibitory than membranes obtained from confluent cells. This suggests either that the inhibitory component(s) in the plasma membrane responsible for growth inhibition may be in part induced by high cell density, or that this component(s) may be lost from these membranes during purification.

16.
Frank Livingstone proclaims himself to be the last living proponent of the single species hypothesis. In sharp contrast, a species-rich, bushy phylogeny is favored by most human paleontologists. Is Livingstone's proclamation merely contrarian posturing, or does closer inspection warrant reconsideration of just how speciose the hominin lineage is? The high-speciation perspective draws on evidence of speciosity in the Cercopithecoidea and punctuated equilibria theory for support. If blue monkeys and redtail monkeys are indistinguishable skeletally, this reasoning goes, or if red colobus and black-and-white colobus are likewise indistinguishable, should we not expect that there are more species of hominin than is apparent from skeletal evidence alone? A contrarian perspective notes that not all monkey taxa are speciose. Importantly, two broadly distributed, partly terrestrial monkeys have not speciated at all: vervets and baboons. Nor are monkeys the first choice as a hominin speciation model. If expectations of species numbers are based on the Hominoidea, a taxon more closely related to hominins, more similar in body size, and found in more hominin-like habitats than monkeys, a single-species perspective is more appealing. No great ape genus has even two sympatric species. Moreover, despite a separation of 1.6 Ma, West African chimpanzees have not speciated from either Pt. troglodytes or Pt. schweinfurthii. It is notable that no two contemporaneous species of hominin were separated by significantly more than this interval. A biological--as opposed to an ecological or geographical--species definition would place all hominins in a single, phenotypically diverse species. Since divergence from the chimpanzee, "species" distinctness in hominins may have been maintained by temporary allopatry and centripetal niche separation. The hominin lineage may have evolved as a single, phenotypically diverse, reticulately evolving species.

17.
Raphael K. Didham 《Oikos》2006,113(2):357-362
T. Fukami and W. G. Lee argue that the logical expectation from ecological theory is that competitively structured assemblages will be more likely to exhibit alternative stable states than abiotically structured assemblages. We suggest that there are several important misinterpretations in their arguments, and that their hypothesis both has a weak basis in ecological theory and is unsupported by empirical evidence, which shows that alternative stable states occur more frequently in natural systems subject to moderate to harsh abiotic extremes. While this debate is founded in ecological theory, it has important applied implications for restoration management. Sound theoretical predictions about when to expect alternative stable states can only aid more effective restoration if theoretical expectations can be shown to translate into predictable empirical outcomes. If strongly abiotically or disturbance-structured systems are more likely to exhibit catastrophic phase shifts in community structure that are resilient to management efforts, then restoration ecologists will need to treat these systems differently in terms of the types of management inputs that are required.

18.
MOTIVATION: Many practical pattern recognition problems require non-negativity constraints. For example, pixels in digital images and chemical concentrations in bioinformatics are non-negative. Sparse non-negative matrix factorizations (NMFs) are useful when the degree of sparseness in the non-negative basis matrix or the non-negative coefficient matrix in an NMF needs to be controlled in approximating high-dimensional data in a lower dimensional space. RESULTS: In this article, we introduce a novel formulation of sparse NMF and show how the new formulation leads to a convergent sparse NMF algorithm via alternating non-negativity-constrained least squares. We apply our sparse NMF algorithm to cancer-class discovery and gene expression data analysis and offer biological analysis of the results obtained. Our experimental results illustrate that the proposed sparse NMF algorithm often achieves better clustering performance with shorter computing time compared to other existing NMF algorithms. AVAILABILITY: The software is available as supplementary material.
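A sketch of the alternating non-negativity-constrained least squares iteration is given below; it imposes sparsity on the coefficient matrix H through the standard augmented-matrix trick (a squared-L1-type penalty), which is in the spirit of, but not necessarily identical to, the authors' formulation. Penalty values are illustrative assumptions.

```python
# Sketch of sparse NMF via alternating non-negativity-constrained least
# squares: sparsity on the columns of H is imposed by appending a row of
# sqrt(beta) ones to W and a zero to each target column.
import numpy as np
from scipy.optimize import nnls

def sparse_nmf(A, k, beta=0.1, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    m, n = A.shape
    W = rng.random((m, k))
    H = rng.random((k, n))
    for _ in range(n_iter):
        # Solve for H column-wise: min ||W h - a||^2 + beta (1^T h)^2, h >= 0
        W_aug = np.vstack([W, np.sqrt(beta) * np.ones((1, k))])
        for j in range(n):
            a_aug = np.append(A[:, j], 0.0)
            H[:, j], _ = nnls(W_aug, a_aug)
        # Solve for W row-wise (plain NNLS on the transposed problem).
        for i in range(m):
            W[i, :], _ = nnls(H.T, A[i, :])
        # Rescale so the penalty acts on H consistently across iterations.
        norms = np.linalg.norm(W, axis=0)
        norms[norms == 0] = 1.0
        W /= norms
        H *= norms[:, None]
    return W, H
```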

19.
Multiple-component linear least-squares methods have been proposed for the detection of periodic components in nonsinusoidal longitudinal time series. However, a proper test for comparing parameters obtained by this method from two or more time series is not yet available. Accordingly, we propose two methods, one parametric and one nonparametric, to compare parameters from rhythmometric models with multiple components. The parametric method is based on techniques commonly and generally employed in linear regression analysis. The comparison of parameters among two or more time series is accomplished by the use of so-called dummy variables. The nonparametric method is based on bootstrap techniques. This approach tests whether the difference in any given parameter, obtained by fitting a model with the same periods to two different longitudinal time series, differs from zero. The method calculates a confidence interval for the difference in the tested parameter. If this interval does not contain zero, it can be concluded that the parameters obtained from the two time series are different with high probability. An estimate of the p-value for the corresponding test can also be calculated. By the use of similar bootstrap techniques, confidence intervals can also be obtained for any parameter derived from the multiple-component fit of several periods to nonsinusoidal longitudinal time series, including the orthophase (peak time), bathyphase (trough time), and global amplitude (difference between the maximum and the minimum) of the fitted model waveform. These methods represent a valuable tool for the comparison of rhythm parameters obtained by multiple-component analysis, and they render this approach generally applicable for waveform representation and detection of periodicities in nonsinusoidal, sparse, and noisy longitudinal time series sampled with either equidistant or unequidistant observations.
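A compact sketch of the nonparametric comparison is shown below, using the global amplitude as the compared parameter; the case-resampling scheme and the 95% percentile interval are illustrative assumptions.

```python
# Sketch of a multiple-component least-squares fit and a bootstrap CI for
# the difference in global amplitude between two time series. Resampling
# scheme and interval level are illustrative assumptions.
import numpy as np

def design(t, periods):
    cols = [np.ones_like(t)]
    for tau in periods:
        cols += [np.cos(2 * np.pi * t / tau), np.sin(2 * np.pi * t / tau)]
    return np.column_stack(cols)

def global_amplitude(t, y, periods):
    beta, *_ = np.linalg.lstsq(design(t, periods), y, rcond=None)
    t_fine = np.linspace(t.min(), t.min() + max(periods), 500)
    wave = design(t_fine, periods) @ beta
    return wave.max() - wave.min()  # max minus min of the fitted waveform

def bootstrap_amp_difference(t1, y1, t2, y2, periods, B=2000, seed=0):
    rng = np.random.default_rng(seed)
    diffs = np.empty(B)
    for b in range(B):
        i1 = rng.integers(0, len(t1), len(t1))
        i2 = rng.integers(0, len(t2), len(t2))
        diffs[b] = (global_amplitude(t1[i1], y1[i1], periods)
                    - global_amplitude(t2[i2], y2[i2], periods))
    lo, hi = np.percentile(diffs, [2.5, 97.5])
    return lo, hi  # interval not containing zero -> amplitudes differ
```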

20.
Summary: The general life history problem concerns the optimal allocation of resources to growth, survival and reproduction. We analysed this problem for a perennial model organism that decides once each year whether to switch from growth to reproduction. As a fitness measure we used the Malthusian parameter r, which we calculated from the Euler-Lotka equation. Trade-offs were incorporated by assuming that fecundity is size-dependent, so that increased fecundity could only be gained by devoting more time to growth and less time to reproduction. To numerically calculate the optimal r for different growth dynamics and mortality regimes, we used a simplified version of the simulated annealing method. The major differences among optimal life histories resulted from different accumulation patterns of intrinsic mortalities resulting from reproductive costs. If these mortalities were accumulated throughout life, i.e. if they were senescent, a bang-bang strategy was optimal, in which there was a single switch from growth to reproduction: after the age at maturity all resources were allocated to reproduction. If reproductive costs did not carry over from year to year, i.e. if they were not senescent, the optimal resource allocation resulted in a graded switch strategy and growth became indeterminate. Our numerical approach brings two major advantages for solving optimization problems in life history theory. First, its implementation is very simple, even for complex models that are analytically intractable. Such intractability emerged in our model when we introduced reproductive costs representing an intrinsic mortality. Second, it is not a backward algorithm. This means that lifespan does not have to be fixed at the beginning of the computation. Instead, lifespan itself is a trait that can evolve. We suggest that heuristic algorithms are good tools for solving complex optimality problems in life history theory, in particular for questions concerning the evolution of lifespan and senescence.
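The two ingredients described (the Euler-Lotka equation as fitness measure and simulated annealing over the switch age) can be sketched as follows; the growth, fecundity, and survivorship schedules are illustrative assumptions, not the authors' model.

```python
# Sketch of the numerical approach above: r is obtained from the discrete
# Euler-Lotka equation, sum_x exp(-r x) l(x) m(x) = 1, by root finding, and
# the switch age is optimized by bare-bones simulated annealing. All
# schedules and parameters are illustrative assumptions.
import numpy as np
from scipy.optimize import brentq

def schedules(switch_age, max_age=30, mu=0.1, fec_per_size=0.5):
    ages = np.arange(1, max_age + 1)
    size = np.minimum(ages, switch_age)                         # grow until the switch
    m = np.where(ages > switch_age, fec_per_size * size, 0.0)   # size-dependent fecundity
    l = np.exp(-mu * ages)                                      # survivorship
    return ages, l, m

def malthusian_r(switch_age):
    ages, l, m = schedules(switch_age)
    euler_lotka = lambda r: np.sum(np.exp(-r * ages) * l * m) - 1.0
    try:
        return brentq(euler_lotka, -1.0, 2.0)
    except ValueError:
        return -np.inf  # no root in the bracket: lineage not viable

def anneal_switch_age(T0=1.0, cooling=0.95, steps=300, seed=0):
    rng = np.random.default_rng(seed)
    x = 5.0
    fx = malthusian_r(x)
    T = T0
    for _ in range(steps):
        x_new = np.clip(x + rng.normal(0, 1), 1.0, 29.0)
        f_new = malthusian_r(x_new)
        # Accept improvements always, worse moves with Boltzmann probability.
        if f_new > fx or rng.random() < np.exp((f_new - fx) / T):
            x, fx = x_new, f_new
        T *= cooling
    return x, fx

print(anneal_switch_age())  # (optimal switch age, optimal r)
```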

