Similar Literature
 20 similar records retrieved.
1.
2.
Conceptual and logistical challenges associated with the design and analysis of ecological restoration experiments are often viewed as being insurmountable, thereby limiting the potential value of restoration experiments as tests of ecological theory. Such research constraints are, however, not unique within the environmental sciences. Numerous natural and anthropogenic disturbances represent unplanned, uncontrollable events that cannot be replicated or studied using traditional experimental approaches and statistical analyses. A broad mix of appropriate research approaches (e.g., long-term studies, large-scale comparative studies, space-for-time substitution, modeling, and focused experimentation) and analytical tools (e.g., observational, spatial, and temporal statistics) are available and required to advance restoration ecology as a scientific discipline. In this article, research design and analytical options are described and assessed in relation to their applicability to restoration ecology. Significant research benefits may be derived from explicitly defining conceptual models and presuppositions, developing multiple working hypotheses, and developing and archiving high-quality data and metadata. Flexibility in research approaches and statistical analyses, high-quality databases, and new sampling approaches that support research at broader spatial and temporal scales are critical for enhancing ecological understanding and supporting further development of restoration ecology as a scientific discipline.

3.
4.
In their ambitious Evolutionary Anthropology paper, Winterhalder and Smith 1 review the history, theory, and methods of human behavioral ecology (HBE). In establishing how HBE differs from traditional approaches within sociocultural anthropology, they and others laud its hypothetico-deductive research method. 1-3 Our aim is to critically examine how human behavioral ecologists conduct their research, specifically how they analyze and interpret data as evidence for scientific hypotheses. Through computer simulations and a review of empirical studies of human sex ratios, we consider some limitations of the status quo and present alternatives that could strengthen the field. In particular, we suggest that because human behavioral ecologists often consider multiple hypotheses, they should use statistical approaches that can quantify the evidence in empirical data for competing hypotheses. Although we focus on HBE, the principles of this paper apply broadly within biological anthropology.
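As a concrete illustration of the multi-model evidence comparison advocated here, the sketch below fits two competing logistic models to simulated sex-ratio data and converts their AIC values into Akaike weights. The data, model forms, and effect sizes are hypothetical and are not taken from the studies reviewed in the paper.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical data: offspring sex (1 = male) and a maternal condition score.
rng = np.random.default_rng(0)
condition = rng.normal(size=200)
sex = rng.binomial(1, 1 / (1 + np.exp(-0.3 * condition)))

# Competing hypotheses: H0 (no effect) vs. H1 (condition affects the sex ratio).
m0 = sm.Logit(sex, np.ones((len(sex), 1))).fit(disp=0)
m1 = sm.Logit(sex, sm.add_constant(condition)).fit(disp=0)

# Akaike weights quantify the relative evidence in the data for each model.
aic = np.array([m0.aic, m1.aic])
delta = aic - aic.min()
weights = np.exp(-delta / 2) / np.exp(-delta / 2).sum()
print(dict(zip(["H0", "H1"], weights.round(3))))
```

The same comparison could equally be done with Bayes factors; the point is only that the evidence for each hypothesis is quantified rather than a single null hypothesis being tested.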

5.
While genome-wide association studies (GWAS) have primarily examined populations of European ancestry, more recent studies often involve additional populations, including admixed populations such as African Americans and Latinos. In admixed populations, linkage disequilibrium (LD) exists both at a fine scale in ancestral populations and at a coarse scale (admixture-LD) due to chromosomal segments of distinct ancestry. Disease association statistics in admixed populations have previously considered SNP association (LD mapping) or admixture association (mapping by admixture-LD), but not both. Here, we introduce a new statistical framework for combining SNP and admixture association in case-control studies, as well as methods for local ancestry-aware imputation. We illustrate the gain in statistical power achieved by these methods by analyzing data of 6,209 unrelated African Americans from the CARe project genotyped on the Affymetrix 6.0 chip, in conjunction with both simulated and real phenotypes, as well as by analyzing the FGFR2 locus using breast cancer GWAS data from 5,761 African-American women. We show that, at typed SNPs, our method yields an 8% increase in statistical power for finding disease risk loci compared to the power achieved by standard methods in case-control studies. At imputed SNPs, we observe an 11% increase in statistical power for mapping disease loci when our local ancestry-aware imputation framework and the new scoring statistic are jointly employed. Finally, we show that our method increases statistical power in regions harboring the causal SNP in the case when the causal SNP is untyped and cannot be imputed. Our methods and our publicly available software are broadly applicable to GWAS in admixed populations.
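A minimal sketch of the idea of testing SNP association and admixture association together, assuming simulated genotypes, local ancestry calls, and case-control status at a single locus. The joint 2-df likelihood-ratio test used here is a simple stand-in for illustration only; it is not the scoring statistic or the local ancestry-aware imputation framework developed in the paper.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

# Hypothetical single-locus data for an admixed case-control sample:
# genotype (0/1/2 risk alleles), local ancestry (0/1/2 African-ancestry
# haplotypes), and disease status. None of this reproduces the CARe or
# FGFR2 data.
rng = np.random.default_rng(1)
n = 2000
ancestry = rng.binomial(2, 0.8, n)
geno = rng.binomial(2, 0.2 + 0.05 * ancestry)
risk = -1.0 + 0.25 * geno + 0.15 * ancestry
status = rng.binomial(1, 1 / (1 + np.exp(-risk)))

# Joint 2-df likelihood-ratio test: SNP and admixture signals together
# against an intercept-only null model.
full = sm.Logit(status, sm.add_constant(np.column_stack([geno, ancestry]))).fit(disp=0)
null = sm.Logit(status, np.ones((n, 1))).fit(disp=0)
lrt = 2 * (full.llf - null.llf)
print("joint chi-square =", round(lrt, 2), " p =", chi2.sf(lrt, df=2))
```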

6.
Infectious disease ecology has recently raised its public profile beyond the scientific community due to the major threats that wildlife infections pose to biological conservation, animal welfare, human health and food security. As we start unravelling the full extent of emerging infectious diseases, there is an urgent need to facilitate multidisciplinary research in this area. Even though research in ecology has always had a strong theoretical component, cultural and technical hurdles often hamper direct collaboration between theoreticians and empiricists. Building upon our collective experience of multidisciplinary research and teaching in this area, we propose practical guidelines to help with effective integration among mathematical modelling, fieldwork and laboratory work. Modelling tools can be used at all steps of a field-based research programme, from the formulation of working hypotheses to field study design and data analysis. We illustrate our model-guided fieldwork framework with two case studies we have been conducting on wildlife infectious diseases: plague transmission in prairie dogs and lyssavirus dynamics in American and African bats. These demonstrate that mechanistic models, if properly integrated in research programmes, can provide a framework for holistic approaches to complex biological systems.
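The mechanistic models referred to here can be as simple as a compartmental transmission model. The sketch below integrates a basic SIR system with illustrative parameter values; it is not parameterized for the prairie dog plague or bat lyssavirus systems discussed in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Minimal SIR sketch of the kind of mechanistic model that can guide
# hypothesis formulation and field study design. Rates are illustrative.
beta, gamma = 0.4, 0.1   # transmission and removal rates (per day), assumed values

def sir(t, y):
    s, i, r = y
    return [-beta * s * i, beta * s * i - gamma * i, gamma * i]

sol = solve_ivp(sir, (0, 120), [0.99, 0.01, 0.0], t_eval=np.linspace(0, 120, 13))
print(np.round(sol.y[1], 3))   # infected fraction at monthly-ish intervals
```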

7.
Identifying rare variants that contribute to complex diseases is challenging because of the low statistical power in current tests comparing cases with controls. Here, we propose a novel and powerful rare variants association test based on the deviation of the observed mutation burden of a gene in cases from a baseline predicted by a weighted recursive truncated negative-binomial regression (RUNNER) on genomic features available from public data. Simulation studies show that RUNNER is substantially more powerful than state-of-the-art rare variant association tests and has reasonable type 1 error rates even for stratified populations or in small samples. Applied to real case-control data, RUNNER recapitulates known genes of Hirschsprung disease and Alzheimer's disease missed by current methods and detects promising new candidate genes for both disorders. In a case-only study, RUNNER successfully detected a known causal gene of amyotrophic lateral sclerosis. The present study provides a powerful and robust method to identify susceptibility genes with rare risk variants for complex diseases.
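To make the burden-deviation idea concrete, the sketch below fits a negative-binomial baseline for per-gene rare-variant counts from two hypothetical genomic features and flags genes whose observed burden exceeds the prediction. The features, the crude standardized deviation used as a score, and all data are illustrative assumptions; they do not reproduce RUNNER's weighted recursive truncated regression or its test statistic.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n_genes = 500
# Hypothetical genomic features: coding length (kb) and background mutability.
length_kb = rng.gamma(2.0, 1.5, n_genes)
mutability = rng.gamma(2.0, 0.5, n_genes)
# Simulated rare-variant burden in cases, with a few inflated "risk" genes.
mu_true = np.exp(0.2 + 0.4 * np.log(length_kb) + 0.5 * mutability)
burden = rng.poisson(mu_true)
burden[:5] += rng.poisson(8, 5)   # spiked-in susceptibility genes

# Baseline model: expected burden as a function of the genomic features.
X = sm.add_constant(np.column_stack([np.log(length_kb), mutability]))
baseline = sm.GLM(burden, X, family=sm.families.NegativeBinomial(alpha=0.1)).fit()

# Score genes by the deviation of observed from predicted burden.
expected = baseline.fittedvalues
z = (burden - expected) / np.sqrt(expected)   # crude standardized deviation
top = np.argsort(-z)[:5]
print(top, z[top].round(2))
```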

8.
This review critically evaluates the animal literature concerning the effects of weight cycling on factors related to development of obesity, diabetes, hypertension, and hyperlipidemia. Although human studies have been used to retrospectively examine the relationship between fluctuations in body weight and a variety of disease markers, direct causal links between weight cycling and negative health effects have been inferred from a series of scientific publications using animals as subjects. We use data from 24 such publications to evaluate evidence for and against a series of hypotheses that have been suggested regarding weight cycling and health. Although there are some intriguing results, there is currently little evidence to support any of these hypotheses. However, methodological limitations were identified in many of these studies, and caution should be used in making definitive decisions about weight cycling. Weight cycling studies could be improved by including more appropriate controls, comparing controls to weight cycling animals at more appropriate time points, and giving more attention to potential effects of diet composition. While more careful research is needed, at this time we conclude that the published animal literature does not justify any warnings about the hazards of weight cycling.

9.
Econometricians Daniel McFadden and James Heckman won the 2000 Nobel Prize in economics for their work on discrete choice models and selection bias. Statisticians and epidemiologists have made similar contributions to medicine with their work on case-control studies, analysis of incomplete data, and causal inference. In spite of repeated nominations of such eminent figures as Bradford Hill and Richard Doll, however, the Nobel Prize in physiology and medicine has never been awarded for work in biostatistics or epidemiology. (The "exception who proves the rule" is Ronald Ross, who, in 1902, won the second medical Nobel for his discovery that the mosquito was the vector for malaria. Ross then went on to develop the mathematics of epidemic theory, which he considered his most important scientific contribution, and applied his insights to malaria control programs.) The low esteem accorded epidemiology and biostatistics in some medical circles, and increasingly among the public, correlates highly with the contradictory results from observational studies that are displayed so prominently in the lay press. In spite of its demonstrated efficacy in saving lives, the "black box" approach of risk factor epidemiology is not well respected. To correct these unfortunate perceptions, statisticians would do well to follow more closely their own teachings: conduct larger, fewer studies designed to test specific hypotheses, follow strict protocols for study design and analysis, better integrate statistical findings with those from the laboratory, and exercise greater caution in promoting apparently positive results.

10.
Gene expression data can provide a very rich source of information for elucidating biological function at the pathway level if the experimental design considers the needs of the statistical analysis methods. The purpose of this paper is to provide a comparative analysis of statistical methods for detecting the differential expression of pathways (DEP). In contrast to many other studies conducted so far, we use three novel simulation types, producing a more realistic correlation structure than previous simulation methods. This also includes the generation of surrogate data from two large-scale microarray experiments on prostate cancer and ALL. As a result of our comprehensive analysis of 41,004 parameter configurations, we find that each method should only be applied if certain conditions of the data from a pathway are met. Further, we provide method-specific estimates of the optimal sample size for microarray experiments aiming to identify DEP in order to avoid an underpowered design. Our study highlights the sensitivity of the studied methods to the parameters of the system.
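For readers unfamiliar with pathway-level testing, the sketch below shows one simple competitive test: gene-level t-statistics are computed, and the pathway's mean absolute statistic is compared with that of random gene sets of the same size. The data are simulated and this particular permutation scheme is only an illustration; it is not one of the specific DEP methods benchmarked in the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_genes, n_ctrl, n_case = 1000, 10, 10
expr_ctrl = rng.normal(0, 1, (n_genes, n_ctrl))
expr_case = rng.normal(0, 1, (n_genes, n_case))
pathway = np.arange(50)            # hypothetical 50-gene pathway
expr_case[pathway] += 0.8          # simulate an up-regulated pathway

# Gene-level Welch t-statistics for case vs. control.
t, _ = stats.ttest_ind(expr_case, expr_ctrl, axis=1, equal_var=False)

# Competitive pathway test: is the pathway's mean |t| larger than that of
# random gene sets of the same size?
obs = np.abs(t[pathway]).mean()
null = np.array([np.abs(t[rng.choice(n_genes, len(pathway), replace=False)]).mean()
                 for _ in range(2000)])
print(obs.round(3), (null >= obs).mean())   # observed score and permutation p-value
```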

11.
The popular defense of intelligent design/creationism (ID) theories, as well as of theories in evolutionary biology, especially from the perspective that both are worthy of scientific consideration, is that empirical evidence has been presented that supports both. Both schools of thought have had a tendency to rely on the same class of evidence, namely, the observations of organisms that are in need of being explained by those theories. The result is conflation of the evidence that prompts one to infer hypotheses applying ID or evolutionary theories with the evidence that would be required to critically test those theories. Evidence is discussed in the contexts of inferring theories/hypotheses, suggesting what would be possible tests, and actual testing; these three classes of inference are abduction, deduction, and induction, respectively. Identifying these different inferential processes in evolutionary biology and ID allows one to show that the evidence to which theories and hypotheses provide understanding cannot be the same evidence supporting those theories and hypotheses. This clarification provides a strong criterion for showing the inability of an ID theory to be of utility in the ongoing process of acquiring causal understanding, which is the hallmark of science.

12.
In vitro experiments need to be well designed and correctly analysed if they are to achieve their full potential to replace the use of animals in research. An "experiment" is a procedure for collecting scientific data in order to answer a hypothesis, or to provide material for generating new hypotheses, and differs from a survey because the scientist has control over the treatments that can be applied. Most experiments can be classified into one of a few formal designs, the most common being completely randomised, and randomised block designs. These are quite common with in vitro experiments, which are often replicated in time. Some experiments involve a single independent (treatment) variable, while other "factorial" designs simultaneously vary two or more independent variables, such as drug treatment and cell line. Factorial designs often provide additional information at little extra cost. Experiments need to be carefully planned to avoid bias, be powerful yet simple, provide for a valid statistical analysis and, in some cases, have a wide range of applicability. Virtually all experiments need some sort of statistical analysis in order to take account of biological variation among the experimental subjects. Parametric methods using the t test or analysis of variance are usually more powerful than non-parametric methods, provided the underlying assumptions of normality of the residuals and equal variances are approximately valid. The statistical analyses of data from a completely randomised design, and from a randomised-block design are demonstrated in Appendices 1 and 2, and methods of determining sample size are discussed in Appendix 3. Appendix 4 gives a checklist for authors submitting papers to ATLA.
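By way of illustration, the sketch below analyzes a small, simulated randomised block experiment (treatments applied to cultures prepared on different days, with days as blocks) using a two-way ANOVA, in the spirit of the analyses demonstrated in the paper's appendices. The factor names, effect sizes, and data are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical randomised block design: 4 drug treatments applied to
# cultures prepared on 3 separate days (days act as blocks).
rng = np.random.default_rng(4)
treatments = ["control", "low", "medium", "high"]
rows = []
for block in range(3):
    block_effect = rng.normal(0, 0.5)
    for i, trt in enumerate(treatments):
        rows.append({"block": f"day{block}", "treatment": trt,
                     "response": 10 + 0.6 * i + block_effect + rng.normal(0, 0.4)})
df = pd.DataFrame(rows)

# ANOVA with treatment as the factor of interest and block as a nuisance
# factor, the standard randomised block analysis.
model = ols("response ~ C(treatment) + C(block)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```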

13.
This paper examines competing theories for cases in which both the data and the hypotheses can be represented as distance matrices. A test due to Dow & Cheverud has been used for such comparisons in anthropology, but when data are spatially, temporally, or phylogenetically autocorrelated, this test may be far too liberal. We examine a classification procedure based on ratios of probabilities obtained from Mantel tests of the competing hypotheses and find that design matrices describing only lag-one connections and those eliminating common connections of competing hypotheses are the most informative. We apply this method to simulated gene-frequency data in a 7×7 chessboard representing a stepping-stone model and discriminate between alternative theories with a 7% misclassification rate. We also apply these techniques to the current controversy concerning the origin of anatomically modern humans by testing design matrices representing regional continuity and single African origins. The outcome for lag-one matrices and those showing only unique lag-one differences indicate that the single African origin of anatomically modern humans fits the distance matrix based on 165 characters of 83 fossil crania better than the competing theory. However, we also tested a design matrix describing single origin out of southwest Asia. This design matrix was clearly most similar to the data in all tested cases. These results make the regional-continuity theory a less likely explanation for the observed cranial differences than the two single-origin theories. Of these, single southwest Asian origins seems the more likely interpretation of the data.
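A minimal sketch of the basic building block of this approach, a permutation Mantel test relating an observed distance matrix to a hypothesis design matrix, applied to toy data. The distance matrix, design matrix, and permutation count are illustrative assumptions; the ratio-of-probabilities classification procedure and the fossil cranial data are not reproduced here.

```python
import numpy as np

def mantel(dist, design, n_perm=5000, seed=0):
    """Permutation Mantel test: correlation between two distance matrices."""
    rng = np.random.default_rng(seed)
    iu = np.triu_indices_from(dist, k=1)
    obs = np.corrcoef(dist[iu], design[iu])[0, 1]
    n, count = dist.shape[0], 0
    for _ in range(n_perm):
        p = rng.permutation(n)
        perm = design[np.ix_(p, p)]          # permute rows and columns together
        if np.corrcoef(dist[iu], perm[iu])[0, 1] >= obs:
            count += 1
    return obs, (count + 1) / (n_perm + 1)

# Hypothetical 6-population distance matrix and a design matrix encoding one
# competing hypothesis (e.g., single origin vs. regional continuity).
rng = np.random.default_rng(5)
coords = rng.normal(size=(6, 2))
dist = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
design = (dist > np.median(dist)).astype(float)   # toy design matrix
print(mantel(dist, design))
```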

14.
Microarray data quality analysis: lessons from the AFGC project
Genome-wide expression profiling with DNA microarrays has provided, and will continue to provide, a great deal of data to the plant scientific community. However, reliability concerns have required the development of data quality tests for common systematic biases. Fortunately, most large-scale systematic biases are detectable, and some are correctable by normalization. Technical replication experiments and statistical surveys indicate that these biases vary widely in severity and appearance. As a result, no single normalization or correction method currently available is able to address all the issues. However, careful sequence selection, array design, experimental design and experimental annotation can substantially improve the quality and biological value of microarray data. In this review, we discuss these issues with reference to examples from the Arabidopsis Functional Genomics Consortium (AFGC) microarray project.
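One common normalization of the kind alluded to here is quantile normalization, sketched below on simulated intensities carrying array-wide biases. This is only an illustrative example and is not the specific correction pipeline used in the AFGC project.

```python
import numpy as np

# Simulated raw intensities for 1000 probes on 3 arrays, each array with a
# different overall intensity bias (the systematic effect to be removed).
rng = np.random.default_rng(6)
raw = rng.lognormal(mean=np.array([5.0, 5.3, 4.8]), sigma=1.0, size=(1000, 3))

# Quantile normalization: force every array onto a common reference distribution.
ranks = raw.argsort(axis=0).argsort(axis=0)        # per-array rank of each probe
mean_sorted = np.sort(raw, axis=0).mean(axis=1)    # reference distribution
normalized = mean_sorted[ranks]                    # map each rank back to the reference

print(raw.mean(axis=0).round(1), normalized.mean(axis=0).round(1))
```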

15.
The primary assumption within the recent personality and political orientations literature is that personality traits cause people to develop political attitudes. In contrast, research relying on traditional psychological and developmental theories suggests that the relationship between most personality dimensions and political orientations is either not significant or weak. Research from behavioral genetics suggests the covariance between personality and political preferences is not causal, but due to a common, latent genetic factor that mutually influences both. The contradictory assumptions and findings from these research streams have yet to be resolved. This is in part due to the reliance on cross-sectional data and the lack of longitudinal genetically informative data. Here, using two independent longitudinal genetically informative samples, we examine the joint development of personality traits and attitude dimensions to explore the underlying causal mechanisms that drive the relationship between these features and provide a first step in resolving the causal question. We find that change in personality over a ten-year period does not predict change in political attitudes, which does not support a causal relationship between personality traits and political attitudes as is frequently assumed. Rather, political attitudes are often more stable than the key personality traits assumed to be predicting them. Finally, the results from our genetic models show that no additional variance is accounted for by the causal pathway from personality traits to political attitudes. Our findings remain consistent with the original construction of the five-factor model of personality and with developmental theories of attitude formation, but challenge recent work in this area.

16.
Extracting network-based functional relationships within genomic datasets is an important challenge in the computational analysis of large-scale data. Although many methods, both public and commercial, have been developed, the problem of identifying networks of interactions that are most relevant to the given input data still remains an open issue. Here, we have leveraged the method of random walks on graphs as a powerful platform for scoring network components based on simultaneous assessment of the experimental data as well as local network connectivity. Using this method, NetWalk, we can calculate the distribution of Edge Flux values associated with each interaction in the network, which reflects the relevance of interactions based on the experimental data. We show that network-based analyses of genomic data are simpler and more accurate using NetWalk than with some of the currently employed methods. We also present NetWalk analysis of microarray gene expression data from MCF7 cells exposed to different doses of doxorubicin, which reveals a switch-like pattern in the p53-regulated network in cell cycle arrest and apoptosis. Our analyses demonstrate the use of NetWalk as a valuable tool in generating high-confidence hypotheses from high-content genomic data.
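The sketch below shows the underlying idea of a data-biased random walk on a small, hypothetical interaction network: a random walk with restart toward high-scoring nodes yields node relevance, and the product of node probability and transition probability is used as an edge-level score. This is a loose illustration of the concept, not the actual Edge Flux computation implemented in NetWalk, and the adjacency matrix, node scores, and restart probability are all assumed values.

```python
import numpy as np

# Hypothetical 5-node interaction network (adjacency matrix) and node scores
# derived from expression data (e.g., fold changes).
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0],
              [1, 1, 0, 0, 1],
              [0, 1, 0, 0, 1],
              [0, 0, 1, 1, 0]], dtype=float)
node_scores = np.array([2.0, 0.5, 1.5, 0.2, 0.1])

# Random walk with restart biased toward high-scoring nodes.
restart = node_scores / node_scores.sum()
P = A / A.sum(axis=1, keepdims=True)          # row-stochastic transition matrix
alpha = 0.15                                  # assumed restart probability
p = np.full(5, 1 / 5)
for _ in range(200):
    p = (1 - alpha) * p @ P + alpha * restart # iterate to the stationary distribution

edge_score = p[:, None] * P                   # crude edge-level relevance score
print(p.round(3))
print(edge_score.round(3))
```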

17.
A search for predictive understanding of plant responses to elevated [CO2]
This paper reviews two decades of effort by the scientific community in a search for predictive understanding of plant responses to elevated [CO2]. To evaluate the progress of research in leaf photosynthesis, plant respiration, root nutrient uptake, and carbon partitioning, we divided scientific activities into four phases: (I) initial assessments derived from our existing knowledge base to provide frameworks for experimental studies; (II) experimental tests of the initial assessments; (III) in cases where assessments were invalidated, synthesis of experimental results to stimulate alternative hypotheses and further experimentation; and (IV) formation of new knowledge. This paper suggests that photosynthetic research may have gone through all four phases, considering that (a) variable responses of photosynthesis to [CO2] are generally explainable, (b) extrapolation of leaf-level studies to the global scale has been examined, and (c) molecular studies are under way. Investigation of plant respiratory responses to [CO2] has reached the third phase: experimental results have been accumulated, and mechanistic approaches are being developed to examine alternative hypotheses in search of new concepts and/or new quantitative frameworks to understand respiratory responses to elevated [CO2]. The study of nutrient uptake kinetics is still in the second phase: experimental evidence has contradicted some of the initial assessments, and more experimental studies need to be designed before generalizations can be made. It is quite unfortunate that we have not made much progress in understanding mechanisms of carbon partitioning during the past two decades. This is due in part to the fact that some of the holistic theories, such as functional balance and optimality, have not evolved into testable hypotheses to guide experimental studies. This paper urges modelers to play an increasing role in plant–CO2 research by disassembling these existing theories into hypotheses and urges experimentalists to design experiments to examine these holistic concepts.

19.
For industrial bioreactor design, operation, control and optimization, the scale-down approach is often advocated to efficiently generate data on a small scale, and effectively apply suggested improvements to the industrial scale. In all cases it is important to ensure that the scale-down conditions are representative of the real large-scale bioprocess. Progress is hampered by limited detailed and local information from large-scale bioprocesses. Complementary to real fermentation studies, physical aspects of model fluids such as air-water in large bioreactors provide useful information with limited effort and cost. Still, in industrial practice, investments of time, capital and resources often prohibit systematic work, although, in the end, savings obtained in this way are trivial compared to the expenses that result from real process disturbances, batch failures, and non-flyers with loss of business opportunity. Here we try to highlight what can be learned from real large-scale bioprocesses in combination with model fluid studies, and to provide suitable computation tools to overcome data restrictions. Focus is on a specific well-documented case for a 30-m³ bioreactor. Areas for further research from an industrial perspective are also indicated.

20.