Similar articles
20 similar articles retrieved.
1.
Summary: Rapid, on-ground assessments of vegetation condition are frequently used as a basis for landholder education, development applications, distributing incentive funds, prescribing restoration treatments and monitoring change. We provide an overview of methods used to rapidly assess vegetation condition for these purposes. We encourage those developing new approaches to work through the steps we have presented here, namely: define management objectives and operational constraints; develop an appropriate conceptual framework for the ecosystems under consideration; select an appropriate suite of indicators; and consider the options available for combining these into an index. We argue that information must be gained from broader scales to make decisions about the condition of individual sites. Remote sensing and spatial modelling might be more appropriate methods than on-ground assessments for obtaining this information. However, we believe that spatial prediction of vegetation condition will only add value to on-ground assessments rather than replace them. This is because the current techniques for spatially predicting vegetation condition cannot capture all of the information in a site assessment, or capture it at the required level of accuracy, and maps cannot replace the exchange of information between assessors and land managers that is an important component of on-ground assessment. There is scope for more sophistication in the way on-ground assessments of vegetation condition are undertaken, but the challenge will be to maintain the simplicity that makes rapid on-ground assessment a popular vehicle for informing natural resource management. We encourage greater peer review and publishing in this field to facilitate greater exchange of ideas and experiences.

2.
The assessment of scientific publications is an integral part of the scientific process. Here we investigate three methods of assessing the merit of a scientific paper: subjective post-publication peer review, the number of citations gained by a paper, and the impact factor of the journal in which the article was published. We investigate these methods using two datasets in which subjective post-publication assessments of scientific publications have been made by experts. We find that there are moderate, but statistically significant, correlations between assessor scores, when two assessors have rated the same paper, and between assessor score and the number of citations a paper accrues. However, we show that assessor score depends strongly on the journal in which the paper is published, and that assessors tend to over-rate papers published in journals with high impact factors. If we control for this bias, we find that the correlation between assessor scores and between assessor score and the number of citations is weak, suggesting that scientists have little ability to judge either the intrinsic merit of a paper or its likely impact. We also show that the number of citations a paper receives is an extremely error-prone measure of scientific merit. Finally, we argue that the impact factor is likely to be a poor measure of merit, since it depends on subjective assessment. We conclude that the three measures of scientific merit considered here are poor; in particular subjective assessments are an error-prone, biased, and expensive method by which to assess merit. We argue that the impact factor may be the most satisfactory of the methods we have considered, since it is a form of pre-publication review. However, we emphasise that it is likely to be a very error-prone measure of merit that is qualitative, not quantitative.

Author summary

Subjective assessments of the merit and likely impact of scientific publications are routinely made by scientists during their own research, and as part of promotion, appointment, and government committees. Using two large datasets in which scientists have made qualitative assessments of scientific merit, we show that scientists are poor at judging scientific merit and the likely impact of a paper, and that their judgment is strongly influenced by the journal in which the paper is published. We also demonstrate that the number of citations a paper accumulates is a poor measure of merit, and we argue that, although it is likely to be poor, the impact factor of the journal in which a paper is published may be the best measure of scientific merit currently available.
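To make the "control for the journal" step above concrete, one common approach is to correlate residuals after regressing scores on (log) journal impact factor. The sketch below is an illustration of that idea only, not the authors' actual analysis; all variable names and the simulated numbers are assumptions.

```python
import numpy as np

def residualize(y, x):
    """Residuals of y after ordinary least-squares regression on x (with intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

def partial_corr(a, b, control):
    """Correlation between a and b after removing the linear effect of `control`."""
    return np.corrcoef(residualize(a, control), residualize(b, control))[0, 1]

# Hypothetical data: two assessors' scores both driven partly by the journal's impact factor.
rng = np.random.default_rng(1)
log_if = rng.normal(1.0, 0.5, 200)                  # log impact factor per paper
score_1 = 2 * log_if + rng.normal(0, 1, 200)        # assessor 1
score_2 = 2 * log_if + rng.normal(0, 1, 200)        # assessor 2
print(np.corrcoef(score_1, score_2)[0, 1])          # raw agreement, inflated by the journal
print(partial_corr(score_1, score_2, log_if))       # agreement after controlling for the journal
```

In this toy setup the raw correlation between assessors is substantial, while the partial correlation is close to zero, mirroring the pattern the abstract describes.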

3.
4.
Abstract

The geomagnetic field effect hypothesis and the variability among replications in germination and growth tests in Triticum. – During previous experiments on germination and growth of Triticum, we often found heteroscedasticity and statistically significant differences among contemporary replications (i.e. germinators). In those experiments all the caryopses were oriented in the same way inside the germinators, whereas the germinators were placed at random inside the thermostats, so seeds in different germinators may have ended up differently oriented. Some researchers report that seeds oriented differently with respect to the lines of force of the geomagnetic field may show different germination, growth rates and growth directions of shoots and roots (magnetotropism). The present investigation was designed to ascertain whether these effects might be responsible for the variability among contemporary replications. Orienting the caryopses differently within the laboratory geomagnetic field showed no effect on germination, growth rate or growth direction. Germination was insensitive to seed age, light and temperature conditions, to cultivar, to orientation away from geomagnetic North in 15° steps, and to the duration of "preorientation" in dry conditions. No difference in variability was observed between randomly oriented and North-oriented subsamples. No statistically significant differences in shoot or root lengths were observed among seedlings from differently oriented caryopses, whether under different environmental conditions, as a function of orientation (24 angles of 15°), after different preorientations, or after repeating the test on 608 subsamples (12,000 seedlings). Therefore the variability among contemporary replications is not attributable to the geomagnetic field. Analysis of the variability and of the structure of the distributions suggests that both the heteroscedasticity and the significant differences among replications may arise from an interaction among the caryopses inside the germinators through the substrate, probably related to cultivar characteristics. We emphasize that an accurate evaluation of the variability is particularly necessary in germination and growth tests.

5.
The chronic diseases, comorbidities and rapidly changing needs of frail older persons increase the complexity of caregiving. A comprehensive, systematic and structured collection of data on the status of the frail older person is presumed to be essential in facilitating decision-making and thus improving the quality of care provided. However, the way in which an assessment is completed has a substantial impact on the quality and value of the results. This study examines the online completion of interRAI Home Care assessments, the possible causes for incomplete assessments and the consequences of these factors with respect to the quality of care received. Our findings indicate high nurse engagement and poor physician participation. We also observed the poor completion of items in predominantly medically oriented sections, characterized by, first, the fact that the assessors felt incapable of answering certain questions, second, the absence of required data or of a competent person to fill out the data, and third, the lack of tools necessary for essential measurements. The incompleteness of assessments has a clear negative influence on outcome generation. Moreover, without the added value of support outcomes, the improvement of care quality can be impeded and information technology can easily be seen as burdensome by the assessors. We have observed that multidisciplinary cooperation is an important prerequisite to establishing high-quality assessments aimed at improving the quality of care.

6.
S. Gavrilets & G. de Jong. Genetics, 1993, 134(2): 609-625
We show that in polymorphic populations many polygenic traits pleiotropically related to fitness are expected to be under apparent "stabilizing selection" independently of the real selection acting on the population. This occurs, for example, if the genetic system is at a stable polymorphic equilibrium determined by selection and the nonadditive contributions of the loci to the trait value either are absent, or are random and independent of those to fitness. Stabilizing selection is also observed if the polygenic system is at an equilibrium determined by a balance between selection and mutation (or migration) when both additive and nonadditive contributions of the loci to the trait value are random and independent of those to fitness. We also compare different viability models that can maintain genetic variability at many loci with respect to their ability to account for the strong stabilizing selection on an additive trait. Let V(m) be the genetic variance supplied by mutation (or migration) each generation, V(g) be the genotypic variance maintained in the population, and n be the number of loci influencing fitness. We demonstrate that in mutation (migration)-selection balance models the strength of apparent stabilizing selection is of order V(m)/V(g). In the overdominant model and in the symmetric viability model the strength of apparent stabilizing selection is approximately 1/(2n) times that of total selection on the whole phenotype. We show that a selection system that involves pairwise additive-by-additive epistasis in maintaining variability can lead to a genetic load and genetic variance in fitness approximately 1/(2n) as large as those of an equivalent selection system that involves overdominance. We show that, in the epistatic model, the apparent stabilizing selection on an additive trait can be as strong as the total selection on the whole phenotype.
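For readers who prefer a compact display, the orders of magnitude stated above can be written as follows (the symbol S_app for the strength of apparent stabilizing selection is our shorthand, not the authors' notation):

\[
S_{\text{app}} \;\sim\; \frac{V(m)}{V(g)} \quad\text{(mutation- or migration-selection balance)},
\qquad
S_{\text{app}} \;\approx\; \frac{1}{2n}\,S_{\text{total}} \quad\text{(overdominant and symmetric viability models)},
\]

where \(V(m)\), \(V(g)\) and \(n\) are as defined above, and \(S_{\text{total}}\) denotes the strength of total selection on the whole phenotype.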

7.
Recent increases in international trade have increased the cost to control and eradicate exotic species. Although many species are under quarantine control for agriculture, forestry, and public health, most species invisible to the naked eye are ignored because of the lack of both specialized assessors and risk assessments. We developed a species risk assessment specifically adapted to fungi, nematodes, and mites that might be unintentionally introduced with exotic forest products and become threats to terrestrial ecosystems. We developed our assessment with reference to existing risk assessments for exotic organisms, including ecological features such as phoresy and parasitism. We then tested our assessment with well-known organisms and assessed the risks of organisms unintentionally introduced into Japan. The assessment produced scientifically acceptable scores for each organism. We suggest quarantine control of risk pathways as a practical approach for controlling unintentionally introduced organisms that are invisible to the naked eye.

8.
An “expansive” risk assessment approach is illustrated, characterizing dose–response relationships for salmonellosis in light of the full body of evidence for human and murine superorganisms. Risk assessments often require analysis of costs and benefits to support public health decisions. Decision-makers and the public need to understand the uncertainty in such analyses for two reasons. First, uncertainty analyses provide a range of possibilities within the framework of present scientific knowledge, helping to avoid undesirable consequences of the selected policies. Second, they encourage risk assessors to scrutinize all available data and models, helping to avoid subjective or systematic errors. Without a full analysis of uncertainty, decisions could be biased by judgments based solely on default assumptions, beliefs, and statistical analyses of selected correlative data. Alternative data and theories that incorporate variability and heterogeneity for the human and murine superorganisms, particularly colonization resistance, are emerging as major influences on microbial risk assessment. Salmonellosis risk assessments are often based on conservative default models derived from selected sets of outbreak data that overestimate illness. Consequently, the full extent of uncertainty in estimates of the annual number of illnesses is not incorporated in risk assessments, and the models presently used may be incorrect.

9.
The magnitude of impacts some alien species cause to native environments makes them targets for regulation and management. However, which species to target is not always clear, and comparisons across a wide variety of impacts are necessary. Impact scoring systems can aid management prioritization of alien species. For such tools to be objective, they need to be robust to assessor bias. Here, we assess the newly proposed Environmental Impact Classification for Alien Taxa (EICAT) as applied to amphibians and test how outcomes differ between assessors. Two independent assessments were made by Kraus (Annual Review of Ecology, Evolution, and Systematics, 46, 2015, 75-97) and Kumschick et al. (Neobiota, 33, 2017, 53-66), including independent literature searches for impact records. Most of the differences between these two classifications can be attributed to the different literature search strategies used, with only one-third of the combined number of references shared between the two studies. For the commonly assessed species, the classification of maximum impacts is similar between assessors for most species, but there are differences in the more detailed assessments. We clarify one specific issue resulting from different interpretations of EICAT, namely the practical interpretation and assignment of disease impacts in the absence of direct evidence of transmission from alien to native species. The differences between assessments outlined here cannot be attributed to features of the scheme. Reporting bias should be avoided by assessing all alien species rather than only the seemingly high-impacting ones, which also improves the utility of the data for management and prioritization of future research. Furthermore, assessments of the same taxon by various assessors, and a structured review process for assessments, as proposed by Hawkins et al. (Diversity and Distributions, 21, 2015, 1360), can ensure that biases are avoided and all important literature is included.

10.
Recent research has demonstrated that nonchemical stressors may alter the toxicity from chemical exposures. This may have public health implications for low socioeconomic status (SES) communities that may be disproportionately exposed to toxic chemicals and various types of community and personal stressors. Nonchemical stressors may introduce an important source of variability that needs to be considered by risk assessors. Herein, we propose a framework for determining if a chemical–nonchemical interaction exists and, if so, options for incorporating interaction information into risk assessments. We use the increasingly recognized interaction between lead and psychosocial stress to illustrate the framework. We found that lead exposure occurs disproportionately in low SES groups that also tend to face high levels of psychosocial stress; that stress and lead both affect neurodevelopment and that this occurs via similar pathways involving the hypothalamic-pituitary axis. Further, several epidemiological and experimental studies have provided evidence for an interaction between lead and psychosocial stress. The implications of this interaction for risk assessment are also discussed.

11.
The efficiency of visual assessment for grain yield and its components in spring barley rows was examined using a number of assessors. In 20 out of 28 combinations of assessor and character, only 20 or fewer lines needed to be retained from the set of 99 lines to save at least 50% of the best 10 lines. Assessments of tillers/row and, to a lesser extent, 1000-grain weight were generally more effective than assessments of yield/row and grains/ear. The low effectiveness of assessment of grains/ear was attributed to inadequate sampling of ears. Four out of five assessors showed bias towards assessment of tillers/row, the most easily assessed character, in their assessments of yield/row. Experienced barley workers were more successful in their assessments than others less familiar with the crop. Specially developed keys, aimed at making visual assessment more objective, generally had only small positive effects on the efficiency of assessment. Repeated assessments of characters by some assessors were consistent. It was concluded that visual assessment rather than direct measurement should be recognised as a basic tool of breeding in the early generations, using a large number of lines as a ‘safety net’ to allow for the loss of some of the best lines.

12.
In microarray studies it is common that the number of replications (i.e. the sample size) is small and that the distribution of expression values differs from normality. In this situation, permutation and bootstrap tests may be appropriate for the identification of differentially expressed genes. However, unlike bootstrap tests, permutation tests are not suitable for very small sample sizes, such as three per group. A variety of different bootstrap tests exists. For example, it is possible to adjust the data to have a common mean before the bootstrap samples are drawn. For small significance levels, which can occur when a large number of genes is investigated, the original bootstrap test, as well as a bootstrap test suggested for the Behrens-Fisher problem, have no power in cases of very small sample sizes. In contrast, the modified test based on adjusted data is powerful. Using a Monte Carlo simulation study, we demonstrate that the difference in power can be huge. In addition, the different tests are illustrated using microarray data.
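To make the "adjust the data to a common mean, then resample" idea concrete, here is a minimal sketch of such a bootstrap test for a single gene with two groups. The Welch-type statistic and all names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def welch_t(x, y):
    """Welch-type t statistic; the small constant guards against zero-variance resamples."""
    nx, ny = len(x), len(y)
    return (x.mean() - y.mean()) / np.sqrt(x.var(ddof=1) / nx + y.var(ddof=1) / ny + 1e-12)

def bootstrap_p(x, y, n_boot=10_000, seed=0):
    """Two-sided bootstrap p-value, shifting both groups to a common mean before resampling."""
    rng = np.random.default_rng(seed)
    t_obs = welch_t(x, y)
    grand_mean = np.concatenate([x, y]).mean()
    # Adjust each group to the common (grand) mean so that the null hypothesis holds.
    x0 = x - x.mean() + grand_mean
    y0 = y - y.mean() + grand_mean
    t_boot = np.empty(n_boot)
    for b in range(n_boot):
        xb = rng.choice(x0, size=len(x), replace=True)
        yb = rng.choice(y0, size=len(y), replace=True)
        t_boot[b] = welch_t(xb, yb)
    return (np.sum(np.abs(t_boot) >= abs(t_obs)) + 1) / (n_boot + 1)

# Toy example with three replicates per group, the small-sample setting discussed above.
x = np.array([7.1, 7.4, 6.9])
y = np.array([8.0, 8.3, 7.8])
print(bootstrap_p(x, y))
```

In a genome-wide analysis this test would be applied gene by gene, with the resulting p-values adjusted for multiple testing.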

13.
BACKGROUND: In recent years, GWA studies have successfully identified common SNPs associated with complex diseases. However, most of the variants found this way account for only a small portion of the trait variance. This fact has led researchers to focus on rare-variant mapping with large-scale sequencing, which can be facilitated by using linkage information. The question arises why linkage analysis often fails to identify genes when analyzing complex diseases. Using simulations, we have investigated the power of parametric and nonparametric linkage statistics (KC-LOD, NPL, LOD and MOD scores) to detect the effect of genes responsible for complex diseases using different pedigree structures. RESULTS: As expected, a small number of pedigrees with fewer than three affected individuals has low power to map disease genes with modest effect. Interestingly, the power decreases when unaffected individuals are included in the analysis, irrespective of the true mode of inheritance. Furthermore, we found that the best performing statistic depends not only on the type of pedigrees but also on the true mode of inheritance. CONCLUSIONS: When applied in a sensible way, linkage is an appropriate and robust technique to map genes for complex disease. Unlike association analysis, linkage analysis is not hampered by allelic heterogeneity. So why does linkage analysis often fail with complex diseases? Evidently, when using an insufficient number of small pedigrees, one might miss a true genetic linkage even when a real effect exists. Furthermore, we show that the test statistic has an important effect on the power to detect linkage as well. Therefore, a linkage analysis might fail if an inadequate test statistic is employed. We provide recommendations regarding the most favorable test statistics, in terms of power, for a given mode of inheritance and type of pedigrees under study, in order to reduce the probability of missing a true linkage.

14.
Human and ecological health risk assessments and the decisions that stem from them require the acquisition and analysis of data. In agencies that are responsible for health risk decision-making, data (and/or opinions/judgments) are obtained from sources such as scientific literature, analytical and process measurements, expert elicitation, inspection findings, and public and private research institutions. Although the particulars of conducting health risk assessments of given disciplines may be dramatically different, a common concern is the subjective nature of judging data utility. Often risk assessors are limited to available data that may not be completely appropriate to address the question being asked. Data utility refers to the ability of available data to support a risk-based decision for a particular risk assessment. This article familiarizes the audience with the concept of data utility and is intended to raise the awareness of data collectors (e.g., researchers), risk assessors, and risk managers to data utility issues in health risk assessments so data collection and use will be improved. In order to emphasize the cross-cutting nature of data utility, the discussion has not been organized into a classical partitioning of risk assessment concerns as being either human health- or ecological health-oriented, as per the U.S. Environmental Protection Agency's Superfund Program.

15.
The current development of densely spaced collections of single nucleotide polymorphisms (SNPs) will lead to genomewide association studies for a wide range of diseases in many different populations. Determinations of the appropriate number of SNPs to genotype involve a balancing of power and cost. Several variables are important in these determinations. We show that there are different combinations of sample size and marker density that can be expected to achieve the same power. Within certain bounds, investigators can choose between designs with more subjects and fewer markers or those with more markers and fewer subjects. Which designs are more cost-effective depends on the cost of phenotyping versus the cost of genotyping. We show that, under the assumption of a set cost for genotyping, one can calculate a "threshold cost" for phenotyping; when phenotyping costs per subject are less than this threshold, designs with more subjects will be more cost-effective than designs with more markers. This framework for determining a cost-effective study will aid in the planning of studies, especially if there are choices to be made with respect to phenotyping methods or study populations.
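A minimal sketch of the threshold-cost reasoning, assuming (hypothetically) two designs already known to give equal power and a fixed per-genotype cost; the numbers and function names are illustrative only, not values from the study.

```python
def total_cost(n_subjects, n_markers, c_pheno, c_geno):
    """Total study cost: per-subject phenotyping plus per-genotype costs."""
    return n_subjects * (c_pheno + n_markers * c_geno)

def threshold_phenotyping_cost(design_a, design_b, c_geno):
    """Phenotyping cost per subject at which two equal-power designs cost the same.

    design_a and design_b are (n_subjects, n_markers) tuples; design_a has more subjects.
    Solves n_a*(c_p + m_a*c_g) = n_b*(c_p + m_b*c_g) for c_p.
    """
    (n_a, m_a), (n_b, m_b) = design_a, design_b
    return c_geno * (n_b * m_b - n_a * m_a) / (n_a - n_b)

# Hypothetical equal-power designs and genotyping cost (illustrative numbers only).
design_a = (2000, 300_000)   # more subjects, fewer markers
design_b = (1500, 500_000)   # fewer subjects, more markers
c_geno = 0.01                # cost per genotype
c_star = threshold_phenotyping_cost(design_a, design_b, c_geno)
print(f"threshold phenotyping cost per subject: {c_star:.0f}")
for c_pheno in (1000, c_star, 5000):
    print(c_pheno, total_cost(*design_a, c_pheno, c_geno), total_cost(*design_b, c_pheno, c_geno))
# Below the threshold, the design with more subjects is cheaper; above it, the design with more markers.
```

With these made-up figures the threshold comes out at 3000 per subject: phenotyping cheaper than that favours the larger-sample design, which is exactly the trade-off the abstract describes.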

16.
The effectiveness and credibility of environmental decisions depend on the information provided by scientific assessments. However, the conflicting assessments provided by government agencies, industries, and environmental advocacy groups suggest that biases occur during assessment processes. Sources of bias include personal bias, regulatory capture, advocacy, reliance on volunteer assessors, biased stakeholder and peer review processes, literature searches, standardization of data, inappropriate standards of proof, misinterpretation, and ambiguity. Assessors can adopt practices to increase objectivity, transparency, and clarity. Decision-makers, managers of assessors, and institutions that commission assessments can adopt other practices that reduce pressures on assessors and reduce opportunities for expression of the personal biases of assessors. Environmental assessment should be recognized as a discipline with its own technical and ethical best practices.

17.
Summary: We have independently repeated the computer simulations on which Nei and Tateno (1978) base their criticism of REH theory and have extended the analysis to include mRNAs as well as proteins. The simulation data confirm the correctness of the REH method. The high average value of the fixation intensity 2 found by Nei and Tateno is due to two factors: 1) they reported only the five replications in which 2 was high, excluding the forty-five replications containing the more representative data; and 2) the lack of information, inherent to protein sequence data, about fixed mutations at the third nucleotide position within codons, as the values are lower when the estimate is made from the mRNAs that code for the proteins. REH values calculated from protein or nucleic acid data on the basis of the equiprobability of genetic events underestimate, not overestimate, the total fixed mutations. In REH theory the experimental data determine the estimate T2 of the time-averaged number of codons that have been free to fix mutations during a given period of divergence. In the method of Nei and Tateno it is assumed, despite evidence to the contrary, that every amino acid position may fix a mutation. Under the latter assumption, the measure X2 of genetic divergence suggested by Nei and Tateno is not tenable: values of X2 for the hemoglobin divergences are less than the minimum number of fixed substitutions known to have occurred. Within the context of REH theory we resolve a paradox, first posed by Zuckerkandl, concerning the high rate of covarion turnover and the nature of general function sites in proteins.

18.
In quantitative trait locus (QTL) mapping studies, it is mandatory that the available financial resources are spent in such a way that the power for detection of QTL is maximized. The objective of this study was to optimize, for three different fixed budgets, the power of QTL detection 1 − β* in recombinant inbred line (RIL) populations derived from a nested design by varying (1) the genetic complexity of the trait, (2) the costs for developing, genotyping, and phenotyping RILs, (3) the total number of RILs, and (4) the number of environments and replications per environment used for phenotyping. Our computer simulations were based on empirical data for 653 single nucleotide polymorphism markers of 26 diverse maize inbred lines, which were selected on the basis of 100 simple sequence repeat markers out of a worldwide sample of 260 maize inbreds to capture the maximum genetic diversity. For the standard scenario of costs, the optimum number of test environments (Eopt) ranged across the examined total budgets from 7 to 19 in the scenarios with 25 QTL. In comparison, the Eopt values observed for the scenarios with 50 and 100 QTL were slightly higher. Our finding of differences in 1 − β* estimates between experiments with optimally and sub-optimally allocated resources illustrates the potential to improve the power for QTL detection without increasing the total resources necessary for a QTL mapping experiment. Furthermore, the results of our study indicate that, even in studies using the latest genomics tools to dissect quantitative traits, the individuals of the mapping population must be evaluated in a large number of environments with a large number of replications per environment.
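To illustrate the kind of budget constraint being optimized, the sketch below simply enumerates (RILs, environments, replications) allocations that fit a fixed budget. The per-unit costs are invented for illustration, and the power of each design (1 − β*) would still have to be estimated by simulation as in the study.

```python
from itertools import product

# Illustrative per-unit costs (assumptions, not the figures used in the study).
C_DEVELOP = 50     # develop one RIL
C_GENOTYPE = 30    # genotype one RIL
C_PLOT = 20        # phenotype one RIL in one environment x replication (one plot)

def feasible_designs(budget, ril_grid, env_grid, rep_grid):
    """Yield (RILs, environments, replications, cost) combinations that fit the budget."""
    for n, e, r in product(ril_grid, env_grid, rep_grid):
        cost = n * (C_DEVELOP + C_GENOTYPE) + n * e * r * C_PLOT
        if cost <= budget:
            yield n, e, r, cost

# The power of QTL detection for each feasible design would be estimated by simulation;
# here we only enumerate the candidate allocations for one fixed budget.
for n, e, r, cost in feasible_designs(1_000_000, range(500, 3001, 500),
                                      range(5, 21, 5), (1, 2, 3)):
    print(f"RILs={n:4d}  environments={e:2d}  replications={r}  cost={cost}")
```

Comparing the simulated power across such feasible designs is what yields an optimum number of test environments for a given budget.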

19.
The frequency of cluster-randomized trials (CRTs) in peer-reviewed literature has increased exponentially over the past two decades. CRTs are a valuable tool for studying interventions that cannot be effectively implemented or randomized at the individual level. However, some aspects of the design and analysis of data from CRTs are more complex than those for individually randomized controlled trials. One of the key components to designing a successful CRT is calculating the proper sample size (i.e. number of clusters) needed to attain an acceptable level of statistical power. In order to do this, a researcher must make assumptions about the value of several variables, including a fixed mean cluster size. In practice, cluster size can often vary dramatically. Few studies account for the effect of cluster size variation when assessing the statistical power for a given trial. We conducted a simulation study to investigate how the statistical power of CRTs changes with variable cluster sizes. In general, we observed that increases in cluster size variability lead to a decrease in power.
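A minimal Monte Carlo sketch of this kind of power calculation, using a cluster-level t-test and a gamma distribution for cluster sizes; all parameter values and the analysis choice are assumptions for illustration, not the authors' simulation code.

```python
import numpy as np
from scipy import stats

def simulate_power(k_per_arm=15, mean_m=50, cv_m=0.0, icc=0.05, effect=0.3,
                   sigma=1.0, n_sim=2000, alpha=0.05, seed=0):
    """Monte Carlo power of a two-arm CRT analysed by a t-test on cluster means.

    cv_m is the coefficient of variation of cluster size; cv_m = 0 gives fixed sizes.
    """
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sim):
        arm_means = []
        for arm_effect in (0.0, effect):
            if cv_m > 0:
                shape = 1.0 / cv_m**2
                sizes = np.maximum(2, rng.gamma(shape, mean_m / shape, k_per_arm)).astype(int)
            else:
                sizes = np.full(k_per_arm, mean_m)
            cluster_u = rng.normal(0.0, np.sqrt(icc) * sigma, k_per_arm)
            means = [np.mean(arm_effect + u + rng.normal(0.0, np.sqrt(1 - icc) * sigma, m))
                     for u, m in zip(cluster_u, sizes)]
            arm_means.append(means)
        _, p = stats.ttest_ind(arm_means[0], arm_means[1])
        hits += p < alpha
    return hits / n_sim

# Power typically drops as cluster-size variability grows, consistent with the finding above.
for cv in (0.0, 0.4, 0.8):
    print(f"CV of cluster size = {cv}: power = {simulate_power(cv_m=cv):.2f}")
```

Running the loop over increasing coefficients of variation shows the qualitative pattern the study reports: holding the number of clusters fixed, more variable cluster sizes cost power.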

20.
Ecological risk assessments often include mechanistic food chain models based on toxicity reference values (TRVs) and a hazard quotient approach. TRVs intended for screening purposes or as part of a larger weight-of-evidence (WOE) assessment are readily available. However, our experience suggests that food chain models using screening-level TRVs often form the primary basis for risk management at smaller industrial sites being redeveloped for residential or urban parkland uses. Iterative improvement of a food chain model or the incorporation of multiple lines of evidence for these sites is often impractical from a cost-benefit perspective when compared to remedial alternatives. We recommend that risk assessors examine the assumptions and factors in the TRV derivation process and, where appropriate, modify the TRVs to improve their ecological relevance. Five areas where uncertainty likely contributes to excessively conservative hazard quotients are identified for consideration.
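For context, a hazard-quotient calculation of the kind described above can be sketched as follows; the exposure equation, parameter names and numbers are simplified illustrations, not values from any particular guidance or from this paper.

```python
def hazard_quotient(daily_dose_mg_kg_day, trv_mg_kg_day):
    """Hazard quotient: estimated dose divided by the toxicity reference value (TRV)."""
    return daily_dose_mg_kg_day / trv_mg_kg_day

def food_chain_dose(conc_in_food_mg_kg, intake_kg_day, body_weight_kg,
                    site_use_factor=1.0, bioavailability=1.0):
    """Simple food-chain exposure estimate for a wildlife receptor (illustrative form only)."""
    return (conc_in_food_mg_kg * intake_kg_day * site_use_factor
            * bioavailability) / body_weight_kg

# Hypothetical numbers: a screening-level TRV versus one adjusted for ecological relevance.
dose = food_chain_dose(conc_in_food_mg_kg=12.0, intake_kg_day=0.015, body_weight_kg=0.35)
for label, trv in (("screening TRV", 0.5), ("refined TRV", 2.0)):
    print(label, round(hazard_quotient(dose, trv), 2))
```

In this toy case the same exposure estimate gives a hazard quotient above 1 with the screening TRV and well below 1 with the refined TRV, which is why the paper argues that the TRV derivation assumptions deserve scrutiny.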

