Similar Articles
20 similar articles retrieved.
1.
The Evaluation of Measurement Data - Guide to the Expression of Uncertainty in Measurement (usually referred to as the GUM) provides general rules for evaluating and expressing uncertainty in measurement. When a measurand, y, is calculated from other measurements through a functional relationship, uncertainties in the input variables will propagate through the calculation to an uncertainty in the output y. The manner in which such uncertainties are propagated through a functional relationship provides much of the mathematical challenge to fully understanding the GUM. The aim of this review is to provide a general overview of the GUM and to show how the calculation of uncertainty in the measurand may be achieved through a functional relationship. That is, starting with the general equation for combining uncertainty components as outlined in the GUM, we show how this general equation can be applied to various functional relationships in order to derive a combined standard uncertainty for the output value of the particular function (the measurand). The GUM equation may be applied to any mathematical form or functional relationship (the starting point for laboratory calculations) and describes the propagation of uncertainty from the input variable(s) to the output value of the function (the end point or outcome of the laboratory calculation). A rule-based approach is suggested, with a number of the more common rules tabulated for the routine calculation of measurement uncertainty.
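The GUM's law of propagation of uncertainty for uncorrelated inputs, u_c²(y) = Σᵢ (∂f/∂xᵢ)² u²(xᵢ), can be applied to any functional relationship numerically when analytic derivatives are inconvenient. The Python sketch below estimates the sensitivity coefficients by central differences; the example function and all input values are illustrative, not taken from the review:

```python
import numpy as np

def combined_standard_uncertainty(f, x, u):
    """First-order GUM propagation for uncorrelated inputs:
    u_c(y)^2 = sum_i (df/dx_i)^2 * u(x_i)^2, with the sensitivity
    coefficients df/dx_i estimated by central differences."""
    x = np.asarray(x, dtype=float)
    u = np.asarray(u, dtype=float)
    grad = np.empty_like(x)
    for i in range(x.size):
        h = 1e-6 * max(1.0, abs(x[i]))
        xp, xm = x.copy(), x.copy()
        xp[i] += h
        xm[i] -= h
        grad[i] = (f(xp) - f(xm)) / (2.0 * h)
    return float(np.sqrt(np.sum((grad * u) ** 2)))

# Illustrative laboratory calculation: y = a * b / c.
f = lambda v: v[0] * v[1] / v[2]
x = [10.0, 2.5, 4.0]    # measured input values
u = [0.10, 0.05, 0.08]  # standard uncertainties of the inputs
print(f(np.asarray(x)), combined_standard_uncertainty(f, x, u))
```

For this quotient/product form, the same result follows from the tabulated rule that relative uncertainties add in quadrature, which provides a check on the numerical output.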

2.
The prediction of space radiation induced cancer risk carries large uncertainties, two of the largest being radiation quality and dose-rate effects. In risk models the ratio of the quality factor (QF) to the dose and dose-rate reduction effectiveness factor (DDREF) parameter is used to scale organ doses for cosmic ray protons and high charge and energy (HZE) particles to a hazard rate for γ-rays derived from human epidemiology data. In previous work, particle track structure concepts were used to formulate a space radiation QF function that depends on particle charge number Z and kinetic energy per atomic mass unit E. QF uncertainties were represented by subjective probability distribution functions (PDFs) for the three QF parameters that describe its maximum value and the shape of its Z and E dependences. Here I report on an analysis of a maximum QF parameter and its uncertainty using mouse tumor induction data. Because experimental data for risks at low doses of γ-rays are highly uncertain, which impacts estimates of maximum values of relative biological effectiveness (RBEmax), I developed an alternate QF model, denoted QFγAcute, where QFs are defined relative to higher acute γ-ray doses (0.5 to 3 Gy). The alternate model reduces the dependence of risk projections on the DDREF; however, a DDREF is still needed for risk estimates for high-energy protons and other primary or secondary sparsely ionizing space radiation components. Risk projections (upper confidence levels (CL)) for space missions show a reduction of about 40% (CL ∼50%) using the QFγAcute model compared with QFs based on RBEmax, and about 25% (CL ∼35%) compared with previous estimates. In addition, I discuss how a possible qualitative difference leading to increased tumor lethality for HZE particles, compared with low-LET radiation and background tumors, remains a large uncertainty in risk estimates.

3.
A convenient method for evaluation of biochemical reaction rate coefficients and their uncertainties is described. The motivation for developing this method was the complexity of existing statistical methods for analysis of biochemical rate equations, as well as the shortcomings of linear approaches such as Lineweaver-Burk plots. The nonlinear least-squares method provides accurate estimates of the rate coefficients and their uncertainties from experimental data. Linearized methods that involve inversion of data are unreliable, since several important assumptions of linear regression are violated. Furthermore, when linearized methods are used, there is no basis for calculating the uncertainties in the rate coefficients. Uncertainty estimates are crucial to studies involving comparisons of rates for different organisms or environmental conditions. The spreadsheet method uses weighted least-squares analysis to determine the best-fit values of the rate coefficients for the integrated Monod equation. Although the integrated Monod equation is an implicit expression of substrate concentration, weighted least-squares analysis can be employed to calculate approximate differences in substrate concentration between model predictions and data. An iterative search routine in a spreadsheet program is utilized to find the best-fit values of the coefficients by minimizing the sum of squared weighted errors. The uncertainties in the best-fit values of the rate coefficients are calculated by an approximate method that can also be implemented in a spreadsheet. The uncertainty method can be used to calculate single-parameter (coefficient) confidence intervals, degrees of correlation between parameters, and joint confidence regions for two or more parameters. Example sets of calculations are presented for acetate utilization by a methanogenic mixed culture and trichloroethylene cometabolism by a methane-oxidizing mixed culture. A further advantage of applying this method to the integrated Monod equation, compared with linearized methods, is economy: rate coefficients can be obtained from a single batch experiment (or a few) rather than from large numbers of initial-rate measurements. However, even when initial-rate measurements are used, this method remains more reliable than linearized approaches.
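A minimal sketch of such a weighted least-squares fit, in Python rather than a spreadsheet, assuming the no-growth form of the integrated Monod equation, Ks·ln(S0/S) + (S0 − S) = kXt; the data, weights, and starting values below are hypothetical:

```python
import numpy as np
from scipy.optimize import brentq, least_squares

# Hypothetical batch data: time (h) and substrate (mg/L); X assumed constant.
t_obs = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 5.0])
S_obs = np.array([88.0, 76.0, 55.0, 36.0, 20.0, 9.0])
S0, X = 100.0, 50.0   # initial substrate (mg/L) and biomass (mg/L)

def S_pred(params, t):
    k, Ks = params
    # No-growth integrated Monod: Ks*ln(S0/S) + (S0 - S) = k*X*t, implicit in S.
    out = []
    for ti in t:
        g = lambda S: Ks * np.log(S0 / S) + (S0 - S) - k * X * ti
        if g(1e-9) <= 0:          # trial parameters exhaust substrate; clamp
            out.append(1e-9)
        else:
            out.append(brentq(g, 1e-9, S0))
    return np.array(out)

def weighted_residuals(params):
    w = 1.0 / np.maximum(S_obs, 1.0)   # example weighting; choose per error model
    return w * (S_obs - S_pred(params, t_obs))

fit = least_squares(weighted_residuals, x0=[0.5, 30.0],
                    bounds=([1e-6, 1e-6], [2.0, 500.0]))
print("k, Ks =", fit.x)

# Approximate single-coefficient uncertainties from the Jacobian at the optimum.
J = fit.jac
cov = np.linalg.inv(J.T @ J) * (2 * fit.cost / (len(t_obs) - 2))
print("standard errors:", np.sqrt(np.diag(cov)))
```

The covariance approximation at the end corresponds to the kind of single-coefficient uncertainty estimate described in the abstract; joint confidence regions can be built from the same Jacobian.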

4.
A computer-assisted method for determining population counts using the 'most probable number' (MPN) was developed. The Microsoft Excel spreadsheet and its Solver tool were used to generate MPNs, error estimates and confidence limits. Our method was flexible, allowing the use of unbalanced replication schemes and varying replication numbers and inoculation volumes. Furthermore, it required no programming skills and generated fast results, which were comparable to those of standard MPN tables and MPN software.
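The Solver-based estimation the authors describe is, in effect, maximum-likelihood estimation under Poisson seeding of tubes. A minimal Python equivalent (the dilution scheme is illustrative, and the code is not the authors' spreadsheet):

```python
import numpy as np
from scipy.optimize import minimize_scalar

# One row per dilution: volume per tube, tubes inoculated, tubes positive.
# The design may be unbalanced; this is an illustrative 3-dilution series.
volumes = np.array([0.1, 0.01, 0.001])   # g (or mL) of sample per tube
n_tubes = np.array([3, 3, 3])
n_pos   = np.array([3, 2, 0])

def neg_log_likelihood(log10_mpn):
    lam = 10.0 ** log10_mpn              # organisms per unit volume
    p = 1.0 - np.exp(-lam * volumes)     # P(tube positive) under Poisson seeding
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -np.sum(n_pos * np.log(p) + (n_tubes - n_pos) * np.log(1 - p))

res = minimize_scalar(neg_log_likelihood, bounds=(-3, 6), method="bounded")
print("MPN per unit volume ≈", 10 ** res.x)
```

For the 3-2-0 outcome shown, the estimate lands near the familiar table value for this series, which is the comparability the authors report.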

5.
Following an evaluation of the various methods available for non-destructive biomass estimation in short rotation forestry, a standardised procedure was defined and incorporated into a computer programme (BioEst). Special efforts were made to ensure that the system can be used by people who are unfamiliar with computers and mathematics. BioEst provides an interface between a calliper and a spreadsheet programme which was written in Microsoft Excel macro language. Therefore, it is simple to modify the programme and create personal protocols. BioEst can be run on a portable PC with Microsoft Excel for Windows. The computer continuously recalculates an estimate of the amount of biomass per hectare, as well as some summary statistics, when fed data on shoot diameter obtained by making row-section-wise measurements with a standard digital calliper. BioEst is available without cost from the author.
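BioEst itself is an Excel macro program; the fragment below only sketches the running per-hectare calculation it performs, assuming a power-law shoot allometry with hypothetical coefficients a and b and row-section-wise sampling:

```python
# Hypothetical allometric coefficients; users would calibrate a and b for
# their own clone and site (shoot mass in g = a * diameter_mm ** b).
a, b = 0.02, 2.5

def biomass_per_hectare(diameters_mm, row_section_length_m, row_spacing_m):
    """Estimate stand biomass (kg/ha) from shoot diameters measured over
    one row section, scaling the sampled area up to one hectare."""
    shoot_mass_kg = sum(a * (d ** b) / 1000.0 for d in diameters_mm)
    sampled_area_m2 = row_section_length_m * row_spacing_m
    return shoot_mass_kg * (10_000.0 / sampled_area_m2)

# Example: four shoots measured over a 2 m section in rows 1.5 m apart.
print(biomass_per_hectare([12.0, 15.5, 9.8, 14.1], 2.0, 1.5))
```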

6.
Assurance of monoclonality of recombinant cell lines is a critical issue in gaining regulatory approval in a biological license application (BLA). Among the requirements of regulatory agencies are proper documentation and appropriate statistical analysis to demonstrate monoclonality. In some cases, one round of cloning may be sufficient to demonstrate monoclonality. In this article, we propose the use of confidence intervals for assessing monoclonality for limiting dilution cloning in the generation of recombinant manufacturing cell lines based on a single round. The use of confidence intervals instead of point estimates allows practitioners to account for the uncertainty present in the data when assessing whether an estimated level of monoclonality is consistent with regulatory requirements. In other cases, one round may not be sufficient and two consecutive rounds are required to assess monoclonality. When two consecutive subclonings are required, we improved the present methodology by reducing the infinite series proposed by Coller and Coller (Hybridoma 1983;2:91–96) to a simpler series. The proposed simpler series provides more accurate and reliable results. It also reduces the level of computation and can be easily implemented in any spreadsheet program such as Microsoft Excel. © 2016 American Institute of Chemical Engineers Biotechnol. Prog., 32:1061–1068, 2016
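For the single-round case, a sketch of the confidence-interval idea under Poisson seeding; the well counts are hypothetical, and Clopper-Pearson on the growth fraction is one reasonable construction, not necessarily the authors' method:

```python
import numpy as np
from scipy.stats import beta

# Hypothetical single-round data: growth observed in k of n seeded wells.
n, k = 960, 210

# Clopper-Pearson 95% interval for the true growth fraction.
p_lo = beta.ppf(0.025, k, n - k + 1)
p_hi = beta.ppf(0.975, k + 1, n - k)

def p_monoclonal(p_growth):
    """Poisson seeding: P(growth) = 1 - exp(-lam), so
    P(exactly one cell | growth) = lam*exp(-lam) / (1 - exp(-lam))."""
    lam = -np.log(1.0 - p_growth)
    return lam * np.exp(-lam) / (1.0 - np.exp(-lam))

# Monoclonality falls as the growth fraction rises, so the upper confidence
# limit on growth yields a conservative lower bound on monoclonality,
# which is the quantity to compare against a regulatory requirement.
print("point estimate:", p_monoclonal(k / n))
print("conservative 95% bound:", p_monoclonal(p_hi))
```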

7.
In a recent epidemiological study, Bayesian uncertainties on lung doses have been calculated to determine lung cancer risk from occupational exposures to plutonium. These calculations used a revised version of the Human Respiratory Tract Model (HRTM) published by the ICRP. In addition to the Bayesian analyses, which give probability distributions of doses, point estimates of doses (single estimates without uncertainty) were also provided for that study using the existing HRTM as it is described in ICRP Publication 66; these are to be used in a preliminary analysis of risk. To infer the differences between the point estimates and Bayesian uncertainty analyses, this paper applies the methodology to former workers of the United Kingdom Atomic Energy Authority (UKAEA), who constituted a subset of the study cohort. The resulting probability distributions of lung doses are compared with the point estimates obtained for each worker. It is shown that mean posterior lung doses are around two- to fourfold higher than point estimates and that uncertainties on doses vary over a wide range, greater than two orders of magnitude for some lung tissues. In addition, we demonstrate that uncertainties on the parameter values, rather than the model structure, are largely responsible for these effects. Of these it appears to be the parameters describing absorption from the lungs to blood that have the greatest impact on estimates of lung doses from urine bioassay. Therefore, accurate determination of the chemical form of inhaled plutonium and the absorption parameter values for these materials is important for obtaining reliable estimates of lung doses and hence risk from occupational exposures to plutonium.

8.
Aims: The purpose of this work was to derive a simple Excel spreadsheet and a set of standard tables of most probable number (MPN) values that can be applied by users of International Standard Methods to obtain the same output values for MPN, SD of the MPN, 95% confidence limits and test validity. With respect to the latter, it is considered that the Blodgett concept of ‘rarity’ is more valuable than the frequently used approach of improbability (vide de Man). Methods and Results: The paper describes the statistical procedures used in the work and the reasons for introducing a new set of conceptual and practical approaches to the determination of MPNs and their parameters. Examples of MPNs derived using these procedures are provided. The Excel spreadsheet can be downloaded from http://www.wiwiss.fu-berlin.de/institute/iso/mitarbeiter/wilrich/index.html. Conclusions: The application of the revised approach to the determination of MPN parameters permits those who wish to use tabulated values, and those who require access to a simple spreadsheet to determine values for nonstandard test protocols, to obtain the same output values for any specific set of multiple test results. The concept of ‘rarity’ is a more easily understood parameter to describe test result combinations that are not statistically valid. Provision of the SD of the log MPN value permits derivation of uncertainty parameters that have not previously been possible. Significance and Impact of the Study: A consistent approach for the derivation of MPNs and their parameters is essential for coherence between International Standard Methods. It is intended that future microbiology standard methods will be based on the procedures described in this paper.
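One way to obtain an SD of the log MPN is from the curvature (observed information) of the Poisson log-likelihood at the MPN estimate. The sketch below uses a numerical second derivative; the outcome and the MLE value are illustrative, and this is not the Wilrich spreadsheet itself:

```python
import numpy as np

# Illustrative 3-tube, 3-dilution outcome and its (assumed) ML estimate.
volumes = np.array([0.1, 0.01, 0.001])
n_tubes = np.array([3, 3, 3])
n_pos   = np.array([3, 2, 0])
log10_mpn_hat = np.log10(93.0)   # illustrative MLE for this outcome

def nll(x):
    """Negative log-likelihood as a function of log10(MPN)."""
    lam = 10.0 ** x
    p = np.clip(1.0 - np.exp(-lam * volumes), 1e-12, 1 - 1e-12)
    return -np.sum(n_pos * np.log(p) + (n_tubes - n_pos) * np.log(1 - p))

# Central second difference of the negative log-likelihood at the MLE.
h = 1e-4
info = (nll(log10_mpn_hat + h) - 2 * nll(log10_mpn_hat)
        + nll(log10_mpn_hat - h)) / h**2
sd_log10 = 1.0 / np.sqrt(info)

ci = 10 ** (log10_mpn_hat + 1.96 * sd_log10 * np.array([-1, 1]))
print("SD(log10 MPN) ≈", sd_log10, "; approx. 95% CI:", ci)
```

Working on the log scale is what makes the resulting interval usable for downstream uncertainty budgets, as the abstract notes.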

9.
In response to growing awareness of climate change, requests to establish product carbon footprints have been increasing. Product carbon footprints are life cycle assessments restricted to just one impact category, global warming. Product carbon footprint studies generate life cycle inventory results, listing the environmental emissions of greenhouse gases from a product’s lifecycle, and characterize these by their global warming potentials, producing product carbon footprints that are commonly communicated as point values. In the present research we show that the uncertainties surrounding these point values necessitate more sophisticated ways of communicating product carbon footprints, using different sizes of catfish (Pangasius spp.) farms in Vietnam as a case study. As most product carbon footprint studies only have a comparative meaning, we used dependent sampling to produce relative results in order to increase the power for identifying environmentally superior products. We therefore argue that product carbon footprints, supported by quantitative uncertainty estimates, should be used to test hypotheses, rather than to provide point value estimates or plain confidence intervals of products’ environmental performance.
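A toy illustration of dependent sampling: shared uncertain parameters are drawn once per Monte Carlo iteration and applied to both alternatives, so the shared uncertainty cancels from the comparison and a hypothesis-test style statement becomes possible. All distributions and values below are hypothetical, not the catfish case-study data:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

# Shared uncertain background parameter (e.g. a feed emission factor),
# drawn ONCE per iteration and applied to both alternatives: this is the
# dependent sampling that sharpens the comparison.
ef_feed = rng.lognormal(mean=1.0, sigma=0.3, size=n)   # kg CO2-eq/kg feed

fcr_small = rng.normal(1.9, 0.2, size=n)   # feed conversion, small farms
fcr_large = rng.normal(1.7, 0.2, size=n)   # feed conversion, large farms

cf_small = ef_feed * fcr_small             # kg CO2-eq per kg fish
cf_large = ef_feed * fcr_large

# A hypothesis-test style output instead of two point values:
print("P(large-farm footprint < small-farm footprint) =",
      np.mean(cf_large < cf_small))
```

With independent sampling of ef_feed for each alternative, the same comparison would be far noisier, which is the paper's argument for relative rather than absolute reporting.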

10.
A certain minimal amount of RNA from biological samples is necessary to perform a microarray experiment with suitable replication. In some cases, the amount of RNA available is insufficient, necessitating RNA amplification prior to target synthesis. However, there is some uncertainty about the reliability of targets that have been generated from amplified RNA, because of nonlinearity and preferential amplification. The present work develops a straightforward strategy to assess the reliability of microarray data obtained from amplified RNA. The tabular method we developed, which utilises a Down-Up-Missing-Below (DUMB) classification scheme, shows that microarrays generated with amplified RNA targets are reliable within constraints. There was an increase in false negatives because of the need for increased filtering. Furthermore, this analysis method is generic and can be broadly applied to evaluate all microarray data. A copy of the Microsoft Excel spreadsheet is available upon request from Edward Bearden.
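A minimal sketch of a DUMB-style call and an agreement check between amplified and unamplified targets; the fold-change threshold and the detection-flag logic are assumptions for illustration, not the published scheme:

```python
# Classify each spot from a log2 expression ratio and a detection flag.
# Thresholds are hypothetical, not the cutoffs used in the original study.
def dumb_call(log2_ratio, detected, fold=1.0):
    if not detected:
        return "Missing"
    if abs(log2_ratio) < fold:
        return "Below"                    # below the fold-change threshold
    return "Up" if log2_ratio > 0 else "Down"

# Agreement between amplified and total-RNA targets for the same genes
# (toy data: (log2 ratio, detected) pairs for three genes).
calls_amp   = [dumb_call(r, d) for r, d in [(1.8, True), (-0.2, True), (0.9, False)]]
calls_total = [dumb_call(r, d) for r, d in [(1.5, True), (-1.4, True), (1.1, True)]]
agree = sum(a == b for a, b in zip(calls_amp, calls_total)) / len(calls_amp)
print(calls_amp, calls_total, agree)
```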

11.
Schütz E, von Ahsen N. BioTechniques. 1999;27(6):1218-22, 1224.
The use of thermodynamic parameters for the calculation of oligonucleotide duplex stability provides the best estimates of oligonucleotide melting temperatures (Tm). Such estimates can be used for evidence-based design of molecular biological experiments in which oligonucleotide melting behavior is a critical issue, such as temperature- or denaturing-gradient gel electrophoresis, Southern blotting, or hybridization probe assays on the LightCycler. We have developed a user-friendly program for Tm calculation of matched and mismatched probes using the spreadsheet software Microsoft Excel. The most recently published values for the entropy and enthalpy of Watson-Crick pairs are used, and salt and oligonucleotide concentrations are considered. The 5' and 3' end stability is calculated for the estimation of primer specificity. In addition, the influence of all possible mutations under a given probe can be calculated automatically. The experimental evaluation of predicted Tm with the LightCycler, based on 14 hybridization probes for different gene loci, showed an excellent fit between measured results and values predicted with the thermodynamic model in 14 matched, 25 single-mismatched and 8 two-point-mismatched assays (r = 0.98; Sy.x = 0.90; y = 1.01x − 0.38). This program is extremely useful for the design of oligonucleotide probes because probes that do not discriminate between wild-type and mutation with a reasonable Tm difference can be avoided in advance.
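The underlying calculation can be sketched as follows, using the unified nearest-neighbour parameter set of SantaLucia (1998) with an entropy-based salt correction and a C_T/4 concentration term for non-self-complementary duplexes. The ΔH/ΔS values below are transcribed from memory and should be verified against the primary source before use:

```python
import math

# Unified nearest-neighbour parameters (SantaLucia 1998): dH in kcal/mol,
# dS in cal/(mol*K). Transcribed from memory - verify before relying on them.
NN = {"AA": (-7.9, -22.2), "AT": (-7.2, -20.4), "TA": (-7.2, -21.3),
      "CA": (-8.5, -22.7), "GT": (-8.4, -22.4), "CT": (-7.8, -21.0),
      "GA": (-8.2, -22.2), "CG": (-10.6, -27.2), "GC": (-9.8, -24.4),
      "GG": (-8.0, -19.9)}
COMP = str.maketrans("ACGT", "TGCA")

def tm_nearest_neighbor(seq, c_oligo=0.25e-6, na=0.05):
    """Tm (deg C) of a fully matched, non-self-complementary duplex."""
    dH = dS = 0.0
    for i in range(len(seq) - 1):
        pair = seq[i:i + 2]
        if pair not in NN:                       # use the equivalent stack
            pair = pair.translate(COMP)[::-1]
        h, s = NN[pair]
        dH += h
        dS += s
    for end in (seq[0], seq[-1]):                # duplex initiation terms
        dH += 0.1 if end in "GC" else 2.3
        dS += -2.8 if end in "GC" else 4.1
    dS += 0.368 * (len(seq) - 1) * math.log(na)  # entropy salt correction
    R = 1.987                                    # cal/(mol*K)
    return dH * 1000 / (dS + R * math.log(c_oligo / 4)) - 273.15

print(round(tm_nearest_neighbor("ACGTAGCATCGGATCGA"), 1))
```

Mismatch handling, as in the published program, would add further nearest-neighbour terms for the mismatched stacks; that table is omitted here.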

12.
The generation interval is the interval between the time when an individual is infected by an infector and the time when this infector was infected. Its distribution underpins estimates of the reproductive number and hence informs public health strategies. Empirical generation-interval distributions are often derived from contact-tracing data. But linking observed generation intervals to the underlying generation interval required for modelling purposes is surprisingly not straightforward, and misspecifications can lead to incorrect estimates of the reproductive number, with the potential to misguide interventions to stop or slow an epidemic. Here, we clarify the theoretical framework for three conceptually different generation-interval distributions: the ‘intrinsic’ one typically used in mathematical models and the ‘forward’ and ‘backward’ ones typically observed from contact-tracing data, looking, respectively, forward or backward in time. We explain how the relationship between these distributions changes as an epidemic progresses and discuss how empirical generation-interval data can be used to correctly inform mathematical models.
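A small numerical illustration of one relationship the paper formalizes: during exponential growth, the backward generation-interval distribution is the intrinsic distribution reweighted by past incidence, and so has a shorter mean. The Gamma intrinsic distribution and the growth rate below are assumptions chosen for the example:

```python
import numpy as np

# During exponential growth at rate r, incidence ~ exp(r*t), so the
# backward density is g(tau) * exp(-r*tau) (infectors come from a smaller,
# earlier cohort), renormalized.
tau = np.linspace(0.0, 40.0, 4001)
dt = tau[1] - tau[0]

scale = 2.5
g = tau * np.exp(-tau / scale) / scale**2    # intrinsic: Gamma(shape=2), mean 5 d

r = 0.15                                     # assumed epidemic growth rate
backward = g * np.exp(-r * tau)

integral = lambda f: float(np.sum(f) * dt)   # simple rectangle rule
g /= integral(g)
backward /= integral(backward)

print(f"intrinsic mean = {integral(tau * g):.2f} days")
print(f"backward mean  = {integral(tau * backward):.2f} days")
```

Fitting the intrinsic distribution directly to backward observations would therefore bias the reproductive number, which is the misspecification the abstract warns about.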

13.
Delamarche C. BioTechniques. 2000;29(1):100-4, 106-7.
Interpretation of multiple sequence alignments is of major interest for the prediction of functional and structural domains in proteins or for the organization of related sequences in families and subfamilies. For the bench scientist, however, such analyses require good programs running in a familiar computing environment. This paper describes Color and Graphic Display (CGD), a set of modules that runs as part of the Microsoft Excel spreadsheet to color and analyze multiple sequence alignments. Discussed here are the main functions of CGD and the use of the program to highlight residues of importance in a water channel family. Although CGD was created for protein sequences, most of the modules are compatible with DNA sequences.

14.
For infectious disease dynamical models to inform policy for the containment of infectious diseases, the models must be able to predict; however, it is well recognised that such prediction will never be perfect. Nevertheless, the consensus is that although models are uncertain, some may yet inform effective action. This assumes that the quality of a model can be ascertained in order to evaluate sufficiently model uncertainties, and to decide whether or not, or in what ways or under what conditions, the model should be ‘used’. We examined uncertainty in modelling, utilising a range of data: interviews with scientists, policy-makers and advisors, and analysis of policy documents, scientific publications and reports of major inquiries into key livestock epidemics. We show that the discourse of uncertainty in infectious disease models is multi-layered, flexible, contingent, embedded in context and plays a critical role in negotiating model credibility. We argue that the usability and stability of a model are an outcome of the negotiation that occurs within the networks and discourses surrounding it. This negotiation employs a range of discursive devices that renders uncertainty in infectious disease modelling a plastic quality that is amenable to ‘interpretive flexibility’. The utility of models in the face of uncertainty is a function of this flexibility, the negotiation this allows, and the contexts in which model outputs are framed and interpreted in the decision making process. We contend that rather than being based predominantly on beliefs about quality, the usefulness and authority of a model may at times be primarily based on its functional status within the broad social and political environment in which it acts.

15.

Background

This study determines ‘correlation constants’ between the gold standard histological measurement of retinal thickness and the newer spectral-domain optical coherence tomography (SD-OCT) technology in adult C57BL/6 mice.

Methods

Forty-eight eyes from adult mice underwent SD-OCT imaging and then were histologically prepared for frozen sectioning with H&E staining. Retinal thickness was measured via 10x light microscopy. SD-OCT images and histological sections were standardized to three anatomical sites relative to the optic nerve head (ONH) location. The ratios between SD-OCT to histological thickness for total retinal thickness (TRT) and six sublayers were defined as ‘correlation constants’.

Results

Mean (± SE) TRT for SD-OCT and histological sections was 210.95 µm (±1.09) and 219.58 µm (±2.67), respectively. The mean ‘correlation constant’ for TRT between the SD-OCT and histological sections was 0.96. Retinal thicknesses for all sublayers measured by SD-OCT and histology were also similar; ‘correlation constant’ values ranged from 0.70 to 1.17. All SD-OCT and histological measurements demonstrated highly significant (p<0.01) strong positive correlations.

Conclusion

This study establishes conversion factors for the translation of ex vivo data into in vivo information, thus enhancing the applicability of SD-OCT in translational research.

16.
Resting-state functional magnetic resonance imaging (rs-fMRI) is widely used to investigate the functional architecture of the healthy human brain and how it is affected by learning, lifelong development, brain disorders or pharmacological intervention. Non-sensory experiences are prevalent during rest and must arise from ongoing brain activity, yet little is known about this relationship. Here, we used two runs of rs-fMRI, each immediately followed by the Amsterdam Resting-State Questionnaire (ARSQ), to investigate the relationship between functional connectivity within ten large-scale functional brain networks and ten dimensions of thoughts and feelings experienced during the scan in 106 healthy participants. We identified 11 positive associations between brain-network functional connectivity and ARSQ dimensions. ‘Sleepiness’ exhibited significant associations with functional connectivity within Visual, Sensorimotor and Default Mode networks. Similar associations were observed for ‘Visual Thought’ and ‘Discontinuity of Mind’, which may relate to variation in imagery and thought control mediated by arousal fluctuations. Our findings show that self-reports of thoughts and feelings experienced during a rs-fMRI scan help understand the functional significance of variations in functional connectivity, which should be of special relevance to clinical studies.

17.

Background

Highly parallel sequencing technologies have become important tools in the analysis of sequence polymorphisms on a genomic scale. However, the development of customized software to analyze data produced by these methods has lagged behind.

Methods/Principal Findings

Here I describe a tool, ‘galign’, designed to identify polymorphisms between sequence reads obtained using Illumina/Solexa technology and a reference genome. The ‘galign’ alignment tool does not use Smith-Waterman matrices for sequence comparisons. Instead, a simple algorithm comparing parsed sequence reads to parsed reference genome sequences is used. ‘galign’ output is geared towards immediate user application, displaying polymorphism locations, nucleotide changes, and relevant predicted amino-acid changes for ease of information processing. To do so, ‘galign’ requires several accessory files easily derived from an annotated reference genome. Direct sequencing as well as in silico studies demonstrate that ‘galign’ provides lesion predictions comparable in accuracy to available prediction programs, accompanied by greater processing speed and more user-friendly output. We demonstrate the use of ‘galign’ to identify mutations leading to phenotypic consequences in C. elegans.

Conclusion/Significance

Our studies suggest that ‘galign’ is a useful tool for polymorphism discovery, and is of immediate utility for sequence mining in C. elegans.
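Not the published ‘galign’ code, but a minimal sketch of the stated idea: comparing parsed (indexed) substrings instead of running Smith-Waterman, and reporting mismatches as candidate polymorphisms:

```python
# Index the reference by k-mer, anchor each read on an exact seed match,
# then scan the aligned window for mismatches (candidate SNPs).
def index_reference(ref, k=12):
    idx = {}
    for i in range(len(ref) - k + 1):
        idx.setdefault(ref[i:i + k], []).append(i)
    return idx

def place_read(read, ref, idx, k=12, max_mismatches=2):
    for pos in idx.get(read[:k], []):          # anchor on the first k-mer
        window = ref[pos:pos + len(read)]
        if len(window) < len(read):
            continue
        mism = [(pos + j, window[j], read[j])
                for j in range(len(read)) if window[j] != read[j]]
        if len(mism) <= max_mismatches:
            return pos, mism                   # (position, [(ref_pos, ref, alt)])
    return None

ref = "ACGTACGGTTCAGGCTAACGTTAGCCTAGGACT"
idx = index_reference(ref)
print(place_read("GGTTCAGGCTAACATT", ref, idx))
```

Anchoring only on the first k-mer misses reads whose polymorphism falls inside the seed; a fuller implementation would try several seeds per read.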

18.
Process-based model analyses are often used to estimate changes in soil organic carbon (SOC), particularly at regional to continental scales. However, uncertainties are rarely evaluated, and so it is difficult to determine how much confidence can be placed in the results. Our objective was to quantify uncertainties across multiple scales in a process-based model analysis, and provide 95% confidence intervals for the estimates. Specifically, we used the Century ecosystem model to estimate changes in SOC stocks for US croplands during the 1990s, addressing uncertainties in model inputs, structure and scaling of results from point locations to regions and the entire country. Overall, SOC stocks increased in US croplands by 14.6 Tg C yr⁻¹ from 1990 to 1995 and 17.5 Tg C yr⁻¹ during 1995 to 2000, and uncertainties were ±22% and ±16% for the two time periods, respectively. Uncertainties were inversely related to spatial scale, with median uncertainties at the regional scale estimated at ±118% and ±114% during the early and latter part of the 1990s, and even higher at the site scale with estimates at ±739% and ±674% for the time periods, respectively. This relationship appeared to be driven by the amount of the SOC stock change; changes in stocks that exceeded 200 Gg C yr⁻¹ represented a threshold where uncertainties were always lower than ±100%. Consequently, the amount of uncertainty in estimates derived from process-based models will partly depend on the level of SOC accumulation or loss. In general, the majority of uncertainty was associated with model structure in this application, and so attaining higher levels of precision in the estimates will largely depend on improving the model algorithms and parameterization, as well as increasing the number of measurement sites used to evaluate the structural uncertainty.

19.
Capsid-displayed adenoviral peptide libraries have been a significant, yet so far unfeasible, goal in biotechnology. Three barriers have made this difficult: the large size of the viral genome, the low efficiency of converting plasmid-based genomes into packaged adenovirus, and the fact that library amplification is hampered by the ability of two (or more) viruses to co-infect one cell. Here, we present a novel vector system, pFex, which is capable of overcoming all three barriers. With pFex, modified fiber genes are recombined into the natural genetic locus of adenovirus through unidirectional Cre–lox recombination. Modified fiber genes can be directly shuttled into replicating viral genomes in mammalian cells. The ‘acceptor’ vector does not contain the fiber gene and therefore does not propagate until it has received a ‘donor’ fiber gene. This methodology thus overcomes the low efficiency of transfecting large viral genomes and bypasses the need for transition to functional virus. With a fiber-shuttle library, one can therefore generate and evaluate large numbers of fiber-modified adenoviruses simultaneously. Finally, successful fiber genes can be rescued from virus and recombined back into shuttle plasmids, avoiding the need to propagate mixed viral pools. As proof of principle, we use this new system to screen a capsid-displayed peptide library for retargeted viral infection.

20.
Density estimates for large carnivores derived from camera surveys often have wide confidence intervals due to low detection rates. Such estimates are of limited value to authorities, which require precise population estimates to inform conservation strategies. Using lures can potentially increase detection, improving the precision of estimates. However, by altering the spatio-temporal patterning of individuals across the camera array, lures may violate closure, a fundamental assumption of capture-recapture. Here, we test the effect of scent lures on the precision and veracity of density estimates derived from camera-trap surveys of a protected African leopard population. We undertook two surveys (a ‘control’ and a ‘treatment’ survey) on Phinda Game Reserve, South Africa. Survey design remained consistent except that a scent lure was applied at camera-trap stations during the treatment survey. Lures did not affect the maximum movement distances (p = 0.96) or temporal activity of female (p = 0.12) or male leopards (p = 0.79), and the assumption of geographic closure was met for both surveys (p > 0.05). The numbers of photographic captures were also similar for control and treatment surveys (p = 0.90). Accordingly, density estimates were comparable between surveys, although estimates derived using non-spatial methods (7.28–9.28 leopards/100 km²) were considerably higher than estimates from spatially explicit methods (3.40–3.65 leopards/100 km²). The precision of estimates from the control and treatment surveys was also comparable, and this applied to both non-spatial and spatial methods of estimation. Our findings suggest that, at least in the context of leopard research in productive habitats, the use of lures is not warranted.
