Similar Literature
20 similar documents found.
1.
2.
A structural regression model with measurement error is proposed to study how measurement errors in multidimensional covariates affect estimation of the average treatment effect under exchangeability. Without additional assumptions, the average treatment effect remains identifiable even though most model parameters are not. Because the maximum likelihood estimator of the average treatment effect is difficult to compute, the quasi-maximum likelihood estimator is recommended as a practical substitute.

3.
Multisensor data fusion (MDF) is an emerging technology that fuses data from multiple sensors to produce a more accurate estimate of the environment through measurement and detection. Applications of MDF span a wide spectrum of military and civilian areas. With the rapid evolution of computers and the proliferation of micro-mechanical/electrical systems sensors, the use of MDF is becoming popular in research and applications. This paper focuses on the application of MDF for high-quality data analysis and processing in measurement and instrumentation. A practical, general data fusion scheme was established on the basis of feature extraction and merging of data from multiple sensors. The scheme integrates artificial neural networks for high-performance pattern recognition. A number of successful applications in NDI (Non-Destructive Inspection) corrosion detection, food quality and safety characterization, and precision agriculture are described and discussed in order to motivate new applications in these or other areas. The paper gives an overall picture of using MDF to increase the accuracy of data analysis and processing in measurement and instrumentation across different application areas.

4.
The Spearman-Brown Prophecy formula, derived from psychometrics, may be used in anthropometric studies to describe the relationship between the intraclass reliability coefficient for a single measurement and the reliability of the mean of replicate measurements. This theory may be applied to determine the expected reliability of anthropometric protocols using replicate measurements and to determine the number of replicate measurements necessary to achieve a desired level of reliability.
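The Spearman-Brown relationship is standard and easy to sketch in code; the reliability values below are hypothetical illustrations, not figures from the paper:

```python
import math

def spearman_brown(r1, k):
    """Reliability of the mean of k replicate measurements,
    given the intraclass reliability r1 of a single measurement:
    r_k = k*r1 / (1 + (k-1)*r1)."""
    return k * r1 / (1 + (k - 1) * r1)

def replicates_needed(r1, r_target):
    """Smallest number of replicates whose mean reaches a target
    reliability (Spearman-Brown solved for k, rounded up)."""
    return math.ceil(r_target * (1 - r1) / (r1 * (1 - r_target)))

# A single measurement with reliability 0.80:
print(spearman_brown(0.80, 3))       # reliability of the mean of 3 replicates
print(replicates_needed(0.80, 0.95)) # replicates needed to reach 0.95
```

With r1 = 0.80, the mean of three replicates has reliability about 0.92, and five replicates are needed to reach 0.95.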

5.
A multidimensional structural regression model with measurement error in a single covariate is presented under exchangeability. The model is used to study estimation of the population average treatment effect, and the quasi-maximum likelihood estimator of that effect, together with its properties, is derived for the case where the covariate measurement errors in the exposed and control groups are identically distributed.

6.
Shepherd BE, Yu C. Biometrics. 2011;67(3):1083-1091.
A data coordinating team performed onsite audits and discovered discrepancies between the data sent to the coordinating center and that recorded at sites. We present statistical methods for incorporating audit results into analyses. This can be thought of as a measurement error problem, where the distribution of errors is a mixture with a point mass at 0. If the error rate is nonzero, then even if the mean of the discrepancy between the reported and correct values of a predictor is 0, naive estimates of the association between two continuous variables will be biased. We consider scenarios where there are (1) errors in the predictor, (2) errors in the outcome, and (3) possibly correlated errors in the predictor and outcome. We show how to incorporate the error rate and magnitude, estimated from a random subset (the audited records), to compute unbiased estimates of association and proper confidence intervals. We then extend these results to multiple linear regression where multiple covariates may be incorrect in the database and the rate and magnitude of the errors may depend on study site. We study the finite sample properties of our estimators using simulations, discuss some practical considerations, and illustrate our methods with data from 2815 HIV-infected patients in Latin America, of whom 234 had their data audited using a sequential auditing plan.
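A minimal moment-based sketch of the correction idea for scenario (1), errors only in the predictor: with classical mean-zero errors, the naive slope is attenuated, and the error variance estimated from the audited subset can be used to divide the attenuation back out. All parameter values are hypothetical, and this is a simplified stand-in, not the paper's estimators:

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta, err_rate, err_sd = 50_000, 2.0, 0.3, 1.5

x = rng.normal(0, 1, n)                      # true predictor
y = beta * x + rng.normal(0, 1, n)           # outcome
# error distribution: a mixture with a point mass at 0
u = rng.normal(0, err_sd, n) * (rng.random(n) < err_rate)
w = x + u                                    # value recorded in the database

beta_naive = np.cov(w, y)[0, 1] / np.var(w, ddof=1)   # attenuated slope

audit = rng.choice(n, 2_000, replace=False)            # audited records
var_u = np.var(w[audit] - x[audit], ddof=1)            # error variance from audit
beta_corr = beta_naive * np.var(w, ddof=1) / (np.var(w, ddof=1) - var_u)
```

With these settings the naive slope shrinks toward roughly beta·Var(x)/(Var(x) + p·σ²) ≈ 1.19, while the audit-corrected estimate recovers β ≈ 2.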

7.
When any measurement process is considered, a basic question is how to assess the precision of measurement methods and/or instruments. In this paper, this question is formulated and solved as a problem of tolerance regions for absolute and relative normally distributed measurement errors.

8.
It is shown that any discrete distribution with non-negative support has a representation in terms of an extended Poisson process (or pure birth process). A particular extension of the simple Poisson process is proposed: one that admits a variety of distributions; the equations for such processes may be readily solved numerically. An analytical approximation for the solution is given, leading to approximate mean-variance relationships. The resulting distributions are then applied to analyses of some biological data sets.

9.
Quality Assessment and Data Analysis for microRNA Expression Arrays
MicroRNAs are small (~22 nt) RNAs that regulate gene expression and play important roles in both normal and disease physiology. The use of microarrays for global characterization of microRNA expression is becoming increasingly popular and has the potential to be a widely used and valuable research tool. However, microarray profiling of microRNA expression raises a number of data analytic challenges that must be addressed in order to obtain reliable results. We introduce here a universal reference microRNA reagent set as well as a series of nonhuman spiked-in synthetic microRNA controls, and demonstrate their use for quality control and between-array normalization of microRNA expression data. We also introduce diagnostic plots designed to assess and compare various normalization methods. We anticipate that the reagents and analytic approach presented here will be useful for improving the reliability of microRNA microarray experiments.

10.
Information Quality (IQ) is a critical factor for the success of many activities in the information age, including the development of data warehouses and implementation of data mining. The issue of IQ risk is recognized during the process of data mining; however, there is no formal methodological approach to dealing with such issues.

Consequently, it is essential to measure the risk of IQ in a data warehouse to ensure success in implementing data mining. This article presents a methodology to determine three IQ risk characteristics: accuracy, comprehensiveness, and non-membership. The methodology provides a set of quantitative models to examine how the quality risks of source information affect the quality of information outputs produced using the relational algebra operations Restriction, Projection, and Cubic product. It can be used to determine how quality risks associated with diverse data sources affect the derived data. The study also develops a data cube model and associated algebra to support IQ risk operations.


11.
In this paper, we investigate the impact of inaccurate forecasting on the coordination of distributed investment decisions. In particular, by setting up a computational multi-agent model of a stylized firm, we investigate the case of investment opportunities that are mutually carried out by organizational departments. The forecasts of concern pertain to the initial amount of money necessary to launch and operate an investment opportunity, to the expected intertemporal distribution of cash flows, and to the departments’ efficiency in operating the investment opportunity at hand. We propose a budget allocation mechanism for coordinating such distributed decisions. The paper provides guidance on how to set framework conditions, in terms of the number of investment opportunities considered in one round of funding and the number of departments operating one investment opportunity, so that the coordination mechanism is highly robust to forecasting errors. Furthermore, we show that, in some setups, a certain extent of misforecasting is desirable from the firm’s point of view, as it supports the achievement of the corporate objective of value maximization. We then address the question of how to improve forecasting quality in the best possible way, and provide policy advice on how to sequence activities for improving forecasting quality so that the robustness of the coordination mechanism to errors increases in the best possible way. At the same time, we show that wrong decisions regarding the sequencing can lead to a decrease in robustness. Finally, we conduct a comprehensive sensitivity analysis and show that, in particular for relatively good forecasters, most of our results are robust to changes in the parameters of our multi-agent simulation model.

12.
Methods for handling, controlling, and evaluating measurement errors are fundamental to the quality of measurement data, and hence to a laboratory's quality level. Building up laboratory facilities safeguards the reduction and control of measurement errors, and establishing suitable laboratory conditions is an effective means of laboratory quality control.

13.
Measurement Error Models and Their Applications
This paper introduces the basic concepts of measurement error models, fundamental results on parameter estimation, and the relationship of these models to ordinary regression, and discusses their possible applications in biology.

14.
Water Quality Quantification: Basics and Implementation
Quantitative estimation of water quality and its relationships with management activities is a necessary step in efficient water resources management. However, water quality is typically defined in abstract terms and management activities are rarely quantified with respect to their impact on lake water quality. Here we show, by demonstration of systems for Lake Kinneret, Israel and the Naroch Lakes of Belarus, how water quality can be quantified in relation to lake management activities as part of sustainable management.

15.
Based on the model V = aD^b, simulation experiments in Matlab were first used to study how measurement error affects parameter estimation. The results show that when the error in V is held fixed and the error in D increases, ordinary least squares yields estimates of a that keep increasing and estimates of b that keep decreasing, so the estimates drift ever further from the true values as the measurement error in D grows. Methods for removing the effect of measurement error were then studied: regression calibration, simulation extrapolation (SIMEX), and the measurement error model approach were each applied to data in which both V and D carry measurement error. All three methods yield unbiased parameter estimates, overcoming the systematic bias of ordinary least squares, and the results further show that the measurement error model approach outperforms regression calibration and SIMEX.
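The attenuation pattern described above is easy to reproduce; a sketch in Python rather than Matlab, with hypothetical parameter values and log-scale (multiplicative) errors assumed:

```python
import numpy as np

rng = np.random.default_rng(1)
a, b, n = 0.5, 2.4, 20_000
D = rng.uniform(5, 50, n)                        # true diameters
V = a * D**b * np.exp(rng.normal(0, 0.05, n))    # mild noise in V, held fixed

a_hat, b_hat = [], []
for d_err in (0.0, 0.1, 0.2):                    # growing error in D
    D_obs = D * np.exp(rng.normal(0, d_err, n))  # log-scale measurement error
    # ordinary least squares on log V = log a + b log D
    slope, intercept = np.polyfit(np.log(D_obs), np.log(V), 1)
    a_hat.append(np.exp(intercept))
    b_hat.append(slope)
```

As d_err grows, b_hat shrinks below the true b = 2.4 while a_hat rises above the true a = 0.5, matching the behavior reported above.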

16.
The aim of this paper is to give a brief summary of important methodological aspects in establishing the reliability of empirical water quality data. These considerations are relevant for applied work, e.g. monitoring programmes, as well as theoretical research, e.g. to validate models. The paper concerns data from Swedish lakes on Hg in pike, perch, water and sediments, and a broad set of limnological data (pH, Secchi depth, temperature, alkalinity, total-P, conductivity, Fe, Ca, hardness, chlorophyll-a and colour). These standard parameters generally vary in a lake, both temporally and areally. The focus of this paper is on such variations and how to express lake-typical values. There are large differences in analytical reliability for different parameters; e.g. Hg (in fish and sediments but not in water) and lake pH can generally be determined with comparatively high accuracy; the average relative standard deviation (V) is only about 2-3% for pH. Colour, Fe and total-P concentrations and alkalinity, on the other hand, generally give high V-values. In natural lakes, the variability is often at least twice as large as the “methodological” variability for parameters such as colour, P, Fe and alkalinity (V-values ranging between 20 and 40% on average in our lakes). This implies that for most parameters one must analyse many samples to obtain representative, lake-typical values with a given statistical reliability. A general formula expressing how many samples are required to establish lake-typical mean values is discussed, as well as statistical aspects concerning the range of empirical data in models based on such data.
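The sample-size question can be illustrated with the standard large-sample approximation n ≈ (z·V/L)², where V is the relative standard deviation and L the allowed error, both as percentages of the mean. This is a generic textbook form, not necessarily the paper's exact formula:

```python
import math

def samples_needed(cv_percent, allowed_error_percent, z=1.96):
    """Samples needed so the 95% confidence interval half-width stays
    within the allowed error, both expressed as % of the mean
    (large-sample normal approximation)."""
    return math.ceil((z * cv_percent / allowed_error_percent) ** 2)

print(samples_needed(3, 5))    # pH-like parameter, V ≈ 3%: very few samples
print(samples_needed(30, 5))   # colour/P/Fe-like parameter, V ≈ 30%: many
```

The contrast shows why low-variability parameters like pH need only a couple of samples for a lake-typical mean, while parameters with V around 30% need over a hundred for the same precision.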

17.
In the United States, the racial and ethnic statistics published by the National Center for Health Statistics (NCHS) assume that each member of the U.S. population has a race and ethnicity and that if a member is black or white with respect to his risk of one disease, he is the same race with respect to his risk of another. Such an assumption is mistaken. Race and ethnicity are taken by the NCHS to be an intrinsic property of members of a population, when they should be taken to depend on interest. The actual or underlying race or ethnicity of members of a population depends on the risk whose variation within the population we wish to describe or explain.
Michael Root

18.

Background

The measurement of cardiac troponin is crucial in the diagnosis of myocardial infarction. The performance of troponin measurement is most conveniently monitored by external quality assessment (EQA) programs. However, the commutability of EQA samples is often unknown, which limits the effectiveness of EQA programs.

Methods

Commutability of possible EQA materials was evaluated. Commercial control materials used in an EQA program, human serum pools prepared from patient samples, purified analyte preparations, swine sera from model animals and a set of patient samples were measured for cTnI with 4 assays including Abbott Architect, Beckman Access, Ortho Vitros and Siemens Centaur. The measurement results were logarithm-transformed, and the transformed data for patient samples were pairwise analyzed with Deming regression and 95% prediction intervals were calculated for each pair of assays. The commutability of the materials was evaluated by comparing the logarithmic results of the materials with the limits of the intervals. Matrix-related biases were estimated for noncommutable materials. The impact of matrix-related bias on EQA was analyzed and a possible correction for the bias was proposed.
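Deming regression, used above for the pairwise assay comparisons, can be sketched as follows; the equal-error-variance assumption (λ = 1) and the data are illustrative choices, not taken from the study:

```python
import numpy as np

def deming(x, y, lam=1.0):
    """Deming regression slope and intercept, allowing measurement error
    in both assays; lam is the assumed ratio of the two error variances."""
    xbar, ybar = x.mean(), y.mean()
    sxx = np.sum((x - xbar) ** 2)
    syy = np.sum((y - ybar) ** 2)
    sxy = np.sum((x - xbar) * (y - ybar))
    slope = (syy - lam * sxx +
             np.sqrt((syy - lam * sxx) ** 2 + 4 * lam * sxy ** 2)) / (2 * sxy)
    return slope, ybar - slope * xbar

# Hypothetical log-transformed cTnI results for the same patient samples
# measured on two assays (values are made up for illustration):
x = np.log(np.array([0.05, 0.2, 1.1, 4.8, 12.0]))
y = np.log(np.array([0.06, 0.25, 1.3, 5.5, 14.0]))
slope, intercept = deming(x, y)
```

A candidate EQA material is then judged commutable if its pair of logarithmic results falls inside the 95% prediction interval around this fitted line.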

Results

Human serum pools were commutable for all assays; purified analyte preparations were commutable for 2 of the 6 assay pairs; commercial control materials and swine sera were all noncommutable; swine sera showed no reactivity in the Vitros assay. The matrix-related biases for noncommutable materials ranged from −83% to 944%. Matrix-related biases of the EQA materials caused major abnormal between-assay variations in the EQA program, and correction of the biases normalized the variations.

Conclusion

Commutability of materials has a major impact on the effectiveness of EQA programs for cTnI measurement. Human serum pools prepared from patient samples are commutable, whereas other materials are mostly noncommutable. EQA programs should include at least one human serum pool to allow proper interpretation of EQA results.

19.
This article describes methods and issues that are specific to the assessment of change in tumor characteristics as measured using quantitative magnetic resonance (MR) techniques and how this relates to the establishment of quantitative MR imaging (MRI) biomarkers of patient response to therapy. The initial focus is on the various sources of bias and variance in the measurement of microvascular parameters and diffusion parameters, as such parameters are being used relatively commonly as secondary or exploratory end points in current phase 1/2 clinical trials of conventional and targeted therapies. Several ongoing initiatives that seek to identify the magnitude of some of the sources of measurement variations are then discussed. Finally, resources being made available through the National Cancer Institute Reference Image Database to Evaluate Response (RIDER) project that might be of use in investigations of quantitative MRI biomarker change analysis are described. These resources include 1) data from phantom-based assessment of system response, including short-term (1 hour) and moderate-term (1 week) contrast response and relaxation time measurement, 2) data obtained from repeated dynamic contrast agent-enhanced MRI studies in intracranial tumors, and 3) data obtained from repeated diffusion MRI studies in both breast and brain. A concluding section briefly discusses issues that must be addressed to allow the transition of MR-based imaging biomarker measures from their current role as secondary/exploratory end points in clinical trials to primary/surrogate markers of response and, ultimately, to clinical application.

20.
The success of projects involving assessment of insect biodiversity depends on many things, but one that is often overlooked is the maintenance of data integrity. This is an issue best considered from project conception, through the design phase, to the completion of the sample, specimen and data processing phase. This paper considers some guiding principles and details some logical steps that will help avoid loss of data integrity.
