Similar Documents
A total of 20 similar documents were retrieved (search time: 15 ms).
1.
The parallel conductance volume, created by the conductivity of structures surrounding the ventricular blood pool, can be estimated by using a saline dilution technique. This paper examines the use of a novel volume reduction method, applied during a standard vena caval preload reduction maneuver, as an alternative to the routinely used saline dilution method for calibrating conductance catheter measurements in the left (LV) and right ventricle (RV) of animals and humans. The serial reproducibility of both methods was examined by measurement of percent difference and by assessing the coefficient of repeatability 1) between two measurements within the same subject, 2) between the two techniques, and 3) between observers. The effect of ventricular size and contractile state on the volume reduction technique was also observed, since it was essential to ensure the technique was not affected by inotropic state. The volume reduction technique and saline dilution method were therefore repeated at three different loading states (baseline, 5, and 10 μg·kg⁻¹·min⁻¹ of dobutamine). The coefficient of repeatability between serial measurements was similar for both the volume reduction and saline dilution methods, and interobserver agreement was good. The volume reduction technique was compared with the saline dilution technique over a large range of ventricular sizes. No significant difference was observed in the RV or LV of adult humans or in the LV of neonatal pigs and children. There was no significant effect on either the saline dilution or the volume reduction technique as the inotropic state increased. In conclusion, the volume reduction technique is affected by neither ventricular size nor contractile state, is repeatable between different observers, and can substitute for the saline dilution method when preload reduction of the ventricle is being employed.
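For context, a minimal sketch of how a parallel conductance estimate (however obtained, by saline dilution or by a volume reduction manoeuvre) enters the conventional conductance-to-volume conversion often attributed to Baan; all constants and signal values below are illustrative assumptions, not data from this study.

import numpy as np

def conductance_to_volume(G_total, G_parallel, rho=1.5, L=1.0, alpha=1.0):
    """Convert measured total conductance to ventricular volume using the
    conventional relation V = (1/alpha) * rho * L**2 * (G - Gp).

    rho   : blood resistivity (illustrative value; units must be consistent)
    L     : inter-electrode spacing (illustrative value)
    alpha : dimensionless gain factor, typically calibrated against an
            independent volume measurement (e.g., thermodilution stroke volume)
    """
    return (1.0 / alpha) * rho * L**2 * (np.asarray(G_total) - G_parallel)

# Hypothetical total conductance over part of one beat, plus a parallel
# conductance estimated by either calibration technique described above.
G_total = np.array([8.0, 8.6, 9.4, 9.0, 8.2])   # hypothetical signal
G_parallel = 3.1                                 # hypothetical estimate

print(conductance_to_volume(G_total, G_parallel))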

2.
We present a novel approach for measuring topical microbicide gel dilution using optical imaging. The approach compares gel thickness measurements from fluorimetry and multiplexed low coherence interferometry (mLCI) in order to calculate the dilution of a gel. As a microbicide gel becomes diluted at fixed thickness, its mLCI thickness measurement remains constant, while the fluorimetry signal decreases in intensity; the difference between the two measurements is therefore related to the extent of gel dilution. These two optical modalities are implemented in a single endoscopic instrument that enables simultaneous data collection. A preliminary validation study was performed with in vitro placebo gel measurements taken in a controlled test socket. It was found that the change in slope of the regression line between fluorimetry- and mLCI-based measurements indicates dilution. A dilution calibration curve was then generated by repeating the test socket measurements with serial dilutions of placebo gel with vaginal fluid simulant. This methodology can provide valuable dilution information on candidate microbicide products, which could substantially enhance our understanding of their in vivo functioning.
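A sketch of the dilution calculation the abstract implies, under the simplifying assumption that fluorescence intensity scales with fluorophore concentration times layer thickness while mLCI reports thickness directly; the calibration constant and signal values are hypothetical.

import numpy as np

# Hypothetical calibration: fluorescence counts per micrometre of undiluted
# gel, obtained from test-socket measurements of neat gel.
K_UNDILUTED = 120.0   # counts per um, assumed

def estimate_dilution(fluor_signal, mlci_thickness_um):
    """Estimate the dilution of a gel layer from paired measurements.

    fluor_signal      : fluorimetry intensity, assumed ~ concentration * thickness
    mlci_thickness_um : layer thickness from mLCI

    Returns the relative concentration (1.0 = undiluted) and the dilution factor.
    """
    counts_per_um = np.asarray(fluor_signal) / np.asarray(mlci_thickness_um)
    relative_conc = counts_per_um / K_UNDILUTED
    return relative_conc, 1.0 / relative_conc

# Example: a constant 200 um layer whose fluorimetry signal has dropped,
# consistent with dilution by vaginal fluid simulant.
rel, dil = estimate_dilution(fluor_signal=[24000.0, 12000.0],
                             mlci_thickness_um=[200.0, 200.0])
print(rel, dil)   # ~[1.0, 0.5] relative concentration -> 1x and 2x dilution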

3.
We examined the quantitation of myosin regulatory light chain phosphorylation (MRLCP) by Western blot and found both offset and saturation errors. The desirable characteristics of an MRLCP assay are that the dynamic range be 60- to 100-fold and that the detection threshold be known and preferably very small relative to the total MRLC concentration. No technique examined provided all of these characteristics. However, accurate measurements can be obtained by including serial dilutions of the sample to provide a fractional calibration scale in terms of the dephosphorylated light chain, and by interpolating the phosphorylated band signal intensity on that scale to obtain values for the relative phosphorylation ratio. This dilution-ratio method offers several advantages over methods that rely on signal ratios from single samples: it is less subject to errors from differences in protein load, it provides an estimate of the error in each individual measurement, and it has some redundancy that increases the likelihood of obtaining a valid measurement despite gel or membrane artifacts.
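One way to code the dilution-ratio idea (a sketch with made-up band intensities; the exact normalization used by the authors may differ): the serial dilutions of the dephosphorylated light chain define a fractional calibration scale, and the phosphorylated band intensity is read off that scale by interpolation.

import numpy as np

# Serial dilutions of the sample loaded on the same gel: known dilution
# fractions relative to the undiluted lane, and the measured intensities of
# the dephosphorylated light-chain band in those lanes (hypothetical values).
dilution_fraction = np.array([1.00, 0.50, 0.25, 0.125])
dephospho_signal  = np.array([9800., 5100., 2500., 1300.])

# Intensity of the phosphorylated band in the undiluted lane (hypothetical).
phospho_signal = 3600.0

# Interpolate the phosphorylated intensity onto the fractional scale defined
# by the dephosphorylated dilution series (x-values must be increasing).
order = np.argsort(dephospho_signal)
equivalent_fraction = np.interp(phospho_signal,
                                dephospho_signal[order],
                                dilution_fraction[order])

# Relative phosphorylation expressed against the total light chain; this
# normalization is an assumption made for the example.
ratio = equivalent_fraction / (equivalent_fraction + 1.0)
print(equivalent_fraction, ratio)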

4.
The problem of missing data is common in all fields of science. Various methods of estimating missing values in a dataset exist, such as deletion of cases, insertion of the sample mean, and linear regression. Each approach presents problems inherent in the method itself or in the nature of the pattern of missing data. We report a method, MISDAT, that (1) is more general in application and (2) provides better estimates than traditional approaches, such as one-step regression. The model is general in that it may be applied to singular matrices, such as small datasets or those that contain dummy or index variables. The strength of the model is that it builds a regression equation iteratively, using a bootstrap method: the precision of the regressed estimates of a variable increases as the regressed estimates of the predictor variables improve. We illustrate this method with a set of measurements of European Upper Paleolithic and Mesolithic human postcranial remains, as well as a set of primate anthropometric data. First, simulation tests using the primate dataset involved randomly setting 20% of the values to "missing". In each case, the first iteration produced significantly better estimates than other estimating techniques. Second, we applied our method to the incomplete set of human postcranial measurements. MISDAT estimates always perform better than replacement of missing data by means and better than classical multiple regression. As with classical multiple regression, MISDAT performs best when squared multiple correlation values approach the reliability of the measurement to be estimated, e.g., above about 0.8.
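A stripped-down sketch of iterative regression imputation in the spirit described above, not the authors' MISDAT implementation: missing cells are first filled with column means and then repeatedly re-estimated by regressing each incomplete variable on the others until the imputed values stabilize.

import numpy as np

def iterative_regression_impute(X, n_iter=20, tol=1e-6):
    """Impute missing values (np.nan) in a numeric matrix X by iterating
    ordinary least-squares regressions of each incomplete column on the
    remaining columns. A simplified sketch of the iterative idea only."""
    X = np.array(X, dtype=float)
    missing = np.isnan(X)
    col_means = np.nanmean(X, axis=0)
    X[missing] = np.take(col_means, np.where(missing)[1])   # initial fill

    for _ in range(n_iter):
        X_old = X.copy()
        for j in range(X.shape[1]):
            rows = missing[:, j]
            if not rows.any():
                continue
            others = np.delete(np.arange(X.shape[1]), j)
            A = np.column_stack([np.ones(len(X)), X[:, others]])
            beta, *_ = np.linalg.lstsq(A[~rows], X[~rows, j], rcond=None)
            X[rows, j] = A[rows] @ beta                      # refine estimates
        if np.max(np.abs(X - X_old)) < tol:
            break
    return X

# Tiny demonstration with two correlated measurements and one missing cell.
data = [[160., 42.], [170., 48.], [np.nan, 45.], [180., 52.]]
print(iterative_regression_impute(data))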

5.
Film coating thickness of minitablets was estimated in-line during coating in fluid-bed equipment by means of visual imaging. An existing, commercially available image acquisition system was used for image acquisition, while dedicated image analysis and data analysis methods were developed for this purpose. The methods were first tested against simulated minitablet images and then examined on a laboratory-scale fluid-bed Wurster coating process, for which an observation window cleaning mechanism was developed. Six batches of minitablets were coated in total, using two different dispersions; for the second dispersion, the coating endpoint was determined based on the in-line measurement. Coating thickness estimates were calculated from the increasing size distributions of the minitablets' major and minor lengths, assessed from the acquired images, yielding information on both the average band and the average cap coating thickness. The in-line coating thickness estimates were compared with coating thickness calculations from weight gain and with optical microscope measurements as a reference method. The average band coating thickness estimate was the most accurate in comparison to the microscope measurements, with a root mean square error of 1.30 μm. The window cleaning mechanism was crucial for the accuracy of the in-line measurements, as was evident from the corresponding decrease in the root mean square error (from 9.52 μm, band coating thickness). The presented visual imaging approach exhibits an accuracy of 2 μm or better and is not susceptible to coating formulation or color variations. It presents a promising alternative to other existing techniques for in-line coating thickness estimation.

6.
Metabolic fluxes, estimated from stable isotope studies, provide a key to quantifying physiology in fields ranging from metabolic engineering to the analysis of human metabolic diseases. A serious drawback of the flux estimation method in current use is that it does not produce confidence limits for the estimated fluxes. Without this information, it is difficult to interpret flux results and to expand the physiological significance of flux studies. To address this shortcoming, we derived analytical expressions of flux sensitivities with respect to isotope measurements and measurement errors. These tools allow the determination of local statistical properties of fluxes and of the relative importance of measurements. Furthermore, we developed an efficient algorithm to determine accurate flux confidence intervals and demonstrated that confidence intervals obtained with this method closely approximate true flux uncertainty. In contrast, confidence intervals approximated from local estimates of standard deviations are inappropriate due to inherent system nonlinearities. We applied these methods to analyze the statistical significance and confidence of gluconeogenesis fluxes estimated from human studies with [U-13C]glucose as tracer, and found true limits for flux estimation in specific human isotopic protocols.

7.
Participant-level meta-analysis across multiple studies increases the sample size for pooled analyses, thereby improving precision in effect estimates and enabling subgroup analyses. For analyses involving biomarker measurements as an exposure of interest, investigators must first calibrate the data to address measurement variability arising from the use of different laboratories and/or assays. In practice, the calibration process involves reassaying a random subset of biospecimens from each study at a central laboratory and fitting models that relate the study-specific “local” and central laboratory measurements. Previous work in this area treats the calibration process from the perspective of measurement error techniques and imputes the estimated central laboratory value for individuals with only a local laboratory measurement. In this work, we propose a repeated measures method to calibrate biomarker measurements pooled from multiple studies with study-specific calibration subsets. We account for correlation between measurements made on the same person and between measurements made at the same laboratory. We demonstrate that the repeated measures approach provides valid inference, and we compare it with existing calibration approaches grounded in measurement error techniques in an example describing the association between circulating vitamin D and stroke.
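For context, a sketch of the regression-calibration step that the abstract contrasts against (not the proposed repeated-measures model): within one study's calibration subset, central-laboratory values are regressed on local values, and the fit is used to impute calibrated values for participants with only a local measurement.

import numpy as np

def calibrate_study(local_all, local_subset, central_subset):
    """Regress central-lab values on local-lab values in one study's
    calibration subset, then impute calibrated values for all participants.
    A sketch of the standard measurement-error-style calibration."""
    A = np.column_stack([np.ones(len(local_subset)), local_subset])
    beta, *_ = np.linalg.lstsq(A, central_subset, rcond=None)
    intercept, slope = beta
    return intercept + slope * np.asarray(local_all)

# Hypothetical vitamin D measurements from one contributing study.
local_all      = np.array([18., 22., 25., 31., 40., 27.])
local_subset   = np.array([18., 25., 40.])       # re-assayed biospecimens
central_subset = np.array([21., 27., 43.])       # central-laboratory values

print(calibrate_study(local_all, local_subset, central_subset))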

8.
We propose a new method for using validation data to correct self-reported weight and height in surveys that do not measure respondents. The standard correction in prior research regresses actual measures on reported values using an external validation dataset, and then uses the estimated coefficients to predict actual measures in the primary dataset. This approach requires the strong assumption that the expectations of measured weight and height conditional on the reported values are the same in both datasets. In contrast, we use percentile ranks rather than levels of reported weight and height. Our approach requires only the weaker assumption that the conditional expectations of actual measures are increasing in reported values in both samples. This makes our correction more robust to differences in measurement error across surveys, as long as both surveys represent the same population. We examine three nationally representative datasets and find that misreporting appears to be sensitive to differences in survey context. When we compare predicted BMI distributions using the two validation approaches, we find that the standard correction is affected by differences in misreporting, while our correction is not. Finally, we present several examples that demonstrate the potential importance of our correction for future econometric analyses and estimates of obesity rates.
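A sketch of the percentile-rank idea with hypothetical numbers: each reported value's rank in the primary survey is matched to the validation respondent at the same reported-value rank, and that respondent's measured value is assigned; only monotonicity of measured in reported values is assumed.

import numpy as np
from scipy import stats

def rank_based_correction(reported_primary, reported_valid, measured_valid):
    """Map each reported value in the primary survey to the measured value of
    the validation respondent at the same reported-value percentile rank.
    A sketch assuming both surveys represent the same population."""
    # Percentile rank of each primary-survey report within its own survey.
    ranks = stats.rankdata(reported_primary) / (len(reported_primary) + 1)
    # Validation respondents ordered by their reported values.
    order = np.argsort(reported_valid)
    grid = np.arange(1, len(order) + 1) / (len(order) + 1)
    return np.interp(ranks, grid, np.asarray(measured_valid)[order])

# Hypothetical self-reported weights (kg) in the primary survey, and a
# validation sample with both reported and measured weights.
reported_primary = np.array([62., 70., 81., 95., 74.])
reported_valid   = np.array([60., 68., 72., 80., 92., 100.])
measured_valid   = np.array([63., 71., 76., 85., 97., 108.])

print(rank_based_correction(reported_primary, reported_valid, measured_valid))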

9.
For the quantitative analysis of an unknown sample, a calibration curve should be obtained, as analytical instruments give relative, rather than absolute, measurements. Researchers therefore make standard samples with various known concentrations, measure each standard and the unknown sample, and then determine the concentration of the unknown by comparing the measured value to those of the standards. These procedures are tedious and time-consuming. We therefore developed a polymer-based microfluidic device from polydimethylsiloxane that integrates serial dilution and capillary electrophoresis functions in a single device. The integrated microchip can provide a one-step analytical tool and thus replace these complex experimental procedures. Two plastic syringes, one containing a buffer solution and the other a standard solution, were connected to two inlet holes on the microchip and pushed by hydrodynamic force. The standard sample is serially diluted to various concentrations through the microfluidic networks. The diluted samples are sequentially introduced through microchannels by electro-osmotic force, and their laser-induced fluorescence signals are measured by capillary electrophoresis. We demonstrate the performance of the integrated microchip by measuring the fluorescence signals of fluorescein at various concentrations. The calibration curve obtained from the electropherograms showed the expected linearity.
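The calibration step that the device automates can be written in a few lines (hypothetical fluorescein readings): fit a line to signal versus known standard concentration from the serial dilutions, then invert it for the unknown sample.

import numpy as np

# Hypothetical laser-induced fluorescence signals for the serially diluted
# fluorescein standards produced by the microfluidic network.
std_conc   = np.array([0.0, 12.5, 25.0, 50.0, 100.0])      # uM
std_signal = np.array([2.0, 151.0, 300.0, 598.0, 1205.0])  # arbitrary units

# Linear calibration: signal = slope * concentration + intercept.
slope, intercept = np.polyfit(std_conc, std_signal, 1)

def concentration_from_signal(signal):
    """Invert the calibration line to estimate an unknown concentration."""
    return (np.asarray(signal) - intercept) / slope

unknown_signal = 455.0
print(concentration_from_signal(unknown_signal))   # roughly 38 uM here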

10.
Mass spectrometric (MS) isotopomer analysis has become a standard tool for investigating biological systems using stable isotopes. In particular, metabolic flux analysis uses mass isotopomers of metabolic products, typically formed from 13C-labeled substrates, to quantitate intracellular pathway fluxes. In the current work, we describe a model-driven method of numerical bias estimation for MS isotopomer analysis. Correct bias estimation is crucial for characterizing the statistical quality of measurements and for obtaining reliable fluxes. The model we developed for bias estimation corrects a priori unknown systematic errors unique to each individual mass isotopomer peak. For validation, we carried out both computational simulations and experimental measurements. The stochastic simulations showed that carbon mass isotopomer distributions and measurement noise can be determined much more precisely when signals are corrected for possible systematic errors. By removing the estimated background signals, the residuals between experimental measurement and model expectation became consistent with normality, experimental variability was reduced, and data consistency was improved. The method is useful for obtaining systematic-error-free data from 13C tracer experiments and can also be extended to other stable isotopes. As a result, the reliability of metabolic fluxes that are typically computed from mass isotopomer measurements is increased.

11.
Population abundances are rarely, if ever, known. Instead, they are estimated with some amount of uncertainty. The resulting measurement error has consequences for subsequent analyses that model population dynamics and estimate probabilities about abundances at future points in time. This article addresses some outstanding questions on the consequences of measurement error in one such dynamic model, the random walk with drift, and proposes some new ways to correct for it. We present a broad and realistic class of measurement error models that allows both heteroskedasticity and possible correlation in the measurement errors, and we provide analytical results on the biases of estimators that ignore the measurement error. Our new estimators include both method-of-moments estimators and "pseudo"-estimators that proceed from observed estimates of population abundance together with estimates of the parameters of the measurement error model. We derive the asymptotic properties of our methods and of existing methods, and we compare their finite-sample performance in a simulation experiment. We also examine the practical implications of the methods by using them to analyze two existing population dynamics data sets.
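A sketch of a method-of-moments correction for the simplest case touched on above, i.i.d. observation error with known variance (the homoskedastic, uncorrelated special case): on the log scale, differences of observed abundances have mean equal to the drift and variance equal to the process variance plus twice the observation-error variance.

import numpy as np

def rw_drift_mom(log_abundance_obs, tau2):
    """Method-of-moments estimates for a random walk with drift observed with
    i.i.d. measurement error of known variance tau2 (a simplified sketch).

    Model: x_t = x_{t-1} + mu + e_t,  e_t ~ N(0, sigma2)
           y_t = x_t + u_t,           u_t ~ N(0, tau2)
    Differences d_t = y_t - y_{t-1} have mean mu and variance sigma2 + 2*tau2.
    """
    d = np.diff(np.asarray(log_abundance_obs, dtype=float))
    mu_hat = d.mean()
    sigma2_hat = max(d.var(ddof=1) - 2.0 * tau2, 0.0)   # truncate at zero
    return mu_hat, sigma2_hat

# Hypothetical log-abundance estimates from annual surveys, with an assumed
# observation-error variance (e.g., derived from the surveys' reported CVs).
y = np.log([420., 455., 430., 510., 560., 540., 615.])
print(rw_drift_mom(y, tau2=0.004))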

12.
We present an approach for identifying genes under natural selection using polymorphism and divergence data from synonymous and non-synonymous sites within genes. A generalized linear mixed model is used to model the genome-wide variability among categories of mutations and to estimate its functional consequence. We demonstrate how the model's estimated fixed and random effects can be used to identify genes under selection. The parameter estimates from our generalized linear model can be transformed to yield population genetic parameter estimates for quantities including the average selection coefficient for new mutations at a locus, the synonymous and non-synonymous mutation rates, and species divergence times. Furthermore, our approach incorporates stochastic variation due to the evolutionary process and can be fit using standard statistical software. The model is fit in both the empirical Bayes and Bayesian settings, using the lme4 package in R and Markov chain Monte Carlo methods in WinBUGS. Using simulated data, we compare our method to existing approaches for detecting genes under selection: the McDonald-Kreitman test and two versions of the Poisson random field based method MKprf. Overall, we find that our method universally outperforms existing methods for detecting genes subject to selection using polymorphism and divergence data.

13.
Incomplete detection of all individuals, leading to negative bias in abundance estimates, is a pervasive source of error in aerial surveys of wildlife, and correcting that bias is a critical step in improving surveys. We conducted experiments using duck decoys as surrogates for live ducks to estimate bias associated with surveys of wintering ducks in Mississippi, USA. We found that detection of decoy groups was related to wetland cover type (open vs. forested), group size (1–100 decoys), and the interaction of these variables. Observers who detected decoy groups reported counts that averaged 78% of the decoys actually present, and this counting bias was not influenced by either covariate cited above. We integrated this sightability model into estimation procedures for our sample surveys, with weight adjustments derived from probabilities of group detection (estimated by logistic regression) and from count bias. To estimate variances of abundance estimates, we used bootstrap resampling of transects included in aerial surveys together with data from the bias-correction experiment. When we implemented the bias correction procedures on data from a field survey conducted in January 2004, bias-corrected estimates of abundance increased 36–42%, and associated standard errors increased 38–55%, depending on the species or group estimated. We deemed our method successful for integrating correction of visibility bias into an existing sample survey design for wintering ducks in Mississippi, and we believe this procedure could be implemented in a variety of sampling problems for other locations and species. (Journal of Wildlife Management 72(3):808–813; 2008)
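A sketch of how the two corrections can combine (hypothetical coefficients and counts, not the Mississippi survey data): a logistic model supplies each detected group's detection probability, counts are inflated for the 78% counting bias, and groups are weighted by the inverse of their detection probability.

import numpy as np

COUNT_BIAS = 0.78   # observed counts averaged 78% of decoys actually present

def detection_prob(group_size, forested, beta=(-0.5, 0.03, -1.1)):
    """Hypothetical logistic detection model:
    logit(p) = b0 + b1*group_size + b2*forested (coefficients assumed)."""
    b0, b1, b2 = beta
    eta = b0 + b1 * np.asarray(group_size) + b2 * np.asarray(forested)
    return 1.0 / (1.0 + np.exp(-eta))

def corrected_abundance(observed_counts, group_size, forested):
    """Correct observed group counts for counting bias, then weight each group
    by 1/p(detection) in the spirit of a sightability adjustment."""
    counts = np.asarray(observed_counts, dtype=float) / COUNT_BIAS
    p = detection_prob(group_size, forested)
    return np.sum(counts / p)

# Three detected groups: counts from the air and wetland cover type.
obs_counts = [12, 40, 7]
sizes      = [12, 40, 7]
forested   = [1, 0, 1]       # 1 = forested, 0 = open
print(corrected_abundance(obs_counts, sizes, forested))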

14.
This paper addresses the problem of estimating an age-at-death distribution, or paleodemographic profile, from osteological data. It is demonstrated that the classical two-stage procedure, whereby one first constructs age-at-death estimates for individual skeletons and then uses these age estimates to obtain a paleodemographic profile, is not a correct approach; this is a consequence of Bayes' theorem. Instead, we demonstrate a valid approach that proceeds from the opposite starting point: given skeletal age-at-death, one first estimates the probability of assigning the skeleton to a specific osteological age-indicator stage. We show that this leads to a statistically valid method for obtaining a paleodemographic profile and, moreover, that valid individual age estimation itself requires a demographic profile and therefore must be done after its construction. Individual age estimation thus becomes the last rather than the first step in the estimation procedure. A central concept of our statistical approach is that of a weight function. A weight function is associated with each osteological age-indicator stage or category and provides the probability that a specific age-indicator stage is observed, given the age-at-death of the individual. We recommend that weight functions be estimated nonparametrically from a reference data set. In their entirety, the weight functions characterize the relevant stochastic properties of a chosen age indicator. For actual estimation of the paleodemographic profile, a parametric age distribution in the target sample is assumed, and the maximum likelihood method is used to identify the unknown parameters of this distribution. Because some components are estimated nonparametrically, the resulting model is semiparametric. We show how to obtain valid estimates of individual age-at-death, confidence regions, and goodness-of-fit tests. The methods are illustrated with both real and simulated data.
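The estimation scheme can be made concrete with a small sketch: given weight functions P(stage | age), taken here as assumed logistic curves (in practice estimated from a reference collection), the likelihood of the target sample's stage counts under a parametric age-at-death density is maximized, and individual age estimation follows afterwards via Bayes' theorem. The Gompertz form and all numbers are illustrative.

import numpy as np
from scipy.optimize import minimize

ages = np.linspace(15, 90, 301)          # adult age grid (years)
da = ages[1] - ages[0]

# Illustrative weight functions P(stage | age) for three ordered indicator
# stages; these logistic forms are assumptions standing in for nonparametric
# estimates from a reference collection with known ages.
def weight_functions(a):
    p_late = 1 / (1 + np.exp(-(a - 55) / 8))
    p_mid = 1 / (1 + np.exp(-(a - 35) / 8)) - p_late
    return np.vstack([1 - p_mid - p_late, p_mid, p_late])

def age_pdf(theta):
    """Parametric (Gompertz-type) age-at-death density on the grid."""
    alpha, beta = np.exp(theta)          # keep parameters positive
    t = ages - ages[0]
    pdf = alpha * np.exp(beta * t - (alpha / beta) * (np.exp(beta * t) - 1))
    return pdf / (pdf.sum() * da)

W = weight_functions(ages)

def neg_log_lik(theta, stage_counts):
    p_stage = (W * age_pdf(theta)).sum(axis=1) * da   # P(stage | theta)
    p_stage = np.clip(p_stage, 1e-300, None)
    return -np.sum(stage_counts * np.log(p_stage))

# Hypothetical target sample: 40, 35, and 25 skeletons observed in the stages.
counts = np.array([40, 35, 25])
fit = minimize(neg_log_lik, x0=np.log([0.02, 0.05]), args=(counts,),
               method="Nelder-Mead")

# Individual age estimation comes last: posterior age density given a stage.
posterior_mid = W[1] * age_pdf(fit.x)
posterior_mid /= posterior_mid.sum() * da
print(np.exp(fit.x), ages[np.argmax(posterior_mid)])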

15.
An alteration to Woodward's methods is recommended for deriving a 1 − α confidence interval for microbial density using serial dilutions with most-probable-number (MPN) estimates. Outcomes of the serial dilution test are ordered by their MPNs. The lower limit of the confidence interval corresponding to an outcome y is the density for which y and all higher-ordered outcomes have total probability α/2; the upper limit is derived in the analogous way. The alteration increases the lowest lower limits and decreases the highest upper limits. For comparison, a method that is optimal in the sense of null hypothesis rejection is described; this method ranks outcomes depending on the microbial density in question, using proportional first derivatives of the probabilities. These and currently used methods are compared. The recommended method is shown to be more desirable in certain respects, although it results in slightly wider confidence intervals than De Man's (1983) method.

16.
The formation of a soil ingestion distribution based on pooling data from current soil ingestion studies is appealing. An important issue in forming such a distribution is what to do with negative soil ingestion estimates for particular subjects, because they comprise approximately 10 to 40% of the total soil ingestion estimates. One method of correcting for the negative estimates of soil ingestion is to make use of the “soil ingestion detection limit”; an appropriate methodology for forming estimates of such detection limits is available in the literature. This paper discusses appropriate use of the existing soil ingestion detection limit methodology in forming a pooled database from current soil ingestion study data. The discussion focuses attention on the current limitations of children's soil ingestion data and on potential pitfalls in applying the detection limit model when generating a soil ingestion distribution. In summary, currently available soil ingestion data are not sufficiently reliable to impute individual soil ingestion estimates below the detection limit. Research directed toward identifying and quantifying individual error in soil ingestion estimates is needed to overcome this limitation.

17.
Bioassays using serial soil dilutions and most-probable-number (MPN) estimations have been used by various authors to quantify inoculum of soil-borne plant pathogens. The requirements of a reliable bioassay are discussed; they include a good choice of dilution series and reproducible growing conditions. Sources of computer programs for analysis of the data are listed. The importance of testing the fit to the mathematical model used is illustrated and emphasised. Factors affecting the size and stability of the standard errors, and of the inherent bias of the most-probable-number estimate, are discussed. Equations are presented for calculating the expected standard errors, approximate confidence limits, and least significant differences for different dilution factors and numbers of replicates. The benefits of using uneven replication are illustrated. Mathematical considerations show that the technique should enable differences of an order of magnitude to be detected, and that MPNs should be quoted with a maximum of two significant figures. Dilution ratios as large as 10 should be avoided. Statistical and biological difficulties, especially in standardising growing conditions when soil moisture is critical, indicate that results should normally be regarded as relative, rather than absolute, measurements of inoculum.
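A minimal sketch of the most-probable-number calculation such bioassays rest on: if each test unit at dilution level i receives an amount v_i of the original soil suspension and y_i of n_i units are positive, the likelihood of density lambda is the product over levels of (1 - exp(-lambda*v_i))^y_i * exp(-lambda*v_i)^(n_i - y_i), maximized numerically. All numbers are hypothetical.

import numpy as np
from scipy.optimize import minimize_scalar

def mpn_estimate(volumes, n_tubes, n_positive):
    """Maximum-likelihood most-probable-number estimate of propagule density.

    volumes    : amount of original soil (or suspension) per test unit, by level
    n_tubes    : replicates (test plants or tubes) per dilution level
    n_positive : positive replicates per dilution level
    """
    v = np.asarray(volumes, dtype=float)
    n = np.asarray(n_tubes, dtype=float)
    y = np.asarray(n_positive, dtype=float)

    def neg_log_lik(log_lambda):
        lam = np.exp(log_lambda)
        p = 1.0 - np.exp(-lam * v)            # P(positive) at each level
        p = np.clip(p, 1e-12, 1 - 1e-12)
        return -np.sum(y * np.log(p) + (n - y) * np.log(1.0 - p))

    res = minimize_scalar(neg_log_lik, bounds=(-10, 10), method="bounded")
    return np.exp(res.x)

# Hypothetical 5-fold dilution series with uneven replication, echoing the
# recommendation above to use more replicates at the higher dilutions.
volumes    = [1.0, 0.2, 0.04, 0.008]     # g of soil per test unit
n_tubes    = [4, 4, 8, 8]
n_positive = [4, 3, 4, 1]
print(round(mpn_estimate(volumes, n_tubes, n_positive), 1))   # propagules per g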

18.
We consider here the problem of classifying a macro-level object based on measurements of embedded (micro-level) observations within each object, for example, classifying a patient based on measurements made on a random number of their cells. Classification problems with this hierarchical, nested structure have not received the same statistical attention as the general classification problem. Some heuristic approaches have been developed, and a few authors have proposed formal statistical models. We focus on the problem where heterogeneity exists between the macro-level objects within a class. We propose a model-based statistical methodology that models the log-odds of the macro-level object belonging to a class using a latent-class variable model to account for this heterogeneity. The latent classes are estimated by clustering the macro-level object density estimates. We apply this method to the detection of patients with cervical neoplasia based on quantitative cytology measurements on cells in a Papanicolaou smear. Quantitative cytology is much cheaper than the current standard of care and can potentially take less time. The results show that automated quantitative cytology using the proposed method is roughly equivalent to clinical cytopathology and shows significant improvement over a statistical model that does not account for the heterogeneity of the data.

19.
As much of the focus of genetics and molecular biology has shifted toward the systems level, it has become increasingly important to accurately extract biologically relevant signal from thousands of related measurements. The common property among these high-dimensional biological studies is that the measured features have a rich and largely unknown underlying structure. One example of much recent interest is identifying differentially expressed genes in comparative microarray experiments. We propose a new approach aimed at optimally performing many hypothesis tests in a high-dimensional study. This approach estimates the optimal discovery procedure (ODP), which has recently been introduced and theoretically shown to optimally perform multiple significance tests. Whereas existing procedures essentially use data from only one feature at a time, the ODP approach uses the relevant information from the entire data set when testing each feature. In particular, we propose a generally applicable estimate of the ODP for identifying differentially expressed genes in microarray experiments. This microarray method consistently shows favorable performance over five widely used existing methods. For example, in testing for differential expression between two breast cancer tumor types, the ODP provides increases of 72% to 185% in the number of genes called significant at a false discovery rate of 3%. Our proposed microarray method is freely available to academic users in the open-source, point-and-click EDGE software package.

20.
Accurate measurements of metabolic fluxes in living cells are central to metabolism research and metabolic engineering. The gold standard method is model-based metabolic flux analysis (MFA), in which fluxes are estimated indirectly from mass isotopomer data with the use of a mathematical model of the metabolic network. A critical step in MFA is model selection: choosing which compartments, metabolites, and reactions to include in the metabolic network model. Model selection is often done informally during the modelling process, based on the same data that are used for model fitting (estimation data). This can lead to either overly complex models (overfitting) or overly simple ones (underfitting), in both cases resulting in poor flux estimates. Here, we propose a method for model selection based on independent validation data. We demonstrate in simulation studies that this method consistently chooses the correct model, in a way that is independent of errors in the assumed measurement uncertainty. This independence is beneficial, since estimating the true magnitude of these errors can be difficult. In contrast, commonly used model selection methods based on the χ2-test choose different model structures depending on the believed measurement uncertainty; this can lead to errors in flux estimates, especially when the assumed magnitude of the error is substantially off. We also present a new approach for quantifying the prediction uncertainty of mass isotopomer distributions in other labelling experiments, to check for problems with too much or too little novelty in the validation data. Finally, in an isotope tracing study on human mammary epithelial cells, the validation-based model selection method identified pyruvate carboxylase as a key model component. Our results argue that validation-based model selection should be an integral part of MFA model development.
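A schematic of validation-based model selection, independent of any particular MFA software (all data synthetic): each candidate model is fitted to estimation data, and the model with the smallest prediction error on a held-out validation labelling experiment is chosen, rather than the one passing a χ2-test at an assumed measurement uncertainty.

import numpy as np
from scipy.optimize import least_squares

# Synthetic stand-ins for candidate network models: each maps a parameter
# vector to predicted measurements (e.g., mass isotopomer fractions).
def model_simple(theta, x):      # candidate without the extra reaction
    return theta[0] * x

def model_extended(theta, x):    # candidate with an extra reaction/parameter
    return theta[0] * x + theta[1] * x**2

def fit(model, n_par, x, y):
    res = least_squares(lambda th: model(th, x) - y, x0=np.ones(n_par))
    return res.x

def validation_error(model, theta, x_val, y_val):
    return np.sum((model(theta, x_val) - y_val) ** 2)

# Synthetic estimation and validation "labelling experiments".
rng = np.random.default_rng(0)
x_est, x_val = np.linspace(0, 1, 10), np.linspace(0, 1, 8)
truth = lambda x: 0.9 * x + 0.4 * x**2
y_est = truth(x_est) + rng.normal(0, 0.02, x_est.size)
y_val = truth(x_val) + rng.normal(0, 0.02, x_val.size)

candidates = {"simple": (model_simple, 1), "extended": (model_extended, 2)}
scores = {}
for name, (model, n_par) in candidates.items():
    theta = fit(model, n_par, x_est, y_est)
    scores[name] = validation_error(model, theta, x_val, y_val)

print(scores, "->", min(scores, key=scores.get))   # lowest validation error wins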
