Similar Articles
Found 20 similar articles (search time: 31 ms)
1.
Estimating the encounter rate variance in distance sampling
Summary. The dominant source of variance in line transect sampling is usually the encounter rate variance. Systematic survey designs are often used to reduce the true variability among different realizations of the design, but estimating the variance is difficult and estimators typically approximate the variance by treating the design as a simple random sample of lines. We explore the properties of different encounter rate variance estimators under random and systematic designs. We show that a design-based variance estimator improves upon the model-based estimator of Buckland et al. (2001, Introduction to Distance Sampling. Oxford: Oxford University Press, p. 79) when transects are positioned at random. However, if populations exhibit strong spatial trends, both estimators can have substantial positive bias under systematic designs. We show that poststratification is effective in reducing this bias.
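A minimal numerical sketch of a design-based encounter rate variance estimator of the kind discussed above, treating the k transect lines as a simple random sample and weighting per-line rates by line length (the function name and exact weighting are illustrative assumptions, not the authors' code):

```python
def encounter_rate_variance(counts, lengths):
    """Design-based variance of the overall encounter rate n/L, treating
    the k lines as a simple random sample of lines.

    counts[i]  -- number of detections on line i
    lengths[i] -- length of line i
    """
    k = len(counts)
    L = sum(lengths)
    er = sum(counts) / L  # overall encounter rate n/L
    # length-weighted squared deviations of per-line rates from the overall rate
    ss = sum(l * l * (n / l - er) ** 2 for n, l in zip(counts, lengths))
    return k / (L * L * (k - 1)) * ss
```

When every line has the same encounter rate the estimate is zero, as expected; heterogeneity among lines drives the estimate up.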

2.
Proschan MA, Wittes J. Biometrics 2000;56(4):1183-1187.
Sample size calculations for a continuous outcome require specification of the anticipated variance; inaccurate specification can result in an underpowered or overpowered study. For this reason, adaptive methods whereby sample size is recalculated using the variance of a subsample have become increasingly popular. The first proposal of this type (Stein, 1945, Annals of Mathematical Statistics 16, 243-258) used all of the data to estimate the mean difference but only the first stage data to estimate the variance. Stein's procedure is not commonly used because many people perceive it as ignoring relevant data. This is especially problematic when the first stage sample size is small, as would be the case if the anticipated total sample size were small. A more naive approach uses in the denominator of the final test statistic the variance estimate based on all of the data. Applying the Helmert transformation, we show why this naive approach underestimates the true variance and how to construct an unbiased estimate that uses all of the data. We prove that the type I error rate of our procedure cannot exceed alpha.
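The contrast between Stein's first-stage-only variance estimate and the naive all-data estimate can be sketched as follows. This is a simplified illustration of the two quantities being compared, not the authors' Helmert-transformation construction; under adaptive sample size recalculation the naive pooled estimate tends to understate the true variance:

```python
from statistics import variance

def variance_estimates(stage1, stage2):
    """Return (Stein estimate, naive estimate) of the outcome variance.

    stage1 -- observations from the internal pilot (first stage)
    stage2 -- observations collected after the sample size recalculation
    """
    v_stein = variance(stage1)                        # first stage only (Stein, 1945)
    v_naive = variance(list(stage1) + list(stage2))   # pools all data, ignoring adaptation
    return v_stein, v_naive
```

Both are ordinary sample variances; the bias of the naive estimate arises only because the second-stage sample size depends on the first-stage data.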

3.
There are many sources of error on the path from field sample acquisition to subsample analysis. This paper examines one potential source, the subsampling of a processed field sample. Five archived ground field samples were subsampled to determine the optimal number of increments to construct a 10-g subsample. Bulk samples ranged from 338 g to 2150 g. The analytes were energetic compounds: crystalline, easy-to-grind explosives and difficult-to-grind propellants in a nitrocellulose matrix. A two-phase study was conducted with moderately high concentration samples and low concentration samples of each type of analyte. All samples were ground with a puck mill according to EPA Method 8330B and analyzed on liquid chromatography instrumentation. Up to 40 increments were used to build each subsample, and seven replicates were executed for each test. Results demonstrate that for a well-ground and mixed sample, a single 10-g subsample is sufficient. For triplicate subsamples, however, 20 to 40 increments will give a result much closer to the concentration of the bulk sample. To minimize overall error due to incomplete mixing, improper grinding, or very low concentrations, we recommend that about 30 increments be taken over the complete sample to construct the subsample.
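The effect of building a subsample from many increments can be illustrated with a toy simulation (an assumed setup for illustration, not the study's data): the bulk is modeled as a list of particle concentrations, and a k-increment subsample is the mean of k particles drawn without replacement. Averaging more increments pulls the subsample mean toward the bulk concentration.

```python
import random

def subsample_mean(bulk, k, rng):
    """Mean concentration of a subsample built from k increments drawn
    without replacement from a list of bulk 'particle' concentrations."""
    return sum(rng.sample(bulk, k)) / k
```

For a heterogeneous bulk, comparing the spread of `subsample_mean` over many draws at k = 1 versus k = 30 shows the variance reduction that motivates the roughly 30-increment recommendation.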

4.
Sampling variability and colonization rate of introduced substrates (plastic trays filled with pebble and cobble) in two southwestern Virginia streams are described. Substrates were rapidly colonized by aquatic macroinvertebrates, but colonization rates differed between years, possibly due to annual variability in macroinvertebrate abundance. To examine the applicability of using these substrates for biomonitoring benthic communities, trays were placed at several locations in a river receiving power plant discharges. Only six samples were necessary to detect a 15% reduction in macroinvertebrate density and a 12% reduction in number of taxa at effluent sites. Benthic communities established on rock-filled trays and multiplate samplers collected from the same stations during the same period were compared. Although multiplate samplers were more variable than rock trays and were selective for different taxa, both substrate types showed significant differences in community parameters among locations. Rock trays at all sites were dominated by Cheumatopsyche sp., whereas chironomids were more abundant on multiplate samplers. The relative abundance of mayflies was reduced at the effluent site on both substrate types.

5.
The yeast two-hybrid system is a molecular genetic test for protein interaction. Here we describe a step-by-step procedure to screen for proteins that interact with a protein of interest using the two-hybrid system. This process includes construction and testing of the bait plasmid, screening a plasmid library for interacting fusion proteins, elimination of false positives, and deletion analysis of true positives. This procedure is designed to allow investigators to identify proteins, and their encoding cDNAs, that have a biologically significant interaction with their protein of interest.

6.
Data on the bulk chemical composition (C, N, P, S, total chlorophyll) of particulate matter were obtained on five occasions over one or two tidal cycles in the Jade Bay, Lower Saxonian Wadden Sea. Sampling intervals were usually one hour. The results show pronounced short-term variability for all parameters, which is controlled to a large extent by physical processes such as erosion of surficial sediments at high current speeds and sedimentation at slack water. Contributions from living phytoplankton biomass were low, while C:N ratios indicate a high abundance of microheterotrophs. Organic phosphorus accounted for 36% of total P; exchangeable ammonium made up 1.9% of total nitrogen on average.

7.
System response data for step changes in input tracer concentration have been obtained for two different impeller agitated continuous flow mixing systems containing aqueous polysaccharide solutions. The vessel volumes were 1.6 and 10.9 liters. Polysaccharide concentration, dilution rate, and impeller speed were varied according to a plan devised using dimensional analysis and assuming that bulk motion is the predominant mass transport mechanism in the system. The data show that this is not true and that serious errors may occur if scale-up calculations are based on assuming that bulk motion predominates. Under the operating conditions used, perfect mixing was not observed.
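For reference, the benchmark the step-response data are judged against: an ideally mixed vessel with mean residence time tau has normalized outlet tracer concentration C/C_in = 1 − exp(−t/tau) after a step change at the inlet. A minimal sketch:

```python
import math

def ideal_step_response(t, tau):
    """Normalized outlet tracer concentration (C/C_in) at time t after a
    step change at the inlet of a perfectly mixed vessel with mean
    residence time tau."""
    return 1.0 - math.exp(-t / tau)
```

Measured responses that deviate systematically from this curve, as in the study above, indicate that perfect mixing (and hence bulk-motion-dominated transport) cannot be assumed in scale-up.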

8.
Family-based association methods have been developed primarily for autosomal markers. The X-linked sibling transmission/disequilibrium test (XS-TDT) and the reconstruction-combined TDT for X-chromosome markers (XRC-TDT) are the first association-based methods for testing markers on the X chromosome in family data sets. These are valid tests of association in family triads or discordant sib pairs but are not theoretically valid in multiplex families when linkage is present. Recently, XPDT and XMCPDT, modified versions of the pedigree disequilibrium test (PDT), were proposed. Like the PDT, XPDT compares genotype transmissions from parents to affected offspring or genotypes of discordant siblings; however, the XPDT can have low power if there are many missing parental genotypes. XMCPDT uses a Monte Carlo sampling approach to infer missing parental genotypes on the basis of true or estimated population allele frequencies. Although the XMCPDT was shown to be more powerful than the XPDT, variability in the statistic due to the use of an estimate of allele frequency is not properly accounted for. Here, we present a novel family-based test of association, X-APL, a modification of the test for association in the presence of linkage (APL). Like the APL, X-APL can use singleton or multiplex families and properly infers missing parental genotypes in linkage regions by considering identity-by-descent parameters for affected siblings. Sampling variability of parameter estimates is accounted for through a bootstrap procedure. X-APL can test individual marker loci or X-chromosome haplotypes. To allow for different penetrances in males and females, separate sex-specific tests are provided. Using simulated data, we demonstrated validity and showed that the X-APL is more powerful than alternative tests. To show its utility and to discuss interpretation in real-data analysis, we also applied the X-APL to candidate-gene data in a sample of families with Parkinson disease.

9.
Part 1 of this study summarizes data for a field investigation of contaminant concentration variability within individual, discrete soil samples (intra-sample variability) and between closely spaced, “co-located” samples (inter-sample variability). Hundreds of discrete samples were collected from three sites known respectively to be contaminated with arsenic, lead, and polychlorinated biphenyls. Intra-sample variability was assessed by testing soil from ten points within a minimally disturbed sample collected at each of 24 grid points. Inter-sample variability was assessed by testing five co-located samples collected within a 0.5-m diameter of each grid point. Multi Increment soil samples (triplicates) were collected at each study site for comparison. The study data demonstrate that the concentration of a contaminant reported for a given discrete soil sample is largely random within a relatively narrow (max:min <2X) to a very wide (max:min >100X) range of possibilities at any given sample collection point. The magnitude of variability depends in part on the contaminant type and the nature of the release. The study highlights the unavoidable randomness of contaminant concentrations reported in discrete soil samples and the unavoidable error and inefficiency associated with the use of discrete soil sample data for decision making in environmental investigations.

10.
A study was conducted to determine the distribution of deoxynivalenol (DON) and ochratoxin A (OTA) in a lot of 261 wheat kernels. Within this study, two different sampling and sample preparation strategies were carried out. On the one hand, following the official Commission Regulation 401/2006/EC, an aggregate sample made up of 100 incremental samples was built, homogenized, and prepared for laboratory analysis. On the other hand, each individual subsample was investigated for its deoxynivalenol and ochratoxin A content. The determined concentration of DON in the individual samples ranged from 830 up to 2655 μg/kg; for OTA, results ranged from <0.2 up to 8.6 μg/kg. Thus, coefficients of variation of 25% for DON and 200% for OTA were obtained. From this, spot formation for OTA was observed, and the average value of the 100 incremental samples did not correspond to the value obtained for the aggregate sample. The DON contamination at this concentration range seems to be more even; consequently, the result of the aggregate sample was in accordance with the average value. In addition, a sample comminution study was performed to answer the question of whether the time-consuming process of grinding the whole aggregate sample is necessary. The results of this study show that contamination of whole wheat kernels with DON is at the same level within a 1-kg sample (CV 16%), while OTA contamination shows high variability (CV 94%). At least for OTA, this study indicated that an extensive and complete comminution of the high-volume aggregate sample is necessary.
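The coefficients of variation quoted above are routine to compute; a minimal sketch (the function name is illustrative):

```python
import statistics

def cv_percent(values):
    """Coefficient of variation in percent: 100 * sample SD / mean."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)
```

Applied to the per-increment mycotoxin concentrations, a CV near 25% (DON) indicates fairly even contamination, while a CV near 200% (OTA) signals the spot formation described in the abstract.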

11.
Lifetime performance variability is a powerful tool for evaluating herd management. Although efficiency is a key aspect of performance, it has not been integrated into existing studies on the variability of lifetime performance. The goal of the present article is to analyse the effects of various herd management options on the variability of lifetime performance by integrating criteria relative to feed efficiency. A herd model developed for dairy goat systems was used in three virtual experiments to test the effects of the diet energy level, the segmentation of the feeding plan and the mean production potential of the herd on the variability of lifetime performance. Principal component analysis showed that the variability of lifetime performance was structured around the first axis related to longevity and production and the second related to the variables used in feed efficiency calculation. The intra-management variability was expressed on the first axis (longevity and production), whereas the inter-management variability was expressed on the second axis (feed efficiency) and was mainly influenced by the combination of the diet energy level and the mean production potential. Similar feed efficiencies were attained with different management options. Still, such combinations relied on different biological bases and, at the level of the individual, contrasting results were observed in the relationship between the obtained pattern of performance (in response to diet energy) and the reference pattern of performance (defined by the production potential). Indeed, our results showed that over-feeding interacted with the feeding plan segmentation: a high level of feeding plan segmentation generated a low proportion of individuals at equilibrium with their production potential, whereas a single ration generated a larger proportion. 
At the herd level, the diet energy level and the herd production potential had marked effects on production and efficiency due to dilution of fixed production costs (i.e. maintenance requirements). Management options led to similar production and feed efficiencies at the herd level while giving large contrasts in the proportions of individuals at equilibrium with their production potential. These results suggested that analysing individual variability on the basis of criteria related to production processes could improve the assessment of herd management. The herd model opens promising perspectives in studying whether individual variability represents an advantage for herd performance.

12.
Application of the PM6 method to modeling proteins
The applicability of the newly developed PM6 method for modeling proteins is investigated. In order to allow the geometries of such large systems to be optimized rapidly, three modifications were made to the conventional semiempirical procedure: the matrix algebra method for solving the self-consistent field (SCF) equations was replaced with a localized molecular orbital method (MOZYME), Baker's eigenvector-following technique for geometry optimization was replaced with the L-BFGS function minimizer, and some of the integrals used in the NDDO set of approximations were replaced with point-charge and polarization functions. The resulting method was used in the unconstrained geometry optimization of 45 proteins ranging in size from a simple nonapeptide of 244 atoms to an importin consisting of 14,566 atoms. For most systems, PM6 gave structures in good agreement with the reported X-ray structures. Some derived properties, such as pKa and bulk elastic modulus, were also calculated. The applicability of PM6 to modeling transition states was investigated by simulating a hypothetical reaction step in the chymotrypsin-catalyzed hydrolysis of a peptide bond. A proposed technique for generating accurate protein geometries, starting with X-ray structures, was examined. The online version of this article contains supplementary material, which is available to authorized users. This work was funded by the National Institutes of Health, Grant No. 1 R43 GM083178-01.

13.
Sampling precision was investigated for Tylenchulus semipenetrans juveniles and males in soil and females from roots and for citrus fibrous root mass density. For the case of two composite samples of 15 cores each, counts of juvenile and male nematodes were estimated to be within 40% of μ, at P < 0.06 (α), in orchards where x̄ > 1,500 nematodes/100 cm³ soil. A similar level of α was estimated for measurements of fibrous root mass density, but at a precision level of 25% of μ. Densities of female nematodes were estimated with less precision than juveniles and males. Precision estimates from a general sample plan derived from Taylor's Power Law were in good agreement with estimates from individual orchards. Two aspects involved in deriving sampling plans for management advisory purposes were investigated. A minimum of five to six preliminary samples were required to appreciably reduce bias toward underestimation of σ. The use of a Student's t value rather than a standard normal deviate in formulae to estimate sample size increased the estimates by an average of three units. Cases in which the use of z rather than Student's t is appropriate for these formulae are discussed.
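The z-based version of the sample size calculation discussed above can be sketched with a Taylor's Power Law variance model, s² = a·x̄^b (the parameters a and b and the precision level D are illustrative assumptions; substituting Student's t for z would, per the abstract, add roughly three units):

```python
import math
from statistics import NormalDist

def sample_size(mean, a, b, D, alpha=0.06):
    """Number of samples so the estimate falls within D*mean of the true
    mean with confidence 1 - alpha, using a Taylor's Power Law variance
    (s^2 = a * mean**b) and a standard normal deviate z."""
    z = NormalDist().inv_cdf(1.0 - alpha / 2.0)
    s2 = a * mean ** b  # Taylor's Power Law variance
    return math.ceil(z ** 2 * s2 / (D * mean) ** 2)
```

Note that when b = 2 the mean cancels and the required sample size depends only on a, D, and α, one of the situations in which the distinction between z and t matters most for small preliminary samples.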

14.
Mitochondria and Neurodegeneration
Many lines of evidence suggest that mitochondria have a central role in ageing-related neurodegenerative diseases. However, despite the evidence of morphological, biochemical and molecular abnormalities in mitochondria in various tissues of patients with neurodegenerative disorders, the question “is mitochondrial dysfunction a necessary step in neurodegeneration?” is still unanswered. In this review, we highlight some of the major neurodegenerative disorders (Alzheimer's disease, Parkinson's disease, amyotrophic lateral sclerosis and Huntington's disease) and discuss the role of the mitochondria in the pathogenetic cascade leading to neurodegeneration.

15.
Clinical trials are often planned with high uncertainty about the variance of the primary outcome variable. A poor estimate of the variance, however, may lead to an over- or underpowered study. In the internal pilot study design, the sample variance is calculated at an interim step and the sample size can be adjusted if necessary. The available recalculation procedures use for sample size recalculation only the data of those patients who have already completed the study. In this article, we consider a variance estimator that takes into account both the data at the endpoint and at an intermediate point of the treatment phase. We derive asymptotic properties of this estimator and the related sample size recalculation procedure. In a simulation study, the performance of the proposed approach is evaluated and compared with the procedure that uses only long-term data. Simulation results demonstrate that the sample size resulting from the proposed procedure shows in general a smaller variability. At the same time, the type I error rate is not inflated and the achieved power is close to the desired value.
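The recalculation step in an internal pilot design is standard; a minimal sketch for a two-arm comparison using the interim variance estimate (the function name and the normal-approximation form are assumptions for illustration, not the authors' exact procedure):

```python
import math
from statistics import NormalDist

def recalculated_n_per_group(s2_interim, delta, alpha=0.05, power=0.8):
    """Per-group sample size for a two-sample comparison detecting a mean
    difference delta, using the interim variance estimate s2_interim and
    a normal approximation."""
    z_a = NormalDist().inv_cdf(1.0 - alpha / 2.0)  # two-sided alpha
    z_b = NormalDist().inv_cdf(power)              # target power
    return math.ceil(2.0 * s2_interim * (z_a + z_b) ** 2 / delta ** 2)
```

A less variable interim variance estimator, such as the one proposed above that also uses intermediate-timepoint data, feeds directly into a less variable recalculated sample size.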

16.

Background  

Supervised learning for classification of cancer employs a set of design examples to learn how to discriminate between tumors. In practice it is crucial to confirm that the classifier is robust with good generalization performance to new examples, or at least that it performs better than random guessing. A suggested alternative is to obtain a confidence interval of the error rate using repeated design and test sets selected from available examples. However, it is known that even in the ideal situation of repeated designs and tests with completely novel samples in each cycle, a small test set size leads to a large bias in the estimate of the true variance between design sets. Therefore, different methods for small-sample performance estimation, such as the recently proposed Repeated Random Sampling (RSS) procedure, are also expected to result in heavily biased estimates, which in turn translate into biased confidence intervals. Here we explore such biases and develop a refined algorithm called Repeated Independent Design and Test (RIDT).

17.
Strain variability is an important factor affecting the accuracy of risk assessment results for foodborne pathogens, and it is found widely across foodborne pathogens such as Listeria monocytogenes and Salmonella. Strain variability reflects inherent differences between strains and cannot be eliminated by changing experimental methods or improving experimental protocols. Drawing on recent research on strain variability and its influence on risk assessment results, this review covers three aspects: the sources of variability in the food chain, the phenotypic variability of foodborne pathogens, and methods for integrating strain-level growth and inactivation variability into predictive microbiology models. It also identifies shortcomings in current strain variability research and recommends deeper study of the mechanisms of strain variability, broader comparison of variability from different sources, and further integration of strain variability in gene expression, protein, and cellular metabolism into predictive models.

18.
A study has been carried out on the Moselle River by means of a microtechnique based on the most-probable-number method for fecal coliform enumeration. This microtechnique, in which each serial dilution of a sample is inoculated into all 96 wells of a microplate, was compared with the standard membrane filter method. It showed a marked overestimation of about 14% due, probably, to the lack of absolute specificity of the method. The high precision of the microtechnique (13%, in terms of the coefficient of variation for log most probable number) and its relative independence from the influence of bacterial density allowed the use of analysis of variance to investigate the effects of spatial and temporal bacterial heterogeneity on the estimation of coliforms. Variability among replicate samples, subsamples, handling, and analytical errors were considered as the major sources of variation in bacterial titration. Variances associated with individual components of the sampling procedure were isolated, and optimal replications of each step were determined. Temporal variation was shown to be more influential than the other three components (most probable number, subsample, sample to sample), which were approximately equal in effect. However, the incidence of sample-to-sample variability (16%, in terms of the coefficient of variation for log most probable number) caused by spatial heterogeneity of bacterial populations in the Moselle River is shown and emphasized. Consequently, we recommend that replicate samples be taken on each occasion when conducting a sampling program for a stream pollution survey.
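For a single dilution, the most-probable-number estimate follows from the Poisson assumption: if a fraction f of inoculated wells is negative, the density is λ = −ln(f)/v, where v is the volume per well. A minimal sketch (the multi-dilution MPN used across a serial-dilution microplate requires an iterative maximum-likelihood solution, not shown here):

```python
import math

def mpn_single_dilution(positive_wells, total_wells, volume_ml):
    """Most probable number per ml from one dilution, assuming organisms
    are Poisson-distributed across wells of equal inoculum volume."""
    negative_fraction = (total_wells - positive_wells) / total_wells
    if negative_fraction <= 0:
        raise ValueError("all wells positive: use a higher dilution")
    return -math.log(negative_fraction) / volume_ml
```

With 48 of 96 wells positive at 1 ml per well, the estimate is ln 2 ≈ 0.69 organisms/ml; all-positive plates carry no density information, which is why serial dilutions are needed.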

19.
This study aimed to determine the minimum time required for assessing spatiotemporal variability during continuous running at different submaximal velocities and, thereby, the number of steps required. Nineteen trained endurance runners performed an incremental running protocol, with a 3-min recording period at 10, 12, 14 and 16 km/h. Spatiotemporal parameters (contact and flight times, step length and step frequency) were measured using the OptoGait system and step variability was considered for each parameter, in terms of within-participants standard deviation (SD) and coefficient of variation (CV%). Step variability was considered over six different durations at every velocity tested: 0–10 s, 0–20 s, 0–30 s, 0–60 s, 0–120 s and 0–180 s. The repeated measures ANOVA revealed no significant differences in the magnitude of the four spatiotemporal parameters between the recording intervals at each running velocity tested (p ≥ 0.05, ICC > 0.90). The post-hoc analysis confirmed no significant differences in step variability (SD and CV% of each spatiotemporal parameter at any velocity tested) between measurements. The Bland-Altman limits of agreement method showed that longer recording intervals yield smaller systematic bias, random errors, and narrower limits of agreement, regardless of running velocity. The results suggest that the duration of the recording period required to estimate spatiotemporal variability plays an important role in the accuracy of the measurement, regardless of running velocity (10–16 km/h).

20.
Cellulosic depth filters embedded with diatomaceous earth are widely used to remove colloidal cell debris from centrate as a secondary clarification step during the harvest of mammalian cell culture fluid. The high cost associated with process failure in a GMP (Good Manufacturing Practice) environment highlights the need for a robust process-scale depth filter sizing that allows for (1) stochastic batch-to-batch variations from filter media, bioreactor feed, and operation, and (2) systematic scaling differences in average performance between filter sizes and formats. Matched-lot depth filter media tested at the same conditions with consecutive batches of the same molecule were used to assess the sources and magnitudes of process variability. Depth filter sizing safety factors of 1.2–1.6 allow a filtration process to compensate for random batch-to-batch process variations. Matched-lot depth filter media in four different devices tested simultaneously at the same conditions were used with a common feed to assess scaling effects. All filter devices showed <11% capacity difference, and the Pod-format devices showed no statistically significant capacity differences. © 2015 American Institute of Chemical Engineers Biotechnol. Prog., 31:1542–1550, 2015
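Applying the reported safety factors is simple arithmetic; a minimal sketch (function name and units are illustrative):

```python
def depth_filter_area(batch_volume_l, capacity_l_per_m2, safety_factor=1.4):
    """Installed depth filter area (m^2) sized from small-scale throughput
    capacity, with a safety factor (1.2-1.6 per the study above) to absorb
    random batch-to-batch process variation."""
    return safety_factor * batch_volume_l / capacity_l_per_m2
```

For example, a 1000-L batch against a measured capacity of 500 L/m² with a 1.4 safety factor calls for 2.8 m² of installed filter area rather than the nominal 2 m².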


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)