Similar Literature
20 similar documents retrieved.
1.
Lawrence Livermore National Laboratory (LLNL) uses a cost-effective sampling (CES) methodology to evaluate and review ground water contaminant data and optimize the site's ground water monitoring plan. The CES methodology is part of LLNL's regulatory approved compliance monitoring plan (Lamarre et al., 1996, UCRL-AR-120936). It allows LLNL to adjust the ground water sampling plan every quarter in response to changing conditions at the site. Since the use of the CES methodology has been approved by the appropriate regulatory agencies, such adjustments do not need additional regulatory approval. This permits LLNL to respond more quickly to changing conditions. The CES methodology bases the sampling frequency for each location on trend, variability, and magnitude statistics describing the contaminants at that location, and on the input of the technical staff (hydrologists, chemists, statisticians, and project leaders). After initial setup is complete, each application of CES takes only a few days for as many as 400 wells. Effective use of the CES methodology requires sufficient data, an understanding of contaminant transport at the site, and an adequate number of monitoring wells downgradient of the contamination. The initial implementation of CES at LLNL in 1992 produced a 40% reduction in the required number of annual routine ground water samples at LLNL. This has saved LLNL $390,000 annually in sampling, analysis, and data management costs.
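The abstract says CES sets each well's sampling frequency from trend, variability, and magnitude statistics plus technical-staff review. The following is a minimal, hypothetical sketch of that kind of decision rule; the statistics chosen, the thresholds, and the frequency categories are illustrative assumptions, not the actual LLNL algorithm:

```python
import numpy as np
from scipy import stats

def recommend_frequency(concentrations, dates, mcl):
    """Hypothetical CES-style rule: map trend, variability, and magnitude
    statistics for one monitoring well to a routine sampling frequency."""
    conc = np.asarray(concentrations, dtype=float)
    t = np.asarray(dates, dtype=float)                    # e.g., decimal years
    slope, _, _, p_trend, _ = stats.linregress(t, conc)   # trend statistic
    cv = conc.std(ddof=1) / conc.mean()                   # variability statistic
    magnitude = conc.max() / mcl                          # magnitude relative to the limit

    # Illustrative cutoffs only; the real CES thresholds are site-specific
    # and any frequency change is reviewed by the technical staff.
    if (p_trend < 0.05 and slope > 0) or magnitude > 1.0:
        return "quarterly"     # rising trend or above the limit: sample often
    if cv > 0.5 or magnitude > 0.5:
        return "semiannual"    # noisy or moderately elevated
    return "annual"            # stable, low-level well

# Example: a stable well far below its MCL
print(recommend_frequency([4.1, 3.9, 4.0, 4.2, 3.8],
                          [2019.0, 2019.5, 2020.0, 2020.5, 2021.0], mcl=50.0))
```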

2.
3.
Ranked set sampling is a method which may be used to increase the efficiency of the estimator of the mean of a population. Ranked set sampling with size-biased probability of selection (i.e., items are selected with probability proportional to their size) is combined with the line intercept method to increase the efficiency of estimating cover, density, and the total amount of some variable of interest (e.g., biomass). A two-stage sampling plan is suggested with line intercept sampling in the first stage. Simple random sampling and ranked set sampling are compared in the second stage to show that the unbiased estimators of density, cover, and total amount of some variable of interest based on ranked set sampling have smaller variances than the usual unbiased estimator based on simple random sampling. Efficiency is increased by reducing the number of items that are measured on a transect or by increasing the number of independent transects utilized in a study area. An application procedure is given for estimation of coverage, density, and number of stems of mountain mahogany (Cercocarpus montanus) in a study area east of Laramie, Wyoming.
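As a quick illustration of the variance advantage claimed for ranked set sampling (RSS) over simple random sampling (SRS), the hedged sketch below simulates both estimators of a population mean at equal measurement cost. It assumes perfect ranking within each set (in practice ranking is done by eye or with a cheap covariate), and the skewed lognormal "biomass" population is purely hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def rss_mean(pop_sampler, m, r):
    """Balanced ranked set sample mean with set size m and r cycles.
    In each set of m ranked units only one unit is actually measured,
    so m * r measurements are taken in total (same cost as the SRS below)."""
    measured = []
    for _ in range(r):
        for rank in range(m):
            units = pop_sampler(m)                  # draw and rank a set of m units
            measured.append(np.sort(units)[rank])   # measure only the rank-th unit
    return np.mean(measured)

def srs_mean(pop_sampler, n):
    return np.mean(pop_sampler(n))

pop = lambda n: rng.lognormal(mean=0.0, sigma=0.75, size=n)  # hypothetical skewed variable
m, r, reps = 3, 4, 5000                                      # 12 measurements per estimator
rss = [rss_mean(pop, m, r) for _ in range(reps)]
srs = [srs_mean(pop, m * r) for _ in range(reps)]
print("var(SRS) / var(RSS) =", np.var(srs) / np.var(rss))    # typically well above 1
```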

4.
Reliable biological conclusions from gene chip (microarray) experiments must rest on optimized experimental design and sound data analysis. This review discusses several experimental-design issues relevant to microarray data analysis methods, briefly surveys analysis approaches such as differential expression analysis, cluster analysis, and functional enrichment analysis together with their recent progress, and introduces some of the available software and its applications.

5.
Documenting the effects of novel forms of enrichment is becoming increasingly important within the field of environmental enrichment. Appropriate documentation and evaluation must accompany any enrichment research project in order for accurate results to be obtained. The objective of the present study was to provide an example of how the level of effort in documenting the effect of enrichment is linked to how it is evaluated. This study was carried out on eight cheetahs (Acinonyx jubatus) at Fota Wildlife Park, Ireland. Temporal feeding variation was the enrichment type used during this study. Behavior data were collected in five different ways in order to simulate varying degrees of effort. Randomization tests were utilized to analyze behavior data. Significant behavioral differences were observed in the first four sampling methods with patterns of behavior remaining similar in all five methods. However, only the most time intensive method concurred with findings previously published utilizing this form of enrichment. No significant differences in behavior were detected when the least time intensive method was used. Between 1 and 2 hr of data collection daily is necessary to evaluate temporal feeding variation accurately. However, 30–45 min of data collection also gave an insight into the effectiveness of the enrichment. Methods of evaluation can influence the interpretations of the strength of the enriching effect of the treatment. Appropriate evaluation and accurate reporting of enrichment is crucial for the future development of the environmental enrichment field. Zoo Biol 32:262–268, 2013.
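The abstract reports that randomization tests were used to analyze the behavior data. The sketch below is a minimal two-sample randomization (permutation) test of the kind commonly applied to behavior counts; the session counts and group sizes are invented for illustration and are not data from the cheetah study:

```python
import numpy as np

rng = np.random.default_rng(1)

def permutation_test(baseline, enrichment, n_perm=10000):
    """Two-sample randomization test on the difference in mean counts."""
    baseline = np.asarray(baseline, dtype=float)
    enrichment = np.asarray(enrichment, dtype=float)
    observed = enrichment.mean() - baseline.mean()
    pooled = np.concatenate([baseline, enrichment])
    extreme = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)   # reassign sessions to groups at random
        diff = pooled[len(baseline):].mean() - pooled[:len(baseline)].mean()
        if abs(diff) >= abs(observed):
            extreme += 1
    return observed, extreme / n_perm

# Hypothetical counts of active behaviors per observation session
obs_diff, p = permutation_test([12, 9, 14, 10, 11], [18, 15, 20, 17, 16])
print(f"difference = {obs_diff:.1f}, p = {p:.4f}")
```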

6.
The following comments pertain to "Sampling Methods and Interpretation of Correlation: A Comparative Analysis of Seven Cross-Cultural Samples," Richard P. Chaney and Rogelio Ruiz Revilla, AA 71:597–633.

7.
The comparison between measurements made with different apparatuses shows that a given class of instruments provides an integral set of measurements. This characteristic has a certain disadvantage: the modification of the spectral bandwidth is limited to an attenuation of 3 dB. However, it also has certain advantages. The integral measurement (or the counting measurement) makes it possible to satisfy Shannon's sampling criterion while avoiding the anti-aliasing filtering that would otherwise be necessary and is commonly impossible to realize on measurements of a biological nature. A second advantage is linked to the reduction of the background noise on the band.

8.

9.
DNA sequences from three mitochondrial genes and one nuclear gene were analyzed to determine the phylogeny of the Malagasy primate family Lemuridae. Whether analyzed separately or in combination, the data consistently indicate that Eulemur species comprise a clade that is sister to a Lemur catta plus Hapalemur clade. The genus Varecia is basal to both. Resolution of cladogenic events within Eulemur was found to be extremely problematic with a total of six alternative arrangements offered by various data sets and weighting regimes. We attempt to determine the best arrangement of Eulemur taxa through a variety of character and taxon sampling strategies. Because our study includes all but one Eulemur species, increased taxon sampling is probably not an option for enhancing phylogenetic accuracy. We find, however, that the combined genetic data set is more robust to changes in taxon sample than are any of the individual data sets, suggesting that increased character sampling stabilizes phylogenetic resolution. Nonetheless, due to the difficult nature of the problem, we may have to accept certain aspects of Eulemur interrelationships as uncertain.

10.
11.
This paper reviews some of the important methods for estimating animal numbers or densities based on (i) direct counts of population units, as used in quadrat, strip, line-transect and line-intercept sampling, and (ii) indirect counts and indices, such as capture-mark-recapture, change-in-ratio and catch-effort methods, and indices based on track, call, roadside and pellet-group counts.
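One of the indirect methods listed, capture-mark-recapture, is easy to show in its simplest two-sample form. The sketch below implements the Chapman bias-corrected Lincoln-Petersen estimator with hypothetical capture numbers; the review itself covers many more elaborate designs:

```python
def lincoln_petersen_chapman(n1, n2, m2):
    """Chapman's bias-corrected Lincoln-Petersen abundance estimate:
    n1 animals marked in the first sample, n2 caught in the second sample,
    m2 of which carry marks."""
    n_hat = (n1 + 1) * (n2 + 1) / (m2 + 1) - 1
    var = ((n1 + 1) * (n2 + 1) * (n1 - m2) * (n2 - m2)
           / ((m2 + 1) ** 2 * (m2 + 2)))
    return n_hat, var ** 0.5

# Hypothetical survey: 120 animals marked, 150 caught later, 30 of them marked
est, se = lincoln_petersen_chapman(120, 150, 30)
print(f"N ≈ {est:.0f} (SE ≈ {se:.0f})")
```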

12.
We discuss design and analysis of longitudinal studies after case–control sampling, wherein interest is in the relationship between a longitudinal binary response that is related to the sampling (case–control) variable and a set of covariates. We propose a semiparametric modeling framework based on a marginal longitudinal binary response model and an ancillary model for subjects' case–control status. In this approach, the analyst must posit the population prevalence of being a case, which is then used to compute an offset term in the ancillary model. Parameter estimates from this model are used to compute offsets for the longitudinal response model. Examining the impact of population prevalence and ancillary model misspecification, we show that time-invariant covariate parameter estimates, other than the intercept, are reasonably robust, but intercept and time-varying covariate parameter estimates can be sensitive to such misspecification. We study design and analysis issues impacting study efficiency, namely: choice of sampling variable and the strength of its relationship to the response, sample stratification, choice of working covariance weighting, and degree of flexibility of the ancillary model. The research is motivated by a longitudinal study following case–control sampling of the time course of attention deficit hyperactivity disorder (ADHD) symptoms.
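The key device in this abstract is an offset built from a posited population prevalence. As a hedged illustration of the general idea, the sketch below computes the standard prior-correction offset for a logistic model fitted to case-control data, i.e., the difference between the sample and population log-odds of being a case; the prevalence values are invented, and the paper's two-stage offset construction for the longitudinal model is more involved than this:

```python
import numpy as np

def case_control_offset(sample_prevalence, population_prevalence):
    """Offset correcting a logistic model fitted to case-control data:
    the sample log-odds of being a case minus the posited population log-odds."""
    logit = lambda p: np.log(p / (1.0 - p))
    return logit(sample_prevalence) - logit(population_prevalence)

# Hypothetical design: half the sampled subjects are cases, but the posited
# population prevalence of the disorder is 7%.
offset = case_control_offset(0.5, 0.07)
print(f"offset added to the linear predictor = {offset:.3f}")
# The fitted ancillary case-control model would in turn supply offsets
# for the marginal longitudinal response model.
```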

13.
An increasing number of studies are using landscape genomics to investigate local adaptation in wild and domestic populations. Implementation of this approach requires the sampling phase to consider the complexity of environmental settings and the burden of logistical constraints. These important aspects are often underestimated in the literature dedicated to sampling strategies. In this study, we computed simulated genomic data sets to run against actual environmental data in order to trial landscape genomics experiments under distinct sampling strategies. These strategies differed by design approach (to enhance environmental and/or geographical representativeness at study sites), number of sampling locations and sample sizes. We then evaluated how these elements affected statistical performances (power and false discoveries) under two antithetical demographic scenarios. Our results highlight the importance of selecting an appropriate sample size, which should be modified based on the demographic characteristics of the studied population. For species with limited dispersal, sample sizes above 200 units are generally sufficient to detect most adaptive signals, while in random mating populations this threshold should be increased to 400 units. Furthermore, we describe a design approach that maximizes both environmental and geographical representativeness of sampling sites and show how it systematically outperforms random or regular sampling schemes. Finally, we show that although having more sampling locations (between 40 and 50 sites) increases statistical power and reduces the false discovery rate, similar results can be achieved with a moderate number of sites (20 sites). Overall, this study provides valuable guidelines for optimizing sampling strategies for landscape genomics experiments.

14.
We consider a fully model-based approach for the analysis of distance sampling data. Distance sampling has been widely used to estimate abundance (or density) of animals or plants in a spatially explicit study area. There is, however, no readily available method of making statistical inference on the relationships between abundance and environmental covariates. Spatial Poisson process likelihoods can be used to simultaneously estimate detection and intensity parameters by modeling distance sampling data as a thinned spatial point process. A model-based spatial approach to distance sampling data has three main benefits: it allows complex and opportunistic transect designs to be employed, it allows estimation of abundance in small subregions, and it provides a framework to assess the effects of habitat or experimental manipulation on density. We demonstrate the model-based methodology with a small simulation study and analysis of the Dubbo weed data set. In addition, a simple ad hoc method for handling overdispersion is also proposed. The simulation study showed that the model-based approach compared favorably to conventional distance sampling methods for abundance estimation. In addition, the overdispersion correction performed adequately when the number of transects was high. Analysis of the Dubbo data set indicated a transect effect on abundance via Akaike's information criterion model selection. Further goodness-of-fit analysis, however, indicated some potential confounding of intensity with the detection function.
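For orientation, the hedged sketch below shows the conventional design-based line-transect density estimate with a half-normal detection function, the baseline against which the model-based spatial approach is compared; the detection distances are simulated, and this is not the thinned spatial point-process likelihood developed in the paper:

```python
import numpy as np

def halfnormal_line_transect_density(distances, total_line_length):
    """Conventional line-transect estimate with a half-normal detection
    function g(x) = exp(-x^2 / (2 * sigma^2)) fitted by maximum likelihood."""
    x = np.asarray(distances, dtype=float)   # perpendicular detection distances
    n = x.size
    sigma2 = np.mean(x ** 2)                 # closed-form MLE for the half-normal
    esw = np.sqrt(np.pi * sigma2 / 2.0)      # effective strip half-width
    density = n / (2.0 * esw * total_line_length)
    return density, esw

# Hypothetical survey: 40 detections along 5 km of transect, distances in metres
rng = np.random.default_rng(2)
dists = np.abs(rng.normal(0.0, 12.0, size=40))
d, esw = halfnormal_line_transect_density(dists, total_line_length=5000.0)
print(f"density ≈ {d:.5f} objects per m^2, effective half-width ≈ {esw:.1f} m")
```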

15.
Processing and analysis of next-generation high-throughput RNA sequencing data
With the rapid development of next-generation high-throughput DNA sequencing technology, RNA sequencing (RNA-seq) has become an important new approach for gene expression and transcriptome analysis. The massive data volumes produced by RNA-seq bring new opportunities and challenges for bioinformatics, and effective, purpose-built bioinformatic processing and analysis of the sequencing data is the key to whether RNA-seq can play a major role in scientific discovery. Taking data produced by the next-generation Illumina/Solexa sequencing platform as an example, this review briefly introduces the high-throughput RNA-seq workflow, gives a fairly comprehensive survey of methods and existing software for RNA-seq data processing and analysis, and discusses open problems that warrant further study.

16.
Chao A, Lin CW. Biometrics 2012, 68(3): 912–921.
A number of species richness estimators have been developed under the model that individuals (or sampling units) are sampled with replacement. However, if sampling is done without replacement so that no sampled unit can be repeatedly observed, then the traditional estimators for sampling with replacement tend to overestimate richness for relatively high sampling fractions (ratio of sample size to the total number of sampling units) and do not converge to the true species richness when the sampling fraction approaches one. Based on abundance data or replicated incidence data, we propose a nonparametric lower bound for species richness in a single community and also a lower bound for the number of species shared by multiple communities. Our proposed lower bounds are derived under very general sampling models. They are universally valid for all types of species abundance distributions and species detection probabilities. For abundance data, individuals' detectabilities are allowed to be heterogeneous among species. For replicated incidence data, the selected sampling units (e.g., quadrats) need not be fully censused and species can be spatially aggregated. All bounds converge correctly to the true parameters when the sampling fraction approaches one. Real data sets are used for illustration. We also test the proposed bounds by using subsamples generated from large real surveys or censuses, and their performance is compared with that of some previous estimators.
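For orientation, the classic with-replacement lower bound that this work generalizes is the Chao1 estimator, S_obs + f1^2 / (2 f2), built from the numbers of singletons and doubletons. The sketch below uses made-up abundance counts; note that it is the traditional bound, not the new without-replacement bound derived in the paper:

```python
def chao1_lower_bound(abundances):
    """Classic Chao1 lower bound for species richness under sampling with
    replacement, using the counts of singletons (f1) and doubletons (f2)."""
    counts = [c for c in abundances if c > 0]
    s_obs = len(counts)
    f1 = sum(1 for c in counts if c == 1)
    f2 = sum(1 for c in counts if c == 2)
    if f2 > 0:
        return s_obs + f1 * f1 / (2.0 * f2)
    return s_obs + f1 * (f1 - 1) / 2.0   # bias-corrected form when no doubletons occur

# Hypothetical sample: 3 singletons, 2 doubletons, 5 commoner species
print(chao1_lower_bound([1, 1, 1, 2, 2, 5, 8, 12, 20, 33]))   # 12.25
```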

17.
The improved accessibility of data that can be used in human health risk assessment (HHRA) necessitates advanced methods to optimally incorporate them in HHRA analyses. This article investigates the application of data fusion methods to handling multiple sources of data in HHRA and its components. This application can be performed at two levels: first, as an integrative framework that incorporates various pieces of information with knowledge bases to build improved knowledge about an entity and its behavior, and second, more specifically, to combine multiple values for the state of a certain feature or variable (e.g., toxicity) into a single estimate. This work first reviews data fusion formalisms in terms of architectures and techniques that correspond to each of the two mentioned levels. Then, by handling several data fusion problems related to HHRA components, it illustrates the benefits and challenges in their application.

18.
The author compares 12 hierarchical models with the aim of estimating the abundance of fish in alpine streams using removal sampling data collected at multiple locations. The most expanded model accounts for (i) variability of the abundance among locations, (ii) variability of the catchability among locations, and (iii) residual variability of the catchability among fish. Eleven model reductions are considered, depending on which variability is included in the model. The most restrictive model considers none of the aforementioned variabilities. Computations for the latter model can be achieved using the algorithm presented by Carle and Strub (Biometrics 1978, 34, 621–630). Maximum a posteriori and interval estimates of the parameters, as well as the Akaike and Bayesian information criteria of model fit, are computed from samples simulated by a Markov chain Monte Carlo method. The models are compared using a trout (Salmo trutta fario) parr (0+) removal sampling data set collected at three locations in the Pyrénées mountain range (Haute-Garonne, France) in July 2006. Results suggest that, in this case study, variability of the catchability is not significant, either among fish or among locations. Variability of the abundance among locations is significant. 95% interval estimates of the abundances at the three locations are [0.15, 0.24], [0.26, 0.36], and [0.45, 0.58] parr per m2. Such differences are likely the consequence of habitat variability.
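The simplest member of this family of estimators, useful for intuition, is the closed-form two-pass removal estimate. The sketch below implements it with hypothetical catch numbers; the paper itself fits multi-pass hierarchical models by MCMC and the Carle-Strub algorithm:

```python
def two_pass_removal(c1, c2):
    """Two-pass removal estimate of abundance at one location:
    c1 and c2 are the catches on the first and second passes (requires c1 > c2)."""
    if c1 <= c2:
        raise ValueError("removal estimation requires a declining catch (c1 > c2)")
    p_hat = (c1 - c2) / c1        # estimated per-pass catchability
    n_hat = c1 ** 2 / (c1 - c2)   # estimated number of fish initially present
    return n_hat, p_hat

# Hypothetical electrofishing passes: 45 then 18 parr removed
n, p = two_pass_removal(45, 18)
print(f"N ≈ {n:.0f} fish, catchability ≈ {p:.2f}")
```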

19.
Species Richness and Invasion Vectors: Sampling Techniques and Biases
During a European Union Concerted Action study on species introductions, an intercalibration workshop on ship ballast water sampling techniques considered various phytoplankton and zooplankton sampling methods. For the first time, all the techniques presently in use worldwide were compared using a plankton tower as a model ballast tank spiked with brine shrimp and oyster larvae, while phytoplankton samples were taken simultaneously in the field (Helgoland Harbour, Germany). Three cone-shaped and 11 non-cone-shaped plankton nets of different sizes and designs were employed. Net lengths varied from 50 to 300 cm, diameters from 9.7 to 50 cm, and mesh sizes from 10 to 100 μm. Three pumps, a Ruttner sampler, and a bucket previously used in ballast water sampling studies were also compared. This first assessment indicates that a wide range of techniques may be needed for sampling ballast water. Each method showed different results in efficiency, and it is unlikely that any single method will sample all taxa, although several methods proved to be valid elements of a hypothetical 'tool box' of effective ship sampling techniques. The Ruttner water sampler and pump P30 provide suitable means for quantitative phytoplankton sampling, whereas other pumps prevailed during the qualitative trial. Pump P15 and the cone-shaped nets were the best methods for quantitative zooplankton sampling. It is recommended that a further exercise involving a wider range of taxa be examined in a larger series of mesocosms, in conjunction with promising treatment measures for managing ballast water.

20.
Simple total tag count normalization is inadequate for microRNA sequencing data generated by next-generation sequencing technology. However, a systematic evaluation of normalization methods on microRNA sequencing data has so far been lacking. We comprehensively evaluate seven commonly used normalization methods: global normalization, Lowess normalization, Trimmed Mean of M-values (TMM), quantile normalization, scaling normalization, variance stabilization, and the invariant method. We assess these methods on two individual experimental data sets with the empirical statistical metrics of mean square error (MSE) and the Kolmogorov-Smirnov (K-S) statistic. Additionally, we evaluate the methods against results from quantitative PCR validation. Our results consistently show that Lowess normalization and quantile normalization perform the best, whereas TMM, a method developed for RNA-Seq normalization, performs the worst. The poor performance of TMM normalization is further evidenced by abnormal results from tests of differential expression (DE) on microRNA-Seq data. Compared with the choice of model used for DE, the choice of normalization method is the primary factor affecting DE results. In summary, Lowess normalization and quantile normalization are recommended for normalizing microRNA-Seq data, whereas the TMM method should be used with caution.
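Of the methods compared, quantile normalization is simple enough to show in a few lines. The hedged sketch below applies it to a tiny made-up miRNA count matrix (rows are miRNAs, columns are samples); real pipelines typically work on log-transformed counts and handle ties more carefully:

```python
import numpy as np

def quantile_normalize(counts):
    """Quantile normalization of a (miRNA x sample) matrix: every sample is
    forced onto the same distribution, the row-wise mean of the sorted columns."""
    counts = np.asarray(counts, dtype=float)
    order = np.argsort(counts, axis=0)                  # within-sample ranks of each miRNA
    reference = np.sort(counts, axis=0).mean(axis=1)    # shared target distribution
    normalized = np.empty_like(counts)
    for j in range(counts.shape[1]):
        normalized[order[:, j], j] = reference          # map ranks back to reference values
    return normalized

# Toy matrix: 4 miRNAs x 3 samples with very different library sizes
toy = np.array([[100, 1000, 50],
                [ 20,  300, 10],
                [ 60,  500, 40],
                [  5,  100,  2]])
print(quantile_normalize(toy))
```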
