Similar Literature
A total of 20 similar records were retrieved.
1.
In cell kinetic experiments, as in many other branches of biological science, we often sample at various stages of an experiment: we may, for example, take a sample of animals, then from each study a sample of sites, and from each site take replicate observations. This sampling process can be optimized to give maximum precision to an estimated quantity, but care must be taken in analysing data so gathered, because the analysis depends on the precise sampling strategy.
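As a hedged illustration of the optimization the abstract alludes to, the classic two-stage result (Cochran's allocation) chooses the number of replicates per sampled unit that minimizes the variance of the overall mean for a fixed budget. The variance components and unit costs below are hypothetical stand-ins, not values from the study.

```python
import math

# Hypothetical variance components and unit costs for a two-stage design:
sigma2_between = 4.0   # assumed variance among animals
sigma2_within = 9.0    # assumed variance among replicates within an animal
cost_animal = 50.0     # cost of enrolling one animal
cost_replicate = 2.0   # cost of one replicate observation
budget = 1000.0

# Classic optimum: replicates per animal minimizing Var(mean) at fixed cost.
n_opt = math.sqrt((cost_animal * sigma2_within) / (cost_replicate * sigma2_between))
n = max(1, round(n_opt))
m = int(budget // (cost_animal + n * cost_replicate))  # animals affordable

var_mean = sigma2_between / m + sigma2_within / (m * n)
print(f"replicates per animal: {n}, animals: {m}, Var(mean): {var_mean:.4f}")
```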

2.
The robot automation of sampling and the subsequent treatment and storage of aliquots during mammalian cell cultivations was investigated. The complete setup, the development and testing of the sampling device, the robot arm, and the cell imaging system are described. The developed sampling device is directly coupled to a pilot bioreactor. It allows the computerized sterile filling of cell broth into 50 mL sample tubes. After each sampling the whole tubing system is steam sterilized. For further off-line treatment, a robot takes the sample to the different devices. This robot is equipped with a camera and a force/torque sensor. Color-based object recognition guides the arm through a complex surrounding with varying illumination, enabling the robot to load the sampling device with tubes and take the sample to further devices. For the necessary pipetting and refilling, we developed a computerized device. Cells are automatically stained and counted using an imaging system. The cell number and viability are automatically saved in a process control system together with the on-line parameters. These main components of the automation strategy were successfully tested during several cultivations at the 20 and 100 L scale.

3.
Because antimicrobial resistance in food-producing animals is a major public health concern, many countries have implemented antimicrobial monitoring systems at a national level. When designing a sampling scheme for antimicrobial resistance monitoring, it is necessary to consider both cost effectiveness and statistical plausibility. In this study, we examined how sampling scheme precision and sensitivity can vary with the number of animals sampled from each farm, while keeping the overall sample size constant to avoid additional sampling costs. Five sampling strategies were investigated. These employed 1, 2, 3, 4 or 6 animal samples per farm, with a total of 12 animals sampled in each strategy. A total of 1,500 Escherichia coli isolates from 300 fattening pigs on 30 farms were tested for resistance against 12 antimicrobials. The performance of each sampling strategy was evaluated by bootstrap resampling from the observational data. In the bootstrapping procedure, farms, animals, and isolates were selected randomly with replacement, and a total of 10,000 replications were conducted. For each antimicrobial, we observed that the standard deviation and 2.5–97.5 percentile interval of resistance prevalence were smallest in the sampling strategy that employed 1 animal per farm. The proportion of bootstrap samples that included at least 1 isolate with resistance was also evaluated as an indicator of the sensitivity of the sampling strategy to previously unidentified antimicrobial resistance. The proportion was greatest with 1 sample per farm and decreased with larger samples per farm. We concluded that when the total number of samples is pre-specified, the most precise and sensitive sampling strategy involves collecting 1 sample per farm.
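A minimal sketch of the hierarchical bootstrap described above: resample farms, then animals within each farm, then one isolate per animal, with the total sample fixed at 12. The resistance data are simulated stand-ins with farm-level clustering, not the study's isolates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated stand-in data: 30 farms x 10 animals x 5 isolates per animal,
# 1 = resistant, 0 = susceptible, with farm-level clustering built in.
farm_effects = rng.beta(2, 8, size=30)
data = rng.binomial(1, farm_effects[:, None, None], size=(30, 10, 5))

def bootstrap_prevalence(data, per_farm, total=12, n_reps=2000):
    """Resample farms, then animals within each farm, then one isolate per
    animal, keeping the total number of samples fixed at `total`."""
    n_farms, n_animals, n_iso = data.shape
    n_sel_farms = total // per_farm
    prevs = np.empty(n_reps)
    for r in range(n_reps):
        farms = rng.integers(0, n_farms, size=n_sel_farms)
        vals = [data[f, rng.integers(0, n_animals), rng.integers(0, n_iso)]
                for f in farms for _ in range(per_farm)]
        prevs[r] = np.mean(vals)
    return prevs

for k in (1, 2, 3, 4, 6):
    p = bootstrap_prevalence(data, per_farm=k)
    print(f"{k} animal(s)/farm: sd={p.std():.4f}, 2.5-97.5% interval = "
          f"({np.percentile(p, 2.5):.3f}, {np.percentile(p, 97.5):.3f})")
```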

4.
Uniform sampling from graphical realizations of a given degree sequence is a fundamental component in simulation-based measurements of network observables, with applications ranging from epidemics, through social networks, to Internet modeling. Existing graph sampling methods are either link-swap based (Markov chain Monte Carlo algorithms) or stub-matching based (the Configuration Model). Both types are ill-controlled, with typically unknown mixing times for link-swap methods and uncontrolled rejections for the Configuration Model. Here we propose an efficient, polynomial-time algorithm that generates statistically independent graph samples with a given, arbitrary, degree sequence. The algorithm provides a weight associated with each sample, allowing the observable to be measured either uniformly over the graph ensemble or, alternatively, with a desired distribution. Unlike other algorithms, this method always produces a sample, without back-tracking or rejections. Using a central limit theorem-based argument, we argue that for large N (the number of nodes), and for degree sequences admitting many realizations, the sample weights are expected to have a lognormal distribution. As examples, we apply our algorithm to generate networks with degree sequences drawn from power-law distributions and from binomial distributions.
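How the per-sample weights are used can be sketched independently of the sampler itself: given graphs with reported weights w_i and an observable q_i measured on each, the uniform-ensemble estimate is an importance-weighted average. Both arrays below are hypothetical placeholders for the sampler's output.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-ins for the sampler's output: per-graph observable values q_i and
# sample weights w_i (lognormal, as the abstract's CLT argument suggests).
log_w = rng.normal(0.0, 1.0, size=5_000)
w = np.exp(log_w)
q = rng.normal(3.0, 0.5, size=5_000)

# Uniform-ensemble estimate: importance-weighted average of the observable.
q_uniform = np.sum(w * q) / np.sum(w)

# Effective sample size, a standard diagnostic for weighted estimates.
ess = np.sum(w) ** 2 / np.sum(w ** 2)
print(f"weighted mean: {q_uniform:.3f}, effective sample size: {ess:.0f}")
```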

5.
The Stone Brusher is designed to take qualitative or semi-quantitative samples of material attached to stones at 7–50 cm depth in running or stagnant waters. The epilithic material is dislodged from the stone surface with a rotating brush enclosed in a chamber, and the material is drawn up directly into the sample bottle with an air cylinder. The operator takes a sample quickly and without putting hands into the water. The sampling area is about 28 cm2. The sampler is made of plastic, stainless steel and aluminium and weighs 3.1 kg. The equipment is robust and easily handled, and it is constructed to meet the demand for standardized sampling in research and environmental monitoring and to improve working conditions for sampling personnel. It allows sampling from bedrock and from large stones that cannot be lifted from the bottom, and it can also be used for reliable sampling in fast-flowing streams where dislodged material is easily flushed away. Using Near-Infrared Spectroscopy and diatom analyses, the new sampler was evaluated against the recognized toothbrush method; the comparison indicates that the Stone Brusher reduces sampling variability relative to the toothbrush method.

6.
Microplate fecal coliform method to monitor stream water pollution.
A study was carried out on the Moselle River using a microtechnique based on the most-probable-number (MPN) method for fecal coliform enumeration. This microtechnique, in which each serial dilution of a sample is inoculated into all 96 wells of a microplate, was compared with the standard membrane filter method. It showed a marked overestimation of about 14%, probably due to the lack of absolute specificity of the method. The high precision of the microtechnique (13%, in terms of the coefficient of variation for log MPN) and its relative independence from bacterial density allowed the use of analysis of variance to investigate the effects of spatial and temporal bacterial heterogeneity on the estimation of coliforms. Variability among replicate samples, subsamples, handling, and analytical errors were considered the major sources of variation in bacterial titration. Variances associated with individual components of the sampling procedure were isolated, and optimal replications of each step were determined. Temporal variation was shown to be more influential than the other three components (MPN, subsample, sample to sample), which were approximately equal in effect. However, the sample-to-sample variability (16%, in terms of the coefficient of variation for log MPN) caused by spatial heterogeneity of bacterial populations in the Moselle River is notable. Consequently, we recommend that replicate samples be taken on each occasion when conducting a sampling program for a stream pollution survey.
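A hedged sketch of maximum-likelihood MPN estimation for a serial-dilution microplate design under the standard Poisson model; the well counts, volumes and dilutions are invented for illustration, not the Moselle data.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical microplate data: three serial dilutions, 96 wells each.
volumes = np.array([1e-3, 1e-4, 1e-5])  # mL of original sample per well
n_wells = np.array([96, 96, 96])
n_positive = np.array([94, 41, 6])      # wells showing growth

def neg_log_lik(conc):
    """Poisson model: P(well positive) = 1 - exp(-conc * volume)."""
    p_pos = 1.0 - np.exp(-conc * volumes)
    p_pos = np.clip(p_pos, 1e-12, 1 - 1e-12)
    return -np.sum(n_positive * np.log(p_pos)
                   - (n_wells - n_positive) * conc * volumes)

res = minimize_scalar(neg_log_lik, bounds=(1.0, 1e7), method="bounded")
print(f"MPN estimate: {res.x:.0f} organisms per mL")
```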

7.
Increased taxon sampling greatly reduces phylogenetic error
Several authors have argued recently that extensive taxon sampling has a positive and important effect on the accuracy of phylogenetic estimates. However, other authors have argued that there is little benefit to extensive taxon sampling, and so phylogenetic problems can or should be reduced to a few exemplar taxa as a means of reducing the computational complexity of the phylogenetic analysis. In this paper we examined five aspects of study design that may have led to these different perspectives. First, we considered the measurement of phylogenetic error across a wide range of taxon sample sizes, and conclude that the expected error based on randomly selecting trees (which varies by taxon sample size) must be considered in evaluating error in studies of the effects of taxon sampling. Second, we addressed the scope of the phylogenetic problems defined by different samples of taxa, and argue that phylogenetic scope needs to be considered in evaluating the importance of taxon-sampling strategies. Third, we examined the claim that fast and simple tree searches are as effective as more thorough searches at finding near-optimal trees that minimize error. We show that a more complete search of tree space reduces phylogenetic error, especially as the taxon sample size increases. Fourth, we examined the effects of simple versus complex simulation models on taxon-sampling studies. Although benefits of taxon sampling are apparent for all models, data generated under more complex models of evolution produce higher overall levels of error and show greater positive effects of increased taxon sampling. Fifth, we asked whether different phylogenetic optimality criteria show different effects of taxon sampling. Although we found strong differences in the effectiveness of different optimality criteria as a function of taxon sample size, increased taxon sampling improved the results from all the common optimality criteria. Nonetheless, the method that showed the lowest overall performance (minimum evolution) also showed the least improvement from increased taxon sampling. Taking each of these results into account reinforces the conclusion that increased sampling of taxa is one of the most important ways to increase overall phylogenetic accuracy.

8.
Both significant positive and negative relationships between the magnitude of research findings (their 'effect size') and their year of publication have been reported in a few areas of biology. These trends have been attributed to Kuhnian paradigm shifts, scientific fads and bias in the choice of study systems. Here we test whether or not these isolated cases reflect a more general trend. We examined the relationship using effect sizes extracted from 44 peer-reviewed meta-analyses covering a wide range of topics in ecological and evolutionary biology. On average, there was a small but significant decline in effect size with year of publication. For the original empirical studies there was also a significant decrease in effect size as sample size increased. However, the effect of year of publication remained even after we controlled for sampling effort. Although these results have several possible explanations, it is suggested that a publication bias against non-significant or weaker findings offers the most parsimonious explanation. As in the medical sciences, non-significant results may take longer to publish, and studies with both small sample sizes and non-significant results may be less likely to be published.
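A minimal sketch of the core analysis: regress effect size on publication year while controlling for sampling effort via log sample size. The effect sizes below are simulated with a small decline deliberately built in; none of this is the authors' data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated meta-analytic data: 300 effect sizes with a mild decline over
# publication year and a sample-size artefact deliberately built in.
n_studies = 300
year = rng.integers(1980, 2005, size=n_studies)
n = rng.integers(10, 200, size=n_studies)
effect = (0.6 - 0.005 * (year - 1980) - 0.03 * np.log(n)
          + rng.normal(0, 1 / np.sqrt(n - 3)))

# OLS of effect size on year, controlling for sampling effort via log(n).
X = np.column_stack([np.ones(n_studies), year - year.mean(), np.log(n)])
beta, *_ = np.linalg.lstsq(X, effect, rcond=None)
print(f"decline per year, adjusted for log(n): {beta[1]:.4f}")
print(f"effect of log(n): {beta[2]:.4f}")
```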

9.
Aims:  To investigate the effectiveness of pooled sampling methods for detection of Salmonella in turkey flocks.
Methods and Results:  Individual turkey droppings were taken from 43 flocks, with half of each dropping tested for Salmonella as an individual sample and the other half included in a pool of five. A pair of boot swabs and a dust sample were also taken from each flock. The results were analysed using Bayesian methods in the absence of a gold standard. This showed a dilution effect of mixing true-positive with negative samples, but even so the pooled faecal samples were found to be a highly efficient method of testing compared with individual faecal samples. The more samples included in the pool, the more sensitive the pooled sampling method was predicted to be. Dust sampling was much more sensitive than faecal sampling at low prevalence.
Conclusions:  Pooled faecal sampling is an efficient method of Salmonella detection in turkey flocks. The additional testing of a dust sample greatly increased the effectiveness of sampling, especially at low prevalence.
Significance and Impact of the Study:  This is the first study to relate the sensitivity of the sampling methods to the within-flock prevalence.
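A hedged sketch of the trade-off quantified above: the probability that a flock is detected when a fixed number of droppings is tested individually versus in pools, under a simple illustrative model in which pool sensitivity decays with the fraction of negative droppings in the pool. All parameter values are assumptions, not estimates from the study.

```python
import numpy as np
from scipy.stats import binom

def flock_detection_prob(p, total=60, pool_size=1, se1=0.9, dilution=0.3):
    """P(at least one test positive) for a flock with within-flock prevalence
    p, splitting `total` droppings into pools of `pool_size` (so pooling uses
    total/pool_size tests). Pool sensitivity is assumed to decay with the
    fraction of negative droppings in the pool (illustrative dilution model)."""
    n_pools = total // pool_size
    j = np.arange(0, pool_size + 1)
    # assumed sensitivity given j positive droppings in the pool (0 if none)
    se_j = np.where(j == 0, 0.0, se1 * (j / pool_size) ** dilution)
    p_pool_pos = np.sum(binom.pmf(j, pool_size, p) * se_j)
    return 1.0 - (1.0 - p_pool_pos) ** n_pools

for prev in (0.01, 0.05, 0.20):
    ind = flock_detection_prob(prev, pool_size=1)
    pooled = flock_detection_prob(prev, pool_size=5)
    print(f"prevalence {prev:.2f}: individual={ind:.3f}, pooled(5)={pooled:.3f}")
```

With the dilution parameter set to zero, pooling dominates for the same number of droppings; the interesting regime is how much dilution pooling can tolerate, which is what the study's Bayesian analysis estimates from data.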

10.
Population genetic data can provide valuable information on the demography of a species. For rare and elusive marine megafauna, samples for generating the data are traditionally obtained from tissue biopsies, which can be logistically difficult and expensive to collect and require invasive sampling techniques. Analysis of environmental DNA (eDNA) offers an alternative, minimally invasive approach to provide important genetic information. Although eDNA approaches have been studied extensively for species detection and biodiversity monitoring in metabarcoding studies, the potential for the technique to address population-level questions remains largely unexplored. Here, we applied "eDNA haplotyping" to obtain estimates of the intraspecific genetic diversity of a whale shark (Rhincodon typus) aggregation at Ningaloo reef, Australia. Over 2 weeks, we collected seawater samples directly behind individual sharks prior to taking a tissue biopsy sample from the same animal. Our data showed a 100% match between mtDNA sequences recovered in the eDNA and tissue sample for all 28 individuals sampled. In the seawater samples, >97% of all reads were assigned to six dominant haplotypes, and a clear dominant signal (~99% of sample reads) was recovered in each sample. Our study demonstrates accurate individual-level haplotyping from seawater eDNA. When DNA from one individual clearly dominates each eDNA sample, it provides many of the same opportunities for population genetic analyses as a tissue sample, potentially removing the need for tissue sampling. Our results show that eDNA approaches for population-level analyses have the potential to supply critical demographic data for the conservation and management of marine megafauna.

11.
The accurate reconstruction of palaeobiodiversity patterns is central to a detailed understanding of the macroevolutionary history of a group of organisms. However, there is increasing evidence that diversity patterns observed directly from the fossil record are strongly influenced by fluctuations in the quality of our sampling of the rock record; thus, any patterns we see may reflect sampling biases rather than genuine biological signals. Previous dinosaur diversity studies have suggested that fluctuations in sauropodomorph palaeobiodiversity reflect genuine biological signals, in comparison to theropods and ornithischians, whose diversity seems to be largely controlled by the rock record. Most previous diversity analyses that have attempted to take into account the effects of sampling biases have used only a single method or proxy; here we use a number of techniques in order to elucidate diversity. A global database of all known sauropodomorph body fossil occurrences (2,024 records) was constructed. A taxic diversity curve for all valid sauropodomorph genera was extracted from this database and compared statistically with several sampling proxies (rock outcrop area and dinosaur-bearing formations and collections), each of which captures a different aspect of fossil record sampling. Phylogenetic diversity estimates, residuals and sample-based rarefaction (including the first attempt to capture 'cryptic' diversity in dinosaurs) were implemented to investigate further the effects of sampling. After 'removal' of biases, sauropodomorph diversity appears to be genuinely high in the Norian, Pliensbachian–Toarcian, Bathonian–Callovian and Kimmeridgian–Tithonian (with a small peak in the Aptian), whereas low diversity levels are recorded for the Oxfordian and Berriasian–Barremian, with the Jurassic/Cretaceous boundary seemingly representing a real diversity trough. Observed diversity in the remaining Triassic–Jurassic stages appears to be largely driven by sampling effort. Late Cretaceous diversity is difficult to elucidate, and it is possible that this interval remains relatively under-sampled. Despite its distortion by sampling biases, much of sauropodomorph palaeobiodiversity can be interpreted as a reflection of genuine biological signals, and fluctuations in sea level may account for some of these diversity patterns.
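One of the techniques mentioned, the residuals approach, can be sketched briefly: model observed diversity as a function of a sampling proxy and treat positive residuals as diversity not explained by sampling. The stage counts below are invented, and a linear fit stands in for the log-log fits often used in practice.

```python
import numpy as np

rng = np.random.default_rng(9)

# Invented per-stage data: a sampling proxy (e.g., number of dinosaur-bearing
# formations) and observed genus counts partly driven by that proxy.
proxy = rng.integers(5, 60, size=25).astype(float)
diversity = (2.0 * proxy + rng.normal(0, 10, size=25)
             + np.where(np.arange(25) % 7 == 0, 30, 0))  # a few "real" peaks

# Fit diversity ~ proxy and inspect the residuals: stages well above the
# fitted line suggest diversity not attributable to sampling alone.
X = np.column_stack([np.ones_like(proxy), proxy])
beta, *_ = np.linalg.lstsq(X, diversity, rcond=None)
residuals = diversity - X @ beta
print("stages with residual > 1 sd:", np.where(residuals > residuals.std())[0])
```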

12.
Time course experiments with microarrays have begun to provide a glimpse into the dynamic behavior of gene expression. In a typical experiment, scientists use microarrays to measure the abundance of mRNA at discrete time points after the onset of a stimulus. Recently, there has been much work on using these data to infer causal regulatory networks that model how genes influence each other. However, microarray studies typically have slow sampling rates that can lead to temporal aggregation of the signal. That is, each successive sampling point represents the sum of all signal changes since the previous sample. In this paper, we show that temporal aggregation can bias algorithms for causal inference and lead them to discover spurious relations that would not be found if the signal were sampled at a much faster rate. We discuss the implications of temporal aggregation on inference, the problems it creates, and potential directions for solutions.
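A hedged simulation of the aggregation effect: in a fast bivariate process where x drives y at lag 1, summing the signal over sampling windows typically shrinks or distorts the lagged asymmetry that identifies the causal direction. The dynamics are a simple stand-in, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(3)

# Fast-scale dynamics: x drives y with a one-step lag.
T = 20_000
x = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    x[t] = 0.8 * x[t - 1] + rng.normal()
    y[t] = 0.8 * y[t - 1] + 0.5 * x[t - 1] + rng.normal()

def lagged_corr(a, b, lag):
    """corr(a[t - lag], b[t])"""
    return np.corrcoef(a[:-lag], b[lag:])[0, 1]

# Aggregate: each observation is the sum over a sampling window.
window = 10
xa = x.reshape(-1, window).sum(axis=1)
ya = y.reshape(-1, window).sum(axis=1)

print(f"fast rate:  corr(x->y) = {lagged_corr(x, y, 1):.3f}, "
      f"corr(y->x) = {lagged_corr(y, x, 1):.3f}")
print(f"aggregated: corr(x->y) = {lagged_corr(xa, ya, 1):.3f}, "
      f"corr(y->x) = {lagged_corr(ya, xa, 1):.3f}")
```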

13.
Lameness in dairy cows is an important welfare issue. As part of a welfare assessment, herd-level lameness prevalence can be estimated by scoring a sample of animals, where higher levels of accuracy are associated with larger sample sizes. As the financial cost is related to the number of cows sampled, smaller samples are preferred. Sequential sampling schemes have been used to inform decision making in clinical trials. Sequential sampling involves taking samples in stages, where sampling can stop early depending on the estimated lameness prevalence. When welfare assessment is used for a pass/fail decision, a similar approach can be applied to reduce the overall sample size. The sampling schemes proposed here apply the principles of sequential sampling within a diagnostic-testing framework. This study develops three sequential sampling schemes of increasing complexity to classify 80 fully assessed UK dairy farms, each with known lameness prevalence. The first, 'basic' scheme builds on the Welfare Quality herd-size-based sampling scheme and involves two sampling events: at the first sampling event half the Welfare Quality sample size is drawn, and then, depending on the outcome, sampling either stops or continues with the same number of animals sampled again. In the second, 'cautious' scheme, an adaptation is made to ensure that correctly classifying a farm as 'bad' is done with greater certainty. The third scheme is the only one to go beyond lameness as a binary measure; it investigates the potential for increasing accuracy by incorporating the number of severely lame cows into the decision. The three schemes are evaluated with respect to accuracy and average sample size by running 100 000 simulations for each scheme, and a comparison is made with the fixed-size Welfare Quality herd-size-based sampling scheme. All three schemes performed almost as well as the fixed-size scheme but with much smaller average sample sizes. For the third scheme, an overall association between lameness prevalence and the proportion of lame cows that were severely lame on a farm was found; however, as this association was not consistent across all farms, the sampling scheme did not prove as useful as expected. The preferred scheme was therefore the 'cautious' scheme, for which a sampling protocol has also been developed.
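A minimal sketch of the 'basic' two-stage idea: score half the fixed sample, stop early if the estimate is clearly below or above the pass/fail threshold, and otherwise score the second half. The herd parameters, threshold and stopping margin are hypothetical, not the schemes' calibrated values.

```python
import numpy as np

rng = np.random.default_rng(4)

def sequential_classify(true_prev, herd_size=200, full_n=60,
                        threshold=0.15, margin=0.05):
    """Two-stage scheme: sample full_n/2 cows; if the estimate is clearly
    below (pass) or above (fail) the threshold, stop; otherwise sample the
    second half. Returns (classified_bad, cows_sampled). For simplicity the
    second stage is drawn from the whole herd, so overlap is ignored."""
    herd = rng.random(herd_size) < true_prev   # True = lame
    first = rng.choice(herd, size=full_n // 2, replace=False)
    p1 = first.mean()
    if p1 <= threshold - margin:
        return False, full_n // 2
    if p1 >= threshold + margin:
        return True, full_n // 2
    second = rng.choice(herd, size=full_n // 2, replace=False)
    p = (first.sum() + second.sum()) / full_n
    return p >= threshold, full_n

results = [sequential_classify(0.25) for _ in range(10_000)]
bad = np.mean([r[0] for r in results])
avg_n = np.mean([r[1] for r in results])
print(f"classified 'bad': {bad:.3f}, average sample size: {avg_n:.1f}")
```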

14.
In ecology, as in other research fields, efficient sampling for population estimation often drives sample designs toward unequal probability sampling, as in stratified sampling. Design-based statistical analysis tools are appropriate for seamless integration of the sample design into the statistical analysis. However, it is also common, and necessary, after a sampling design has been implemented, to use the resulting datasets to address questions that in many cases were not considered during the sampling design phase. Such questions may require model-based statistical tools such as multiple regression, quantile regression, or regression tree analysis. For unbiased estimation, however, these model-based tools may require data from simple random samples, which is problematic when analyzing data from unequal probability designs. Despite the numerous method-specific tools available to account properly for sampling design, sample design is too often ignored in the analysis of ecological data, and the consequences are not properly considered. We demonstrate here that violation of this assumption can lead to biased parameter estimates in ecological research. In addition to the set of tools available for researchers to account properly for sampling design in model-based analysis, we introduce inverse probability bootstrapping (IPB). Inverse probability bootstrapping is an easily implemented method for obtaining equal-probability re-samples from a probability sample, from which unbiased model-based estimates can be made. We demonstrate the potential for bias in model-based analyses that ignore sample inclusion probabilities, and the effectiveness of IPB sampling in eliminating this bias, using both simulated and actual ecological data. For illustration, we considered three model-based analysis tools: linear regression, quantile regression, and boosted regression tree analysis. In all models, using both simulated and actual ecological data, we found inferences to be biased, sometimes severely, when sample inclusion probabilities were ignored, while IPB sampling effectively produced unbiased parameter estimates.
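A hedged sketch of inverse probability bootstrapping: resample the observed units with probability proportional to the inverse of their inclusion probabilities, then fit the model to each equal-probability pseudo-sample. The population, selection rule (inclusion depending on y, which biases a naive fit) and regression are all simulated.

```python
import numpy as np

rng = np.random.default_rng(5)

# Population with a known linear relation; inclusion probability depends on
# the response y, which biases a naive fit to the resulting sample.
N = 100_000
x = rng.uniform(0, 10, N)
y = 2.0 + 0.5 * x + rng.normal(0, 2, N)
incl_prob = 0.001 + 0.049 / (1.0 + np.exp(-(y - 6.0)))
sampled = rng.random(N) < incl_prob
xs, ys, pis = x[sampled], y[sampled], incl_prob[sampled]

def ols_slope(x, y):
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

# IPB: resample with probability proportional to 1/pi, so each pseudo-sample
# mimics an equal-probability draw from the population, then refit.
w = (1.0 / pis) / np.sum(1.0 / pis)
slopes = []
for _ in range(500):
    idx = rng.choice(len(xs), size=len(xs), replace=True, p=w)
    slopes.append(ols_slope(xs[idx], ys[idx]))
print(f"true slope: 0.5, naive slope: {ols_slope(xs, ys):.3f}, "
      f"IPB slope: {np.mean(slopes):.3f} (sd {np.std(slopes):.3f})")
```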

15.
Frequently it is reasonable for a sample surveyor to view the finite population of interest as an independent sample of size N from an infinite super-population. This super-population viewpoint is contrasted with the classical frequentist theory of finite population sampling and the classical theory of infinite population sampling. A new technique for making inferences about finite population 'parameters' is developed and shown to be applicable for any survey design. Two example applications are given: the estimation of stratum and population means in stratified sampling, and the use of so-called regression estimators for the same purpose.
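A brief sketch of the first example application, the design-based stratified estimator of a population mean; the stratum sizes, means and samples below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(8)

# Hypothetical strata: sizes N_h and simple random samples from each.
N_h = np.array([500, 300, 200])
samples = [rng.normal(mu, 2.0, size=20) for mu in (5.0, 8.0, 12.0)]

# Stratified estimator: weight each stratum sample mean by N_h / N.
W = N_h / N_h.sum()
ybar_st = np.sum(W * np.array([s.mean() for s in samples]))

# Its estimated variance (SRS within strata; fpc omitted for brevity).
var_st = np.sum(W**2 * np.array([s.var(ddof=1) / len(s) for s in samples]))
print(f"stratified mean: {ybar_st:.3f} (SE {np.sqrt(var_st):.3f})")
```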

16.
Barabesi L, Pisani C. Biometrics 2002, 58(3): 586-592.
In practical ecological sampling studies, a given design (such as plot sampling or line-intercept sampling) is usually replicated more than once. For each replication, Horvitz-Thompson estimation of the target parameter is considered, and an overall estimator is then obtained by averaging the single Horvitz-Thompson estimators. Because the design replications are drawn independently and under the same conditions, the overall estimator is simply the sample mean of the Horvitz-Thompson estimators under simple random sampling. This procedure can be improved by using ranked set sampling. Hence, we propose the replicated protocol under ranked set sampling, which gives rise to more accurate estimation than the replicated protocol under simple random sampling.
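A hedged simulation of why the ranked-set protocol is more accurate: for the same number of measured units, the ranked set sample mean has smaller variance than the simple random sample mean. Perfect ranking and a normal population are assumed, and the Horvitz-Thompson layer is replaced by a plain mean for brevity.

```python
import numpy as np

rng = np.random.default_rng(6)

def srs_mean(pop_sampler, m):
    """Mean of a simple random sample of m units."""
    return pop_sampler(m).mean()

def rss_mean(pop_sampler, m):
    """Ranked set sample of size m: draw m sets of m units, measure the i-th
    order statistic from the i-th set (perfect ranking assumed)."""
    vals = [np.sort(pop_sampler(m))[i] for i in range(m)]
    return np.mean(vals)

m, reps = 5, 20_000
sampler = lambda n: rng.normal(10.0, 3.0, n)
srs = np.array([srs_mean(sampler, m) for _ in range(reps)])
rss = np.array([rss_mean(sampler, m) for _ in range(reps)])
print(f"Var(SRS mean) = {srs.var():.4f}, Var(RSS mean) = {rss.var():.4f}")
```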

17.
This study consisted of a stratified random sampling of soil from the squares and parks of the city of La Plata, Province of Buenos Aires, in order to establish the prevalence of contamination by Toxocara sp. A total of 242 soil samples were examined. From each sample a 10-gram aliquot was taken, washed in a 0.2% Tween 80 solution, and processed using concentration by flotation in sugar solution. The prevalence was 13.2%. In each positive sample, the number of eggs varied from 1 to 4. Toxocara sp. eggs were observed in 15 of the 22 squares and parks investigated. The sampling design and the processing method employed were satisfactory for the recovery and identification of Toxocara sp. eggs.

18.
The composition of the gut microbiota is associated with various disease states, most notably inflammatory bowel disease, obesity and malnutrition. This underlines that analysis of intestinal microbiota is potentially an interesting target for clinical diagnostics. Currently, the most commonly used sample types are feces and mucosal biopsy specimens. Because sampling method, storage and processing of samples impact microbiota analysis, each sample type has its own limitations. An ideal sample type for use in routine diagnostics should be easy to obtain in a standardized fashion without perturbation of the microbiota. Rectal swabs may satisfy these criteria, but little is known about microbiota analysis on these sample types. In this study we investigated the characteristics and applicability of rectal swabs for gut microbiota profiling in a clinical routine setting in patients presenting with various gastro-intestinal disorders. We found that rectal swabs appeared to be a convenient means of sampling the human gut microbiota. Swabs can be performed on demand, whenever a patient presents; swab-derived microbiota profiles are reproducible, whether they are gathered at home by patients or by medical professionals in an outpatient setting; and they may be ideally suited for clinical diagnostics and large-scale studies.

19.

Background

Typically, a two-phase (double) sampling strategy is employed when classifications are subject to error and a gold-standard (perfect) classifier is available. Two-phase sampling involves classifying the entire sample with an imperfect classifier, and a subset of the sample with the gold standard.

Methodology/Principal Findings

In this paper we consider an alternative strategy termed reclassification sampling, which involves classifying individuals using the imperfect classifier more than one time. Estimates of sensitivity, specificity and prevalence are provided for reclassification sampling, when either one or two binary classifications of each individual using the imperfect classifier are available. Robustness of estimates and design decisions to model assumptions are considered. Software is provided to compute estimates and provide advice on the optimal sampling strategy.

Conclusions/Significance

Reclassification sampling is shown to be cost-effective (lower standard error of estimates for the same cost) for estimating prevalence as compared to two-phase sampling in many practical situations.
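A hedged sketch of one piece of the machinery: maximum-likelihood estimation of prevalence from two repeat classifications per individual, assuming sensitivity and specificity are known and errors are conditionally independent given true status. The counts and test characteristics are invented, and the paper's estimators go further by estimating sensitivity and specificity as well.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import binom

se, sp = 0.85, 0.95                  # assumed known test characteristics
counts = np.array([700, 120, 180])   # individuals with 0, 1, 2 positives

def neg_log_lik(p):
    """Mixture model: P(k positives out of 2) given true status, weighted
    by prevalence p, with conditionally independent errors."""
    k = np.arange(3)
    prob_k = p * binom.pmf(k, 2, se) + (1 - p) * binom.pmf(k, 2, 1 - sp)
    return -np.sum(counts * np.log(prob_k))

res = minimize_scalar(neg_log_lik, bounds=(1e-6, 1 - 1e-6), method="bounded")
print(f"estimated prevalence: {res.x:.3f}")
```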

20.
The Korean guidelines developed by the Ministry of Environment for soil investigations do not seriously take into account the statistical characteristics of the collected data or the statistical assumptions required by the methods applied. In this article, we point out the statistical omissions in the Korean guidelines and propose some supplements to them. Systematic sampling is recommended, since it raises sample representativeness and allocates resources more efficiently, leading to cost savings. The type of statistical inference should be determined by the objective of the investigation and by whether the data are normally distributed; we provide a diagram for selecting an appropriate type of inference. We also introduce power transformation and propose a clustering-based stratification method for improving the accuracy of analysis and the normality of the data. Both methods are illustrated with real datasets collected from a northern region of South Korea. One non-normal dataset was normalized simply by applying a power transformation; the other had to be clustered into two heterogeneous groups by our proposed method before transformation, which made normality-based methods applicable to the data.
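A minimal sketch of the power-transformation step using SciPy's Box-Cox implementation; the soil concentrations below are simulated lognormal data, not the South Korean datasets.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Simulated right-skewed soil concentrations (stand-in for the real data).
conc = rng.lognormal(mean=1.0, sigma=0.8, size=150)

# Box-Cox chooses the power parameter lambda by maximum likelihood.
transformed, lam = stats.boxcox(conc)

# Check normality before and after with the Shapiro-Wilk test.
_, p_raw = stats.shapiro(conc)
_, p_tr = stats.shapiro(transformed)
print(f"lambda = {lam:.2f}; Shapiro p: raw={p_raw:.4f}, transformed={p_tr:.4f}")
```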
