Similar Documents
1.
Mass spectrometric profiling approaches such as MALDI‐TOF and SELDI‐TOF are increasingly being used in disease marker discovery, particularly in the lower molecular weight proteome. However, little consideration has been given to the issue of sample size in experimental design. The aim of this study was to develop a protocol for the use of sample size calculations in proteomic profiling studies using MS. These sample size calculations can be based on a simple linear mixed model which allows the inclusion of estimates of biological and technical variation inherent in the experiment. The use of a pilot experiment to estimate these components of variance is investigated and is shown to work well when compared with larger studies. Examination of data from a number of studies using different sample types and different chromatographic surfaces shows the need for sample‐ and preparation‐specific sample size calculations.  相似文献   
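As a rough illustration of the kind of calculation described above (not the authors' exact protocol), the sketch below derives a per-group sample size from pilot estimates of biological and technical variance, using a standard normal-approximation power formula; the effect size, variance components and replicate number are invented.

    import math
    from scipy.stats import norm

    def n_per_group(delta, sigma2_bio, sigma2_tech, n_tech_reps=1,
                    alpha=0.05, power=0.80):
        # Per-subject variance = biological component plus the technical
        # component averaged over replicates, as in a simple linear mixed model.
        var = sigma2_bio + sigma2_tech / n_tech_reps
        z_a = norm.ppf(1 - alpha / 2)   # two-sided type 1 error
        z_b = norm.ppf(power)
        return math.ceil(2 * (z_a + z_b) ** 2 * var / delta ** 2)

    # Hypothetical pilot-study variance components for one peak intensity
    # (log scale) and a mean difference of 0.5 units to detect:
    print(n_per_group(delta=0.5, sigma2_bio=0.30, sigma2_tech=0.10, n_tech_reps=2))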

2.
Shaham S. PLoS ONE 2007; 2(11): e1117.
In genetic screens, the number of mutagenized gametes examined is an important parameter for evaluating screen progress, the number of genes of a given mutable phenotype, gene size, cost, and labor. Since genetic screens often entail examination of thousands or tens of thousands of animals, strategies for optimizing genetic screens are important for minimizing effort while maximizing the number of mutagenized gametes examined. To date, such strategies have not been described for genetic screens in the nematode Caenorhabditis elegans. Here we review general principles of genetic screens in C. elegans, and use a modified binomial strategy to obtain a general expression for the number of mutagenized gametes examined in a genetic screen. We use this expression to calculate optimal screening parameters for a large range of genetic screen types. In addition, we developed a simple online genetic-screen-optimization tool that can be used independently of this paper. Our results demonstrate that choosing the optimal F2-to-F1 screening ratio can significantly improve screen efficiency.
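The sketch below illustrates the idea with one common textbook form of the calculation, assuming a recessive F2 screen in a self-fertilising hermaphrodite in which each F2 animal has a 1/4 chance of being homozygous for a mutation carried on a given parental mutagenized genome; the paper's modified binomial expression may differ in detail, and the screening budget is invented.

    def genomes_screened(n_f1, f2_per_f1):
        # Each F1 carries two mutagenized haploid genomes; picking x F2 per F1
        # samples each genome with probability 1 - (3/4)**x.
        return 2 * n_f1 * (1 - 0.75 ** f2_per_f1)

    # For a fixed budget of 10,000 F2 animals, compare F2-to-F1 screening ratios:
    budget = 10_000
    for x in (1, 2, 4, 8, 16):
        print(x, round(genomes_screened(budget // x, x)))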

3.
Principal component analysis (PCA) and factor analysis (FA) are widely used in animal behaviour research. However, many authors automatically follow questionable practices implemented by default in general-purpose statistical software. Worse still, the results of such analyses in research reports typically omit many crucial details, which may hamper their evaluation. This article provides simple non-technical guidelines for PCA and FA. A standard for reporting the results of these analyses is suggested. Studies using PCA and FA must report: (1) whether the correlation or covariance matrix was used; (2) sample size, preferably as a footnote to the table of factor loadings; (3) indices of sampling adequacy; (4) how the number of factors was assessed; (5) communalities, when sample size is small; (6) details of factor rotation; (7) determinacy indices, if factor scores are computed; and (8) preferably, the original correlation matrix.
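To make the reporting items concrete, here is a minimal correlation-matrix PCA sketch that outputs several of the quantities listed above (sample size, eigenvalues for deciding the number of components, loadings and communalities); the Kaiser criterion and the simulated behavioural scores are purely illustrative assumptions, not the authors' recommended workflow.

    import numpy as np

    def pca_report(data):
        n = data.shape[0]
        r = np.corrcoef(data, rowvar=False)            # item (1): correlation matrix
        eigval, eigvec = np.linalg.eigh(r)
        order = np.argsort(eigval)[::-1]
        eigval, eigvec = eigval[order], eigvec[:, order]
        keep = eigval > 1.0                            # item (4): Kaiser criterion
        loadings = eigvec[:, keep] * np.sqrt(eigval[keep])
        communalities = (loadings ** 2).sum(axis=1)    # item (5)
        return dict(n=n, eigenvalues=eigval, loadings=loadings,
                    communalities=communalities)

    # Illustrative data: 100 animals scored on 4 correlated behavioural variables.
    rng = np.random.default_rng(2)
    latent = rng.normal(size=(100, 1))
    scores = latent + rng.normal(scale=0.7, size=(100, 4))
    rep = pca_report(scores)
    print("n =", rep["n"], "| eigenvalues:", rep["eigenvalues"].round(2))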

4.
Sample sizes of welfare assessment protocols must ensure that on-farm prevalences are reflected properly – regardless of farm size. Nevertheless, only a fixed sample size is specified in the Welfare Quality® protocol for sows and piglets. The present study investigated whether animals may be assessed from only one body side, as applied in the protocol, and whether the pre-set sample size of 30 animals mirrors the on-farm prevalences of the animal-based indicators in the gestation unit across different farm sizes. All indicators were assessed on both sides of an animal's body by one observer on 13 farms in Germany, which were visited five times within 10 months. The farm visits were treated as independent since different animals were housed in the gestation units. The number of sows in the gestation units varied between 18 and 549 animals. The comparison of sides was carried out by calculating exact agreement between the animals' sides and by a Wilcoxon signed-rank test (W). The results indicated that it is sufficient to assess the animal from one side (exact agreement: 88.3% to 99.5%, except for bursitis (70.0%); W: P-values 0.14 to 0.92). However, if side preferences exist for the indicator bursitis, a potential bias must be considered. Subsequently, the sample size was evaluated by comparing sample prevalences against the true prevalence, that is, the prevalence among all observed animals in the gestation unit at each farm visit. For this purpose, subsets of data were generated by simple random sampling without replacement, with the samples randomly including the animals' right or left sides. Linear regression was rated as appropriate provided the coefficient of determination R2 ≥ 0.90, slope = 1 and intercept = 0, signifying exact agreement. The results revealed that the sample size required by the protocol, and the application of calculation formulas, are only appropriate for mirroring the prevalences of frequent indicators in the gestation unit, for example bursitis (mean prevalence 34.4%). Using a proportion of animals, for example a sample of 30% of all observed animals in a farm visit, showed that the proportion must increase as an indicator's underlying prevalence approaches 0.00%. Local infections (mean prevalence 13.3%) needed samples including 60% of all observed animals in each farm visit, whereas vulva lesions (mean prevalence 7.28%) only reached accuracy when 70% of the animals were included. Indicators with a mean prevalence of <1% were not analysed, but can most likely only be ascertained by assessing all animals.
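A minimal simulation in the spirit of the evaluation described above: simple random samples are drawn without replacement from gestation units of known prevalence, and the sample prevalences are regressed on the true prevalences. The herd sizes, prevalences and sampling fractions below are invented.

    import numpy as np

    rng = np.random.default_rng(1)

    def sample_prevalence(herd_size, true_prev, fraction):
        # Prevalence in a simple random sample drawn without replacement.
        positives = round(herd_size * true_prev)
        n = max(1, round(herd_size * fraction))
        return rng.hypergeometric(positives, herd_size - positives, n) / n

    # Hypothetical gestation units: (number of sows, true prevalence of an indicator)
    units = [(18, 0.35), (60, 0.30), (150, 0.38), (300, 0.32), (549, 0.36)]

    for fraction in (0.3, 0.6):
        true = np.array([p for _, p in units])
        est = np.array([sample_prevalence(n, p, fraction) for n, p in units])
        slope, intercept = np.polyfit(true, est, 1)
        r2 = np.corrcoef(true, est)[0, 1] ** 2
        print(f"sampling fraction {fraction}: slope={slope:.2f}, "
              f"intercept={intercept:.2f}, R2={r2:.2f}")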

5.
Reducing the number of animal subjects used in biomedical experiments is desirable for ethical and practical reasons. Previous reviews of the benefits of reducing sample sizes have focused on improving experimental designs and methods of statistical analysis, but reducing the size of control groups has been considered rarely. We discuss how the number of current control animals can be reduced, without loss of statistical power, by incorporating information from historical controls, i.e. subjects used as controls in similar previous experiments. Using example data from published reports, we describe how to incorporate information from historical controls under a range of assumptions that might be made in biomedical experiments. Assuming more similarities between historical and current controls yields higher savings and allows the use of smaller current control groups. We conducted simulations, based on typical designs and sample sizes, to quantify how different assumptions about historical controls affect the power of statistical tests. We show that, under our simulation conditions, the number of current control subjects can be reduced by more than half by including historical controls in the analyses. In other experimental scenarios, control groups may be unnecessary. Paying attention to both the function and to the statistical requirements of control groups would result in reducing the total number of animals used in experiments, saving time, effort and money, and bringing research with animals within ethically acceptable bounds.  相似文献   
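The simulation sketch below illustrates the basic idea under the strongest assumption (full exchangeability of historical and current controls) by simply pooling historical controls into a two-sample t-test; the group sizes and effect size are invented, and the authors' analyses cover a wider range of assumptions.

    import numpy as np
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(0)

    def power(n_treat, n_ctrl, n_hist, effect=1.0, sims=2000, alpha=0.05):
        # Power of a t-test when historical controls are pooled with current ones.
        hits = 0
        for _ in range(sims):
            treat = rng.normal(effect, 1.0, n_treat)
            ctrl = rng.normal(0.0, 1.0, n_ctrl + n_hist)   # pooled control group
            hits += ttest_ind(treat, ctrl).pvalue < alpha
        return hits / sims

    # 10 treated animals; halving the current control group while borrowing
    # 10 historical controls (hypothetical numbers):
    print("10 current controls, no history:  ", power(10, 10, 0))
    print("5 current + 10 historical controls:", power(10, 5, 10))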

6.
Statistical sample size calculation is a crucial part of planning nonhuman animal experiments in basic medical research. The 3R principle intends to reduce the number of animals to a sufficient minimum. When planning experiments, one may consider the impact of less rigorous assumptions during sample size determination as it might result in a considerable reduction in the number of required animals. Sample size calculations conducted for 111 biometrical reports were repeated. The original effect size assumptions remained unchanged, but the basic properties (type 1 error 5%, two-sided hypothesis, 80% power) were varied. The analyses showed that a less rigorous assumption on the type 1 error level (one-sided 5% instead of two-sided 5%) was associated with a savings potential of 14% regarding the original number of required animals. Animal experiments are predominantly exploratory studies. In light of the demonstrated potential reduction in the numbers of required animals, researchers should discuss whether less rigorous assumptions during the process of sample size calculation may be reasonable for the purpose of optimizing the number of animals in experiments according to the 3R principle.  相似文献   
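A minimal sketch of the comparison, using the usual normal-approximation sample size formula for a two-group comparison of means rather than the exact calculations repeated in the paper; the standardised effect size is invented.

    import math
    from scipy.stats import norm

    def n_per_group(effect_size, alpha=0.05, power=0.80, two_sided=True):
        # Approximate per-group n for a two-sample comparison of means.
        z_a = norm.ppf(1 - alpha / 2) if two_sided else norm.ppf(1 - alpha)
        z_b = norm.ppf(power)
        return math.ceil(2 * ((z_a + z_b) / effect_size) ** 2)

    d = 1.0   # assumed standardised effect size
    two = n_per_group(d, two_sided=True)
    one = n_per_group(d, two_sided=False)
    print(f"two-sided: {two} per group, one-sided: {one} per group, "
          f"saving {(two - one) / two:.0%}")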

7.
N J Aebischer. Biometrics 1986; 42(4): 973-979.
Estimates of population size obtained by capture-recapture methods refer solely to the catchable portion of a population. Given a population containing marked animals, two closed-form maximum likelihood estimators of the proportion of uncatchable animals are presented. They are based on twice sampling the proportion of marked animals in the population: the first sample is drawn from catchable animals only, the second from mixed catchable and uncatchable animals. If the individuals in the first sample are not available to the second sample, both samples must be taken from a representative subpopulation of known size. The quantities required may be obtained during a standard capture-recapture session, provided the sampling methods meet the relevant assumptions; the ensuing estimate of population size can then be corrected for uncatchability. The technique is illustrated for eider ducks, using data from Coulson (1984, Ibis 126, 525-543).  相似文献   
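As a rough sketch of the underlying logic (a naive estimator, not necessarily either of the closed-form maximum likelihood estimators in the paper): marks occur only among catchable animals, so the marked fraction observed in the mixed sample is diluted in proportion to the uncatchable fraction. The counts below are invented.

    def uncatchable_fraction(m1, n1, m2, n2):
        # m1/n1: marked fraction in a sample of catchable animals only.
        # m2/n2: marked fraction in a sample of catchable + uncatchable animals.
        return 1 - (m2 / n2) / (m1 / n1)

    # Hypothetical counts: 40/100 marked among catchable birds,
    # 24/100 marked in the mixed sample.
    u = uncatchable_fraction(40, 100, 24, 100)
    print(f"estimated proportion uncatchable: {u:.2f}")
    # A capture-recapture estimate of the catchable population can then be
    # divided by (1 - u) to correct it for uncatchability.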

8.
Animals of different sizes tend to move in a dynamically similar manner when travelling at speeds corresponding to equal values of a dimensionless parameter (DP) called the Froude number. Consequently, the Froude number has been widely used for defining equivalent speeds and predicting speeds of locomotion by extinct species and on other planets. However, experiments using simulated reduced gravity have demonstrated that equality of the Froude number does not guarantee dynamic similarity. This has cast doubt upon the usefulness of the Froude number in locomotion research. Here we use dimensional analysis of the planar spring-mass model, combined with Buckingham's Pi-Theorem, to demonstrate that four DPs must be equal for dynamic similarity in bouncing gaits such as trotting, hopping and bipedal running. This can be reduced to three DPs by applying the constraint of maintaining a constant average speed of locomotion. Sensitivity analysis indicates that all of these DPs are important for predicting dynamic similarity. We show that the reason humans do not run in a dynamically similar manner at equal Froude number in different levels of simulated reduced gravity is that dimensionless leg stiffness decreases as gravity increases. The reason that the Froude number can predict dynamic similarity in Earth gravity is that dimensionless leg stiffness and dimensionless vertical landing speed are both independent of size. In conclusion, although equal Froude number is not sufficient for dynamic similarity, it is a necessary condition. Therefore, to detect fundamental differences in locomotion, animals of different sizes should be compared at equal Froude number, so that they can be as close to dynamic similarity as possible. More generally, the concept of dynamic similarity provides a powerful framework within which similarities and differences in locomotion can be interpreted.  相似文献   
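For concreteness, the sketch below computes two of the dimensionless parameters discussed: the Froude number and dimensionless leg stiffness (leg stiffness scaled by body weight per leg length). The definitions are standard, but the masses, leg lengths and stiffnesses are illustrative guesses.

    import math

    GRAVITY = 9.81

    def froude(speed, leg_length, gravity=GRAVITY):
        # Froude number: speed**2 / (g * leg length).
        return speed ** 2 / (gravity * leg_length)

    def dimensionless_leg_stiffness(k_leg, mass, leg_length, gravity=GRAVITY):
        # Leg stiffness scaled by body weight per leg length: k * L / (m * g).
        return k_leg * leg_length / (mass * gravity)

    # A human and a hypothetical smaller animal compared at the same Froude number:
    animals = {"human": dict(mass=70.0, leg_length=0.9, k_leg=20_000.0),
               "dog": dict(mass=25.0, leg_length=0.5, k_leg=14_000.0)}
    target_fr = 0.5
    for name, a in animals.items():
        speed = math.sqrt(target_fr * GRAVITY * a["leg_length"])  # invert Fr = v^2/(gL)
        k_hat = dimensionless_leg_stiffness(a["k_leg"], a["mass"], a["leg_length"])
        print(f"{name}: Fr={froude(speed, a['leg_length']):.2f} at {speed:.2f} m/s, "
              f"dimensionless leg stiffness {k_hat:.1f}")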

9.
It is now recognized that genetic data have an important role in the management of wild and captive populations. Valuable samples and data can often be obtained when animals are handled for other reasons, but unless the personnel involved are aware of the correct procedures, the sample taken will probably be useless. This article outlines appropriate procedures, and the data that can be obtained from them, so that when an opportunity arises, a quick decision can be made about feasibility of sampling and the usefulness of the data for management or research. This paper does not cover the veterinary aspects of obtaining the sample, nor its laboratory analysis, which are assumed to be well understood by the donor and recipient organizations, respectively. Every attempt has been made to develop techniques which are simple, and allow storage of the sample while arrangements are made with a suitable recipient laboratory.  相似文献   

10.
The available information on sample size requirements of mixture analysis methods is insufficient to permit a precise evaluation of the potential problems facing practical applications of mixture analysis. We use results from Monte Carlo simulation to assess the sample size requirements of a simple mixture analysis method under conditions relevant to biological applications of mixture analysis. The mixture model used includes two univariate normal components with equal variances, but assumes that the researcher is ignorant as to the equality of the variances. The method used relies on the EM algorithm to compute the maximum likelihood estimates of the mixture parameters, and the likelihood ratio test to assess the number of components in the mixtures. Our results suggest that sample sizes close to 500 or 1,000 observations may be required to adequately resolve mixtures commonly found in biology. Sample sizes of 500 or 1,000 are difficult to achieve. However, this mixture analysis method may be a reasonable option when the researcher deals with problems that are intractable by other means.
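A minimal sketch of the method described (EM fitting of one- and two-component normal mixtures followed by a likelihood ratio statistic), here using scikit-learn's GaussianMixture on simulated data; the simulation settings are invented, and the usual chi-square reference distribution for the statistic is unreliable for mixtures, which is partly why such large samples are needed.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)

    # Simulated mixture: two normal components with equal variances.
    x = np.concatenate([rng.normal(0.0, 1.0, 400),
                        rng.normal(2.5, 1.0, 600)]).reshape(-1, 1)

    total_loglik = {}
    for k in (1, 2):
        gm = GaussianMixture(n_components=k, random_state=0).fit(x)
        total_loglik[k] = gm.score(x) * len(x)   # score() is the mean log-likelihood

    lrt = 2 * (total_loglik[2] - total_loglik[1])
    print(f"likelihood ratio statistic (1 vs 2 components): {lrt:.1f}")
    # In practice the null distribution of this statistic is usually obtained
    # by parametric bootstrap rather than from a chi-square table.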

11.
Education and training in microsurgical techniques have historically relied on the use of live animal models. Due to an increase in the numbers of microsurgical operations in recent times, the number of trainees in this highly-specialised surgical field has continued to grow. However, strict legislation, greater public awareness, and an increasing sensitivity toward the ethical aspects of scientific research and medical education, emphatically demand a significant reduction in the numbers of animals used in surgical and academic education. Hence, a growing number of articles are reporting on the use of alternatives to live animals in microsurgical education and training. In this review, we report on the current trends in the development and use of microsurgical training models, and on their potential to reduce the number of live animals used for this purpose. We also share our experiences in this field, resulting from our performance of numerous microsurgical courses each year, over more than ten years. The porcine heart, in microvascular surgery training, and the fresh chicken leg, in microneurosurgical and microvascular surgery training, are excellent models for the teaching of basic techniques to the microsurgical novice. Depending on the selected level of expertise of the trainee, these alternative models are capable of reducing the numbers of live animals used by 80-100%. For an even more enhanced, "closer-to-real-life" scenario, these non-animated vessels can be perfused by a pulsatile pump. Thus, it is currently possible to provide excellent and in-depth training in microsurgical techniques, even when the number of live animals used is reduced to a minimum. With these new and innovative techniques, trainees are able to learn and prepare themselves for the clinical situation, with the sacrifice of considerably fewer laboratory animals than would have occurred previously.  相似文献   

12.
Estimating optimal sample size for microbiological surveys is a challenge for laboratory managers. When insufficient sampling is conducted, biased inferences are likely; however, when excessive sampling is conducted, valuable laboratory resources are wasted. This report presents a statistical model for the estimation of the sample size appropriate for the accurate identification of the bacterial subtypes of interest in a specimen. This applied model for microbiology laboratory use is based on a Bayesian mode of inference, which combines two inputs: (i) a prespecified estimate, or prior distribution statement, based on available scientific knowledge and (ii) observed data. The specific inputs for the model are a prior distribution statement of the number of strains per specimen provided by an informed microbiologist and data from a microbiological survey indicating the number of strains per specimen. The model output is an updated probability distribution of strains per specimen, which can be used to estimate the probability of observing all strains present according to the number of colonies that are sampled. In this report two scenarios that illustrate the use of the model to estimate bacterial colony sample size requirements are presented. In the first scenario, bacterial colony sample size is estimated to correctly identify Campylobacter amplified restriction fragment length polymorphism types on broiler carcasses. The second scenario estimates bacterial colony sample size to correctly identify Salmonella enterica serotype Enteritidis phage types in fecal drag swabs from egg-laying poultry flocks. An advantage of the model is that as updated inputs from ongoing surveys are incorporated into the model, increasingly precise sample size estimates are likely to be made.
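The sketch below illustrates only the final step, under the simplifying assumption that all strains in a specimen are equally frequent (which the paper's model need not assume): given a probability distribution over the number of strains per specimen, it computes the chance that every strain appears among c sampled colonies. The posterior weights are invented.

    from math import comb

    def p_all_strains_seen(k, c):
        # Coupon-collector probability, by inclusion-exclusion, that all k
        # equally frequent strains appear among c sampled colonies.
        if k == 0:
            return 1.0
        return sum((-1) ** j * comb(k, j) * ((k - j) / k) ** c
                   for j in range(k + 1))

    # Hypothetical updated distribution of the number of strains per specimen:
    posterior = {1: 0.55, 2: 0.30, 3: 0.15}

    for c in (3, 5, 10):
        p = sum(w * p_all_strains_seen(k, c) for k, w in posterior.items())
        print(f"{c} colonies sampled: P(all strains observed) = {p:.3f}")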

13.
Macer D. Bioethics 1989; 3(3): 226-235.
Macer explores whether it is possible to genetically alter animals to reduce or eliminate their capacity to feel pain, whether it would be ethical to do so, and how we would regard animals that do not feel pain. A possible use for such animals would be as subjects for laboratory research. Among the scientific, philosophical, and ethical uncertainties of pain that Macer considers are: can we define pain? how do we measure pain and anxiety? is pain always related to suffering? what is the minimum level of pain that a being must be able to feel before we reach the conclusion that it should not be used by other beings? are we justified in using beings that do not feel pain when we would not be if they did feel pain and suffer from it?  相似文献   

14.
Scaling effects in the fatigue strength of bones from different animals
The bones of vertebrates are all made from the same basic material, despite a huge variation in size from one species to another. This introduces a problem: large structures are more prone to fatigue failure (stress fracture) than smaller structures made of the same material. This implies that bones in larger animals cannot withstand as much stress in daily use as bones in smaller animals. In fact, this is not the case, because all bones experience approximately the same stresses and strains in use. This implies a variation in the underlying material: bone material in large animals must have superior fatigue properties to offset the disadvantages of size. This hypothesis is tested here by reference to fatigue data from the literature, taken from a range of animals from cows to mice. Fatigue strength was plotted as a function of stressed volume and modelled mathematically using a Weibull distribution. This shows a general tendency for fatigue strength to reduce as volume increases. But when the volume effect is taken into account, there remains a tendency for bones from smaller animals to have lower fatigue strength. This can be modelled by a simple variation in one of the parameters in the Weibull equation, which defines the intrinsic fatigue strength of the material. When extrapolated to the size of the whole bone for each animal, all bones were found to have the same fatigue strength. This resolves the anomaly and implies a complex system in which the underlying structure of bone varies with animal size in order to cancel out scaling effects.  相似文献   
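A minimal sketch of the weakest-link (Weibull) scaling referred to above, in which fatigue strength falls with stressed volume at a rate set by the Weibull modulus; all parameter values are invented, and the paper's full model additionally lets the intrinsic strength parameter vary with animal size.

    def fatigue_strength(volume, sigma_ref, v_ref, weibull_m):
        # Weibull size effect: strength scales as (reference volume / volume)**(1/m).
        return sigma_ref * (v_ref / volume) ** (1.0 / weibull_m)

    # Illustrative numbers only: mouse-sized versus cow-sized stressed volumes.
    sigma_ref, v_ref, m = 100.0, 10.0, 9.0   # MPa, mm^3, assumed Weibull modulus
    for name, volume in (("mouse", 10.0), ("cow", 1.0e5)):
        print(f"{name}: {fatigue_strength(volume, sigma_ref, v_ref, m):.1f} MPa")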

15.
One strategy for localization of a quantitative-trait locus (QTL) is to test whether the distribution of a quantitative trait depends on the number of copies of a specific genetic-marker allele that an individual possesses. This approach tests for association between alleles at the marker and the QTL, and it assumes that association is a consequence of the marker being physically close to the QTL. However, problems can occur when data are not from a homogeneous population, since associations can arise irrespective of a genetic marker being in physical proximity to the QTL-that is, no information is gained regarding localization. Methods to address this problem have recently been proposed. These proposed methods use family data for indirect stratification of a population, thereby removing the effect of associations that are due to unknown population substructure. They are, however, restricted in terms of the number of children per family that can be used in the analysis. Here we introduce tests that can be used on family data with parent and child genotypes, with child genotypes only, or with a combination of these types of families, without size restrictions. Furthermore, equations that allow one to determine the sample size needed to achieve desired power are derived. By means of simulation, we demonstrate that the existing tests have an elevated false-positive rate when the size restrictions are not followed and that a good deal of information is lost as a result of adherence to the size restrictions. Finally, we introduce permutation procedures that are recommended for small samples but that can also be used for extensions of the tests to multiallelic markers and to the simultaneous use of more than one marker.  相似文献   
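For orientation, the sketch below shows the naive population-based version of the test described in the opening sentence (regressing a quantitative trait on marker-allele copy number), which is precisely the analysis that hidden population substructure can bias and that the family-based tests discussed above are designed to protect against; the simulated allele frequency and effect size are invented.

    import numpy as np
    from scipy.stats import linregress

    rng = np.random.default_rng(3)

    # Simulated population sample: copies (0, 1 or 2) of a marker allele and a
    # quantitative trait with a small additive effect of that allele.
    allele_count = rng.binomial(2, 0.3, size=500)
    trait = 0.4 * allele_count + rng.normal(size=500)

    fit = linregress(allele_count, trait)
    print(f"slope = {fit.slope:.2f}, p = {fit.pvalue:.2g}")
    # In a stratified population this simple test can give spurious associations,
    # motivating tests that condition on parental or sibling genotypes.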

16.
Haplotype analyses have become increasingly common in genetic studies of human disease because of their ability to identify unique chromosomal segments likely to harbor disease-predisposing genes. The study of haplotypes is also used to investigate many population processes, such as migration and immigration rates, linkage-disequilibrium strength, and the relatedness of populations. Unfortunately, many haplotype-analysis methods require phase information that can be difficult to obtain from samples of nonhaploid species. There are, however, strategies for estimating haplotype frequencies from unphased diploid genotype data collected on a sample of individuals that make use of the expectation-maximization (EM) algorithm to overcome the missing phase information. The accuracy of such strategies, compared with other phase-determination methods, must be assessed before their use can be advocated. In this study, we consider and explore sources of error between EM-derived haplotype frequency estimates and their population parameters, noting that much of this error is due to sampling error, which is inherent in all studies, even when phase can be determined. In light of this, we focus on the additional error between haplotype frequencies within a sample data set and EM-derived haplotype frequency estimates incurred by the estimation procedure. We assess the accuracy of haplotype frequency estimation as a function of a number of factors, including sample size, number of loci studied, allele frequencies, and locus-specific allelic departures from Hardy-Weinberg and linkage equilibrium. We point out the relative impacts of sampling error and estimation error, calling attention to the pronounced accuracy of EM estimates once sampling error has been accounted for. We also suggest that many factors that may influence accuracy can be assessed empirically within a data set-a fact that can be used to create "diagnostics" that a user can turn to for assessing potential inaccuracies in estimation.  相似文献   
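A minimal two-locus, biallelic version of the EM approach described is sketched below; the studies in question handle many loci and multiallelic markers, so this only shows how phase-ambiguous double heterozygotes are resolved. The example genotypes are invented.

    import numpy as np

    def em_haplotype_freqs(genotypes, n_iter=200):
        # Haplotype frequency order: AB, Ab, aB, ab. Each genotype is the pair
        # (copies of allele A at locus 1, copies of allele B at locus 2).
        p = np.full(4, 0.25)                       # start from equal frequencies
        for _ in range(n_iter):
            counts = np.zeros(4)
            for g1, g2 in genotypes:
                if g1 == 1 and g2 == 1:
                    # Double heterozygote: phase unknown (AB/ab versus Ab/aB).
                    w = p[0] * p[3] / (p[0] * p[3] + p[1] * p[2])
                    counts += [w, 1 - w, 1 - w, w]
                else:
                    # Phase is determined by the genotype itself.
                    counts += [max(0, g1 + g2 - 2), max(0, g1 - g2),
                               max(0, g2 - g1), max(0, 2 - g1 - g2)]
            p = counts / (2 * len(genotypes))      # M-step
        return p

    # Invented unphased genotypes (allele-A count, allele-B count) for 8 individuals:
    geno = [(2, 2), (1, 1), (1, 1), (2, 1), (1, 2), (0, 0), (1, 1), (0, 1)]
    print(em_haplotype_freqs(geno).round(3))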

17.
Methods for choosing an appropriate sample size in animal experiments have received much attention in the statistical and biological literature. Due to ethical constraints, the number of animals used is always reduced where possible. However, as the number of animals decreases, the risk of obtaining inconclusive results increases. By using a more efficient experimental design we can, for a given number of animals, reduce this risk. In this paper two common cases are considered: where planned comparisons are made to compare treatments back to the control, and where researchers plan to make all pairwise comparisons. Using theoretical and empirical techniques, we show that, for studies where all pairwise comparisons are made, the traditional balanced design, as suggested in the literature, maximises sensitivity. For studies that involve planned comparisons of the treatment groups back to the control group, which are inherently more sensitive due to the reduced multiple testing burden, sensitivity is maximised by increasing the number of animals in the control group while decreasing the number in the treated groups.
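The sketch below illustrates the design principle for many-to-one comparisons using the familiar square-root allocation rule (a control group larger than each treated group by roughly the square root of the number of treatments), comparing the variance of a treatment-versus-control contrast under balanced and unbalanced allocation for a fixed total; whether this matches the authors' exact optimum is not claimed, and the numbers are invented.

    import math

    def contrast_variance(n_ctrl, n_treat, sigma2=1.0):
        # Variance of a treatment-minus-control difference in means.
        return sigma2 * (1 / n_ctrl + 1 / n_treat)

    def sqrt_rule_allocation(total_n, k_treatments):
        # Allocate a fixed total so that n_control ~= sqrt(k) * n_treatment.
        n_treat = total_n / (k_treatments + math.sqrt(k_treatments))
        return round(math.sqrt(k_treatments) * n_treat), round(n_treat)

    k, total = 4, 60
    balanced = total // (k + 1)
    n_ctrl, n_treat = sqrt_rule_allocation(total, k)
    print(f"balanced: {balanced} per group, "
          f"contrast variance {contrast_variance(balanced, balanced):.3f}")
    print(f"sqrt rule: {n_ctrl} controls, {n_treat} per treated group, "
          f"contrast variance {contrast_variance(n_ctrl, n_treat):.3f}")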

18.
The primary goal of an animal care and use program (ACUP) should be to ensure animal well-being while fostering progressive science. Both the Animal Welfare Act (and associated regulations) and the Public Health Service (PHS) Policy require the institutional animal care and use committee (IACUC) to provide oversight of the animal program through continuing reviews to ensure that procedures are performed as approved by the committee. But for many committees the semiannual assessment does not provide an opportunity to observe research procedures being performed. Furthermore, IACUC members are typically volunteers with other full-time commitments and may not be able to dedicate sufficient time to observe protocol performance. Postapproval monitoring (PAM) is a tool that the IACUC can use to ensure that the institution fulfills its regulatory obligation for animal program oversight. When performed by attentive and observant individuals, PAM can extend the IACUC's oversight, management, training, and communication resources, regardless of program size or complexity. No defined PAM process fits all institutions or all situations; rather, the monitoring must match the program under review. Nonetheless, certain concepts, concerns, and conditions affect all PAM processes; they are described in this article. Regardless of the style or depth of PAM chosen for a given program, one thing is sure: failure of the IACUC to engage all available and effective oversight methods to ensure humane, compassionate, efficient, and progressive animal care and use is a disservice to the institution, to the research community and to the animals used for biomedical research, testing, or teaching.  相似文献   

19.
The binary classification of landscapes into suitable vs. unsuitable areas underlies several prominent theories in conservation biogeography. However, a binary classification is not always appropriate. The textural discontinuity hypothesis provides an alternative theoretical framework to examine the geographical distribution of species, and does not rely on a binary classification scheme. The texture of a given landscape is the combination of its vertical structural complexity and horizontal spatial grain. The textural discontinuity hypothesis states that biophysical features in the environment are scaled in a discontinuous way, and that discontinuities in the body size distribution of animals mirror these biophysical discontinuities. As a result of this relationship, a complex landscape texture should be associated with small‐bodied animals, whereas a simple landscape texture should be associated with larger‐bodied animals. We examined this hypothesis for birds in five landscapes in south‐eastern Australia that represented a gradient from simple to complex landscape texture. In landscapes with a complex texture, the number of detections of small birds was higher than expected, and the number of detections of larger‐bodied birds was lower than expected. The opposite pattern was found in landscapes with a simple texture. The pattern remained significant when only bird species found in each of the five landscapes were considered, which demonstrated that the association of landscape texture with body size was not an artefact of landscapes differing in their species pools. Understanding the effects of landscape texture on species distribution patterns may be a promising research frontier for conservation biogeography. We hypothesize that the active management of landscape texture may be used to attract or deter animals of certain body sizes. Consistent with other theories, the textural discontinuity hypothesis therefore suggests that managing entire landscapes, rather than only predefined patches, is an important conservation strategy.  相似文献   
