Similar Articles
A total of 20 similar articles were found (search time: 15 ms).
1.
2.
Quantifying temporal variability in population abundances   Total citations: 3 (self-citations: 0, citations by others: 3)
Joel P. Heath, Oikos, 2006, 115(3): 573-581
Understanding variability of population abundances is of central concern to theoretical and applied evolutionary ecology, yet quantifying this conceptually simple idea has been substantially problematic. Standard statistical measures of variability are particularly biased by rare events, zero counts and other 'non-Gaussian' behaviour, which are often inappropriately weighted or excluded from analysis. I conjecture that these problems are primarily a function of calculating variation as deviation from an average abundance, while the average may not be static, nor actually reflect abundance at any point in the time series. Here I describe a simple metric (population variability, PV) that quantifies variability as the average percent difference between all combinations of observed abundances. Zero counts can be included if desired. Similar to standard metrics, variability is measured on a proportional scale, facilitating comparative applications. Standard metrics are based on Gaussian distributions, are over-sensitive to rare events and heavy-tailed behaviour, and can inappropriately indicate 'more time-more variation' effects (reddened spectrum). Here I demonstrate that, while PV behaves similarly for 'normal' time series, it is independent of deviation from mean abundance for heavy-tailed distributions, its robustness to non-Gaussian behaviour resolves artificial reddened-spectrum issues, and variability calculated using PV from short time series is substantially more accurate at estimating known long-term variability than standard metrics. PV therefore provides common ground for evaluating the variability of populations undergoing different dynamics and with different statistical distributions of abundance, and can be easily generalized to a variety of contexts and disciplines.
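Below is a minimal sketch of the pairwise-difference idea described in this abstract: each pair of observed abundances contributes its absolute difference scaled by the larger of the two values, and PV is the average over all pairs. The scaling by the pairwise maximum is an assumption made for illustration; the published metric may normalise differently.

```python
from itertools import combinations

def population_variability(abundances):
    """Average proportional difference over all pairs of observed abundances.

    Sketch of the PV idea: for each pair of counts, take |a - b| divided by the
    larger of the two (so the contribution lies in [0, 1]), then average over all
    pairs. Zero counts are allowed; a pair of two zeros contributes no difference.
    """
    pairs = list(combinations(abundances, 2))
    diffs = []
    for a, b in pairs:
        hi = max(a, b)
        diffs.append(0.0 if hi == 0 else abs(a - b) / hi)
    return sum(diffs) / len(diffs)

# Example: a series with one rare-event spike vs. a steadier series
print(population_variability([10, 12, 9, 11, 300]))   # dominated by one spike
print(population_variability([10, 12, 9, 11, 13]))    # low, steady variability
```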

3.
The objective of this study was to assess how accurately and repeatably the Iscan system measures force and pressure in the natural patellofemoral joint. These measurements must be made to test widely held assumptions about the relationships between mechanics, pain and cartilage degeneration. We assessed the system's accuracy by using test rigs in a materials testing machine to apply known forces and force distributions across the sensor. The root mean squared error in measuring resultant force (for five trials at each of seven load levels) was 6.5±4.4% (mean±standard deviation over all trials at all load levels), while the absolute error was −5.5±5.6%. For force distribution, the root mean squared error (for five trials at each of five force distributions) was 0.86±0.58%, while the absolute error was −0.22±1.03%. We assessed the repeatability of the system's measurements of patellofemoral contact force, pressure and force distribution in four cadaver specimens loaded in continuous and static flexion. Variability in measurement (standard deviation expressed as a percentage of the mean) was 9.1% for resultant force measurements and 3.0% for force distribution measurements under static loads, and 7.3% for resultant force and 2.2% for force distribution measurements during continuous flexion. Cementing the sensor to the cartilage lowered readings of resultant force by 31±32% (mean±standard deviation), area by 24±13% and mean pressure by 9±34% (relative to the uncemented sensor). Maximum pressure measurement, however, was 24±43% higher in the cemented sensor than in the uncemented sensor. The results suggest that the sensor measures force distribution more accurately and repeatably than absolute force. A limitation of our work, however, is that the sensor must be cemented to the patellar articular surface to make the force distribution measurements, and our results suggest that this process reduces the accuracy of force, pressure and area measurements. Our results suggest that the Iscan system's pressure measurement accuracy and repeatability are comparable to those of Fuji Prescale film; its advantages are that it is thinner than most Fuji Prescale film, measures contact area more accurately, and makes continuous measurements of force, pressure and area.

4.
Population modeling for a squirrel monkey colony breeding in a captive laboratory environment was approached with the use of two different mathematical modeling techniques. Deterministic modeling was used initially on a spreadsheet to estimate future census figures for animals in various age/sex classes. Historical data were taken as input parameters for the model, combined with harvesting policies to calculate future population figures in the colony. This was followed by a more sophisticated stochastic model that is capable of accommodating random variations in biological phenomena, as well as smoothing out measurement errors. Point estimates (means) for input parameters used in the deterministic model are replaced by probability distributions fitted to historical data from colony records. With the use of Crystal Ball (Decisioneering, Inc., Denver, CO) software, user-selected distributions are embedded in appropriate cells in the spreadsheet model. A Monte Carlo simulation scheme running within the spreadsheet draws (on each cycle) random values for input parameters from the distribution embedded in each relevant cell, and thus generates output values for forecast variables. After several thousand runs, a distribution is formed at the output end representing estimates for population figures (forecast variables) in the form of probability distributions. Such distributions provide the decision-maker with a mathematical habitat for statistical analysis in a stochastic setting. In addition to providing standard statistical measures (e.g., mean, variance, and range) that describe the location and shape of the distribution, this approach offers the potential for investigating crucial issues such as conditions surrounding the plausibility of extinction.
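As an illustration of the stochastic approach described above, here is a hedged sketch of a Monte Carlo projection in which point estimates are replaced by probability distributions. The two-class colony structure, parameter names and distributions are hypothetical placeholders (the study used a spreadsheet model with Crystal Ball), but the mechanics of drawing inputs, projecting forward, and summarising the forecast distribution are the same.

```python
import numpy as np

rng = np.random.default_rng(0)

def project_colony(n_adults, years, birth_rate, adult_survival, juvenile_survival, harvest):
    """One forward projection of a simplified, hypothetical two-class colony model."""
    adults, juveniles = float(n_adults), 0.0
    for _ in range(years):
        births = adults * birth_rate
        adults = adults * adult_survival + juveniles * juvenile_survival - harvest
        juveniles = births
        adults = max(adults, 0.0)
    return adults + juveniles

# Monte Carlo: replace point estimates by distributions (here invented, not fitted to real records)
runs = []
for _ in range(5000):
    params = dict(
        birth_rate=rng.normal(0.55, 0.05),
        adult_survival=rng.beta(18, 2),
        juvenile_survival=rng.beta(8, 4),
        harvest=rng.poisson(3),
    )
    runs.append(project_colony(n_adults=120, years=10, **params))

runs = np.array(runs)
print(runs.mean(), runs.std(), np.percentile(runs, [2.5, 97.5]))
print("P(quasi-extinction, N < 20):", (runs < 20).mean())
```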

5.
Kostal L, Lansky P, Pokora O, PLoS ONE, 2011, 6(7): e21998
During the stationary part of the neuronal spiking response, the stimulus can be encoded in the firing rate, but also in the statistical structure of the interspike intervals. We propose and discuss two information-based measures of statistical dispersion of the interspike interval distribution: the entropy-based dispersion and the Fisher information-based dispersion. The measures are compared with the frequently used concept of standard deviation. It is shown that the standard deviation is not well suited to quantify some aspects of dispersion that are often expected intuitively, such as the degree of randomness. The proposed dispersion measures are not entirely independent, although each describes the interspike intervals from a different point of view. The new methods are applied to common models of neuronal firing and to both simulated and experimental data.
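A minimal sketch of the contrast drawn above, under the assumption that the entropy-based dispersion can be illustrated as exp(differential entropy) estimated from a histogram (the paper's exact normalisation may differ): two ISI samples with roughly the same standard deviation but very different degrees of randomness are clearly separated by the entropy-based measure.

```python
import numpy as np

rng = np.random.default_rng(1)

def entropy_dispersion(isis, bins=200):
    """Histogram estimate of exp(differential entropy) of an ISI sample.

    Sketch only: the entropy-based dispersion is taken here to be exp(h), with h the
    differential entropy in nats; the normalisation used in the paper may differ.
    """
    counts, edges = np.histogram(isis, bins=bins, density=True)
    widths = np.diff(edges)
    p = counts[counts > 0]
    w = widths[counts > 0]
    h = -np.sum(p * np.log(p) * w)
    return np.exp(h)

# Two ISI samples with roughly the same standard deviation but different "randomness"
exp_isis = rng.exponential(scale=1.0, size=200_000)            # maximally random for a fixed mean
bimodal = np.where(rng.random(200_000) < 0.5,                  # regular alternation of short/long ISIs
                   rng.normal(0.2, 0.02, 200_000),
                   rng.normal(2.2, 0.02, 200_000))

for name, isis in [("exponential", exp_isis), ("bimodal", bimodal)]:
    print(name, "SD =", round(isis.std(), 2),
          "entropy-based dispersion =", round(entropy_dispersion(isis), 2))
```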

6.
Plant abundance data are often analysed using standard statistical procedures without considering their distributional features and the underlying ecological processes. However, plant abundance data, e.g. when measured in biodiversity monitoring programs, are often collected using a hierarchical sampling procedure, and since hierarchically sampled plant abundance data are typically both zero-inflated and over-dispersed, the use of a standard statistical procedure is sub-optimal for modelling such data. Two distributions (the zero-inflated generalised binomial distribution and the zero-inflated bounded beta distribution) are suggested as possible distributions for analysing discrete, continuous, or ordinal hierarchically sampled plant cover data.
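The sketch below shows the basic form of a zero-inflated model for continuous cover data on [0, 1): a point mass at zero mixed with a beta distribution, fitted by maximum likelihood. The parameterisation and the simulated data are illustrative assumptions; the paper's zero-inflated bounded beta and zero-inflated generalised binomial distributions differ in detail.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import beta

def zib_negloglik(params, y):
    """Negative log-likelihood of a simple zero-inflated beta model for cover data in [0, 1).

    params: logit of the zero-inflation probability, then log of the two beta shape
    parameters. A sketch of the general idea, not the paper's parameterisation.
    """
    pi = 1.0 / (1.0 + np.exp(-params[0]))        # probability of a structural zero
    a, b = np.exp(params[1]), np.exp(params[2])  # beta shape parameters
    y = np.asarray(y, dtype=float)
    zero = y == 0
    ll = np.sum(np.log(pi) * zero)
    ll += np.sum(np.log1p(-pi) + beta.logpdf(y[~zero], a, b))
    return -ll

# Simulated cover data: many zeros plus over-dispersed positive cover values
rng = np.random.default_rng(2)
cover = np.where(rng.random(500) < 0.4, 0.0, rng.beta(0.8, 3.0, 500))

fit = minimize(zib_negloglik, x0=[0.0, 0.0, 0.0], args=(cover,), method="Nelder-Mead")
print(fit.x)  # fitted logit(pi), log(a), log(b)
```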

7.

Background

DNA microarrays are a powerful technology that can provide a wealth of gene expression data for disease studies, drug development, and a wide range of other investigations. Because of the large volume and inherent variability of DNA microarray data, many new statistical methods have been developed for evaluating the significance of the observed differences in gene expression. However, until now little attention has been given to characterizing the dispersion of DNA microarray data.

Results

Here we examine the expression data obtained from 682 Affymetrix GeneChips® of 22 different types and demonstrate that the Gaussian (normal) frequency distribution is characteristic of the variability of gene expression values. However, typically 5 to 15% of the samples deviate from normality. Furthermore, it is shown that the frequency distributions of the difference of expression in subsets of ordered, consecutive pairs of genes (consecutive samples) in pair-wise comparisons of replicate experiments are also normal. We describe a consecutive sampling method, which is employed to calculate the characteristic function approximating the standard deviation, and show that the standard deviation derived from the consecutive samples is equivalent to the standard deviation obtained from individual genes. Finally, we determine the boundaries of probability intervals and demonstrate that the coefficients defining the intervals are independent of sample characteristics, variability of data, laboratory conditions and type of chips. These coefficients closely follow Student's t-distribution.

Conclusion

In this study we ascertained that the non-systematic variations possess a Gaussian distribution, determined the probability intervals and demonstrated that the K_α coefficients defining these intervals are invariant; these coefficients offer a convenient universal measure of dispersion of data. The fact that the K_α distributions are so close to the t-distribution and independent of conditions and type of arrays suggests that the quantitative data provided by Affymetrix technology give a "true" representation of the physical processes involved in the measurement of RNA abundance.

Reviewers

This article was reviewed by Yoav Gilad (nominated by Doron Lancet), Sach Mukherjee (nominated by Sandrine Dudoit) and Amir Niknejad and Shmuel Friedland (nominated by Neil Smalheiser).
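The consecutive-sampling idea from the Results section can be sketched as follows: genes are ordered by average signal, and the dispersion of replicate-vs-replicate differences is computed within windows of consecutive genes, giving standard deviation as a function of expression level. The window size and normalisation below are assumptions for illustration, not the published procedure.

```python
import numpy as np

def consecutive_sd(rep1, rep2, window=200):
    """Sketch of a consecutive-sampling estimate of dispersion vs. expression level.

    Genes are ordered by average expression; within each window of consecutive genes
    the standard deviation of the replicate-vs-replicate differences is computed.
    """
    order = np.argsort((rep1 + rep2) / 2.0)
    diffs = (rep1 - rep2)[order]
    levels = ((rep1 + rep2) / 2.0)[order]
    sds, centres = [], []
    for start in range(0, len(diffs) - window + 1, window):
        sds.append(diffs[start:start + window].std())
        centres.append(levels[start:start + window].mean())
    return np.array(centres), np.array(sds)

# Toy replicate arrays with intensity-dependent noise
rng = np.random.default_rng(3)
true = rng.uniform(4, 14, 10_000)                     # log2-scale "true" expression
rep1 = true + rng.normal(0, 0.1 + 0.02 * true, 10_000)
rep2 = true + rng.normal(0, 0.1 + 0.02 * true, 10_000)

centres, sds = consecutive_sd(rep1, rep2)
print(centres[:3], sds[:3])   # dispersion rises with expression level in this toy example
```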

8.
To test the hypotheses that butterflies in an intact lowland rainforest are randomly distributed in space and time, a guild of nymphalid butterflies was sampled at monthly intervals for one year by trapping 883 individuals of 91 species in the canopy and understory of four contiguous, intact forest plots and one naturally occurring lake edge. The overall species abundance distribution was well described by a log-normal distribution. Total species diversity (γ-diversity) was partitioned into additive components within and among community subdivisions (α-diversity and β-diversity) in vertical, horizontal and temporal dimensions. Although community subdivisions showed high similarity (1 - β-diversity/γ-diversity), significant β-diversity existed in each dimension. Individual abundance and observed species richness were lower in the canopy than in the understory, but rarefaction analysis suggested that the underlying species richness was similar in both canopy and understory. Observed species richness varied among the four contiguous forest plots, and was lowest in the lake edge plot. Rarefaction and species accumulation curves showed that one forest plot and the lake edge had significantly lower species richness than the other forest plots. Within any given month, only a small fraction of total sample species richness was represented by a single plot and height (canopy or understory). Comparison of this study to a similar one done in disturbed forest showed that butterfly diversity at a naturally occurring lake edge differed strongly from that at a pasture-forest edge. Further comparison showed that species abundance distributions from intact and disturbed forest areas had variances that differed significantly, suggesting that in addition to extrapolation, rarefaction and species accumulation techniques, the shapes of species abundance distributions are fundamental to assessing diversity among sites. This study shows the necessity of long-term sampling of diverse communities in space and time to assess tropical insect diversity among different areas, and the need for such studies is discussed in relation to tropical ecology and quick surveys in conservation biology.
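The additive partition used above can be illustrated with a short sketch: total (gamma) diversity is computed on the pooled community, mean within-sample (alpha) diversity over subdivisions, and beta diversity is the difference. Shannon diversity and equal sample weights are assumptions made here for brevity; the same scheme applies to species richness and to weighted samples.

```python
import numpy as np

def shannon(counts):
    p = counts / counts.sum()
    p = p[p > 0]
    return -(p * np.log(p)).sum()

def additive_partition(community_matrix):
    """Additive partition of Shannon diversity: gamma = mean(alpha) + beta.

    Rows are community subdivisions (e.g. plot x height x month samples),
    columns are species; equal weights are assumed here for simplicity.
    """
    m = np.asarray(community_matrix, dtype=float)
    alpha = np.mean([shannon(row) for row in m if row.sum() > 0])
    gamma = shannon(m.sum(axis=0))
    return alpha, gamma - alpha, gamma

# Toy example: three samples, five species
samples = [[12, 3, 0, 1, 0],
           [8, 5, 2, 0, 0],
           [0, 1, 9, 4, 2]]
alpha, beta, gamma = additive_partition(samples)
print(f"alpha={alpha:.3f}  beta={beta:.3f}  gamma={gamma:.3f}")
```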

9.
By starting from the Johnson distribution pioneered by Johnson (1949), we propose a broad class of distributions with bounded support on the basis of the symmetric family of distributions. The new class of distributions provides a rich source of alternative distributions for analyzing univariate bounded data. A comprehensive account of the mathematical properties of the new family is provided. We briefly discuss estimation of the model parameters of the new class of distributions based on two estimation methods. Additionally, a new regression model is introduced by considering the distribution proposed in this article, which is useful for situations where the response is restricted to the standard unit interval and the regression structure involves regressors and unknown parameters. The regression model allows both location and dispersion effects to be modelled. We define two residuals for the proposed regression model to assess departures from model assumptions as well as to detect outlying observations, and discuss some influence methods such as local influence and generalized leverage. Finally, an application to real data is presented to show the usefulness of the new regression model.
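A hedged sketch of the Johnson-type construction: a symmetric variate is mapped onto (0, 1) through a logistic transform, which with a standard normal base gives the classical Johnson S_B family, and with other symmetric bases (e.g. Student's t) gives the kind of broader bounded class the abstract describes. Parameter names follow the usual Johnson convention, not necessarily the paper's.

```python
import numpy as np
from scipy import stats

def johnson_sb_sample(gamma, delta, size, base=stats.norm, rng=None):
    """Sample from a Johnson-type bounded distribution on (0, 1).

    If Z follows a symmetric base distribution, Y = 1 / (1 + exp(-(Z - gamma) / delta))
    lies in (0, 1). With a standard normal base this is the classical Johnson S_B family;
    other symmetric bases give related bounded families.
    """
    rng = rng or np.random.default_rng()
    z = base.rvs(size=size, random_state=rng)
    return 1.0 / (1.0 + np.exp(-(z - gamma) / delta))

y_normal_base = johnson_sb_sample(gamma=0.5, delta=1.2, size=5)
y_t_base = johnson_sb_sample(gamma=0.5, delta=1.2, size=5, base=stats.t(df=4))
print(y_normal_base, y_t_base)
```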

10.
Pennello GA, Devesa SS, Gail MH, Biometrics, 1999, 55(3): 774-781
Commonly used methods for depicting geographic variation in cancer rates are based on rankings. They identify where the rates are high and low but indicate neither the magnitude of the rates nor their variability. Yet such measures of variability may be useful in suggesting which types of cancer warrant further analytic studies of localized risk factors. We consider a mixed effects model in which the logarithm of the mean Poisson rate is additive in fixed stratum effects (e.g., age effects) and in logarithms of random relative risk effects associated with geographic areas. These random effects are assumed to follow a gamma distribution with unit mean and variance 1/α, similar to Clayton and Kaldor (1987, Biometrics 43, 671-681). We present maximum likelihood and method-of-moments estimates with standard errors for inference on α^(-1/2), the relative risk standard deviation (RRSD). The moment estimates rely on only the first two moments of the Poisson and gamma distributions but have larger standard errors than the maximum likelihood estimates. We compare these estimates with other measures of variability. Several examples suggest that the RRSD estimates have advantages compared to other measures of variability.
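The moment idea can be sketched directly: under Y_i ~ Poisson(E_i r_i) with gamma relative risks of unit mean and variance 1/α, the variance of Y_i is E_i + E_i²/α, so equating the observed excess variation to its expectation yields a simple estimator of α and hence of the RRSD α^(-1/2). The estimator below illustrates that idea and is not necessarily the exact moment estimator of the paper.

```python
import numpy as np

def rrsd_moment_estimate(observed, expected):
    """Method-of-moments sketch of the relative-risk standard deviation (RRSD).

    Assumed model: Y_i ~ Poisson(E_i * r_i) with gamma relative risks r_i of mean 1 and
    variance 1/alpha, so Var(Y_i) = E_i + E_i^2 / alpha. Equating observed excess
    variation to its expectation gives a simple moment estimator of alpha.
    """
    y = np.asarray(observed, dtype=float)
    e = np.asarray(expected, dtype=float)
    excess = np.sum((y - e) ** 2 - e)
    if excess <= 0:
        return 0.0                          # no evidence of extra-Poisson variation
    alpha_hat = np.sum(e ** 2) / excess
    return alpha_hat ** -0.5                # RRSD = alpha^(-1/2)

# Toy example: simulate area-level counts with a true RRSD of 0.3
rng = np.random.default_rng(4)
expected = rng.uniform(50, 500, size=200)
alpha_true = 1 / 0.3 ** 2
risks = rng.gamma(shape=alpha_true, scale=1 / alpha_true, size=200)
observed = rng.poisson(expected * risks)
print("estimated RRSD:", rrsd_moment_estimate(observed, expected))
```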

11.
Spatial distributions of biological variables are often well characterized with pairwise measures of spatial autocorrelation. In this article, the probability theory for products and covariances of join-count spatial autocorrelation measures is developed for spatial distributions of multiple nominal types (e.g. species or genotypes). This more fully describes the joint distributions of pairwise measures in spatial distributions of multiple (i.e. more than two) types. An example is given of how the covariances can be used for finding standard errors of weighted averages of join-counts in spatial autocorrelation analysis of more than two types, as is typical for genetic data at multiallelic loci.
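For context, the join-count statistics whose products and covariances the paper derives can be computed in a few lines: each neighbouring pair of locations contributes one join, tallied by the (unordered) pair of types it connects. The transect example below is purely illustrative.

```python
import numpy as np

def join_counts(types, adjacency):
    """Count joins (neighbouring pairs) by unordered pair of types.

    `types` maps each location index to a nominal type (e.g. genotype); `adjacency`
    is a list of (i, j) neighbour pairs, each counted once.
    """
    counts = {}
    for i, j in adjacency:
        key = tuple(sorted((types[i], types[j])))
        counts[key] = counts.get(key, 0) + 1
    return counts

# Toy 1-D transect of genotypes with nearest-neighbour joins
types = ["A", "A", "B", "C", "B", "B", "A", "C"]
adjacency = [(i, i + 1) for i in range(len(types) - 1)]
print(join_counts(types, adjacency))
```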

12.
Matrix population models are a standard tool for studying stage-structured populations, but they are not flexible in describing stage duration distributions. This study describes a method for modeling various such distributions in matrix models. The method uses a mixture of two negative binomial distributions (parametrized using a maximum likelihood method) to approximate a target (true) distribution. To examine the performance of the method, populations consisting of two life stages (juvenile and adult) were considered. The juvenile duration distribution followed a gamma distribution, lognormal distribution, or zero-truncated (over-dispersed) Poisson distribution, each of which represents a target distribution to be approximated by a mixture distribution. The true population growth rate based on a target distribution was obtained using an individual-based model, and the extent to which matrix models can approximate the target dynamics was examined. The results show that the method generally works well for the examined target distributions, but is prone to biased predictions under some conditions. In addition, the method works uniformly better than an existing method whose performance was also examined for comparison. Other details regarding parameter estimation and model development are also discussed.
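A hedged sketch of the fitting step: a two-component negative binomial mixture is fitted by maximum likelihood to durations drawn from a target (here, gamma) distribution. The parameterisation (logit weight, log size and logit probability per component) and the optimiser settings are assumptions for illustration, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import nbinom, gamma as gamma_dist

def mixnb_negloglik(params, data):
    """Negative log-likelihood of a two-component negative binomial mixture."""
    w = 1 / (1 + np.exp(-params[0]))                 # mixing weight
    n1, n2 = np.exp(params[1]), np.exp(params[2])    # component sizes
    p1 = 1 / (1 + np.exp(-params[3]))                # component probabilities
    p2 = 1 / (1 + np.exp(-params[4]))
    mix = w * nbinom.pmf(data, n1, p1) + (1 - w) * nbinom.pmf(data, n2, p2)
    return -np.sum(np.log(mix + 1e-300))

# Target: juvenile stage durations drawn from a (discretised) gamma distribution
rng = np.random.default_rng(5)
durations = np.round(gamma_dist.rvs(a=6, scale=2, size=2000, random_state=rng)).astype(int)

fit = minimize(mixnb_negloglik, x0=[0.0, 1.0, 2.0, 0.0, 0.0],
               args=(durations,), method="Nelder-Mead",
               options={"maxiter": 5000})
print(fit.x, fit.fun)
```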

13.
The integro-differential growth model of Eakman, Fredrickson, and Tsuchiya has been employed to fit cell size distribution data for Schizosaccharomyces pombe grown in a chemostat under severe product inhibition by ethanol. The distributions were obtained with a Coulter aperture and an electronic system patterned after that of Harvey and Marr. Four parameters (mean cell division size, cell division size standard deviation, daughter cell size standard deviation, and a growth rate coefficient) were calculated for models where the cell growth rate was inversely proportional to size, constant, and proportional to size. A fourth model, one where sigmoidal growth behavior was simulated by two linear growth segments, was also investigated. Linear and sigmoidal models fit the distribution data best. While the mean cell division size remained relatively constant at all growth rates, the standard deviation of the division size distribution increased with increasing holding times. The standard deviation of the daughter size distribution remained small at all dilution rates. Unlike previous findings with other organisms, the average cell size of Schizosaccharomyces pombe increased at low growth rates.

14.
Power calculations of a statistical test require that the underlying population distribution(s) be completely specified. Statisticians, in practice, may not have complete knowledge of the entire nature of the underlying distribution(s) and are at a loss for calculating the exact power of the test. In such cases, an estimate of the power would provide a suitable substitute. In this paper, we are interested in estimating the power of the Kruskal-Wallis one-way analysis of variance by ranks test for a location shift. We investigated an extension of a data-based power estimation method presented by Collings and Hamilton (1988), which requires no prior knowledge of the underlying population distributions other than that necessary to perform the Kruskal-Wallis test for a location shift. This method utilizes bootstrapping techniques to produce a power estimate based on the empirical cumulative distribution functions of the sample data. We performed a simulation study of the extended power estimator under the conditions of k = 3 and k = 5 samples of equal sizes m = 10 and m = 20, with four underlying continuous distributions that possessed various location configurations. Our simulation study demonstrates that the Extended Average X & Y power estimation method is a reliable estimator of the power of the Kruskal-Wallis test for k = 3 samples, and ranges from conservative to a mild overestimator of the true power for k = 5 samples.
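The resampling idea can be sketched as follows: pilot samples are bootstrapped, the hypothesised location shifts are added, and the rejection rate of the Kruskal-Wallis test across replicates estimates power. This is a generic bootstrap power sketch, not the specific Collings-Hamilton extension studied in the paper.

```python
import numpy as np
from scipy.stats import kruskal

def bootstrap_kw_power(samples, shifts, n_boot=2000, alpha=0.05, rng=None):
    """Bootstrap sketch of power estimation for the Kruskal-Wallis test.

    Each pilot sample is resampled with replacement, the hypothesised location shift
    is added to its group, and the rejection rate over replicates estimates power.
    """
    rng = rng or np.random.default_rng()
    rejections = 0
    for _ in range(n_boot):
        groups = [rng.choice(s, size=len(s), replace=True) + d
                  for s, d in zip(samples, shifts)]
        if kruskal(*groups).pvalue < alpha:
            rejections += 1
    return rejections / n_boot

# Pilot data: three groups of size 10, hypothesised shifts 0, 0.5 and 1.0
rng = np.random.default_rng(6)
pilot = [rng.normal(0, 1, 10) for _ in range(3)]
print(bootstrap_kw_power(pilot, shifts=[0.0, 0.5, 1.0], rng=rng))
```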

15.
Increased intra-subject variability of reaction times (ISV-RT) is one of the most consistent findings in attention-deficit/hyperactivity disorder (ADHD). Although the nature of this phenomenon is still unclear, it has been hypothesised to reflect interference from the Default Mode Network (DMN). So far, ISV-RT has been operationally defined either as a frequency spectrum of the underlying RT time series, or as a measure of dispersion of the RT score distribution. Here, we use a novel RT analysis framework to link these hitherto unconnected facets of ISV-RT by determining the sensitivity of different measures of RT dispersion to the frequency content of the underlying RT time series. N = 27 patients with ADHD and N = 26 healthy controls performed several visual N-back tasks. Different measures of RT dispersion were repeatedly modelled after individual frequency bands of the underlying RT time series had been either extracted or suppressed using frequency-domain filtering. We found that the intra-subject standard deviation of RT preserves the “1/f noise” characteristic typical of human RT data. Furthermore, and most importantly, we found that the ex-Gaussian parameter τ is rather exclusively sensitive to frequencies below 0.025 Hz in the underlying RT time series and that the particularly slow RTs, which nourish τ, occur regularly as part of a quasi-periodic, ultra-slow RT fluctuation. Overall, our results are compatible with the idea that ISV-RT is modulated by an endogenous, slowly fluctuating process that may reflect DMN interference.
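A compact sketch of the two ingredients described above: a moment-based ex-Gaussian fit (for an ex-Gaussian, the third central moment equals 2τ³) and an FFT-based filter that suppresses a chosen frequency band of the RT time series. The toy series, trial spacing and band edges are illustrative assumptions, not the study's data or pipeline.

```python
import numpy as np

def exgauss_moments(rt):
    """Moment-based ex-Gaussian fit: tau = (m3 / 2)^(1/3), mu = mean - tau, sigma^2 = var - tau^2."""
    rt = np.asarray(rt, dtype=float)
    m3 = np.mean((rt - rt.mean()) ** 3)
    tau = max(m3 / 2.0, 0.0) ** (1.0 / 3.0)
    mu = rt.mean() - tau
    sigma = np.sqrt(max(rt.var() - tau ** 2, 0.0))
    return mu, sigma, tau

def suppress_band(series, dt, f_lo, f_hi):
    """Remove frequency components between f_lo and f_hi (Hz) with a simple FFT filter."""
    spectrum = np.fft.rfft(series - series.mean())
    freqs = np.fft.rfftfreq(len(series), d=dt)
    spectrum[(freqs >= f_lo) & (freqs <= f_hi)] = 0
    return np.fft.irfft(spectrum, n=len(series)) + series.mean()

# Toy RT series (2 s trial spacing) with an ultra-slow fluctuation adding occasional slow RTs
rng = np.random.default_rng(7)
t = np.arange(600) * 2.0
rt = 450 + 40 * rng.standard_normal(600) + 120 * np.clip(np.sin(2 * np.pi * 0.01 * t), 0, None)

print("tau before filtering:", exgauss_moments(rt)[2])
print("tau after removing < 0.025 Hz:",
      exgauss_moments(suppress_band(rt, dt=2.0, f_lo=0.0005, f_hi=0.025))[2])
```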

16.
A stochastic model for hospital infection incorporating both direct transmission and indirect transmission via free-living bacteria in the environment is investigated. We examine the long-term behavior of the model by calculating a stationary distribution and a normal approximation of the distribution. The quasi-stationary distribution of the model is studied to investigate the model's behavior before extinction and the time to extinction. Numerical results show agreement between the calculated distributions and the results of event-driven simulations. Hand hygiene of volunteers is more effective in reducing the mean (or standard deviation) of the stationary distribution of colonized patients and the expected time to extinction than hand hygiene of health care workers (HCWs), on the basis of our parameter values. However, the indirect (or direct) transmission rate can lead to either an increase or a decrease in the standard deviation of the stationary distribution, although the impact of the indirect transmission is much greater than that of the direct transmission. The findings suggest that isolation of newly admitted colonized patients is most effective in reducing both the mean and standard deviation of the stationary distribution, and that measures related to indirect transmission are secondary in their effects compared to other interventions.
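For orientation, an event-driven (Gillespie-type) simulation of a ward with both direct and environmentally mediated transmission might look like the sketch below. The state variables, rate expressions and parameter values are all assumptions chosen for illustration; they are not the model analysed in the paper.

```python
import numpy as np

def simulate_ward(beta_d, beta_i, nu, gamma, mu, admit, n_beds, t_end, rng):
    """Gillespie-style sketch: C = colonized patients, B = free-living bacteria units.

    Assumed events and rates (not the paper's model):
      direct transmission    beta_d * C * (n_beds - C)   -> C + 1
      indirect transmission  beta_i * B * (n_beds - C)   -> C + 1
      shedding               nu * C                      -> B + 1
      environmental clearance gamma * B                  -> B - 1
      decolonisation/discharge mu * C                    -> C - 1
      admission of a colonized patient admit             -> C + 1
    """
    C, B, t = 1, 0, 0.0
    while t < t_end:
        rates = np.array([beta_d * C * (n_beds - C), beta_i * B * (n_beds - C),
                          nu * C, gamma * B, mu * C, admit])
        total = rates.sum()
        if total == 0:
            break
        t += rng.exponential(1 / total)
        event = rng.choice(6, p=rates / total)
        if event in (0, 1, 5):
            C = min(C + 1, n_beds)
        elif event == 2:
            B += 1
        elif event == 3:
            B = max(B - 1, 0)
        else:
            C = max(C - 1, 0)
    return C, B

rng = np.random.default_rng(8)
finals = [simulate_ward(0.002, 0.001, 0.5, 1.0, 0.3, 0.05, 20, 365.0, rng) for _ in range(200)]
C_vals = np.array([c for c, _ in finals])
print("mean and SD of colonized patients at t_end:", C_vals.mean(), C_vals.std())
```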

17.
We studied the independent influence of changes in perfusion on pulmonary gas exchange in the left lower lobe (LLL) of anesthetized dogs. Blood flow to the LLL (QLLL) was raised 50% (increased QLLL) or reduced 50% (decreased QLLL) from baseline by partial occlusion of the right or left pulmonary artery, respectively. Minute ventilation and alveolar PCO2 of the LLL remained constant throughout the study. We determined ventilation-perfusion distributions of the LLL using the multiple inert gas elimination technique. Increased QLLL impaired LLL pulmonary gas exchange. All dispersion indexes and all arterial-alveolar difference areas increased (P less than 0.01). Decreased QLLL increased the log standard deviation of the perfusion distribution (P less than 0.05) and reduced the log standard deviation of the ventilation distribution (P less than 0.01) but did not affect the dispersion indexes or alveolar-arterial difference areas. We conclude that ventilation-perfusion heterogeneity is increased by independent changes in perfusion from normal baseline blood flow, even when ventilation and alveolar gas composition remain constant.

18.
The distance distributions between successive occurrences of the same oligonucleotides in chromosomal DNA are studied in different classes of higher eukaryotic organisms. A two-parameter model is applied to the distance distributions of quintuplets (sequences of five bp) and hexaplets (sequences of six bp); the first parameter k refers to the short-range exponential decay of the distributions, whereas the second parameter m refers to the power-law behavior. A two-dimensional scatter plot representing the model equation demonstrates that the points corresponding to the distance distributions of oligonucleotides containing the CG consensus sequence (promoter of RNA polymerase II) cluster together (group α), apart from all other oligonucleotides (group β). This is shown for the available chordates Homo sapiens, Pan troglodytes, Mus musculus, Rattus norvegicus, Gallus gallus and Danio rerio. This clustering is less evident in lower Animalia and plants, such as Drosophila melanogaster, Caenorhabditis elegans and Arabidopsis thaliana. Moreover, in all organisms the oligonucleotides which contain any consensus sequence are found to be described by long-range distributions, whereas all others show a stronger influence of short-range decay. Various measures are introduced and evaluated to numerically characterize the clustering of the two groups. The one which most clearly discriminates the two classes is shown to be the proximity factor.
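Computing such a distance distribution is straightforward, as sketched below: record the start positions of every occurrence of an oligonucleotide and take successive differences; the histogram of those distances is what the two-parameter (exponential plus power-law) model is fitted to. The random test sequence is only a placeholder for chromosomal DNA.

```python
import numpy as np

def distance_distribution(sequence, motif):
    """Distances between successive occurrences of a motif in a DNA sequence.

    Returns the start-to-start distances between consecutive (possibly overlapping)
    matches; their histogram is the distribution modelled in the paper.
    """
    positions = []
    start = sequence.find(motif)
    while start != -1:
        positions.append(start)
        start = sequence.find(motif, start + 1)
    return np.diff(positions)

# Toy example on a random sequence; real analyses use chromosomal DNA
rng = np.random.default_rng(9)
seq = "".join(rng.choice(list("ACGT"), size=200_000))
dists = distance_distribution(seq, "ACGCG")   # a CG-containing quintuplet
print(len(dists), dists[:10])
```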

19.
Aim
Determining the mechanisms underlying climatic limitation of species distributions is essential for understanding responses to current climatic change. Disentangling direct (e.g. physiological) and indirect (e.g. trophic) effects of climate on distributions through occurrence-based modelling is problematic because most species use the same area for both shelter and food acquisition. By focusing on marine birds that breed on land but feed at sea, we exploit a rare opportunity to dissociate direct from indirect climatic effects on endothermic species.
Location
Coastal Europe.
Methods
We developed climate-response surfaces (CRS) for 13 seabird species in coastal Europe, linking terrestrial climatic variables considered important for heat transfer with presence/absence data across each species' entire European breeding range. Agreement between modelled and actual distribution was assessed for jackknifed samples using the area under the curve (AUC) of receiver operating characteristic plots. Higher AUC values indicated closer correspondence between observed breeding distribution and terrestrial climate. We assessed the influence of several ecological factors on model performance across species.
Results
Species maximum foraging range and breeding latitude explained the greatest proportion of variation in AUC across species. AUC was positively related to both latitude and foraging range.
Main conclusions
The positive relationship between foraging range and AUC suggests that species foraging further are more likely to be constrained by environmental heat stress conditions at the breeding site. One plausible explanation is that long foraging trips result in one parent spending long periods in continuous nest attendance, exposed to such conditions. These may include negative impacts through predation and parasitism in addition to physiological responses to the thermal environment, which probably explains why our models performed better for species breeding at higher latitudes, where such species interactions are considered less important. These data highlight the importance of considering physiological impacts of climate for endothermic species, and suggest that widespread oceanographic changes that reduce prey quality and quantity for seabirds at sea may be exacerbated by additional impacts of climate at the breeding site.

20.
A parameter-optimization process (model calibration) is usually required for numerical model applications, and it involves the use of an objective function to determine the model cost (model-data errors). The sum of squared errors (SSR) has been widely adopted as the objective function in various optimization procedures. However, the 'square error' calculation was found to be more sensitive to extreme or high values. Thus, we proposed that the sum of absolute errors (SAR) may be a better option than SSR for model calibration. To test this hypothesis, we used two case studies (a hydrological model calibration and a biogeochemical model calibration) to investigate the behavior of a group of potential objective functions: SSR, SAR, sum of squared relative deviation (SSRD), and sum of absolute relative deviation (SARD). Mathematical evaluation of model performance demonstrates that the 'absolute error' measures (SAR and SARD) are superior to the 'square error' measures (SSR and SSRD) in calculating the objective function for model calibration, and SAR behaved best (with the least error and highest efficiency). This study suggests that SSR might be overused in real applications, and SAR may be a reasonable choice in common optimization implementations without emphasizing either high or low values (e.g., modeling for supporting resources management).
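The four candidate objective functions compared in the study are simple to state in code; the sketch below also shows how a single extreme observation inflates SSR far more than SAR, which is the behaviour motivating the paper's recommendation.

```python
import numpy as np

def objective_functions(observed, simulated):
    """The four candidate objective functions: SSR, SAR, SSRD and SARD.

    SSR: sum of squared errors; SAR: sum of absolute errors; SSRD / SARD use
    deviations relative to the observations (which must therefore be non-zero).
    """
    obs = np.asarray(observed, dtype=float)
    sim = np.asarray(simulated, dtype=float)
    err = sim - obs
    return {
        "SSR": np.sum(err ** 2),
        "SAR": np.sum(np.abs(err)),
        "SSRD": np.sum((err / obs) ** 2),
        "SARD": np.sum(np.abs(err / obs)),
    }

# A single extreme observation dominates SSR far more than SAR
obs = np.array([1.0, 2.0, 3.0, 50.0])
sim = np.array([1.2, 1.8, 3.3, 40.0])
print(objective_functions(obs, sim))
```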
