Similar Documents (20 results)
1.
Linear models are typically used to analyze multivariate longitudinal data. With these models, estimating the covariance matrix is not easy because it must account for complex correlation structures: the correlation between responses at each time point, the correlation within separate responses over time, and the cross-correlation between different responses at different times. In addition, the estimated covariance matrix should satisfy the positive definiteness condition, and it may be heteroscedastic. In practice, however, the structure of the covariance matrix is assumed to be homoscedastic and highly parsimonious, for example exchangeable or first-order autoregressive. These assumptions are too strong and result in inefficient estimates of the effects of covariates. Several studies have sought to relax these restrictions using modified Cholesky decomposition (MCD) and linear covariance models, but modeling the correlation between responses at each time point remains difficult because there is no natural ordering of the responses. In this paper, we use MCD and hypersphere decomposition to model the complex correlation structures of multivariate longitudinal data. The estimated covariance matrix obtained with these decompositions is positive definite, can be heteroscedastic, and is also interpretable. The proposed methods are illustrated using data from a nonalcoholic fatty liver disease study.
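As a rough illustration of why such decompositions guarantee positive definiteness, the sketch below (not the authors' code; all parameter values are invented, and only the MCD part is shown, the hypersphere treatment of correlations being analogous) builds a covariance matrix from completely unconstrained parameters:

```python
import numpy as np

def mcd_covariance(phi, log_d):
    """Build a covariance matrix from unconstrained MCD parameters.

    T is unit lower-triangular with generalized autoregressive
    parameters -phi below the diagonal; D = diag(exp(log_d)) holds
    innovation variances. Sigma = T^{-1} D T^{-T} is positive
    definite for *any* real phi and log_d.
    """
    p = len(log_d)
    T = np.eye(p)
    T[np.tril_indices(p, k=-1)] = -phi
    Tinv = np.linalg.inv(T)
    return Tinv @ np.diag(np.exp(log_d)) @ Tinv.T

rng = np.random.default_rng(0)
p = 4
phi = rng.normal(size=p * (p - 1) // 2)    # unconstrained reals
log_d = rng.normal(size=p)                 # unconstrained reals
Sigma = mcd_covariance(phi, log_d)
print(np.linalg.eigvalsh(Sigma))           # all eigenvalues > 0
```

Because every real parameter vector maps to a valid covariance matrix, estimation can proceed by unconstrained optimization.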

2.
Population dynamic models combine density dependence and environmental effects. Ignoring sampling uncertainty can lead to biased estimation of the strength of density dependence. This is typically addressed with state-space model approaches, which integrate sampling error and population process estimates. Such models seldom include an explicit link between the sampling procedures and the true abundance, as is common in capture-recapture settings. However, many of the models proposed to estimate abundance in the presence of capture heterogeneity lead to incomplete likelihood functions and cannot be straightforwardly included in state-space models. We assessed the importance of estimating sampling error explicitly by taking an intermediate approach between ignoring uncertainty in abundance estimates and fully specified state-space models for density-dependence estimation based on autoregressive processes. First, we estimated individual capture probabilities based on a heterogeneity model for a closed population, using a conditional multinomial likelihood, followed by a Horvitz-Thompson estimate of abundance. Second, we estimated coefficients of autoregressive models for the log abundance. Inference was performed with integrated nested Laplace approximation (INLA). We performed an extensive simulation study to compare our approach with estimates disregarding capture-history information, using the R package VGAM, for different parameter specifications. The methods were then applied to a real data set of gray-sided voles (Myodes rufocanus) from Northern Norway. We found that density-dependence estimation was improved when sampling error was modelled explicitly in scenarios with low process variances, where differences in coverage reached up to 8% in estimating the coefficients of the autoregressive processes. In this case, the bias also increased when a Poisson distribution was assumed in the observational model. For high process variances, the differences between methods were small, and modelling heterogeneity appeared less important.
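A minimal sketch of the first stage, the Horvitz-Thompson abundance estimate under capture heterogeneity (a hypothetical two-class mixture; for brevity the true capture probabilities are plugged in, where a real analysis would use probabilities estimated from the capture histories via the conditional multinomial likelihood):

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 200, 5                     # true abundance, capture occasions
# heterogeneity: two latent classes with different capture probability
p_ind = rng.choice([0.1, 0.4], size=N)
caught = rng.random((N, T)) < p_ind[:, None]
detected = caught.any(axis=1)

# Horvitz-Thompson: weight each detected animal by the inverse of its
# probability of being caught at least once (here we cheat and use the
# true p; in practice p_ind is estimated from the capture histories)
p_detect = 1 - (1 - p_ind[detected]) ** T
N_hat = np.sum(1.0 / p_detect)
print(N_hat)                      # close to N on average
```

The log of such abundance estimates would then feed the autoregressive model in the second stage.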

3.
Statistical methods used in the analysis of point patterns are discussed, with special attention to their application in ecological research. Some new procedures are presented that seem more compatible with the needs of the ecologist. It is pointed out that patterns can usually be described in terms of an appropriate trend surface as well as in terms of mutual interactions. This circumstance limits the value of point-pattern analysis for ecological research in tracing the mechanisms behind the distribution of individuals. After a discussion of current sampling designs for point patterns, the estimation of the local intensity is treated. Although the so-called distance method has received considerable attention in this respect, it is argued that this method is not very appropriate for the purpose. For two sampling designs, it is illustrated how to estimate functions that describe density variation in the field. Further, a procedure is proposed that estimates the covariance curve as well as the total amount of interaction in the pattern; the relation of this statistic to the covariance curve is pointed out. An improvement of the well-known Greig-Smith method, i.e. the estimation of the variance curve, is also proposed. The estimation procedures are illustrated by three examples from the field (dispersal patterns of barnacles, anemones, and glassworts, all belonging to low-structured communities), presented in the Appendix. Monte Carlo methods are used to study the properties of some of the statistical procedures. This paper formed part of a Ph.D. thesis, State Univ. Leiden, May 1977.
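As a flavour of the Monte Carlo approach mentioned above, here is a toy test of complete spatial randomness (CSR) based on mean nearest-neighbour distance (simulated stand-in data; not one of the thesis procedures):

```python
import numpy as np

def mean_nn_distance(pts):
    """Mean distance from each point to its nearest neighbour."""
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return d.min(axis=1).mean()

rng = np.random.default_rng(2)
obs = rng.random((50, 2))                  # stand-in for a mapped plot
stat_obs = mean_nn_distance(obs)

# Monte Carlo envelope under CSR: simulate many random patterns and
# rank the observed statistic among them (small values suggest clustering)
sims = np.array([mean_nn_distance(rng.random((50, 2))) for _ in range(999)])
p_value = (np.sum(sims <= stat_obs) + 1) / 1000
print(stat_obs, p_value)
```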

4.
5.
Based on the approach of McLachlan (1977), a procedure for conditional and common estimation of the classification error in discriminant analysis is described for k ≥ 2 classes. As a rapid procedure for large sample sizes and numbers of features, a modification of the resubstitution method is proposed that is favourable with respect to computing time. Both methods provide useful estimates of the probability of misclassification. In calculating the weighting function w, deviations from the preconditions known from MANOVA, such as skewness, truncation, or inequality of the covariance matrices, hardly play any role; it appears that only variation in the class sample sizes substantially influences the weighting functions. The error rates of the tested error-estimation methods likewise depend on the class sample sizes. Violations of the preconditions mentioned above result in different variations of the error estimates, depending on these sample sizes. A comparison between error estimation and allocation relative to a simulated population demonstrates the quality of the error-estimation procedures used.
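The optimism of plain resubstitution, which motivates modified and weighted error estimators such as those studied here, shows up in a small sketch (simulated Gaussian classes and scikit-learn's LDA; this is not the paper's modified procedure, and all sizes are illustrative):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(3)
k, n, p = 3, 30, 10                       # classes, size per class, features
means = rng.normal(scale=1.0, size=(k, p))
X = np.vstack([rng.normal(m, 1.0, size=(n, p)) for m in means])
y = np.repeat(np.arange(k), n)

lda = LinearDiscriminantAnalysis().fit(X, y)
resub = np.mean(lda.predict(X) != y)      # resubstitution: optimistic

# a large independent sample approximates the true conditional error
Xtest = np.vstack([rng.normal(m, 1.0, size=(1000, p)) for m in means])
ytest = np.repeat(np.arange(k), 1000)
true_err = np.mean(lda.predict(Xtest) != ytest)
print(resub, true_err)                    # typically resub < true_err
```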

6.
The Cox point process is a model class for hierarchical modelling of systems of non-interacting points in R^d under environmental heterogeneity, with the heterogeneity modelled through a random intensity function. In this work a class of Cox processes is suggested in which the random intensity is generated by a random closed set. Such heterogeneity appears, for example, in forestry, where silvicultural treatments like harvesting and site preparation create geometrical patterns of tree-density variation in two distinct phases. In this paper the second-order property, important both in data analysis and in the context of spatial sampling, is derived. The usefulness of the random-set-generated Cox process is greatly increased if it is observed, for each point, whether or not it is included in the random set. This additional information is easy and economical to obtain in many cases and is hence of practical value; it leads to marks for the points. The resulting random-set-marked Cox process is a marked point process in which the marks are intensity-dependent. The problem with set-marking is that the marks are not a representative sample from the random set. This paper derives the second-order property of the random-set-marked Cox process and suggests a practical estimation method for the area fraction and covariance of the random set and for the point densities within and outside the random set. A simulated example and a forestry example are given.
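A minimal simulation sketch of a set-marked Cox process, under an assumed Boolean-model random set and hypothetical intensities; the final comment illustrates why the marks alone are not a representative sample of the set:

```python
import numpy as np

rng = np.random.default_rng(4)
# random closed set: Boolean model of discs in the unit square
centers = rng.random((rng.poisson(10), 2))
radius = 0.08
def in_set(x):
    return bool((np.linalg.norm(x - centers, axis=-1) < radius).any())

lam_in, lam_out = 400.0, 50.0             # point densities inside/outside
n = rng.poisson(lam_in)                   # dominate with the larger rate
pts = rng.random((n, 2))
member = np.array([in_set(p) for p in pts])
keep = rng.random(n) < np.where(member, 1.0, lam_out / lam_in)
pts, marks = pts[keep], member[keep]      # set-marked Cox process

# naive mark-based fraction overshoots the true area fraction of the
# set, because points oversample the high-intensity region
print(marks.mean())
```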

7.
Growing interest in adaptive evolution in natural populations has spurred efforts to infer genetic components of variance and covariance of quantitative characters. Here, I review difficulties inherent in the usual least-squares methods of estimation. A useful alternative approach is that of maximum likelihood (ML). Its particular advantage over least squares is that estimation and testing procedures are well defined, regardless of the design of the data. A modified version of ML, restricted maximum likelihood (REML), eliminates the bias of ML estimates of variance components. Expressions for the expected bias and variance of estimates obtained from balanced, fully hierarchical designs are presented for ML and REML. Analyses of data simulated from balanced, hierarchical designs reveal differences in the properties of ML, REML, and F-ratio tests of significance. A second simulation study compares properties of REML estimates obtained from a balanced, fully hierarchical design (within-generation analysis) with those from a sampling design including phenotypic data on parents and multiple progeny. It also illustrates the effects of imposing nonnegativity constraints on the estimates. Finally, it reveals that predictions of the behavior of significance tests based on asymptotic theory are not accurate when sample size is small and that constraining the estimates seriously affects the properties of the tests. Because of their great flexibility, likelihood methods can serve as a useful tool for estimation of quantitative-genetic parameters in natural populations. Difficulties involved in hypothesis testing remain to be solved.
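For the simplest balanced one-way (e.g., full-sib family) design, the ML-versus-REML contrast reduces to a degrees-of-freedom correction; a simulation sketch with assumed variance components (a schematic of the textbook estimators, not the review's analyses):

```python
import numpy as np

rng = np.random.default_rng(5)
s, n = 20, 5                      # families, offspring per family
var_b, var_w = 1.0, 4.0           # between/within family variance

def estimates():
    u = rng.normal(0, np.sqrt(var_b), s)
    y = u[:, None] + rng.normal(0, np.sqrt(var_w), (s, n))
    gm, fm = y.mean(), y.mean(axis=1)
    ssb = n * np.sum((fm - gm) ** 2)
    msw = np.sum((y - fm[:, None]) ** 2) / (s * (n - 1))
    reml = (ssb / (s - 1) - msw) / n     # REML = ANOVA estimator here
    ml = (ssb / s - msw) / n             # ML divides by s, not s - 1,
    return reml, ml                      # ignoring the estimated mean

est = np.array([estimates() for _ in range(2000)])
print(est.mean(axis=0))           # REML ~ 1.0 (unbiased); ML biased down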

8.
This paper studies time-dependent power spectral density (PSD) estimation of nonstationary surface electromyography (SEMG) signals and its application to fatigue analysis during isometric muscle contraction. Conventional time-dependent PSD estimation methods exhibit large variability in estimating instantaneous SEMG parameters, so they often fail to identify the changing patterns of short-period SEMG signals and to gauge the extent of fatigue in specific muscle groups. To address this problem, a time-varying autoregressive (TVAR) model is proposed to describe the SEMG signal, and the recursive least-squares (RLS) and basis function expansion (BFE) methods are used to estimate the model coefficients and the time-dependent PSD. The instantaneous parameters extracted from the PSD estimates are evaluated and compared in terms of reliability, accuracy, and complexity. Experimental results on synthesized and real SEMG data show that the proposed TVAR-model-based PSD estimators achieve more stable and precise instantaneous parameter estimation than conventional methods.
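A compact sketch of the RLS route: track TVAR coefficients with a forgetting factor and read off a parametric PSD. White noise stands in for an SEMG epoch, and the model order, forgetting factor, and initialization are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def rls_tvar(x, order=2, lam=0.98):
    """Track time-varying AR coefficients with RLS (forgetting factor lam)."""
    n = len(x)
    a = np.zeros(order)
    P = np.eye(order) * 1e3               # large initial covariance
    coeffs = np.zeros((n, order))
    for t in range(order, n):
        phi = x[t - order:t][::-1]        # past samples, newest first
        k = P @ phi / (lam + phi @ P @ phi)
        a += k * (x[t] - phi @ a)         # prediction-error update
        P = (P - np.outer(k, phi @ P)) / lam
        coeffs[t] = a
    return coeffs

def ar_psd(a, sigma2, freqs):
    """Parametric PSD of an AR model at normalized frequencies."""
    z = np.exp(-2j * np.pi * np.outer(freqs, np.arange(1, len(a) + 1)))
    return sigma2 / np.abs(1 - z @ a) ** 2

rng = np.random.default_rng(6)
x = rng.normal(size=2000)                 # stand-in for an SEMG epoch
coeffs = rls_tvar(x)
print(ar_psd(coeffs[-1], 1.0, np.linspace(0, 0.5, 5)))
```

Instantaneous parameters such as mean or median frequency would then be computed from each time slice of the PSD.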

9.
This study compared spontaneous baroreflex sensitivity (BRS) estimates obtained from an identical set of data by 11 European centers using different methods and procedures. Noninvasive blood pressure (BP) and ECG recordings were obtained in 21 subjects, including 2 subjects with established baroreflex failure. Twenty-one estimates of BRS were obtained, covering the two main techniques of BRS estimation, i.e., spectral analysis (11 procedures) and the sequence method (7 procedures), as well as one trigonometric regressive spectral analysis method (TRS), one exogenous model with autoregressive input (X-AR), and one Z method. With subjects in the supine position, BRS estimates obtained from the alpha-coefficient or the transfer-function gain in either the low-frequency or high-frequency band, from TRS, and from sequence methods gave strongly related results. Conversely, the weighted gain, X-AR, and Z methods showed lower agreement with all the other techniques. In addition, using mean BP instead of systolic BP in the sequence method weakened the relationships with the other estimates. Some procedures were unable to provide results when BRS estimates were expected to be very low (in the patients with established baroreflex failure); this failure to provide BRS values was due to algorithmic parameters being set too strictly. The discrepancies between procedures show that the choice of parameters and data handling should be considered before BRS estimation. These data are available on the web site (http://www.cbi.polimi.it/glossary/eurobavar.html) to allow the comparison of new techniques with this set of results.
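A schematic implementation of the sequence method on synthetic beat-to-beat data. The minimum run length of 3 beats is a common convention; the change thresholds (e.g., 1 mmHg, 5 ms) used by real procedures are omitted, and none of the EuroBaVar centers' exact settings are reproduced:

```python
import numpy as np

def brs_sequence(sbp, rr, min_len=3):
    """Sequence-method BRS: mean RR-vs-SBP slope over spontaneous
    sequences of >= min_len beats where both rise or both fall."""
    d_s, d_r = np.diff(sbp), np.diff(rr)
    concordant = (np.sign(d_s) == np.sign(d_r)) & (d_s != 0) & (d_r != 0)
    slopes, start = [], None
    for i, ok in enumerate(np.append(concordant, False)):
        if ok and start is None:
            start = i
        elif not ok and start is not None:
            if i - start + 1 >= min_len:          # run length in beats
                seg = slice(start, i + 1)
                slopes.append(np.polyfit(sbp[seg], rr[seg], 1)[0])
            start = None
    return np.mean(slopes) if slopes else np.nan  # nan mimics "no result"

rng = np.random.default_rng(7)
sbp = 120 + np.cumsum(rng.normal(0, 1, 300))      # synthetic beat series
rr = 800 + 10 * (sbp - 120) + rng.normal(0, 5, 300)
print(brs_sequence(sbp, rr))                      # ~10 ms/mmHg
```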

10.
Leaf area index (L) is a critical variable in monitoring and modelling forest condition and growth, and it is therefore important for foresters and environmental scientists to measure it routinely and accurately. We compared three different methods for estimating L: a plant canopy analyser (PCA), a point-quadrat camera method, and digital hemispherical photography, in a native eucalypt forest canopy at Tumbarumba in southern New South Wales, Australia. All of these methods produce indirect estimates of L based on the close coupling between radiation penetration and canopy structure. The individual L estimates were compared, and the potential advantages and disadvantages of each method are discussed in relation to use in forest inventory and in field data collection programmes for remote-sensing calibration and verification. The comparison indicated that all three methods, PCA, digital hemispherical photography and the modified point-quadrat camera method, produced similar estimates with a standard error between techniques of less than 0.2 L units. All methods, however, provided biased estimates of L, and calibration is required to derive true stand L. A key benefit of all of these estimation methods is that observations can be collected in a short period of time (1-2 h of fieldwork per plot).
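All three instruments ultimately invert a gap-fraction relation of Beer-Lambert form; a one-line sketch with an assumed extinction coefficient (the real instruments integrate over view zenith angles, and the result is an uncalibrated effective L, consistent with the bias noted above):

```python
import numpy as np

def lai_from_gap_fraction(gap_fraction, k=0.5):
    """Invert Beer-Lambert: gap fraction P = exp(-k * L)  =>  L = -ln(P)/k.
    k is an extinction coefficient that depends on the leaf angle
    distribution; 0.5 corresponds to a spherical distribution."""
    return -np.log(gap_fraction) / k

# e.g. 20% of hemispherical-photo pixels classified as sky below the canopy
print(lai_from_gap_fraction(0.20))   # ~3.2
```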

11.
Karin Meyer and Mark Kirkpatrick, Genetics (2010) 185(3): 1097-1110
Obtaining accurate estimates of the genetic covariance matrix G for multivariate data is a fundamental task in quantitative genetics and important for both evolutionary biologists and plant or animal breeders. Classical methods for estimating G are well known to suffer from substantial sampling errors; importantly, its leading eigenvalues are systematically overestimated. This article proposes a framework that exploits information in the phenotypic covariance matrix P in a new way to obtain more accurate estimates of G. The approach focuses on the "canonical heritabilities" (the eigenvalues of P⁻¹G), which may be estimated with more precision than those of G because P is estimated more accurately. Our method uses penalized maximum likelihood and shrinkage to reduce bias in estimates of the canonical heritabilities. This in turn yields substantial reductions in bias for estimates of the eigenvalues of G and a reduction in sampling errors for estimates of G itself. Simulations show that improvements are greatest when sample sizes are small and the canonical heritabilities are closely spaced. An application to data from beef cattle demonstrates the efficacy of this approach and the effect on estimates of heritabilities and correlations. Penalized estimation is recommended for multivariate analyses involving more than a few traits or problems with limited data.

Quantitative geneticists, including evolutionary biologists and plant and animal breeders, are increasingly dependent on multivariate analyses of genetic variation, for example, to understand evolutionary constraints and design efficient selection programs. New challenges arise when one moves from estimating the genetic variance of a single phenotype to the multivariate setting. An important but unresolved issue is how best to deal with sampling variation and the corresponding bias in the eigenvalues of estimates for the genetic covariance matrix, G. It is well known that estimates for the largest eigenvalues of a covariance matrix are biased upward and those for the smallest eigenvalues are biased downward (Lawley 1956; Hayes and Hill 1981). For genetic problems, where we need to estimate at least two covariance matrices simultaneously, this tends to be exacerbated, especially for G. In turn, this can result in invalid estimates of G, i.e., estimates with negative eigenvalues, and can produce systematic errors in predictions for the response to selection.

There has been longstanding interest in "regularization" of covariance matrices, in particular for cases where the ratio between the number of observations and the number of variables is small. Various studies have recently employed such techniques for the analysis of high-dimensional genomic data. In general, this involves a compromise between additional bias and reduced sampling variation of "improved" estimators that have less statistical risk than standard methods (Bickel and Li 2006). For instance, various types of shrinkage estimators of covariance matrices have been suggested that counteract bias in estimates of eigenvalues by shrinking all sample eigenvalues toward their mean. Often this is equivalent to a weighted combination of the sample covariance matrix and a target matrix assumed to have a simple structure. A common choice for the latter is an identity matrix, which yields a ridge regression type formulation (Hoerl and Kennard 1970).
Numerous simulation studies in a variety of settings are available, which demonstrate that regularization can yield closer agreement between estimated and population covariance matrices, less variable estimates of model terms, or improved performance of statistical tests.

In quantitative genetic analyses, we attempt to partition observed, overall (phenotypic) covariances into their genetic and environmental components. Typically, this results in strong sampling correlations between them. Hence, while the partitioning into sources of variation and estimates of individual covariance matrices may be subject to substantial sampling variances, their sum, i.e., the phenotypic covariance matrix, can generally be estimated much more accurately. This has led to suggestions to "borrow strength" from estimates of phenotypic components to estimate the genetic covariances. In particular, Hayes and Hill (1981) proposed a method termed "bending" that involved regressing the eigenvalues of the product of the genetic and the inverse of the phenotypic covariance matrix toward their mean. One objective of this procedure was to ensure that estimates of the genetic covariance matrix from an analysis of variance were positive definite. In addition, the authors showed by simulation that shrinking eigenvalues even further than needed to make all values nonnegative could improve the achieved response to selection when using the resulting estimates to derive weights for a selection index, especially for estimation based on small samples. Subsequent work demonstrated that bending could also be advantageous in more general scenarios such as indexes that included information from relatives (Meyer and Hill 1983).

Modern, mixed model ("animal model")-based analyses to estimate genetic parameters using maximum likelihood or Bayesian methods generally constrain estimates to the parameter space, so that—at the expense of introducing some bias—estimates of covariance matrices are positive semidefinite. However, the problems arising from substantial sampling variation in multivariate analyses remain. In spite of increasing applications of such analyses in scenarios where data sets are invariably small, e.g., the analysis of data from natural populations (e.g., Kruuk et al. 2008), there has been little interest in regularization and shrinkage techniques in genetic parameter estimation, other than through the use of informative priors in a Bayesian context. Instead, suggestions for improved estimation have focused on parsimonious modeling of covariance matrices, e.g., through reduced rank estimation or by imposing a known structure, such as a factor-analytic structure (Kirkpatrick and Meyer 2004; Meyer 2009), or by fitting covariance functions for longitudinal data (Kirkpatrick et al. 1990). While such methods can be highly advantageous when the underlying assumptions are at least approximately correct, data-driven methods of regularization may be preferable in other scenarios.

This article explores the scope for improved estimation of genetic covariance matrices by implementing the equivalent to bending within animal model-type analyses. We begin with a review of the underlying statistical principles (which the impatient reader might skip), examining the concept of improved estimation, its implementation via shrinkage estimators or penalized estimation, and selected applications.
We then describe a penalized restricted maximum-likelihood (REML) procedure for the estimation of genetic covariance matrices that utilizes information from their phenotypic counterparts, and present a simulation study demonstrating the effect of penalties on parameter estimates and their sampling properties. The article concludes with an application to a problem relevant in genetic improvement of beef cattle and a discussion.
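A numerical sketch of the "bending" idea that the penalty builds on: shrink the canonical eigenvalues of the pencil (G, P) toward their mean and rebuild G. The toy matrices are invented, and this is Hayes and Hill's original bending, not the penalized REML algorithm itself:

```python
import numpy as np
from scipy.linalg import eigh

def bend(G, P, rho=0.3):
    """Shrink the canonical heritabilities (eigenvalues of P^{-1} G)
    toward their mean by factor rho and rebuild G; for rho large enough
    the bent G is positive definite."""
    lam, Q = eigh(G, P)               # generalized EVD: G q = lam P q
    lam_bent = (1 - rho) * lam + rho * lam.mean()
    # Q is P-orthonormal (Q.T P Q = I), so G = P Q diag(lam) Q.T P
    return P @ Q @ np.diag(lam_bent) @ Q.T @ P

G = np.array([[1.0, 0.9], [0.9, 0.5]])   # indefinite "estimate" of G
P = np.array([[2.0, 0.5], [0.5, 2.0]])
G_bent = bend(G, P, rho=0.5)
print(np.linalg.eigvalsh(G_bent))         # both eigenvalues now positive
```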

12.
Although previous research and theory have suggested that wild turkey (Meleagris gallopavo) populations may be subject to some form of density dependence, there has been no effort to estimate and incorporate a density-dependence parameter into wild turkey population models. To estimate a functional relationship for density dependence in wild turkey, we analyzed a set of harvest-index time series from 11 state wildlife agencies. We tested for lagged correlations between annual harvest indices using partial autocorrelation analysis. We assessed the ability of the density-dependent theta-Ricker model to explain harvest indices over time relative to exponential and random-walk growth models. We tested the homogeneity of the density-dependence parameter estimates (θ) from 3 different harvest indices (spring harvest number, reported harvest/effort, and survey harvest/effort) and calculated a weighted average based on each estimate's variance and its estimated covariance with the other indices. To estimate the potential bias in parameter estimates arising from measurement error, we conducted a simulation study using the theta-Ricker with known values and lognormally distributed measurement error. Partial autocorrelation analysis indicated that harvest indices were significantly correlated only with their value at the previous time step. The theta-Ricker model performed better than the exponential growth or random-walk models for all 3 indices. Simulation with known parameters and measurement error indicated a strong upward bias in the density-dependence parameter estimate as measurement error increased. The average density-dependence estimate, corrected for measurement error, ranged over 0.25 ≤ θC ≤ 0.49, depending on the amount of measurement error and the assumed spring harvest rate. We infer that density dependence is nonlinear in wild turkey, with growth rates maximized at 39-42% of carrying capacity. The annual yield produced by density-dependent population growth will tend to be less than that caused by extrinsic environmental factors. This study indicates that both density-dependent and density-independent processes are important to wild turkey population growth, and we make initial suggestions on incorporating both into harvest management strategies.
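A sketch of fitting the theta-Ricker model to log growth rates (hypothetical parameter values and lognormal process noise; unlike the harvest indices analyzed here, no measurement error is simulated, so no bias correction is needed):

```python
import numpy as np
from scipy.optimize import curve_fit

def log_growth(N, r, K, theta):
    """theta-Ricker: log(N_{t+1}/N_t) = r * (1 - (N_t/K)**theta)."""
    return r * (1.0 - (N / K) ** theta)

rng = np.random.default_rng(8)
N = np.empty(60); N[0] = 50.0
r, K, theta = 0.4, 500.0, 0.35            # hypothetical true values
for t in range(59):                        # simulate with process noise
    N[t + 1] = N[t] * np.exp(log_growth(N[t], r, K, theta)
                             + rng.normal(0, 0.05))

y = np.log(N[1:] / N[:-1])
est, _ = curve_fit(log_growth, N[:-1], y, p0=[0.5, 400.0, 1.0],
                   bounds=([0.01, 10.0, 0.01], [2.0, 5000.0, 5.0]))
print(est)                                 # roughly [0.4, 500, 0.35]

# under the continuous theta-logistic approximation, surplus production
# peaks at N = K * (1 + theta)**(-1/theta); for theta = 0.35 this is
# about 0.42 K, consistent with the 39-42% range reported above
print(K * (1 + theta) ** (-1 / theta))
```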

13.
K. Meyer and M. Kirkpatrick, Genetics (2008) 180(2): 1153-1166
Eigenvalues and eigenvectors of covariance matrices are important statistics for multivariate problems in many applications, including quantitative genetics. Estimates of these quantities are subject to different types of bias. This article reviews and extends the existing theory on these biases, considering a balanced one-way classification and restricted maximum-likelihood estimation. Biases are due to the spread of sample roots and arise from ignoring selected principal components when constraints are imposed on the parameter space to ensure positive semidefinite estimates or to estimate covariance matrices of a chosen, reduced rank. In addition, it is shown that reduced-rank estimators that consider only the leading eigenvalues and eigenvectors of the "between-group" covariance matrix may be biased due to selection of the wrong subset of principal components. In a genetic context, with groups representing families, this bias is inversely proportional to the degree of genetic relationship among family members, but is independent of sample size. Theoretical results are supplemented by a simulation study, demonstrating close agreement between predicted and observed bias for large samples. It is emphasized that the rank of the genetic covariance matrix should be chosen sufficiently large to accommodate all important genetic principal components, even though, paradoxically, this may require including a number of components with negligible eigenvalues. A strategy for rank selection in practical analyses is outlined.
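The "spread of sample roots" is easy to reproduce: with a true covariance equal to the identity (all eigenvalues 1), the leading sample eigenvalues come out well above 1 and the smallest well below. A minimal demonstration (generic sample covariances, not the REML setting of the paper):

```python
import numpy as np

rng = np.random.default_rng(9)
p, n, reps = 10, 25, 2000
eig = np.zeros((reps, p))
for r in range(reps):
    X = rng.normal(size=(n, p))            # true covariance = identity
    eig[r] = np.sort(np.linalg.eigvalsh(np.cov(X.T)))[::-1]
print(eig.mean(axis=0))                    # largest >> 1, smallest << 1
```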

14.
Y. Guan, Biometrics (2011) 67(3): 926-936
We introduce novel regression-extrapolation-based methods to correct the often large bias in subsampling variance estimation, as well as in hypothesis testing, for spatial point and marked point processes. For variance estimation, our proposed estimators are linear combinations of the usual subsampling variance estimators based on subblock sizes in a continuous interval. We show that they can achieve better rates in mean squared error than the usual subsampling variance estimator. In particular, for n×n observation windows, the optimal rate of n⁻² can be achieved if the data have a finite dependence range. For hypothesis testing, we apply the proposed regression extrapolation directly to the test statistics based on different subblock sizes, and therefore avoid the need to conduct bias correction for each element of the covariance matrix used to set up the test statistics. We assess the numerical performance of the proposed methods through simulation and apply them to analyze a tropical forest data set.
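A schematic of the regression-extrapolation idea for the variance-estimation part only: subsampling estimates at several block sizes carry a bias of order one over the block size, so a linear fit in inverse block size, extrapolated to zero, removes the leading bias term. An AR(1) time series stands in for a spatial process, and a simple polyfit replaces the paper's estimator:

```python
import numpy as np

def subsample_var(x, l):
    """Subsampling variance of the mean from overlapping blocks of length l."""
    n = len(x)
    means = np.array([x[i:i + l].mean() for i in range(n - l + 1)])
    return l * means.var()

rng = np.random.default_rng(10)
phi = 0.6
e = rng.normal(size=20000)
x = np.empty_like(e); x[0] = e[0]          # AR(1): true long-run variance
for t in range(1, len(e)):                 #   = 1/(1-phi)^2 = 6.25
    x[t] = phi * x[t - 1] + e[t]

ls = np.array([50, 100, 200, 400])
v = np.array([subsample_var(x, l) for l in ls])
slope, intercept = np.polyfit(1.0 / ls, v, 1)
print(v, intercept)        # the 1/l -> 0 extrapolation reduces the bias
```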

15.
We present a nonparametric method for performing functional principal component analysis on functional curve data consisting of measurements of a random trajectory for a sample of subjects. This design typically involves an irregular grid of time points at which repeated measurements are taken for a number of subjects. We introduce shrinkage estimates for the functional principal component scores, which serve as the random effects in the model. Scatterplot smoothing methods are used to estimate the mean function and covariance surface of the model. We propose improved estimation in the neighborhood of, and at, the diagonal of the covariance surface, where the measurement errors are reflected. The presence of additive measurement errors motivates shrinkage estimates for the functional principal component scores. Shrinkage estimates are developed through best linear prediction and, in a generalized version, by aiming to minimize leave-one-curve-out prediction error. The estimation of individual trajectories combines data from that individual as well as from all other individuals. We apply our methods to new data on the level of 14C-folate in plasma as a function of time since dosing of healthy adults with a small tracer dose of 14C-folic acid. A time transformation was incorporated to handle design irregularity in the time points at which measurements were taken. The proposed methodology, incorporating shrinkage and data-adaptive features, is well suited for describing the population kinetics of 14C-folate-specific activity and the random effects, and can also be applied to other functional data analysis problems.
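The gain from shrinking scores by best linear prediction can be seen in one dimension: if a raw score estimate equals the true score plus noise, the best linear predictor multiplies it by the ratio of score variance to total variance. A toy check with assumed variances (not the paper's smoothing-based estimator):

```python
import numpy as np

rng = np.random.default_rng(11)
lam, v = 4.0, 1.0                 # score variance, estimation-noise variance
xi = rng.normal(0, np.sqrt(lam), 5000)        # true FPC scores
raw = xi + rng.normal(0, np.sqrt(v), 5000)    # noisy per-subject estimates

shrunk = lam / (lam + v) * raw    # best linear prediction of xi given raw
print(np.mean((raw - xi) ** 2), np.mean((shrunk - xi) ** 2))
# shrinkage lowers prediction MSE from ~v = 1.0 to ~lam*v/(lam+v) = 0.8
```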

16.
Despite the widespread recognition of the importance of monitoring, only a few studies have explored how estimates of vital rates and predictions of population dynamics change with additional data collected over the course of a monitoring program. We investigate how estimates of survival and individual growth, along with predictions about future population size, change with additional years of monitoring and data collected, using as a model system freshwater populations of marble trout (Salmo marmoratus), rainbow trout (Oncorhynchus mykiss), and brown trout (Salmo trutta L.) living in Western Slovenian streams. Fish were sampled twice a year between 2004 and 2015. We found that in 3 out of 4 populations, a few years of data (3 or 4 sampling occasions; between 300 and 500 tagged individuals for survival, 100-200 for growth) provided the same estimates of average survival and growth as those obtained with data from more than 15 sampling occasions, while estimating the range of survival (i.e., the difference, over all sampling occasions considered, between the maximum and minimum survival estimated in a sampling occasion) required more sampling occasions (up to 22 for marble trout), with little reduction of uncertainty around the point estimates. Predictions of mean density and of variation in density over time did not change with more data collected after the first 5 years (i.e., 10 sampling occasions) and overall were within 10% of the observed mean and variation in density over the whole monitoring program.

17.
Hypervolume approaches are used to quantify functional diversity and environmental niches for species distribution modelling. Recently, Qiao et al. (2016) criticized our geometrical kernel density estimation (KDE) method for measuring hypervolumes. They used a simulation analysis to argue that the method yields high error rates and makes biased estimates of fundamental niches. Here, we show that (a) KDE output depends in useful ways on dataset size and bias, (b) other species distribution modelling methods make equally stringent but different assumptions about dataset bias, (c) the simulation results presented by Qiao et al. (2016) were incorrect, with revised analyses showing performance comparable to other methods, and (d) hypervolume methods are more general than KDE and have other benefits for niche modelling. As a result, our KDE method remains a promising tool for species distribution modelling.
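A minimal sketch in the spirit of a KDE hypervolume (2-d Gaussian toy data; the 5% density threshold and the Monte Carlo volume step are illustrative choices, not the published algorithm):

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(12)
pts = rng.normal(size=(2, 300))            # 300 points in 2-d niche space
kde = gaussian_kde(pts)

# threshold at the 5% quantile of density evaluated at the data points,
# so the hypervolume encloses roughly 95% of observations
thr = np.quantile(kde(pts), 0.05)

# Monte Carlo volume of {x : f(x) >= thr} over a bounding box
lo, hi = pts.min(axis=1) - 1, pts.max(axis=1) + 1
u = rng.uniform(lo[:, None], hi[:, None], size=(2, 100000))
vol = np.prod(hi - lo) * np.mean(kde(u) >= thr)
print(vol)
```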

18.
Precise measures of population abundance and trend are needed for species conservation; these are most difficult to obtain for rare and rapidly changing populations. We compare uncertainty in densities estimated from spatio-temporal models with that from standard design-based methods. Spatio-temporal models allow us to target priority areas where, and times when, a population may most benefit. Generalised additive models were fitted to a 31-year time series of point-transect surveys of an endangered Hawaiian forest bird, the Hawai‘i ‘ākepa (Loxops coccineus). This allowed us to estimate bird densities over space and time. We used two methods to quantify uncertainty in density estimates from the spatio-temporal model: the delta method (which assumes independence between detection and distribution parameters) and a variance-propagation method. With the delta method we observed a 52% decrease in the width of the design-based 95% confidence interval (CI), while we observed a 37% decrease in CI width when propagating the variance. We mapped bird densities as they changed across space and time, allowing managers to evaluate management actions. Integrating detection-function modelling with spatio-temporal modelling exploits survey data more efficiently by producing finer-grained abundance estimates than are possible with design-based methods, as well as producing more precise abundance estimates. Model-based approaches require switching from making assumptions about the survey design to making assumptions about bird distribution, and such a switch warrants consideration. In this case the model-based approach benefits conservation planning through improved management efficiency and reduced costs, by taking into account both spatial shifts and temporal changes in population abundance and distribution.

19.
D. Gianola, R. L. Fernando, S. Im, and J. L. Foulley, Génome (1989) 31(2): 768-777
Conceptual aspects of estimation of genetic components of variance and covariance under selection are discussed, with special attention to likelihood methods. Certain selection processes are described and alternative likelihoods that can be used for analysis are specified. There is a mathematical relationship between the likelihoods that permits comparing the relative amount of information contained in them. Theoretical arguments and evidence indicate that point inferences made from likelihood functions are not affected by some forms of selection.

20.
The use of digital images for measuring plant species cover and species number in boreal forest vegetation was studied. Plant cover was estimated by manual delineation on photographs and subsequent digitized measurement of the areas; this value was regarded as the reference and compared with cover obtained by visual estimation, point frequency, and automatic image analysis methods. The automatic image analysis was based on scanned photographs, with supervised image classification in ERDAS software used to distinguish between the covers of different plant species. In comparing the ability of the methods to detect species, the visual estimation method gave values similar to the reference. The study material was collected from three sites along a heavy-metal pollution transect in western Finland. All four methods detected, in a similar manner, differences between plant species abundances along the transect. Compared with the reference, the digital images underestimated the cover of the lichens Cladina spp. and Cetraria islandica, but gave similar estimates for the dwarf shrubs Empetrum nigrum and Vaccinium vitis-idaea. The point frequency method overestimated the cover of all the species studied. The visual estimates of lichens were close to the reference, while the dwarf-shrub covers were overestimated. The number of species detected using supervised image analysis and the point frequency method was lower than with visual estimation. Visual estimation was faster, and its estimates were closer to the reference cover values, than the other methods. Digital images may be useful for detecting changes in selected species in vegetation with a simple vertical structure, but with taller, multilayered vegetation and higher species numbers the reliability of the cover estimates is lower.

