Similar Articles
20 similar articles found.
1.
Seaman SR, White IR, Copas AJ, Li L. Biometrics. 2012;68(1):129-137.
Two approaches commonly used to deal with missing data are multiple imputation (MI) and inverse-probability weighting (IPW). IPW is also used to adjust for unequal sampling fractions. MI is generally more efficient than IPW but more complex. Whereas IPW requires only a model for the probability that an individual has complete data (a univariate outcome), MI needs a model for the joint distribution of the missing data (a multivariate outcome) given the observed data. Inadequacies in either model may lead to important bias if large amounts of data are missing. A third approach combines MI and IPW to give a doubly robust estimator. A fourth approach (IPW/MI) combines MI and IPW but, unlike doubly robust methods, imputes only isolated missing values and uses weights to account for remaining larger blocks of unimputed missing data, such as would arise, for example, in a cohort study subject to sample attrition and/or unequal sampling fractions. In this article, we examine the performance, in terms of bias and efficiency, of IPW/MI relative to MI and IPW alone and investigate whether Rubin's rules variance estimator is valid for IPW/MI. We prove that Rubin's rules variance estimator is valid for IPW/MI for linear regression with an imputed outcome, we present simulations supporting the use of this variance estimator in more general settings, and we demonstrate that IPW/MI can have advantages over alternatives. IPW/MI is applied to data from the National Child Development Study.
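As an illustration of the pooling step for IPW/MI, here is a minimal Python sketch (not the authors' code) that runs a weighted least-squares fit per imputed dataset and combines the estimates with Rubin's rules; the helper name pool_ipw_mi, the simulated inputs, and the model-based within-imputation variance are simplifying assumptions.

```python
import numpy as np

def pool_ipw_mi(imputed_ys, X, w):
    """Weighted least squares per imputed dataset, pooled with Rubin's rules."""
    M = len(imputed_ys)
    W = np.diag(w)
    ests, wvars = [], []
    for y in imputed_ys:
        xtwx_inv = np.linalg.inv(X.T @ W @ X)
        beta = xtwx_inv @ X.T @ W @ y
        resid = y - X @ beta
        sigma2 = (w * resid**2).sum() / (w.sum() - X.shape[1])
        ests.append(beta)
        wvars.append(np.diag(sigma2 * xtwx_inv))  # model-based variance (simplified)
    ests, wvars = np.array(ests), np.array(wvars)
    beta_bar = ests.mean(axis=0)                  # pooled point estimate
    # Rubin's rules: within-imputation + (1 + 1/M) * between-imputation variance
    T = wvars.mean(axis=0) + (1 + 1 / M) * ests.var(axis=0, ddof=1)
    return beta_bar, T

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.normal(size=200)])
w = rng.uniform(0.5, 2.0, size=200)               # e.g. attrition/sampling weights
imputed = [X @ np.array([1.0, 2.0]) + rng.normal(size=200) for _ in range(5)]
beta, T = pool_ipw_mi(imputed, X, w)
print(beta.round(2), np.sqrt(T).round(3))
```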

2.
Marginal structural models (MSMs) have been proposed for estimating a treatment's effect in the presence of time-dependent confounding. We aimed to evaluate the performance of the Cox MSM in the presence of missing data and to explore methods to adjust for missingness. We simulated data with a continuous time-dependent confounder and a binary treatment. We explored two classes of missing data: (i) missed visits, which resemble clinical cohort studies; (ii) missing confounder values, which correspond to interval cohort studies. Missing data were generated under various mechanisms. In the first class, the source of the bias was the extreme treatment weights. Truncation or normalization improved estimation; therefore, particular attention must be paid to the distribution of weights, and truncation or normalization should be applied if extreme weights are noticed. In the second class, bias was due to misspecification of the treatment model. Last observation carried forward (LOCF), multiple imputation (MI), and inverse probability of missingness weighting (IPMW) were used to correct for the missingness. We found that the alternatives, especially the IPMW method, performed better than the classic LOCF method. Nevertheless, in situations with high marker variance and rarely recorded measurements, none of the examined methods adequately corrected the bias.
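Truncation and normalization of the weights are mechanical to apply. A minimal Python sketch follows, with assumed percentile cut-offs and simulated log-normal weights standing in for the extreme MSM weights described above.

```python
import numpy as np

def truncate(w, lower=1, upper=99):
    """Truncate weights at the given percentiles to tame extreme values."""
    lo, hi = np.percentile(w, [lower, upper])
    return np.clip(w, lo, hi)

def normalize(w):
    """Rescale weights to mean 1 so no subject dominates the pseudo-population."""
    return w / w.mean()

# Skewed weights such as might arise from a misbehaving treatment-weight model
w = np.random.default_rng(1).lognormal(mean=0.0, sigma=1.5, size=500)
print(f"max raw weight: {w.max():.1f}, after truncation: {truncate(w).max():.1f}")
print(f"mean after truncation + normalization: {normalize(truncate(w)).mean():.2f}")
```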

3.
Lee BK, Lessler J, Stuart EA. PLoS ONE. 2011;6(3):e18174.
Propensity score weighting is sensitive to model misspecification and to outlying weights that can unduly influence results. The authors investigated whether trimming large weights downward can improve the performance of propensity score weighting and whether the benefits of trimming differ by propensity score estimation method. In a simulation study, the authors examined the performance of weight trimming following logistic regression, classification and regression trees (CART), boosted CART, and random forests used to estimate propensity score weights. Results indicate that although misspecified logistic regression propensity score models yield increased bias and standard errors, weight trimming following logistic regression can improve the accuracy and precision of final parameter estimates. In contrast, weight trimming did not improve the performance of boosted CART and random forests. The performance of boosted CART and random forests without weight trimming was similar to the best performance obtainable from weight-trimmed propensity scores estimated by logistic regression. While trimming may be used to optimize propensity score weights estimated using logistic regression, the optimal level of trimming is difficult to determine. These results indicate that although trimming can improve inferences in some settings, in order to consistently improve the performance of propensity score weighting, analysts should focus on the procedures leading to the generation of weights (i.e., proper specification of the propensity score model) rather than relying on ad hoc methods such as weight trimming.
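A hedged sketch of the workflow compared in this study: estimate propensity scores with two of the learners, form inverse-probability weights, and trim at a percentile cap. The simulated data, the ATE-style weights, and the 99th-percentile cut-off are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))                      # simulated covariates
t = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))     # treatment depends on X[:, 0]

def ipw_weights(ps, t):
    """Inverse-probability-of-treatment weights for the ATE."""
    return np.where(t == 1, 1 / ps, 1 / (1 - ps))

def trim(w, pct=99):
    """Trim large weights down to the given percentile (weight trimming)."""
    return np.minimum(w, np.percentile(w, pct))

for name, model in [("logistic", LogisticRegression(max_iter=1000)),
                    ("boosted CART", GradientBoostingClassifier())]:
    ps = model.fit(X, t).predict_proba(X)[:, 1]
    w = ipw_weights(ps, t)
    print(name, "max weight:", w.max().round(1),
          "-> trimmed:", trim(w).max().round(1))
```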

4.
Statistical and biochemical studies of the genetic code have found evidence of nonrandom patterns in the distribution of codon assignments. It has, for example, been shown that the code minimizes the effects of point mutation or mistranslation: erroneous codons are either synonymous or code for an amino acid with chemical properties very similar to those of the one that would have been present had the error not occurred. This work has suggested that the second base of codons is less efficient in this respect, by about three orders of magnitude, than the first and third bases. These results are based on the assumption that all forms of error at all bases are equally likely. We extend this work to investigate (1) the effect of weighting transition errors differently from transversion errors and (2) the effect of weighting each base differently, depending on reported mistranslation biases. We find that if the bias affects all codon positions equally, as might be expected were the code adapted to a mutational environment with transition/transversion bias, then any reasonable transition/transversion bias increases the relative efficiency of the second base by an order of magnitude. In addition, if we employ weightings to allow for biases in translation, then only 1 in every million random alternative codes generated is more efficient than the natural code. We thus conclude not only that the natural genetic code is extremely efficient at minimizing the effects of errors, but also that its structure reflects biases in these errors, as might be expected were the code the product of selection.
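The error-cost calculation described here is straightforward to reproduce in outline. The Python sketch below scores the standard genetic code by the weighted mean squared change in Woese's polar requirement across all single-base substitutions, with separate transition and transversion weights; the polar requirement values are approximate and the scoring details are assumptions rather than the authors' exact procedure.

```python
BASES = "TCAG"
CODE = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
AA = {b1 + b2 + b3: CODE[16 * i + 4 * j + k]
      for i, b1 in enumerate(BASES) for j, b2 in enumerate(BASES)
      for k, b3 in enumerate(BASES)}

# Approximate Woese polar requirement values per amino acid (assumed data)
PR = {"F": 5.0, "L": 4.9, "I": 4.9, "M": 5.3, "V": 5.6, "S": 7.5, "P": 6.6,
      "T": 6.6, "A": 7.0, "Y": 5.4, "H": 8.4, "Q": 8.6, "N": 10.0, "K": 10.1,
      "D": 13.0, "E": 12.5, "C": 4.8, "W": 5.2, "R": 9.1, "G": 7.9}

TRANSITIONS = {("A", "G"), ("G", "A"), ("C", "T"), ("T", "C")}

def code_error(ts_weight=1.0, tv_weight=1.0, position=None):
    """Weighted mean squared polar-requirement change over all single-base
    substitutions, optionally restricted to one codon position."""
    total = wsum = 0.0
    for codon, aa in AA.items():
        for pos in ([position] if position is not None else range(3)):
            for b in BASES:
                if b == codon[pos]:
                    continue
                mutant = AA[codon[:pos] + b + codon[pos + 1:]]
                if aa == "*" or mutant == "*":
                    continue  # ignore changes to/from stop codons
                w = ts_weight if (codon[pos], b) in TRANSITIONS else tv_weight
                total += w * (PR[aa] - PR[mutant]) ** 2
                wsum += w
    return total / wsum

# Unweighted vs. a 5:1 transition/transversion weighting, per codon position
for pos in range(3):
    print("position", pos + 1, round(code_error(position=pos), 2),
          round(code_error(ts_weight=5, tv_weight=1, position=pos), 2))
```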

5.
A survey of fauna-focused papers in Restoration Ecology indicates that increased attention is being paid to this component of the biota. Although much of this work involves monitoring, a growing number of studies relate to the economic or ecological value of animals in restored land. There is still a bias toward vertebrates over invertebrates, although the proportion of invertebrate-focused papers is steadily increasing. Analysis of these papers suggests that greater synergy would have been obtained if standardized protocols had been used and, in the case of invertebrates, studies would have been more informative if species-level identifications had been obtained. Partnerships with industry should allow long-term studies to be performed, which would provide more reliable information than that yielded by chronosequence-type investigations.

6.
Purpose

Decisions based on life cycle sustainability assessment (LCSA) pose a multi-criteria decision issue, as impacts on the three different sustainability dimensions have to be considered which themselves are often measured through several indicators. To support decision-making at companies, a method to interpret multi-criteria assessment and emerging trade-offs would be beneficial. This research aims at enabling decision-making within LCSA by introducing weights to the sustainability dimensions.

Methods

To derive weights, 54 decision-makers of different functions at a German automotive company were asked via limit conjoint analysis how they ranked the economic, environmental, and social performance of a vehicle component. Results were evaluated for the entire sample and by functional clusters. Additionally, sustainability respondents, i.e., respondents that dealt with sustainability in their daily business, were contrasted with non-sustainability respondents. As a last step, the impact of outliers was determined. From this analysis, practical implications for ensuring company-optimal decision-making in regard to product sustainability were derived.

Results and discussion

The results showed a large spread in weighting without clear clustering. On average, all sustainability dimensions were considered almost equally important: the economic dimension was weighted at 33.5%, the environmental at 35.2%, and the social at 31.2%. Results were robust, as adjusting for outliers changed weights on average by less than 10%. Results by function showed low consistency within clusters, hinting that weighting is more a personal than a functional issue. Sustainability respondents weighted the social dimension ahead of the environmental and economic dimensions, while non-sustainability respondents put the economic dimension ahead of the other two. Provided that the results of this research can be generalized, the retrieved weighting set was seen as a good way to introduce weights into an operationalized LCSA framework, as it represents a quantification of the already existing decision process. Acceptance of this weighting set within the respective company was therefore expected to be greater.

Conclusions

It could be shown that conjoint analysis enabled decision-making within LCSA by introducing weights to solve a multi-criteria decision issue. Furthermore, implications for practitioners could be derived to ensure company-optimal decision-making related to product sustainability. Future research should look at expanding the sample size and geographical scope as well as investigating the weighting of indicators within sustainability dimensions and the drivers that influence personal decision-making in regard to weighting sustainability dimensions.
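Since the dimension weights here come from conjoint-derived part-worth utilities, a toy illustration may help. The Python sketch below is hypothetical (a simplified two-level conjoint exercise, not the study's limit conjoint analysis on real rankings): it dummy-codes eight profiles, regresses reversed ranks on the attributes, and converts part-worth ranges into relative dimension weights.

```python
import numpy as np

# Hypothetical conjoint data: each profile has a low/high level (0/1) on the
# economic, environmental, and social dimensions; one respondent ranks them.
profiles = np.array([[e, v, s] for e in (0, 1) for v in (0, 1) for s in (0, 1)])
ranks = np.array([8, 6, 5, 3, 7, 4, 2, 1])          # 1 = most preferred
utility = ranks.max() - ranks                        # higher = better

X = np.column_stack([np.ones(len(profiles)), profiles])
part_worths = np.linalg.lstsq(X, utility, rcond=None)[0][1:]  # drop intercept

# Relative importance = each dimension's part-worth range over the total range
ranges = np.abs(part_worths)          # with two levels, range == |part-worth|
weights = ranges / ranges.sum()
print(dict(zip(["economic", "environmental", "social"], weights.round(3))))
```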


7.
Breast cancer is the most common malignancy affecting women, and its incidence has been increasing in many countries. The aetiology of breast cancer is poorly understood, so there is concern as to which factors in our environment or lifestyle are responsible for the increase. There is a need for reliable risk assessment, which involves the steps of hazard identification, hazard evaluation, exposure evaluation and risk estimation. Short-term laboratory tests and long-term tests in animals are useful for priority-setting, but quantitative human risk assessment should preferably involve observations of humans. Epidemiological studies vary in the degree of reliance that can be placed on their results. The main types of epidemiological investigation are illustrated by recent examples from the literature on breast cancer. Careful judgement is required in assessing whether any association between a factor and a disease is likely to be causal. The injectable contraceptive, depot medroxyprogesterone acetate (DMPA, ‘Depo-Provera’), has been controversial because it caused malignant mammary tumours in beagle dogs. Two recent case-control studies found no overall association between DMPA and the risk of breast cancer in women. There was some evidence of increased risk in certain sub-groups of women, which could be interpreted with more confidence if there were a better understanding of the biology of human breast cancer. Nevertheless, the results do not support the prediction from beagle experiments that DMPA might increase the overall risk of breast cancer.

8.
An increasing number of software tools support designers and other decision makers in making design, production, and purchasing decisions. Some of these tools provide quantitative information on environmental impacts such as climate change, human toxicity, or resource use during the life cycle of these products. Very little is known, however, about how these tools are actually used, what kind of modeling and presentation approaches users really want, or whether the information provided is likely to be used the way the developers intended. A survey of users of one such software tool revealed that although users want more transparency, about half also want an easy-to-use tool and would accept built-in assumptions; that most users prefer modeling of environmental impacts beyond the stressor level, and the largest group of respondents wants results simultaneously on the stressor, impact potential, and damage level; and that although many users look for aggregated information on impacts and costs, a majority do not trust that such an aggregation is valid or believe that there are tradeoffs among impacts. Further, our results show that the temporal and spatial scales of single impact categories explain only about 6% of the variation in the weights between impact categories set by respondents if the weights are set first. If the weights are set after respondents specify temporal and spatial scales, however, these scales explain about 24% of the variation. These results not only help method and tool developers to reconsider some previous assumptions, but also suggest a number of research questions that may need to be addressed in a more focused investigation.

9.

Purpose

Weighting in Life Cycle Assessment (LCA) is a much-debated topic. Various tools have been used for weighting in LCA, Multi-Criteria Decision Analysis (MCDA) being one of the most common. However, it has not been thoroughly assessed how weight elicitation techniques of MCDA with different scales (interval and ratio), along with external and internal normalisation, affect weighting and subsequent results. The aim of this survey is to compare different techniques in an illustrative example in the building sector.

Methods

A panel of Nordic LCA experts completed six weighting exercises. The weight elicitation techniques compared were SWING, which is based on the interval scale, and the Simple Multi-Attribute Rating Technique (SMART) and the Analytic Hierarchy Process (AHP), which are based on the ratio scale. Information on the case study was provided to the panellists, along with characterised or normalised impact assessment scores. In the first weighting exercise, however, the panellists were not provided with any scores or background information and had to complete the weighting at a more general level. With the weights provided by the panel, the environmental impacts of three alternative house types were aggregated. The calculations were based on three well-grounded aggregation rules that are commonly used in the field of LCA or decision analysis.

Results and discussion

In the illustrative construction example, the choice of aggregation rule had the biggest impact on the results. The results differed across the six calculation methods: when externally normalised scores were applied, house type A was superior in most of the calculations, but when internal normalisation was used, house type C was superior. Using equal weights produced similar results. None of the panellists intuitively considered A the superior house type, but in some of the calculations this was indeed the case. Furthermore, the results suggest that the panellists completed the weighting on the basis of their general knowledge, without taking the features of the different weight elicitation techniques into account.

Conclusions

External normalisation provides information on the magnitude of impacts, and in some cases external normalisation may be a more influential factor than weighting. Based on the results, it cannot be stated which weight elicitation technique is the most suitable for LCA; rather, the method should be selected based on the aims and purpose of the study. Moreover, the elicitation questions should be explained with care to experts so that they interpret the questions as intended.
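Of the elicitation techniques named above, AHP is the most algorithmic, so a small illustration may be useful. The Python sketch below (an assumption-laden toy, not the survey's calculations) derives normalised weights for three impact categories from a hypothetical Saaty-scale pairwise-comparison matrix via the principal eigenvector, and computes the usual consistency ratio (random index 0.58 for a 3x3 matrix).

```python
import numpy as np

# Hypothetical AHP pairwise-comparison matrix over three impact categories:
# A[i, j] = how many times more important category i is than j (Saaty 1-9 scale)
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
principal = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
weights = principal / principal.sum()          # normalised AHP weights

# Consistency ratio CR = CI / RI, with RI approximately 0.58 for a 3x3 matrix
ci = (np.max(np.real(eigvals)) - len(A)) / (len(A) - 1)
print(weights.round(3), "CR =", round(ci / 0.58, 3))
```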

10.
1. The normalization of biochemical data to weight them appropriately for parameter estimation is considered, with particular reference to data from tracer kinetics and enzyme kinetics. If the data are in replicate, it is recommended that the sum of squared deviations for each experimental variable at each time or concentration point be divided by the local variance at that point. 2. If there is only one observation for each variable at each sampling point, normalization may still be required if the observations cover more than one order of magnitude, but there is no absolute criterion for judging the effect of the weighting that is produced. The goodness of fit produced by minimizing the weighted sum of squared deviations must be judged subjectively. It is suggested that the goodness of fit may be regarded as satisfactory if the data points are distributed uniformly on either side of the fitted curve. A chi-square test may be used to decide whether the distribution is abnormal. The proportion of the residual variance associated with points on one or other side of the fitted curve may also be taken into account, because this gives an indication of the sensitivity of the residual variance to movement of the curve away from particular data points. These criteria for judging the effect of weighting are valid only if the model equation may reasonably be expected to apply to all the data points. 3. On this basis, normalizing by dividing the deviation for each data point by the experimental observation, or by the equivalent value calculated from the model equation, can both be shown to produce a consistent bias for numerically small observations: the former biases the curve towards the smallest observations, while the latter tends to produce a curve that lies above the numerically smaller data points. It was found that dividing each deviation by the mean of the observed and calculated values appropriate to it produces a weighting that is fairly free from bias as judged by the criteria mentioned above. This normalization factor was tested on published data from both tracer kinetics and enzyme kinetics.
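The recommended normalization, dividing each deviation by the mean of the observed and calculated values, drops straight into a standard least-squares routine. Here is a minimal Python sketch on simulated first-order decay data (hypothetical values, not the paper's datasets).

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical tracer-kinetics data spanning about two orders of magnitude
t = np.linspace(0, 10, 11)
rng = np.random.default_rng(2)
y = 100 * np.exp(-0.5 * t) * (1 + 0.05 * rng.normal(size=t.size))

def model(p, t):
    return p[0] * np.exp(-p[1] * t)

def residuals(p):
    y_fit = model(p, t)
    # Normalize each deviation by the mean of the observed and calculated
    # values, the weighting the paper found fairly free from bias
    return (y - y_fit) / ((y + y_fit) / 2)

fit = least_squares(residuals, x0=[50.0, 0.1])
print(fit.x)  # recovered amplitude and rate constant
```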

11.
Errors in decision-making in animals can be partially explained by adaptive evolution, and error management theory holds that cognitive biases result from the asymmetric costs of false-positive and false-negative errors. Error rates that result from a cognitive bias may differ between the sexes. In addition, females are expected to have higher feeding rates than males because of the high energy requirements of gamete production. Thus, females may suffer relatively larger costs from false-negative errors (i.e. not feeding) than males, and female decisions would be biased to reduce these costs if the costs of false-positive errors are not as high. Females would consequently overestimate their capacity in relation to the probability of predation success. We tested this hypothesis using the Japanese pygmy squid Idiosepius paradoxus. Our results show that size differences between the squid and prey shrimp affected predatory attacks, and that predatory attacks succeeded more often when the predator was relatively larger than the prey. Nevertheless, compared to males, female squid frequently attacked even when they were relatively small compared to the prey, suggesting that the females overestimated their probability of success. However, if the females failed in the first attack, they subsequently adjusted their attack threshold: squid did not attack again if the prey was relatively large. These results suggest a sex-specific cognitive bias, that is, females showed skewed judgment in decision-making for the first predatory attack; they also show that squid can modify the threshold that determines whether they attack in subsequent encounters.

12.
This article investigates how environmental trade-offs are handled in life-cycle assessment (LCA) studies in some Nordic companies. Through interviews, the use and understanding of weighting methods in decision making was studied. The analysis shows that the decision makers require methods with which to aggregate and help interpret the complex information from life-cycle inventories. They agreed that it was not their own values that should be reflected in such methods, but they were found to have different opinions concerning the value basis that should be used. The analysis also investigates the difficulties arising from using such methods. The decision makers seemed to give a broader meaning to the term weighting, and were more concerned with the comparison between environmental and other aspects than the weighting of different environmental impacts. A conclusion is that decision makers need to be more involved in modeling and interpretation. The role of the analyst should be to interpret the information needs of the decision maker, and help him or her make methodological choices that are consistent with these needs and relevant from his or her point of view. To achieve this, it is important that decision makers do not view LCA as a highly standardized calculation tool, but as a flexible process of collecting, organizing, and interpreting environmental information. Such an approach to LCA increases the chances that the results will be regarded as relevant and useful.

13.
Why weight?     
Whether phylogenetic data should be differentially or equally weighted is currently debated. Further, if differential weighting is to be explored, there is no consensus among investigators as to which weighting scheme is most appropriate. Mitochondrial genome data offer a powerful tool for assessing differential weighting schemes because taxa can be selected for which a highly corroborated phylogeny is available (so that accuracy can be assessed), and it can be assumed that different data partitions share the same history (so that gene-sorting issues are not so problematic). Using mitochondrial data from 17 mammalian genomes, we evaluated the most commonly used weighting schemes, such as successive weighting, transversion weighting, codon-based weighting, and amino acid coding, and compared them to more complex weighting schemes including 6-parameter weighting, pseudoreplicate reweighting, and tri-level weighting. We found that the most commonly used weighting schemes perform the worst with these data. Some of the more complex schemes perform well; however, none of them is consistently superior. These results support one's biases: if one has a predilection to avoid differential weighting, these data support equally weighted parsimony and maximum likelihood; others might be encouraged by these results to try weighting as a form of data exploration.

14.

Introduction

Respondent-driven sampling (RDS) is a variant of a link-tracing design intended for generating unbiased estimates of the composition of hidden populations that typically involves giving participants several coupons to recruit their peers into the study. RDS may generate biased estimates if coupons are distributed non-randomly or if potential recruits present for interview non-randomly. We explore if biases detected in an RDS study were due to either of these mechanisms, and propose and apply weights to reduce bias due to non-random presentation for interview.

Methods

Using data from the total population, and from the population to whom recruiters offered their coupons, we explored how age and socioeconomic status were associated with being offered a coupon and, if offered a coupon, with presenting for interview. Population proportions were estimated by weighting by the assumed inverse probabilities of being offered a coupon (as in existing RDS methods) and, in addition, of presenting for interview if offered a coupon, with both probabilities stratified by age and socioeconomic status group.

Results

Younger men were under-recruited primarily because they were less likely to be offered coupons. The under-recruitment of higher-socioeconomic-status men was due in part to their being less likely to present for interview. Consistent with these findings, weighting for non-random presentation for interview by age and socioeconomic status group greatly improved the estimate of the proportion of men in the lowest socioeconomic group, reducing the root-mean-squared error of RDS estimates of socioeconomic status by 38%, but had little effect on estimates for age. The weighting also improved estimates for tribe and religion (reducing root-mean-squared errors by 19–29%) but had little effect for sexual activity or HIV status.

Conclusions

Data collected from recruiters on the characteristics of men to whom they offered coupons may be used to reduce bias in RDS studies. Further evaluation of this new method is required.
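A minimal sketch of the proposed double weighting follows, assuming per-respondent probabilities of being offered a coupon and of presenting for interview are available by group; the data below are simulated for illustration, not the study's.

```python
import numpy as np

rng = np.random.default_rng(3)
low_ses = rng.binomial(1, 0.4, size=2000)           # true proportion: 0.4
p_offer = np.where(low_ses == 1, 0.6, 0.5)          # hypothetical offer probabilities
p_present = np.where(low_ses == 1, 0.8, 0.5)        # higher-SES men present less often
recruited = rng.binomial(1, p_offer * p_present) == 1

x, po, pp = low_ses[recruited], p_offer[recruited], p_present[recruited]
w = 1 / (po * pp)                                   # double inverse-probability weight
print("unweighted:", x.mean().round(3),
      "weighted:", (np.sum(w * x) / np.sum(w)).round(3))
```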

15.
In this paper, we investigate K-group comparisons on survival endpoints for observational studies. In clinical databases for observational studies, treatments for patients are chosen with probabilities that vary depending on their baseline characteristics. This often results in noncomparable treatment groups because of imbalance in the baseline characteristics of patients among treatment groups. To overcome this issue, we conduct propensity analysis and match subjects with similar propensity scores across treatment groups, or compare weighted group means (or weighted survival curves for censored outcome variables) using inverse probability weighting (IPW). To this end, multinomial logistic regression has been a popular propensity analysis method for estimating the weights. We propose using the decision tree method as an alternative propensity analysis because of its simplicity and robustness. We also propose IPW rank statistics, called the Dunnett-type test and the ANOVA-type test, to compare three or more treatment groups on survival endpoints. Using simulations, we evaluate the finite sample performance of the weighted rank statistics combined with these propensity analysis methods. We demonstrate these methods with a real data example. The IPW method also allows for unbiased estimation of the population parameters of each treatment group. In this paper, we limit our discussion to survival outcomes, but all the methods can easily be modified for any type of outcome, such as binary or continuous variables.
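A rough sketch of the tree-based propensity step with IPW, using weighted group means in place of the paper's survival endpoints; the simulated data, the clipping constant, and the tree settings are arbitrary choices, not the authors'.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(4)
X = rng.normal(size=(1500, 3))                      # baseline covariates
logits = np.column_stack([np.zeros(1500), X[:, 0], -X[:, 0]])
p = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
group = np.array([rng.choice(3, p=pi) for pi in p])  # confounded 3-group treatment
y = X[:, 0] + rng.normal(size=1500)                 # outcome with no true group effect

tree = DecisionTreeClassifier(max_depth=4, min_samples_leaf=50).fit(X, group)
ps = tree.predict_proba(X)[np.arange(len(group)), group]  # P(own group | X)
ps = np.clip(ps, 0.05, None)                        # guard against empty leaves
w = 1.0 / ps                                        # inverse probability weights

for g in range(3):
    m = group == g
    print(g, "unweighted:", y[m].mean().round(2),
          "IPW:", (np.sum(w[m] * y[m]) / np.sum(w[m])).round(2))
```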

16.
Aquatic macroinvertebrates are commonly used biological indicators for assessing the health of freshwater ecosystems. However, counting all the invertebrates in the large samples that are usually collected for rapid site assessment is time-consuming and costly. Therefore, sub-sampling is often done with fixed-time or fixed-count live-sorting in the field, or with preserved material using sample splitters in the laboratory. We investigate the differences between the site assessments provided when the two sub-sampling approaches (Live-sort and Lab-sort) were used in conjunction with predictive bioassessment models. The samples showed a method bias: the Live-sort sub-samples tended to have more large, conspicuous invertebrates and often fewer small and/or cryptic animals, which were more likely to be found in Lab-sort samples, where a microscope was used. The Live-sort method recovered 4–6 more taxa than Lab-sorting in spring, but not in autumn. The magnitude of the significant differences between Live-sort and Lab-sort predictive model outputs (observed-to-expected, or O/E, taxa scores) for the same sites ranged from 0.12 to 0.53. These differences between the methods resulted in different assessments of some sites only, and the number of sites that were assessed differently depended on the season, with spring samples showing the most disparity. The samples may differ most in spring because many of the invertebrates are larger at that time (and thus are more conspicuous targets for live-sorters). Live-sort data cannot be run through a predictive model created from Lab-sort data (and vice versa) because of the taxonomic differences in sub-sample composition, so the sub-sampling methods must be standardized within and among studies if biological assessment is to provide valid comparisons of site condition. Assessments that rely on the live-sorting method may indicate that sites are ‘less impaired’ in spring compared to autumn because more taxa are retrieved in spring, when the animals are larger and more visible. Laboratory sub-sampling may return fewer taxa in spring, which may affect assessments relying on taxonomic richness.

17.
Mutagenicity studies have been used to identify specific agents as potential carcinogens or other human health hazards; however, they have been used minimally for risk assessment or in determining permissible levels of human exposure. The poor predictive value of in vitro mutagenesis tests for carcinogenic activity and a lack of mechanistic understanding of the roles of mutagens in the induction of specific cancers have made these tests unattractive for the purpose of risk assessment. However, the limited resources available for carcinogen testing and the large number of chemicals that need to be evaluated necessitate the incorporation of more efficient methods into the evaluation process. In vivo genetic toxicity testing can be recommended for this purpose because in vivo assays incorporate the metabolic activation pathways that are relevant to humans. We propose the use of a multiple-end-point in vivo comprehensive testing protocol (CTP) using rodents. Studies using sub-acute exposure to low levels of test agents, by routes consistent with human exposure, can be a useful adjunct to the methods currently used to provide data for risk assessment. Evaluations can include metabolic and pharmacokinetic endpoints, in addition to genetic toxicity studies, in order to provide a comprehensive examination of the mechanism of toxicity of the agent. A parallelogram approach can be used to estimate effects in non-accessible human tissues by using data from accessible human tissues and analogous tissues in animals. A categorical risk-assessment procedure can be used that would consider, in order of priority, genetic damage in man, genetic damage in animals that is highly relevant to disease outcome (mutation, chromosome damage), and data from animals that are of less certain relevance to disease. Action levels of environmental exposure would be determined based on the lowest observed effect levels or the highest observed no-effect levels, using sub-acute low-level exposure studies in rodents. As an example, the known genotoxic effects of benzene exposure at low levels in man and animals are discussed. The lowest genotoxic effects were observed at about 1–10 parts per million for man and 0.04–0.1 parts per million in subacute animal studies. If genetic toxicity is to achieve a prominent role in evaluating carcinogens and characterizing germ-cell mutagens, minimal testing requirements must be established to ascertain the risk associated with environmental mutagen exposure. The use of the in vivo approach described here should provide the information needed to meet this goal. In addition, it should allow truly epigenetic or non-genotoxic carcinogens to be distinguished from the genotoxic carcinogens that are not detected by in vitro methods.

18.
Greenland S. Biometrics. 2000;56(3):915-921.
Regression models with random coefficients arise naturally in both frequentist and Bayesian approaches to estimation problems. They are becoming widely available in standard computer packages under the headings of generalized linear mixed models, hierarchical models, and multilevel models. I here argue that such models offer a more scientifically defensible framework for epidemiologic analysis than the fixed-effects models now prevalent in epidemiology. The argument invokes an antiparsimony principle attributed to L. J. Savage, which is that models should be rich enough to reflect the complexity of the relations under study. It also invokes the countervailing principle that you cannot estimate anything if you try to estimate everything (often used to justify parsimony). Regression with random coefficients offers a rational compromise between these principles as well as an alternative to analyses based on standard variable-selection algorithms and their attendant distortion of uncertainty assessments. These points are illustrated with an analysis of data on diet, nutrition, and breast cancer.
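A random-coefficients regression of the kind advocated here can be fitted with off-the-shelf mixed-model routines. A minimal Python sketch with statsmodels on simulated grouped data follows (illustrative only, unrelated to the diet and breast cancer example).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n_groups, n_per = 30, 20
g = np.repeat(np.arange(n_groups), n_per)
x = rng.normal(size=n_groups * n_per)
slope = 0.5 + 0.3 * rng.normal(size=n_groups)   # random coefficient per group
y = 1.0 + slope[g] * x + rng.normal(size=g.size)
df = pd.DataFrame({"y": y, "x": x, "g": g})

# Random intercept and random slope for x within groups
model = smf.mixedlm("y ~ x", df, groups=df["g"], re_formula="~x").fit()
print(model.summary())
```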

19.
Applications for the commercial release of herbicide-resistant crops, most of them transgenic, are likely to become more frequent in the coming years. The ecological concerns raised by their large-scale use call for risk-assessment studies. One of the major issues in such studies is the relative fitness of the resistant line compared with the susceptible line when no herbicide is applied, since this will largely determine the long-term fate of the resistance gene outside the field. Here we report a comparison of a sulfonylurea-resistant line of white chicory, regenerated from a non-mutagenized cell culture, with a supposedly isogenic susceptible biotype. The plants were grown in experimental plots at a range of densities in a replacement series. The reproductive output of the plants decreased with increasing density, but no significant difference was found between the two lines for any vegetative or reproductive trait at any density. This suggests that no cost is associated with the mutation causing the resistance and that the resistance gene would not be selected against if it escaped into populations of wild chicory.

20.
Capture-recapture estimates of abundance using photographic identification data are sensitive to the quality of the photographs used and to the distinctiveness of individuals in the population. Here, analyses are presented for examining the effects of photographic quality and individual animal distinctiveness scores, and for objectively selecting a subset of data to use for capture-recapture analyses, using humpback whale (Megaptera novaeangliae) data from a two-year study in the North Atlantic. Photographs were evaluated for their level of quality and whales for their level of individual distinctiveness. Photographic quality scores had a 0.21 probability of changing by a single quality level, and there were no changes of two or more levels. Individual distinctiveness scores were not independent of photographic quality scores. Estimates of abundance decreased as poor-quality photographs were removed. An appropriate balance between precision and bias in abundance estimates was achieved by removing the lowest-quality photographs and those of incompletely photographed flukes, given our assumptions about the true population abundance. A simulation of the selection process implied that, if the estimates are negatively biased by heterogeneity, the increase in bias produced by decreasing the sample size is not more than 2%. Capture frequencies were independent of individual distinctiveness scores.
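A sketch of the kind of sensitivity analysis described here recomputes a two-sample capture-recapture estimate as low-quality photographs are dropped; the counts and the use of Chapman's bias-corrected estimator are assumptions for illustration, not figures from the study.

```python
def chapman_estimate(n1, n2, m2):
    """Chapman's bias-corrected two-sample capture-recapture estimator."""
    return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

# Hypothetical two-year photo-ID data: (year-1 IDs, year-2 IDs, matches),
# recomputed after dropping the lowest photo-quality scores
for label, (n1, n2, m2) in {"all photos": (420, 450, 60),
                            "quality >= 3 only": (360, 380, 55)}.items():
    print(label, round(chapman_estimate(n1, n2, m2)))
```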
