Similar articles
20 similar articles found (search time: 15 ms)
1.
Aerial survey is an important, widely employed approach for estimating free-ranging wildlife over large or inaccessible study areas. We studied how a distance covariate influenced the probability of double-observer detections for birds counted during a helicopter survey in Canada's central Arctic. Two observers, one behind the other but visually obscured from each other, counted birds in an incompletely shared field of view to a distance of 200 m. Each observer assigned detections to one of five 40-m distance bins, guided by semi-transparent marks on aircraft windows. Detections were recorded with distance bin, taxonomic group, wing-flapping behavior, and group size. We compared two general model-based estimation approaches pertinent to sampling wildlife under such situations. One was based on double-observer methods without distance information, which provide sampling analogous to that required for mark–recapture (MR) estimation of detection probability and group abundance along a fixed-width strip transect. The other method incorporated double-observer MR with a categorical distance covariate (MRD). A priori, we were concerned that estimators from MR models were compromised by heterogeneity in detection probability due to un-modeled distance information; that is, more distant birds are less likely to be detected by both observers, with the predicted effect that detection probability would be biased high and abundance biased low. We found that, despite increased complexity, MRD models (ΔAICc range: 0–16) fit the data far better than MR models (ΔAICc range: 204–258). However, contrary to expectation, the more naïve MR estimators of detection probability were biased low in all cases, but only by 2%–5% in most cases. We suspect that this apparently anomalous finding was the result of specific limitations to, and trade-offs in, visibility by observers on the survey platform used.
While MR models provided acceptable point estimates of group abundance, their far higher standard errors (0%–40% higher) compared with MRD estimates would compromise the ability to detect temporal or spatial differences in abundance. Given the improved precision of MRD models relative to MR models, and the possibility of bias when using MR methods from other survey platforms, we recommend avian ecologists use MRD protocols and estimation procedures when surveying Arctic bird populations.
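The double-observer logic referenced above can be sketched with the classic Lincoln–Petersen-style calculation. The counts below are hypothetical and the pooled estimator is a generic illustration, not the study's fitted MR/MRD model:

```python
def double_observer_estimates(only_front, only_rear, both):
    """Classic two-observer (Lincoln-Petersen style) estimates.

    only_front / only_rear: groups detected by exactly one observer;
    both: groups detected by the two observers independently.
    """
    p_front = both / (only_rear + both)   # front observer's detection probability
    p_rear = both / (only_front + both)   # rear observer's detection probability
    seen = only_front + only_rear + both
    p_any = 1.0 - (1.0 - p_front) * (1.0 - p_rear)  # P(at least one detects)
    n_hat = seen / p_any                  # estimated groups present in the strip
    return p_front, p_rear, n_hat

# Hypothetical counts: 30 front-only, 20 rear-only, 60 seen by both.
p1, p2, n_hat = double_observer_estimates(30, 20, 60)
```

Un-modeled distance heterogeneity inflates the "both" count relative to far birds' true detectability, which is exactly why the MRD variant conditions on the distance bin.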

2.
We study bias arising as a result of nonlinear transformations of random variables in random or mixed effects models and its effect on inference in group-level studies or in meta-analysis. The findings are illustrated by the example of overdispersed binomial distributions, where we demonstrate considerable biases arising from standard log-odds and arcsine transformations of the estimated probability, both for single-group studies and in combining results from several groups or studies in meta-analysis. Our simulations confirm that these biases are linear in ρ, for small values of ρ, the intracluster correlation coefficient. These biases do not depend on the sample sizes or the number of studies K in a meta-analysis and result in abysmal coverage of the combined effect for large K. We also propose a bias correction for the arcsine transformation. Our simulations demonstrate that this bias correction works well for small values of the intraclass correlation. The methods are applied to two examples of meta-analyses of prevalence.
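The transformation bias described above is easy to reproduce by simulation. The sketch below draws cluster probabilities from a beta distribution whose intracluster correlation is ρ (a beta-binomial overdispersion model) and compares the mean arcsine-transformed estimate with the transform of the true mean; all settings are illustrative, not the paper's:

```python
import math
import random

def arcsine_bias(p=0.2, rho=0.1, n=50, reps=5000, seed=1):
    """Monte-Carlo estimate of the bias of asin(sqrt(p_hat)) when p_hat comes
    from an overdispersed (beta-binomial) sample with intracluster
    correlation rho. Illustrative settings only."""
    rng = random.Random(seed)
    s = 1.0 / rho - 1.0                  # Beta(a, b): mean p, ICC = 1/(a+b+1)
    a, b = p * s, (1.0 - p) * s
    target = math.asin(math.sqrt(p))     # transform of the true mean
    total = 0.0
    for _ in range(reps):
        pi = rng.betavariate(a, b)       # cluster-level success probability
        x = sum(rng.random() < pi for _ in range(n))
        total += math.asin(math.sqrt(x / n))
    return total / reps - target         # negative here: biased downward

bias = arcsine_bias()
```

Jensen's inequality drives the effect: the transform is nonlinear, so extra between-cluster variance translates into bias of the transformed mean, roughly proportional to ρ for small ρ.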

3.

Background

The group testing method has been proposed for the detection and estimation of genetically modified plants (adventitious presence of unwanted transgenic plants, AP). For binary response variables (presence or absence), group testing is efficient when the prevalence is low, so that estimation, detection, and sample size methods have been developed under the binomial model. However, when the event is rare (low prevalence <0.1), and testing occurs sequentially, inverse (negative) binomial pooled sampling may be preferred.

Methodology/Principal Findings

This research proposes three sample size procedures (two computational and one analytic) for estimating prevalence using group testing under inverse (negative) binomial sampling. These methods provide the required number of positive pools, given a pool size (k), for estimating the proportion of AP plants using the Dorfman model and inverse (negative) binomial sampling. We give real and simulated examples to show how to apply these methods and the proposed sample-size formula. The Monte Carlo method was used to study the coverage and level of assurance achieved by the proposed sample sizes. An R program to create other scenarios is given in Appendix S2.

Conclusions

The three methods ensure precision in the estimated proportion of AP because they guarantee that the width (W) of the confidence interval (CI) will be equal to, or narrower than, the desired width with a stated probability. With the Monte Carlo study we found that the computational Wald procedure (method 2) produces the most precise sample size (with coverage and assurance levels very close to nominal values), that the sample size based on the Clopper-Pearson CI (method 1) is conservative (overestimates the sample size), and that the analytic Wald sample size method we developed (method 3) sometimes underestimated the optimum number of pools.
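For intuition, the Dorfman-model point estimate of prevalence from pooled tests can be sketched as follows; this is the generic pooled-testing estimator, not the paper's inverse-binomial sample-size procedure:

```python
def prevalence_from_pools(r_pos, n_pools, k):
    """Dorfman-model point estimate of per-plant prevalence p from pooled
    tests: a pool of k plants is positive iff it contains at least one AP
    plant, so P(pool positive) = 1 - (1 - p)^k. Inverting the observed
    positive-pool fraction gives p_hat. Counts below are hypothetical."""
    theta_hat = r_pos / n_pools                 # estimated P(pool positive)
    return 1.0 - (1.0 - theta_hat) ** (1.0 / k)

# 5 positive pools out of 50, pools of 10 plants each -> p_hat ~ 0.0105
p_hat = prevalence_from_pools(r_pos=5, n_pools=50, k=10)
```

For rare events the estimate is close to theta_hat / k, which is why pooling is so efficient at low prevalence: one test covers k plants with little information loss.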

4.
Optical imaging plays a major role in disease detection in dermatology. However, current optical methods are limited by a lack of three-dimensional detection of pathophysiological parameters within skin. It was recently shown that single-wavelength optoacoustic (photoacoustic) mesoscopy resolves skin morphology, i.e. melanin and blood vessels within epidermis and dermis. In this work we employed illumination at multiple wavelengths to enable three-dimensional multispectral optoacoustic mesoscopy (MSOM) of natural chromophores in human skin in vivo, operating at 15–125 MHz. We employ a per-pulse tunable laser to inherently co-register spectral datasets, and reveal previously undisclosed insights into melanin and blood oxygenation in human skin. We further reveal broadband absorption spectra of specific skin compartments. We discuss the potential of MSOM for label-free visualization of physiological biomarkers in skin in vivo.

Cross-sectional optoacoustic image of human skin in vivo. The epidermal layer is characterized by melanin absorption. A vascular network runs through the dermal layer, exhibiting blood oxygenation values of 50–90%. All scale bars: 250 µm.


5.
Land-cover and climate change are two main drivers of changes in species ranges. Yet, the majority of studies investigating the impacts of global change on biodiversity focus on one global change driver and usually use simulations to project biodiversity responses to future conditions. We conduct an empirical test of the relative and combined effects of land-cover and climate change on species occurrence changes. Specifically, we examine whether observed local colonizations and extinctions of North American birds between 1981–1985 and 2001–2005 are correlated with land-cover and climate change, and whether bird life history and ecological traits explain interspecific variation in observed occurrence changes. We fit logistic regression models to test the impact of physical land-cover change, changes in net primary productivity, winter precipitation, mean summer temperature, and mean winter temperature on the probability of Ontario breeding bird local colonization and extinction. Models with climate change, land-cover change, and the combination of these two drivers were the top-ranked models of local colonization for 30%, 27%, and 29% of species, respectively. Conversely, models with climate change, land-cover change, and the combination of these two drivers were the top-ranked models of local extinction for 61%, 7%, and 9% of species, respectively. The quantitative impacts of land-cover and climate change variables also vary among bird species. We then fit linear regression models to test whether the variation in regional colonization and extinction rates could be explained by mean body mass, migratory strategy, and habitat preference of birds. Overall, species traits were weakly correlated with heterogeneity in species occurrence changes. We provide empirical evidence showing that land-cover change, climate change, and the combination of multiple global change drivers can differentially explain observed species local colonizations and extinctions.

6.

Aim

Global declines in large old trees from selective logging have degraded old‐forest ecosystems, which could lead to delayed declines or losses of old‐forest‐associated wildlife populations (i.e., extinction debt). We applied the declining population paradigm and explored potential evidence for extinction debt in an old‐forest dependent species across landscapes with different histories of large tree logging.

Location

Montane forests of the Sierra Nevada, California, USA.

Methods

We tested hypotheses about the influence of forest structure on territory extinction dynamics of the spotted owl (Strix occidentalis) using detection/non‐detection data from 1993 to 2011 across two land tenures: national forests, which experienced extensive large tree logging over the past century, and national parks, which did not.

Results

Large tree/high canopy cover forest was the best predictor of extinction rates and explained 26%–77% of model deviance. Owl territories with more large tree/high canopy cover forest had lower extinction rates, and this forest type was ~4 times more prevalent within owl territories in national parks (19% of territory) than national forests (4% of territory). As such, predicted extinction probability for an average owl territory was ~2.5 times greater in national forests than national parks, where occupancy was declining and stable, respectively. Large tree/high canopy cover forest remained consistently low, but did not decline, during the study period on national forests while owl declines were ongoing—an observation consistent with an extinction debt.

Main conclusions

In identifying a linkage between large trees and spotted owl dynamics at a regional scale, we provide evidence suggesting past logging of large old trees may have contributed to contemporary declines in an old‐forest species. Strengthening protections for remaining large old trees and promoting their recruitment in the future will be critical for biodiversity conservation in the world's forests.

7.
Strategic conservation efforts for cryptic species, especially bats, are hindered by limited understanding of distribution and population trends. Integrating long-term encounter surveys with multi-season occupancy models provides a solution whereby inferences about changing occupancy probabilities and latent changes in abundance can be supported. When harnessed to a Bayesian inferential paradigm, this modeling framework offers flexibility for conservation programs that need to update prior model-based understanding about at-risk species with new data. This scenario is exemplified by a bat monitoring program in the Pacific Northwestern United States in which results from 8 years of surveys from 2003 to 2010 require updating with new data from 2016 to 2018. The new data were collected after the arrival of bat white-nose syndrome and the expansion of wind power generation, stressors expected to cause population declines in at least two vulnerable species, the little brown bat (Myotis lucifugus) and the hoary bat (Lasiurus cinereus). We used multi-season occupancy models with empirically informed prior distributions drawn from previous occupancy results (2003–2010) to assess evidence of contemporary decline in these two species. Empirically informed priors provided the bridge across the two monitoring periods and increased the precision of parameter posterior distributions, but did not alter inferences relative to the use of vague priors. We found evidence of region-wide summertime decline for the hoary bat (0.86 ± 0.10) since 2010, but no evidence of decline for the little brown bat (1.1 ± 0.10). White-nose syndrome was documented in the region in 2016 and may not yet have caused regional impacts to the little brown bat. However, our discovery of hoary bat decline is consistent with the hypothesis that the longer duration and greater geographic extent of the wind energy stressor (collision and barotrauma) have impacted the species.
These hypotheses can be evaluated and updated over time within our framework of pre–post impact monitoring and modeling. Our approach provides the foundation for a strategic evidence-based conservation system and contributes to a growing preponderance of evidence from multiple lines of inquiry that bat species are declining.
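The role of the empirically informed prior can be caricatured with a conjugate beta-binomial update, in which the previous period's posterior summary becomes the next period's prior; the real multi-season occupancy models are fit by MCMC, and the numbers here are hypothetical:

```python
def beta_from_mean_sd(mean, sd):
    """Moment-match a Beta(a, b) distribution to a reported mean and sd,
    e.g. to turn an earlier period's posterior summary into a prior.
    Requires sd^2 < mean * (1 - mean). Numbers below are hypothetical."""
    v = sd * sd
    s = mean * (1.0 - mean) / v - 1.0
    return mean * s, (1.0 - mean) * s

def update(a, b, detected, surveyed):
    """Conjugate beta-binomial update: sites with detections out of sites
    surveyed. (Real occupancy models separate detection from occupancy;
    this is only the conjugate caricature of 'posterior becomes prior'.)"""
    return a + detected, b + (surveyed - detected)

a0, b0 = beta_from_mean_sd(0.60, 0.10)       # informative prior from period 1
a1, b1 = update(a0, b0, detected=18, surveyed=40)
post_mean = a1 / (a1 + b1)                   # pulled between prior 0.60 and data 0.45
```

The informative prior acts like extra pseudo-observations (a0 + b0 of them), which is why it tightens the posterior without necessarily changing the direction of inference.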

8.
Model-based estimation of the effect of an exposure on an outcome is generally sensitive to the choice of which confounding factors are included in the model. We propose a new approach, which we call Bayesian adjustment for confounding (BAC), to estimate the effect of an exposure of interest on the outcome while accounting for the uncertainty in the choice of confounders. Our approach is based on specifying two models: (1) the outcome as a function of the exposure and the potential confounders (the outcome model); and (2) the exposure as a function of the potential confounders (the exposure model). We consider Bayesian variable selection on both models and link the two by introducing a dependence parameter denoting the prior odds of including a predictor in the outcome model, given that the same predictor is in the exposure model. In the absence of dependence, BAC reduces to traditional Bayesian model averaging (BMA). In simulation studies, we show that BAC estimates the exposure effect with smaller bias than traditional BMA, and with improved coverage. We then compare BAC, a recent approach of Crainiceanu, Dominici, and Parmigiani (2008, Biometrika 95, 635–651), and traditional BMA in a time series dataset of hospital admissions, air pollution levels, and weather variables in Nassau, NY, for the period 1999–2005. Using each approach, we estimate the short-term effects of air pollution on emergency admissions for cardiovascular diseases, accounting for confounding. This application illustrates the potentially significant pitfalls of misusing variable selection methods in the context of adjustment uncertainty.

9.
We derive a quantile-adjusted conditional maximum likelihood estimator for the dispersion parameter of the negative binomial distribution and compare its performance, in terms of bias, to various other methods. Our estimation scheme outperforms all other methods in very small samples, typical of those from serial analysis of gene expression studies, the motivating data for this study. The impact of dispersion estimation on hypothesis testing is studied. We derive an "exact" test that outperforms the standard approximate asymptotic tests.
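As a baseline for comparison, the simple method-of-moments dispersion estimator (one of the generic alternatives such a study would compare against, not the paper's quantile-adjusted CML estimator) can be sketched as follows, using a Poisson-gamma mixture to draw negative binomial counts; parameter values are illustrative:

```python
import math
import random

def rpoisson(rng, lam):
    """Knuth's Poisson sampler (adequate for small lam)."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def nb_sample(rng, mu, alpha):
    """Negative binomial as a Poisson-gamma mixture; alpha is the dispersion
    in Var = mu + alpha * mu^2."""
    lam = rng.gammavariate(1.0 / alpha, alpha * mu)
    return rpoisson(rng, lam)

def mom_dispersion(xs):
    """Method-of-moments dispersion: alpha_hat = (s^2 - m) / m^2, clipped at 0."""
    n = len(xs)
    m = sum(xs) / n
    v = sum((x - m) ** 2 for x in xs) / (n - 1)
    return max((v - m) / (m * m), 0.0)

rng = random.Random(7)
data = [nb_sample(rng, mu=5.0, alpha=0.5) for _ in range(5000)]
alpha_hat = mom_dispersion(data)   # close to the true 0.5 at this sample size
```

At SAGE-scale sample sizes (a handful of libraries rather than thousands of draws) this moment estimator is badly behaved, which is the gap the conditional-likelihood approach targets.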

10.
Matched case-control paired data are commonly used to study the association between a disease and an exposure of interest. This work provides a consistent test for this association with respect to the conditional odds ratio, which is a measure of association that is also valid in prospective studies. We formulate the test from the maximum likelihood (ML) estimate of the conditional odds ratio, using data under inverse binomial sampling, in which individuals are selected sequentially to form matched pairs until, for the first time, one obtains a prefixed number of index pairs with either the case unexposed but the control exposed, or the case exposed but the control unexposed. We discuss the situation of possible early stopping. We compare numerically the performance of our procedure with a competitor proposed by Lui (1996) in terms of type I error rate, power, average sample number (ASN), and the corresponding standard error. Our numerical study shows a gain in sample size without loss of power as compared to the competitor. Finally, we use data taken from a case-control study on the use of X-rays and the risk of childhood acute myeloid leukemia for illustration.
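The inverse binomial sampling scheme for discordant pairs can be sketched as follows: conditional on a pair being discordant, it is an index pair with probability ψ/(1 + ψ), where ψ is the conditional odds ratio, and sampling stops at a prefixed number of index pairs. This is an illustrative simulation with invented numbers, not the paper's test:

```python
import random

def sample_until_index_pairs(rng, psi, r_target):
    """Simulate discordant matched pairs under inverse binomial sampling.

    Each discordant pair is an 'index' pair with probability psi / (1 + psi),
    where psi is the conditional odds ratio; sampling stops once r_target
    index pairs have accrued (the stopping rule described above)."""
    p_index = psi / (1.0 + psi)
    n_index = n_other = 0
    while n_index < r_target:
        if rng.random() < p_index:
            n_index += 1
        else:
            n_other += 1
    return n_index, n_other

rng = random.Random(3)
n10, n01 = sample_until_index_pairs(rng, psi=2.0, r_target=200)
psi_hat = n10 / n01   # simple McNemar-type point estimate of psi
```

The count of non-index discordant pairs is negative binomial, which is what makes fixed-index-count (rather than fixed-total) inference tractable and permits early stopping.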

11.
Understanding spatiotemporal population trends and their drivers is a key aim in population ecology. We further need to be able to predict how the dynamics and sizes of populations are affected in the long term by changing landscapes and climate. However, predictions of future population trends are sensitive to a range of modeling assumptions. Deadwood‐dependent fungi are an excellent system for testing the performance of different predictive models of sessile species as these species have different rarity and spatial population dynamics, the populations are structured at different spatial scales, and they utilize distinct substrates. We tested how the projected large‐scale occupancies of species with differing landscape‐scale occupancies are affected over the coming century by different modeling assumptions. We compared projections based on occupancy models against colonization–extinction models, conducting the modeling at alternative spatial scales and using fine‐ or coarse‐resolution deadwood data. We also tested effects of key explanatory variables on species occurrence and colonization–extinction dynamics. The hierarchical Bayesian models applied were fitted to an extensive repeated survey of deadwood and fungi at 174 patches. We projected higher occurrence probabilities and more positive trends using the occupancy models compared to the colonization–extinction models, with greater difference for the species with lower occupancy, colonization rate, and colonization:extinction ratio than for the species with higher estimates of these statistics. The magnitude of future increase in occupancy depended strongly on the spatial modeling scale and resource resolution. We encourage using colonization–extinction models over occupancy models, modeling the process at the finest resource‐unit resolution that is utilizable by the species, and conducting projections for the same spatial scale and resource resolution at which the model fitting is conducted. 
Further, the models applied should include key variables driving the metapopulation dynamics, such as the availability of suitable resource units, habitat quality, and spatial connectivity.
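A minimal colonization–extinction projection illustrates why the two model classes can diverge: occupancy evolves toward the equilibrium c/(c + e) at a rate set by the turnover parameters, a trajectory that a static occupancy snapshot does not capture. Rates below are invented for illustration:

```python
def project_occupancy(psi0, c, e, years):
    """Project patch occupancy under a colonization-extinction model:
    psi[t+1] = psi[t] * (1 - e) + (1 - psi[t]) * c,
    with per-year colonization rate c and extinction rate e (illustrative)."""
    traj = [psi0]
    for _ in range(years):
        traj.append(traj[-1] * (1.0 - e) + (1.0 - traj[-1]) * c)
    return traj

traj = project_occupancy(psi0=0.10, c=0.02, e=0.10, years=100)
equilibrium = 0.02 / (0.02 + 0.10)   # c / (c + e)
```

A species with low colonization:extinction ratio sits near a low equilibrium regardless of its current occupancy, which is one way occupancy-only models can project overly optimistic trends.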

12.
This paper is devoted to the analysis of a simple Lotka–Volterra food chain evolving in a stochastic environment. It can be seen as the companion paper to Hening and Nguyen (J Math Biol 76:697–754, 2018b), where we characterized the persistence and extinction of such a food chain under the assumption that there is no intraspecific competition among predators. In the current paper, we focus on the case when all the species experience intraspecific competition. The food chain we analyze consists of one prey and \(n-1\) predators. The jth predator eats the \(j-1\)st species and is eaten by the \(j+1\)st predator; this way each species only interacts with at most two other species—the ones that are immediately above or below it in the trophic chain. We show that one can classify, based on the invasion rates of the predators (which we can determine from the interaction coefficients of the system via an algorithm), which species go extinct and which converge to their unique invariant probability measure. We obtain stronger results than in the case with no intraspecific competition because in this setting we can make use of the general results of Hening and Nguyen (Ann Appl Probab 28:1893–1942, 2018a). Unlike most of the results available in the literature, we provide an in-depth analysis for both non-degenerate and degenerate noise. We exhibit our general results by analyzing trophic cascades in a plant–herbivore–predator system and providing persistence/extinction criteria for food chains of length \(n\le 4\).

13.
The extensive spatial and temporal coverage of many citizen science datasets (CSD) makes them appealing for use in species distribution modeling and forecasting. However, a frequent limitation is the inability to validate results. Here, we aim to assess the reliability of CSD for forecasting species occurrence in response to national forest management projections (representing 160,366 km²) by comparison against forecasts from a model based on systematically collected colonization–extinction data. We fitted species distribution models using citizen science observations of an old-forest indicator fungus, Phellinus ferrugineofuscus. We applied five modeling approaches (generalized linear model, Poisson process model, Bayesian occupancy model, and two MaxEnt models). Models were used to forecast changes in occurrence in response to national forest management for 2020–2110. Forecasts of species occurrence from models based on CSD were congruent with forecasts made using the colonization–extinction model based on systematically collected data, although different modeling methods indicated different levels of change. All models projected increased occurrence in set-aside forest from 2020 to 2110: the projected increase varied between 125% and 195% among models based on CSD, in comparison with an increase of 129% according to the colonization–extinction model. All but one model based on CSD projected a decline in production forest, which varied between 11% and 49%, compared to a decline of 41% using the colonization–extinction model. All models thus highlighted the importance of protected old forest for P. ferrugineofuscus persistence. We conclude that models based on CSD can reproduce forecasts from models based on systematically collected colonization–extinction data and so lead to the same forest management conclusions.
Our results show that the use of a suite of models allows CSD to be reliably applied to land management and conservation decision making, demonstrating that widely available CSD can be a valuable forecasting resource.

14.
The complementary log-log (cloglog) link was originally introduced in 1922 by R. A. Fisher, long before the logit and probit links. While the last two links are symmetric, the complementary log-log link is an asymmetric link with no parameter associated with it. Several asymmetric links with an extra parameter have been proposed in the literature over the last few years to deal with imbalanced data in binomial regression (when one of the classes is much smaller than the other); however, these do not necessarily have the cloglog link as a special case, with the exception of the link based on the generalized extreme value distribution. In this paper, we introduce flexible cloglog links for binomial regression models that include an extra parameter associated with the link that accounts for some of the imbalance in binomial outcomes. In all cases, either the cloglog link is a special case or its reciprocal version, the loglog link, is obtained. A Bayesian Markov chain Monte Carlo inference approach is developed. A simulation study is conducted to evaluate the performance of the proposed algorithm, and a prior sensitivity analysis for the extra parameter shows that a uniform prior is the most convenient for all models. Additionally, two applications to medical data (age at menarche and pulmonary infection) illustrate the advantages of the proposed models.

15.
16.
The scavenging effects of eighteen thiazolyl thiazolidine-2,4-dione compounds (TTCs) on the superoxide radical, the hydroxyl radical (HO•), and the 1,1-diphenyl-2-picrylhydrazyl (DPPH•) radical were evaluated by the chemiluminescence technique, electron spin resonance spectrometry (ESR), and visible spectrophotometry, respectively. The examined compounds were shown to have 27–59% superoxide-scavenging ability, 19–69% HO•-scavenging activity, and 2–32% DPPH•-scavenging ability. This property of the tested compounds seems to be important in the prevention of various diseases with a free-radical etiology. Copyright © 2009 John Wiley & Sons, Ltd.
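Scavenging percentages in assays of this kind are generally computed as the fractional suppression of the blank signal (chemiluminescence intensity, ESR amplitude, or absorbance). A minimal sketch of that generic formula, with hypothetical signal values, not taken from this paper's methods:

```python
def scavenging_pct(blank_signal, sample_signal):
    """Percent radical scavenging as fractional suppression of the blank:
    100 * (blank - sample) / blank. Generic assay formula; the signal
    values used below are hypothetical."""
    return 100.0 * (blank_signal - sample_signal) / blank_signal

# A compound that reduces the blank signal from 1.00 to 0.40 scavenges 60%.
activity = scavenging_pct(blank_signal=1.00, sample_signal=0.40)
```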

17.
An analysis of mortality is undertaken in two breeds of pigs: Danish Landrace and Yorkshire. Zero-inflated and standard versions of hierarchical Poisson, binomial, and negative binomial Bayesian models were fitted using Markov chain Monte Carlo (MCMC). The objectives of the study were to investigate whether there is support for genetic variation for mortality and to study the quality of fit and predictive properties of the various models. In both breeds, the model that provided the best fit to the data was the standard binomial hierarchical model. The model that performed best in terms of the ability to predict the distribution of stillbirths was the hierarchical zero-inflated negative binomial model. The best fit of the binomial hierarchical model and of the zero-inflated hierarchical negative binomial model was obtained when genetic variation was included as a parameter. For the hierarchical binomial model, the estimate of the posterior mean of the additive genetic variance (posterior standard deviation in parentheses) at the level of the logit of the probability of a stillbirth was 0.173 (0.039) in Landrace and 0.202 (0.048) in Yorkshire. The implications of these results from a breeding perspective are briefly discussed.

Litter size has been under selection in the Danish pig breeding program since the early 1990s, and this has resulted in a considerable increase in total number born and also in the proportion of stillborn piglets (Sorensen et al. 2000; Su et al. 2007). A number of studies have reported genetic variation for mortality, with heritabilities ranging from 0.03 to 0.12. These studies have either assumed normality of the sampling model for mortality (e.g., van Arendonk et al. 1996) or based inferences on a variety of threshold models (e.g., Roehe and Kalm 2000; Arango et al. 2006), and critical investigations of the quality of fit of the models used were not reported.

Mortality data, regarded as a trait of the mother, typically show a large proportion of zeros (many litters do not have stillborn piglets). Formal genetic analyses of mortality in pigs accounting for this feature of the data are not available in the literature, and this article attempts to fill this gap. The focus here is to study the quality of fit and predictive ability of a number of models and to investigate whether they provide statistical evidence for genetic variation for mortality. The statistical genetic analysis involves fitting various hierarchical models involving three discrete distributions: the Poisson, the binomial, and the negative binomial.

The statistical analysis of counts based on discrete parametric distributions has a long and rich history (Johnson and Kotz 1969). In the case of unbounded counts, Poisson regression models are standard, whereas for bounded counts, when the response can be viewed as the number of successes out of a fixed number of trials, regression models based on the binomial distribution are often used (Hall 2000). A restriction of the Poisson model is that it imposes equality of mean and variance; typically the distribution of counts is overdispersed. In the case of the binomial model, the only free parameter is the probability of success, which results in a functional relationship between the mean and the variance. Several alternatives have been suggested to obtain more flexible models. For example, the negative binomial distribution has two parameters and allows the mean and variance to be fitted separately (Lawless 1987). An application of the negative binomial model in animal breeding can be found in Tempelman and Gianola (1996, 1999). In the same spirit, a robust alternative to the binomial model is the beta-binomial, which is a mixture of binomials where the unequal probabilities of success vary according to a beta distribution. In general, hierarchical specifications are needed to explain extra variation that is not accounted for by the sampling model of the data. These involve assigning a distribution to the parameters of the sampling model, directly, as in the case of the negative binomial or beta-binomial models, or indirectly, by embedding these parameters in a linear structure that includes random effects as explanatory variables.

There are situations where overdispersion is partly associated with an incidence of zero counts that is greater than expected under the sampling model, as in the present study. Hurdle models (Mullahy 1986; Winkelmann 2000) and zero-inflated models are two instances of finite mixture models commonly used to account for this characteristic of the data. In the present work the excess of zeros is studied using zero-inflated models, which are described in Johnson and Kotz (1969) and extended by Lambert (1992). Ridout et al. (1998) provide a review of various zero-inflated models; recent applications of zero-inflated Poisson models in animal breeding are in Rodriguez-Motta et al. (2007) and in Naya et al. (2008). Zero-inflated models assume that the population consists of two sets of observations. In the first set, which may include zeros, observations are realizations from a discrete sampling process indexed by unknown parameters (this set is often referred to as the imperfect state); observations from the second set consist only of zeros, and the parameter of interest is the proportion of these individuals (this set is often referred to as the perfect state). Either or both sets of parameters may depend on covariates.

This article is organized as follows. Material and methods describes the data, details of the models, and their Markov chain Monte Carlo (MCMC) implementation. This is followed by a presentation of the results of the analyses and of MCMC-driven explorative tools for model comparison. The article concludes with a discussion.
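The zero-inflated mixture described above has a simple likelihood: a zero can come either from the "perfect" (always-zero) state with probability π0 or from the count distribution of the imperfect state. A minimal zero-inflated Poisson version, with toy data, shows how the extra zero mass improves the fit; the parameter values are illustrative, not estimates from the pig data:

```python
import math

def zip_loglik(xs, lam, pi0):
    """Log-likelihood of a zero-inflated Poisson: an observation is an
    automatic zero (the 'perfect' state) with probability pi0, and otherwise
    drawn from Poisson(lam) (the 'imperfect' state)."""
    ll = 0.0
    for x in xs:
        pois = math.exp(-lam) * lam ** x / math.factorial(x)
        if x == 0:
            ll += math.log(pi0 + (1.0 - pi0) * pois)   # zero from either state
        else:
            ll += math.log((1.0 - pi0) * pois)         # nonzero: imperfect state
    return ll

counts = [0, 0, 0, 0, 1, 2, 0, 3, 0, 1]   # toy litter counts with excess zeros
gain = zip_loglik(counts, 1.25, 0.44) - zip_loglik(counts, 1.25, 0.0)
# gain > 0: at the same Poisson mean, allowing zero inflation fits better
```

Hurdle models handle the same excess-zero feature differently, by modeling P(zero) and the zero-truncated count distribution as two separate parts.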

18.
  • 1 The equilibrium model of island biogeography developed in the 1960s by MacArthur and Wilson has provided an excellent framework in which to investigate the dynamics of species richness in island and island‐like systems. It is comparable in many respects to the Hardy–Weinberg equilibrium model used in genetics as the basis for defining a point of reference, thus allowing one to discover the factors that prevent equilibrium from being achieved. Hundreds of studies have used the model effectively, especially those dealing with brief spans of time and limited geographical areas. In spite of this utility, however, there are important limitations to the MacArthur–Wilson model, especially when we consider long‐term and large‐scale circumstances.
  • 2 Although their general theory is more complex, the MacArthur–Wilson equilibrium model treats colonization and extinction as the only two processes that are relevant to determining species richness. However, it is likely that phylogenetic diversification (phylogenesis) often takes place on the same time‐scale as colonization and extinction; for example, colonization, extinction, and phylogenesis among mammals on oceanic and/or old land‐bridge islands in South‐east Asia are all measured in units of time in the range of 10 000–1 million years, most often in units of 100 000 years.
  • 3 Phylogenesis is not a process that can be treated simply as ‘another form of colonization’, as it behaves differently than colonization. It interacts in a complex manner with both colonization and extinction, and can generate patterns of species richness almost independently of the other two processes. In addition, contrary to the implication of the MacArthur–Wilson model, extinction does not drive species richness in highly isolated archipelagoes (those that receive very few colonists) to progressively lower values; rather, phylogenesis is a common outcome in such archipelagoes, and species richness rises over time. In some specific instances, phylogenesis may have produced an average of 14 times as many species as direct colonization, and perhaps 36 species from one such colonization event. Old, stable, large archipelagoes should typically support not just endemic species but endemic clades, and the total number of species and the size of the endemic clades should increase with age of the archipelago.
  • 4 The existence of long‐term equilibrium in actual island archipelagoes is unlikely. The land masses that make up island archipelagoes are intrinsically unstable because the geological processes that cause their formation are dynamic, and substantial changes can occur (under some circumstances) on a time‐scale comparable to the processes of colonization, phylogenesis, and extinction. Large‐scale island‐like archipelagoes on continents also are unstable, in the medium term because of global climatic fluctuations, and in the long term because of the geologically ephemeral existence of, for example, individual mountain ranges.
  • 5 Examples of these phenomena using the mammals of South‐east Asia, especially the Philippines, make it clear that the best conceptual model of the long‐term dynamics of species richness in island archipelagoes would be one in which colonization, extinction, and phylogenesis are recognized to be of equivalent conceptual importance. Furthermore, we should expect species richness to be always in a dynamic state of disequilibrium due to the constantly changing geological/geographical circumstances in which that diversity exists, always a step or two out of phase with the constantly changing equilibrium point for species richness.
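The dynamic sketched in points 2–3, where phylogenesis can raise richness in isolated archipelagoes even as colonization stays rare, can be illustrated with a toy discrete-time simulation. This is a minimal sketch, not the authors' model: the function name and all rate values below are hypothetical, chosen only to contrast a colonization–extinction balance with in-situ diversification.

```python
import random

def simulate_richness(steps, colonize, extinct_per_sp, speciate_per_sp, seed=0):
    """Toy model: each step, one colonist arrives with probability
    `colonize`; each resident species goes extinct with probability
    `extinct_per_sp` and splits into two (phylogenesis) with
    probability `speciate_per_sp`. Returns final species richness."""
    rng = random.Random(seed)
    s = 0
    for _ in range(steps):
        if rng.random() < colonize:
            s += 1
        births = sum(rng.random() < speciate_per_sp for _ in range(s))
        deaths = sum(rng.random() < extinct_per_sp for _ in range(s))
        s = max(0, s + births - deaths)
    return s

# Isolated archipelago (hypothetical rates): colonists are rare, but
# in-situ speciation slightly outpaces extinction, so richness grows.
isolated = simulate_richness(10_000, colonize=0.001,
                             extinct_per_sp=0.0002, speciate_per_sp=0.0003)

# Near-continent island (hypothetical rates): no phylogenesis; richness
# settles near the colonization/extinction balance of the
# MacArthur-Wilson picture.
near = simulate_richness(10_000, colonize=0.05,
                         extinct_per_sp=0.01, speciate_per_sp=0.0)
```

Under such assumptions the isolated archipelago's richness is driven upward by lineage splitting rather than downward by extinction, which is the qualitative pattern the abstract describes.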

19.
Here we describe the updated MolProbity rotamer‐library distributions derived from an order‐of‐magnitude larger and more stringently quality‐filtered dataset of about 8000 (vs. 500) protein chains, and we explain the resulting changes and improvements to model validation as seen by users. To include only side‐chains with satisfactory justification for their given conformation, we added residue‐specific filters for electron‐density value and model‐to‐density fit. The combined new protocol retains a million residues of data, while cleaning up false‐positive noise in the multi‐dimensional datapoint distributions. It enables unambiguous characterization of conformational clusters nearly 1000‐fold less frequent than the most common ones. We describe examples of local interactions that favor these rare conformations, including the role of authentic covalent bond‐angle deviations in enabling presumably strained side‐chain conformations. Further, along with favored and outlier, an allowed category (0.3–2.0% occurrence in reference data) has been added, analogous to Ramachandran validation categories. The new rotamer distributions are used for current rotamer validation in MolProbity and PHENIX, and for rotamer choice in PHENIX model‐building and refinement. The multi‐dimensional distributions and Top8000 reference dataset are freely available on GitHub. These rotamers are termed “ultimate” because data sampling and quality are now fully adequate for this task, and also because we believe the future of conformational validation should integrate side‐chain with backbone criteria. Proteins 2016; 84:1177–1189. © 2016 Wiley Periodicals, Inc.
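The three-way favored/allowed/outlier scheme described above can be sketched as a simple classifier over a conformation's occurrence frequency in the reference distribution. The thresholds follow the abstract's stated allowed band (0.3–2.0% occurrence); the function name is an illustrative assumption, not MolProbity's API.

```python
def rotamer_category(occurrence_pct):
    """Classify a side-chain conformation by its occurrence frequency
    (percent) in a Top8000-style reference distribution.
    Thresholds per the abstract: allowed spans 0.3-2.0% occurrence;
    below that is outlier, above is favored."""
    if occurrence_pct < 0.3:
        return "outlier"
    if occurrence_pct <= 2.0:
        return "allowed"
    return "favored"
```

This mirrors the analogous Ramachandran categories mentioned in the abstract, where a middle "allowed" band separates clear outliers from well-populated conformations.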

20.
When establishing a treatment in clinical trials, it is important to evaluate both effectiveness and toxicity. In phase II clinical trials, multinomial data are collected in m‐stage designs, especially in two‐stage (m = 2) designs. Exact tests on two proportions, one for the response rate and one for the nontoxicity rate, should be employed due to limited sample sizes. However, existing tests use certain parameter configurations at the boundary of the null hypothesis space to determine rejection regions without showing that the maximum Type I error rate is achieved at the boundary of the null hypothesis. In this paper, we show that the power function for each test in a large family of tests is nondecreasing in both the response rate and the nontoxicity rate; identify the parameter configurations at which the maximum Type I error rate and the minimum power are achieved and derive level‐α tests; provide optimal two‐stage designs with the least expected total sample size and the optimization algorithm; and extend the results to the case of m > 2. Some R‐codes are given in the Supporting Information.
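The "least expected total sample size" criterion mentioned above rests on a standard two-stage calculation: if the trial stops early with probability PET (probability of early termination), the expected total sample size is EN = n1 + (1 − PET)·n2. The sketch below shows only this univariate ingredient with a futility stopping rule on responses, using exact binomial probabilities; it is an assumption-laden illustration, not the paper's bivariate (response, nontoxicity) test, and the function names are hypothetical.

```python
from math import comb

def binom_cdf(k, n, p):
    """Exact Binomial(n, p) CDF: P(X <= k)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def expected_total_n(n1, n2, r1, p):
    """Expected total sample size of a two-stage design that stops for
    futility after stage 1 when the number of stage-1 responses is <= r1.
    PET = P(X1 <= r1) with X1 ~ Binomial(n1, p); EN = n1 + (1 - PET) * n2."""
    pet = binom_cdf(r1, n1, p)
    return n1 + (1 - pet) * n2

# Illustrative design: 10 patients in stage 1, 19 more in stage 2,
# stop if no stage-1 responses; evaluate EN at a response rate of 0.2.
en = expected_total_n(10, 19, 0, 0.2)
```

An optimization over (n1, n2, r1, ...) subject to the level-α and power constraints, as the paper describes, would minimize EN evaluated at the null response rate.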
