Similar literature
1.
Estimating asymptotic size using the largest individuals per sample
Summary: Estimates of asymptotic size are especially useful for comparative studies of taxonomic groups in which animals mature at small sizes relative to their final asymptotic sizes. The largest individuals per sample can provide reasonable estimates of asymptotic size if three conditions are met: 1) at least some adults in a population are near their final asymptotic size, 2) samples of a reasonable size are likely to contain a largest individual that is near the average asymptotic size for the members of its sex, and 3) the coefficient of variation in asymptotic size is small for the members of each sex. In the current study, we show that all three of these conditions are met for one species of Anolis lizard (A. limifrons). For a series of samples from the genus Anolis, the largest individual per sample produces estimates of asymptotic size that are virtually identical to those produced by fitting field data on growth rates to nonlinear growth equations. These results suggest that the largest-individual method can provide reasonable estimates of asymptotic size for the members of this genus, and imply that this method may also be useful for estimating asymptotic sizes in other taxa that satisfy the criteria listed above.
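The key requirement above, a small coefficient of variation in asymptotic size, can be illustrated with a toy simulation (all numbers are assumed for illustration, not the paper's Anolis data):

```python
import random

# Toy illustration of the largest-individual method: when asymptotic sizes
# are roughly normal with a small coefficient of variation, the largest
# individual in a modest sample lies close to the mean asymptotic size.
random.seed(42)

MEAN_ASYMPTOTIC_SVL = 50.0   # hypothetical mean asymptotic snout-vent length (mm)
CV = 0.04                    # hypothetical coefficient of variation (4%)

def largest_in_sample(n):
    """Draw n adults at their asymptotic sizes and return the largest."""
    sizes = [random.gauss(MEAN_ASYMPTOTIC_SVL, CV * MEAN_ASYMPTOTIC_SVL)
             for _ in range(n)]
    return max(sizes)

estimates = [largest_in_sample(30) for _ in range(200)]
mean_estimate = sum(estimates) / len(estimates)
# Overshoots the mean by roughly two standard deviations for samples of 30.
print(round(mean_estimate, 1))
```

With a larger CV, the sample maximum drifts far above the mean asymptotic size, which is why condition 3 matters.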

2.
Many social animals live in stable groups, and it has been argued that kinship plays a major role in their group formation process. In this study we present the mathematical analysis of a recent model which uses kinship as a main factor to explain observed group patterns in a finite sample of individuals. We describe the average number of groups and the probability distribution of group sizes predicted by this model. Our method is based on the study of recursive equations underlying these quantities. We obtain asymptotic equivalents for probability distributions and moments as the sample size increases, and we exhibit power-law behaviours. Computer simulations are also utilized to measure the extent to which the asymptotic approximation can be applied with confidence.

3.
The majority of published modeling work regarding the impact of mixing patterns among subgroups on the spread of HIV infection assumes that the overall population size remains constant, that aggregate immigration to the population occurs at a constant annual rate, or that no immigration occurs and the population in question declines due to HIV or other causes. In this paper, immigration rates are modeled as simple functions of population size and may be interpreted as aggregate birth rates. This assumption implies asymptotic exponential growth in the disease-free population as long as per capita birth rates exceed per capita mortality rates. The introduction of HIV infection to such a population may change this situation, and the asymptotic population growth rate can be reduced substantially as a result. The specific manner in which this occurs depends in part upon difficult-to-observe mixing patterns among those with different sexual activity rates. Rather than attempting to explicitly model a variety of mixing patterns, a bound on the impact of worst-case mixing is produced, where "worst case" refers to the mixing pattern that maximizes the asymptotic prevalence of infection, which is equivalent to minimizing the asymptotic population growth rate. These new techniques are illustrated with a numerical example.

4.
We analysed data from a selective DNA pooling experiment with 130 individuals of the arctic fox (Alopex lagopus), which originated from two types differing in body size. The association between alleles of 6 selected unlinked molecular markers and body size was tested using univariate and multinomial logistic regression models, applying odds ratios and test statistics from the power divergence family. Because of the small sample size and the resulting sparseness of the data table, in hypothesis testing we could not rely on the asymptotic distributions of the tests. Instead, we tried to account for data sparseness by (i) modifying confidence intervals of the odds ratio; (ii) using a normal approximation of the asymptotic distribution of the power divergence tests with different approaches for calculating moments of the statistics; and (iii) assessing P values empirically, based on bootstrap samples. As a result, a significant association was observed for 3 markers. Furthermore, we used simulations to assess the validity of the normal approximation of the asymptotic distribution of the test statistics under the conditions of small and sparse samples.

5.
The potential of viral contamination is a regulatory concern for continuous cell line-derived pharmaceutical proteins. Complementary and redundant safety steps, including an evaluation of the viral clearance capacity of unit operations in the purification process, are performed prior to registration and marketing of biotechnology pharmaceuticals. Because process refinement is frequently beneficial, CBER/FDA has published guidance facilitating process improvement by delineating specific instances where the bracketing and generic approaches are appropriate for virus removal validation. In this study, a generic/matrix study was performed using Q-Sepharose Fast Flow (QSFF) chromatography to determine if bracketing and generic validation can be applied to anion exchange chromatography. Key operational parameters were varied to upper and lower extreme values and the impact on viral clearance was assessed using simian virus 40 (SV40) as the model virus. Operational ranges for key chromatography parameters were identified where an SV40 log10 reduction value (LRV) of ≥4.7 log10 is consistently achieved. On the basis of the apparent robustness of SV40 removal by Q-anion exchange chromatography, we propose that the concept of "bracketed generic" validation can be applied to this and potentially other chromatography unit operations.

6.

Background

The United States (US) Food and Drug Administration (FDA) approves new drugs based on sponsor-submitted clinical trials. The publication status of these trials in the medical literature and factors associated with publication have not been evaluated. We sought to determine the proportion of trials submitted to the FDA in support of newly approved drugs that are published in biomedical journals that a typical clinician, consumer, or policy maker living in the US would reasonably search.

Methods and Findings

We conducted a cohort study of trials supporting new drugs approved between 1998 and 2000, as described in FDA medical and statistical review documents and the FDA approved drug label. We determined publication status and time from approval to full publication in the medical literature at 2 and 5 y by searching PubMed and other databases through 01 August 2006. We then evaluated trial characteristics associated with publication. We identified 909 trials supporting 90 approved drugs in the FDA reviews, of which 43% (394/909) were published. Among the subset of trials described in the FDA-approved drug label and classified as “pivotal trials” for our analysis, 76% (257/340) were published. In multivariable logistic regression for all trials 5 y postapproval, likelihood of publication correlated with statistically significant results (odds ratio [OR] 3.03, 95% confidence interval [CI] 1.78–5.17); larger sample sizes (OR 1.33 per 2-fold increase in sample size, 95% CI 1.17–1.52); and pivotal status (OR 5.31, 95% CI 3.30–8.55). In multivariable logistic regression for only the pivotal trials 5 y postapproval, likelihood of publication correlated with statistically significant results (OR 2.96, 95% CI 1.24–7.06) and larger sample sizes (OR 1.47 per 2-fold increase in sample size, 95% CI 1.15–1.88). Statistically significant results and larger sample sizes were also predictive of publication at 2 y postapproval and in multivariable Cox proportional hazards models for all trials and the subset of pivotal trials.

Conclusions

Over half of all supporting trials for FDA-approved drugs remained unpublished ≥5 y after approval. Pivotal trials and trials with statistically significant results and larger sample sizes are more likely to be published. Selective reporting of trial results exists for commonly marketed drugs. Our data provide a baseline for evaluating publication bias as the new FDA Amendments Act comes into force, mandating basic results reporting of clinical trials.

7.
In population genetics, under a neutral Wright-Fisher model, the scaling parameter θ = 4Nμ represents twice the average number of new mutants per generation, where N is the effective population size and μ is the mutation rate per sequence per generation. Watterson proposed a consistent estimator of this parameter based on the number of segregating sites in a sample of nucleotide sequences. We study the distribution of the Watterson estimator. As the sample size grows, we establish a Central Limit Theorem for the Watterson estimator, which exhibits asymptotic normality with a slow rate of convergence. We then prove the asymptotic efficiency of this estimator. In the second part, we illustrate the slow rate of convergence found in the Central Limit Theorem. To this end, by studying confidence intervals, we show that the asymptotic Gaussian distribution is not a good approximation for the Watterson estimator.
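The Watterson estimator referenced above has a standard closed form: θ̂_W = S / a_n, where S is the number of segregating sites and a_n is the (n−1)-th harmonic number. A minimal sketch with hypothetical counts:

```python
# Watterson's estimator: theta_W = S / a_n, with a_n = sum_{i=1}^{n-1} 1/i.
# The observed counts below are hypothetical.
def watterson_theta(num_segregating_sites, sample_size):
    """Estimate theta = 4*N*mu from segregating sites in a sample of sequences."""
    a_n = sum(1.0 / i for i in range(1, sample_size))  # harmonic number H_{n-1}
    return num_segregating_sites / a_n

# Example: 16 segregating sites observed in a sample of n = 10 sequences.
theta_hat = watterson_theta(16, 10)
print(round(theta_hat, 3))  # → 5.656
```

Because a_n grows only logarithmically in n, the estimator's variance shrinks slowly, which is consistent with the slow convergence the abstract describes.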

8.
Establishing bioequivalence (BE) of drugs indicated to treat cancer poses special challenges. For ethical reasons, the studies often need to be conducted in cancer patients rather than in healthy volunteers, especially when the drug is cytotoxic. The Biopharmaceutics Classification System (BCS), introduced by Amidon (1) and adopted by the FDA, presents opportunities to avoid conducting bioequivalence studies in humans. This paper analyzes the application of the BCS approach by the generic pharmaceutical industry and the FDA to oncology drug products. To date, the FDA has granted BCS-based biowaivers for several drug products involving at least four different drug substances used to treat cancer. Compared to in vivo BE studies, development of data to justify BCS waivers is considered somewhat easier, faster, and more cost-effective. However, FDA experience shows that the approval times for applications containing in vitro studies to support BCS-based biowaivers are often as long as those for applications containing in vivo BE studies, primarily because of inadequate information in the submissions. This paper discusses some common causes for the delays in the approval of applications requesting BCS-based biowaivers for oncology drug products. Scientific considerations of conducting a non-BCS-based in vivo BE study for generic oncology drug products are also discussed. It is hoped that the information provided in our study will help applicants to improve the quality of ANDA submissions in the future.
Key words: Biopharmaceutics Classification System, bioequivalence, biowaiver, cancer, oncology

9.

Background

Generic drugs are used by millions of patients for economic reasons, so their evaluation must be highly transparent.

Objective

To assess the quality of reporting of bioequivalence trials comparing generic to brand-name drugs.

Methodology/Principal Findings

PubMed was searched for reports of bioequivalence trials comparing generic to brand-name drugs between January 2005 and December 2008. Articles were included if the aim of the study was to assess the bioequivalence of generic and brand-name drugs. We excluded case studies, pharmaco-economic evaluations, and validation dosage assays of drugs. We evaluated whether important information about funding, methodology, location of trials, and participants was reported. We also assessed whether the criteria required by the Food and Drug Administration (FDA) and the European Medicines Agency (EMA) to conclude bioequivalence were reported and whether the conclusions were in agreement with the results. We identified 134 potentially relevant articles but eliminated 55 because the brand-name or generic drug status of the reference drug was unknown. Thus, we evaluated 79 articles. The funding source and location of the trial were reported in 41% and 56% of articles, respectively. The type of statistical analysis was reported in 94% of articles, but the methods to generate the randomization sequence and to conceal allocation were reported in only 15% and 5%, respectively. In total, 65 articles of single-dose trials (89%) concluded bioequivalence. Of these, 20 (31%) did not report the 3 criteria within the limits required by the FDA and 11 (17%) did not report the 2 criteria within the limits required by the EMA.
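The regulatory criteria mentioned above rest on a 90% confidence interval for the test/reference geometric mean ratio of a pharmacokinetic metric (AUC or Cmax) lying entirely within 80.00%–125.00%. A minimal sketch with made-up within-subject ratios (the t critical value is a looked-up constant, not computed):

```python
import math
import statistics

# Hypothetical within-subject test/reference ratios of a PK metric from a
# 12-subject crossover study (illustrative numbers, not from the paper).
log_ratios = [math.log(r) for r in
              [0.97, 1.05, 0.97, 1.02, 0.97, 1.08, 0.93, 1.01, 0.99, 1.04,
               0.96, 1.03]]

n = len(log_ratios)
mean_lr = statistics.fmean(log_ratios)
se = statistics.stdev(log_ratios) / math.sqrt(n)
T_CRIT = 1.796  # t(0.95, df = 11), tabulated constant for this sample size

# 90% CI for the geometric mean ratio, back-transformed from the log scale.
lo = math.exp(mean_lr - T_CRIT * se)
hi = math.exp(mean_lr + T_CRIT * se)
bioequivalent = 0.80 < lo and hi < 1.25
print(round(lo, 3), round(hi, 3), bioequivalent)
```

The "3 criteria" / "2 criteria" counts in the abstract refer to how many such interval conditions each agency requires to be reported within limits.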

Conclusions/Significance

Important information needed to judge the validity and relevance of results is frequently missing in published reports of trials assessing generic drugs. The quality of reporting of such trials is in need of improvement.

10.
The correlation coefficient squared, r², is commonly used to validate quantitative models on neural data, yet it is biased by trial-to-trial variability: as trial-to-trial variability increases, the measured correlation to a model's predictions decreases. As a result, models that perfectly explain neural tuning can appear to perform poorly. Many solutions to this problem have been proposed, but no consensus has been reached on which is the least biased estimator. Some currently used methods substantially overestimate model fit, and the utility of even the best-performing methods is limited by the lack of confidence intervals and asymptotic analysis. We provide a new estimator, r̂²_ER, that outperforms all prior estimators in our testing, and we provide confidence intervals and asymptotic guarantees. We apply our estimator to a variety of neural data to validate its utility. We find that neural noise is often so great that confidence intervals of the estimator cover the entire possible range of values ([0, 1]), preventing meaningful evaluation of the quality of a model's predictions. This leads us to propose the use of the signal-to-noise ratio (SNR) as a quality metric for making quantitative comparisons across neural recordings. Analyzing a variety of neural data sets, we find that up to ~40% of some state-of-the-art neural recordings do not pass even a liberal SNR criterion. Moving toward more reliable estimates of correlation, and quantitatively comparing quality across recording modalities and data sets, will be critical to accelerating progress in modeling biological phenomena.
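The bias the abstract targets, trial-to-trial noise pulling naive r² below 1 even for a perfect model, can be reproduced in a few lines. This is a toy illustration only, not the paper's r̂²_ER estimator:

```python
import math
import random

# A model that predicts the true tuning curve exactly still scores r^2 < 1
# against noisy single-trial responses, and the shortfall grows with noise.
random.seed(0)

def pearson_r2(x, y):
    """Squared Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return (cov * cov) / (vx * vy)

# Hypothetical "true" tuning curve, which the model predicts perfectly.
true_tuning = [5.0 * math.sin(0.1 * k) + 10.0 for k in range(50)]

results = {}
for noise_sd in (0.5, 2.0, 5.0):
    observed = [t + random.gauss(0, noise_sd) for t in true_tuning]
    results[noise_sd] = pearson_r2(true_tuning, observed)
    print(noise_sd, round(results[noise_sd], 2))
```

Correcting this downward bias, with honest confidence intervals, is exactly the estimation problem the paper addresses.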

11.
The Pearson correlation coefficient and the Kendall correlation coefficient are two popular statistics for assessing the correlation between two variables in a bivariate sample. We indicate how both of these statistics are special cases of a general class of correlation statistics that is parameterized by γ ∈ [0, 1]. The Pearson correlation coefficient is characterized by γ = 1 and the Kendall correlation coefficient by γ = 0, so they yield the upper and lower extremes of the class, respectively. The correlation coefficient characterized by γ = 0.5 is of special interest because it only requires that first-order moments exist for the underlying bivariate distribution, whereas the Pearson correlation coefficient requires that second-order moments exist. We derive the asymptotic theory for the general class of sample correlation coefficients and then describe the use of this class of correlation statistics within the 2×2 crossover design. We illustrate the methodology using data from the CLIC trial of the Childhood Asthma Research and Education (CARE) Network.
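The two extremes of the class can be computed directly. The sketch below evaluates Pearson (γ = 1) and Kendall (γ = 0) on a small hypothetical sample; the intermediate-γ statistics are the paper's construction and are not reproduced here:

```python
import math
from itertools import combinations

# Hypothetical bivariate sample.
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = [1.2, 1.9, 3.4, 3.1, 5.6, 5.9]

def pearson(x, y):
    """Pearson's r: covariance normalized by the two standard deviations."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / math.sqrt(sum((a - mx) ** 2 for a in x)
                           * sum((b - my) ** 2 for b in y))

def kendall_tau(x, y):
    """Kendall's tau: (concordant - discordant) pairs over all pairs (no ties)."""
    pairs = list(combinations(range(len(x)), 2))
    concordant = sum(1 for i, j in pairs if (x[i] - x[j]) * (y[i] - y[j]) > 0)
    discordant = sum(1 for i, j in pairs if (x[i] - x[j]) * (y[i] - y[j]) < 0)
    return (concordant - discordant) / len(pairs)

print(round(pearson(x, y), 3), round(kendall_tau(x, y), 3))
```

Note that Kendall's tau uses only the ordering of the observations, which is why the γ = 0 end of the class needs no moment assumptions at all.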

12.
In accordance with the requirements of the FDA and CFDA technical guidance on dissolution testing of oral solid dosage forms, and to prevent misuse and inappropriate application of the similarity factor (f2) method in generic drug consistency evaluation, worked examples based on sample data are used to demonstrate the importance of the multivariate confidence interval method and model-dependent methods as complementary approaches in dissolution-profile similarity comparison and in pre-assessment of BE risk.
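The f2 similarity factor discussed above has a standard closed form in the FDA dissolution guidance; a minimal sketch with invented dissolution profiles:

```python
import math

# Similarity factor per FDA dissolution guidance:
#   f2 = 50 * log10(100 / sqrt(1 + mean squared difference between profiles)).
# The dissolution percentages below are made up for illustration.
def f2(reference, test):
    n = len(reference)
    msd = sum((r - t) ** 2 for r, t in zip(reference, test)) / n
    return 50.0 * math.log10(100.0 / math.sqrt(1.0 + msd))

ref  = [15.0, 38.0, 62.0, 81.0, 93.0]   # % dissolved at each time point
test = [12.0, 35.0, 58.0, 79.0, 92.0]
value = f2(ref, test)
# f2 >= 50 is the conventional similarity criterion.
print(round(value, 1), value >= 50)
```

Because f2 collapses the whole profile into one number, it can declare similarity even when individual time points diverge, which is the kind of misuse the complementary multivariate and model-dependent methods are meant to catch.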

13.
We describe a simple yet general method to analyze networks of coupled identical nonlinear oscillators and study applications to fast synchronization, locomotion, and schooling. Specifically, we use nonlinear contraction theory to derive exact and global (rather than linearized) results on synchronization, antisynchronization, and oscillator death. The method can be applied to coupled networks of various structures and arbitrary size. For oscillators with positive definite diffusion coupling, it can be shown that synchronization always occurs globally for strong enough coupling strengths, and an explicit upper bound on the corresponding threshold can be computed through eigenvalue analysis. The discussion also extends to the case when network structure varies abruptly and asynchronously, as in flocks of oscillators or dynamic elements.

14.
15.
Summary: A model primitive tRNA with the nucleotide sequence GGCCAAAAAAAGGCCp was synthesized using T4 RNA ligase. The nucleotide sequence of this newly synthesized oligonucleotide was confirmed by ladder analysis of several enzymatic digestion products. The secondary structure of the oligonucleotide was examined by comparing the products of its digestion by single- and double-strand-specific nucleases with those of the digestion of the intermediate oligonucleotide GGCCAAAAAAA-OH. The results indicated that the two GGCC segments at the 5′ and 3′ ends of the model tRNA may form base pairs in solution. The same conclusion was derived from the result of affinity-column chromatography of the model oligonucleotide. When [32P]GGCCAAAAAAAGGCC-OH was passed through a poly(U)-agarose column, about 70% of the applied sample bound to the poly(U)-agarose. In contrast, when the model oligonucleotide was passed through a poly(C)-agarose column, only 15% of the sample bound to the poly(C)-agarose. These results indicate that the newly synthesized oligonucleotide adopts a hairpin structure in solution. Two aspects of a potential biological activity of the synthetic model tRNA were examined. It was found that the oligonucleotide can bind to poly(U)-programmed 30S ribosomes and is recognized by Qβ replicase as a template for RNA synthesis.

16.
Summary: The size distribution of cell aggregates, and the effect of cell aggregate size on the anthocyanin content of Daucus carota cells in suspension cultures, were studied. The profile of biomass distribution in various size groups of cell aggregates indicated that over 92% of biomass was present in aggregates of 500–1500 μm in diameter. The anthocyanin content increased initially with increasing cell aggregate diameter up to 500–850 μm, and decreased rapidly as cell aggregate size increased above this critical diameter. On the other hand, the surface colour intensity showed a steady increase with increasing cell aggregate size, indicating a steep radial gradient of anthocyanin content along the radius of the larger cell aggregates.

17.
The twenty-two monoclonal antibodies (mAbs) currently marketed in the U.S. have captured almost half of the top-20 U.S. therapeutic biotechnology sales for 2007. Eight of these products have annual sales each of more than $1 B, were developed in the relatively short average period of six years, qualified for FDA programs designed to accelerate drug approval, and their cost has been reimbursed liberally by payers. With growth of the product class driven primarily by advancements in protein engineering and the low probability of generic threats, mAbs are now the largest class of biological therapies under development. The high cost of these drugs and the lack of generic competition conflict with a financially stressed health system, setting reimbursement by payers as the major limiting factor to growth. Advances in mAb engineering are likely to result in more effective mAb drugs and an expansion of the therapeutic indications covered by the class. The parallel development of biomarkers for identifying the patient subpopulations most likely to respond to treatment may lead to a more cost-effective use of these drugs. To achieve the success of the current top-tier mAbs, companies developing new mAb products must adapt to a significantly more challenging commercial environment.
Key words: autoimmune, biosimilars, buy and bill, comparative trials, drug approval, monoclonal, oncology, reimbursement

18.
S. L. Beal, Biometrics, 1989, 45(3): 969–977
Sample size determination is usually based on the premise that a hypothesis test is to be used. A confidence interval can sometimes serve better than a hypothesis test. In this paper a method is presented for sample size determination based on the premise that a confidence interval for a simple mean, or for the difference between two means, with normally distributed data is to be used. For this purpose, a concept of power relevant to confidence intervals is given. Some useful tables giving required sample size using this method are also presented.
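A simplified, known-sigma version of confidence-interval-based sample size determination can be sketched as follows; Beal's method adds a power concept for the unknown-sigma case, which is not reproduced here:

```python
import math
from statistics import NormalDist

# With known sigma, the smallest n making a two-sided (1 - alpha) confidence
# interval for a mean have half-width at most delta is n = ceil((z*sigma/delta)^2).
def n_for_ci_halfwidth(sigma, delta, alpha=0.05):
    z = NormalDist().inv_cdf(1 - alpha / 2)  # standard normal quantile
    return math.ceil((z * sigma / delta) ** 2)

# Example: sigma = 10, target 95% CI half-width of 2 units.
print(n_for_ci_halfwidth(sigma=10.0, delta=2.0))  # → 97
```

When sigma must be estimated from the data, the realized half-width is random, which is why a power-style guarantee on achieving the target width (as in the paper) requires a larger n than this formula gives.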

19.
Random simulation of complex dynamical systems is generally used in order to obtain information about their asymptotic behaviour (i.e., when time or size of the system tends towards infinity). A fortunate and welcome circumstance in most of the systems studied by physicists, biologists, and economists is the existence of an invariant measure in the state space allowing determination of the frequency with which observation of asymptotic states is possible. Regions found between contour lines of the surface density of this invariant measure are called confiners. An example of such confiners is given for a formal neural network capable of learning. Finally, an application of this methodology is proposed in studying dependency of the network's invariant measure with regard to: 1) the mode of neurone updating (parallel or sequential), and 2) boundary conditions of the network (searching for phase transitions).

20.