Similar Articles
20 similar articles retrieved.
1.
We propose a new approach to fitting marginal models to clustered data when cluster size is informative. This approach uses a generalized estimating equation (GEE) that is weighted inversely with the cluster size. We show that our approach is asymptotically equivalent to within-cluster resampling (WCR; Hoffman, Sen, and Weinberg, 2001, Biometrika 88, 1121-1134), a computationally intensive approach in which replicate data sets, each containing one randomly selected observation from every cluster, are analyzed and the resulting estimates averaged. Using simulated data and an example involving dental health, we show the superior performance of our approach compared with unweighted GEE, its equivalence with WCR for large sample sizes, and its superior performance compared with WCR when sample sizes are small.
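As a rough illustration of the weighting idea, the sketch below fits an inverse-cluster-size weighted GEE to simulated binary data with Python/statsmodels. The data frame and all variable names are placeholders, not the authors' dental-health data, and the weighting shown is only the general mechanism the abstract describes.

```python
# Inverse-cluster-size weighted GEE on simulated clustered binary data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
rows = []
for cid in range(200):                       # 200 clusters, sizes 1..8
    n_i = rng.integers(1, 9)
    x = rng.normal(size=n_i)
    p = 1.0 / (1.0 + np.exp(-0.5 * x))
    y = rng.binomial(1, p)
    rows += [{"cluster": cid, "x": xj, "y": yj} for xj, yj in zip(x, y)]
df = pd.DataFrame(rows)

# Weight each observation by 1/cluster size: every cluster counts equally.
df["w"] = 1.0 / df.groupby("cluster")["cluster"].transform("size")

model = sm.GEE.from_formula(
    "y ~ x", groups="cluster", data=df,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Independence(),
    weights=df["w"],
)
print(model.fit().summary())
```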

2.
The relationship between biodiversity and ecosystem multifunctionality (BEMF) is currently a hot topic in ecology, and the measurement of ecosystem multifunctionality (EMF) is the key technical issue in studying it. Because no consensus has been reached, several EMF metrics coexist, which complicates our understanding of the biodiversity-multifunctionality relationship. This article introduces the principles and characteristics of the metrics commonly used internationally: the single-function method, the function-species replacement method, the averaging method, the single-threshold method, the multiple-threshold method, the orthologous-gene method, and the multivariate-model method, and works through an example of the multiple-threshold method, which is the hardest to grasp, in the hope of aiding understanding of EMF measurement. We also classify published studies by the EMF metric they used, to help readers choose among the metrics. The lack of a relatively unified metric representing ecosystem functions at their various levels makes different studies hard to compare and severely limits progress in BEMF research; developing new, generally applicable EMF metrics is therefore an urgent priority.
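As a toy illustration of two of these metrics, the sketch below computes the averaging index and a multiple-threshold count on a made-up plots-by-functions matrix. Note one simplifying assumption: each function's "maximum" is taken as its simple column maximum, whereas published protocols often use the mean of the top few plots.

```python
# Averaging and multiple-threshold EMF indices on a toy data matrix.
import numpy as np

rng = np.random.default_rng(1)
funcs = rng.random((30, 5))          # 30 plots x 5 ecosystem functions

# Averaging method: standardize each function, then average per plot.
z = (funcs - funcs.mean(axis=0)) / funcs.std(axis=0)
emf_average = z.mean(axis=1)
print("mean averaging index:", emf_average.mean().round(3))

# Multiple-threshold method: for each threshold t (a fraction of each
# function's observed maximum), count the functions a plot exceeds.
maxima = funcs.max(axis=0)
for t in (0.25, 0.50, 0.75):
    n_above = (funcs >= t * maxima).sum(axis=1)
    print(f"threshold {t:.0%}: mean functions above = {n_above.mean():.2f}")
```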

3.
Software-based soft error tolerance implemented dynamically can protect more types of code, and therefore cover more soft errors, than static implementations. This paper explores soft error tolerance with a dynamic software-based method: we propose a new dynamic approach in which the protected object is the running program itself. For the protected dynamic binary code, the approach enforces, to a significant extent, both correct control flow and correct data flow. It duplicates every datum and performs every operation twice so that values stored to memory can be checked, and it verifies that every branch instruction jumps to the right address by checking both the condition and the destination address. The approach is implemented with dynamic binary instrumentation; specifically, our tool is built on Valgrind, a heavyweight dynamic binary instrumentation framework. Our experimental results demonstrate that the approach achieves higher reliability for running software than approaches implemented with static program protection. However, because it also sacrifices more performance than static methods, it is suitable only for systems with strict reliability requirements.
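The duplicate-and-compare principle the paper describes can be caricatured in a few lines of Python. A real implementation instruments binary instructions through Valgrind rather than wrapping source-level calls like this toy.

```python
# Toy duplicate-and-compare: compute every operation twice and check the
# copies before the result is "stored"; a mismatch signals a transient fault.
def checked(op, *args):
    a, b = op(*args), op(*args)     # execute the operation twice
    if a != b:                      # copies disagree: soft error detected
        raise RuntimeError("soft error detected; recover or retry here")
    return a

x = checked(lambda: 2 + 3)
y = checked(lambda v: v * 10, x)
print(y)
```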

4.
Ye, Lin, and Taylor (2008, Biometrics 64, 1238-1246) proposed a joint model for longitudinal measurements and time-to-event data in which the longitudinal measurements are modeled with a semiparametric mixed model to allow for the complex patterns in longitudinal biomarker data. They proposed a two-stage regression calibration approach that is simpler to implement than a joint modeling approach. In the first stage, the mixed model is fit without regard to the time-to-event data. In the second stage, the posterior expectations of an individual's random effects from the mixed model are included as covariates in a Cox model. Although Ye et al. (2008) acknowledged that their regression calibration approach may be biased because of informative dropout and measurement error, they argued that the bias is small relative to alternative methods. In this article, we show that this bias may be substantial. We show how to alleviate much of this bias with an alternative regression calibration approach that can be applied to both discrete and continuous time-to-event data. Through simulations, the proposed approach is shown to have substantially less bias than the regression calibration approach of Ye et al. (2008). In agreement with their methodology, an advantage of our proposed approach over joint modeling is that it can be implemented with standard statistical software and does not require complex estimation techniques.
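A minimal sketch of the two-stage recipe in Python, assuming statsmodels and lifelines are available: stage 1 fits a random-intercept mixed model to the longitudinal marker, stage 2 feeds the estimated random effects into a Cox regression. All data and names are simulated placeholders, and the sketch deliberately omits the bias correction that is the article's actual contribution.

```python
# Two-stage regression calibration: mixed model, then Cox model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from lifelines import CoxPHFitter

rng = np.random.default_rng(2)
n, m = 100, 5                                  # subjects, visits per subject
b = rng.normal(0, 1, n)                        # true random intercepts
long_df = pd.DataFrame({"id": np.repeat(np.arange(n), m),
                        "t": np.tile(np.arange(m), n)})
long_df["marker"] = 1.0 + b[long_df["id"]] + 0.2 * long_df["t"] \
    + rng.normal(0, 0.5, n * m)

# Stage 1: mixed model with a random intercept per subject.
mm = smf.mixedlm("marker ~ t", long_df, groups=long_df["id"]).fit()
re_hat = np.array([v.iloc[0] for v in mm.random_effects.values()])

# Stage 2: Cox model with the estimated random intercept as a covariate.
surv = pd.DataFrame({
    "time": rng.exponential(np.exp(-0.5 * b)),  # hazard depends on b
    "event": 1,
    "re_hat": re_hat,
})
print(CoxPHFitter().fit(surv, "time", "event").summary)
```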

5.
In this paper, an autonomic performance management approach is introduced that can be applied to a general class of web services deployed in large-scale distributed environments. The proposed approach adapts traditional large-scale control-based algorithms, using the interaction-balance approach, to the web service environment in order to manage response time and system-level power consumption. It is developed in a generic fashion that makes it suitable for web service deployments in which performance can be adjusted through a finite set of control inputs. The approach maintains service-level agreements, maximizes revenue, and minimizes infrastructure operating cost. Additionally, it is fault-tolerant with respect to failures of computing nodes inside the distributed deployment, and its computational overhead can be managed by choosing appropriate values of its configuration parameters at deployment time.

6.
This paper tracks the commitments of mechanistic explanations, focusing on the relation between activities at different levels. It is pointed out that the mechanistic approach is inherently committed to identifying causal connections at higher levels with causal connections at lower levels. For the mechanistic approach to succeed, a mechanism as a whole must do the very same thing that its parts, organised in a particular way, do. The mechanistic approach must also utilise bridge principles connecting the causal terms of different theoretical vocabularies in order to make the identities of causal connections transparent. These general commitments are then confronted with two claims made by certain proponents of the mechanistic approach: William Bechtel often argues that within the mechanistic framework it is possible to balance reducing higher levels against maintaining their autonomy at the same time, whereas, in a recent paper, Craver and Bechtel argue that the mechanistic approach is able to make downward causation intelligible. The paper concludes that the mechanistic approach imbued with identity statements is no better a candidate for anchoring higher levels to lower ones while maintaining their autonomy than standard reductive accounts are, and that the best mechanistic explanations can do is to show that downward causation does not exist.

7.
Albert PS, Hunsberger S. Biometrics 2005, 61(4):1115-1120
Wang, Ke, and Brown (2003, Biometrics 59, 804-812) developed a smoothing-based approach for modeling circadian rhythms with random effects. Their approach is flexible in that fixed and random covariates can affect both the amplitude and phase shift of a nonparametrically smoothed periodic function. In motivating their approach, Wang et al. stated that a simple sinusoidal function is too restrictive. In addition, they stated that "although adding harmonics can improve the fit, it is difficult to decide how many harmonics to include in the model, and the results are difficult to interpret." We disagree with the notion that harmonic models cannot be a useful tool in modeling longitudinal circadian rhythm data. In this note, we show how nonlinear mixed models with harmonic terms allow for a simple and flexible alternative to Wang et al.'s approach. We show how to choose the number of harmonics using penalized likelihood to flexibly model circadian rhythms and to estimate the effect of covariates on the rhythms. We fit harmonic models to the cortisol circadian rhythm data presented by Wang et al. to illustrate our approach. Furthermore, we evaluate the properties of our procedure with a small simulation study. The proposed parametric approach provides an alternative to Wang et al.'s semiparametric approach and has the added advantage of being easy to implement in most statistical software packages.
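A bare-bones version of the harmonic-selection idea in Python: fit cosinor models with an increasing number of harmonics by least squares and keep the number that minimizes BIC, one simple penalized-likelihood criterion (the note's own penalty may differ). The data are simulated, not the cortisol series.

```python
# Choose the number of harmonics for a periodic signal by BIC.
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0, 48, 200)                      # hours
period = 24.0
y = 2*np.cos(2*np.pi*t/period) + 0.5*np.sin(4*np.pi*t/period) \
    + rng.normal(0, 0.4, t.size)

def harmonic_design(t, k):
    cols = [np.ones_like(t)]
    for j in range(1, k + 1):
        cols += [np.cos(2*np.pi*j*t/period), np.sin(2*np.pi*j*t/period)]
    return np.column_stack(cols)

for k in range(1, 5):
    X = harmonic_design(t, k)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    n, p = t.size, X.shape[1]
    bic = n*np.log(rss/n) + p*np.log(n)          # Gaussian BIC, up to a constant
    print(f"k={k}: BIC={bic:.1f}")               # k=2 should win here
```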

8.
Objective: To establish a rapid method for obtaining high-quality total RNA from Cryptococcus neoformans. Methods: Total RNA was extracted from an encapsulated strain and a capsule-deficient strain of C. neoformans by four methods: acid-washed glass beads, grinding in liquid nitrogen, the single-step guanidinium thiocyanate method, and cold acid-washed glass beads combined with a Yeast RNA kit. OD260 and OD280 were measured with a UV spectrophotometer, agarose gel electrophoresis was performed, and RNA quality was verified by quantitative PCR. Results: The RNA yields of the four methods were 0.2 μg, 0.4 μg, 0.1 μg, and 0.6 μg per 10^5 cells, respectively. Conclusion: RNA extracted by cold acid-washed glass beads combined with the Yeast RNA kit showed the best homogeneity and integrity; it is a simple and rapid method for extracting RNA from C. neoformans, which is protected by the double barrier of capsule and cell wall.

9.
Tang DI, Geller NL. Biometrics 1999, 55(4):1188-1192
A simple approach is given for conducting closed testing in clinical trials with multiple endpoints in which group sequential monitoring is planned. The approach allows a flexible stopping time; the earliest and latest stopping times are described. The paradigm is applicable both to clinical trials with multiple endpoints and to the one-sided multiple comparison problem of several treatments versus a control. The approach leads to enhancements of previous methods and suggestions for new methods. An example of a respiratory disease trial with four endpoints is given.

10.
When Hamilton defined the concept of inclusive fitness, he was specifically looking to define the fitness of an individual in terms of that individual's behavior and the effects of its behavior on other related individuals. Although an intuitively attractive concept, issues of accounting for fitness and of correctly assigning it to the appropriate individual make this approach difficult to implement. The direct fitness approach has been suggested as a means of modeling kin selection while avoiding these issues. Whereas Hamilton's inclusive fitness approach assigns to the focal individual the fitness effects of its behavior on other related individuals, the direct fitness approach assigns the fitness effects of other actors to the focal individual. Contextual analysis was independently developed as a quantitative genetic approach for measuring multilevel selection in natural populations. Although the direct fitness approach and contextual analysis come from very different traditions, both methods rely on the same underlying equation; the primary difference is that the direct fitness approach uses fitness optimization modeling, whereas contextual analysis uses the same equation to solve for the change in fitness associated with a change in phenotype when the population is away from the optimal phenotype.
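The shared regression underlying contextual analysis can be written in a couple of lines of statsmodels: individual fitness is regressed on the individual phenotype and the group-mean phenotype, and the two partial coefficients estimate the individual-level and group-level selection gradients. The data and coefficients below are simulated illustrations only.

```python
# Contextual analysis as a multiple regression on simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
groups, size = 50, 10
df = pd.DataFrame({"g": np.repeat(np.arange(groups), size),
                   "z": rng.normal(size=groups * size)})
df["z_bar"] = df.groupby("g")["z"].transform("mean")   # group-mean phenotype
# Fitness rises with z but falls with the group mean (e.g. local competition).
df["w"] = 1 + 0.4*df["z"] - 0.6*df["z_bar"] + rng.normal(0, 0.2, len(df))

print(smf.ols("w ~ z + z_bar", data=df).fit().params)
```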

11.
When a new treatment is compared to an established one in a randomized clinical trial, it is standard practice to test statistically for non-inferiority rather than for superiority. When the endpoint is binary, the two treatments are usually compared using either an odds ratio or a difference of proportions. In this paper, we propose a mixed approach which uses both concepts: the non-inferiority margin is first defined using an odds ratio, and non-inferiority is ultimately proven statistically using a difference of proportions. The mixed approach is shown to be more powerful than the conventional odds-ratio approach when the efficacy of the established treatment is known with good precision and is high (e.g., more than 56% success). The power gained may in turn lead to a substantial reduction in the sample size needed to prove non-inferiority. The mixed approach can be generalized to ordinal endpoints.
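A numerical sketch of the mixed margin in Python: the margin is fixed on the odds-ratio scale, converted into a difference of proportions at the assumed control success rate, and non-inferiority is then tested on the difference scale. All rates, counts, and the margin value are illustrative assumptions, not the paper's examples.

```python
# Mixed non-inferiority margin: odds-ratio definition, difference-scale test.
import numpy as np
from scipy.stats import norm

p_control = 0.85          # assumed success rate of the established treatment
or_margin = 0.60          # largest tolerable odds ratio (new vs established)

# Translate the odds-ratio margin into a proportion, then a difference.
odds_margin = or_margin * p_control / (1 - p_control)
p_margin = odds_margin / (1 + odds_margin)
delta = p_control - p_margin             # margin on the difference scale
print(f"difference margin: {delta:.3f}")

# One-sided z-test: H0: p_new - p_ctl <= -delta  vs  H1: p_new - p_ctl > -delta.
x_new, n_new, x_ctl, n_ctl = 410, 500, 428, 500
p_new, p_ctl = x_new / n_new, x_ctl / n_ctl
se = np.sqrt(p_new*(1-p_new)/n_new + p_ctl*(1-p_ctl)/n_ctl)
z = (p_new - p_ctl + delta) / se
print(f"z = {z:.2f}, one-sided p = {norm.sf(z):.4f}")
```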

12.
Moore JH, Hahn LW. BioSystems 2003, 72(1-2):177-186
Understanding how DNA sequence variations impact human health through a hierarchy of biochemical and physiological systems is expected to improve the diagnosis, prevention, and treatment of common, complex human diseases. We have previously developed a hierarchical dynamic systems approach based on Petri nets for generating biochemical network models that are consistent with genetic models of disease susceptibility. This modeling approach uses an evolutionary computation approach called grammatical evolution as a search strategy for optimal Petri net models. We have previously demonstrated that this approach routinely identifies biochemical network models that are consistent with a variety of genetic models in which disease susceptibility is determined by nonlinear interactions between two DNA sequence variations. In the present study, we evaluate whether the Petri net approach is capable of identifying biochemical networks that are consistent with disease susceptibility due to higher order nonlinear interactions between three DNA sequence variations. The results indicate that our model-building approach is capable of routinely identifying good, but not perfect, Petri net models. Ideas for improving the algorithm for this high-dimensional problem are presented.
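For readers unfamiliar with the modeling substrate, the toy sketch below implements the basic Petri net firing rule in Python on a made-up two-reaction network; the paper's actual models, and the grammatical-evolution search over them, are far richer.

```python
# Minimal Petri net: places hold token counts (molecule quantities),
# transitions consume input tokens and produce output tokens when enabled.
marking = {"A": 5, "B": 3, "C": 0, "D": 0}
transitions = [
    ({"A": 1, "B": 1}, {"C": 1}),   # A + B -> C
    ({"C": 2}, {"D": 1}),           # 2C -> D
]

def enabled(pre, m):
    return all(m[p] >= n for p, n in pre.items())

for _ in range(10):                 # fire greedily for a few steps
    for pre, post in transitions:
        if enabled(pre, marking):
            for p, n in pre.items():
                marking[p] -= n
            for p, n in post.items():
                marking[p] += n
print(marking)                      # final token distribution
```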

13.
A recent study using Heckman-type selection models to adjust for non-response in the Zambia 2007 Demographic and Health Survey (DHS) found a large correction in HIV prevalence for males. We aim to validate this finding, replicate the adjustment approach in other DHSs, apply it in an external empirical context, and assess the robustness of the technique to different adjustment approaches. We used six DHSs and an HIV prevalence study from rural South Africa for validation and replication. We also developed an alternative, systematic model of the selection processes and applied it to all surveys, decomposing the corrections from both approaches into rate-change and age-structure-change components. We reproduce the adjustment approach for the 2007 Zambia DHS, deriving results comparable with the original findings, and replicate the approach in several other DHSs. It also yields reasonable adjustments for the survey in rural South Africa, and is relatively robust to how the adjustment is specified. The Heckman selection model is a useful tool for assessing the possibility and extent of selection bias in HIV prevalence estimates from sample surveys.
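A minimal two-step Heckman sketch in Python: a probit selection equation, then an outcome regression augmented with the inverse Mills ratio. Two hedges apply: the DHS analyses model a binary outcome (HIV status), for which a bivariate-probit variant is used rather than the continuous-outcome two-step shown here, and all data below are simulated.

```python
# Classic Heckman two-step on simulated data with non-random selection.
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(5)
n = 5000
x = rng.normal(size=n)                 # outcome covariate
z = rng.normal(size=n)                 # instrument affecting selection only
u, e = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], n).T

s = (0.5 + 0.8*z + u > 0).astype(int)  # selected (consented/tested) or not
y = 1.0 + 0.6*x + e                    # outcome, observed only when s == 1

# Step 1: probit selection equation and inverse Mills ratio.
probit = sm.Probit(s, sm.add_constant(np.column_stack([x, z]))).fit(disp=0)
xb = probit.fittedvalues               # linear predictor on the probit scale
imr = norm.pdf(xb) / norm.cdf(xb)

# Step 2: outcome regression on respondents, adding the IMR as a regressor.
sel = s == 1
X2 = sm.add_constant(np.column_stack([x[sel], imr[sel]]))
print(sm.OLS(y[sel], X2).fit().params)  # coefficient on x is bias-corrected
```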

14.
Genomics-based methods are now commonplace in natural products research. A phylogeny-guided mining approach provides a means to quickly screen a large number of microbial genomes or metagenomes in search of new biosynthetic gene clusters of interest. In this approach, biosynthetic genes serve as molecular markers, and phylogenetic trees built from known and unknown marker gene sequences are used to quickly prioritize biosynthetic gene clusters for characterization of their metabolites. Use of this approach has increased over the last few years alongside the emergence of low-cost sequencing technologies. The aim of this review is to discuss the basic concept of phylogeny-guided mining and to provide examples in which the approach was successfully applied to discover new natural products from microbial genomes and metagenomes. I believe that the phylogeny-guided mining approach will continue to play an important role in genomics-based natural products research.
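A compressed illustration of the tree-building step, assuming Biopython is available and the marker gene sequences are already aligned (real pipelines would first align them with a tool such as MAFFT). The sequences and IDs are short placeholders standing in for known and unknown marker genes.

```python
# Neighbor-joining tree over aligned marker gene sequences with Biopython.
from Bio import Phylo
from Bio.Align import MultipleSeqAlignment
from Bio.Seq import Seq
from Bio.SeqRecord import SeqRecord
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

aln = MultipleSeqAlignment([
    SeqRecord(Seq("ATGGCTAAA"), id="known_marker_1"),
    SeqRecord(Seq("ATGGCTAGA"), id="known_marker_2"),
    SeqRecord(Seq("ATGCCTATA"), id="metagenome_hit_1"),
    SeqRecord(Seq("TTGCCTATA"), id="metagenome_hit_2"),
])
dm = DistanceCalculator("identity").get_distance(aln)
tree = DistanceTreeConstructor().nj(dm)
Phylo.draw_ascii(tree)   # unknowns clading away from knowns get priority
```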

15.
The generalized nonlinear Klein-Gordon equation plays an important role in quantum mechanics. In this paper, a new three-time-level implicit approach based on cubic trigonometric B-splines is presented for the approximate solution of this equation with Dirichlet boundary conditions. The usual finite difference approach is used to discretize the time derivative, while the cubic trigonometric B-spline is applied as an interpolating function in the space dimension. Several examples are discussed to exhibit the feasibility and capability of the approach. The absolute errors and error norms are also computed at different times to assess its performance, and the results are found to be in good agreement with known solutions and with existing schemes in the literature.
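The sketch below is not the paper's trigonometric B-spline scheme; it is a plain explicit central-difference solver for one nonlinear Klein-Gordon equation, u_tt = u_xx - u^3 with homogeneous Dirichlet boundaries, shown only to illustrate the three-time-level stepping that such schemes share. The grid sizes and initial data are arbitrary.

```python
# Explicit three-time-level finite differences for u_tt = u_xx - u**3.
import numpy as np

L_dom, T, nx, nt = 1.0, 1.0, 101, 2000
x = np.linspace(0, L_dom, nx)
dx, dt = x[1] - x[0], T / nt              # dt/dx = 0.05 < 1: CFL satisfied

u_prev = np.sin(np.pi * x)                # u(x, 0)
u_curr = u_prev.copy()                    # crude first step: assumes u_t(x,0)=0
for _ in range(nt):
    uxx = np.zeros_like(u_curr)
    uxx[1:-1] = (u_curr[2:] - 2*u_curr[1:-1] + u_curr[:-2]) / dx**2
    u_next = 2*u_curr - u_prev + dt**2 * (uxx - u_curr**3)
    u_next[0] = u_next[-1] = 0.0          # Dirichlet boundary conditions
    u_prev, u_curr = u_curr, u_next

print(f"max |u| at t={T}: {np.abs(u_curr).max():.4f}")
```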

16.
This study develops a hybrid material flow analysis (HMFA) method to evaluate the annual additional quantity of material stock, known as net additions to stock (NAS), at both micro and macro levels by analyzing fixed capital formation (FCF) and total supply in input-output tables (IOTs). HMFA turns NAS from a mere balancing item of the top-down approach into an indicator that is meaningful for evaluating urban ores. To verify the validity of HMFA, this study compares it with a top-down approach and a bottom-up approach by assessing the NAS of Taiwan and Germany. The NAS estimated by HMFA can be regarded as a more conservative upper bound than the top-down estimate, whereas the bottom-up approach often underestimates. HMFA thus proves to be an efficient and rational evaluation method: it overcomes a key limitation of the top-down approach in assessing micro-level material stock, and it solves the data-demand problem of the bottom-up approach in quantifying material stock.
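A toy numerical contrast of the two accounting routes, with all tonnages made up: the top-down NAS falls out of the economy-wide material balance, while the hybrid/FCF route sums the material embodied in fixed capital formation categories.

```python
# Top-down balance vs FCF-based summation for net additions to stock (NAS).
domestic_extraction = 120.0   # Mt/yr (illustrative)
imports, exports = 40.0, 30.0
emissions_and_waste = 95.0

# Top-down: NAS is the balancing item of the material balance.
nas_top_down = domestic_extraction + imports - exports - emissions_and_waste

# Hybrid flavour: sum material embodied in fixed capital formation.
fcf = {"buildings": 22.0, "infrastructure": 9.0, "machinery": 3.5}
nas_hybrid = sum(fcf.values())

print(f"top-down NAS: {nas_top_down:.1f} Mt, FCF-based NAS: {nas_hybrid:.1f} Mt")
```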

17.
Predicting the bioactivity of peptides and proteins is an important challenge in drug development and protein engineering. In this study we introduce a novel approach, the "physics and chemistry-driven artificial neural network (Phys-Chem ANN)", to deal with this problem. Unlike existing ANN approaches, which were designed under the inspiration of biological neural systems, the Phys-Chem ANN approach is based on physical and chemical principles as well as the structural features of proteins. In the Phys-Chem ANN model the "hidden layers" are no longer virtual "neurons" but real structural units of proteins and peptides. It is a hybrid approach, combining the linear free energy concept of quantitative structure-activity relationships (QSAR) with the advanced mathematical techniques of ANNs, and it adopts an iterative feedback procedure incorporating both machine-learning and artificial-intelligence capabilities. In addition to making more accurate predictions of the bioactivities of proteins and peptides than the traditional QSAR approach, the Phys-Chem ANN approach provides more insight into the relationship between bioactivity and the structures involved than a conventional ANN does. As an example of its application, a predictive model for the conformational stability of human lysozyme is presented.

18.
The extremely complicated nature of many biological problems makes them bear the features of fuzzy sets, with vague, imprecise, noisy, ambiguous, or input-missing information. For instance, the current data for classifying protein structural classes are typically a fuzzy set. To deal with this kind of problem, the AAPCA (Amino Acid Principal Component Analysis) approach was introduced. In the AAPCA approach, the 20-dimensional amino acid composition space is reduced to an orthogonal space with fewer dimensions, and the original basis functions are converted into a set of orthogonal, normalized basis functions. The advantage of such an approach is that it can minimize the random errors and redundant information in protein datasets through principal component selection, remarkably improving the success rates in predicting protein structural classes. It is anticipated that the AAPCA approach can be used to deal with many other classification problems in proteins as well.
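The dimension-reduction step is essentially PCA on composition vectors followed by a classifier, which the Python sketch below reproduces with scikit-learn. The compositions and class labels are randomly generated placeholders, so the cross-validated score is only a demonstration of the pipeline, not of AAPCA's reported accuracy.

```python
# PCA on 20-dimensional amino acid compositions, then classification.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(6)
comp = rng.dirichlet(np.ones(20), size=300)   # 300 proteins x 20 AA fractions
labels = rng.integers(0, 4, size=300)         # 4 structural classes (dummy)

clf = make_pipeline(PCA(n_components=8), LogisticRegression(max_iter=1000))
print(cross_val_score(clf, comp, labels, cv=5).mean())
```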

19.
Sequence-based motif prediction is of great interest and remains a challenge. In this work, we develop a local combinational variable approach for sequence-based helix-turn-helix (HTH) motif prediction. First we choose a data set of 88 protein sequences, each 22 amino acids long, and launch an optimized traversal to extract local combinational segments (LCS) from the data set. Then, after LCS refinement, local combinational variables (LCV) are generated to construct prediction models for HTH motifs. The prediction ability of LCV sets at different thresholds is calculated to settle on a moderate threshold. The large data set we used comprises 13 HTH families, with 17,455 sequences in total. Our approach predicts HTH motifs precisely using only primary protein sequence information, with 93.29% accuracy, 93.93% sensitivity, and 92.66% specificity. Predictions for newly reported HTH-containing proteins, compared with another prediction web service, show that the LCV approach yields a good prediction model. Comparisons with profile-HMM models from the Pfam protein families database show that the LCV approach maintains a good balance when dealing with HTH-containing proteins and non-HTH proteins at the same time, and is to some extent complementary to profile-HMM models owing to its better rejection of false positives. Furthermore, genome-wide predictions detect new HTH proteins in both Homo sapiens and Escherichia coli, which broadens the applicability of the LCV approach. Software for mining LCVs from a sequence data set can be obtained freely from the anonymous FTP site ftp://cheminfo.tongji.edu.cn/LCV/.
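One plausible toy reading of the variable-construction step, with no claim to match the authors' actual segment extraction: every (position pair, residue pair) pattern that recurs in enough training sequences becomes one binary "local combinational variable". Sequence data and the support threshold are random placeholders.

```python
# Toy LCV construction: frequent position/residue pair patterns -> 0/1 features.
import itertools
import numpy as np

rng = np.random.default_rng(7)
alphabet = list("ACDEFGHIKLMNPQRSTVWY")
seqs = ["".join(rng.choice(alphabet, 22)) for _ in range(88)]
min_support = 4                      # arbitrary support threshold

# Count co-occurring residue pairs at fixed position pairs.
counts = {}
for s in seqs:
    for i, j in itertools.combinations(range(22), 2):
        key = (i, j, s[i], s[j])
        counts[key] = counts.get(key, 0) + 1
lcvs = [k for k, c in counts.items() if c >= min_support]

# Binary feature matrix: sequences x LCVs, ready for any classifier.
X = np.array([[s[i] == a and s[j] == b for (i, j, a, b) in lcvs]
              for s in seqs], dtype=int)
print(f"{len(lcvs)} LCVs, feature matrix shape {X.shape}")
```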

20.