Similar documents
20 similar documents found; search time: 218 ms
1.
2.
Purine nucleoside phosphorylase (PNP) catalyzes the phosphorolysis of the N-ribosidic bonds of purine nucleosides and deoxynucleosides. In humans, PNP is the only route for degradation of deoxyguanosine, and genetic deficiency of this enzyme leads to profound T-cell-mediated immunosuppression. PNP is therefore a target for inhibitor development aimed at T-cell immune response modulation, and its low-resolution structure has been used for drug design. Here we report the structure of human PNP solved at 2.3 Å resolution using synchrotron radiation and cryocrystallographic techniques. This structure allowed a more precise analysis of the active site, generating a more reliable model for substrate binding. The higher-resolution data allowed the identification of water molecules in the active site, which suggests binding partners for potential ligands. Furthermore, the present structure may be used in new structure-based design of PNP inhibitors.

3.
Protein structure prediction methods typically use statistical potentials, which rely on statistics derived from a database of known protein structures. In the vast majority of cases, these potentials involve pairwise distances or contacts between amino acids or atoms. Although some potentials beyond pairwise interactions have been described, the formulation of a general multibody potential is seen as intractable due to the perceived limited amount of data. In this article, we show that it is possible to formulate a probabilistic model of higher-order interactions in proteins without arbitrarily limiting the number of contacts. The success of this approach is based on replacing a naive table-based approach with a simple hierarchical model involving suitable probability distributions and conditional independence assumptions. The model captures the joint probability distribution of an amino acid and its neighbors, local structure, and solvent exposure. We show that this model can be used to approximate the conditional probability distribution of an amino acid sequence given a structure using a pseudo-likelihood approach. We verify the model by decoy recognition and site-specific amino acid predictions. Our coarse-grained model is compared to state-of-the-art methods that use full atomic detail. This article illustrates how the use of simple probabilistic models can lead to new opportunities in the treatment of nonlocal interactions in knowledge-based protein structure prediction and design. Proteins 2013; 81:1340–1350. © 2013 Wiley Periodicals, Inc.
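The decoy-recognition idea — scoring a sequence against a structure by summing per-site conditional log-probabilities given each site's structural context — can be sketched with a toy two-state (buried/exposed) model. The probability tables below are invented for illustration; they are not the paper's fitted hierarchical model:

```python
import math

# Toy conditional tables P(amino acid | environment); illustrative values only.
p_aa_given_env = {
    "buried":  {"L": 0.6, "K": 0.1, "G": 0.3},
    "exposed": {"L": 0.2, "K": 0.5, "G": 0.3},
}

def pseudo_log_likelihood(sequence, environments):
    """Pseudo-likelihood of a sequence given a structure: sum of per-site
    conditional log-probabilities, treating sites as conditionally
    independent given their structural context."""
    return sum(math.log(p_aa_given_env[env][aa])
               for aa, env in zip(sequence, environments))

# Hydrophobic-inside/charged-outside scores higher than the swapped decoy.
native = pseudo_log_likelihood("LK", ["buried", "exposed"])
decoy = pseudo_log_likelihood("KL", ["buried", "exposed"])
print(native > decoy)  # True
```

The real model conditions on a much richer context (neighbor identities, local structure, solvent exposure), but the conditional-independence scoring step has this shape.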

4.
In vitro cell imaging is a useful exploratory tool for cell behavior monitoring with a wide range of applications in cell biology and pharmacology. Combined with appropriate image analysis techniques, this approach has been shown to provide useful information on the detection and dynamic analysis of cell events. In this context, numerous efforts have been focused on cell migration analysis. In contrast, the cell division process has been the subject of fewer investigations. The present work focuses on this latter aspect and shows that, in complement to cell migration data, interesting information related to cell division can be extracted from phase-contrast time-lapse image series, in particular cell division duration, which is not provided by standard cell assays using endpoint analyses. We illustrate our approach by analyzing the effects induced by two sigma-1 receptor ligands (haloperidol and 4-IBP) on the behavior of two glioma cell lines using two in vitro cell models, i.e., the low-density individual cell model and the high-density scratch wound model. This illustration also shows that the data provided by our approach are suggestive as to the mechanism of action of compounds, and are thus capable of informing the appropriate selection of further time-consuming and more expensive biological evaluations required to elucidate a mechanism.

5.
We propose a self-consistent approach to analyze knowledge-based atom-atom potentials used to calculate protein-ligand binding energies. Ligands complexed to actual protein structures were first built using the SMoG growth procedure (DeWitte & Shakhnovich, 1996) with a chosen input potential. These model protein-ligand complexes were used to construct databases from which knowledge-based protein-ligand potentials were derived. We then tested several different modifications to such potentials and evaluated their performance on their ability to reconstruct the input potential using the statistical information available from a database composed of model complexes. Our data indicate that the most significant improvement resulted from properly accounting for the following key issues when estimating the reference state: (1) the presence of significant nonenergetic effects that influence the contact frequencies and (2) the presence of correlations in contact patterns due to chemical structure. The most successful procedure was applied to derive an atom-atom potential for real protein-ligand complexes. Despite the simplicity of the model (pairwise contact potential with a single interaction distance), the derived binding free energies showed a statistically significant correlation (approximately 0.65) with experimental binding scores for a diverse set of complexes.
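The core inverse-Boltzmann step that turns contact statistics into a potential — comparing observed contact frequencies with a reference state built from atom-type abundance alone — can be sketched as follows. All counts and abundances are made-up illustrative numbers, not values from the SMoG database:

```python
import math
from collections import Counter

# Illustrative contact counts between atom types in a set of complexes.
observed = Counter({("C", "C"): 120, ("C", "N"): 80, ("N", "O"): 40})
total_obs = sum(observed.values())

# Naive reference state: contact probability expected from abundance alone.
abundance = {"C": 0.5, "N": 0.3, "O": 0.2}

def knowledge_based_energy(pair, kT=0.593):   # kT in kcal/mol near 298 K
    """Inverse-Boltzmann energy: -kT * ln(observed / reference frequency)."""
    a, b = pair
    f_obs = observed[pair] / total_obs
    f_ref = abundance[a] * abundance[b] * (2 if a != b else 1)
    return -kT * math.log(f_obs / f_ref)

for pair in observed:
    print(pair, round(knowledge_based_energy(pair), 3))
```

Over-represented contacts come out favourable (negative energy). The paper's point is precisely that a good reference state must also absorb nonenergetic frequency effects and chemical-structure correlations, which this naive abundance-only reference ignores.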

6.
We present a model structure of a candidate tetramer for HIV-1 integrase. The model was built in three steps using data from fluorescence anisotropy, structures of the individual integrase domains, cross-linking data, and other biochemical data. First, the structure of the full-length integrase monomer was modeled using the individual domain structures and the hydrodynamic properties of the full-length protein that were recently measured by fluorescence depolarization. We calculated the rotational correlation times for different arrangements of three integrase domains, revealing that only structures with close proximity among the domains satisfied the experimental data. The orientations of the domains were constrained by iterative tests against the data on cross-linking and footprinting in integrase-DNA complexes. Second, the structure of an integrase dimer was obtained by joining the model monomers in accordance with the available dimeric crystal structures of the catalytic core. The hydrodynamic properties of the dimer were in agreement with the experimental values. Third, the active sites of the two model dimers were placed in agreement with the spacing between the sites of integration on target DNA as well as the integrase-DNA cross-linking data, resulting in twofold symmetry of a tetrameric complex. The model is consistent with the experimental data indicating that the F185K substitution, which is found in the model at a tetramerization interface, selectively disrupts correct complex formation in vitro and HIV replication in vivo. Our model of the integrase tetramer bound to DNA may help to design anti-integrase inhibitors.

7.
The hierarchical metaregression (HMR) approach is a multiparameter Bayesian approach for meta-analysis, which generalizes standard mixed-effects models by explicitly modeling the data-collection process in the meta-analysis. The HMR makes it possible to investigate the potential external validity of experimental results as well as to assess the internal validity of the studies included in a systematic review. The HMR automatically identifies studies presenting conflicting evidence and downweights their influence in the meta-analysis. In addition, the HMR supports cross-evidence synthesis, which combines aggregated results from randomized controlled trials to predict effectiveness in a single-arm observational study with individual participant data (IPD). In this paper, we evaluate the HMR approach using simulated data examples. We present a new real case study in diabetes research, along with a new R package called jarbes (just a rather Bayesian evidence synthesis), which automates the complex computations involved in the HMR.

8.
In this article, a model-free feedback control design is proposed for drug administration in mixed cancer therapy. This strategy is attractive because of the parameter uncertainties that are unavoidable when dealing with biological models. The proposed feedback scheme uses past measurements to update an online simplified model. The control design is then based on model predictive control, in which a suitable switching is performed between two different cost functions. The effectiveness of the proposed model-free control strategy is validated using a recently developed model (unknown to the controller) governing cancer growth at the cell-population level under combined immunotherapy and chemotherapy, and using real human data. © 2009 American Institute of Chemical Engineers Biotechnol. Prog., 2009

9.
Model choice in linear mixed-effects models for longitudinal data is a challenging task. In addition to the selection of covariates, the choice of the random effects and the residual correlation structure should also be possible. Application of classical model choice criteria such as the Akaike information criterion (AIC) or the Bayesian information criterion is not obvious, and many versions exist. In this article, a predictive cross-validation approach to model choice is proposed based on the logarithmic and the continuous ranked probability score. In contrast to full cross-validation, the model has to be fitted only once, which enables fast computations, even for large data sets. Relationships to the recently proposed conditional AIC are discussed. The methodology is applied to search for the best model to predict the course of CD4+ counts using data obtained from the Swiss HIV Cohort Study.
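Both proper scoring rules named above have closed forms for a Gaussian predictive distribution; a minimal sketch (generic formulas, not tied to the CD4+ application):

```python
import math

def log_score(y, mu, sigma):
    """Negative log predictive density of a Gaussian forecast (lower is better)."""
    return 0.5 * math.log(2 * math.pi * sigma**2) + (y - mu)**2 / (2 * sigma**2)

def crps_gaussian(y, mu, sigma):
    """Closed-form continuous ranked probability score for a Gaussian
    predictive distribution (lower is better)."""
    z = (y - mu) / sigma
    pdf = math.exp(-z**2 / 2) / math.sqrt(2 * math.pi)
    cdf = 0.5 * (1 + math.erf(z / math.sqrt(2)))
    return sigma * (z * (2 * cdf - 1) + 2 * pdf - 1 / math.sqrt(math.pi))

# A forecast centred on the truth scores better than a biased one.
print(crps_gaussian(0.0, 0.0, 1.0))   # ≈ 0.2337
print(crps_gaussian(0.0, 2.0, 1.0))
```

In a predictive cross-validation as described above, these scores would be averaged over held-out observations under each candidate model, and the lowest-scoring model chosen.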

10.
Survival data are often modelled by the Cox proportional hazards model, which assumes that covariate effects are constant over time. In recent years however, several new approaches have been suggested which allow covariate effects to vary with time. Non-proportional hazard functions, with covariate effects changing dynamically, can be fitted using penalised spline (P-spline) smoothing. By utilising the link between P-spline smoothing and generalised linear mixed models, the smoothing parameters steering the amount of smoothing can be selected. A hybrid routine, combining the mixed model approach with a classical Akaike criterion, is suggested. This approach is evaluated with simulations and applied to data from the West of Scotland Coronary Prevention Study.
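The penalized-spline/mixed-model link can be illustrated with the classic formulation in which knot coefficients act like random effects shrunk by a ridge penalty. This is a plain-regression sketch of the smoothing engine only, not the Cox fit, and every setting below (basis, knot count, smoothing parameter) is illustrative:

```python
import numpy as np

def penalized_spline_fit(x, y, n_knots=15, lam=1.0):
    """Penalized spline smoother in mixed-model form: truncated-line basis
    with a ridge penalty on the knot coefficients. The knot coefficients
    play the role of random effects, which is the link that lets mixed-model
    machinery choose the smoothing parameter."""
    knots = np.quantile(x, np.linspace(0, 1, n_knots + 2)[1:-1])
    X = np.column_stack([np.ones_like(x), x])          # fixed effects
    Z = np.clip(x[:, None] - knots[None, :], 0, None)  # random-effect basis
    C = np.hstack([X, Z])
    # Penalize only the knot coefficients (ridge <-> random-effect shrinkage).
    P = np.diag([0.0, 0.0] + [lam] * n_knots)
    beta = np.linalg.solve(C.T @ C + P, C.T @ y)
    return C @ beta

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 200)
truth = np.sin(2 * np.pi * x)
y = truth + rng.normal(0, 0.3, 200)
fit = penalized_spline_fit(x, y, lam=1.0)
print(np.mean((fit - truth) ** 2) < np.mean((y - truth) ** 2))
```

In the time-varying-effect setting, the same penalized basis is placed on the time axis of a covariate effect inside the Cox partial likelihood rather than on a regression mean.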

11.
Huang Y, Liu D, Wu H. Biometrics 2006;62(2):413–423
HIV dynamics studies have significantly contributed to the understanding of HIV infection and antiviral treatment strategies. But most studies are limited to short-term viral dynamics due to the difficulty of establishing a relationship of antiviral response with multiple treatment factors such as drug exposure and drug susceptibility during long-term treatment. In this article, a mechanism-based dynamic model is proposed for characterizing long-term viral dynamics with antiretroviral therapy, described by a set of nonlinear differential equations without closed-form solutions. In this model we directly incorporate drug concentration, adherence, and drug susceptibility into a function of treatment efficacy, defined as an inhibition rate of virus replication. We investigate a Bayesian approach under the framework of hierarchical Bayesian (mixed-effects) models for estimating unknown dynamic parameters. In particular, interest focuses on estimating individual dynamic parameters. The proposed methods not only help to alleviate the difficulty in parameter identifiability, but also flexibly deal with sparse and unbalanced longitudinal data from individual subjects. For illustration purposes, we present one simulation example to implement the proposed approach and apply the methodology to a data set from an AIDS clinical trial. The basic concept of the longitudinal HIV dynamic systems and the proposed methodologies are generally applicable to any other biomedical dynamic systems.
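The kind of nonlinear ODE system involved can be sketched with a basic target-cell-limited model in which a treatment-efficacy parameter eps scales the infection rate. This is a generic textbook model with illustrative parameter values, not the paper's model or its estimates:

```python
def simulate_viral_dynamics(eps, days=50, dt=0.01,
                            lam=1e4, d=0.01, k=2.4e-8,
                            delta=1.0, p=3000.0, c=23.0):
    """Forward-Euler integration of a basic target-cell model:
       T' = lam - d*T - (1-eps)*k*V*T     (uninfected target cells)
       I' = (1-eps)*k*V*T - delta*I       (infected cells)
       V' = p*I - c*V                     (free virus)
    eps is treatment efficacy: 0 = no inhibition, 1 = full inhibition."""
    T, I, V = 1e6, 0.0, 1e-3
    for _ in range(int(days / dt)):
        dT = lam - d * T - (1 - eps) * k * V * T
        dI = (1 - eps) * k * V * T - delta * I
        dV = p * I - c * V
        T += dT * dt
        I += dI * dt
        V += dV * dt
    return V

# Higher drug efficacy suppresses the long-term viral load.
print(simulate_viral_dynamics(eps=0.9) < simulate_viral_dynamics(eps=0.0))
```

In the article's setting, eps itself is a function of drug concentration, adherence, and susceptibility, and the dynamic parameters are given subject-level priors in a hierarchical Bayesian fit rather than fixed as here.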

12.
In long-term clinical studies, recurrent event data are sometimes collected and used to contrast the efficacies of two different treatments. The event reoccurrence rates can be compared using the popular negative binomial model, which incorporates information related to patient heterogeneity into a data analysis. For treatment allocation, a balanced approach in which equal sample sizes are obtained for both treatments is predominantly adopted. However, if one treatment is superior, then it may be desirable to allocate fewer subjects to the less-effective treatment. To accommodate this objective, a sequential response-adaptive treatment allocation procedure is derived based on the doubly adaptive biased coin design. Our proposed treatment allocation schemes have been shown to be capable of reducing the number of subjects receiving the inferior treatment while simultaneously retaining a test power level that is comparable to that of a balanced design. The redesign of a clinical study illustrates the advantages of using our procedure.
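The allocation step of a doubly adaptive biased coin design can be sketched with the standard Hu–Zhang allocation function. In practice the target proportion rho would be re-estimated from the accruing negative binomial treatment effects; that estimation step is omitted here and rho is passed in directly:

```python
def dbcd_allocation_prob(n1, n, rho, gamma=2.0):
    """Doubly adaptive biased coin design allocation function (Hu-Zhang form):
    probability that the next subject is assigned to treatment 1, given that
    n1 of n subjects so far received treatment 1 and the current estimated
    target proportion is rho. gamma tunes how aggressively the design
    corrects deviations from the target."""
    if n == 0:
        return rho
    x = n1 / n
    if x in (0.0, 1.0):              # guard the boundary cases
        return 1.0 - x
    num = rho * (rho / x) ** gamma
    den = num + (1 - rho) * ((1 - rho) / (1 - x)) ** gamma
    return num / den

# An under-represented arm is boosted above the target; at the target,
# the design behaves like a fair coin with probability rho.
print(dbcd_allocation_prob(n1=30, n=100, rho=0.5))  # > 0.5
print(dbcd_allocation_prob(n1=50, n=100, rho=0.5))  # = 0.5
```

Driving rho below 0.5 for an inferior arm is what reduces the number of subjects on that arm while the gamma-controlled correction keeps the realized proportions, and hence power, close to target.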

13.
G protein-coupled receptors (GPCRs) are integral membrane proteins that play an important role in regulating key physiological functions, and are targets of about 50% of all recently launched drugs. High-resolution experimental structures are available for only very few GPCRs. As a result, structure-based drug design efforts for GPCRs continue to rely on in silico modeling, which is considered an extremely difficult task, especially for these receptors. Here, we describe Gmodel, a novel approach for building 3D atomic models of GPCRs using a normal mode-based refinement of homology models. Gmodel uses a small set of relevant low-frequency vibrational modes derived from a Random Elastic Network model to efficiently sample large-scale receptor conformational changes and generate an ensemble of alternative models. These are used to assemble receptor–ligand complexes by docking a known active ligand into each of the alternative models. Each of these is next filtered using restraints derived from known mutation and binding affinity data and is refined in the presence of the active ligand. In this study, Gmodel was applied to generate models of the antagonist form of the histamine 3 (H3) receptor. The validity of this novel modeling approach is demonstrated by performing virtual screening (using the refined models) that consistently produces highly enriched hit lists. The models are further validated by analyzing the available SAR related to classical H3 antagonists, and are found to be in good agreement with the available experimental data, thus providing novel insights into the receptor–ligand interactions. Proteins 2010. © 2009 Wiley-Liss, Inc.

14.
In many clinical trials, both repeated measures data and event history data are simultaneously observed from the same subject. These two types of responses are usually correlated because they come from the same subject. In this article, we propose a joint model for the combined analysis of repeated measures data and event history data in the framework of hierarchical generalized linear models. The correlation between repeated measures and event time is modelled by introducing a shared random effect. The model parameters are estimated using the hierarchical-likelihood approach. The proposed model is illustrated using a real data set for renal-transplant patients.

15.
This paper discusses multivariate interval-censored failure time data, which occur when there exist several correlated survival times of interest and only interval-censored data are available for each survival time. Such data occur in many fields. One is tumorigenicity experiments, which usually concern different types of tumors, tumors occurring in different locations in animals, or both. For regression analysis of such data, we develop a marginal inference approach using the additive hazards model and apply it to a set of bivariate interval-censored data arising from a tumorigenicity experiment. Simulation studies are conducted to evaluate the presented approach and suggest that it performs well in practical situations.

16.
A common problem in environmental epidemiology is to estimate spatial variation in disease risk after accounting for known risk factors. In this paper we consider this problem in the context of matched case-control studies. We extend the generalised additive model approach of Kelsall and Diggle (1998) to studies in which each case has been individually matched to a set of controls. We discuss a method for fitting this model to data, apply the method to a matched study on perinatal death in the North West Thames region of England and explain why, if spatial variation is of particular scientific interest, matching is undesirable.

17.
Lei Xu, Jun Shao. Biometrics 2009;65(4):1175–1183
In studies with longitudinal or panel data, missing responses often depend on values of responses through a subject-level unobserved random effect. Besides the likelihood approach based on parametric models, there exists a semiparametric method, the approximate conditional model (ACM) approach, which relies on the availability of a summary statistic and a linear or polynomial approximation to some random effects. However, two important issues must be addressed in applying ACM. The first is how to find a summary statistic and the second is how to estimate the parameters in the original model using estimates of parameters in ACM. This study addresses these two issues. For the first issue, we derive summary statistics under various situations. For the second issue, we propose to use a grouping method, instead of a linear or polynomial approximation to random effects. Because the grouping method is a moment-based approach, the conditions we assume in deriving summary statistics are weaker than the existing ones in the literature. When the derived summary statistic is continuous, we propose to use a classification tree method to obtain an approximate summary statistic for grouping. Some simulation results are presented to study the finite-sample performance of the proposed method. An application is illustrated using data from the study of Modification of Diet in Renal Disease.

18.
Process Analytical Technology (PAT) has been gaining a lot of momentum in the biopharmaceutical community because of the potential for continuous real-time quality assurance, resulting in improved operational control and compliance. In previous publications, we have demonstrated the feasibility of applications involving the use of high performance liquid chromatography (HPLC) and ultra performance liquid chromatography (UPLC) for real-time pooling of process chromatography columns. In this article we follow a similar approach to perform lab studies and create a model for a chromatography step of a different modality (hydrophobic interaction chromatography). The predictions of the model compare well with actual experimental data, demonstrating the usefulness of the approach across different modes of chromatography. Also, the use of online HPLC when the step is scaled up to pilot scale (a 2294-fold scale-up from a 3.4 mL column in the lab to a 7.8 L column in the pilot plant) and eventually to manufacturing scale (a 45930-fold scale-up from a 3.4 mL column in the lab to a 158 L column in the manufacturing plant) is examined. Overall, the results confirm that for the application under consideration, online HPLC offers a feasible approach for analysis that can facilitate real-time decisions for column pooling based on product quality attributes. The observations demonstrate that the proposed analytical scheme allows us to meet two of the key goals that have been outlined for PAT, i.e., "variability is managed by the process" and "product quality attributes can be accurately and reliably predicted over the design space established for materials used, process parameters, manufacturing, environmental, and other conditions". The application presented here can be extended to other modes of process chromatography and/or HPLC analysis. © 2009 American Institute of Chemical Engineers Biotechnol. Prog., 2010

19.
Linear mixed models are frequently used to obtain model-based estimators in small area estimation (SAE) problems. Such models, however, are not suitable when the target variable exhibits a point mass at zero, a highly skewed distribution of the nonzero values and a strong spatial structure. In this paper, a SAE approach for dealing with such variables is suggested. We propose a two-part random effects SAE model that includes a correlation structure on the area random effects that appear in the two parts and incorporates a bivariate smooth function of the geographical coordinates of units. To account for the skewness of the distribution of the positive values of the response variable, a Gamma model is adopted. To fit the model, to get small area estimates and to evaluate their precision, a hierarchical Bayesian approach is used. The study is motivated by a real SAE problem. We focus on estimation of the per-farm average grape wine production in Tuscany, at subregional level, using the Farm Structure Survey data. Results from this real data application and those obtained by a model-based simulation experiment show a satisfactory performance of the suggested SAE approach.

20.
Ye, Lin, and Taylor (2008, Biometrics 64, 1238–1246) proposed a joint model for longitudinal measurements and time-to-event data in which the longitudinal measurements are modeled with a semiparametric mixed model to allow for the complex patterns in longitudinal biomarker data. They proposed a two-stage regression calibration approach that is simpler to implement than a joint modeling approach. In the first stage of their approach, the mixed model is fit without regard to the time-to-event data. In the second stage, the posterior expectations of an individual's random effects from the mixed model are included as covariates in a Cox model. Although Ye et al. (2008) acknowledged that their regression calibration approach may cause a bias due to the problem of informative dropout and measurement error, they argued that the bias is small relative to alternative methods. In this article, we show that this bias may be substantial. We show how to alleviate much of this bias with an alternative regression calibration approach that can be applied for both discrete and continuous time-to-event data. Through simulations, the proposed approach is shown to have substantially less bias than the regression calibration approach proposed by Ye et al. (2008). In agreement with the methodology proposed by Ye et al. (2008), an advantage of our proposed approach over joint modeling is that it can be implemented with standard statistical software and does not require complex estimation techniques.

