Similar Articles
20 similar articles retrieved (search time: 15 ms)
1.
Intrinsically disordered proteins (IDPs) are a recently recognized class of proteins that, under native conditions, cannot fold into a unique, well-defined spatial structure, yet still perform biological functions. Their discovery challenges the traditional "structure-function" paradigm. This review first summarizes experimental methods for identifying disordered proteins, prediction algorithms, and the relevant databases; it then introduces the structural features of disordered proteins (including primary structure, secondary structure, domain-level disorder, and allosteric effects) and their functional characteristics. The review then focuses on progress in studying disordered proteins from an evolutionary perspective, including the evolutionary mechanisms and rates by which disordered regions arise, and the important roles that the evolution of protein disorder plays in the evolution of protein function and the increase of biological complexity. Finally, prospects for pharmaceutical applications of disordered proteins are discussed. This review is intended to deepen understanding of the formation mechanisms, structural and functional characteristics, and potential clinical applications of disordered proteins.

2.
In the precision medicine era, (prespecified) subgroup analyses are an integral part of clinical trials. Incorporating multiple populations and hypotheses in the design and analysis plan, adaptive designs promise flexibility and efficiency in such trials. Adaptations include (unblinded) interim analyses (IAs) or blinded sample size reviews. An IA offers the possibility to select promising subgroups and reallocate sample size in further stages. Trials with these features are known as adaptive enrichment designs. Such complex designs comprise many nuisance parameters, such as the prevalences of the subgroups and the variances of the outcomes in the subgroups. Additionally, a number of design options, including the timepoint of the sample size review and the timepoint of the IA, have to be selected. Here, for normally distributed endpoints, we propose a strategy combining blinded sample size recalculation and adaptive enrichment at an IA; that is, at an early timepoint nuisance parameters are reestimated and the sample size is adjusted, while subgroup selection and enrichment are performed later. We discuss the implications of different scenarios concerning the variances as well as the timepoints of the blinded review and the IA, and investigate the design characteristics in simulations. The proposed method maintains the desired power if planning assumptions were inaccurate, and reduces the sample size and the variability of the final sample size when an enrichment is performed. Having two separate timepoints for the blinded sample size review and the IA improves the timing of the latter and increases the probability of correctly enriching a subgroup.
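The two ingredients combined in this abstract, a blinded variance reestimate and a sample size rule, can be sketched for a two-sample normal endpoint. This is a minimal illustration, not the authors' actual procedure: the standard normal-approximation sample size formula, with σ reestimated from pooled interim data ignoring treatment labels; the δ²/4 correction for the blinded one-sample variance assumes 1:1 allocation.

```python
from math import ceil
from statistics import NormalDist, variance

def n_per_group(sigma, delta, alpha=0.05, power=0.9):
    """Per-group sample size for a two-sample z-test with normal endpoint."""
    z = NormalDist()
    za = z.inv_cdf(1 - alpha / 2)
    zb = z.inv_cdf(power)
    return ceil(2 * ((za + zb) * sigma / delta) ** 2)

def blinded_recalculation(pooled_data, delta, alpha=0.05, power=0.9):
    """Reestimate sigma from blinded (unlabelled) interim data and recompute
    the required sample size. Under the alternative, the one-sample variance
    overstates sigma^2 by delta^2/4 (1:1 allocation), so subtract it."""
    s2 = variance(pooled_data)             # blinded one-sample variance
    s2_adj = max(s2 - delta ** 2 / 4, 1e-12)
    return n_per_group(s2_adj ** 0.5, delta, alpha, power)
```

For example, `n_per_group(1.0, 0.5)` gives the familiar 85 patients per group for a standardized effect of 0.5 at 90% power.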

3.
Englert S, Kieser M. Biometrics 2012, 68(3):886-892
Summary: Phase II trials in oncology are usually conducted as single-arm two-stage designs with binary endpoints. Currently available adaptive design methods are tailored to comparative studies with continuous test statistics. Direct transfer of these methods to discrete test statistics results in conservative procedures and, therefore, in a loss of power. We propose a method based on the conditional error function principle that directly accounts for the discreteness of the outcome. We show how the method can be used to construct new phase II designs that are more efficient than currently applied designs and that allow flexible mid-course design modifications. The proposed method is illustrated with a variety of frequently used phase II designs.

4.
Just as Saccharomyces cerevisiae itself provides a model for so many processes essential to eukaryotic life, we anticipate that the methods and the mindset that have moved yeast biological research "beyond the genome" provide a prototype for making similar progress in other organisms. In this review I describe the experimental processes, results and utility of the current large-scale experimental approaches that use genomic data to provide a functional analysis of the yeast genome.

5.
Many variables and their interactions can affect a biotechnological process. Testing a large number of variables and all their possible interactions is a cumbersome task and its cost can be prohibitive. Several screening strategies, with a relatively low number of experiments, can be used to find which variables have the largest impact on the process and estimate the magnitude of their effect. One approach for process screening is the use of experimental designs, among which fractional factorial and Plackett–Burman designs are frequent choices. Other screening strategies involve the use of artificial neural networks (ANNs). The advantage of ANNs is that they have fewer assumptions than experimental designs, but they render black-box models (i.e., little information can be extracted about the process mechanics). In this paper, we simulate a biotechnological process (fed-batch growth of baker's yeast) to analyze and compare the effect of random experimental errors of different magnitudes and statistical distributions on experimental designs and ANNs. Except for the situation in which the error has a normal distribution and the standard deviation is constant, it was not possible to determine a clear-cut rule for favoring one screening strategy over the other. Instead, we found that the data can be better analyzed using both strategies simultaneously.
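As a concrete illustration of the experimental-design side of this comparison, a two-level screening design can be generated from a Hadamard matrix; for run sizes that are powers of two, the Sylvester construction yields mutually orthogonal ±1 columns equivalent to a fractional factorial (Plackett–Burman designs for other run sizes need their own generators). A small sketch, with a hypothetical `main_effect` helper:

```python
def hadamard(n):
    """Sylvester construction of a Hadamard matrix; n must be a power of two."""
    assert n >= 1 and n & (n - 1) == 0
    H = [[1]]
    while len(H) < n:
        H = [row + row for row in H] + [row + [-x for x in row] for row in H]
    return H

def screening_design(n_runs):
    """Drop the all-ones column: each remaining column is a two-level (+1/-1)
    factor-setting column, and all columns are mutually orthogonal."""
    return [row[1:] for row in hadamard(n_runs)]

def main_effect(design, y, j):
    """Estimated main effect of factor j: mean response at +1 minus at -1."""
    n = len(design)
    return sum(design[i][j] * y[i] for i in range(n)) * 2 / n
```

With `screening_design(8)` one obtains 8 runs that can screen up to 7 factors; if the true response were `y = 5 + 3*x1`, the estimated main effect of factor 1 comes out as 6 (the +1 vs. -1 difference).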

6.
Experimental designs are defined by introducing an assignment matrix Z. Using block designs and double block designs as examples, it is shown that well-known designs can be obtained as special cases from Z, or from an operator otherwise defined on Z. So far we have not found an experimental design that could not be defined by our matrix Z. The definitions of the properties of experimental designs can be given independently of the model of the statistical analysis; this is shown for the property of balance of block designs.

7.
The continued emergence of new SARS-CoV-2 variants has accentuated the growing need for fast and reliable methods for the design of potentially neutralizing antibodies (Abs) to counter immune evasion by the virus. Here, we report on the de novo computational design of high-affinity Ab variable regions (Fv) through the recombination of VDJ genes targeting the most solvent-exposed hACE2-binding residues of the SARS-CoV-2 spike receptor binding domain (RBD) protein using the software tool OptMAVEn-2.0. Subsequently, we carried out computational affinity maturation of the designed variable regions through amino acid substitutions for improved binding with the target epitope. Immunogenicity of designs was restricted by preferring designs that match sequences from a 9-mer library of "human Abs" based on a human string content score. We generated 106 different antibody designs and reported in detail on the top five that trade off the greatest computational binding affinity for the RBD against human string content scores. We further describe computational evaluation of the top five designs produced by OptMAVEn-2.0 using a Rosetta-based approach. We used Rosetta SnugDock for local docking of the designs to evaluate their potential to bind the spike RBD and performed "forward folding" with DeepAb to assess their potential to fold into the designed structures. Ultimately, our results identified one designed Ab variable region, P1.D1, as a particularly promising candidate for experimental testing. This effort puts forth a computational workflow for the de novo design and evaluation of Abs that can quickly be adapted to target spike epitopes of emerging SARS-CoV-2 variants or other antigenic targets.

8.
The dual of incomplete block designs has been studied along with applications in genetical experiments. Partial diallel crosses (PDC) of type I have been constructed using balanced incomplete block (BIB) designs, partially balanced incomplete block (PBIB) designs, and their dual designs. A simplified analysis of PDC has been presented using the dual property of these designs. A list of optimal PDC having simple analysis is given.

9.
In the decade since their invention, spotted microarrays have been undergoing technical advances that have increased the utility, scope and precision of their ability to measure gene expression. At the same time, more researchers are taking advantage of the fundamentally quantitative nature of these tools with refined experimental designs and sophisticated statistical analyses. These new approaches utilise the power of microarrays to estimate differences in gene expression levels, rather than just categorising genes as up- or down-regulated, and allow the comparison of expression data across multiple samples. In this review, some of the technical aspects of spotted microarrays that can affect statistical inference are highlighted, and a discussion is provided of how several methods for estimating gene expression level across multiple samples deal with these challenges. The focus is on a Bayesian analysis method, BAGEL, which is easy to implement and produces easily interpreted results.

10.
In oncology, single‐arm two‐stage designs with binary endpoint are widely applied in phase II for the development of cytotoxic cancer therapies. Simon's optimal design with prefixed sample sizes in both stages minimizes the expected sample size under the null hypothesis and is one of the most popular designs. The search algorithms that are currently used to identify phase II designs showing prespecified characteristics are computationally intensive. For this reason, most authors impose restrictions on their search procedure. However, it remains unclear to what extent this approach influences the optimality of the resulting designs. This article describes an extension to fixed sample size phase II designs by allowing the sample size of stage two to depend on the number of responses observed in the first stage. Furthermore, we present a more efficient numerical algorithm that allows for an exhaustive search of designs. Comparisons between designs presented in the literature and the proposed optimal adaptive designs show that while the improvements are generally moderate, notable reductions in the average sample size can be achieved for specific parameter constellations when applying the new method and search strategy.
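The operating characteristics of such two-stage designs can be computed exactly from binomial probabilities. A sketch, using for illustration the parameters commonly quoted for Simon's optimal design at p0 = 0.10, p1 = 0.30 with one-sided α = 0.05 and 80% power (r1/n1 = 1/10, r/n = 5/29); these specific values are quoted from memory of the published tables, not from this article:

```python
from math import comb

def binom_pmf(k, n, p):
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

def simon_reject_prob(p, r1, n1, r, n):
    """Probability of declaring the drug promising (> r total responses)
    in a Simon two-stage design: the trial continues past stage 1 only
    if stage-1 responses exceed r1."""
    n2 = n - n1
    total = 0.0
    for x1 in range(r1 + 1, n1 + 1):
        tail2 = sum(binom_pmf(x2, n2, p)
                    for x2 in range(max(0, r - x1 + 1), n2 + 1))
        total += binom_pmf(x1, n1, p) * tail2
    return total

alpha = simon_reject_prob(0.10, 1, 10, 5, 29)   # type I error under p0
power = simon_reject_prob(0.30, 1, 10, 5, 29)   # power under p1
```

The same `binom_pmf` also gives the probability of early termination under the null, PET0 = P(X1 ≤ r1 | p0) ≈ 0.736, which drives the expected sample size that Simon's criterion minimizes.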

11.
Although there are several new designs for phase I cancer clinical trials, including the continual reassessment method and accelerated titration design, the traditional algorithm-based designs, like the '3 + 3' design, are still widely used because of their practical simplicity. In this paper, we study some key statistical properties of the traditional algorithm-based designs in a general framework and derive the exact formulae for the corresponding statistical quantities. These quantities are important for the investigator to gain insights regarding the design of the trial, and are (i) the probability of a dose being chosen as the maximum tolerated dose (MTD); (ii) the expected number of patients treated at each dose level; (iii) target toxicity level (i.e. the expected dose-limiting toxicity (DLT) incidences at the MTD); (iv) expected DLT incidences at each dose level and (v) expected overall DLT incidences in the trial. Real examples of clinical trials are given, and a computer program to do the calculation can be found at the authors' website (http://www2.umdnj.edu/~linyo).
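The paper derives these quantities exactly; a quick Monte Carlo sketch of one common '3 + 3' variant (escalate on 0/3 DLTs, expand to 6 on 1/3, declare the previous dose the MTD on ≥2 DLTs) reproduces quantities (i) and (ii) empirically. The escalation rule below is an assumed common variant, not the authors' exact formulation:

```python
import random

def simulate_3plus3(tox_probs, rng):
    """One trial under a common '3+3' rule; returns (MTD index, patients
    per dose). MTD index -1 means even the lowest dose was too toxic."""
    n_pat = [0] * len(tox_probs)
    d = 0
    while True:
        dlts = sum(rng.random() < tox_probs[d] for _ in range(3))
        n_pat[d] += 3
        if dlts == 1:                      # expand cohort to 6
            dlts += sum(rng.random() < tox_probs[d] for _ in range(3))
            n_pat[d] += 3
            if dlts > 1:                   # >1/6 DLTs: MTD is previous dose
                return d - 1, n_pat
        elif dlts >= 2:                    # >=2/3 DLTs: MTD is previous dose
            return d - 1, n_pat
        if d == len(tox_probs) - 1:        # tolerated the top dose
            return d, n_pat
        d += 1

def operating_chars(tox_probs, n_sim=20000, seed=1):
    """Monte Carlo estimates of MTD selection probabilities and the
    expected number of patients treated at each dose."""
    rng = random.Random(seed)
    mtd_counts = [0] * (len(tox_probs) + 1)   # last slot counts "no MTD" (-1)
    exp_n = [0.0] * len(tox_probs)
    for _ in range(n_sim):
        mtd, n_pat = simulate_3plus3(tox_probs, rng)
        mtd_counts[mtd] += 1
        for i, n in enumerate(n_pat):
            exp_n[i] += n / n_sim
    return [c / n_sim for c in mtd_counts], exp_n
```

For a hypothetical toxicity profile such as `[0.05, 0.15, 0.30, 0.50]`, the selection probabilities across doses (plus the "no MTD" outcome) sum to one, and the expected patient counts mirror quantity (ii) of the paper.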

12.
Introduction: Advances in mass spectrometry-based proteomic technologies are enhancing studies of viral pathogenesis. Identification and quantification of host and viral proteins and modifications in cells and extracellular fluids during infection provides useful information about pathogenesis, and will be critical for directing clinical interventions and diagnostics.

Areas covered: Herein we review and discuss a broad range of global proteomic studies conducted during viral infection, including those of cellular responses, protein modifications, virion packaging, and serum proteomics. We focus on viruses that impact human health and focus on experimental designs that reveal disease processes and surrogate markers.

Expert commentary: Global proteomics is an important component of systems-level studies that aim to define how the interaction of humans and viruses leads to disease. Viral-community resource centers and strategies from other fields (e.g., cancer) will facilitate data sharing and platform-integration for systems-level analyses, and should provide recommended standards and assays for experimental designs and validation.


13.
We propose drug screening designs based on a Bayesian decision-theoretic approach. The discussion is motivated by screening designs for phase II studies. The proposed screening designs allow consideration of multiple treatments simultaneously. In each period, new treatments can arise and currently considered treatments can be dropped. Once a treatment is removed from the phase II screening trial, a terminal decision is made about abandoning the treatment or recommending it for a future confirmatory phase III study. The decision about dropping treatments from the active set is a sequential stopping decision. We propose a solution based on decision boundaries in the space of marginal posterior moments for the unknown parameter of interest that relates to each treatment. We present a Monte Carlo simulation algorithm to implement the proposed approach. We provide an implementation of the proposed method as an easy-to-use R library available for public domain download (http://www.stat.rice.edu/~rusi/ or http://odin.mdacc.tmc.edu/~pm/).
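The idea of decision boundaries in the space of posterior moments can be illustrated with a toy beta-binomial version for a binary response rate. The paper's actual model and boundaries differ; the thresholds, treatment names, and z-style boundary below are hypothetical:

```python
def posterior_moments(successes, n, a=1.0, b=1.0):
    """Beta(a, b) prior + binomial likelihood -> Beta posterior moments."""
    a_post, b_post = a + successes, b + n - successes
    mean = a_post / (a_post + b_post)
    var = a_post * b_post / ((a_post + b_post) ** 2 * (a_post + b_post + 1))
    return mean, var ** 0.5

def screen(treatments, p0=0.2, z_drop=1.0, z_go=1.0):
    """Toy boundary in the (posterior mean, posterior sd) plane:
    abandon if mean + z*sd < p0, recommend for phase III if
    mean - z*sd > p0, otherwise keep the treatment in the active set."""
    decisions = {}
    for name, (x, n) in treatments.items():
        m, s = posterior_moments(x, n)
        if m + z_drop * s < p0:
            decisions[name] = "abandon"
        elif m - z_go * s > p0:
            decisions[name] = "phase III"
        else:
            decisions[name] = "continue"
    return decisions
```

With 1/30, 15/30, and 6/30 responses against a reference rate of 0.2, the three treatments land in the abandon, phase III, and continue regions respectively, mimicking the sequential stopping decision for the active set.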

14.
The ever‐growing portable electronics and electric vehicle markets heavily influence the technological revolution of lithium batteries (LBs) toward higher energy densities for longer standby times or driving range. Thick electrode designs can substantially improve the electrode active material loading by minimizing the inactive component ratio at the device level, providing a great platform for enhancing the overall energy density of LBs. However, extensive efforts are still needed to address the challenges that accompany the increase in electrode thickness, including, but not limited to, sluggish charge kinetics and electrode mechanical instability. In this review, the principles and the recent developments in the fabrication of thick electrodes are summarized, focusing on low‐tortuosity structural designs for rapid charge transport and integrated cell configurations for improved energy density, cell stability, and durability. Advanced thick electrode designs for application in emerging battery chemistries such as lithium metal electrodes, solid state electrolytes, and lithium–air batteries are also discussed with a perspective on their future opportunities and challenges. Finally, future directions for thick-electrode battery development and research are suggested.

15.
Roy A, Bhaumik DK, Aryal S, Gibbons RD. Biometrics 2007, 63(3):699-707
Summary: We consider the problem of sample size determination for three-level mixed-effects linear regression models for the analysis of clustered longitudinal data. Three-level designs are used in many areas, but in particular, multicenter randomized longitudinal clinical trials in medical or health-related research. In this case, level 1 represents measurement occasion, level 2 represents subject, and level 3 represents center. The model we consider involves random effects of the time trends at both the subject level and the center level. In the most common case, we have two random effects (constant and a single trend), at both subject and center levels. The approach presented here is general with respect to sampling proportions, number of groups, and attrition rates over time. In addition, we also develop a cost model, as an aid in selecting the most parsimonious of several possible competing models (i.e., different combinations of centers, subjects within centers, and measurement occasions). We derive sample size requirements (i.e., power characteristics) for a test of treatment-by-time interaction(s) for designs based on either subject-level or cluster-level randomization. The general methodology is illustrated using two characteristic examples.

16.
A widely used design principle for metabolic engineering of microorganisms aims to introduce interventions that enforce growth-coupled product synthesis such that the product of interest becomes a (mandatory) by-product of growth. However, different variants and partially contradicting notions of growth-coupled production (GCP) exist. Herein, we propose an ontology for the different degrees of GCP and clarify their relationships. Ordered by coupling degree, we distinguish four major classes: potentially, weakly, and directionally growth-coupled production (pGCP, wGCP, dGCP) as well as substrate-uptake coupled production (SUCP). We then extend the framework of Minimal Cut Sets (MCS), previously used to compute dGCP and SUCP strain designs, to allow inclusion of implicit optimality constraints, a feature required to compute pGCP and wGCP designs. This extension closes the gap between MCS-based and bilevel-based strain design approaches and enables computation (and comparison) of designs for all GCP classes within a single framework. By computing GCP strain designs for a range of products, we illustrate the hierarchical relationships between the different coupling degrees. We find that feasibility of coupling is not affected by the chosen GCP degree and that the strongest coupling (SUCP) often requires only one or two more interventions than wGCP and dGCP. Finally, we show that the principle of coupling can be generalized to couple product synthesis with other cellular functions than growth, for example, with net ATP formation. This work provides important theoretical results and algorithmic developments and a unified terminology for computational strain design based on GCP.

17.
Smart peptides are peptides that sense external stimuli and respond to them accordingly. Because they form through spontaneous self-assembly, smart peptides are also called self-assembling peptides. Their amino acid composition gives them good biocompatibility and biodegradability, and as building blocks they can be assembled into functional materials, showing broad application prospects for novel biomaterials. This review summarizes the properties, self-assembly mechanisms, and applications of smart peptides, with emphasis on their uses in bioenergy, biomedical engineering, and separation engineering, aiming to build a systematic understanding of smart peptides, uncover their application potential, and overcome development bottlenecks.

18.
Curve-free and model-based continual reassessment method designs
O'Quigley J. Biometrics 2002, 58(1):245-249
Gasparini and Eisele (2000, Biometrics 56, 609-615) present a development of the continual reassessment method of O'Quigley, Pepe, and Fisher (1990, Biometrics 46, 33-48). They call their development a curve-free method for Phase I clinical trials. However, unless we are dealing with informative prior information, the curve-free method coincides with the usual model-based continual reassessment method. Both methods are subject to arbitrary specification parameters, and we provide some discussion on this. Whatever choices are made for one method, there exist equivalent choices for the other method, where "equivalent" means that the operating characteristics (sequential dose allocation and final recommendation) are the same. The insightful development of Gasparini and Eisele provides clarification on some of the basic ideas behind the continual reassessment method, particularly when viewed from a Bayesian perspective. But their development does not lead to a new class of designs, and the comparative results in their article, indicating some preference for curve-free designs over model-based designs, simply reflect a more fortunate choice of arbitrary specification parameters. Other choices could equally well have inverted their conclusion. A correct conclusion should be one of operational equivalence. The story is different for the case of informative priors, a situation that is inherently much more difficult. We discuss this. We also mention the important idea of two-stage designs (Moller, 1995, Statistics in Medicine 14, 911-922; O'Quigley and Shen, 1996, Biometrics 52, 163-174), arguing, via a simple comparison with the results of Gasparini and Eisele (2000), that there is room for notable gains here. Two-stage designs also have the advantage of avoiding the issue of prior specification altogether.

19.
Summary: The precision of estimates of genetic variances and covariances obtained from multivariate selection experiments of various designs is discussed. The efficiencies of experimental designs are compared using criteria based on a confidence region of the estimated genetic parameters, with estimation using both responses and selection differentials and offspring-parent regression. A good selection criterion is shown to be to select individuals as parents using an index of the sums of squares and crossproducts of the phenotypic measurements. Formulae are given for the optimum selection proportion when the relative numbers of individuals in the parent and progeny generations are fixed or variable. Although the optimum depends on a priori knowledge of the genetic parameters to be estimated, the designs are very robust to poor estimates. For bivariate uncorrelated data, the variance of the estimated genetic parameters can be reduced by approximately 0.4 relative to designs of a more conventional nature when half of the individuals are selected on one trait and half on the other trait. There are larger reductions in variances if the traits are correlated.

20.
Gilmour SG. Biometrics 2006, 62(2):323-331
Many processes in the biological industries are studied using response surface methodology. The use of biological materials, however, means that run-to-run variation is typically much greater than that in many experiments in mechanical or chemical engineering and so the designs used require greater replication. The data analysis which is performed may involve some variable selection, as well as fitting polynomial response surface models. This implies that designs should allow the parameters of the model to be estimated nearly orthogonally. A class of three-level response surface designs is introduced which allows all except the quadratic parameters to be estimated orthogonally, as well as having a number of other useful properties. These subset designs are obtained by using two-level factorial designs in subsets of the factors, with the other factors being held at their middle level. This allows their properties to be easily explored. Replacing some of the two-level designs with fractional replicates broadens the class of useful designs, especially with five or more factors, and sometimes incomplete subsets can be used. It is very simple to include a few two- and four-level factors in these designs by excluding subsets with these factors at the middle level. Subset designs can be easily modified to include factors with five or more levels by allowing a different pair of levels to be used in different subsets.
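The construction described, two-level factorials in subsets of the factors with the remaining factors held at their middle level, is easy to reproduce. A small sketch with three factors and subsets of size two shows the near-orthogonality property: linear effects are mutually orthogonal and orthogonal to quadratic terms, while the quadratic terms are not orthogonal to each other.

```python
from itertools import combinations, product

def subset_design(n_factors, subset_size):
    """Union of two-level full factorials run in every subset of
    `subset_size` factors, with the remaining factors held at level 0."""
    runs = []
    for subset in combinations(range(n_factors), subset_size):
        for levels in product((-1, 1), repeat=subset_size):
            run = [0] * n_factors
            for idx, lev in zip(subset, levels):
                run[idx] = lev
            runs.append(run)
    return runs
```

For three factors and subsets of size two, this gives 3 x 4 = 12 runs; checking column inner products confirms that the linear columns are orthogonal to each other and to the squared columns, whereas the squared columns overlap, matching the "all except the quadratic parameters" claim of the abstract.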


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.) | 京ICP备09084417号