Similar Articles
A total of 20 similar articles were retrieved.
1.
Pharmaceutical manufacturing processes consist of a series of stages (e.g., reaction, workup, isolation) to generate the active pharmaceutical ingredient (API). Outputs at intermediate stages (in-process controls) and the API need to be controlled within acceptance criteria to assure final drug product quality. In this paper, two methods based on tolerance intervals for deriving such acceptance criteria are evaluated. The first method is serial worst case (SWC), an industry risk-minimization strategy, wherein input materials and process parameters of a stage are fixed at their worst-case settings to calculate the maximum level expected from the stage. This maximum output then becomes the input to the next stage, wherein process parameters are again fixed at their worst-case settings. The procedure is repeated serially throughout the process until the final stage. The limits calculated using SWC can be artificially high and may not reflect actual process performance. The second method is variation transmission (VT) using an autoregressive model, wherein the variation transmitted up to a stage is estimated by accounting for the recursive structure of the errors at each stage. Computer simulations at varying extents of variation transmission and process-stage variability are performed. For the scenarios tested, the VT method is demonstrated to better maintain the simulated confidence level and to estimate the true proportion parameter more precisely than SWC. Real-data examples are also presented that corroborate the findings from the simulation. Overall, VT is recommended for setting acceptance criteria in a multi-stage pharmaceutical manufacturing process.
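A minimal numerical sketch of the contrast between the two approaches is given below for a hypothetical two-stage impurity carry-over. The process values, carry-over fraction, and the one-sided normal tolerance factor are illustrative assumptions, not the paper's exact formulas or data.

```python
# Sketch contrasting a serial-worst-case (SWC) limit with a variation-transmission
# (VT) style limit for a hypothetical two-stage impurity carry-over.
import numpy as np
from scipy.stats import nct, norm

def k_one_sided(n, coverage=0.95, confidence=0.95):
    """One-sided normal tolerance factor via the noncentral-t construction."""
    return nct.ppf(confidence, df=n - 1, nc=norm.ppf(coverage) * np.sqrt(n)) / np.sqrt(n)

rng = np.random.default_rng(0)
n = 50
stage1 = rng.normal(1.0, 0.10, n)        # impurity level after stage 1
carryover = 0.4                          # transmitted fraction (known in this simulation)
stage2 = carryover * stage1 + rng.normal(0.2, 0.05, n)

# SWC: take the upper tolerance limit of stage 1 as a fixed worst-case input, then
# stack the stage-2 increment's own upper tolerance limit on top of it.
utl1 = stage1.mean() + k_one_sided(n) * stage1.std(ddof=1)
increment = stage2 - carryover * stage1
swc_limit = carryover * utl1 + increment.mean() + k_one_sided(n) * increment.std(ddof=1)

# VT: fit the autoregressive relation and set the limit from the variation actually
# transmitted to stage 2.
coef = np.polyfit(stage1, stage2, 1)     # slope, intercept
resid = stage2 - np.polyval(coef, stage1)
var2 = coef[0] ** 2 * stage1.var(ddof=1) + resid.var(ddof=1)
vt_limit = stage2.mean() + k_one_sided(n) * np.sqrt(var2)

print(f"SWC limit: {swc_limit:.3f}   VT limit: {vt_limit:.3f}")
```

In this toy setting the SWC limit stacks two tolerance limits and therefore sits above the VT limit, mirroring the abstract's point that worst-case chaining can be artificially conservative.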

2.
The concept of "design space" has been proposed in the ICH Q8 guideline and is gaining momentum in its application in the biotech industry. It has been defined as "the multidimensional combination and interaction of input variables (e.g., material attributes) and process parameters that have been demonstrated to provide assurance of quality." This paper presents a stepwise approach for defining the process design space for a biologic product. A case study involving P. pastoris fermentation is presented to illustrate the approach. First, risk analysis via Failure Modes and Effects Analysis (FMEA) is performed to identify parameters for process characterization. Second, small-scale models are created and qualified prior to their use in the experimental studies. Third, studies are designed using Design of Experiments (DOE) so that the data are amenable to use in defining the process design space. Fourth, the studies are executed and the results analyzed to decide on the criticality of the parameters and to establish the process design space. For the application under consideration, it is shown that the fermentation unit operation is very robust, with a wide design space and no critical operating parameters. The approach presented here is not specific to the illustrated case study; it can be extended to other biotech unit operations and processes that can be scaled down and characterized at small scale.
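As a rough illustration of the DOE step, the sketch below builds a coded two-level full factorial with center points for three hypothetical fermentation parameters and screens main effects by least squares. The factor names, ranges, and placeholder responses are assumptions for illustration; they are not the parameters or data of the case study.

```python
# Coded two-level full factorial with center points, plus a least-squares screen
# of main effects on a placeholder response.
import itertools
import numpy as np

factors = {"temperature_C": (28.0, 32.0), "pH": (5.5, 6.5), "induction_h": (60.0, 84.0)}

corners = list(itertools.product([-1.0, 1.0], repeat=len(factors)))
design = np.array(corners + [(0.0, 0.0, 0.0)] * 3)          # 8 corners + 3 center points

# Map coded levels (-1, 0, +1) to engineering units for execution at small scale.
lows = np.array([lo for lo, hi in factors.values()])
highs = np.array([hi for lo, hi in factors.values()])
runs = lows + (design + 1.0) / 2.0 * (highs - lows)
print(runs)

# After the runs are executed, main effects can be screened on the coded design,
# e.g. titer ~ intercept + sum(beta_i * x_i).
titer = np.random.default_rng(1).normal(1.0, 0.05, len(design))    # placeholder responses
X = np.column_stack([np.ones(len(design)), design])
beta, *_ = np.linalg.lstsq(X, titer, rcond=None)
print(dict(zip(["intercept", *factors], np.round(beta, 3))))
```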

3.
IMGT, the international ImMunoGeneTics information system® (http://imgt.cines.fr), is a high-quality integrated information system specializing in immunoglobulins (IG), T cell receptors (TR) and major histocompatibility complex (MHC) of human and other vertebrates. IMGT comprises IMGT/LIGM-DB, the comprehensive database of IG and TR sequences from human and other vertebrates (76,846 sequences in September 2003). In order to define the IMGT criteria necessary for standardized statistical analyses, the sequences of the IG variable regions (V-REGIONs) from productively rearranged human IG heavy (IGH) and IG light kappa (IGK) and lambda (IGL) chains were extracted from IMGT/LIGM-DB. The framework amino acid positions of 2474 V-REGIONs (1360 IGHV, 585 IGKV, 529 IGLV) were numbered according to the IMGT unique numbering. Two statistical methods (correspondence analysis and hierarchical classification) were used to analyze the 237 framework positions (80 for IGHV, 79 for IGKV, 78 for IGLV) for three properties (hydropathy, volume and chemical characteristics) of the 20 common amino acids. Results of the analyses are shown as standardized two-dimensional representations, designated IMGT Colliers de Perles statistical profiles. They provide a characterization of the amino acid properties at each framework position of the expressed IG V-REGIONs, and a visualization of the resemblances and differences between heavy and light, and between kappa and lambda, sequences. The standardized criteria defined in this paper, amino acid positions and property classes, will be useful for studying mutations and allele polymorphisms, for establishing correlations between amino acids in IG and TR protein three-dimensional structures, and for extracting new knowledge from V-like domains of chains other than IG and TR belonging to the immunoglobulin superfamily.

4.
A general mathematical framework is proposed in this work for scheduling of a multiproduct and multipurpose facility involved in the manufacture of biotech products. The specific problem involves several batch operations occurring in multiple units, with fixed processing times, an unlimited storage policy, transition times, shared units, and deterministic, fixed data over the given time horizon. The different batch operations are modeled using a state-task network representation. Two mathematical formulations are proposed, based on discrete- and continuous-time representations, each leading to a mixed-integer linear programming model that is solved using the General Algebraic Modeling System software. A case study based on a real facility is presented to illustrate the potential and applicability of the proposed models. The continuous-time model requires fewer events and has a smaller problem size than the discrete-time model. © 2014 American Institute of Chemical Engineers Biotechnol. Prog., 30:1221–1230, 2014

5.
Perspectives on setting success criteria for wetland restoration
The task of determining the success of wetland restoration has long been challenging and sometimes contentious because success is an imprecise term that means different things in different situations and to different people. Compliance success is determined by evaluating compliance with the terms of an agreement, e.g. a contract or permit, whereas functional success is determined by evaluating whether the ecological functions of the system have been restored. Compliance and functional success have historically focused on the individual project (the site being restored); we are only beginning to consider another important factor, the success of restoration at the landscape scale. Landscape success is a measure of how restoration (or management, in general) has contributed to the ecological integrity of the region or landscape and to achievement of goals such as the maintenance of biodiversity. The utility of all definitions of success is ultimately constrained by the current status of the science of restoration ecology and by our ability to use that information to make sound management decisions and to establish measurable success criteria. Measurements of vegetation are most commonly used in evaluations of restoration projects, with less frequent analysis of soils, fauna, and hydrologic characteristics. Although particular characteristics of projects, such as vegetative cover and production, can resemble those in similar naturally occurring wetlands, overall functional equivalency has not been demonstrated. However, ongoing research is providing information on what can and cannot be accomplished, valuable insights on how to correct mistakes, and new approaches to defining success. The challenge is how to recognize and deal with the uncertainty, given that projects are ecologically young and that our knowledge of the process of restoration is evolving. One way to deal with the uncertainty is to use scientific principles of hypothesis testing and model building in an adaptive management framework. In this way, options can be systematically evaluated and needs for corrective actions identified when a project is not progressing toward goals. By taking such an approach we can improve our ability to reliably restore wetlands while contributing to our understanding of the basic structure and function of ecosystems.

6.
周继华, 来利明, 郑元润. 生态学报, 2015, 35(19): 6435-6438
The accuracy of simulation results is key to judging whether an ecological model is successful, yet statistical methods for assessing how well model simulations agree with observations have rarely been reported. Based on the statistical test of whether two linear regression equations can be merged into a single equation, a statistical test is proposed that examines whether the intercept and slope of the linear regression of observed on simulated values equal those of the 1:1 line, thereby judging, at a given significance level, the agreement between the simulated and observed values of an ecological model. Tests with data show that this method provides a sound solution to the problem of judging the accuracy of ecological model simulations.
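A small sketch of the test described above is given here: regress observed values on simulated values and jointly test whether the intercept is 0 and the slope is 1 (the 1:1 line). The data are synthetic, and statsmodels is used for convenience rather than being prescribed by the paper.

```python
# Joint test of intercept = 0 and slope = 1 for an observed-vs-simulated regression.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
simulated = rng.uniform(0.0, 10.0, 40)            # model output
observed = simulated + rng.normal(0.0, 0.5, 40)   # matching field observations

X = sm.add_constant(simulated)                    # columns: intercept, slope
fit = sm.OLS(observed, X).fit()

# Joint F-test of H0: intercept = 0 and slope = 1.
R = np.eye(2)
q = np.array([0.0, 1.0])
test = fit.f_test((R, q))
print(fit.params, float(test.fvalue), float(test.pvalue))
# A large p-value means the observed-vs-simulated regression cannot be distinguished
# from the 1:1 line, i.e. the model's simulations agree with the observations.
```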

7.
MABS-AUSTIN, 2013, 5(3): 451-455
Quality by design (QbD) is an innovative approach to drug development that has started to be implemented into the regulatory framework, but currently mainly for chemical drugs. The recent marketing authorization in the European Union of the first monoclonal antibody developed using extensive QbD concepts paves the way for further regulatory approvals of complex products employing this cutting-edge technological concept. In this paper, we report and comment on insights and lessons learnt from the non-public discussions in the European Medicines Agency's Biologicals Working Party and Committee for Medicinal Products for Human Use on the key issues during evaluation related to the implementation of an extensive QbD approach for biotechnology-derived medicinal products. Sharing these insights could prove useful for future developments in QbD for biotech products in general and monoclonal antibodies in particular.

8.
This paper presents an algorithm for determining the content of oil products in remediated soil that is safe for plants and microorganisms. The algorithm includes laboratory modeling of the remediation of soil samples that contain different amounts of oil products, determination of the biological parameters that provide an integral characterization of the soil state in these samples, and analysis of the results on the basis of odds-ratio statistics, which allows one to determine the oil-product content at which the state of an oil-contaminated soil no longer differs significantly from that of a control soil.
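A toy sketch of such an odds-ratio comparison is shown below, assuming the biological parameters are summarized as counts of indicators falling outside the control range. The counts, contamination levels, and the 0.05 cut-off are invented, and scipy's Fisher exact test stands in for the paper's odds-ratio statistics.

```python
# Odds-ratio style screen: compare out-of-range indicator counts at each residual oil
# level against the control soil and report where the difference stops being significant.
from scipy.stats import fisher_exact

n_indicators = 20
control_out = 2                                      # out-of-range indicators in control soil
out_by_level = {5.0: 15, 2.5: 10, 1.0: 5, 0.5: 3}    # g oil / kg soil -> out-of-range count

for level in sorted(out_by_level):
    k = out_by_level[level]
    table = [[k, n_indicators - k],
             [control_out, n_indicators - control_out]]
    odds_ratio, p_value = fisher_exact(table)
    verdict = "no significant difference" if p_value > 0.05 else "differs from control"
    print(f"{level:4.1f} g/kg: OR = {odds_ratio:5.2f}, p = {p_value:.3f} -> {verdict}")
```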

9.
10.
MOTIVATION: With the advent of microarray chip technology, large data sets are emerging containing the simultaneous expression levels of thousands of genes at various time points during a biological process. Biologists are attempting to group genes based on the temporal pattern of their expression levels. While the use of hierarchical clustering (UPGMA) with correlation 'distance' has been the most common in microarray studies, there are many more choices of clustering algorithms in the pattern recognition and statistics literature. At the moment there do not seem to be any clear-cut guidelines regarding the choice of a clustering algorithm to be used for grouping genes based on their expression profiles. RESULTS: In this paper, we consider six clustering algorithms (of various flavors!) and evaluate their performances on a well-known publicly available microarray data set on sporulation of budding yeast and on two simulated data sets. Among other things, we formulate three reasonable validation strategies that can be used with any clustering algorithm when temporal observations or replications are present. We evaluate each of these six clustering methods with these validation measures. While the 'best' method depends on the exact validation strategy and the number of clusters to be used, overall Diana appears to be a solid performer. Interestingly, the performances of correlation-based hierarchical clustering and model-based clustering (another method that has been advocated by a number of researchers) appear to be at opposite extremes, depending on what validation measure one employs. Next it is shown that the group means produced by Diana are the closest and those produced by UPGMA are the farthest from a model profile based on a set of hand-picked genes. AVAILABILITY: S+ codes for the partial least squares based clustering are available from the authors upon request. All other clustering methods considered have S+ implementations in the library MASS. S+ codes for calculating the validation measures are available from the authors upon request. The sporulation data set is publicly available at http://cmgm.stanford.edu/pbrown/sporulation
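For concreteness, a small sketch of the correlation-distance UPGMA clustering discussed above is given here using scipy. Diana (divisive analysis) has no scipy implementation (it lives in R's `cluster` package), and a random expression matrix stands in for the sporulation data.

```python
# Correlation-distance UPGMA clustering of gene expression profiles.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

rng = np.random.default_rng(3)
expression = rng.normal(size=(100, 7))           # 100 genes x 7 time points (placeholder)

dist = pdist(expression, metric="correlation")   # 1 - Pearson correlation of profiles
tree = linkage(dist, method="average")           # average linkage = UPGMA
labels = fcluster(tree, t=6, criterion="maxclust")
print(np.bincount(labels)[1:])                   # sizes of the six clusters
```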

11.
Selected reaction monitoring (SRM) is a targeted mass spectrometric method that is increasingly used in proteomics for the detection and quantification of sets of preselected proteins at high sensitivity, reproducibility and accuracy. Currently, data from SRM measurements are mostly evaluated subjectively by manual inspection on the basis of ad hoc criteria, precluding the consistent analysis of different data sets and an objective assessment of their error rates. Here we present mProphet, a fully automated system that computes accurate error rates for the identification of targeted peptides in SRM data sets and maximizes specificity and sensitivity by combining relevant features in the data into a statistical model.

12.
Multipoint (MP) linkage analysis represents a valuable tool for whole-genome studies but suffers from the disadvantage that its probability distribution is unknown and varies as a function of marker information and density, genetic model, number and structure of pedigrees, and the affection status distribution [Xing and Elston: Genet Epidemiol 2006;30:447-458; Hodge et al.: Genet Epidemiol 2008;32:800-815]. This implies that the MP significance criterion can differ for each marker and each dataset, and this fact makes planning and evaluation of MP linkage studies difficult. One way to circumvent this difficulty is to use simulations or permutation testing. Another approach is to use an alternative statistical paradigm to assess the statistical evidence for linkage, one that does not require computation of a p value. Here we show how to use the evidential statistical paradigm for planning, conducting, and interpreting MP linkage studies when the disease model is known (lod analysis) or unknown (mod analysis). As a key feature, the evidential paradigm decouples uncertainty (i.e. error probabilities) from statistical evidence. In the planning stage, the user calculates error probabilities, as functions of one's design choices (sample size, choice of alternative hypothesis, choice of likelihood ratio (LR) criterion k) in order to ensure a reliable study design. In the data analysis stage one no longer pays attention to those error probabilities. In this stage, one calculates the LR for two simple hypotheses (i.e. trait locus is unlinked vs. trait locus is located at a particular position) as a function of the parameter of interest (position). The LR directly measures the strength of evidence for linkage in a given data set and remains completely divorced from the error probabilities calculated in the planning stage. An important consequence of this procedure is that one can use the same criterion k for all analyses. This contrasts with the situation described above, in which the value one uses to conclude significance may differ for each marker and each dataset in order to accommodate a fixed test size, α. In this study we accomplish two goals that lead to a general algorithm for conducting evidential MP linkage studies. (1) We provide two theoretical results that translate into guidelines for investigators conducting evidential MP linkage: (a) Comparing mods to lods, error rates (including probabilities of weak evidence) are generally higher for mods when the null hypothesis is true, but lower for mods in the presence of true linkage. Royall [J Am Stat Assoc 2000;95:760-780] has shown that errors based on lods are bounded and generally small. Therefore when the true disease model is unknown and one chooses to use mods, one needs to control misleading evidence rates only under the null hypothesis; (b) for any given pair of contiguous marker loci, error rates under the null are greatest at the midpoint between the markers spaced furthest apart, which provides an obvious simple alternative hypothesis to specify for planning MP linkage studies. (2) We demonstrate through extensive simulation that this evidential approach can yield low error rates under the null and alternative hypotheses for both lods and mods, despite the fact that mod scores are not true LRs. Using these results we provide a coherent approach to implement a MP linkage study using the evidential paradigm.
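The likelihood-ratio logic can be illustrated with a deliberately simplified single-marker toy example. Real MP linkage likelihoods are computed over pedigrees and marker maps; the counts, theta, and criterion k below are invented for illustration only.

```python
# Toy evidential LR for one phase-known marker: "linked at a specified recombination
# fraction" versus "unlinked (theta = 0.5)", judged against a pre-chosen criterion k.
import math
from scipy.stats import binom

recombinants, meioses = 4, 20
theta_alt = 0.1        # simple alternative: trait locus at a particular position
k = 32                 # LR criterion fixed at the planning stage

lr = binom.pmf(recombinants, meioses, theta_alt) / binom.pmf(recombinants, meioses, 0.5)
print(f"LR = {lr:.1f}, lod = {math.log10(lr):.2f}, evidence >= k: {lr >= k}")
# Under the null, Royall's universal bound gives P(LR >= k) <= 1/k, which is why the
# same criterion k can be reused across markers and data sets.
```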

13.
Chen L, Storey JD. Genetics, 2006, 173(4): 2371-2381
Linkage analysis involves performing significance tests at many loci located throughout the genome. Traditional criteria for declaring a linkage statistically significant have been formulated with the goal of controlling the rate at which any single false positive occurs, called the genomewise error rate (GWER). As complex traits have become the focus of linkage analysis, it is increasingly common to expect that a number of loci are truly linked to the trait. This is especially true in mapping quantitative trait loci (QTL), where sometimes dozens of QTL may exist. Therefore, alternatives to the strict goal of preventing any single false positive have recently been explored, such as the false discovery rate (FDR) criterion. Here, we characterize some of the challenges that arise when defining relaxed significance criteria that allow for at least one false positive linkage to occur. In particular, we show that the FDR suffers from several problems when applied to linkage analysis of a single trait. We therefore conclude that the general applicability of FDR for declaring significant linkages in the analysis of a single trait is dubious. Instead, we propose a significance criterion that is more relaxed than the traditional GWER, but does not appear to suffer from the problems of the FDR. A generalized version of the GWER is proposed, called GWERk, that allows one to provide a more liberal balance between true positives and false positives at no additional cost in computation or assumptions.
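One common way to operationalize a criterion like GWERk empirically is a permutation scheme such as the sketch below: under each permutation of the phenotype, record the (k+1)-th largest linkage statistic across the genome, and take the (1 - alpha) quantile of those values as a threshold controlling the probability of more than k false positives. The per-locus statistic and simulated genotypes are placeholders, not the paper's procedure.

```python
# Permutation estimate of a GWER_k-style genome-wide threshold.
import numpy as np

def gwer_k_threshold(stat_fn, phenotype, genotypes, k=1, alpha=0.05, n_perm=500, seed=0):
    rng = np.random.default_rng(seed)
    kth_largest = np.empty(n_perm)
    for b in range(n_perm):
        permuted = rng.permutation(phenotype)        # breaks any true linkage
        stats = stat_fn(permuted, genotypes)         # one statistic per locus
        kth_largest[b] = np.sort(stats)[-(k + 1)]
    return np.quantile(kth_largest, 1.0 - alpha)

def toy_stat(y, G):
    """Squared correlation between the trait and each marker (toy linkage statistic)."""
    Gc = G - G.mean(axis=0)
    yc = y - y.mean()
    r = Gc.T @ yc / (np.linalg.norm(Gc, axis=0) * np.linalg.norm(yc))
    return r ** 2

rng = np.random.default_rng(4)
G = rng.integers(0, 3, size=(200, 500)).astype(float)    # 200 individuals x 500 loci
y = rng.normal(size=200)
print(gwer_k_threshold(toy_stat, y, G, k=1))
```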

14.
15.
16.
On-line monitoring of penicillin cultivation processes is crucial to the safe production of high-quality products. In the past, multiway principal component analysis (MPCA), a multivariate projection method, has been widely used to monitor batch and fed-batch processes. However, when MPCA is used for on-line batch monitoring, the future behavior of each new batch must be inferred up to the end of the batch operation at each time point, and the batch lengths must be equalized. This represents a major shortcoming because predicting the future observations without considering the dynamic relationships may distort the information in the data, leading to false alarms. In this paper, a new statistical batch monitoring approach based on variable-wise unfolding and time-varying score covariance structures is proposed in order to overcome the drawbacks of conventional MPCA and obtain better monitoring performance. The proposed method does not require prediction of the future values, while the dynamic relations in the data are preserved by using time-varying score covariance structures, and it can be used to monitor batch processes in which the batch length varies. The proposed method was used to detect and identify faults in the fed-batch penicillin cultivation process for four different fault scenarios. The simulation results clearly demonstrate the power and advantages of the proposed method in comparison to MPCA.
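A simplified sketch of variable-wise unfolding with a time-varying score covariance is shown below, in the spirit of the approach above. It omits control limits and fault-contribution analysis, and random numbers stand in for process trajectories.

```python
# Variable-wise unfolding of a 3-way batch array with per-time-point score covariance,
# then a Hotelling T^2 profile for a new batch.
import numpy as np

rng = np.random.default_rng(5)
I, J, K, A = 30, 8, 100, 3                  # batches, variables, time points, PCs
X = rng.normal(size=(I, J, K))              # reference (normal-operation) batches

# Variable-wise unfolding: stack time along the first axis -> (I*K) x J.
Xu = X.transpose(0, 2, 1).reshape(I * K, J)
mu, sd = Xu.mean(axis=0), Xu.std(axis=0)
Z = (Xu - mu) / sd

# PCA loadings from the unfolded reference data.
_, _, Vt = np.linalg.svd(Z, full_matrices=False)
P = Vt[:A].T                                # J x A loading matrix
scores = (Z @ P).reshape(I, K, A)

# Time-varying score covariance: one covariance matrix per time point.
S_k = np.array([np.cov(scores[:, k, :], rowvar=False) for k in range(K)])

def t2_at_time(x_j, k):
    """Hotelling T^2 of a new J-variable observation at time point k."""
    t = ((x_j - mu) / sd) @ P
    return float(t @ np.linalg.solve(S_k[k], t))

new_batch = rng.normal(size=(J, K)) + 0.5   # a deliberately shifted batch
t2_profile = [t2_at_time(new_batch[:, k], k) for k in range(K)]
print(max(t2_profile))                      # large values flag a potential fault
```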

17.
18.
We describe biological and experimental factors that induce variability in reporter ion peak areas obtained from iTRAQ experiments. We demonstrate how these factors can be incorporated into a statistical model for use in evaluating differential protein expression and highlight the benefits of using analysis of variance to quantify fold change. We demonstrate the model's utility based on an analysis of iTRAQ data derived from a spike-in study.
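A minimal sketch of an ANOVA-style fold-change estimate for a single protein is given below: model log reporter-ion areas with a condition effect plus a run (blocking) effect and read the fold change off the condition coefficient. The column names, run structure, and simulated areas are illustrative, not the paper's model or data.

```python
# ANOVA-style fold-change estimate for one protein from log reporter-ion areas.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
df = pd.DataFrame({
    "log_area": np.concatenate([rng.normal(10.0, 0.2, 12), rng.normal(10.6, 0.2, 12)]),
    "condition": ["control"] * 12 + ["treated"] * 12,
    "run": (["run1"] * 6 + ["run2"] * 6) * 2,
})

fit = smf.ols("log_area ~ C(condition) + C(run)", data=df).fit()
log_fc = fit.params["C(condition)[T.treated]"]
p_value = fit.pvalues["C(condition)[T.treated]"]
print(f"fold change = {np.exp(log_fc):.2f}, p = {p_value:.2g}")
```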

19.
The objective of this study was to validate the MRI-based joint contact modeling methodology in the radiocarpal joints by comparison of model results with invasive specimen-specific radiocarpal contact measurements from four cadaver experiments. We used a single validation criterion for multiple outcome measures to characterize the utility and overall validity of the modeling approach. For each experiment, a Pressurex film and a Tekscan sensor were sequentially placed into the radiocarpal joints during simulated grasp. Computer models were constructed based on MRI visualization of the cadaver specimens without load. Images were also acquired during the loaded configuration used with the direct experimental measurements. Geometric surface models of the radius, scaphoid and lunate (including cartilage) were constructed from the images acquired without the load. The carpal bone motions from the unloaded state to the loaded state were determined using a series of 3D image registrations. Cartilage thickness was assumed uniform at 1.0 mm with an effective compressive modulus of 4 MPa. Validation was based on experimental versus model contact area, contact force, average contact pressure and peak contact pressure for the radioscaphoid and radiolunate articulations. Contact area was also measured directly from images acquired under load and compared to the experimental and model data. Qualitatively, there was good correspondence between the MRI-based model data and experimental data, with consistent relative size, shape and location of radioscaphoid and radiolunate contact regions. Quantitative data from the model generally compared well with the experimental data for all specimens. Contact area from the MRI-based model was very similar to the contact area measured directly from the images. For all outcome measures except average and peak pressures, at least two specimen models met the validation criteria with respect to experimental measurements for both articulations. Only the model for one specimen met the validation criteria for average and peak pressure of both articulations; however, the experimental measures for peak pressure also exhibited high variability. MRI-based modeling can reliably be used for evaluating contact area and contact force with confidence similar to that of currently available experimental techniques. Average contact pressure and peak contact pressure were more variable across all measurement techniques, and these measures from MRI-based modeling should be used with some caution.

20.

Introduction

Different normalization methods are available for urinary data. However, it is unclear which method performs best in minimizing error variance for a given data set, as no generally applicable empirical criteria have been established so far.

Objectives

The main aim of this study was to develop an applicable and formally correct algorithm to decide on the normalization method without using phenotypic information.

Methods

We proved mathematically, for two classical measurement error models, that the optimal normalization method generates the highest correlation between the normalized urinary metabolite concentrations and the corresponding blood concentrations or, respectively, the raw urinary concentrations. We then applied the two criteria to the 1H-NMR-measured urinary metabolomic data from the Study of Health in Pomerania (SHIP-0; n = 4068) under different normalization approaches and compared the results with in silico experiments to explore the effects of inflated error variance in the dilution estimation.

Results

In SHIP-0, we demonstrated consistently that probabilistic quotient normalization based on aligned spectra outperforms all other tested normalization methods. Creatinine normalization performed worst, while for unaligned data integral normalization seemed to be the most reasonable choice. The simulated and the actual data were in line with the theoretical modeling, underlining the general validity of the proposed criteria.

Conclusions

The problem of choosing the best normalization procedure for a given data set can be solved empirically. Thus, we recommend applying different normalization procedures to the data and comparing their performance via the statistical methodology explicated in this work. On the basis of classical measurement error models, the proposed algorithm will find the optimal normalization method.
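A sketch of the empirical selection rule is shown below: apply candidate normalizations and keep the one whose normalized values correlate best with a reference concentration. Simulated concentrations and dilutions stand in for the SHIP data, and only integral and probabilistic quotient normalization (PQN) are compared; creatinine normalization and spectral alignment are omitted.

```python
# Compare two urine normalizations by correlating normalized values with a reference.
import numpy as np

rng = np.random.default_rng(7)
n_samples, n_metab = 200, 30
true_conc = rng.lognormal(mean=0.0, sigma=0.5, size=(n_samples, n_metab))   # reference ("blood-like")
dilution = rng.lognormal(mean=0.0, sigma=0.6, size=(n_samples, 1))
urine = true_conc * dilution                       # observed, dilution-distorted intensities

def integral_norm(X):
    return X / X.sum(axis=1, keepdims=True)

def pqn(X):
    ref = np.median(X, axis=0)                     # reference spectrum
    quotients = X / ref
    return X / np.median(quotients, axis=1, keepdims=True)

def criterion(normalized, reference, metab=0):
    """Correlation of one metabolite's normalized values with its reference values."""
    return np.corrcoef(normalized[:, metab], reference[:, metab])[0, 1]

for name, method in [("integral", integral_norm), ("PQN", pqn)]:
    print(name, round(criterion(method(urine), true_conc), 3))
```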
