Similar Articles
20 similar articles found.
1.
Risk is by no means a simple concept. Natural variability and definitional problems with the concept of probability complicate the measurement and use of risk as an analytical tool. Variability requires that risk assessment methods separate natural from total risk when attempting to estimate anthropogenic risk. Failure to do so results in the overestimation of anthropogenic risk and the eventual loss of credibility for risk assessment methodologies. The common frequentist approach to probability is consistent only with a modelling approach to risk assessment. When combined with its ability to account for natural variability, incorporate laboratory-assay data and offer complete statistical and experimental control, modelling is a promising approach to risk assessment. Modelling, however, is not without its drawbacks. Initialization bias can result in the over- or under-estimation of both natural and anthropogenic risk. Furthermore, model estimates are time dependent. The convergence of natural and anthropogenic risk poses problems for modelling-based risk assessment and requires clear statements as to the importance of the time dimension in risk assessment. Taken together, the drawbacks of modelling-based risk assessment argue that risk should never be stated as a single scalar quantity. Instead, modelling-based risk assessment should provide estimates of the complete range of risk measures (total, natural, and anthropogenic) as well as indications of convergence time. Only then can the modelling-based approach be viewed as the most appropriate means of carrying out scientifically credible risk assessment.
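A minimal sketch of the decomposition the abstract calls for, assuming hypothetical Monte Carlo output from baseline (natural-only) and impacted (total) model runs; the distributions, threshold, and numbers are illustrative, and the point is that all three risk measures are reported together rather than a single scalar:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical model output: probability of exceeding a harm threshold,
# estimated from simulated baseline (natural-only) and impacted
# (natural + anthropogenic stressor) model runs.
threshold = 1.0
natural_runs = rng.lognormal(mean=-0.5, sigma=0.4, size=10_000)
total_runs = rng.lognormal(mean=-0.3, sigma=0.4, size=10_000)

natural_risk = np.mean(natural_runs > threshold)
total_risk = np.mean(total_runs > threshold)
anthropogenic_risk = max(total_risk - natural_risk, 0.0)

# Report the full set of measures rather than one scalar.
print(f"total risk:         {total_risk:.3f}")
print(f"natural risk:       {natural_risk:.3f}")
print(f"anthropogenic risk: {anthropogenic_risk:.3f}")
```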

2.
Studies investigating dynamic susceptibility contrast magnetic resonance imaging-determined relative cerebral blood volume (rCBV) maps as a metric of treatment response assessment have generated conflicting results. We evaluated the potential of various analytical techniques to predict survival of patients with glioma treated with chemoradiation. rCBV maps were acquired in patients with high-grade gliomas at 0, 1, and 3 weeks into chemoradiation therapy. Various analytical techniques were applied to the same cohort of serial rCBV data for early assessment of survival. Three different methodologies were investigated: 1) percentage change of whole tumor statistics (i.e., mean, median, and percentiles), 2) physiological segmentation (low rCBV, medium rCBV, or high rCBV), and 3) a voxel-based approach, parametric response mapping (PRM). All analyses were performed using the same tumor contours, which were determined using contrast-enhanced T1-weighted and fluid-attenuated inversion recovery images. The predictive potential of each response metric was assessed against 1-year and overall survival. PRM was the only analytical approach found to generate a response metric significantly predictive of 1-year patient survival. Time of acquisition and contour volume were not found to alter the sensitivity of the PRM approach for predicting overall survival. We have demonstrated the importance of the analytical approach in early response assessment using serial rCBV maps. The PRM analysis shows promise as a unified, early, and robust imaging biomarker of treatment response in patients diagnosed with high-grade gliomas.
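A minimal sketch of a voxel-based, parametric-response-mapping-style classification, assuming hypothetical co-registered baseline and mid-treatment rCBV maps, a tumor mask, and an illustrative change threshold (real studies derive the threshold from test-retest data); the fractions of increased and decreased voxels serve as the response metrics:

```python
import numpy as np

def prm_fractions(rcbv_baseline, rcbv_followup, tumor_mask, threshold=0.2):
    """Classify co-registered tumor voxels by their rCBV change.

    threshold is an illustrative cutoff on the voxel-wise difference.
    """
    delta = rcbv_followup[tumor_mask] - rcbv_baseline[tumor_mask]
    n = delta.size
    increased = np.sum(delta > threshold) / n    # PRM+
    decreased = np.sum(delta < -threshold) / n   # PRM-
    unchanged = 1.0 - increased - decreased      # PRM0
    return increased, decreased, unchanged

# Hypothetical data: two 64 x 64 x 20 maps and a sparse tumor mask.
rng = np.random.default_rng(1)
base = rng.gamma(shape=2.0, scale=1.0, size=(64, 64, 20))
follow = base + rng.normal(0.0, 0.3, size=base.shape)
mask = rng.random(base.shape) < 0.05

print(prm_fractions(base, follow, mask))
```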

3.
The successful implementation of process and product changes for a therapeutic protein drug, both during clinical development and after commercialization, requires a detailed evaluation of their impact on the protein's structure and biological functionality. This analysis is called a comparability exercise and includes a data-driven assessment of biochemical equivalence and biological characterization using a cadre of analytical methodologies. This review focuses on describing analytical results and lessons learned from selected published therapeutic protein comparability case studies, both for bulk drug substance and final drug product. An overview of the currently available analytical methodologies typically used is presented, as well as a discussion of new and emerging analytical techniques. The potential utility of several novel analytical approaches to comparability studies is discussed, including distribution and stability of protein drugs in vivo, and enhanced evaluation of higher-order protein structure in actual formulations using hydrogen/deuterium exchange mass spectrometry, two-dimensional nuclear magnetic resonance fingerprinting or empirical phase diagrams. In addition, new methods for detecting and characterizing protein aggregates and particles are presented, as these degradants are of current industry-wide concern. The critical role that analytical methodologies play in elucidating structure–function relationships for therapeutic protein products during the overall assessment of comparability is discussed.

4.
Detailed structural analysis of high molecular weight human milk oligosaccharides (HMOs) is still a challenging task. Here we present a modular strategy for flexible de novo structural characterization of this class of molecules. The protocol combines established techniques such as separation by two-dimensional high-performance liquid chromatography with different types of mass spectrometry, exoglycosidase digestion, and linkage analysis in an individual, glycan-based manner. As a proof of principle, this approach was applied to two distinct HMO isomers representing a difucosylated octaose core and a trifucosylated decaose core. The data obtained revealed the presence of one terminal Lewis A and one internal Lewis X epitope in the case of the octaose and led to the identification of this molecule as a difucosylated iso-lacto-N-octaose. The trifucosylated, doubly branched lacto-N-neo-decaose was shown to represent a new type of HMO core structure in which the branched antenna is linked to carbon atom 3 of the innermost galactosyl residue. Hence, using this analytical protocol a novel HMO structure could be defined. Our results further demonstrate that a combination of different techniques may be required for de novo structural analysis of these molecules.

5.
Raman spectroscopy is a multipurpose analytical technology that has found great utility in real-time monitoring and control of critical performance parameters of cell culture processes. As a process analytical technology (PAT) tool, the performance of Raman spectroscopy relies on chemometric models that correlate Raman signals to the parameters of interest. Current calibration techniques yield highly specific models that are reliable only under the operating conditions in which they were calibrated. Furthermore, once models are calibrated, it is typical for model performance to degrade over time due to recipe changes, raw material variability, and process drifts. Maintaining the performance of industrial Raman models is further complicated by the lack of a systematic approach to assessing their performance. In this article, we propose a real-time just-in-time learning (RT-JITL) framework for automatic calibration, assessment, and maintenance of industrial Raman models. Unlike traditional models, RT-JITL calibrates generic models that can be reliably deployed in cell culture experiments involving different modalities, cell lines, media compositions, and operating conditions. RT-JITL is the first fully integrated and fully autonomous platform offering a self-learning approach for calibrating and maintaining industrial Raman models. The efficacy of RT-JITL is demonstrated in experimental studies involving real-time prediction of various cell culture performance parameters, such as metabolite concentrations, viability, and viable cell density. The RT-JITL framework introduces a paradigm shift in the way industrial Raman models are calibrated, assessed, and maintained, which, to the best of the authors' knowledge, has not been done before.
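A minimal sketch of the just-in-time-learning idea (not the authors' RT-JITL implementation), assuming a hypothetical historical library of Raman spectra with reference metabolite measurements: for each new spectrum, the nearest historical samples are retrieved and a local PLS model is fitted on the fly. All names, sizes, and the simulated data are illustrative.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.neighbors import NearestNeighbors

def jitl_predict(x_query, X_library, y_library, k=50, n_components=5):
    """Just-in-time prediction: fit a local PLS model on the k library
    spectra closest to the query spectrum, then predict for the query."""
    nn = NearestNeighbors(n_neighbors=k).fit(X_library)
    _, idx = nn.kneighbors(x_query.reshape(1, -1))
    local_X = X_library[idx[0]]
    local_y = y_library[idx[0]]
    pls = PLSRegression(n_components=n_components)
    pls.fit(local_X, local_y)
    return float(pls.predict(x_query.reshape(1, -1))[0, 0])

# Hypothetical library: 500 spectra x 1000 Raman shifts, with reference values.
rng = np.random.default_rng(2)
X_lib = rng.normal(size=(500, 1000))
y_lib = X_lib[:, :10].sum(axis=1) + rng.normal(0, 0.1, 500)

new_spectrum = X_lib[0] + rng.normal(0, 0.01, 1000)
print(jitl_predict(new_spectrum, X_lib, y_lib))
```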

6.
This article evaluates selected sensitivity analysis methods applicable to risk assessment models with two-dimensional probabilistic frameworks, using a microbial food safety process risk model as a test-bed. Six sampling-based sensitivity analysis methods were evaluated, including Pearson and Spearman correlation, sample and rank linear regression, and sample and rank stepwise regression. In a two-dimensional risk model, the identification of key controllable inputs that can be priorities for risk management can be confounded by uncertainty. However, despite uncertainty, the results show that key inputs can be distinguished from those that are unimportant, and inputs can be grouped into categories of similar levels of importance. All selected methods are capable of identifying unimportant inputs, which is helpful in that efforts to collect data to improve the assessment or to focus risk management strategies can be prioritized elsewhere. Rank-based methods provided more robust insights with respect to the key sources of variability, in that they produced narrower ranges of uncertainty for sensitivity results and clearer distinctions when comparing the importance of inputs or groups of inputs. Regression-based methods have advantages over correlation approaches because they can be configured to provide insight regarding interactions and nonlinearities in the model.
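A minimal sketch of two of the sampling-based indices named above, Pearson (sample) and Spearman (rank) correlation, computed between hypothetical sampled model inputs and the model output; the toy risk model, input names, and distributions are illustrative, not taken from the article:

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

# Hypothetical Monte Carlo sample of a food-safety risk model:
# three inputs and one risk output per iteration.
rng = np.random.default_rng(3)
n = 2000
storage_temp = rng.normal(7.0, 2.0, n)      # controllable input
dose = rng.lognormal(2.0, 0.8, n)           # exposure input
host_suscept = rng.beta(2.0, 5.0, n)        # uncertain input
risk = 1e-4 * dose * np.exp(0.3 * storage_temp) * host_suscept

for name, x in [("storage_temp", storage_temp),
                ("dose", dose),
                ("host_suscept", host_suscept)]:
    r, _ = pearsonr(x, risk)      # sample-based (linear) sensitivity index
    rho, _ = spearmanr(x, risk)   # rank-based (monotonic) sensitivity index
    print(f"{name:13s}  Pearson={r:+.2f}  Spearman={rho:+.2f}")
```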

7.
Background, aim, and scope: Analysis of uncertainties plays a vital role in the interpretation of life cycle assessment findings. Some of these uncertainties arise from parametric data variability in life cycle inventory analysis. For instance, the efficiencies of manufacturing processes may vary among different industrial sites or geographic regions; or, in the case of new and unproven technologies, it is possible that prospective performance levels can only be estimated. Although such data variability is usually treated using a probabilistic framework, some recent work on the use of fuzzy sets or possibility theory has appeared in the literature. The latter school of thought is based on the notion that not all data variability can be properly described in terms of frequency of occurrence. In many cases, it is necessary to model the uncertainty associated with the subjective degree of plausibility of parameter values. Fuzzy set theory is appropriate for such uncertainties. However, the computations required for handling fuzzy quantities have not been fully integrated with the formal matrix-based life cycle inventory analysis (LCI) described by Heijungs and Suh (2002).

Materials and methods: This paper integrates computations with fuzzy numbers into the matrix-based LCI computational model described in the literature. The approach uses fuzzy numbers to propagate data variability in LCI calculations, and results in fuzzy distributions of the inventory results. The approach is developed based on similarities with the fuzzy economic input–output (EIO) model proposed by Buckley (Eur J Oper Res 39:54–60, 1989).

Results: The matrix-based fuzzy LCI model is illustrated using three simple case studies. The first case shows how fuzzy inventory results arise in simple systems with variability in industrial efficiency and emissions data. The second case study illustrates how the model applies to life cycle systems with co-products, which require the inclusion of displaced processes. The third case study demonstrates the use of the method in the context of comparing different carbon sequestration technologies.

Discussion: These simple case studies illustrate the important features of the model, including possible computational issues that can arise with larger and more complex life cycle systems.

Conclusions: A fuzzy matrix-based LCI model has been proposed. The model extends the conventional matrix-based LCI model to allow for computations with parametric data variability represented as fuzzy numbers. This approach is an alternative or complement to interval analysis and probabilistic or Monte Carlo techniques.

Recommendations and perspectives: Potential further work in this area includes extension of the fuzzy model to EIO-LCA models and to life cycle impact assessment (LCIA); development of hybrid fuzzy-probabilistic approaches; and integration with life cycle-based optimization or decision analysis. Additional theoretical work is needed for modeling correlations of the variability of parameters using interacting or correlated fuzzy numbers, which remains an unresolved computational issue. Furthermore, integration of the fuzzy model into LCA software can also be investigated.
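A minimal sketch of propagating triangular fuzzy data through the matrix-based LCI (scaling vector s from A s = f, inventory g = B s), using alpha-cuts and brute-force vertex enumeration; the 2 x 2 technology matrix and all numbers are illustrative, and vertex enumeration is only an approximation of full fuzzy interval arithmetic, not the paper's exact procedure:

```python
import itertools
import numpy as np

def alpha_cut(tri, alpha):
    """Interval [lo, hi] of a triangular fuzzy number (low, mode, high) at level alpha."""
    low, mode, high = tri
    return (low + alpha * (mode - low), high - alpha * (high - mode))

# Illustrative 2-process system with triangular fuzzy entries (row-major A, one emission row B).
A_fuzzy = [(0.9, 1.0, 1.1), (-0.25, -0.2, -0.15),
           (-0.6, -0.5, -0.4), (0.95, 1.0, 1.05)]
B_fuzzy = [(1.8, 2.0, 2.2), (0.08, 0.1, 0.12)]
f = np.array([1.0, 0.0])   # functional unit

def inventory_bounds(alpha):
    """Approximate bounds on g = B A^-1 f by enumerating all corners
    (vertex method) of the alpha-cut intervals."""
    intervals = [alpha_cut(t, alpha) for t in A_fuzzy + B_fuzzy]
    values = []
    for corner in itertools.product(*intervals):
        A = np.array(corner[:4]).reshape(2, 2)
        B = np.array(corner[4:]).reshape(1, 2)
        s = np.linalg.solve(A, f)        # scaling vector
        values.append((B @ s).item())    # inventory result
    return min(values), max(values)

for alpha in (0.0, 0.5, 1.0):
    print(f"alpha={alpha}: inventory in {inventory_bounds(alpha)}")
```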

8.
It is claimed that the modular approach to validation, which involves seven independent modules, will make the assessment of test validity more flexible and more efficient. In particular, the aspects of between-laboratory variability and predictive capacity are formally separated. Here, the main advantage of the approach is the opportunity for reduced labour, which allows study designs to be more time-efficient and cost-effective. The impact of this separation was analysed by taking the ECVAM validation study on in vitro methods for skin corrosivity as an example of a successful validation study: two of its methods triggered new OECD test guidelines. Lean study designs, which reduced the number of tests required by up to 60%, were simulated with the original validation data for the EPISKIN model. By using resampling techniques, we were able to demonstrate the effects of the lean designs on three between-laboratory variability measures and on the predictive capacity, in terms of sensitivity and specificity, in comparison with the original study. Overall, the study results, especially the levels of confidence, were only slightly affected by the lean designs that were modelled. It is concluded that the separation of the two modules is a promising way to speed up prospective validation studies and to substantially reduce costs without compromising study quality.
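A minimal sketch of the resampling idea, assuming a hypothetical table of binary test calls versus reference classifications (the simulated data, replicate structure, and cutoffs are illustrative and not the ECVAM study data): lean designs are emulated by subsampling the replicate tests and recomputing sensitivity and specificity on each resample.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical full design: 60 chemicals x 9 replicate calls,
# plus the reference classification (1 = corrosive).
n_chem, n_rep = 60, 9
reference = rng.integers(0, 2, n_chem)
p_correct = 0.85
calls = np.where(rng.random((n_chem, n_rep)) < p_correct,
                 reference[:, None], 1 - reference[:, None])

def sens_spec(calls_subset, reference):
    """Majority-vote prediction per chemical, then sensitivity/specificity."""
    pred = (calls_subset.mean(axis=1) >= 0.5).astype(int)
    sens = np.mean(pred[reference == 1] == 1)
    spec = np.mean(pred[reference == 0] == 0)
    return sens, spec

# Simulate a "lean" design using only 4 of the 9 replicates, 1000 resamples.
stats = [sens_spec(calls[:, rng.choice(n_rep, 4, replace=False)], reference)
         for _ in range(1000)]
sens, spec = np.array(stats).T
print(f"sensitivity {sens.mean():.2f} "
      f"(90% interval {np.percentile(sens, 5):.2f}-{np.percentile(sens, 95):.2f})")
print(f"specificity {spec.mean():.2f} "
      f"(90% interval {np.percentile(spec, 5):.2f}-{np.percentile(spec, 95):.2f})")
```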

9.
The literature contains many references to intervertebral disc analysis using experimental techniques, which represent a basis for analytical and numerical approaches. The scatter of experimental results may be one reason for difficulties in interpretation. Apart from analytical approaches, which often seem inadequate to deal with the peculiar complexities of the problem, numerical techniques are reliable and can lead to significant results. In this work, a formulation based on the finite element method is described, adopting a nonlinear model with a hyperelastic material configuration. Particular attention is paid to modelling the material constituting the nucleus, involving incompressibility characteristics and avoiding simulation techniques. This approach allows the mechanical behaviour of the real configuration of the disc to be investigated and also provides a reliable analysis of disc degeneration phenomena. The theoretical and operational aspects of the formulation are reported. The results obtained are compared with responses from various numerical and experimental data.
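As an illustration only (the abstract does not state the constitutive law actually used), a common way to handle near-incompressibility of the nucleus in finite element formulations is a decoupled hyperelastic strain-energy density, e.g. a neo-Hookean deviatoric part plus a volumetric penalty:

$$
W(\mathbf{C}) = \frac{\mu}{2}\left(\bar{I}_1 - 3\right) + \frac{\kappa}{2}\left(J - 1\right)^2,
\qquad \bar{I}_1 = J^{-2/3}\,\operatorname{tr}\mathbf{C}, \quad J = \sqrt{\det\mathbf{C}},
$$

where a bulk modulus $\kappa$ chosen large relative to the shear modulus $\mu$ enforces $J \approx 1$, i.e. near-incompressible behaviour.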

10.
Non-typable Haemophilus influenzae (NTHi) are small, gram-negative bacteria and strictly human pathogens, causing acute otitis media, sinusitis and community-acquired pneumonia. There is no vaccine available for NTHi, as there is for H. influenzae type b. Recent advances in proteomic techniques are finding novel applications in the field of vaccinology. There are several protein separation techniques available today, each with inherent advantages and disadvantages. We employed a combined proteomics approach, including sequential extraction and analytical two-dimensional polyacrylamide gel electrophoresis (2D PAGE), and two-dimensional semi-preparative electrophoresis (2D PE), in order to study protein expression in the A4 NTHi strain. Although putative vaccine candidates were identified with both techniques, 11 of 15 proteins identified using the 2D PE approach were not identified by 2D PAGE, demonstrating the complementarity of the two methods.

11.
Large-scale metabolic profiling is expected to develop into an integral part of functional genomics and systems biology. The metabolome of a cell or an organism is chemically highly complex. Therefore, comprehensive biochemical phenotyping requires a multitude of analytical techniques. Here, we describe a profiling approach that combines separation by capillary liquid chromatography with the high resolution, high sensitivity, and high mass accuracy of quadrupole time-of-flight mass spectrometry. About 2000 different mass signals can be detected in extracts of Arabidopsis roots and leaves. Many of these originate from Arabidopsis secondary metabolites. Detection based on retention times and exact masses is robust and reproducible. The dynamic range is sufficient for the quantification of metabolites. Assessment of the reproducibility of the analysis showed that biological variability exceeds technical variability. Tools were optimized or established for automatic data deconvolution and data processing. Subtle differences between samples can be detected, as shown with the chalcone synthase-deficient tt4 mutant. The accuracy of time-of-flight mass analysis allows elemental compositions to be calculated and metabolites to be tentatively identified. In-source fragmentation and tandem mass spectrometry can be used to gain structural information. This approach has the potential to contribute significantly to establishing the metabolome of Arabidopsis and other model systems. The principles of separation and mass analysis of this technique, together with its sensitivity and resolving power, greatly expand the range of metabolic profiling.
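A minimal sketch of the elemental-composition step, assuming a hypothetical measured neutral monoisotopic mass and a brute-force search over C/H/N/O formulas within a ppm tolerance; the atom limits, tolerance, and example mass are illustrative (the monoisotopic element masses are standard values):

```python
from itertools import product

# Monoisotopic masses of the elements considered (Da).
MASS = {"C": 12.0, "H": 1.0078250319, "N": 14.0030740052, "O": 15.9949146221}

def candidate_formulas(target_mass, ppm=5.0, max_atoms=(30, 60, 5, 15)):
    """Brute-force C/H/N/O compositions whose monoisotopic mass matches
    target_mass within the given ppm tolerance."""
    tol = target_mass * ppm * 1e-6
    max_c, max_h, max_n, max_o = max_atoms
    hits = []
    for c, n, o in product(range(max_c + 1), range(max_n + 1), range(max_o + 1)):
        base = c * MASS["C"] + n * MASS["N"] + o * MASS["O"]
        # Hydrogen count closest to the remaining mass.
        h = round((target_mass - base) / MASS["H"])
        if 0 <= h <= max_h:
            mass = base + h * MASS["H"]
            if abs(mass - target_mass) <= tol:
                hits.append((f"C{c}H{h}N{n}O{o}", mass))
    return hits

# Example: an illustrative neutral monoisotopic mass (consistent with C15H16O9).
for formula, mass in candidate_formulas(340.0794):
    print(formula, round(mass, 4))
```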

12.
The discovery of protein variation is an important strategy in disease diagnosis within the biological sciences. The current benchmark for elucidating information from multiple biological variables is the so-called "omics" disciplines of the biological sciences. Such variability is uncovered by the implementation of multivariable data mining techniques, which fall under two primary categories: machine learning strategies and statistics-based approaches. Typically, proteomic studies can produce hundreds or thousands of variables, p, per observation, n, depending on the analytical platform or method employed to generate the data. Many classification methods are limited by an n ≪ p constraint, and as such require pre-treatment to reduce the dimensionality prior to classification. Recently, machine learning techniques have gained popularity in the field for their ability to successfully classify unknown samples. One limitation of such methods is the lack of a functional model allowing meaningful interpretation of results in terms of the features used for classification. This is a problem that might be solved using a statistical model-based approach in which not only is the importance of each individual protein explicit, but the proteins are also combined into a readily interpretable classification rule without relying on a black-box approach. Here we apply the statistical dimension-reduction techniques Partial Least Squares (PLS) and Principal Components Analysis (PCA), followed by both statistical and machine learning classification methods, and compare them to a popular machine learning technique, Support Vector Machines (SVM). Both PLS and SVM demonstrate strong utility for proteomic classification problems.
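A minimal sketch of this kind of workflow, assuming a hypothetical n ≪ p proteomic matrix: a PCA-reduced statistical route feeding an interpretable classifier, compared against an SVM on the full feature set (PLS-DA could be substituted for PCA; all sizes and the simulated data are illustrative, not the study's data):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical proteomic data: 40 samples (n) x 2000 peak intensities (p).
rng = np.random.default_rng(5)
X = rng.normal(size=(40, 2000))
y = rng.integers(0, 2, 40)
X[y == 1, :20] += 1.0   # a few discriminating proteins

# Statistical route: dimension reduction (PCA) then an interpretable classifier.
pca_lr = make_pipeline(StandardScaler(), PCA(n_components=10),
                       LogisticRegression(max_iter=1000))

# Machine learning route: SVM on the full feature set.
svm = make_pipeline(StandardScaler(), SVC(kernel="linear"))

for name, model in [("PCA + logistic regression", pca_lr), ("linear SVM", svm)]:
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name:28s} CV accuracy = {acc:.2f}")
```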

13.
Goal, Scope and Background: Incorporation of exposure and risk concepts into life cycle impact assessment (LCIA) is often impaired by the number of sources and the complexity of site-specific impact assessment, especially when input-output (I-O) analysis is used to evaluate upstream processes. This makes it difficult to interpret LCIA outputs, especially in policy contexts. In this study, we develop an LCIA tool which takes into account the geographical variability in both emissions and exposure, and which can be applied to all economic sectors in I-O analysis. Our method relies on screening-level risk calculations and on methods to estimate population exposure per unit of emissions from specific geographic locations.

Methods: We propose a simplified impact assessment approach using the concept of intake fraction, which is the fraction of an emitted pollutant or its precursor that is eventually inhaled or ingested by the population. Instead of running a complex site-specific exposure analysis, intake fractions allow the regional variability in exposure due to meteorological factors and population density to be accounted for without much computational burden. We calculate sector-specific intake fractions using previously derived regression models and apply these values to the supply chain emissions to screen for the sectors whose emissions contribute most to total exposures. Thus, the analytical steps are simplified by relying on these screening-level risk calculations. We estimate population exposure per unit emissions from specific geographic locations only for the facilities and pollutants that pass an initial screening analysis. We test our analytical approach with reference to the case of increasing insulation for new single-family homes in the US. We quantify the public health costs from increased insulation manufacturing and compare them with the benefits from energy savings, focusing on mortality and morbidity associated with exposure to primary and secondary fine particles (PM2.5) as well as cancer risk associated with exposure to toxic air pollutants. We estimate health impacts using concentration-response functions from the published literature and compare the costs and benefits of the program by assigning monetary values to the health risks. In the second part of this paper, we present the results of our case study and consider the implications for incorporating exposure and risk concepts into I-O LCA.

Conclusions: We have presented a methodology to incorporate regional variability in emissions and exposure into input-output LCA, using reduced-form information about the relationship between emissions and population exposure, along with standard input-output analysis and risk assessment methods. The location-weighted intake fractions can overcome the difficulty of incorporating regional exposure in LCIA.
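The intake fraction at the centre of this approach can be written compactly; in the notation below (introduced here, not taken from the paper), $BR_i$ is the breathing rate of person $i$, $\Delta C_i$ the concentration change that person experiences, $Q$ the emission rate, and $P$ the size of the exposed population:

$$
iF \;=\; \frac{\sum_{i=1}^{P} BR_i \,\Delta C_i}{Q},
$$

i.e. the dimensionless fraction of the emitted mass that is eventually inhaled by the population; population intake per unit emission then scales supply-chain emissions directly in the screening step.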

14.
The risk assessment process is a critical function for deployment toxicology research. It is essential to the decision-making process for establishing risk reduction procedures and for formulating appropriate exposure levels to protect naval personnel from potentially hazardous chemicals encountered in military service that could reduce operational readiness. These decisions must be based on quality data from well-planned laboratory animal studies that guide the judgements resulting in effective risk characterization and risk management. The process of risk assessment in deployment toxicology essentially uses the same principles as civilian risk assessment, but adds activities essential to the military mission, including intended and unintended exposure to chemicals and chemical mixtures. Risk assessment and Navy deployment toxicology data are integrated into a systematic and well-planned approach to the organization of scientific information. The purpose of this paper is to outline the analytical framework used to develop strategies to protect the health of deployed Navy forces.

15.
Recent reports in the scientific literature and the media related to elevated levels of polychlorinated biphenyls (PCBs) and polybrominated diphenyl ethers (PBDEs) in farmed and wild salmon have had significant impacts on public opinion and consumer behavior, influencing the sales of farmed salmon in North America and Europe. The assessment of contaminants in fatty fish, an important source of omega-3 fatty acids, is therefore an exercise in balancing risks and benefits. Human health risk assessors and risk managers will benefit from an understanding of the level of uncertainty that is integrated into all aspects of evaluating risk in this context. Significant variability exists in the way in which analyses are conducted, how data are reported, and how they are used in risk assessments. We conducted an analytical review of PCB and PBDE data in farmed and wild salmon, and identified critical issues having implications for human health risk assessment from fish consumption. These issues include: analytical methodologies used, quantification issues, reporting of QA/QC information, tissue sampling, the nature of the tissue analyzed, and laboratory competence. This article reviews and outlines these issues, discusses their implications for human health risk assessment, and recommends the consistent application of analytical fish tissue data in human health risk assessment.

16.
The coefficient of variation, CV (%), is widely used to measure the relative variation of a random variable with respect to its mean, or to assess and compare the performance of analytical techniques and equipment. A review is made of the existing multivariate extensions of the univariate CV, where a random vector is considered instead of a random variable, and a novel definition is proposed. The multivariate CV obtained requires only the calculation of the mean vector, the covariance matrix and simple quadratic forms. No matrix inversion is needed, which makes the new approach equally attractive in high-dimensional as in very small sample size problems. As an illustration, the method is applied to electrophoresis data from external quality assessment in laboratory medicine, to phenotypic characteristics of pocket gophers, and to a microarray data set.
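For context (notation introduced here, not taken from the abstract), with mean vector $\boldsymbol{\mu}$ and covariance matrix $\boldsymbol{\Sigma}$, classical multivariate extensions include Van Valen's definition and the Voinov-Nikulin definition, the latter requiring $\boldsymbol{\Sigma}^{-1}$; a definition built only from quadratic forms, of the kind described in the abstract, avoids the inversion:

$$
CV_{VV} = \sqrt{\frac{\operatorname{tr}\boldsymbol{\Sigma}}{\boldsymbol{\mu}^{\top}\boldsymbol{\mu}}},
\qquad
CV_{VN} = \left(\boldsymbol{\mu}^{\top}\boldsymbol{\Sigma}^{-1}\boldsymbol{\mu}\right)^{-1/2},
\qquad
CV_{\text{new}} = \sqrt{\frac{\boldsymbol{\mu}^{\top}\boldsymbol{\Sigma}\,\boldsymbol{\mu}}{\left(\boldsymbol{\mu}^{\top}\boldsymbol{\mu}\right)^{2}}},
$$

where the last form uses only the mean vector, the covariance matrix, and simple quadratic forms, so no matrix inversion is needed.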

17.
Antibody-drug conjugates (ADCs) are a growing class of biotherapeutics in which a potent small molecule is linked to an antibody. ADCs are highly complex and structurally heterogeneous, typically containing numerous product-related species. One of the most impactful steps in ADC development is the identification of critical quality attributes to determine product characteristics that may affect safety and efficacy. However, due to the additional complexity of ADCs relative to the parent antibodies, establishing a solid understanding of the major quality attributes and determining their criticality are a major undertaking in ADC development. Here, we review the development challenges, especially for reliable detection of quality attributes, citing literature and new data from our laboratories, highlight recent improvements in major analytical techniques for ADC characterization and control, and discuss newer techniques, such as two-dimensional liquid chromatography, that have potential to be included in analytical control strategies.

18.
Despite the highly convoluted nature of the human brain, neural field models typically treat the cortex as a planar two-dimensional sheet of neurons. Here, we present an approach for solving neural field equations on surfaces more akin to the cortical geometries typically obtained from neuroimaging data. Our approach involves solving the integral form of the partial integro-differential equation directly, using collocation techniques alongside efficient numerical procedures for determining geodesic distances between neural units. To illustrate our methods, we study localised activity patterns in a two-dimensional neural field equation posed on a periodic square domain, the curved surface of a torus, and the cortical surface of a rat brain, the latter of which is constructed using neuroimaging data. Our results are twofold. Firstly, we find that collocation techniques are able to replicate solutions obtained using more standard Fourier-based methods on a flat, periodic domain, independent of the underlying mesh. This result is particularly significant given the highly irregular nature of the meshes derived from modern neuroimaging data. Secondly, by deploying efficient numerical schemes to compute geodesics, our approach is not only capable of modelling macroscopic pattern formation on realistic cortical geometries, but can also be extended to include cortical architectures of more physiological relevance. Importantly, such an approach provides a means by which to investigate the influence of cortical geometry upon the nucleation and propagation of spatially localised neural activity and beyond. It thus promises to provide model-based insights into disorders like epilepsy or spreading depression, as well as healthy cognitive processes like working memory or attention.
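For reference, a generic Amari-type neural field equation (not necessarily the exact form used in the paper) posed on a surface $\Omega$ replaces Euclidean distance in the connectivity kernel with the geodesic distance $d(\mathbf{x},\mathbf{y})$:

$$
\frac{\partial u(\mathbf{x},t)}{\partial t} = -u(\mathbf{x},t) + \int_{\Omega} w\big(d(\mathbf{x},\mathbf{y})\big)\, f\big(u(\mathbf{y},t)\big)\, \mathrm{d}\mathbf{y},
$$

where $u$ is the neural activity, $w$ a distance-dependent connectivity kernel, and $f$ a firing-rate nonlinearity; a collocation scheme discretises the integral over the mesh vertices, which is why efficient geodesic computation on irregular cortical meshes matters.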

19.
Biotherapeutics, such as those derived from monoclonal antibodies (mAbs), are industrially produced in controlled multiunit operation bioprocesses. Each unit operation contributes to the final characteristics of the bioproduct. The complexity of the bioprocesses, the cellular machinery, and the bioproduct molecules typically leads to inherent heterogeneity and variability in the final critical quality attributes (CQAs). In order to improve process control and increase product quality assurance, online, real-time monitoring of product CQAs is highly relevant. This review covers recent advances in CQA monitoring of biotherapeutic drugs, with emphasis on mAbs, across the different bioprocess unit operations. Recent analytical techniques used for assessment of product-related CQAs of mAbs are considered in light of their analytical speed and ability to measure different CQAs. Furthermore, state-of-the-art modeling approaches for real-time CQA estimation, recently demonstrated under the process analytical technology and quality-by-design frameworks in the biopharmaceutical industry, are presented as a viable alternative for real-time bioproduct CQA monitoring.

20.
Parametric life-cycle assessment (LCA) models have been integrated with traditional design tools and used to demonstrate the rapid elucidation of holistic, analytical trade-offs among detailed design variations. A different approach is needed, however, if analytical environmental assessment is to be incorporated in very early design stages. During early stages, there may be competing product concepts with dramatic differences. Detailed information is scarce, and decisions must be made quickly.
This article explores an approximate method for providing preliminary LCAs. In this method, learning algorithms trained using the known characteristics of existing products might allow environmental aspects of new product concepts to be approximated quickly during conceptual design without defining new models. Artificial neural networks are trained to generalize on product attributes, which are characteristics of product concepts, and on environmental inventory data from pre-existing LCAs. The product design team then queries the trained model with the high-level attributes of a new concept to quickly obtain an impact assessment. Foundations for the learning-system approach are established, and an application within the distributed object-based modeling environment (DOME) is provided. Tests have shown that it is possible to predict life-cycle energy consumption, and that the method could also be used to predict solid waste, greenhouse effect, ozone depletion, acidification, eutrophication, and winter and summer smog.
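A minimal sketch of the surrogate idea, assuming a hypothetical table of high-level product attributes paired with inventory results from existing LCAs; a small neural network learns the mapping and is then queried with the attributes of a new concept. All attribute names, the synthetic data, and the network size are illustrative, not the DOME implementation.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical training table: [mass_kg, n_parts, power_W, plastic_fraction]
# paired with life-cycle energy (MJ) taken from pre-existing LCA studies.
rng = np.random.default_rng(6)
attributes = rng.uniform([0.1, 1, 0, 0.0], [10.0, 200, 500, 1.0], size=(200, 4))
life_cycle_energy = (80 * attributes[:, 0] + 1.5 * attributes[:, 1]
                     + 4.0 * attributes[:, 2] + rng.normal(0, 20, 200))

# Train a small neural-network surrogate on attributes -> inventory result.
surrogate = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(16, 16),
                                       max_iter=5000, random_state=0))
surrogate.fit(attributes, life_cycle_energy)

# Query the surrogate with a new product concept during conceptual design.
new_concept = np.array([[2.5, 40, 120, 0.6]])
print(f"estimated life-cycle energy: {surrogate.predict(new_concept)[0]:.0f} MJ")
```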
