Similar Articles
Found 20 similar articles (search time: 93 ms)
1.

Background

Detecting causality from short time-series data, such as gene regulation data, is important in many fields, especially in biological systems, but it is usually very difficult. Several powerful methods have recently been developed to address this problem; however, existing methods typically require long time series or large numbers of samples to detect causality among the observed variables. In real applications, such as biological systems, the available data are often short or sparse, because data collection depends heavily on experimental conditions and limited resources.

Results

To overcome these limitations, we propose a new method, the topologically equivalent position method, which can detect causality from very short time series or small samples. The method is mainly based on attractor embedding theory from nonlinear dynamical systems. Using theoretical models and real gene expression data, and comparing against inner composition alignment, we demonstrate the effectiveness of our method.
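The paper's method builds on attractor embedding; a generic Takens-style delay-embedding helper (not the authors' algorithm — the series, embedding dimension and lag below are purely illustrative) might look like this:

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Takens-style time-delay embedding of a scalar series."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

# A short periodic series embeds into a closed loop in embedding space.
x = np.sin(np.linspace(0.0, 8.0 * np.pi, 200))
E = delay_embed(x, dim=3, tau=5)
print(E.shape)
```

Causality methods in this family then compare the geometry of such reconstructed attractors across variables.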

Conclusions

Our results show that the method can be used effectively in biological systems. We expect it to also prove useful in other fields, such as complex networks and ecological systems.

2.
3.
4.

Background

Network inference methods reconstruct mathematical models of molecular or genetic networks directly from experimental data sets. We have previously reported a mathematical method which is exclusively data-driven, does not involve any heuristic decisions within the reconstruction process, and delivers all possible alternative minimal networks, in terms of simple place/transition Petri nets, that are consistent with a given discrete time-series data set.

Results

We fundamentally extended the previously published algorithm to consider catalysis and inhibition of the reactions that occur in the underlying network. The results of the reconstruction algorithm are encoded in the form of an extended Petri net involving control arcs. This allows the consideration of processes involving mass flow and/or regulatory interactions. As a non-trivial test case, the phosphate regulatory network of enterobacteria was reconstructed using in silico-generated time-series data sets on wild-type and in silico mutants.
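As a rough illustration of the modelling formalism (not the reconstruction algorithm itself), an extended place/transition net with read (catalyst) and inhibitory control arcs can be sketched as follows; the dictionary encoding and all place/transition names are assumptions:

```python
# Minimal extended Petri net sketch: a transition consumes tokens from its
# input places, produces tokens on its output places, requires catalysts
# (read arcs, tokens not consumed) and is blocked by inhibitors.
def enabled(marking, transition):
    return (all(marking.get(p, 0) >= w for p, w in transition["in"].items())
            and all(marking.get(p, 0) >= 1 for p in transition["catalysts"])
            and all(marking.get(p, 0) == 0 for p in transition["inhibitors"]))

def fire(marking, transition):
    m = dict(marking)
    for p, w in transition["in"].items():
        m[p] -= w
    for p, w in transition["out"].items():
        m[p] = m.get(p, 0) + w
    return m

# Hypothetical reaction A -> B, catalysed by E, inhibited by I.
t1 = {"in": {"A": 1}, "out": {"B": 1}, "catalysts": ["E"], "inhibitors": ["I"]}
m0 = {"A": 2, "E": 1}
m1 = fire(m0, t1) if enabled(m0, t1) else m0
print(m1)
```

The reconstruction task in the paper is then to find all minimal nets of this kind whose firing sequences reproduce the discretised time-series data.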

Conclusions

The new exact algorithm reconstructs extended Petri nets from time series data sets by finding all alternative minimal networks that are consistent with the data. It suggested alternative molecular mechanisms for certain reactions in the network. The algorithm is useful to combine data from wild-type and mutant cells and may potentially integrate physiological, biochemical, pharmacological, and genetic data in the form of a single model.

5.

Background

Copy number variants (CNVs) are a potentially important component of the genetic contribution to risk of common complex diseases. Analysis of the association between CNVs and disease requires that uncertainty in CNV copy-number calls, which can be substantial, be taken into account; failure to consider this uncertainty can lead to biased results. Therefore, there is a need to develop and use appropriate statistical tools. To address this issue, we have developed CNVassoc, an R package for carrying out association analysis of common copy number variants in population-based studies. This package includes functions for testing for association with different classes of response variables (e.g. class status, censored data, counts) under a series of study designs (case-control, cohort, etc.) and inheritance models, adjusting for covariates. The package includes functions for inferring copy number (CNV genotype calling), but can also accept copy number data generated by other algorithms (e.g. CANARY, CGHcall, IMPUTE).

Results

Here we present a new R package, CNVassoc, that can handle different types of CNV data arising from different platforms, such as MLPA or aCGH. Through a real data example we illustrate that our method is able to incorporate uncertainty into the association analysis. We also show how the package can be useful for analysing imputed SNP data. Through a simulation study we show that CNVassoc outperforms CNVtools in terms of computing time as well as convergence failure rate.
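CNVassoc itself is an R package; as a language-neutral illustration of the underlying idea — integrating the association likelihood over uncertain copy-number calls rather than plugging in a single best-guess genotype — a hypothetical latent-class sketch might look like this (all data, class centres and parameter values are invented, and the flat-prior call probabilities are a simplification):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(3)

# Invented case-control data: true copy number c in {0, 1, 2}; disease risk
# rises with c on the logit scale (intercept -1.0, log-odds 0.8 per copy).
n = 2000
c_true = rng.choice([0, 1, 2], size=n, p=[0.25, 0.5, 0.25])
y = rng.binomial(1, expit(-1.0 + 0.8 * c_true))

# Uncertain calls: P[i, c] = probability that individual i carries copy
# number c, derived here from a noisy intensity signal (flat class prior).
signal = c_true + rng.normal(scale=0.4, size=n)
centers = np.array([0.0, 1.0, 2.0])
logits = -0.5 * (signal[:, None] - centers[None, :]) ** 2 / 0.4 ** 2
P = np.exp(logits - logits.max(axis=1, keepdims=True))
P /= P.sum(axis=1, keepdims=True)

# Latent-class likelihood: average the logistic model over the call
# probabilities instead of conditioning on one called copy number.
def negloglik(theta):
    a, b = theta
    pr = expit(a + b * np.arange(3.0))  # P(y = 1 | copy number c)
    l_ic = np.where(y[:, None] == 1, pr[None, :], 1.0 - pr[None, :])
    return -np.sum(np.log((P * l_ic).sum(axis=1)))

a_hat, b_hat = minimize(negloglik, x0=[0.0, 0.0], method="BFGS").x
print(f"estimated log-odds ratio per copy: {b_hat:.2f}")
```

Ignoring the call uncertainty (e.g. regressing on `np.argmax(P, axis=1)`) would attenuate the estimated effect, which is the bias the abstract warns about.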

Conclusions

We provide a package that outperforms the existing ones in terms of modelling flexibility, power, convergence rate, ease of covariate adjustment, and requirements for sample size and signal quality. Therefore, we offer CNVassoc as a method for routine use in CNV association studies.

6.

Introduction

Concussions are a major health concern, as they cause significant acute symptoms and, in some athletes, long-term neurologic dysfunction. Diagnosis of concussion can be difficult, as are decisions to stop play.

Objective

To determine if concussions in adolescent male hockey players could be diagnosed using plasma metabolomics profiling.

Methods

Plasma was obtained from 12 concussed and 17 non-concussed athletes, and assayed for 174 metabolites with proton nuclear magnetic resonance and direct injection liquid chromatography tandem mass spectrometry. Data were analysed with multivariate statistical analysis and machine learning.

Results

The estimated time from concussion occurrence to blood draw at the first clinic visit was 2.3 ± 0.7 days. Using principal component analysis, the leading 10 components, each containing 9 metabolites, were shown to account for 82 % of the variance between cohorts, and relied heavily on changes in glycerophospholipids. Cross-validation of the classifier using a leave-one out approach demonstrated a 92 % accuracy rate in diagnosing a concussion (P < 0.0001). The number of metabolites required to achieve the 92 % diagnostic accuracy was minimized from 174 to as few as 17 metabolites. Receiver operating characteristic analyses generated an area under the curve of 0.91, indicating excellent concussion diagnostic potential.
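The analysis pipeline described above — dimension reduction followed by leave-one-out classification — can be sketched on synthetic data. The matrix sizes mirror the abstract (29 athletes, 174 metabolites, 10 components, 17 informative metabolites), but the simulated metabolite shifts and the choice of a logistic classifier are assumptions, not details from the study:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-in: 29 athletes x 174 metabolites (12 concussed,
# 17 non-concussed), with a hypothetical shift in 17 metabolites.
X = rng.normal(size=(29, 174))
y = np.array([1] * 12 + [0] * 17)
X[y == 1, :17] += 3.0  # invented effect size

# Scale -> PCA (10 components) -> linear classifier.
clf = make_pipeline(StandardScaler(), PCA(n_components=10), LogisticRegression())

# Leave-one-out cross-validation: each athlete is held out once.
correct = 0
for train, test in LeaveOneOut().split(X):
    clf.fit(X[train], y[train])
    correct += int(clf.predict(X[test])[0] == y[test][0])
accuracy = correct / len(y)
print(f"LOO accuracy: {accuracy:.2f}")
```

Leave-one-out is the natural resampling scheme at this sample size, since every held-out prediction uses all remaining 28 athletes for training.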

Conclusion

Metabolomics profiling, together with multivariate statistical analysis and machine learning, identified concussed athletes with >90 % certainty. Metabolomics profiling represents a novel diagnostic method for concussion, and may be amenable to point-of-care testing.

7.

Background

Models of biochemical systems are typically complex, which may complicate the discovery of cardinal biochemical principles. It is therefore important to single out the parts of a model that are essential for the function of the system, so that the remaining non-essential parts can be eliminated. However, each component of a mechanistic model has a clear biochemical interpretation, and it is desirable to conserve as much of this interpretability as possible in the reduction process. Furthermore, it is of great advantage if we can translate predictions from the reduced model to the original model.

Results

In this paper we present a novel method for model reduction that generates reduced models with a clear biochemical interpretation. Unlike conventional methods for model reduction our method enables the mapping of predictions by the reduced model to the corresponding detailed predictions by the original model. The method is based on proper lumping of state variables interacting on short time scales and on the computation of fraction parameters, which serve as the link between the reduced model and the original model. We illustrate the advantages of the proposed method by applying it to two biochemical models. The first model is of modest size and is commonly occurring as a part of larger models. The second model describes glucose transport across the cell membrane in baker's yeast. Both models can be significantly reduced with the proposed method, at the same time as the interpretability is conserved.
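The idea of proper lumping with fraction parameters can be illustrated on a minimal toy system (the rates and species below are hypothetical, not the paper's models): two species in fast exchange are lumped into one state, and a fraction parameter maps reduced-model predictions back to the original variables:

```python
import numpy as np
from scipy.integrate import odeint

# Original model (invented rates): A <-> B fast, B -> C slow.
kab, kba, kbc = 100.0, 50.0, 1.0

def full(x, t):
    a, b, c = x
    return [-kab * a + kba * b,
            kab * a - kba * b - kbc * b,
            kbc * b]

# Lump L = A + B. Fast equilibrium gives the fraction parameter
# f = kab / (kab + kba), the share of L present as B.
f = kab / (kab + kba)

def reduced(x, t):
    L, c = x
    return [-kbc * f * L, kbc * f * L]

t = np.linspace(0.0, 3.0, 50)
x_full = odeint(full, [1.0, 0.0, 0.0], t)
x_red = odeint(reduced, [1.0, 0.0], t)

# The fraction parameter links the two levels: the reduced state maps
# back to the original variable B as f * L.
b_pred = f * x_red[:, 0]
err = np.max(np.abs(b_pred[10:] - x_full[10:, 1]))
print(f"max |B error| after the fast transient: {err:.4f}")
```

The reduced model has two states instead of three, yet its predictions translate back to every original species via `f`, which is the interpretability property the abstract emphasises.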

Conclusions

We introduce a novel method for reduction of biochemical models that is compatible with the concept of zooming. Zooming allows the modeler to work on different levels of model granularity, and enables a direct interpretation of how modifications to the model on one level affect the model on other levels in the hierarchy. The method extends the applicability of an approach previously developed for zooming of linear biochemical models to nonlinear models.

8.

Purpose

A complete assessment of water use in life cycle assessment (LCA) involves modelling both consumptive and degradative water use. Due to the range of environmental mechanisms involved, the results are typically reported as a profile of impact category indicator results. However, there is also demand for a single score stand-alone water footprint, analogous to the carbon footprint. To facilitate single score reporting, the critical dilution volume approach has been used to express a degradative emission in terms of a theoretical water volume, sometimes referred to as grey water. This approach has not received widespread acceptance and a new approach is proposed which takes advantage of the complex fate and effects models normally employed in LCA.

Methods

Results for both consumptive and degradative water use are expressed in the reference unit H2Oe, enabling summation and reporting as a single stand-alone value. Consumptive water use is assessed taking into consideration the local water stress relative to the global average water stress (0.602). Concerning degradative water use, each emission is modelled separately using the ReCiPe impact assessment methodology, with results subsequently normalised, weighted and converted to the reference unit (H2Oe) by comparison to the global average value for consumptive water use (1.86 × 10⁻³ ReCiPe points m⁻³).
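The single-score aggregation described above reduces to a short calculation. The two constants are those quoted in the abstract; the inventory numbers (volumes, local stress, ReCiPe points) are purely illustrative:

```python
# Constants quoted in the abstract; the inventory numbers are invented.
GLOBAL_AVG_STRESS = 0.602       # global average water stress index
RECIPE_PER_M3 = 1.86e-3         # ReCiPe points per m3 of consumptive use

def water_footprint_h2oe(consumptive_m3, local_stress, degradative_points):
    """Single-score water footprint in m3 H2Oe."""
    # Consumptive use: volume weighted by local vs. global average stress.
    consumptive = consumptive_m3 * local_stress / GLOBAL_AVG_STRESS
    # Degradative use: normalised and weighted ReCiPe points converted
    # to an equivalent consumptive volume.
    degradative = degradative_points / RECIPE_PER_M3
    return consumptive + degradative

fp = water_footprint_h2oe(10.0, 0.9, 0.00372)
print(f"water footprint: {fp:.2f} m3 H2Oe")
```

Expressing both pathways in the same volumetric unit is what allows the summation into one stand-alone figure, analogous to a carbon footprint.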

Results and discussion

The new method, illustrated in a simplified case study, incorporates best practice in terms of life cycle impact assessment modelling for eutrophication, human and eco-toxicity, and is able to assimilate new developments relating to these and any other impact assessment models relevant to water pollution.

Conclusions

The new method enables a more comprehensive and robust assessment of degradative water use in a single score stand-alone water footprint than has been possible in the past.

9.
The availability of large-scale datasets has led to more effort being made to understand characteristics of metabolic reaction networks. However, because the large-scale data are semi-quantitative, and may contain biological variations and/or analytical errors, it remains a challenge to construct a mathematical model with precise parameters using only these data. The present work proposes a simple method, referred to as PENDISC (Parameter Estimation in a Non-DImensionalized S-system with Constraints), to assist the complex process of parameter estimation in the construction of a mathematical model for a given metabolic reaction system. The PENDISC method was evaluated using two simple mathematical models: a linear metabolic pathway model with inhibition and a branched metabolic pathway model with inhibition and activation. The results indicate that a smaller number of data points and rate constant parameters enhances the agreement between calculated values and time-series data of metabolite concentrations, and leads to faster convergence when the same initial estimates are used for the fitting. This method is also shown to be applicable to noisy time-series data and to unmeasurable metabolite concentrations in a network, and has the potential to handle metabolome data from a relatively large-scale metabolic reaction system. Furthermore, it was applied to aspartate-derived amino acid biosynthesis in the plant Arabidopsis thaliana. The result confirms that the constructed mathematical model agrees satisfactorily with the time-series datasets of seven metabolite concentrations.

10.

Aims

This study aimed to compare stepwise multiple linear regression (SMLR), partial least squares regression (PLSR) and support vector machine regression (SVMR) for estimating soil total nitrogen (TN) contents from laboratory visible/near-infrared (Vis/NIR) reflectance of selected coarse and heterogeneous soils. Moreover, the effects of first (1st) versus second (2nd) derivative pre-processing of the spectral reflectance were examined, and the important wavelengths were identified.

Methods

The TN contents and the Vis/NIR were measured in the laboratory. Several methods were employed for Vis/NIR data pre-processing. The SMLR, PLSR and SVMR models were calibrated and validated using independent datasets.

Results

Results showed that the SVMR and PLSR models performed similarly, and both performed better than SMLR. The spectral bands near 1450, 1850, 2250, 2330 and 2430 nm were important wavelengths in the PLSR model. In addition, the 1st derivative was more appropriate than the 2nd derivative for spectral data pre-processing.

Conclusions

PLSR was the most suitable method for estimating TN contents in this study. SVMR may be a promising technique whose potential needs further exploration. Moreover, future studies using outdoor and airborne/satellite hyperspectral data for estimating TN content are needed to test these findings.

11.

Purpose

The purpose of this paper is to supply an open method for weighting different environmental impacts, open to fundamentally different evaluation approaches and to easy revision. From the partial, diverse and conflicting weighting methods available, a maximally consistent and flexible meta-method is constructed, allowing for a transparent discussion on weighting.

Methods

The methods incorporated are as general as possible, each individual one as pure as possible. We surveyed encompassing operational methods for evaluation that are applicable in LCA and in larger systems such as countries. They differ in modelling approach (midpoint or endpoint), in evaluation set-up (collective or individual preferences), and in whether preferences are revealed or stated. The first group is midpoint modelling with collectively stated preferences, with operational weighting schemes from Dutch and US government applications. The second is the LCA-type endpoint approach using individually stated preferences, with public examples from Japan and the Netherlands. The third is the integrated modelling approach of economists.

Results

All methods are internally inconsistent, for example in their treatment of place and time, and all are incomplete, lacking certain environmental interventions and effect routes. We repaired only the incompleteness, by transferring elements between methods. Finally, we combined the three groups of methods into a meta-weighting method, aligned with the ILCD Handbook requirements for impact assessment. Application to time-series data on EU-27 consumption shows how the EU has developed in terms of overall environmental decoupling.

Conclusions

The disparate methods available can all be improved substantially. For now, a user-adjustable meta-method is the best option, allowing for public discussion. A flexible, regularly updated spreadsheet is supplied with the article.

12.

Purpose

Topsoil erosion due to land use has been characterised as one of the most damaging environmental problems, in terms of soil-resource depletion, changes in soil fertility and net soil productivity, and damage to aquatic ecosystems. On-site environmental damage to topsoil by water erosion has begun to be considered in Life Cycle Assessment (LCA) within the context of ecosystem services. However, a framework for modelling soil erosion by water that addresses off-site deposition in surface water systems, to support life cycle inventory (LCI) modelling, is still lacking. The objectives of this paper are to provide an overview of existing methods addressing topsoil erosion in LCA and to develop a framework to support LCI modelling of topsoil erosion, transport and deposition in surface water systems, in order to establish a procedure for assessing the environmental damage of topsoil erosion to water ecosystems.

Methods

The main features of existing methods addressing topsoil erosion in LCA are analysed, particularly with respect to LCI and Life Cycle Impact Assessment methodologies. Nine topsoil erosion models are reviewed for estimating topsoil erosion by water, soil particle transport through the landscape and in-stream deposition. The type of erosion evaluated by each model, its applicable spatial scale, input data requirements and operational complexity are considered. The WATEM-SEDEM model is proposed as the most adequate for performing LCI erosion analysis.

Results and discussion

The definition of land use type, the area of assessment, the spatial location and the system boundaries are the main elements discussed. Depending on the defined system boundaries and the routing of detached soil particles to the water systems, resolving the multi-functionality of the system becomes particularly relevant. Simplifications related to the spatial variability of the input data parameters are recommended. Finally, a sensitivity analysis is recommended to evaluate the effect of the transport capacity coefficient on the LCI results.

Conclusions

The published LCA methods focus only on the changes of soil properties due to topsoil erosion by water. This study provides a simplified framework to perform an LCI of topsoil erosion by considering off-site deposition of eroded particles in surface water systems. The widespread use of the proposed framework would require the development of LCI erosion databases. The issues of topsoil erosion impact on aquatic biodiversity, including the development of characterisation factors, are now the subject of on-going research.

13.

Background

Proteomic profiling is a rapidly developing technology that may enable early disease screening and diagnosis. Surface-enhanced laser desorption ionization–time of flight mass spectrometry (SELDI-TOF MS) has demonstrated promising results in screening and early detection of many diseases. In particular, it has emerged as a high-throughput tool for detection and differentiation of several cancer types. This review aims to appraise published data on the impact of SELDI-TOF MS in breast cancer.

Methods

A systematic literature search between 1965 and 2009 was conducted using the PubMed, EMBASE, and Cochrane Library databases. Studies covering different aspects of breast cancer proteomic profiling using SELDI-TOF MS technology were critically reviewed by researchers and specialists in the field.

Results

Fourteen key studies involving breast cancer biomarker discovery using SELDI-TOF MS proteomic profiling were identified. The studies differed in their inclusion and exclusion criteria, biologic samples, preparation protocols, arrays used, and analytical settings. Taken together, the numerous studies suggest that SELDI-TOF MS methodology may be used as a fast and robust approach to study the breast cancer proteome and enable the analysis of the correlations between proteomic expression patterns and breast cancer.

Conclusion

SELDI-TOF MS is a promising high-throughput technology with potential applications in breast cancer screening, detection, and prognostication. Further studies are needed to resolve current limitations and facilitate clinical utility.

14.

Background

Pigs are widely used as models for human physiology in intervention studies because of the close resemblance between human and porcine physiology and the high degree of experimental control possible with an animal model. Cloned animals have, in principle, identical genotypes, and possibly also identical phenotypes, and this offers an extra level of experimental control that could make them a desirable tool for intervention studies. In the present study we therefore address how phenotype and phenotypic variation are affected by cloning, through comparison of cloned pigs and normal outbred pigs.

Results

The metabolic phenotype of cloned pigs (n = 5) was elucidated for the first time by nuclear magnetic resonance (NMR)-based metabolomic analysis of multiple bio-fluids, including plasma, bile and urine. The metabolic phenotype of the cloned pigs was compared with that of normal outbred pigs (n = 6) by multivariate data analysis, which revealed differences between the metabolic phenotypes. Plasma lactate was higher in cloned than in control pigs, and multiple metabolites were altered in bile. However, a lower inter-individual variability for cloned pigs compared with control pigs could not be established.

Conclusions

From the present study we conclude that cloned and normal outbred pigs are phenotypically different. However, based on this limited number of animals, it cannot be concluded that the use of cloned animals will reduce inter-individual variation in intervention studies.

15.

Purpose

As a consequence of the multi-functionality of land, the impact assessment of land use in Life Cycle Impact Assessment requires the modelling of several impact pathways covering biodiversity and ecosystem services. To provide consistency amongst these separate impact pathways, general principles for their modelling are provided in this paper. These are refinements to the principles that have already been proposed in publications by the UNEP-SETAC Life Cycle Initiative. In particular, this paper addresses the calculation of land use interventions and land use impacts, the issue of impact reversibility, the spatial and temporal distribution of such impacts and the assessment of absolute or relative ecosystem quality changes. Based on this, we propose a guideline to build methods for land use impact assessment in Life Cycle Assessment (LCA).

Results

Recommendations are given for the development of new characterization models, for which a series of key elements should be explicitly stated: the modelled land use impact pathways, the land use/cover typology covered, the level of biogeographical differentiation used for the characterization factors, the reference land use situation, and whether relative or absolute quality changes are used to calculate land use impacts. Moreover, for application of the characterisation factors (CFs) in an LCA study, data collection should be transparent with respect to the data input required from the land use inventory and the regeneration times. It should be detailed how generic CFs can be used for the background system, how spatially explicit CFs can be calculated for the foreground system of a specific LCA study, and how land use change is to be allocated. Finally, the modelling period for which land use impacts of transformation and occupation are calculated, and how uncertainty is accounted for, need to be justified.

Discussion

The presented guideline is based on a number of assumptions: Discrete land use types are sufficient for an assessment of land use impacts; ecosystem quality remains constant over time of occupation; time and area of occupation are substitutable; transformation time is negligible; regeneration is linear and independent from land use history and landscape configuration; biodiversity and multiple ecosystem services are independent; the ecological impact is linearly increasing with the intervention; and there is no interaction between land use and other drivers such as climate change. These assumptions might influence the results of land use Life Cycle Impact Assessment and need to be critically reflected.

Conclusions and recommendations

In this and the other papers of the special issue, we present principles and recommendations for calculating land use impacts on biodiversity and ecosystem services on a global scale. Within the LCA framework, they are mainly used for assessing land use impacts in the background system. The main areas for further development are the link to regional ecological models running in the foreground system, the relative weighting of ecosystem service midpoints, and indirect land use.

16.
17.

Background

Determining the parameters of a mathematical model from quantitative measurements is the main bottleneck of modelling biological systems. Parameter values can be estimated from steady-state data or from dynamic data. The nature of suitable data for these two types of estimation is rather different. For instance, estimations of parameter values in pathway models, such as kinetic orders, rate constants, flux control coefficients or elasticities, from steady-state data are generally based on experiments that measure how a biochemical system responds to small perturbations around the steady state. In contrast, parameter estimation from dynamic data requires time series measurements for all dependent variables. Almost no literature has so far discussed the combined use of both steady-state and transient data for estimating parameter values of biochemical systems.

Results

In this study we introduce a constrained optimization method for estimating parameter values of biochemical pathway models using steady-state information and transient measurements. The constraints are derived from the flux connectivity relationships of the system at the steady state. Two case studies demonstrate the estimation results with and without flux connectivity constraints. The unconstrained optimal estimates from dynamic data may fit the experiments well, but they do not necessarily maintain the connectivity relationships. As a consequence, individual fluxes may be misrepresented, which may cause problems in later extrapolations. By contrast, the constrained estimation accounting for flux connectivity information reduces this misrepresentation and thereby yields improved model parameters.
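The constrained-estimation idea — least-squares fitting of dynamic data subject to a steady-state flux-balance relation — can be sketched on a toy one-metabolite pathway (the rate laws and parameter values below are hypothetical, not one of the paper's case studies):

```python
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import minimize

# Toy pathway: input flux v1 = k1*S (S held constant) and consumption
# v2 = k2*X, so dX/dt = k1*S - k2*X. True parameters (invented): 2.0, 0.5.
S = 1.0
k_true = (2.0, 0.5)
t = np.linspace(0.0, 10.0, 20)

def model(x, t, k1, k2):
    return k1 * S - k2 * x

x_obs = odeint(model, 0.0, t, args=k_true).ravel()
x_ss = x_obs[-1]  # observed steady-state level

def sse(k):
    x_sim = odeint(model, 0.0, t, args=tuple(k)).ravel()
    return np.sum((x_sim - x_obs) ** 2)

# Dynamic least squares subject to the steady-state flux balance
# v1 = v2, i.e. k1*S - k2*x_ss = 0, as an equality constraint.
res = minimize(sse, x0=[1.0, 1.0], method="SLSQP",
               bounds=[(1e-6, None), (1e-6, None)],
               constraints=[{"type": "eq",
                             "fun": lambda k: k[0] * S - k[1] * x_ss}])
print("estimated (k1, k2):", res.x)
```

An unconstrained fit could match the transient equally well while misrepresenting the individual fluxes; the equality constraint forces the estimated fluxes to respect the steady-state connectivity, as the abstract argues.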

Conclusion

The method combines transient metabolic profiles and steady-state information and leads to the formulation of an inverse parameter estimation task as a constrained optimization problem. Parameter estimation and model selection are simultaneously carried out on the constrained optimization problem and yield realistic model parameters that are more likely to hold up in extrapolations with the model.

18.
19.

Purpose

Consequential Life Cycle Assessment (C-LCA) is a “system modelling approach in which activities in a product system are linked so that activities are included in the product system to the extent that they are expected to change as a consequence of a change in demand”. Hence, C-LCA focuses on micro-economic actions linked to macro-economic consequences, by identifying the (marginal) suppliers and technologies prone to be affected by variable scale changes in the demand of a product. Detecting the direct and indirect environmental effects due to changes in the production system is not an easy task. Hence, researchers have combined the consequential perspective with different econometric models. Therefore, the aim of this study is to assess an increase in biocrops cultivation in Luxembourg using three different consequential modelling approaches to understand the benefits, drawbacks and assumptions linked to each approach as applied to the case study selected.

Methods

Firstly, a partial equilibrium (PE) model is used to detect changes in land cultivation based on the farmers’ revenue maximisation. Secondly, another PE model is proposed, which considers a different perspective aiming at minimising a total adaptation cost (so-called opportunity cost) to satisfy a given new demand of domestically produced biofuel. Finally, the consequential system delimitation for agricultural LCA approach, as proposed by Schmidt (Int J Life Cycle Assess 13:350–364, 2008), is applied.

Results and discussion

The two PE models exhibit complex shifts in crop-rotation land use changes (LUCs), linked to the optimisation that is performed, whereas the third approach shows limited consequential impact on cropping patterns, since the expert-opinion decision tree constitutes a simplification of the ongoing LUCs. However, environmental consequences in the latter were considerably higher, owing to intercontinental trade assumptions recommended by the experts that were not accounted for in the economic models. Environmental variations between the scenarios due to LUCs depend on the different expert- or computationally based assumptions. Finally, the environmental consequences relative to the current state of the art are modest, owing to the limited impact of the shock within the global trade market.

Conclusions

The use of several consequential modelling approaches within the same study may help widen the interpretation of the advantages and risks of applying a specific change to a production system. Different models may not only serve as alternatives for comparing scenarios and assumptions; there may also be room for combining them within a single framework to reduce uncertainties in an integrated way.

20.

Introduction

Virtually all existing expectation-maximization (EM) algorithms for quantitative trait locus (QTL) mapping overlook the covariance structure of genetic effects, even though this information can help enhance the robustness of model-based inferences.

Results

Here, we propose fast EM and pseudo-EM procedures for Bayesian shrinkage analysis of QTLs, designed to accommodate the posterior covariance structure of genetic effects through a block-updating scheme, that is, by updating all genetic effects simultaneously over many cycles of iteration.
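A minimal sketch of an EM scheme in this spirit — block-updating all effects jointly from their posterior, with per-effect prior variances re-estimated in the M-step — on synthetic marker data. This is a generic shrinkage EM for illustration, not the authors' exact procedure, and all sizes and effect values are invented:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic marker data: n individuals x p markers, three true QTL effects.
n, p = 200, 50
X = rng.choice([-1.0, 1.0], size=(n, p))
b_true = np.zeros(p)
b_true[[3, 17, 31]] = [1.0, -0.8, 0.6]
y = X @ b_true + rng.normal(scale=0.5, size=n)

# Shrinkage EM: b_j ~ N(0, s2[j]). The E-step updates ALL effects jointly
# from their posterior mean b and covariance C (the "block update"); the
# M-step re-estimates the per-effect prior and residual variances.
s2 = np.ones(p)
sigma2 = 1.0
for _ in range(100):
    A = X.T @ X / sigma2 + np.diag(1.0 / s2)
    C = np.linalg.inv(A)            # posterior covariance of the block
    b = C @ (X.T @ y) / sigma2      # posterior means, all effects at once
    s2 = b ** 2 + np.diag(C)        # per-effect prior variances
    resid = y - X @ b
    sigma2 = (resid @ resid + np.trace(X @ C @ X.T)) / n

b_hat = b
top = sorted(int(i) for i in np.argsort(-np.abs(b_hat))[:3])
print("largest |effects| at markers:", top)
```

Because the E-step uses the full posterior covariance `C` rather than updating effects one at a time, correlated markers shrink coherently, which is the motivation the abstract gives for the block scheme.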

Conclusion

Simulation results based on computer-generated and real-world marker data demonstrated the ability of our method to swiftly produce sensible results regarding the phenotype-to-genotype association. Our new method provides a robust and remarkably fast alternative to full Bayesian estimation in high-dimensional models, where the computational burden of Markov chain Monte Carlo simulation is often prohibitive. The R code used to fit the model to the data is provided in the online supplementary material.
